diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download DIGSI 5 Today and Discover the Benefits of the Versatile Engineering Tool for SIPROTEC 5 Devices.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download DIGSI 5 Today and Discover the Benefits of the Versatile Engineering Tool for SIPROTEC 5 Devices.md
deleted file mode 100644
index bf251c24ff03f9d99ff4e372505cf617cc4f5765..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download DIGSI 5 Today and Discover the Benefits of the Versatile Engineering Tool for SIPROTEC 5 Devices.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
How to Download and Install DIGSI 5 - The Engineering Software for SIPROTEC 5 Protection Relays
-
DIGSI 5 is a versatile engineering tool for parameterizing, commissioning and operating all SIPROTEC 5 protection devices. It has an innovative user interface that includes context-sensitive user instructions and a simple connection to the device via USB. In this article, we will show you how to download and install DIGSI 5 on your computer.
Step 1: Download DIGSI 5
You can download the latest version of DIGSI 5 from the Siemens website. There are three options available:
-
-
Trial version: This is a free version of DIGSI 5 Premium that can be used for 30 days without functional restrictions. It includes the latest versions of IEC 61850 System Configurator and SIGRA.
-
Update version: This is a software update for existing users of DIGSI 5 who want to upgrade to the latest version. It requires a valid license key.
-
Full version: This is a complete installation package for new users of DIGSI 5 who want to install it on a clean system. It also includes a trial version of DIGSI 5 Premium, usable for 30 days without functional restrictions.
-
-
To download DIGSI 5, you need to register or log in with your Siemens account and accept the terms of use. You can also find the product information, manuals, readme files and hotfixes for DIGSI 5 on the same page.
-
Step 2: Install DIGSI 5
-
After downloading the DIGSI 5 package, you need to unzip it and run the setup.exe file. Follow the instructions on the screen to complete the installation process. You may need to restart your computer after the installation.
-
If you are installing DIGSI 5 for the first time, you will need to activate it with a license key. You can request a license key from Siemens or use the trial version for 30 days. If you are updating from an earlier version of DIGSI 5, you can use your existing license key.
-
Step 3: Connect and Configure SIPROTEC 5 Devices
-
Once you have installed and activated DIGSI 5, you can connect your SIPROTEC 5 devices to your computer via USB or Ethernet. You can use DIGSI 5 to parameterize, commission and operate your devices easily and efficiently. You can also use the IEC 61850 System Configurator and SIGRA tools to configure and analyze communication networks and data.
-
For more information on how to use DIGSI 5, please refer to the manuals and online help available in the software.
-
Step 4: Test and Troubleshoot SIPROTEC 5 Devices
-
After configuring your SIPROTEC 5 devices, you can test and troubleshoot them using DIGSI 5. You can use the following features to ensure the proper functioning of your devices:
-
-
Device test: This feature allows you to perform various tests on your devices, such as output test, input test, LED test, display test and communication test. You can also run predefined test cases or create your own test cases.
-
Device diagnosis: This feature allows you to monitor the status and performance of your devices, such as voltage, current, power, frequency, temperature and memory usage. You can also view the event logs, fault records and disturbance records of your devices.
-
Device simulation: This feature allows you to simulate the behavior of your devices in different scenarios, such as normal operation, fault occurrence and protection tripping. You can also use the SIPROTEC Digital Twin plugin to connect your devices to a virtual power system model and simulate realistic scenarios.
-
-
For more information on how to test and troubleshoot SIPROTEC 5 devices, please refer to the manuals and online help available in the software.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download WinRAR 64 Bit Free Crack and Unleash the Power of RAR.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download WinRAR 64 Bit Free Crack and Unleash the Power of RAR.md
deleted file mode 100644
index 2db21df912093936c110a5da511198fa4640e28a..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download WinRAR 64 Bit Free Crack and Unleash the Power of RAR.md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-
How to Download WinRAR 64 Bit Free Crack and Use It on Your PC
-
WinRAR is a popular and powerful file compression and archiving tool. It can create and extract RAR, ZIP, and other archive formats. It can also split large files into smaller volumes, encrypt and password-protect archives, repair damaged files, and more. WinRAR is widely used by millions of users around the world for various purposes.
However, WinRAR is not free software. You need to buy a license to use it legally on your PC. The official price of WinRAR is $29 for a single-user license or $21 per user for a multi-user license. These prices can be too high for some users who just want to use WinRAR occasionally or for personal purposes.
-
That's why some people look for ways to download WinRAR 64 bit free crack and use it without paying anything. A crack is a software tool that modifies or bypasses the original code of a program to make it work without a license or activation. By using a crack, you can get access to the full features of WinRAR without paying anything.
-
But is it safe and legal to download WinRAR 64 bit free crack? How can you do it and what are the risks involved? In this article, we will answer these questions and show you how to download WinRAR 64 bit free crack and use it on your PC.
-
Is It Safe and Legal to Download WinRAR 64 Bit Free Crack?
-
The short answer is no. Downloading WinRAR 64 bit free crack is neither safe nor legal. Here are some reasons why:
-
-
Downloading WinRAR 64 bit free crack is illegal because it violates the terms and conditions of WinRAR. You are not allowed to use WinRAR without a valid license. If you do so, you are committing software piracy, which is a crime that can result in fines or even jail time.
-
Downloading WinRAR 64 bit free crack is unsafe because it can expose your PC to malware and viruses. Cracks are often distributed by hackers or malicious websites that can infect your PC with malware and viruses. These can damage your files, steal your personal information, or even take control of your PC.
-
Downloading WinRAR 64 bit free crack is unreliable because it can cause errors and glitches. Cracks are not official updates or patches from WinRAR. They are often outdated or incompatible with the latest versions of WinRAR or Windows. This can cause errors and glitches that can affect the performance and functionality of WinRAR.
-
Downloading WinRAR 64 bit free crack is unethical because it harms the developers and creators of WinRAR. Its developers invest a lot of time, money, and resources to develop and improve WinRAR. By using a crack, you are depriving them of their rightful income and recognition.
-
-
Therefore, we do not recommend downloading WinRAR 64 bit free crack and using it on your PC. It is not worth the risk and hassle. Instead, we suggest using one of the legal and safe alternatives that we will discuss in the next section.
-
-
How to Download WinRAR 64 Bit Free Crack and Use It on Your PC
-
If you still want to download WinRAR 64 bit free crack and use it on your PC, despite the risks and consequences involved, here are the steps you need to follow:
-
-
Go to a website that offers cracks for WinRAR or other software. There are many websites that claim to offer cracks for WinRAR or other software, but most of them are fake or malicious. You need to be careful and do some research before downloading anything from these websites. Some examples of websites that offer cracks for WinRAR are Yasir252, Techworm, WizCase, etc.
-
Select the version of WinRAR that you want to download. Depending on the website you choose, you may find different versions of WinRAR available for download. For example, you may find WinRAR 6.21, 6.11, 6.02, etc., for Windows 11, 10, 8.1, 8, 7, etc., in both 32 bit and 64 bit versions. Choose the version that suits your needs and preferences.
-
Download
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Dispensary Management Software Free Download [PORTABLE].md b/spaces/1gistliPinn/ChatGPT4/Examples/Dispensary Management Software Free Download [PORTABLE].md
deleted file mode 100644
index b53ba7a16b32417f822f8ac5df7ebbf999239aad..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Dispensary Management Software Free Download [PORTABLE].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-If an application is rejected for failure to provide required information, ... Our technology serves as an elegant retail marijuana POS with powerful dispensary management tools. We can ... Download the report template from the OMMA website.
-
-
-
diff --git a/spaces/1phancelerku/anime-remove-background/Download Real Car Parking 3D and Become a Parking Master.md b/spaces/1phancelerku/anime-remove-background/Download Real Car Parking 3D and Become a Parking Master.md
deleted file mode 100644
index b49a38efc77e7045fbe2627e6722dfc4d1c30045..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Real Car Parking 3D and Become a Parking Master.md
+++ /dev/null
@@ -1,124 +0,0 @@
-
-
Real Car Parking and Driving Simulator 3D Download: A Review
-
Do you love driving cars and parking them in realistic scenarios? Do you want to test your skills and have fun at the same time? If you answered yes, then you should download real car parking and driving simulator 3d, one of the best car simulation games available on the market. In this article, we will review the features, benefits, and tips of this amazing game, and show you how to download it on your device. Let's get started!
-
Features of Real Car Parking and Driving Simulator 3D
-
Real car parking and driving simulator 3d is a game that offers you a realistic and immersive driving experience. Here are some of the features that make this game stand out:
-
Realistic cars and physics: You can choose from a variety of cars, from sports cars to SUVs, each with its own characteristics and performance. The game also uses realistic physics to simulate the car's movement, weight, speed, and braking.
-
Challenging parking courses and levels: You can test your parking skills in different environments, such as city streets, parking lots, airports, docks, factories, and more. Each level has its own objectives, obstacles, and time limits. You can earn stars based on your performance and unlock new levels.
-
Amazing graphics and sound effects: The game has stunning 3D graphics that create a lifelike atmosphere. You can also enjoy the realistic sound effects of the engine, horn, brakes, tires, and collisions.
-
Customizable controls and camera angles: You can adjust the controls according to your preference, whether you want to use buttons, steering wheel, or tilt. You can also switch between different camera views, such as top-down, rear-view, or cockpit.
-
Offline and online modes: You can play the game offline without an internet connection, or you can play online with other players from around the world. You can compete in multiplayer races, join clans, chat with friends, and rank on the leaderboard.
-
-
How to Download Real Car Parking and Driving Simulator 3D
-
The game is available for free on various platforms. Here is how to download it on your device:
-
For Android devices
-
You can download the game from the Google Play Store by following these steps:
-
-
Open the Google Play Store app on your device.
-
Search for "real car parking 3d" or click [here](^1^).
-
Select the game from the list of results.
-
Tap on "Install" and wait for the download to finish.
-
Tap on "Open" to launch the game.
-
-
For iOS devices
-
You can download the game from the App Store by following these steps:
-
-
Open the App Store app on your device.
-
Search for "real car parking 3d" or click [here](^2^).
-
Select the game from the list of results.
-
Tap on "Get" and enter your Apple ID password if prompted.
-
Wait for the download to finish.
-
Tap on the game icon to launch the game.
-
-
For Windows devices
-
You can download the game from the Microsoft Store by following these steps:
-
-
Open the Microsoft Store app on your device.
-
Search for "real car parking 3d" or click [here].
-
Select the game from the list of results.
-
Click on "Get" and sign in with your Microsoft account if prompted.
-
Wait for the download to finish.
-
Click on "Play" to launch the game.
-
-
Tips and Tricks for Playing Real Car Parking and Driving Simulator 3D
-
Now that you have downloaded the game, you might be wondering how to play it and improve your skills. Here are some tips and tricks that will help you master the game:
-
Practice your parking skills in free mode
-
The game has a free mode where you can drive around without any time limit or objectives. This is a great way to get familiar with the controls, the car's behavior, and the environment. You can also practice parking in different spots and angles, and learn from your mistakes.
-
Use the brake and steering buttons wisely
-
The game has two buttons for braking and steering, which are located on the bottom left and right corners of the screen. You can use them to control the speed and direction of your car. However, you should not overuse them or press them too hard, as this might cause your car to skid, spin, or crash. You should also release them when you are not using them, as this will save your fuel and prevent overheating.
-
Follow the arrows and avoid obstacles
-
The game has arrows that guide you to your parking spot. You should follow them carefully and pay attention to the distance indicator, which shows how far you are from your destination. You should also avoid hitting any obstacles, such as cones, barriers, walls, or other cars, as this will damage your car and reduce your score. You can use the mini-map on the top right corner of the screen to see your surroundings and plan your route.
-
-
Collect coins and gems to unlock new cars
-
The game has coins and gems that you can collect by driving around or completing levels. You can use them to buy new cars or upgrade your existing ones. Each car has its own stats, such as speed, acceleration, handling, and braking. You should try different cars and find the one that suits your style and preference.
-
Try different camera views to find the best angle
-
The game has four camera views that you can switch between by tapping on the camera icon on the top left corner of the screen. They are: top-down, rear-view, cockpit, and side-view. Each view has its own advantages and disadvantages, depending on the situation and your preference. You should experiment with different views and find the one that gives you the best visibility and comfort.
-
Conclusion
-
Real car parking and driving simulator 3d is a game that will challenge your driving and parking skills in a realistic and fun way. It has many features that make it stand out from other car simulation games, such as realistic cars and physics, challenging parking courses and levels, amazing graphics and sound effects, customizable controls and camera angles, offline and online modes, and more. You can download it for free on various platforms, such as Android, iOS, and Windows devices. If you are looking for a game that will keep you entertained for hours, then you should definitely give real car parking and driving simulator 3d a try!
-
If you liked this article, please share it with your friends and leave a comment below. We would love to hear your feedback and suggestions. Also, if you have any questions about the game or need more tips and tricks, feel free to ask us. We will be happy to help you!
-
Frequently Asked Questions
-
-
Q: How do I change the language of the game?
-
A: You can change the language of the game by tapping on the settings icon on the main menu screen. Then, select "Language" from the list of options. You can choose from English, Spanish, French, German, Italian, Portuguese, Russian, Turkish, Arabic, Chinese, Japanese, Korean, Hindi, Indonesian, or Vietnamese.
-
Q: How do I save my progress in the game?
-
A: The game automatically saves your progress every time you complete a level or exit the game. You can also manually save your progress by tapping on the settings icon on the main menu screen. Then, select "Save Game" from the list of options. You can also load your saved game by selecting "Load Game" from the same menu.
-
Q: How do I reset my progress in the game?
-
A: You can reset your progress in the game by tapping on the settings icon on the main menu screen. Then, select "Reset Game" from the list of options. This will erase all your data and start the game from scratch. Be careful, as this action cannot be undone.
-
Q: How do I contact the developers of the game?
-
A: You can contact the developers of the game by tapping on the settings icon on the main menu screen. Then, select "Contact Us" from the list of options. You can send them an email with your feedback, suggestions, or issues. You can also follow them on their social media accounts, such as Facebook, Twitter, Instagram, or YouTube.
-
Q: How do I rate and review the game?
-
A: You can rate and review the game by tapping on the settings icon on the main menu screen. Then, select "Rate Us" from the list of options. This will redirect you to the store page of the game, where you can give it a star rating and write a comment. Your feedback is very important for us and helps us improve the game.
-
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/commands/env.py b/spaces/1toTree/lora_test/ppdiffusers/commands/env.py
deleted file mode 100644
index 4cb2bcfe9032bb62692dbbdc17316c962dbc5787..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/commands/env.py
+++ /dev/null
@@ -1,67 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import platform
-from argparse import ArgumentParser
-
-from .. import __version__ as version
-from ..utils import is_paddle_available, is_paddlenlp_available
-from . import BasePPDiffusersCLICommand
-
-
-def info_command_factory(_):
- return EnvironmentCommand()
-
-
-class EnvironmentCommand(BasePPDiffusersCLICommand):
- @staticmethod
- def register_subcommand(parser: ArgumentParser):
- download_parser = parser.add_parser("env")
- download_parser.set_defaults(func=info_command_factory)
-
- def run(self):
-
- pd_version = "not installed"
- pd_cuda_available = "NA"
- if is_paddle_available():
- import paddle
-
- pd_version = paddle.__version__
- pd_cuda_available = paddle.device.is_compiled_with_cuda()
-
- paddlenlp_version = "not installed"
- if is_paddlenlp_available():
- import paddlenlp
-
- paddlenlp_version = paddlenlp.__version__
-
- info = {
- "`ppdiffusers` version": version,
- "Platform": platform.platform(),
- "Python version": platform.python_version(),
- "Paddle version (GPU?)": f"{pd_version} ({pd_cuda_available})",
- "PaddleNLP version": paddlenlp_version,
- "Using GPU in script?": "",
- "Using distributed or parallel set-up in script?": "",
- }
-
- print("\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\n")
- print(self.format_dict(info))
-
- return info
-
- @staticmethod
- def format_dict(d):
- return "\n".join([f"- {prop}: {val}" for prop, val in d.items()]) + "\n"
diff --git a/spaces/232labs/VToonify/vtoonify/model/stylegan/prepare_data.py b/spaces/232labs/VToonify/vtoonify/model/stylegan/prepare_data.py
deleted file mode 100644
index aa385d0ac13550e1ae5513f7a20b35997a5c3ea6..0000000000000000000000000000000000000000
--- a/spaces/232labs/VToonify/vtoonify/model/stylegan/prepare_data.py
+++ /dev/null
@@ -1,105 +0,0 @@
-import argparse
-from io import BytesIO
-import multiprocessing
-from functools import partial
-
-import os
-from PIL import Image
-import lmdb
-from tqdm import tqdm
-from torchvision import datasets
-from torchvision.transforms import functional as trans_fn
-
-
-def resize_and_convert(img, size, resample, quality=100):
- img = trans_fn.resize(img, size, resample)
- img = trans_fn.center_crop(img, size)
- buffer = BytesIO()
- img.save(buffer, format="jpeg", quality=quality)
- val = buffer.getvalue()
-
- return val
-
-
-def resize_multiple(
- img, sizes=(128, 256, 512, 1024), resample=Image.LANCZOS, quality=100
-):
- imgs = []
-
- for size in sizes:
- imgs.append(resize_and_convert(img, size, resample, quality))
-
- return imgs
-
-
-def resize_worker(img_file, sizes, resample):
- i, file = img_file
- img = Image.open(file)
- img = img.convert("RGB")
- out = resize_multiple(img, sizes=sizes, resample=resample)
-
- return i, out
-
-
-def prepare(
- env, dataset, n_worker, sizes=(128, 256, 512, 1024), resample=Image.LANCZOS
-):
- resize_fn = partial(resize_worker, sizes=sizes, resample=resample)
-
- files = sorted(dataset.imgs, key=lambda x: x[0])
- files = [(i, file) for i, (file, label) in enumerate(files)]
- total = 0
-
- with multiprocessing.Pool(n_worker) as pool:
- for i, imgs in tqdm(pool.imap_unordered(resize_fn, files)):
- for size, img in zip(sizes, imgs):
- key = f"{size}-{str(i).zfill(5)}".encode("utf-8")
-
- with env.begin(write=True) as txn:
- txn.put(key, img)
-
- total += 1
-
- with env.begin(write=True) as txn:
- txn.put("length".encode("utf-8"), str(total).encode("utf-8"))
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser(description="Preprocess images for model training")
- parser.add_argument("--out", type=str, help="filename of the result lmdb dataset")
- parser.add_argument(
- "--size",
- type=str,
- default="128,256,512,1024",
- help="resolutions of images for the dataset",
- )
- parser.add_argument(
- "--n_worker",
- type=int,
- default=8,
- help="number of workers for preparing dataset",
- )
- parser.add_argument(
- "--resample",
- type=str,
- default="lanczos",
- help="resampling methods for resizing images",
- )
- parser.add_argument("path", type=str, help="path to the image dataset")
-
- args = parser.parse_args()
-
- if not os.path.exists(args.out):
- os.makedirs(args.out)
-
- resample_map = {"lanczos": Image.LANCZOS, "bilinear": Image.BILINEAR}
- resample = resample_map[args.resample]
-
- sizes = [int(s.strip()) for s in args.size.split(",")]
-
- print(f"Make dataset of image sizes:", ", ".join(str(s) for s in sizes))
-
- imgset = datasets.ImageFolder(args.path)
-
- with lmdb.open(args.out, map_size=1024 ** 4, readahead=False) as env:
- prepare(env, imgset, args.n_worker, sizes=sizes, resample=resample)
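
The script above writes every image at every resolution into a single LMDB, keyed as "{size}-{index:05d}" with the total entry count stored under "length". A hedged sketch of reading one entry back, assuming the script was run with --out lmdb_out and the default sizes:

from io import BytesIO

import lmdb
from PIL import Image

# The output path is an assumption; use whatever --out was given above.
with lmdb.open("lmdb_out", readonly=True, lock=False) as env:
    with env.begin(write=False) as txn:
        length = int(txn.get("length".encode("utf-8")).decode("utf-8"))
        key = f"256-{str(0).zfill(5)}".encode("utf-8")  # first image at the 256px resolution
        img = Image.open(BytesIO(txn.get(key)))

print(length, img.size)  # e.g. number of stored entries and (256, 256)
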
diff --git a/spaces/52Hz/CMFNet_deraindrop/main_test_CMFNet.py b/spaces/52Hz/CMFNet_deraindrop/main_test_CMFNet.py
deleted file mode 100644
index c175ec3eeddff845d3d3439c7d34f44ac2c98b92..0000000000000000000000000000000000000000
--- a/spaces/52Hz/CMFNet_deraindrop/main_test_CMFNet.py
+++ /dev/null
@@ -1,98 +0,0 @@
-import argparse
-import cv2
-import glob
-import numpy as np
-from collections import OrderedDict
-from skimage import img_as_ubyte
-import os
-import shutil
-import torch
-import requests
-from PIL import Image
-import torchvision.transforms.functional as TF
-import torch.nn.functional as F
-from natsort import natsorted
-from model.CMFNet import CMFNet
-
-
-def main():
- parser = argparse.ArgumentParser(description='Demo Image Deraindrop')
- parser.add_argument('--input_dir', default='test/', type=str, help='Input images')
- parser.add_argument('--result_dir', default='results/', type=str, help='Directory for results')
- parser.add_argument('--weights',
- default='experiments/pretrained_models/deraindrop_model.pth', type=str,
- help='Path to weights')
-
- args = parser.parse_args()
-
- inp_dir = args.input_dir
- out_dir = args.result_dir
-
- os.makedirs(out_dir, exist_ok=True)
-
- files = natsorted(glob.glob(os.path.join(inp_dir, '*')))
-
- if len(files) == 0:
- raise Exception(f"No files found at {inp_dir}")
-
- device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-
- # Load corresponding models architecture and weights
- model = CMFNet()
- model = model.to(device)
- model.eval()
- load_checkpoint(model, args.weights)
-
-
- mul = 8
- for file_ in files:
- img = Image.open(file_).convert('RGB')
- input_ = TF.to_tensor(img).unsqueeze(0).to(device)
-
- # Pad the input if its size is not a multiple of 8
- h, w = input_.shape[2], input_.shape[3]
- H, W = ((h + mul) // mul) * mul, ((w + mul) // mul) * mul
- padh = H - h if h % mul != 0 else 0
- padw = W - w if w % mul != 0 else 0
- input_ = F.pad(input_, (0, padw, 0, padh), 'reflect')
-
- with torch.no_grad():
- restored = model(input_)
-
- restored = torch.clamp(restored, 0, 1)
- restored = restored[:, :, :h, :w]
- restored = restored.permute(0, 2, 3, 1).cpu().detach().numpy()
- restored = img_as_ubyte(restored[0])
-
- f = os.path.splitext(os.path.split(file_)[-1])[0]
- save_img((os.path.join(out_dir, f + '.png')), restored)
-
-
-def save_img(filepath, img):
- cv2.imwrite(filepath, cv2.cvtColor(img, cv2.COLOR_RGB2BGR))
-
-
-def load_checkpoint(model, weights):
- checkpoint = torch.load(weights, map_location=torch.device('cpu'))
- try:
- model.load_state_dict(checkpoint["state_dict"])
- except:
- state_dict = checkpoint["state_dict"]
- new_state_dict = OrderedDict()
- for k, v in state_dict.items():
- name = k[7:] # remove `module.`
- new_state_dict[name] = v
- model.load_state_dict(new_state_dict)
-
-def clean_folder(folder):
- for filename in os.listdir(folder):
- file_path = os.path.join(folder, filename)
- try:
- if os.path.isfile(file_path) or os.path.islink(file_path):
- os.unlink(file_path)
- elif os.path.isdir(file_path):
- shutil.rmtree(file_path)
- except Exception as e:
- print('Failed to delete %s. Reason: %s' % (file_path, e))
-
-if __name__ == '__main__':
- main()
\ No newline at end of file
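
The padding step in main() rounds each spatial dimension up to the next multiple of 8 before inference and crops back afterwards. A small worked example of that arithmetic with a hypothetical 125x200 input (the height gets 3 rows of reflect padding, the width is already a multiple of 8 so it gets none):

mul = 8
h, w = 125, 200                        # hypothetical input size
H, W = ((h + mul) // mul) * mul, ((w + mul) // mul) * mul
padh = H - h if h % mul != 0 else 0    # 3
padw = W - w if w % mul != 0 else 0    # 0
print(padh, padw)                      # -> 3 0
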
diff --git a/spaces/7hao/bingo/src/lib/hooks/use-enter-submit.tsx b/spaces/7hao/bingo/src/lib/hooks/use-enter-submit.tsx
deleted file mode 100644
index d66b2d3253baff164235d4ca791aae6d84721835..0000000000000000000000000000000000000000
--- a/spaces/7hao/bingo/src/lib/hooks/use-enter-submit.tsx
+++ /dev/null
@@ -1,23 +0,0 @@
-import { useRef, type RefObject } from 'react'
-
-export function useEnterSubmit(): {
- formRef: RefObject<HTMLFormElement>
- onKeyDown: (event: React.KeyboardEvent<HTMLTextAreaElement>) => void
-} {
- const formRef = useRef<HTMLFormElement>(null)
-
- const handleKeyDown = (
- event: React.KeyboardEvent<HTMLTextAreaElement>
- ): void => {
- if (
- event.key === 'Enter' &&
- !event.shiftKey &&
- !event.nativeEvent.isComposing
- ) {
- formRef.current?.requestSubmit()
- event.preventDefault()
- }
- }
-
- return { formRef, onKeyDown: handleKeyDown }
-}
diff --git a/spaces/AIConsultant/MusicGen/audiocraft/losses/balancer.py b/spaces/AIConsultant/MusicGen/audiocraft/losses/balancer.py
deleted file mode 100644
index 8a0ac8adebab8cdee8f82351965195dc02800d18..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/audiocraft/losses/balancer.py
+++ /dev/null
@@ -1,136 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import typing as tp
-
-import flashy
-import torch
-from torch import autograd
-
-
-class Balancer:
- """Loss balancer.
-
- The loss balancer combines losses together to compute gradients for the backward.
- Given `y = f(...)`, and a number of losses `l1(y, ...)`, `l2(y, ...)`, with `...`
- not having any dependence on `f`, the balancer can efficiently normalize the partial gradients
- `d l1 / d y`, `d l2 / dy` before summing them in order to achieve a desired ratio between
- the losses. For instance if `weights = {'l1': 2, 'l2': 1}`, 66% of the gradient
- going into `f(...)` will come from `l1` on average, and 33% from `l2`. This allows for an easy
- interpretation of the weights even if the intrinsic scale of `l1`, `l2` ... is unknown.
-
- Noting `g1 = d l1 / dy`, etc., the balanced gradient `G` will be
- (with `avg` an exponential moving average over the updates),
-
- G = sum_i total_norm * g_i / avg(||g_i||) * w_i / sum(w_i)
-
- If `balance_grads` is False, this is deactivated, and instead the gradient will just be the
- standard sum of the partial gradients with the given weights.
-
- A call to the backward method of the balancer will compute the partial gradients,
- combining all the losses and potentially rescaling the gradients,
- which can help stabilize the training and reason about multiple losses with varying scales.
- The obtained gradient with respect to `y` is then back-propagated to `f(...)`.
-
- Expected usage:
-
- weights = {'loss_a': 1, 'loss_b': 4}
- balancer = Balancer(weights, ...)
- losses: dict = {}
- losses['loss_a'] = compute_loss_a(x, y)
- losses['loss_b'] = compute_loss_b(x, y)
- if model.training:
- effective_loss = balancer.backward(losses, x)
-
- Args:
- weights (dict[str, float]): Weight coefficient for each loss. The balancer expect the losses keys
- from the backward method to match the weights keys to assign weight to each of the provided loss.
- balance_grads (bool): Whether to rescale gradients so that weights reflect the fraction of the
- overall gradient, rather than a constant multiplier.
- total_norm (float): Reference norm when rescaling gradients, ignored otherwise.
- ema_decay (float): EMA decay for averaging the norms.
- per_batch_item (bool): Whether to compute the averaged norm per batch item or not. This only holds
- when rescaling the gradients.
- epsilon (float): Epsilon value for numerical stability.
- monitor (bool): If True, stores in `self.metrics` the relative ratio between the norm of the gradients
- coming from each loss, when calling `backward()`.
- """
- def __init__(self, weights: tp.Dict[str, float], balance_grads: bool = True, total_norm: float = 1.,
- ema_decay: float = 0.999, per_batch_item: bool = True, epsilon: float = 1e-12,
- monitor: bool = False):
- self.weights = weights
- self.per_batch_item = per_batch_item
- self.total_norm = total_norm or 1.
- self.averager = flashy.averager(ema_decay or 1.)
- self.epsilon = epsilon
- self.monitor = monitor
- self.balance_grads = balance_grads
- self._metrics: tp.Dict[str, tp.Any] = {}
-
- @property
- def metrics(self):
- return self._metrics
-
- def backward(self, losses: tp.Dict[str, torch.Tensor], input: torch.Tensor) -> torch.Tensor:
- """Compute the backward and return the effective train loss, e.g. the loss obtained from
- computing the effective weights. If `balance_grads` is True, the effective weights
- are the one that needs to be applied to each gradient to respect the desired relative
- scale of gradients coming from each loss.
-
- Args:
- losses (Dict[str, torch.Tensor]): dictionary with the same keys as `self.weights`.
- input (torch.Tensor): the input of the losses, typically the output of the model.
- This should be the single point of dependence between the losses
- and the model being trained.
- """
- norms = {}
- grads = {}
- for name, loss in losses.items():
- # Compute the partial derivative of the loss with respect to the input.
- grad, = autograd.grad(loss, [input], retain_graph=True)
- if self.per_batch_item:
- # We do not average the gradient over the batch dimension.
- dims = tuple(range(1, grad.dim()))
- norm = grad.norm(dim=dims, p=2).mean()
- else:
- norm = grad.norm(p=2)
- norms[name] = norm
- grads[name] = grad
-
- count = 1
- if self.per_batch_item:
- count = len(grad)
- # Average norms across workers. Theoretically we should average the
- # squared norm, then take the sqrt, but it worked fine like that.
- avg_norms = flashy.distrib.average_metrics(self.averager(norms), count)
- # We approximate the total norm of the gradient as the sums of the norms.
- # Obviously this can be very incorrect if all gradients are aligned, but it works fine.
- total = sum(avg_norms.values())
-
- self._metrics = {}
- if self.monitor:
- # Store the ratio of the total gradient represented by each loss.
- for k, v in avg_norms.items():
- self._metrics[f'ratio_{k}'] = v / total
-
- total_weights = sum([self.weights[k] for k in avg_norms])
- assert total_weights > 0.
- desired_ratios = {k: w / total_weights for k, w in self.weights.items()}
-
- out_grad = torch.zeros_like(input)
- effective_loss = torch.tensor(0., device=input.device, dtype=input.dtype)
- for name, avg_norm in avg_norms.items():
- if self.balance_grads:
- # g_balanced = g / avg(||g||) * total_norm * desired_ratio
- scale = desired_ratios[name] * self.total_norm / (self.epsilon + avg_norm)
- else:
- # We just do regular weighted sum of the gradients.
- scale = self.weights[name]
- out_grad.add_(grads[name], alpha=scale)
- effective_loss += scale * losses[name].detach()
- # Send the computed partial derivative with respect to the output of the model to the model.
- input.backward(out_grad)
- return effective_loss
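
A minimal usage sketch for the class above, mirroring the docstring's expected-usage block; the tiny linear model and the two toy losses are stand-ins, the import path follows this file's location, and the 4:1 weights ask loss_a for roughly 80% of the gradient reaching the model:

import torch
from torch import nn

from audiocraft.losses.balancer import Balancer

model = nn.Linear(8, 4)
x = torch.randn(2, 8)
y = model(x)                                   # single point of dependence between losses and model

losses = {
    'loss_a': y.pow(2).mean(),                 # toy stand-in losses
    'loss_b': (y - 1.0).abs().mean(),
}
balancer = Balancer({'loss_a': 4, 'loss_b': 1}, monitor=True)
effective_loss = balancer.backward(losses, y)  # back-propagates the balanced gradient into the model
print(effective_loss, balancer.metrics)        # metrics holds the per-loss gradient ratios
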
diff --git a/spaces/AIConsultant/MusicGen/audiocraft/train.py b/spaces/AIConsultant/MusicGen/audiocraft/train.py
deleted file mode 100644
index 22dd117830bb403829d0a60b1b95e120d1e6978b..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/audiocraft/train.py
+++ /dev/null
@@ -1,157 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Entry point for dora to launch solvers for running training loops.
-See more info on how to use dora: https://github.com/facebookresearch/dora
-"""
-
-import logging
-import multiprocessing
-import os
-import sys
-import typing as tp
-
-from dora import git_save, hydra_main, XP
-import flashy
-import hydra
-import omegaconf
-
-from .environment import AudioCraftEnvironment
-from .utils.cluster import get_slurm_parameters
-
-logger = logging.getLogger(__name__)
-
-
-def resolve_config_dset_paths(cfg):
- """Enable Dora to load manifest from git clone repository."""
- # manifest files for the different splits
- for key, value in cfg.datasource.items():
- if isinstance(value, str):
- cfg.datasource[key] = git_save.to_absolute_path(value)
-
-
-def get_solver(cfg):
- from . import solvers
- # Convert batch size to batch size for each GPU
- assert cfg.dataset.batch_size % flashy.distrib.world_size() == 0
- cfg.dataset.batch_size //= flashy.distrib.world_size()
- for split in ['train', 'valid', 'evaluate', 'generate']:
- if hasattr(cfg.dataset, split) and hasattr(cfg.dataset[split], 'batch_size'):
- assert cfg.dataset[split].batch_size % flashy.distrib.world_size() == 0
- cfg.dataset[split].batch_size //= flashy.distrib.world_size()
- resolve_config_dset_paths(cfg)
- solver = solvers.get_solver(cfg)
- return solver
-
-
-def get_solver_from_xp(xp: XP, override_cfg: tp.Optional[tp.Union[dict, omegaconf.DictConfig]] = None,
- restore: bool = True, load_best: bool = True,
- ignore_state_keys: tp.List[str] = [], disable_fsdp: bool = True):
- """Given a XP, return the Solver object.
-
- Args:
- xp (XP): Dora experiment for which to retrieve the solver.
- override_cfg (dict or None): If not None, should be a dict used to
- override some values in the config of `xp`. This will not impact
- the XP signature or folder. The format is different
- than the one used in Dora grids, nested keys should actually be nested dicts,
- not flattened, e.g. `{'optim': {'batch_size': 32}}`.
- restore (bool): If `True` (the default), restore state from the last checkpoint.
- load_best (bool): If `True` (the default), load the best state from the checkpoint.
- ignore_state_keys (list[str]): List of sources to ignore when loading the state, e.g. `optimizer`.
- disable_fsdp (bool): if True, disables FSDP entirely. This will
- also automatically skip loading the EMA. For solver specific
- state sources, like the optimizer, you might want to
- use along `ignore_state_keys=['optimizer']`. Must be used with `load_best=True`.
- """
- logger.info(f"Loading solver from XP {xp.sig}. "
- f"Overrides used: {xp.argv}")
- cfg = xp.cfg
- if override_cfg is not None:
- cfg = omegaconf.OmegaConf.merge(cfg, omegaconf.DictConfig(override_cfg))
- if disable_fsdp and cfg.fsdp.use:
- cfg.fsdp.use = False
- assert load_best is True
- # ignoring some keys that were FSDP sharded like model, ema, and best_state.
- # fsdp_best_state will be used in that case. When using a specific solver,
- # one is responsible for adding the relevant keys, e.g. 'optimizer'.
- # We could make something to automatically register those inside the solver, but that
- # seem overkill at this point.
- ignore_state_keys = ignore_state_keys + ['model', 'ema', 'best_state']
-
- try:
- with xp.enter():
- solver = get_solver(cfg)
- if restore:
- solver.restore(load_best=load_best, ignore_state_keys=ignore_state_keys)
- return solver
- finally:
- hydra.core.global_hydra.GlobalHydra.instance().clear()
-
-
-def get_solver_from_sig(sig: str, *args, **kwargs):
- """Return Solver object from Dora signature, i.e. to play with it from a notebook.
- See `get_solver_from_xp` for more information.
- """
- xp = main.get_xp_from_sig(sig)
- return get_solver_from_xp(xp, *args, **kwargs)
-
-
-def init_seed_and_system(cfg):
- import numpy as np
- import torch
- import random
- from audiocraft.modules.transformer import set_efficient_attention_backend
-
- multiprocessing.set_start_method(cfg.mp_start_method)
- logger.debug('Setting mp start method to %s', cfg.mp_start_method)
- random.seed(cfg.seed)
- np.random.seed(cfg.seed)
- # torch also initialize cuda seed if available
- torch.manual_seed(cfg.seed)
- torch.set_num_threads(cfg.num_threads)
- os.environ['MKL_NUM_THREADS'] = str(cfg.num_threads)
- os.environ['OMP_NUM_THREADS'] = str(cfg.num_threads)
- logger.debug('Setting num threads to %d', cfg.num_threads)
- set_efficient_attention_backend(cfg.efficient_attention_backend)
- logger.debug('Setting efficient attention backend to %s', cfg.efficient_attention_backend)
-
-
-@hydra_main(config_path='../config', config_name='config', version_base='1.1')
-def main(cfg):
- init_seed_and_system(cfg)
-
- # Setup logging both to XP specific folder, and to stderr.
- log_name = '%s.log.{rank}' % cfg.execute_only if cfg.execute_only else 'solver.log.{rank}'
- flashy.setup_logging(level=str(cfg.logging.level).upper(), log_name=log_name)
- # Initialize distributed training, no need to specify anything when using Dora.
- flashy.distrib.init()
- solver = get_solver(cfg)
- if cfg.show:
- solver.show()
- return
-
- if cfg.execute_only:
- assert cfg.execute_inplace or cfg.continue_from is not None, \
- "Please explicitly specify the checkpoint to continue from with continue_from= " + \
- "when running with execute_only or set execute_inplace to True."
- solver.restore(replay_metrics=False) # load checkpoint
- solver.run_one_stage(cfg.execute_only)
- return
-
- return solver.run()
-
-
-main.dora.dir = AudioCraftEnvironment.get_dora_dir()
-main._base_cfg.slurm = get_slurm_parameters(main._base_cfg.slurm)
-
-if main.dora.shared is not None and not os.access(main.dora.shared, os.R_OK):
- print("No read permission on dora.shared folder, ignoring it.", file=sys.stderr)
- main.dora.shared = None
-
-if __name__ == '__main__':
- main()
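
A hedged notebook-style sketch of get_solver_from_sig as described in its docstring above; the signature string and the override are illustrative placeholders, not real experiment values:

from audiocraft.train import get_solver_from_sig

solver = get_solver_from_sig(
    "0123abcd",                                   # hypothetical Dora XP signature
    override_cfg={'dataset': {'batch_size': 8}},  # nested dicts, not flattened keys
    load_best=True,
    ignore_state_keys=['optimizer'],              # skip solver-specific state, as the docstring suggests
)
solver.show()                                     # same summary main() prints when cfg.show is set
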
diff --git a/spaces/AIZero2Hero4Health/5-QuantumStreamlitAIDashboard-SL/app.py b/spaces/AIZero2Hero4Health/5-QuantumStreamlitAIDashboard-SL/app.py
deleted file mode 100644
index efd0275e9f265945ef312f431a7ef4ead82e80c4..0000000000000000000000000000000000000000
--- a/spaces/AIZero2Hero4Health/5-QuantumStreamlitAIDashboard-SL/app.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import streamlit as st
-import gradio as gr
-import IPython
-import streamlit.components.v1 as components
-from IPython.display import IFrame
-
-#quantum imports:
-import qiskit
-from qiskit import QuantumCircuit, QuantumRegister, execute
-
-src='' # URL parameter to change the iframe url
-
-def SetIframeURL(option_selected):
- if (option_selected=='QCEngine'):
- src='https://oreilly-qc.github.io?p=2-1'
- if (option_selected=='Grok'):
- src='https://javafxpert.github.io/grok-bloch/'
- if (option_selected=='Playground'):
- src='https://davidbkemp.github.io/quantum-gate-playground/'
- if (option_selected=='Circuit'):
- src='https://algassert.com/quirk#circuit={%22cols%22:[[%22H%22],[%22Bloch%22],[%22Measure%22]]}'
-
- # Render iframe contents
- #st.set_page_config(layout="wide")
- width = st.sidebar.slider("Width", 200, 1500, 800, 100)
- height = st.sidebar.slider("Height", 200, 1500, 900, 100)
- st.components.v1.iframe(src, width, height, scrolling=True)
-
-# query params exist
-try:
- options = ['QCEngine', 'Grok', 'Playground', 'Circuit']
- query_params = st.experimental_get_query_params()
- query_option = query_params['option'][0] #throws an exception when visiting http://host:port
- option_selected = st.sidebar.selectbox('Pick option', options, index=options.index(query_option))
- if option_selected:
- st.experimental_set_query_params(option=option_selected)
- SetIframeURL(option_selected)
-
-# run when query params don't exist. e.g on first launch
-except: # catch exception and set query param to predefined value
- options = ['QCEngine', 'Grok', 'Playground', 'Circuit']
- st.experimental_set_query_params(option=options[1]) # defaults to 'Grok'
- query_params = st.experimental_get_query_params()
- query_option = query_params['option'][0]
- option_selected = st.sidebar.selectbox('Pick option', options, index=options.index(query_option))
- if option_selected:
- st.experimental_set_query_params(option=option_selected)
- SetIframeURL(option_selected)
-
-def LoadGradioAIModels():
- title = "AI Quantum - QGAN and QCEngine"
- description = "Using Superposition Advantage from Quantum for QGAN AI."
- article = ""
-
- examples = [
- ["Scientific breakthroughs in treatment of HIV/AIDS may be solved in our lifetime using a procedure called [MASK] modulation which strengthens the immune system to fight the disease."],["A disease called [MASK] disease involves progressive memory loss and has new treatments to improve memory and delay progression of the disease."],["[MASK] refers to the uncontrolled growth of abnormal cells in the body. With chemotherapy and radiation therapy have improvements and replacements that destroy cancer cells before they become resistant to current treatment methods."],["The hereditary disease [MASK] is caused by mucus abnormally thick preventing lungs and pancreas from doing their jobs correctly."],["[MASK] or atherosclerosis is the buildup of cholesterol, fatty cells, and inflammatory deposits in the arteries. Stem cells, mechanical devices, and lowering cholesterol and blood pressure levels are helping prevention."]]
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/work_dirs/yolov6_s_df2_0.4/__init__.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/work_dirs/yolov6_s_df2_0.4/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Abdullah-Habib/Text_to_Speech_Urdu/app.py b/spaces/Abdullah-Habib/Text_to_Speech_Urdu/app.py
deleted file mode 100644
index bc0013cd89b182dd6d722b823d970b89f085d0dc..0000000000000000000000000000000000000000
--- a/spaces/Abdullah-Habib/Text_to_Speech_Urdu/app.py
+++ /dev/null
@@ -1,127 +0,0 @@
-import torch
-from transformers import SpeechT5ForTextToSpeech, SpeechT5Processor, SpeechT5HifiGan
-import soundfile as sf
-import gradio as gr
-import scipy.io.wavfile as wav
-import numpy as np
-import wave
-from datasets import load_dataset, Audio, config
-from IPython.display import Audio
-
-# Load the TTS model from the Hugging Face Hub
-checkpoint = "Abdullah-Habib/urdu_speech_tt" # Replace with your actual model name
-processor = SpeechT5Processor.from_pretrained(checkpoint)
-model = SpeechT5ForTextToSpeech.from_pretrained(checkpoint)
-tokenizer = processor.tokenizer
-vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
-
-
-# Buckwalter to Unicode mapping
-buck2uni = {
- u"\u0627":"a",
- u"\u0627":"a",
- u"\u0675":"a",
- u"\u0673":"a",
- u"\u0630":"a",
- u"\u0622":"aa",
- u"\u0628":"b",
- u"\u067E":"p",
- u"\u062A":"t",
- u"\u0637":"t",
- u"\u0679":"t",
- u"\u062C":"j",
- u"\u0633":"s",
- u"\u062B":"s",
- u"\u0635":"s",
- u"\u0686":"ch",
- u"\u062D":"h",
- u"\u0647":"h",
- u"\u0629":"h",
- u"\u06DF":"h",
- u"\u062E":"kh",
- u"\u062F":"d",
- u"\u0688":"d",
- u"\u0630":"z",
- u"\u0632":"z",
- u"\u0636":"z",
- u"\u0638":"z",
- u"\u068E":"z",
- u"\u0631":"r",
- u"\u0691":"r",
- u"\u0634":"sh",
- u"\u063A":"gh",
- u"\u0641":"f",
- u"\u06A9":"k",
- u"\u0642":"k",
- u"\u06AF":"g",
- u"\u0644":"l",
- u"\u0645":"m",
- u"\u0646":"n",
- u"\u06BA":"n",
- u"\u0648":"o",
- u"\u0649":"y",
- u"\u0626":"y",
- u"\u06CC":"y",
- u"\u06D2":"e",
- u"\u06C1":"h",
- u"\u064A":"e" ,
- u"\u06C2":"ah" ,
- u"\u06BE":"h" ,
- u"\u0639":"a" ,
- u"\u0643":"k" ,
- u"\u0621":"a",
- u"\u0624":"o",
- u"\u060C":"" #seperator ulta comma
- }
-def transString(string, reverse=0):
- """Given a Unicode string, transliterate into Buckwalter. To go from
- Buckwalter back to Unicode, set reverse=1"""
- for k, v in buck2uni.items():
- if not reverse:
- string = string.replace(k, v)
- else:
- string = string.replace(v, k)
- return string
-
-
-def generate_audio(text):
- # Convert input text to Roman Urdu
- roman_urdu = transString(text)
-
- # Tokenize the input text
- inputs = processor(text=roman_urdu, return_tensors="pt", type = "numpy")
-
- # Generate audio from the SpeechT5 model
-
-
-
- # speaker_embeddings = torch.tensor(np.load("speaker_embeddings.npy"))
-
- speaker_embeddings = torch.load("speaker_embeddings_29.pt")
- # speaker_embeddings= torch.tensor([[-0.0917, -0.0461, 0.0347, 0.0341, 0.0197, -0.0438, -0.0377, -0.0212, 0.0361, 0.0220, -0.0676, -0.0731, 0.0827, 0.0132, 0.0187, 0.0577, -0.0026, 0.0618, 0.0088, 0.0159, 0.0344, 0.0243, -0.0164, -0.0430, -0.0556, -0.0044, -0.0413, -0.0003, 0.0310, 0.0369, -0.0034, 0.0424, 0.0474, 0.0102, 0.0392, -0.0611, 0.0405, 0.0652, -0.0386, -0.0638, 0.0255, -0.0411, 0.0398, 0.0490, 0.0297, -0.1218, -0.0206, 0.0146,-0.0649, 0.0550, 0.0177, 0.0407, 0.0017, -0.0113, -0.0990, -0.0015,0.0158, 0.0481, 0.0286, 0.0300, 0.0346, -0.0104, -0.0142, -0.0005,0.0264, 0.0412, 0.0227, -0.0389, -0.0489, -0.0750, 0.0238, 0.0101,0.0171, 0.0141, 0.0224, 0.0344, 0.0402, 0.0336, -0.0641, -0.0818, -0.0731, -0.0470, -0.0512, -0.0602, -0.0344, -0.0442, -0.0541, 0.0097, 0.0198, 0.0482, 0.0323, -0.0885, 0.0210, -0.0798, 0.0417, -0.0436, 0.0402, 0.0256, -0.0641, -0.0668, -0.0023, -0.0706, -0.0928, 0.0121, 0.0355, -0.0376, 0.0522, 0.0482, 0.0200, 0.0290, -0.0698, -0.0232, 0.0878, 0.0044, 0.0559, 0.0581, -0.0718, 0.0095, -0.0538, 0.0125, 0.0023, -0.0562, 0.0424, 0.0261, -0.0498, 0.0255, -0.0840, 0.0331, 0.0406, 0.0162, -0.0522, 0.0218, 0.0323, 0.0359, 0.0128, -0.0891, -0.0569, 0.0031, -0.0694, -0.0102, 0.0118, 0.0033, 0.0127, 0.0589, -0.0783, 0.0179, 0.0200, -0.0371, 0.0325, -0.1033, 0.0483, -0.0343, -0.0714, 0.0102, 0.0665, 0.0278, 0.0285, -0.0653, -0.0834, 0.0196, 0.0399, 0.0085, 0.0246, -0.0400, 0.0215, 0.0083, 0.0302, 0.0204, 0.0360, 0.0309, -0.0306, -0.0828, 0.0142, -0.0614, -0.0103, 0.0372, -0.0456, 0.0291, 0.0565, -0.0271, 0.0518, -0.0671, 0.0012, -0.0048, -0.0565, -0.0092, 0.0336, 0.0476, -0.0351, -0.0698, 0.0487, 0.0313, -0.0491, 0.0401, 0.0246, 0.0178, 0.0405, 0.0012, 0.0311, -0.0041, 0.0367, 0.0330, -0.0609, 0.0099, -0.0097, 0.0173, 0.0494, -0.0305, 0.0272, -0.0349, 0.0025, -0.0697, -0.0414, 0.0604, -0.0707, 0.0420, 0.0380, -0.0731, 0.0546, 0.0339, -0.0758, 0.0365, -0.0712, -0.0140, 0.0365, 0.0477, 0.0796, 0.0572, 0.0212, 0.0098, 0.0133, 0.0261, 0.0329, -0.0269, 0.0437, -0.0359, 0.0296, 0.0180, -0.0008, 0.0668, -0.0448, 0.0269, -0.0734, 0.0194, -0.0494, 0.0432, 0.0449, 0.0442, 0.0389, 0.0530, 0.0420, 0.0021, 0.0084, -0.0820, -0.0081, 0.0326, 0.0265, 0.0536, -0.0714, 0.0188, 0.0298, -0.0737, 0.0110, 0.0340, 0.0016, 0.0262, 0.0179, 0.0109, 0.0426, -0.0538, 0.0649, 0.0160, 0.0146, -0.0419, -0.0851, 0.0138, 0.0399, 0.0445, -0.0849, -0.0425, 0.0293, 0.0477, 0.0108, -0.0941, -0.0386, 0.0600, 0.0089, 0.0557,-0.0892, 0.0026, 0.0192, 0.0136, -0.0207, -0.0023, 0.0163, 0.0263, -0.0112, 0.0245, 0.0411, 0.0285, 0.0267, 0.0297, 0.0213, -0.0577, 0.0169, 0.0592, 0.0227, 0.0290, 0.0074, 0.0197, 0.0282, 0.0368,0.0064, 0.0092, -0.0896, -0.0693, -0.0295, 0.0316, -0.0674, 0.0645,-0.0655, 0.0355, -0.0389, 0.0134, 0.0299, -0.0534, 0.0537, 0.0900, -0.0770, -0.0666, -0.0600, -0.0019, 0.0276, 0.0590, -0.0705, 0.0222, 0.0517, -0.0089, 0.0063, -0.0270, 0.0185, -0.0626, -0.0065, 0.0187,-0.0670, 0.0216, 0.0356, 0.0384, -0.0268, -0.0628, -0.0443, -0.0195, -0.0495, 0.1405, 0.0274, -0.0455, -0.0068, 0.0686, -0.0756, -0.0073, -0.0981, 0.0025, 0.0383, 0.0157, 0.0651, 0.0252, -0.0665, 0.0054, 0.0223, 0.0509, 0.0101, 0.0454, -0.0527, 0.0252, -0.0157, -0.0022, 0.0526, 0.0224, 0.0494, 0.0293, -0.0808, -0.1220, 0.0196, 0.0135, 0.0303, -0.0467, 0.0411, -0.0639, 0.0358, 0.0499, 0.0425, 0.0169, -0.0579, 0.0388, 0.0414, -0.0101, 0.0490, -0.0773, 0.0478, -0.0238, -0.0142, -0.0508, 0.0018, -0.0085, 0.0198, 0.0126, 0.0133, -0.0554, -0.0583, -0.0699, -0.0167, 0.0131, 0.0288, -0.0132, 0.0343, -0.0476, -0.0039, -0.0825, -0.1180, 
-0.0570, -0.0590, 0.0233, 0.0500, -0.0328, -0.0426, 0.0241, 0.0441, 0.0372, 0.0488, -0.0366, -0.0233, -0.0118, -0.0256, 0.0254, 0.0041, 0.0119, 0.0423, 0.0178, -0.0245, -0.0769, 0.0056, 0.0428, 0.0341, -0.0009, -0.0197, 0.0395, 0.0247, 0.0090, 0.0098, -0.0083, 0.0346, 0.0411, 0.0416, 0.0413, 0.0312, 0.0054, 0.0390, -0.0571, -0.0403, 0.0441, -0.0132, 0.0117, 0.0467, 0.0516,-0.0639, 0.0296, 0.0337, -0.0557, 0.0110, 0.0277, -0.0026, 0.0347, 0.0301, 0.0056, -0.0572, -0.0663, 0.0124, -0.0065, 0.0222, 0.0441,-0.0570, -0.0519, 0.0132, 0.0323, 0.0401, 0.0357, -0.0555, 0.0310,0.0028, -0.0102, -0.0598, 0.0153, -0.0438, 0.0268, -0.0097, 0.0388,-0.0330, -0.0277, -0.0581, -0.0389, 0.0099, 0.0371, -0.0455, 0.0553, 0.0753, -0.0154, -0.0385, 0.0359, 0.0403, 0.0464, 0.0499, -0.0365]])
-
-
-
- speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
-
- return speech
-
-def text_to_speech(text):
- # Generate audio
- audio_output = generate_audio(text)
-
- output_path = "output.wav"
- sf.write(output_path, audio_output.numpy(), 16000, "PCM_16")
-
- return output_path
-
-
-examples = [
- ['اگر رشتے داری ہے تو پیسے کی'],
- ['میری تعلیم جیکی کی ہے۔']
-]
-
-
-interface = gr.Interface(fn=text_to_speech, inputs="text", outputs="audio", verbose = True, title="Urdu TTS",
- description = "A simple Urdu Text to Speech Application. It is not by any means perfect and will not work for all text. You can sometimes expect it to generate random noise on an input of your choice. Right now it works successfully on very basic urdu text, such the ones in the example.", examples = examples)
-interface.launch()
\ No newline at end of file
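
A short sketch isolating the transliteration step, run inside the module above so transString() is in scope; the input is the second Gradio example, and because several Urdu characters share one romanization the reverse direction is intentionally lossy:

text = 'میری تعلیم جیکی کی ہے۔'   # second example string above
romanized = transString(text)              # Urdu script -> romanized text fed to the processor
round_trip = transString(romanized, reverse=1)
print(romanized)
print(round_trip)                          # not guaranteed to match the original
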
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/DfeHub.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/DfeHub.py
deleted file mode 100644
index d40e03803130ff4169f66bfe4f9cd2e90239f784..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/DfeHub.py
+++ /dev/null
@@ -1,77 +0,0 @@
-from __future__ import annotations
-
-import json
-import re
-import time
-
-import requests
-
-from ..typing import Any, CreateResult
-from .base_provider import BaseProvider
-
-
-class DfeHub(BaseProvider):
- url = "https://chat.dfehub.com/"
- supports_stream = True
- supports_gpt_35_turbo = True
-
- @staticmethod
- def create_completion(
- model: str,
- messages: list[dict[str, str]],
- stream: bool, **kwargs: Any) -> CreateResult:
-
- headers = {
- "authority" : "chat.dfehub.com",
- "accept" : "*/*",
- "accept-language" : "en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3",
- "content-type" : "application/json",
- "origin" : "https://chat.dfehub.com",
- "referer" : "https://chat.dfehub.com/",
- "sec-ch-ua" : '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"',
- "sec-ch-ua-mobile" : "?0",
- "sec-ch-ua-platform": '"macOS"',
- "sec-fetch-dest" : "empty",
- "sec-fetch-mode" : "cors",
- "sec-fetch-site" : "same-origin",
- "user-agent" : "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36",
- "x-requested-with" : "XMLHttpRequest",
- }
-
- json_data = {
- "messages" : messages,
- "model" : "gpt-3.5-turbo",
- "temperature" : kwargs.get("temperature", 0.5),
- "presence_penalty" : kwargs.get("presence_penalty", 0),
- "frequency_penalty" : kwargs.get("frequency_penalty", 0),
- "top_p" : kwargs.get("top_p", 1),
- "stream" : True
- }
-
- response = requests.post("https://chat.dfehub.com/api/openai/v1/chat/completions",
- headers=headers, json=json_data, timeout=3)
-
- for chunk in response.iter_lines():
- if b"detail" in chunk:
- delay = re.findall(r"\d+\.\d+", chunk.decode())
- delay = float(delay[-1])
- time.sleep(delay)
- yield from DfeHub.create_completion(model, messages, stream, **kwargs)
- if b"content" in chunk:
- data = json.loads(chunk.decode().split("data: ")[1])
- yield (data["choices"][0]["delta"]["content"])
-
- @classmethod
- @property
- def params(cls):
- params = [
- ("model", "str"),
- ("messages", "list[dict[str, str]]"),
- ("stream", "bool"),
- ("temperature", "float"),
- ("presence_penalty", "int"),
- ("frequency_penalty", "int"),
- ("top_p", "int"),
- ]
- param = ", ".join([": ".join(p) for p in params])
- return f"g4f.provider.{cls.__name__} supports: ({param})"
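For context, a provider like this is normally consumed as a streaming generator. The sketch below is illustrative only; the import path is inferred from the `g4f/Provider` layout above and the upstream endpoint may no longer respond.

```python
# Hedged usage sketch for the deleted DfeHub provider (import path assumed;
# the remote endpoint may be unavailable).
from g4f.Provider.DfeHub import DfeHub

messages = [{"role": "user", "content": "Hello!"}]

# create_completion is a generator: it yields content deltas as they stream in
# and retries after the server-suggested delay when a rate-limit chunk appears.
for delta in DfeHub.create_completion(model="gpt-3.5-turbo", messages=messages, stream=True):
    print(delta, end="", flush=True)
```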
diff --git a/spaces/Adapter/T2I-Adapter/ldm/modules/extra_condition/midas/midas/dpt_depth.py b/spaces/Adapter/T2I-Adapter/ldm/modules/extra_condition/midas/midas/dpt_depth.py
deleted file mode 100644
index 4e9aab5d2767dffea39da5b3f30e2798688216f1..0000000000000000000000000000000000000000
--- a/spaces/Adapter/T2I-Adapter/ldm/modules/extra_condition/midas/midas/dpt_depth.py
+++ /dev/null
@@ -1,109 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from .base_model import BaseModel
-from .blocks import (
- FeatureFusionBlock,
- FeatureFusionBlock_custom,
- Interpolate,
- _make_encoder,
- forward_vit,
-)
-
-
-def _make_fusion_block(features, use_bn):
- return FeatureFusionBlock_custom(
- features,
- nn.ReLU(False),
- deconv=False,
- bn=use_bn,
- expand=False,
- align_corners=True,
- )
-
-
-class DPT(BaseModel):
- def __init__(
- self,
- head,
- features=256,
- backbone="vitb_rn50_384",
- readout="project",
- channels_last=False,
- use_bn=False,
- ):
-
- super(DPT, self).__init__()
-
- self.channels_last = channels_last
-
- hooks = {
- "vitb_rn50_384": [0, 1, 8, 11],
- "vitb16_384": [2, 5, 8, 11],
- "vitl16_384": [5, 11, 17, 23],
- }
-
- # Instantiate backbone and reassemble blocks
- self.pretrained, self.scratch = _make_encoder(
- backbone,
- features,
-            False,  # Set to True if you want to train from scratch (uses ImageNet weights)
- groups=1,
- expand=False,
- exportable=False,
- hooks=hooks[backbone],
- use_readout=readout,
- )
-
- self.scratch.refinenet1 = _make_fusion_block(features, use_bn)
- self.scratch.refinenet2 = _make_fusion_block(features, use_bn)
- self.scratch.refinenet3 = _make_fusion_block(features, use_bn)
- self.scratch.refinenet4 = _make_fusion_block(features, use_bn)
-
- self.scratch.output_conv = head
-
-
- def forward(self, x):
-        if self.channels_last:
-            x = x.contiguous(memory_format=torch.channels_last)
-
- layer_1, layer_2, layer_3, layer_4 = forward_vit(self.pretrained, x)
-
- layer_1_rn = self.scratch.layer1_rn(layer_1)
- layer_2_rn = self.scratch.layer2_rn(layer_2)
- layer_3_rn = self.scratch.layer3_rn(layer_3)
- layer_4_rn = self.scratch.layer4_rn(layer_4)
-
- path_4 = self.scratch.refinenet4(layer_4_rn)
- path_3 = self.scratch.refinenet3(path_4, layer_3_rn)
- path_2 = self.scratch.refinenet2(path_3, layer_2_rn)
- path_1 = self.scratch.refinenet1(path_2, layer_1_rn)
-
- out = self.scratch.output_conv(path_1)
-
- return out
-
-
-class DPTDepthModel(DPT):
- def __init__(self, path=None, non_negative=True, **kwargs):
- features = kwargs["features"] if "features" in kwargs else 256
-
- head = nn.Sequential(
- nn.Conv2d(features, features // 2, kernel_size=3, stride=1, padding=1),
- Interpolate(scale_factor=2, mode="bilinear", align_corners=True),
- nn.Conv2d(features // 2, 32, kernel_size=3, stride=1, padding=1),
- nn.ReLU(True),
- nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0),
- nn.ReLU(True) if non_negative else nn.Identity(),
- nn.Identity(),
- )
-
- super().__init__(head, **kwargs)
-
- if path is not None:
- self.load(path)
-
- def forward(self, x):
- return super().forward(x).squeeze(dim=1)
-
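A minimal sketch of how the depth model above is typically exercised, assuming timm and the sibling `blocks`/`base_model` modules are importable; shapes are indicative rather than guaranteed.

```python
# Hedged sketch: run the DPT depth head on a dummy batch (no checkpoint loaded).
import torch

model = DPTDepthModel(path=None, backbone="vitb_rn50_384", non_negative=True)
model.eval()

x = torch.randn(1, 3, 384, 384)   # DPT ViT backbones are set up for 384x384 crops
with torch.no_grad():
    depth = model(x)              # forward() squeezes the channel dimension
print(depth.shape)                # expected to match the input resolution, e.g. (1, 384, 384)
```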
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/swipe/Swipe.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/swipe/Swipe.d.ts
deleted file mode 100644
index 7f1f79b0473a48491014feb5a681948a2a12aab6..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/swipe/Swipe.d.ts
+++ /dev/null
@@ -1,2 +0,0 @@
-import { Swipe } from '../../../plugins/gestures';
-export default Swipe;
\ No newline at end of file
diff --git a/spaces/AlanMars/QYL-AI-Space/assets/custom.js b/spaces/AlanMars/QYL-AI-Space/assets/custom.js
deleted file mode 100644
index af69a893e5bbf36d6d2f78ede4c71c49967ec987..0000000000000000000000000000000000000000
--- a/spaces/AlanMars/QYL-AI-Space/assets/custom.js
+++ /dev/null
@@ -1,607 +0,0 @@
-
-// custom javascript here
-
-const MAX_HISTORY_LENGTH = 32;
-
-var key_down_history = [];
-var currentIndex = -1;
-var user_input_ta;
-
-var gradioContainer = null;
-var user_input_ta = null;
-var user_input_tb = null;
-var userInfoDiv = null;
-var appTitleDiv = null;
-var chatbot = null;
-var chatbotWrap = null;
-var apSwitch = null;
-var empty_botton = null;
-var messageBotDivs = null;
-// var renderLatex = null;
-var loginUserForm = null;
-var logginUser = null;
-
-var userLogged = false;
-var usernameGotten = false;
-var shouldRenderLatex = false;
-var historyLoaded = false;
-
-var ga = document.getElementsByTagName("gradio-app");
-var targetNode = ga[0];
-var isInIframe = (window.self !== window.top);
-var language = navigator.language.slice(0,2);
-
-var forView_i18n = {
- 'zh': "仅供查看",
- 'en': "For viewing only",
- 'ja': "閲覧専用",
- 'fr': "Pour consultation seulement",
- 'es': "Solo para visualización",
-};
-
-// Has the gradio page finished loading? Can we start touching its elements?
-function gradioLoaded(mutations) {
- for (var i = 0; i < mutations.length; i++) {
- if (mutations[i].addedNodes.length) {
- loginUserForm = document.querySelector(".gradio-container > .main > .wrap > .panel > .form")
- gradioContainer = document.querySelector(".gradio-container");
- user_input_tb = document.getElementById('user_input_tb');
- userInfoDiv = document.getElementById("user_info");
- appTitleDiv = document.getElementById("app_title");
- chatbot = document.querySelector('#chuanhu_chatbot');
- chatbotWrap = document.querySelector('#chuanhu_chatbot > .wrap');
- apSwitch = document.querySelector('.apSwitch input[type="checkbox"]');
- // renderLatex = document.querySelector("#render_latex_checkbox > label > input");
- empty_botton = document.getElementById("empty_btn")
-
- if (loginUserForm) {
- localStorage.setItem("userLogged", true);
- userLogged = true;
- }
-
-            if (gradioContainer && apSwitch) { // has gradioContainer loaded yet?
- adjustDarkMode();
- }
-            if (user_input_tb) { // has user_input_tb loaded yet?
- selectHistory();
- }
-            if (userInfoDiv && appTitleDiv) { // have userInfoDiv and appTitleDiv loaded yet?
- if (!usernameGotten) {
- getUserInfo();
- }
-                setTimeout(showOrHideUserInfo, 2000);
- }
-            if (chatbot) { // has chatbot loaded yet?
- setChatbotHeight();
- }
- if (chatbotWrap) {
- if (!historyLoaded) {
- loadHistoryHtml();
- }
- setChatbotScroll();
- }
-            // if (renderLatex) { // has renderLatex loaded yet?
- // shouldRenderLatex = renderLatex.checked;
- // updateMathJax();
- // }
- if (empty_botton) {
- emptyHistory();
- }
- }
- }
-}
-
-function webLocale() {
- console.log("webLocale", language);
- if (forView_i18n.hasOwnProperty(language)) {
- var forView = forView_i18n[language];
- var forViewStyle = document.createElement('style');
- forViewStyle.innerHTML = '.wrap>.history-message>:last-child::after { content: "' + forView + '"!important; }';
- document.head.appendChild(forViewStyle);
- // console.log("added forViewStyle", forView);
- }
-}
-
-function selectHistory() {
- user_input_ta = user_input_tb.querySelector("textarea");
- if (user_input_ta) {
-        observer.disconnect(); // stop observing
-        // listen for keydown events on the textarea
- user_input_ta.addEventListener("keydown", function (event) {
- var value = user_input_ta.value.trim();
-            // check whether an arrow key was pressed
- if (event.code === 'ArrowUp' || event.code === 'ArrowDown') {
-                // if an arrow key was pressed while the input box has content that is not in the history, do nothing
- if (value && key_down_history.indexOf(value) === -1)
- return;
-                // prevent the default behavior for actions we do handle
- event.preventDefault();
- var length = key_down_history.length;
- if (length === 0) {
-                    currentIndex = -1; // if the history is empty, just reset the current selection
- return;
- }
- if (currentIndex === -1) {
- currentIndex = length;
- }
- if (event.code === 'ArrowUp' && currentIndex > 0) {
- currentIndex--;
- user_input_ta.value = key_down_history[currentIndex];
- } else if (event.code === 'ArrowDown' && currentIndex < length - 1) {
- currentIndex++;
- user_input_ta.value = key_down_history[currentIndex];
- }
- user_input_ta.selectionStart = user_input_ta.value.length;
- user_input_ta.selectionEnd = user_input_ta.value.length;
- const input_event = new InputEvent("input", { bubbles: true, cancelable: true });
- user_input_ta.dispatchEvent(input_event);
- } else if (event.code === "Enter") {
- if (value) {
- currentIndex = -1;
- if (key_down_history.indexOf(value) === -1) {
- key_down_history.push(value);
- if (key_down_history.length > MAX_HISTORY_LENGTH) {
- key_down_history.shift();
- }
- }
- }
- }
- });
- }
-}
-
-var username = null;
-function getUserInfo() {
- if (usernameGotten) {
- return;
- }
- userLogged = localStorage.getItem('userLogged');
- if (userLogged) {
- username = userInfoDiv.innerText;
- if (username) {
- if (username.includes("getting user info…")) {
- setTimeout(getUserInfo, 500);
- return;
- } else if (username === " ") {
- localStorage.removeItem("username");
- localStorage.removeItem("userLogged")
- userLogged = false;
- usernameGotten = true;
- return;
- } else {
- username = username.match(/User:\s*(.*)/)[1] || username;
- localStorage.setItem("username", username);
- usernameGotten = true;
- clearHistoryHtml();
- }
- }
- }
-}
-
-function toggleUserInfoVisibility(shouldHide) {
- if (userInfoDiv) {
- if (shouldHide) {
- userInfoDiv.classList.add("hideK");
- } else {
- userInfoDiv.classList.remove("hideK");
- }
- }
-}
-function showOrHideUserInfo() {
- var sendBtn = document.getElementById("submit_btn");
-
- // Bind mouse/touch events to show/hide user info
- appTitleDiv.addEventListener("mouseenter", function () {
- toggleUserInfoVisibility(false);
- });
- userInfoDiv.addEventListener("mouseenter", function () {
- toggleUserInfoVisibility(false);
- });
- sendBtn.addEventListener("mouseenter", function () {
- toggleUserInfoVisibility(false);
- });
-
- appTitleDiv.addEventListener("mouseleave", function () {
- toggleUserInfoVisibility(true);
- });
- userInfoDiv.addEventListener("mouseleave", function () {
- toggleUserInfoVisibility(true);
- });
- sendBtn.addEventListener("mouseleave", function () {
- toggleUserInfoVisibility(true);
- });
-
- appTitleDiv.ontouchstart = function () {
- toggleUserInfoVisibility(false);
- };
- userInfoDiv.ontouchstart = function () {
- toggleUserInfoVisibility(false);
- };
- sendBtn.ontouchstart = function () {
- toggleUserInfoVisibility(false);
- };
-
- appTitleDiv.ontouchend = function () {
- setTimeout(function () {
- toggleUserInfoVisibility(true);
- }, 3000);
- };
- userInfoDiv.ontouchend = function () {
- setTimeout(function () {
- toggleUserInfoVisibility(true);
- }, 3000);
- };
- sendBtn.ontouchend = function () {
- setTimeout(function () {
- toggleUserInfoVisibility(true);
-        }, 3000); // Delay 3 seconds to hide user info
- };
-
-    // Hide user info after 2 seconds
- setTimeout(function () {
- toggleUserInfoVisibility(true);
- }, 2000);
-}
-
-function toggleDarkMode(isEnabled) {
- if (isEnabled) {
- gradioContainer.classList.add("dark");
- document.body.style.setProperty("background-color", "var(--neutral-950)", "important");
- } else {
- gradioContainer.classList.remove("dark");
- document.body.style.backgroundColor = "";
- }
-}
-function adjustDarkMode() {
- const darkModeQuery = window.matchMedia("(prefers-color-scheme: dark)");
-
-    // set the initial state based on the current color scheme
-    apSwitch.checked = darkModeQuery.matches;
-    toggleDarkMode(darkModeQuery.matches);
-    // listen for color scheme changes
- darkModeQuery.addEventListener("change", (e) => {
- apSwitch.checked = e.matches;
- toggleDarkMode(e.matches);
- });
- // apSwitch = document.querySelector('.apSwitch input[type="checkbox"]');
- apSwitch.addEventListener("change", (e) => {
- toggleDarkMode(e.target.checked);
- });
-}
-
-function setChatbotHeight() {
- const screenWidth = window.innerWidth;
- const statusDisplay = document.querySelector('#status_display');
- const statusDisplayHeight = statusDisplay ? statusDisplay.offsetHeight : 0;
- const wrap = chatbot.querySelector('.wrap');
- const vh = window.innerHeight * 0.01;
- document.documentElement.style.setProperty('--vh', `${vh}px`);
- if (isInIframe) {
- chatbot.style.height = `520px`;
- wrap.style.maxHeight = `calc(520px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`
- } else {
- if (screenWidth <= 320) {
- chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px)`;
- wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`;
- } else if (screenWidth <= 499) {
- chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px)`;
- wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`;
- } else {
- chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px)`;
- wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`;
- }
- }
-}
-function setChatbotScroll() {
- var scrollHeight = chatbotWrap.scrollHeight;
- chatbotWrap.scrollTo(0,scrollHeight)
-}
-var rangeInputs = null;
-var numberInputs = null;
-function setSlider() {
- rangeInputs = document.querySelectorAll('input[type="range"]');
- numberInputs = document.querySelectorAll('input[type="number"]')
- setSliderRange();
- rangeInputs.forEach(rangeInput => {
- rangeInput.addEventListener('input', setSliderRange);
- });
- numberInputs.forEach(numberInput => {
- numberInput.addEventListener('input', setSliderRange);
- })
-}
-function setSliderRange() {
- var range = document.querySelectorAll('input[type="range"]');
- range.forEach(range => {
- range.style.backgroundSize = (range.value - range.min) / (range.max - range.min) * 100 + '% 100%';
- });
-}
-
-function addChuanhuButton(botElement) {
- var rawMessage = null;
- var mdMessage = null;
- rawMessage = botElement.querySelector('.raw-message');
- mdMessage = botElement.querySelector('.md-message');
- if (!rawMessage) {
- var buttons = botElement.querySelectorAll('button.chuanhu-btn');
- for (var i = 0; i < buttons.length; i++) {
- buttons[i].parentNode.removeChild(buttons[i]);
- }
- return;
- }
- var copyButton = null;
- var toggleButton = null;
- copyButton = botElement.querySelector('button.copy-bot-btn');
- toggleButton = botElement.querySelector('button.toggle-md-btn');
- if (copyButton) copyButton.remove();
- if (toggleButton) toggleButton.remove();
-
- // Copy bot button
- var copyButton = document.createElement('button');
- copyButton.classList.add('chuanhu-btn');
- copyButton.classList.add('copy-bot-btn');
- copyButton.setAttribute('aria-label', 'Copy');
- copyButton.innerHTML = copyIcon;
- copyButton.addEventListener('click', () => {
- const textToCopy = rawMessage.innerText;
- navigator.clipboard
- .writeText(textToCopy)
- .then(() => {
- copyButton.innerHTML = copiedIcon;
- setTimeout(() => {
- copyButton.innerHTML = copyIcon;
- }, 1500);
- })
- .catch(() => {
- console.error("copy failed");
- });
- });
- botElement.appendChild(copyButton);
-
- // Toggle button
- var toggleButton = document.createElement('button');
- toggleButton.classList.add('chuanhu-btn');
- toggleButton.classList.add('toggle-md-btn');
- toggleButton.setAttribute('aria-label', 'Toggle');
- var renderMarkdown = mdMessage.classList.contains('hideM');
- toggleButton.innerHTML = renderMarkdown ? mdIcon : rawIcon;
- toggleButton.addEventListener('click', () => {
- renderMarkdown = mdMessage.classList.contains('hideM');
- if (renderMarkdown){
- renderMarkdownText(botElement);
- toggleButton.innerHTML=rawIcon;
- } else {
- removeMarkdownText(botElement);
- toggleButton.innerHTML=mdIcon;
- }
- });
- botElement.insertBefore(toggleButton, copyButton);
-}
-
-function addCopyCodeButton(pre) {
- var code = null;
- var firstChild = null;
- code = pre.querySelector('code');
- if (!code) return;
- firstChild = code.querySelector('div');
- if (!firstChild) return;
- var oldCopyButton = null;
- oldCopyButton = code.querySelector('button.copy-code-btn');
- // if (oldCopyButton) oldCopyButton.remove();
-    if (oldCopyButton) return; // not very useful: newly generated messages keep overwriting the pre, which makes the button disappear, so this code path stays disabled...
- var codeButton = document.createElement('button');
- codeButton.classList.add('copy-code-btn');
- codeButton.textContent = '\uD83D\uDCCE';
-
- code.insertBefore(codeButton, firstChild);
- codeButton.addEventListener('click', function () {
- var range = document.createRange();
- range.selectNodeContents(code);
- range.setStartBefore(firstChild);
- navigator.clipboard
- .writeText(range.toString())
- .then(() => {
- codeButton.textContent = '\u2714';
- setTimeout(function () {
- codeButton.textContent = '\uD83D\uDCCE';
- }, 2000);
- })
- .catch(e => {
- console.error(e);
- codeButton.textContent = '\u2716';
- });
- });
-}
-
-function renderMarkdownText(message) {
- var mdDiv = message.querySelector('.md-message');
- if (mdDiv) mdDiv.classList.remove('hideM');
- var rawDiv = message.querySelector('.raw-message');
- if (rawDiv) rawDiv.classList.add('hideM');
-}
-function removeMarkdownText(message) {
- var rawDiv = message.querySelector('.raw-message');
- if (rawDiv) rawDiv.classList.remove('hideM');
- var mdDiv = message.querySelector('.md-message');
- if (mdDiv) mdDiv.classList.add('hideM');
-}
-
-var rendertime = 0; // for debugging
-var mathjaxUpdated = false;
-
-function renderMathJax() {
- messageBotDivs = document.querySelectorAll('.message.bot .md-message');
- for (var i = 0; i < messageBotDivs.length; i++) {
- var mathJaxSpan = messageBotDivs[i].querySelector('.MathJax_Preview');
- if (!mathJaxSpan && shouldRenderLatex && !mathjaxUpdated) {
- MathJax.Hub.Queue(["Typeset", MathJax.Hub, messageBotDivs[i]]);
- rendertime +=1; // for debugging
- // console.log("renderingMathJax", i)
- }
- }
- mathjaxUpdated = true;
- // console.log("MathJax Rendered")
-}
-
-function removeMathjax() {
- // var jax = MathJax.Hub.getAllJax();
- // for (var i = 0; i < jax.length; i++) {
- // // MathJax.typesetClear(jax[i]);
- // jax[i].Text(newmath)
- // jax[i].Reprocess()
- // }
-    // MathJax does not provide a way to convert rendered math back to the original text, so there is nothing more we can do here.
- mathjaxUpdated = true;
- // console.log("MathJax removed!");
-}
-
-function updateMathJax() {
- // renderLatex.addEventListener("change", function() {
- // shouldRenderLatex = renderLatex.checked;
- // if (!mathjaxUpdated) {
- // if (shouldRenderLatex) {
- // renderMathJax();
- // } else {
- // console.log("MathJax Disabled")
- // removeMathjax();
- // }
- // } else {
- // if (!shouldRenderLatex) {
- // mathjaxUpdated = false; // reset
- // }
- // }
- // });
- if (shouldRenderLatex && !mathjaxUpdated) {
- renderMathJax();
- }
- mathjaxUpdated = false;
-}
-
-let timeoutId;
-let isThrottled = false;
-var mmutation
-// Watch all elements for changes to bot messages, both to find MathJax that needs rendering and to add copy buttons to bot messages.
-var mObserver = new MutationObserver(function (mutationsList) {
- for (mmutation of mutationsList) {
- if (mmutation.type === 'childList') {
- for (var node of mmutation.addedNodes) {
- if (node.nodeType === 1 && node.classList.contains('message') && node.getAttribute('data-testid') === 'bot') {
- if (shouldRenderLatex) {
- renderMathJax();
- mathjaxUpdated = false;
- }
- saveHistoryHtml();
- document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot').forEach(addChuanhuButton);
- document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot pre').forEach(addCopyCodeButton);
- }
- if (node.tagName === 'INPUT' && node.getAttribute('type') === 'range') {
- setSlider();
- }
- }
- for (var node of mmutation.removedNodes) {
- if (node.nodeType === 1 && node.classList.contains('message') && node.getAttribute('data-testid') === 'bot') {
- if (shouldRenderLatex) {
- renderMathJax();
- mathjaxUpdated = false;
- }
- saveHistoryHtml();
- document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot').forEach(addChuanhuButton);
- document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot pre').forEach(addCopyCodeButton);
- }
- }
- } else if (mmutation.type === 'attributes') {
- if (mmutation.target.nodeType === 1 && mmutation.target.classList.contains('message') && mmutation.target.getAttribute('data-testid') === 'bot') {
-                document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot pre').forEach(addCopyCodeButton); // still a bit off: the button gets added too many times, but the bot reply keeps overwriting the pre while it is being generated...
-                if (isThrottled) break; // throttle to avoid re-rendering over and over
- isThrottled = true;
- clearTimeout(timeoutId);
- timeoutId = setTimeout(() => {
- isThrottled = false;
- if (shouldRenderLatex) {
- renderMathJax();
- mathjaxUpdated = false;
- }
- document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot').forEach(addChuanhuButton);
- saveHistoryHtml();
- }, 500);
- }
- }
- }
-});
-mObserver.observe(document.documentElement, { attributes: true, childList: true, subtree: true });
-
-var loadhistorytime = 0; // for debugging
-function saveHistoryHtml() {
- var historyHtml = document.querySelector('#chuanhu_chatbot > .wrap');
- localStorage.setItem('chatHistory', historyHtml.innerHTML);
- // console.log("History Saved")
- historyLoaded = false;
-}
-function loadHistoryHtml() {
- var historyHtml = localStorage.getItem('chatHistory');
- if (!historyHtml) {
- historyLoaded = true;
- return; // no history, do nothing
- }
- userLogged = localStorage.getItem('userLogged');
- if (userLogged){
- historyLoaded = true;
- return; // logged in, do nothing
- }
- if (!historyLoaded) {
- var tempDiv = document.createElement('div');
- tempDiv.innerHTML = historyHtml;
- var buttons = tempDiv.querySelectorAll('button.chuanhu-btn');
- for (var i = 0; i < buttons.length; i++) {
- buttons[i].parentNode.removeChild(buttons[i]);
- }
- var fakeHistory = document.createElement('div');
- fakeHistory.classList.add('history-message');
- fakeHistory.innerHTML = tempDiv.innerHTML;
- webLocale();
- chatbotWrap.insertBefore(fakeHistory, chatbotWrap.firstChild);
- // var fakeHistory = document.createElement('div');
- // fakeHistory.classList.add('history-message');
- // fakeHistory.innerHTML = historyHtml;
- // chatbotWrap.insertBefore(fakeHistory, chatbotWrap.firstChild);
- historyLoaded = true;
- console.log("History Loaded");
- loadhistorytime += 1; // for debugging
- } else {
- historyLoaded = false;
- }
-}
-function clearHistoryHtml() {
- localStorage.removeItem("chatHistory");
- historyMessages = chatbotWrap.querySelector('.history-message');
- if (historyMessages) {
- chatbotWrap.removeChild(historyMessages);
- console.log("History Cleared");
- }
-}
-function emptyHistory() {
- empty_botton.addEventListener("click", function () {
- clearHistoryHtml();
- });
-}
-
-// Watch for DOM changes inside the page
-var observer = new MutationObserver(function (mutations) {
- gradioLoaded(mutations);
-});
-observer.observe(targetNode, { childList: true, subtree: true });
-
-// Watch for page-level changes
-window.addEventListener("DOMContentLoaded", function () {
- isInIframe = (window.self !== window.top);
- historyLoaded = false;
- shouldRenderLatex = !!document.querySelector('script[src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/MathJax.js?config=TeX-MML-AM_CHTML"]');
-});
-window.addEventListener('resize', setChatbotHeight);
-window.addEventListener('scroll', setChatbotHeight);
-window.matchMedia("(prefers-color-scheme: dark)").addEventListener("change", adjustDarkMode);
-
-// button svg code
-const copyIcon = '';
-const copiedIcon = '';
-const mdIcon = '';
-const rawIcon = '';
diff --git a/spaces/AlexWang/lama/saicinpainting/evaluation/masks/countless/test.py b/spaces/AlexWang/lama/saicinpainting/evaluation/masks/countless/test.py
deleted file mode 100644
index 7809beb7aeeb3bcb10d03093a564917b1f2b4786..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/saicinpainting/evaluation/masks/countless/test.py
+++ /dev/null
@@ -1,195 +0,0 @@
-from copy import deepcopy
-
-import numpy as np
-
-import countless2d
-import countless3d
-
-def test_countless2d():
- def test_all_cases(fn, test_zero):
- case1 = np.array([ [ 1, 2 ], [ 3, 4 ] ]).reshape((2,2,1,1)) # all different
- case2 = np.array([ [ 1, 1 ], [ 2, 3 ] ]).reshape((2,2,1,1)) # two are same
- case1z = np.array([ [ 0, 1 ], [ 2, 3 ] ]).reshape((2,2,1,1)) # all different
- case2z = np.array([ [ 0, 0 ], [ 2, 3 ] ]).reshape((2,2,1,1)) # two are same
- case3 = np.array([ [ 1, 1 ], [ 2, 2 ] ]).reshape((2,2,1,1)) # two groups are same
- case4 = np.array([ [ 1, 2 ], [ 2, 2 ] ]).reshape((2,2,1,1)) # 3 are the same
- case5 = np.array([ [ 5, 5 ], [ 5, 5 ] ]).reshape((2,2,1,1)) # all are the same
-
- is_255_handled = np.array([ [ 255, 255 ], [ 1, 2 ] ], dtype=np.uint8).reshape((2,2,1,1))
-
- test = lambda case: fn(case)
-
- if test_zero:
- assert test(case1z) == [[[[3]]]] # d
- assert test(case2z) == [[[[0]]]] # a==b
- else:
- assert test(case1) == [[[[4]]]] # d
- assert test(case2) == [[[[1]]]] # a==b
-
- assert test(case3) == [[[[1]]]] # a==b
- assert test(case4) == [[[[2]]]] # b==c
- assert test(case5) == [[[[5]]]] # a==b
-
- assert test(is_255_handled) == [[[[255]]]]
-
- assert fn(case1).dtype == case1.dtype
-
- test_all_cases(countless2d.simplest_countless, False)
- test_all_cases(countless2d.quick_countless, False)
- test_all_cases(countless2d.quickest_countless, False)
- test_all_cases(countless2d.stippled_countless, False)
-
-
-
- methods = [
- countless2d.zero_corrected_countless,
- countless2d.countless,
- countless2d.countless_if,
- # countless2d.counting, # counting doesn't respect order so harder to write a test
- ]
-
- for fn in methods:
- print(fn.__name__)
- test_all_cases(fn, True)
-
-def test_stippled_countless2d():
- a = np.array([ [ 1, 2 ], [ 3, 4 ] ]).reshape((2,2,1,1))
- b = np.array([ [ 0, 2 ], [ 3, 4 ] ]).reshape((2,2,1,1))
- c = np.array([ [ 1, 0 ], [ 3, 4 ] ]).reshape((2,2,1,1))
- d = np.array([ [ 1, 2 ], [ 0, 4 ] ]).reshape((2,2,1,1))
- e = np.array([ [ 1, 2 ], [ 3, 0 ] ]).reshape((2,2,1,1))
- f = np.array([ [ 0, 0 ], [ 3, 4 ] ]).reshape((2,2,1,1))
- g = np.array([ [ 0, 2 ], [ 0, 4 ] ]).reshape((2,2,1,1))
- h = np.array([ [ 0, 2 ], [ 3, 0 ] ]).reshape((2,2,1,1))
- i = np.array([ [ 1, 0 ], [ 0, 4 ] ]).reshape((2,2,1,1))
- j = np.array([ [ 1, 2 ], [ 0, 0 ] ]).reshape((2,2,1,1))
- k = np.array([ [ 1, 0 ], [ 3, 0 ] ]).reshape((2,2,1,1))
- l = np.array([ [ 1, 0 ], [ 0, 0 ] ]).reshape((2,2,1,1))
- m = np.array([ [ 0, 2 ], [ 0, 0 ] ]).reshape((2,2,1,1))
- n = np.array([ [ 0, 0 ], [ 3, 0 ] ]).reshape((2,2,1,1))
- o = np.array([ [ 0, 0 ], [ 0, 4 ] ]).reshape((2,2,1,1))
- z = np.array([ [ 0, 0 ], [ 0, 0 ] ]).reshape((2,2,1,1))
-
- test = countless2d.stippled_countless
-
- # Note: We only tested non-matching cases above,
- # cases f,g,h,i,j,k prove their duals work as well
- # b/c if two pixels are black, either one can be chosen
- # if they are different or the same.
-
- assert test(a) == [[[[4]]]]
- assert test(b) == [[[[4]]]]
- assert test(c) == [[[[4]]]]
- assert test(d) == [[[[4]]]]
- assert test(e) == [[[[1]]]]
- assert test(f) == [[[[4]]]]
- assert test(g) == [[[[4]]]]
- assert test(h) == [[[[2]]]]
- assert test(i) == [[[[4]]]]
- assert test(j) == [[[[1]]]]
- assert test(k) == [[[[1]]]]
- assert test(l) == [[[[1]]]]
- assert test(m) == [[[[2]]]]
- assert test(n) == [[[[3]]]]
- assert test(o) == [[[[4]]]]
- assert test(z) == [[[[0]]]]
-
- bc = np.array([ [ 0, 2 ], [ 2, 4 ] ]).reshape((2,2,1,1))
- bd = np.array([ [ 0, 2 ], [ 3, 2 ] ]).reshape((2,2,1,1))
- cd = np.array([ [ 0, 2 ], [ 3, 3 ] ]).reshape((2,2,1,1))
-
- assert test(bc) == [[[[2]]]]
- assert test(bd) == [[[[2]]]]
- assert test(cd) == [[[[3]]]]
-
- ab = np.array([ [ 1, 1 ], [ 0, 4 ] ]).reshape((2,2,1,1))
- ac = np.array([ [ 1, 2 ], [ 1, 0 ] ]).reshape((2,2,1,1))
- ad = np.array([ [ 1, 0 ], [ 3, 1 ] ]).reshape((2,2,1,1))
-
- assert test(ab) == [[[[1]]]]
- assert test(ac) == [[[[1]]]]
- assert test(ad) == [[[[1]]]]
-
-def test_countless3d():
- def test_all_cases(fn):
- alldifferent = [
- [
- [1,2],
- [3,4],
- ],
- [
- [5,6],
- [7,8]
- ]
- ]
- allsame = [
- [
- [1,1],
- [1,1],
- ],
- [
- [1,1],
- [1,1]
- ]
- ]
-
- assert fn(np.array(alldifferent)) == [[[8]]]
- assert fn(np.array(allsame)) == [[[1]]]
-
- twosame = deepcopy(alldifferent)
- twosame[1][1][0] = 2
-
- assert fn(np.array(twosame)) == [[[2]]]
-
- threemixed = [
- [
- [3,3],
- [1,2],
- ],
- [
- [2,4],
- [4,3]
- ]
- ]
- assert fn(np.array(threemixed)) == [[[3]]]
-
- foursame = [
- [
- [4,4],
- [1,2],
- ],
- [
- [2,4],
- [4,3]
- ]
- ]
-
- assert fn(np.array(foursame)) == [[[4]]]
-
- fivesame = [
- [
- [5,4],
- [5,5],
- ],
- [
- [2,4],
- [5,5]
- ]
- ]
-
- assert fn(np.array(fivesame)) == [[[5]]]
-
- def countless3d_generalized(img):
- return countless3d.countless_generalized(img, (2,2,2))
- def countless3d_dynamic_generalized(img):
- return countless3d.dynamic_countless_generalized(img, (2,2,2))
-
- methods = [
- countless3d.countless3d,
- countless3d.dynamic_countless3d,
- countless3d_generalized,
- countless3d_dynamic_generalized,
- ]
-
- for fn in methods:
- test_all_cases(fn)
\ No newline at end of file
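These tests pin down the COUNTLESS contract: each 2×2 block of labels collapses to the value that already has a matching pair, falling back to the last pixel otherwise. A toy call, assuming the `countless2d` module imported by the tests is on the path:

```python
# Hedged sketch mirroring the "two are same" case asserted in the tests above.
import numpy as np
import countless2d  # same module the tests import; assumed importable here

labels = np.array([[1, 1],
                   [2, 3]]).reshape((2, 2, 1, 1))  # the pair of 1s should win

print(countless2d.quick_countless(labels))  # expected [[[[1]]]]
```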
diff --git a/spaces/Aloento/9Nine-PITS/text/symbols.py b/spaces/Aloento/9Nine-PITS/text/symbols.py
deleted file mode 100644
index 3507aa00ac3a051c844f3525e7c1454978c5c635..0000000000000000000000000000000000000000
--- a/spaces/Aloento/9Nine-PITS/text/symbols.py
+++ /dev/null
@@ -1,14 +0,0 @@
-"""
-Defines the set of symbols used in text input to the model.
-"""
-
-_pad = '_'
-_punctuation = ',.!?-~…'
-_letters = 'NQabdefghijklmnopstuvwxyzɑæʃʑçɯɪɔɛɹðəɫɥɸʊɾʒθβŋɦ⁼ʰ`^#*=ˈˌ→↓↑ '
-
-_extra = "ˌ%$"
-# Export all symbols:
-symbols = [_pad] + list(_punctuation) + list(_letters) + list(_extra)
-
-# Special symbol ids
-SPACE_ID = symbols.index(" ")
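For orientation, a symbol table like this is usually turned into a symbol-to-id map by the text front end. The helper below is an illustration, not part of the deleted file; the import path is assumed from the repository layout.

```python
# Hedged sketch: map cleaned text onto symbol ids using the table defined above.
from text.symbols import symbols  # import path assumed from text/symbols.py

_symbol_to_id = {s: i for i, s in enumerate(symbols)}

def text_to_ids(text):
    # characters outside the symbol set are skipped; real front ends normalize first
    return [_symbol_to_id[ch] for ch in text if ch in _symbol_to_id]

print(text_to_ids("pits…"))  # ids for p, i, t, s and the ellipsis punctuation
```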
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/mulit_token_textual_inversion/multi_token_clip.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/mulit_token_textual_inversion/multi_token_clip.py
deleted file mode 100644
index 4388771b840df36ffa3a986dc9a2ad81ac7ee425..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/mulit_token_textual_inversion/multi_token_clip.py
+++ /dev/null
@@ -1,103 +0,0 @@
-"""
-The main idea of this code is to spare users the hassle of dealing with multiple tokens per concept: instead of typing
-a photo of <concept>_0 <concept>_1 ... and so on,
-they can simply write
-a photo of <concept>
-which gets translated to the above. This needs to work for both inference and training.
-For inference,
-the tokenizer encodes the text, so we want logic in our tokenizer that replaces the placeholder token with
-its underlying vectors.
-For training,
-we want to abstract away some logic, such as
-1. Adding tokens
-2. Updating the gradient mask
-3. Saving embeddings
-into our util class here.
-TODO:
-1. have tokenizer keep track of concept, multi-concept pairs and replace during encode call x
-2. have mechanism for adding tokens x
-3. have mechanism for saving embeddings x
-4. get mask to update x
-5. loading tokens from embedding x
-6. integrate into training x
-7. test
-"""
-import copy
-import random
-
-from transformers import CLIPTokenizer
-
-
-class MultiTokenCLIPTokenizer(CLIPTokenizer):
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- self.token_map = {}
-
- def try_adding_tokens(self, placeholder_token, *args, **kwargs):
- num_added_tokens = super().add_tokens(placeholder_token, *args, **kwargs)
- if num_added_tokens == 0:
- raise ValueError(
- f"The tokenizer already contains the token {placeholder_token}. Please pass a different"
- " `placeholder_token` that is not already in the tokenizer."
- )
-
- def add_placeholder_tokens(self, placeholder_token, *args, num_vec_per_token=1, **kwargs):
- output = []
- if num_vec_per_token == 1:
- self.try_adding_tokens(placeholder_token, *args, **kwargs)
- output.append(placeholder_token)
- else:
- output = []
- for i in range(num_vec_per_token):
- ith_token = placeholder_token + f"_{i}"
- self.try_adding_tokens(ith_token, *args, **kwargs)
- output.append(ith_token)
- # handle cases where there is a new placeholder token that contains the current placeholder token but is larger
- for token in self.token_map:
- if token in placeholder_token:
- raise ValueError(
- f"The tokenizer already has placeholder token {token} that can get confused with"
-                    f" {placeholder_token}. Keep placeholder tokens independent."
- )
- self.token_map[placeholder_token] = output
-
- def replace_placeholder_tokens_in_text(self, text, vector_shuffle=False, prop_tokens_to_load=1.0):
- """
- Here, we replace the placeholder tokens in text recorded in token_map so that the text_encoder
- can encode them
- vector_shuffle was inspired by https://github.com/rinongal/textual_inversion/pull/119
- where shuffling tokens were found to force the model to learn the concepts more descriptively.
- """
- if isinstance(text, list):
- output = []
- for i in range(len(text)):
- output.append(self.replace_placeholder_tokens_in_text(text[i], vector_shuffle=vector_shuffle))
- return output
- for placeholder_token in self.token_map:
- if placeholder_token in text:
- tokens = self.token_map[placeholder_token]
- tokens = tokens[: 1 + int(len(tokens) * prop_tokens_to_load)]
- if vector_shuffle:
- tokens = copy.copy(tokens)
- random.shuffle(tokens)
- text = text.replace(placeholder_token, " ".join(tokens))
- return text
-
- def __call__(self, text, *args, vector_shuffle=False, prop_tokens_to_load=1.0, **kwargs):
- return super().__call__(
- self.replace_placeholder_tokens_in_text(
- text, vector_shuffle=vector_shuffle, prop_tokens_to_load=prop_tokens_to_load
- ),
- *args,
- **kwargs,
- )
-
- def encode(self, text, *args, vector_shuffle=False, prop_tokens_to_load=1.0, **kwargs):
- return super().encode(
- self.replace_placeholder_tokens_in_text(
- text, vector_shuffle=vector_shuffle, prop_tokens_to_load=prop_tokens_to_load
- ),
- *args,
- **kwargs,
- )
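A rough usage sketch of the tokenizer above, as its accompanying training script would use it; the CLIP checkpoint name and placeholder string are illustrative assumptions.

```python
# Hedged sketch: register a 3-vector placeholder and let the tokenizer expand it.
from multi_token_clip import MultiTokenCLIPTokenizer  # import path assumed

tokenizer = MultiTokenCLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
tokenizer.add_placeholder_tokens("<concept>", num_vec_per_token=3)

prompt = "a photo of <concept>"
# "<concept>" becomes "<concept>_0 <concept>_1 <concept>_2" before encoding
print(tokenizer.replace_placeholder_tokens_in_text(prompt))
ids = tokenizer(prompt, padding="max_length", truncation=True, return_tensors="pt").input_ids
```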
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/shap_e/camera.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/shap_e/camera.py
deleted file mode 100644
index 7ef0d66070223a80eed59da8d842389fed0c7aef..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/shap_e/camera.py
+++ /dev/null
@@ -1,147 +0,0 @@
-# Copyright 2023 Open AI and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-from typing import Tuple
-
-import numpy as np
-import torch
-
-
-@dataclass
-class DifferentiableProjectiveCamera:
- """
-    Implements a batched, differentiable, standard pinhole camera
- """
-
- origin: torch.Tensor # [batch_size x 3]
- x: torch.Tensor # [batch_size x 3]
- y: torch.Tensor # [batch_size x 3]
- z: torch.Tensor # [batch_size x 3]
- width: int
- height: int
- x_fov: float
- y_fov: float
- shape: Tuple[int]
-
- def __post_init__(self):
- assert self.x.shape[0] == self.y.shape[0] == self.z.shape[0] == self.origin.shape[0]
- assert self.x.shape[1] == self.y.shape[1] == self.z.shape[1] == self.origin.shape[1] == 3
- assert len(self.x.shape) == len(self.y.shape) == len(self.z.shape) == len(self.origin.shape) == 2
-
- def resolution(self):
- return torch.from_numpy(np.array([self.width, self.height], dtype=np.float32))
-
- def fov(self):
- return torch.from_numpy(np.array([self.x_fov, self.y_fov], dtype=np.float32))
-
- def get_image_coords(self) -> torch.Tensor:
- """
- :return: coords of shape (width * height, 2)
- """
- pixel_indices = torch.arange(self.height * self.width)
- coords = torch.stack(
- [
- pixel_indices % self.width,
- torch.div(pixel_indices, self.width, rounding_mode="trunc"),
- ],
- axis=1,
- )
- return coords
-
- @property
- def camera_rays(self):
- batch_size, *inner_shape = self.shape
- inner_batch_size = int(np.prod(inner_shape))
-
- coords = self.get_image_coords()
- coords = torch.broadcast_to(coords.unsqueeze(0), [batch_size * inner_batch_size, *coords.shape])
- rays = self.get_camera_rays(coords)
-
- rays = rays.view(batch_size, inner_batch_size * self.height * self.width, 2, 3)
-
- return rays
-
- def get_camera_rays(self, coords: torch.Tensor) -> torch.Tensor:
- batch_size, *shape, n_coords = coords.shape
- assert n_coords == 2
- assert batch_size == self.origin.shape[0]
-
- flat = coords.view(batch_size, -1, 2)
-
- res = self.resolution()
- fov = self.fov()
-
- fracs = (flat.float() / (res - 1)) * 2 - 1
- fracs = fracs * torch.tan(fov / 2)
-
- fracs = fracs.view(batch_size, -1, 2)
- directions = (
- self.z.view(batch_size, 1, 3)
- + self.x.view(batch_size, 1, 3) * fracs[:, :, :1]
- + self.y.view(batch_size, 1, 3) * fracs[:, :, 1:]
- )
- directions = directions / directions.norm(dim=-1, keepdim=True)
- rays = torch.stack(
- [
- torch.broadcast_to(self.origin.view(batch_size, 1, 3), [batch_size, directions.shape[1], 3]),
- directions,
- ],
- dim=2,
- )
- return rays.view(batch_size, *shape, 2, 3)
-
- def resize_image(self, width: int, height: int) -> "DifferentiableProjectiveCamera":
- """
- Creates a new camera for the resized view assuming the aspect ratio does not change.
- """
- assert width * self.height == height * self.width, "The aspect ratio should not change."
- return DifferentiableProjectiveCamera(
- origin=self.origin,
- x=self.x,
- y=self.y,
- z=self.z,
- width=width,
- height=height,
- x_fov=self.x_fov,
- y_fov=self.y_fov,
- )
-
-
-def create_pan_cameras(size: int) -> DifferentiableProjectiveCamera:
- origins = []
- xs = []
- ys = []
- zs = []
- for theta in np.linspace(0, 2 * np.pi, num=20):
- z = np.array([np.sin(theta), np.cos(theta), -0.5])
- z /= np.sqrt(np.sum(z**2))
- origin = -z * 4
- x = np.array([np.cos(theta), -np.sin(theta), 0.0])
- y = np.cross(z, x)
- origins.append(origin)
- xs.append(x)
- ys.append(y)
- zs.append(z)
- return DifferentiableProjectiveCamera(
- origin=torch.from_numpy(np.stack(origins, axis=0)).float(),
- x=torch.from_numpy(np.stack(xs, axis=0)).float(),
- y=torch.from_numpy(np.stack(ys, axis=0)).float(),
- z=torch.from_numpy(np.stack(zs, axis=0)).float(),
- width=size,
- height=size,
- x_fov=0.7,
- y_fov=0.7,
- shape=(1, len(xs)),
- )
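As a quick illustration of the geometry above — assuming the helpers in this file are importable — the pan-camera constructor yields one batch of 20 views whose rays stack origins and directions along the penultimate axis:

```python
# Hedged sketch: build the 20-view pan cameras and inspect the generated rays.
cameras = create_pan_cameras(size=64)
rays = cameras.camera_rays                    # (batch, n_views * H * W, 2, 3)
origins, directions = rays[:, :, 0], rays[:, :, 1]
print(rays.shape)                             # expected torch.Size([1, 20 * 64 * 64, 2, 3])
```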
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/htc/htc_x101_32x4d_fpn_16x1_20e_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/htc/htc_x101_32x4d_fpn_16x1_20e_coco.py
deleted file mode 100644
index b9e5524a6d8352201ae24b57560437b93de2ae80..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/htc/htc_x101_32x4d_fpn_16x1_20e_coco.py
+++ /dev/null
@@ -1,18 +0,0 @@
-_base_ = './htc_r50_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://resnext101_32x4d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=32,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch'))
-data = dict(samples_per_gpu=1, workers_per_gpu=1)
-# learning policy
-lr_config = dict(step=[16, 19])
-runner = dict(type='EpochBasedRunner', max_epochs=20)
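Configs like this one are plain Python files that mmcv merges with their `_base_` parents. A typical, purely illustrative way to inspect the resolved values, assuming an older mmcv release that ships `Config`:

```python
# Hedged sketch: load and inspect the merged config (path and mmcv API assumed).
from mmcv import Config

cfg = Config.fromfile('configs/htc/htc_x101_32x4d_fpn_16x1_20e_coco.py')
print(cfg.model.backbone.type)    # 'ResNeXt', from the override above
print(cfg.runner.max_epochs)      # 20
```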
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/fpn_carafe.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/fpn_carafe.py
deleted file mode 100644
index 302e6576df9914e49166539108d6048b78c1fe71..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/fpn_carafe.py
+++ /dev/null
@@ -1,267 +0,0 @@
-import torch.nn as nn
-from mmcv.cnn import ConvModule, build_upsample_layer, xavier_init
-from mmcv.ops.carafe import CARAFEPack
-
-from ..builder import NECKS
-
-
-@NECKS.register_module()
-class FPN_CARAFE(nn.Module):
- """FPN_CARAFE is a more flexible implementation of FPN. It allows more
- choice for upsample methods during the top-down pathway.
-
- It can reproduce the performance of ICCV 2019 paper
- CARAFE: Content-Aware ReAssembly of FEatures
- Please refer to https://arxiv.org/abs/1905.02188 for more details.
-
- Args:
- in_channels (list[int]): Number of channels for each input feature map.
- out_channels (int): Output channels of feature pyramids.
- num_outs (int): Number of output stages.
- start_level (int): Start level of feature pyramids.
- (Default: 0)
- end_level (int): End level of feature pyramids.
- (Default: -1 indicates the last level).
- norm_cfg (dict): Dictionary to construct and config norm layer.
-        act_cfg (dict): Config dict for the activation layer in ConvModule
- (Default: None indicates w/o activation).
- order (dict): Order of components in ConvModule.
- upsample (str): Type of upsample layer.
- upsample_cfg (dict): Dictionary to construct and config upsample layer.
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- num_outs,
- start_level=0,
- end_level=-1,
- norm_cfg=None,
- act_cfg=None,
- order=('conv', 'norm', 'act'),
- upsample_cfg=dict(
- type='carafe',
- up_kernel=5,
- up_group=1,
- encoder_kernel=3,
- encoder_dilation=1)):
- super(FPN_CARAFE, self).__init__()
- assert isinstance(in_channels, list)
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.num_ins = len(in_channels)
- self.num_outs = num_outs
- self.norm_cfg = norm_cfg
- self.act_cfg = act_cfg
- self.with_bias = norm_cfg is None
- self.upsample_cfg = upsample_cfg.copy()
- self.upsample = self.upsample_cfg.get('type')
- self.relu = nn.ReLU(inplace=False)
-
- self.order = order
- assert order in [('conv', 'norm', 'act'), ('act', 'conv', 'norm')]
-
- assert self.upsample in [
- 'nearest', 'bilinear', 'deconv', 'pixel_shuffle', 'carafe', None
- ]
- if self.upsample in ['deconv', 'pixel_shuffle']:
- assert hasattr(
- self.upsample_cfg,
- 'upsample_kernel') and self.upsample_cfg.upsample_kernel > 0
- self.upsample_kernel = self.upsample_cfg.pop('upsample_kernel')
-
- if end_level == -1:
- self.backbone_end_level = self.num_ins
- assert num_outs >= self.num_ins - start_level
- else:
- # if end_level < inputs, no extra level is allowed
- self.backbone_end_level = end_level
- assert end_level <= len(in_channels)
- assert num_outs == end_level - start_level
- self.start_level = start_level
- self.end_level = end_level
-
- self.lateral_convs = nn.ModuleList()
- self.fpn_convs = nn.ModuleList()
- self.upsample_modules = nn.ModuleList()
-
- for i in range(self.start_level, self.backbone_end_level):
- l_conv = ConvModule(
- in_channels[i],
- out_channels,
- 1,
- norm_cfg=norm_cfg,
- bias=self.with_bias,
- act_cfg=act_cfg,
- inplace=False,
- order=self.order)
- fpn_conv = ConvModule(
- out_channels,
- out_channels,
- 3,
- padding=1,
- norm_cfg=self.norm_cfg,
- bias=self.with_bias,
- act_cfg=act_cfg,
- inplace=False,
- order=self.order)
- if i != self.backbone_end_level - 1:
- upsample_cfg_ = self.upsample_cfg.copy()
- if self.upsample == 'deconv':
- upsample_cfg_.update(
- in_channels=out_channels,
- out_channels=out_channels,
- kernel_size=self.upsample_kernel,
- stride=2,
- padding=(self.upsample_kernel - 1) // 2,
- output_padding=(self.upsample_kernel - 1) // 2)
- elif self.upsample == 'pixel_shuffle':
- upsample_cfg_.update(
- in_channels=out_channels,
- out_channels=out_channels,
- scale_factor=2,
- upsample_kernel=self.upsample_kernel)
- elif self.upsample == 'carafe':
- upsample_cfg_.update(channels=out_channels, scale_factor=2)
- else:
- # suppress warnings
- align_corners = (None
- if self.upsample == 'nearest' else False)
- upsample_cfg_.update(
- scale_factor=2,
- mode=self.upsample,
- align_corners=align_corners)
- upsample_module = build_upsample_layer(upsample_cfg_)
- self.upsample_modules.append(upsample_module)
- self.lateral_convs.append(l_conv)
- self.fpn_convs.append(fpn_conv)
-
- # add extra conv layers (e.g., RetinaNet)
- extra_out_levels = (
- num_outs - self.backbone_end_level + self.start_level)
- if extra_out_levels >= 1:
- for i in range(extra_out_levels):
- in_channels = (
- self.in_channels[self.backbone_end_level -
- 1] if i == 0 else out_channels)
- extra_l_conv = ConvModule(
- in_channels,
- out_channels,
- 3,
- stride=2,
- padding=1,
- norm_cfg=norm_cfg,
- bias=self.with_bias,
- act_cfg=act_cfg,
- inplace=False,
- order=self.order)
- if self.upsample == 'deconv':
- upsampler_cfg_ = dict(
- in_channels=out_channels,
- out_channels=out_channels,
- kernel_size=self.upsample_kernel,
- stride=2,
- padding=(self.upsample_kernel - 1) // 2,
- output_padding=(self.upsample_kernel - 1) // 2)
- elif self.upsample == 'pixel_shuffle':
- upsampler_cfg_ = dict(
- in_channels=out_channels,
- out_channels=out_channels,
- scale_factor=2,
- upsample_kernel=self.upsample_kernel)
- elif self.upsample == 'carafe':
- upsampler_cfg_ = dict(
- channels=out_channels,
- scale_factor=2,
- **self.upsample_cfg)
- else:
- # suppress warnings
- align_corners = (None
- if self.upsample == 'nearest' else False)
- upsampler_cfg_ = dict(
- scale_factor=2,
- mode=self.upsample,
- align_corners=align_corners)
- upsampler_cfg_['type'] = self.upsample
- upsample_module = build_upsample_layer(upsampler_cfg_)
- extra_fpn_conv = ConvModule(
- out_channels,
- out_channels,
- 3,
- padding=1,
- norm_cfg=self.norm_cfg,
- bias=self.with_bias,
- act_cfg=act_cfg,
- inplace=False,
- order=self.order)
- self.upsample_modules.append(upsample_module)
- self.fpn_convs.append(extra_fpn_conv)
- self.lateral_convs.append(extra_l_conv)
-
- # default init_weights for conv(msra) and norm in ConvModule
- def init_weights(self):
- """Initialize the weights of module."""
- for m in self.modules():
- if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
- xavier_init(m, distribution='uniform')
- for m in self.modules():
- if isinstance(m, CARAFEPack):
- m.init_weights()
-
- def slice_as(self, src, dst):
- """Slice ``src`` as ``dst``
-
- Note:
- ``src`` should have the same or larger size than ``dst``.
-
- Args:
- src (torch.Tensor): Tensors to be sliced.
- dst (torch.Tensor): ``src`` will be sliced to have the same
- size as ``dst``.
-
- Returns:
- torch.Tensor: Sliced tensor.
- """
- assert (src.size(2) >= dst.size(2)) and (src.size(3) >= dst.size(3))
- if src.size(2) == dst.size(2) and src.size(3) == dst.size(3):
- return src
- else:
- return src[:, :, :dst.size(2), :dst.size(3)]
-
- def tensor_add(self, a, b):
- """Add tensors ``a`` and ``b`` that might have different sizes."""
- if a.size() == b.size():
- c = a + b
- else:
- c = a + self.slice_as(b, a)
- return c
-
- def forward(self, inputs):
- """Forward function."""
- assert len(inputs) == len(self.in_channels)
-
- # build laterals
- laterals = []
- for i, lateral_conv in enumerate(self.lateral_convs):
- if i <= self.backbone_end_level - self.start_level:
- input = inputs[min(i + self.start_level, len(inputs) - 1)]
- else:
- input = laterals[-1]
- lateral = lateral_conv(input)
- laterals.append(lateral)
-
- # build top-down path
- for i in range(len(laterals) - 1, 0, -1):
- if self.upsample is not None:
- upsample_feat = self.upsample_modules[i - 1](laterals[i])
- else:
- upsample_feat = laterals[i]
- laterals[i - 1] = self.tensor_add(laterals[i - 1], upsample_feat)
-
- # build outputs
- num_conv_outs = len(self.fpn_convs)
- outs = []
- for i in range(num_conv_outs):
- out = self.fpn_convs[i](laterals[i])
- outs.append(out)
- return tuple(outs)
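To make the data flow of this neck concrete, here is a hedged sketch with dummy ResNet-style features. It swaps the CARAFE upsampler for plain nearest-neighbour upsampling so the sketch does not depend on the compiled CARAFE op; treat the shapes and the import path as assumptions.

```python
# Hedged sketch: run FPN_CARAFE on dummy multi-scale features (nearest upsampling
# is used instead of CARAFE so no compiled CUDA op is required).
import torch
from mmdet.models.necks.fpn_carafe import FPN_CARAFE  # import path assumed

neck = FPN_CARAFE(
    in_channels=[256, 512, 1024, 2048],
    out_channels=256,
    num_outs=5,
    upsample_cfg=dict(type='nearest'))
neck.init_weights()

feats = [torch.randn(1, c, s, s) for c, s in zip([256, 512, 1024, 2048], [64, 32, 16, 8])]
outs = neck(feats)
print([tuple(o.shape) for o in outs])  # five 256-channel levels, roughly 64 px down to 4 px
```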
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/images.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/images.py
deleted file mode 100644
index 350ea617267926b4f53f9fa0486d3e005f931be6..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/images.py
+++ /dev/null
@@ -1,68 +0,0 @@
-import os
-import time
-
-import requests
-from extensions.openai.errors import ServiceUnavailableError
-
-
-def generations(prompt: str, size: str, response_format: str, n: int):
- # Stable Diffusion callout wrapper for txt2img
-    # Low-effort implementation for compatibility. With only "prompt" being passed, and assuming DALL-E,
-    # the results will be limited and likely poor. SD has hundreds of models and dozens of settings.
-    # If you want high-quality, tailored results you should just use the Stable Diffusion API directly.
-    # It's too general an API to try and shape the result with specific tags like negative prompts
-    # or "masterpiece", etc. SD configuration is beyond the scope of this API.
-    # At this point I will not add the edits and variations endpoints (i.e. img2img) because they
-    # require changing the form data handling to accept multipart form data; also, properly supporting
-    # url return types will require file management and a web server for serving files... Perhaps later!
- base_model_size = 512 if 'SD_BASE_MODEL_SIZE' not in os.environ else int(os.environ.get('SD_BASE_MODEL_SIZE', 512))
- sd_defaults = {
- 'sampler_name': 'DPM++ 2M Karras', # vast improvement
- 'steps': 30,
- }
-
- width, height = [int(x) for x in size.split('x')] # ignore the restrictions on size
-
- # to hack on better generation, edit default payload.
- payload = {
- 'prompt': prompt, # ignore prompt limit of 1000 characters
- 'width': width,
- 'height': height,
- 'batch_size': n,
- }
- payload.update(sd_defaults)
-
- scale = min(width, height) / base_model_size
- if scale >= 1.2:
- # for better performance with the default size (1024), and larger res.
- scaler = {
- 'width': width // scale,
- 'height': height // scale,
- 'hr_scale': scale,
- 'enable_hr': True,
- 'hr_upscaler': 'Latent',
- 'denoising_strength': 0.68,
- }
- payload.update(scaler)
-
- resp = {
- 'created': int(time.time()),
- 'data': []
- }
- from extensions.openai.script import params
- # TODO: support SD_WEBUI_AUTH username:password pair.
- sd_url = f"{os.environ.get('SD_WEBUI_URL', params.get('sd_webui_url', ''))}/sdapi/v1/txt2img"
-
- response = requests.post(url=sd_url, json=payload)
- r = response.json()
- if response.status_code != 200 or 'images' not in r:
- print(r)
- raise ServiceUnavailableError(r.get('error', 'Unknown error calling Stable Diffusion'), code=response.status_code, internal_message=r.get('errors', None))
- # r['parameters']...
- for b64_json in r['images']:
- if response_format == 'b64_json':
- resp['data'].extend([{'b64_json': b64_json}])
- else:
- resp['data'].extend([{'url': f'data:image/png;base64,{b64_json}'}]) # yeah it's lazy. requests.get() will not work with this
-
- return resp
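A hedged example of how this wrapper is meant to be called. It assumes a Stable Diffusion WebUI instance is reachable and that the module's package-level imports (`extensions.openai.script`) resolve, which is only true inside the text-generation-webui environment.

```python
# Hypothetical call into the txt2img wrapper above; requires a running SD WebUI.
import os

os.environ.setdefault('SD_WEBUI_URL', 'http://127.0.0.1:7860')  # assumed local WebUI URL

resp = generations(prompt="a lighthouse at dawn", size="512x512",
                   response_format="b64_json", n=1)
print(resp['created'], len(resp['data']))  # creation timestamp and image count
```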
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/models/diffusion/__init__.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/models/diffusion/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Anonymous-sub/Rerender/README.md b/spaces/Anonymous-sub/Rerender/README.md
deleted file mode 100644
index 760355be129d7355d9b7c1b323fda088598fcacb..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Rerender
-emoji: ⚡
-colorFrom: green
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.44.4
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Asifpa6/emotion-analyzer-app/emotion_analysis.py b/spaces/Asifpa6/emotion-analyzer-app/emotion_analysis.py
deleted file mode 100644
index da10b692a9ecc2fc25e0e9dd6515e748235de76d..0000000000000000000000000000000000000000
--- a/spaces/Asifpa6/emotion-analyzer-app/emotion_analysis.py
+++ /dev/null
@@ -1,17 +0,0 @@
-
-from transformers import RobertaTokenizerFast, TFRobertaForSequenceClassification, pipeline
-
-tokenizer = RobertaTokenizerFast.from_pretrained("arpanghoshal/EmoRoBERTa")
-model = TFRobertaForSequenceClassification.from_pretrained("arpanghoshal/EmoRoBERTa")
-
-emotion = pipeline('sentiment-analysis',
- model='arpanghoshal/EmoRoBERTa')
-
-
-def get_emotion(text):
- emotion_labels = emotion(text)
- emotion_detail = [item['label'] for item in emotion_labels]
- print("The detected emotion is:", emotion_detail)
- confidence_score = str(round([item['score'] for item in emotion_labels][0]*100, 2)) + "%"
- print("The confidence score is:", confidence_score)
- return emotion_detail, confidence_score
\ No newline at end of file
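A quick sketch of the helper above in use, assuming the EmoRoBERTa weights download successfully and a TensorFlow backend is available:

```python
# Hedged usage sketch for get_emotion (model download and TF backend assumed).
labels, score = get_emotion("I just got accepted into my dream university!")
print(labels, score)  # e.g. a single label such as ['joy'] and a percentage string
```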
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/wheel.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/wheel.py
deleted file mode 100644
index e5e3f34ed81453ce759c6ade8b2def733e9063e2..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/wheel.py
+++ /dev/null
@@ -1,136 +0,0 @@
-"""Support functions for working with wheel files.
-"""
-
-import logging
-from email.message import Message
-from email.parser import Parser
-from typing import Tuple
-from zipfile import BadZipFile, ZipFile
-
-from pip._vendor.packaging.utils import canonicalize_name
-
-from pip._internal.exceptions import UnsupportedWheel
-
-VERSION_COMPATIBLE = (1, 0)
-
-
-logger = logging.getLogger(__name__)
-
-
-def parse_wheel(wheel_zip: ZipFile, name: str) -> Tuple[str, Message]:
- """Extract information from the provided wheel, ensuring it meets basic
- standards.
-
- Returns the name of the .dist-info directory and the parsed WHEEL metadata.
- """
- try:
- info_dir = wheel_dist_info_dir(wheel_zip, name)
- metadata = wheel_metadata(wheel_zip, info_dir)
- version = wheel_version(metadata)
- except UnsupportedWheel as e:
- raise UnsupportedWheel("{} has an invalid wheel, {}".format(name, str(e)))
-
- check_compatibility(version, name)
-
- return info_dir, metadata
-
-
-def wheel_dist_info_dir(source: ZipFile, name: str) -> str:
- """Returns the name of the contained .dist-info directory.
-
- Raises AssertionError or UnsupportedWheel if not found, >1 found, or
- it doesn't match the provided name.
- """
- # Zip file path separators must be /
- subdirs = {p.split("/", 1)[0] for p in source.namelist()}
-
- info_dirs = [s for s in subdirs if s.endswith(".dist-info")]
-
- if not info_dirs:
- raise UnsupportedWheel(".dist-info directory not found")
-
- if len(info_dirs) > 1:
- raise UnsupportedWheel(
- "multiple .dist-info directories found: {}".format(", ".join(info_dirs))
- )
-
- info_dir = info_dirs[0]
-
- info_dir_name = canonicalize_name(info_dir)
- canonical_name = canonicalize_name(name)
- if not info_dir_name.startswith(canonical_name):
- raise UnsupportedWheel(
- ".dist-info directory {!r} does not start with {!r}".format(
- info_dir, canonical_name
- )
- )
-
- return info_dir
-
-
-def read_wheel_metadata_file(source: ZipFile, path: str) -> bytes:
- try:
- return source.read(path)
- # BadZipFile for general corruption, KeyError for missing entry,
- # and RuntimeError for password-protected files
- except (BadZipFile, KeyError, RuntimeError) as e:
- raise UnsupportedWheel(f"could not read {path!r} file: {e!r}")
-
-
-def wheel_metadata(source: ZipFile, dist_info_dir: str) -> Message:
- """Return the WHEEL metadata of an extracted wheel, if possible.
- Otherwise, raise UnsupportedWheel.
- """
- path = f"{dist_info_dir}/WHEEL"
- # Zip file path separators must be /
- wheel_contents = read_wheel_metadata_file(source, path)
-
- try:
- wheel_text = wheel_contents.decode()
- except UnicodeDecodeError as e:
- raise UnsupportedWheel(f"error decoding {path!r}: {e!r}")
-
- # FeedParser (used by Parser) does not raise any exceptions. The returned
- # message may have .defects populated, but for backwards-compatibility we
- # currently ignore them.
- return Parser().parsestr(wheel_text)
-
-
-def wheel_version(wheel_data: Message) -> Tuple[int, ...]:
- """Given WHEEL metadata, return the parsed Wheel-Version.
- Otherwise, raise UnsupportedWheel.
- """
- version_text = wheel_data["Wheel-Version"]
- if version_text is None:
- raise UnsupportedWheel("WHEEL is missing Wheel-Version")
-
- version = version_text.strip()
-
- try:
- return tuple(map(int, version.split(".")))
- except ValueError:
- raise UnsupportedWheel(f"invalid Wheel-Version: {version!r}")
-
-
-def check_compatibility(version: Tuple[int, ...], name: str) -> None:
- """Raises errors or warns if called with an incompatible Wheel-Version.
-
-    pip should refuse to install a Wheel-Version that's a major series
-    ahead of what it's compatible with (e.g. 2.0 > 1.1); and warn when
-    installing a version that is only a minor version ahead (e.g. 1.2 > 1.1).
-
- version: a 2-tuple representing a Wheel-Version (Major, Minor)
- name: name of wheel or package to raise exception about
-
- :raises UnsupportedWheel: when an incompatible Wheel-Version is given
- """
- if version[0] > VERSION_COMPATIBLE[0]:
- raise UnsupportedWheel(
- "{}'s Wheel-Version ({}) is not compatible with this version "
- "of pip".format(name, ".".join(map(str, version)))
- )
- elif version > VERSION_COMPATIBLE:
- logger.warning(
- "Installing from a newer Wheel-Version (%s)",
- ".".join(map(str, version)),
- )
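
The helpers in the deleted module above follow one flow: locate the single `.dist-info` directory, read and parse `WHEEL`, then compare `Wheel-Version` against the supported major version. The sketch below mirrors that flow with only the standard library; the wheel path and the `wheel_version_of` helper are hypothetical, not part of pip's API.

```python
# Standalone sketch of the WHEEL-version check above (hypothetical wheel path).
from email.parser import Parser
from zipfile import ZipFile

SUPPORTED_MAJOR = 1  # corresponds to VERSION_COMPATIBLE = (1, 0) above


def wheel_version_of(path: str) -> tuple:
    """Return the Wheel-Version of a wheel file as a tuple of ints."""
    with ZipFile(path) as zf:
        # Zip entry paths always use "/" as the separator.
        info_dirs = {p.split("/", 1)[0] for p in zf.namelist()
                     if p.split("/", 1)[0].endswith(".dist-info")}
        if len(info_dirs) != 1:
            raise ValueError(f"expected exactly one .dist-info dir, got {sorted(info_dirs)}")
        wheel_text = zf.read(f"{info_dirs.pop()}/WHEEL").decode()
    metadata = Parser().parsestr(wheel_text)
    return tuple(int(part) for part in metadata["Wheel-Version"].strip().split("."))


version = wheel_version_of("example-1.0-py3-none-any.whl")  # hypothetical file
if version[0] > SUPPORTED_MAJOR:
    raise RuntimeError("Wheel-Version %s is newer than this tool supports"
                       % ".".join(map(str, version)))
```
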
diff --git a/spaces/Ayaka2022/anime-aesthetic-predict/README.md b/spaces/Ayaka2022/anime-aesthetic-predict/README.md
deleted file mode 100644
index fd7639570aaafef17ae7a59785b64feb60f136c1..0000000000000000000000000000000000000000
--- a/spaces/Ayaka2022/anime-aesthetic-predict/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Anime Aesthetic Predict
-emoji: ❤️🖼️
-colorFrom: purple
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.16.1
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: skytnt/anime-aesthetic-predict
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/lexers/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/lexers/__init__.py
deleted file mode 100644
index e75a05791e26fcbfa58dbfd4b149ffdb6f5e7159..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/lexers/__init__.py
+++ /dev/null
@@ -1,334 +0,0 @@
-"""
- pygments.lexers
- ~~~~~~~~~~~~~~~
-
- Pygments lexers.
-
- :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-import sys
-import types
-from fnmatch import fnmatch
-from os.path import basename
-
-from pip._vendor.pygments.lexers._mapping import LEXERS
-from pip._vendor.pygments.modeline import get_filetype_from_buffer
-from pip._vendor.pygments.plugin import find_plugin_lexers
-from pip._vendor.pygments.util import ClassNotFound, guess_decode
-
-COMPAT = {
- 'Python3Lexer': 'PythonLexer',
- 'Python3TracebackLexer': 'PythonTracebackLexer',
-}
-
-__all__ = ['get_lexer_by_name', 'get_lexer_for_filename', 'find_lexer_class',
- 'guess_lexer', 'load_lexer_from_file'] + list(LEXERS) + list(COMPAT)
-
-_lexer_cache = {}
-
-def _load_lexers(module_name):
- """Load a lexer (and all others in the module too)."""
- mod = __import__(module_name, None, None, ['__all__'])
- for lexer_name in mod.__all__:
- cls = getattr(mod, lexer_name)
- _lexer_cache[cls.name] = cls
-
-
-def get_all_lexers(plugins=True):
- """Return a generator of tuples in the form ``(name, aliases,
-    filenames, mimetypes)`` of all known lexers.
-
- If *plugins* is true (the default), plugin lexers supplied by entrypoints
- are also returned. Otherwise, only builtin ones are considered.
- """
- for item in LEXERS.values():
- yield item[1:]
- if plugins:
- for lexer in find_plugin_lexers():
- yield lexer.name, lexer.aliases, lexer.filenames, lexer.mimetypes
-
-
-def find_lexer_class(name):
- """Lookup a lexer class by name.
-
- Return None if not found.
- """
- if name in _lexer_cache:
- return _lexer_cache[name]
- # lookup builtin lexers
- for module_name, lname, aliases, _, _ in LEXERS.values():
- if name == lname:
- _load_lexers(module_name)
- return _lexer_cache[name]
- # continue with lexers from setuptools entrypoints
- for cls in find_plugin_lexers():
- if cls.name == name:
- return cls
-
-
-def find_lexer_class_by_name(_alias):
- """Lookup a lexer class by alias.
-
- Like `get_lexer_by_name`, but does not instantiate the class.
-
- .. versionadded:: 2.2
- """
- if not _alias:
- raise ClassNotFound('no lexer for alias %r found' % _alias)
- # lookup builtin lexers
- for module_name, name, aliases, _, _ in LEXERS.values():
- if _alias.lower() in aliases:
- if name not in _lexer_cache:
- _load_lexers(module_name)
- return _lexer_cache[name]
- # continue with lexers from setuptools entrypoints
- for cls in find_plugin_lexers():
- if _alias.lower() in cls.aliases:
- return cls
- raise ClassNotFound('no lexer for alias %r found' % _alias)
-
-
-def get_lexer_by_name(_alias, **options):
- """Get a lexer by an alias.
-
- Raises ClassNotFound if not found.
- """
- if not _alias:
- raise ClassNotFound('no lexer for alias %r found' % _alias)
-
- # lookup builtin lexers
- for module_name, name, aliases, _, _ in LEXERS.values():
- if _alias.lower() in aliases:
- if name not in _lexer_cache:
- _load_lexers(module_name)
- return _lexer_cache[name](**options)
- # continue with lexers from setuptools entrypoints
- for cls in find_plugin_lexers():
- if _alias.lower() in cls.aliases:
- return cls(**options)
- raise ClassNotFound('no lexer for alias %r found' % _alias)
-
-
-def load_lexer_from_file(filename, lexername="CustomLexer", **options):
- """Load a lexer from a file.
-
- This method expects a file located relative to the current working
- directory, which contains a Lexer class. By default, it expects the
-    Lexer to be named CustomLexer; you can specify your own class name
- as the second argument to this function.
-
- Users should be very careful with the input, because this method
- is equivalent to running eval on the input file.
-
- Raises ClassNotFound if there are any problems importing the Lexer.
-
- .. versionadded:: 2.2
- """
- try:
- # This empty dict will contain the namespace for the exec'd file
- custom_namespace = {}
- with open(filename, 'rb') as f:
- exec(f.read(), custom_namespace)
- # Retrieve the class `lexername` from that namespace
- if lexername not in custom_namespace:
- raise ClassNotFound('no valid %s class found in %s' %
- (lexername, filename))
- lexer_class = custom_namespace[lexername]
- # And finally instantiate it with the options
- return lexer_class(**options)
- except OSError as err:
- raise ClassNotFound('cannot read %s: %s' % (filename, err))
- except ClassNotFound:
- raise
- except Exception as err:
- raise ClassNotFound('error when loading custom lexer: %s' % err)
-
-
-def find_lexer_class_for_filename(_fn, code=None):
- """Get a lexer for a filename.
-
- If multiple lexers match the filename pattern, use ``analyse_text()`` to
- figure out which one is more appropriate.
-
- Returns None if not found.
- """
- matches = []
- fn = basename(_fn)
- for modname, name, _, filenames, _ in LEXERS.values():
- for filename in filenames:
- if fnmatch(fn, filename):
- if name not in _lexer_cache:
- _load_lexers(modname)
- matches.append((_lexer_cache[name], filename))
- for cls in find_plugin_lexers():
- for filename in cls.filenames:
- if fnmatch(fn, filename):
- matches.append((cls, filename))
-
- if isinstance(code, bytes):
- # decode it, since all analyse_text functions expect unicode
- code = guess_decode(code)
-
- def get_rating(info):
- cls, filename = info
- # explicit patterns get a bonus
- bonus = '*' not in filename and 0.5 or 0
- # The class _always_ defines analyse_text because it's included in
- # the Lexer class. The default implementation returns None which
- # gets turned into 0.0. Run scripts/detect_missing_analyse_text.py
- # to find lexers which need it overridden.
- if code:
- return cls.analyse_text(code) + bonus, cls.__name__
- return cls.priority + bonus, cls.__name__
-
- if matches:
- matches.sort(key=get_rating)
- # print "Possible lexers, after sort:", matches
- return matches[-1][0]
-
-
-def get_lexer_for_filename(_fn, code=None, **options):
- """Get a lexer for a filename.
-
- If multiple lexers match the filename pattern, use ``analyse_text()`` to
- figure out which one is more appropriate.
-
- Raises ClassNotFound if not found.
- """
- res = find_lexer_class_for_filename(_fn, code)
- if not res:
- raise ClassNotFound('no lexer for filename %r found' % _fn)
- return res(**options)
-
-
-def get_lexer_for_mimetype(_mime, **options):
- """Get a lexer for a mimetype.
-
- Raises ClassNotFound if not found.
- """
- for modname, name, _, _, mimetypes in LEXERS.values():
- if _mime in mimetypes:
- if name not in _lexer_cache:
- _load_lexers(modname)
- return _lexer_cache[name](**options)
- for cls in find_plugin_lexers():
- if _mime in cls.mimetypes:
- return cls(**options)
- raise ClassNotFound('no lexer for mimetype %r found' % _mime)
-
-
-def _iter_lexerclasses(plugins=True):
- """Return an iterator over all lexer classes."""
- for key in sorted(LEXERS):
- module_name, name = LEXERS[key][:2]
- if name not in _lexer_cache:
- _load_lexers(module_name)
- yield _lexer_cache[name]
- if plugins:
- yield from find_plugin_lexers()
-
-
-def guess_lexer_for_filename(_fn, _text, **options):
- """
-    Lookup all lexers that handle the given filename as a primary
-    (``filenames``) or secondary (``alias_filenames``) pattern, then run a
-    text analysis on those lexers and choose the best result.
-
-    usage::
-
-        >>> from pygments.lexers import guess_lexer_for_filename
-        >>> guess_lexer_for_filename('hello.html', '<%= @foo %>')
-        <pygments.lexers.templates.RhtmlLexer object at 0x...>
-        >>> guess_lexer_for_filename('hello.html', '<h1>{{ title|e }}</h1>')
-        <pygments.lexers.templates.HtmlDjangoLexer object at 0x...>
-        >>> guess_lexer_for_filename('style.css', 'a { color: <?= $link ?> }')
-        <pygments.lexers.templates.CssPhpLexer object at 0x...>
- """
- fn = basename(_fn)
- primary = {}
- matching_lexers = set()
- for lexer in _iter_lexerclasses():
- for filename in lexer.filenames:
- if fnmatch(fn, filename):
- matching_lexers.add(lexer)
- primary[lexer] = True
- for filename in lexer.alias_filenames:
- if fnmatch(fn, filename):
- matching_lexers.add(lexer)
- primary[lexer] = False
- if not matching_lexers:
- raise ClassNotFound('no lexer for filename %r found' % fn)
- if len(matching_lexers) == 1:
- return matching_lexers.pop()(**options)
- result = []
- for lexer in matching_lexers:
- rv = lexer.analyse_text(_text)
- if rv == 1.0:
- return lexer(**options)
- result.append((rv, lexer))
-
- def type_sort(t):
- # sort by:
- # - analyse score
- # - is primary filename pattern?
- # - priority
- # - last resort: class name
- return (t[0], primary[t[1]], t[1].priority, t[1].__name__)
- result.sort(key=type_sort)
-
- return result[-1][1](**options)
-
-
-def guess_lexer(_text, **options):
- """Guess a lexer by strong distinctions in the text (eg, shebang)."""
-
- if not isinstance(_text, str):
- inencoding = options.get('inencoding', options.get('encoding'))
- if inencoding:
- _text = _text.decode(inencoding or 'utf8')
- else:
- _text, _ = guess_decode(_text)
-
- # try to get a vim modeline first
- ft = get_filetype_from_buffer(_text)
-
- if ft is not None:
- try:
- return get_lexer_by_name(ft, **options)
- except ClassNotFound:
- pass
-
- best_lexer = [0.0, None]
- for lexer in _iter_lexerclasses():
- rv = lexer.analyse_text(_text)
- if rv == 1.0:
- return lexer(**options)
- if rv > best_lexer[0]:
- best_lexer[:] = (rv, lexer)
- if not best_lexer[0] or best_lexer[1] is None:
- raise ClassNotFound('no lexer matching the text found')
- return best_lexer[1](**options)
-
-
-class _automodule(types.ModuleType):
- """Automatically import lexers."""
-
- def __getattr__(self, name):
- info = LEXERS.get(name)
- if info:
- _load_lexers(info[0])
- cls = _lexer_cache[info[1]]
- setattr(self, name, cls)
- return cls
- if name in COMPAT:
- return getattr(self, COMPAT[name])
- raise AttributeError(name)
-
-
-oldmod = sys.modules[__name__]
-newmod = _automodule(__name__)
-newmod.__dict__.update(oldmod.__dict__)
-sys.modules[__name__] = newmod
-del newmod.newmod, newmod.oldmod, newmod.sys, newmod.types
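
For orientation, here is a short usage sketch of the lookup helpers defined in the module above. The vendored import path `pip._vendor.pygments` is assumed for this context; the standalone distribution exposes the same functions from plain `pygments.lexers`. The printed names are what the lookups are expected to resolve to, not guaranteed output.

```python
# Usage sketch for the lexer lookup helpers above (vendored import path assumed).
from pip._vendor.pygments.lexers import (
    get_lexer_by_name,
    get_lexer_for_filename,
    guess_lexer,
)

print(get_lexer_by_name("python").name)         # alias lookup -> "Python"
print(get_lexer_for_filename("setup.py").name)  # filename pattern lookup (*.py)
print(guess_lexer("#!/usr/bin/env python\nprint('hi')\n").name)  # analyse_text() scoring
```
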
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pyparsing/diagram/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pyparsing/diagram/__init__.py
deleted file mode 100644
index 1506d66bf4e93afb60ad46c23f234b31c46b3a7e..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pyparsing/diagram/__init__.py
+++ /dev/null
@@ -1,642 +0,0 @@
-import railroad
-from pip._vendor import pyparsing
-import typing
-from typing import (
- List,
- NamedTuple,
- Generic,
- TypeVar,
- Dict,
- Callable,
- Set,
- Iterable,
-)
-from jinja2 import Template
-from io import StringIO
-import inspect
-
-
-jinja2_template_source = """\
-<!DOCTYPE html>
-<html>
-<head>
-    {% if not head %}
-        <style type="text/css">
-            .railroad-heading {
-                font-family: monospace;
-            }
-        </style>
-    {% else %}
-        {{ head | safe }}
-    {% endif %}
-</head>
-<body>
-{{ body | safe }}
-{% for diagram in diagrams %}
-    <div class="railroad-group">
-        <h1 class="railroad-heading">{{ diagram.title }}</h1>
-        <div class="railroad-description">{{ diagram.text }}</div>
-        <div class="railroad-svg">
-            {{ diagram.svg }}
-        </div>
-    </div>
-{% endfor %}
-</body>
-</html>
-"""
-
-template = Template(jinja2_template_source)
-
-# Note: ideally this would be a dataclass, but we're supporting Python 3.5+ so we can't do this yet
-NamedDiagram = NamedTuple(
- "NamedDiagram",
- [("name", str), ("diagram", typing.Optional[railroad.DiagramItem]), ("index", int)],
-)
-"""
-A simple structure for associating a name with a railroad diagram
-"""
-
-T = TypeVar("T")
-
-
-class EachItem(railroad.Group):
- """
- Custom railroad item to compose a:
- - Group containing a
- - OneOrMore containing a
- - Choice of the elements in the Each
- with the group label indicating that all must be matched
- """
-
- all_label = "[ALL]"
-
- def __init__(self, *items):
- choice_item = railroad.Choice(len(items) - 1, *items)
- one_or_more_item = railroad.OneOrMore(item=choice_item)
- super().__init__(one_or_more_item, label=self.all_label)
-
-
-class AnnotatedItem(railroad.Group):
- """
- Simple subclass of Group that creates an annotation label
- """
-
- def __init__(self, label: str, item):
- super().__init__(item=item, label="[{}]".format(label) if label else label)
-
-
-class EditablePartial(Generic[T]):
- """
- Acts like a functools.partial, but can be edited. In other words, it represents a type that hasn't yet been
- constructed.
- """
-
- # We need this here because the railroad constructors actually transform the data, so can't be called until the
- # entire tree is assembled
-
- def __init__(self, func: Callable[..., T], args: list, kwargs: dict):
- self.func = func
- self.args = args
- self.kwargs = kwargs
-
- @classmethod
- def from_call(cls, func: Callable[..., T], *args, **kwargs) -> "EditablePartial[T]":
- """
- If you call this function in the same way that you would call the constructor, it will store the arguments
- as you expect. For example EditablePartial.from_call(Fraction, 1, 3)() == Fraction(1, 3)
- """
- return EditablePartial(func=func, args=list(args), kwargs=kwargs)
-
- @property
- def name(self):
- return self.kwargs["name"]
-
- def __call__(self) -> T:
- """
- Evaluate the partial and return the result
- """
- args = self.args.copy()
- kwargs = self.kwargs.copy()
-
- # This is a helpful hack to allow you to specify varargs parameters (e.g. *args) as keyword args (e.g.
- # args=['list', 'of', 'things'])
- arg_spec = inspect.getfullargspec(self.func)
- if arg_spec.varargs in self.kwargs:
- args += kwargs.pop(arg_spec.varargs)
-
- return self.func(*args, **kwargs)
-
-
-def railroad_to_html(diagrams: List[NamedDiagram], **kwargs) -> str:
- """
- Given a list of NamedDiagram, produce a single HTML string that visualises those diagrams
-    :param kwargs: kwargs to be passed in to the template
- """
- data = []
- for diagram in diagrams:
- if diagram.diagram is None:
- continue
- io = StringIO()
- diagram.diagram.writeSvg(io.write)
- title = diagram.name
- if diagram.index == 0:
- title += " (root)"
- data.append({"title": title, "text": "", "svg": io.getvalue()})
-
- return template.render(diagrams=data, **kwargs)
-
-
-def resolve_partial(partial: "EditablePartial[T]") -> T:
- """
- Recursively resolves a collection of Partials into whatever type they are
- """
- if isinstance(partial, EditablePartial):
- partial.args = resolve_partial(partial.args)
- partial.kwargs = resolve_partial(partial.kwargs)
- return partial()
- elif isinstance(partial, list):
- return [resolve_partial(x) for x in partial]
- elif isinstance(partial, dict):
- return {key: resolve_partial(x) for key, x in partial.items()}
- else:
- return partial
-
-
-def to_railroad(
- element: pyparsing.ParserElement,
- diagram_kwargs: typing.Optional[dict] = None,
- vertical: int = 3,
- show_results_names: bool = False,
- show_groups: bool = False,
-) -> List[NamedDiagram]:
- """
- Convert a pyparsing element tree into a list of diagrams. This is the recommended entrypoint to diagram
- creation if you want to access the Railroad tree before it is converted to HTML
- :param element: base element of the parser being diagrammed
- :param diagram_kwargs: kwargs to pass to the Diagram() constructor
- :param vertical: (optional) - int - limit at which number of alternatives should be
- shown vertically instead of horizontally
-    :param show_results_names: bool to indicate whether results name annotations should be
-        included in the diagram
-    :param show_groups: bool to indicate whether groups should be highlighted with an unlabeled
-        surrounding box
- """
- # Convert the whole tree underneath the root
- lookup = ConverterState(diagram_kwargs=diagram_kwargs or {})
- _to_diagram_element(
- element,
- lookup=lookup,
- parent=None,
- vertical=vertical,
- show_results_names=show_results_names,
- show_groups=show_groups,
- )
-
- root_id = id(element)
- # Convert the root if it hasn't been already
- if root_id in lookup:
- if not element.customName:
- lookup[root_id].name = ""
- lookup[root_id].mark_for_extraction(root_id, lookup, force=True)
-
- # Now that we're finished, we can convert from intermediate structures into Railroad elements
- diags = list(lookup.diagrams.values())
- if len(diags) > 1:
- # collapse out duplicate diags with the same name
- seen = set()
- deduped_diags = []
- for d in diags:
- # don't extract SkipTo elements, they are uninformative as subdiagrams
- if d.name == "...":
- continue
- if d.name is not None and d.name not in seen:
- seen.add(d.name)
- deduped_diags.append(d)
- resolved = [resolve_partial(partial) for partial in deduped_diags]
- else:
- # special case - if just one diagram, always display it, even if
- # it has no name
- resolved = [resolve_partial(partial) for partial in diags]
- return sorted(resolved, key=lambda diag: diag.index)
-
-
-def _should_vertical(
- specification: int, exprs: Iterable[pyparsing.ParserElement]
-) -> bool:
- """
- Returns true if we should return a vertical list of elements
- """
- if specification is None:
- return False
- else:
- return len(_visible_exprs(exprs)) >= specification
-
-
-class ElementState:
- """
- State recorded for an individual pyparsing Element
- """
-
- # Note: this should be a dataclass, but we have to support Python 3.5
- def __init__(
- self,
- element: pyparsing.ParserElement,
- converted: EditablePartial,
- parent: EditablePartial,
- number: int,
- name: str = None,
- parent_index: typing.Optional[int] = None,
- ):
- #: The pyparsing element that this represents
- self.element: pyparsing.ParserElement = element
- #: The name of the element
- self.name: typing.Optional[str] = name
- #: The output Railroad element in an unconverted state
- self.converted: EditablePartial = converted
- #: The parent Railroad element, which we store so that we can extract this if it's duplicated
- self.parent: EditablePartial = parent
- #: The order in which we found this element, used for sorting diagrams if this is extracted into a diagram
- self.number: int = number
- #: The index of this inside its parent
- self.parent_index: typing.Optional[int] = parent_index
- #: If true, we should extract this out into a subdiagram
- self.extract: bool = False
- #: If true, all of this element's children have been filled out
- self.complete: bool = False
-
- def mark_for_extraction(
- self, el_id: int, state: "ConverterState", name: str = None, force: bool = False
- ):
- """
- Called when this instance has been seen twice, and thus should eventually be extracted into a sub-diagram
- :param el_id: id of the element
- :param state: element/diagram state tracker
- :param name: name to use for this element's text
- :param force: If true, force extraction now, regardless of the state of this. Only useful for extracting the
- root element when we know we're finished
- """
- self.extract = True
-
- # Set the name
- if not self.name:
- if name:
- # Allow forcing a custom name
- self.name = name
- elif self.element.customName:
- self.name = self.element.customName
- else:
- self.name = ""
-
- # Just because this is marked for extraction doesn't mean we can do it yet. We may have to wait for children
- # to be added
- # Also, if this is just a string literal etc, don't bother extracting it
- if force or (self.complete and _worth_extracting(self.element)):
- state.extract_into_diagram(el_id)
-
-
-class ConverterState:
- """
- Stores some state that persists between recursions into the element tree
- """
-
- def __init__(self, diagram_kwargs: typing.Optional[dict] = None):
- #: A dictionary mapping ParserElements to state relating to them
- self._element_diagram_states: Dict[int, ElementState] = {}
- #: A dictionary mapping ParserElement IDs to subdiagrams generated from them
- self.diagrams: Dict[int, EditablePartial[NamedDiagram]] = {}
- #: The index of the next unnamed element
- self.unnamed_index: int = 1
- #: The index of the next element. This is used for sorting
- self.index: int = 0
- #: Shared kwargs that are used to customize the construction of diagrams
- self.diagram_kwargs: dict = diagram_kwargs or {}
- self.extracted_diagram_names: Set[str] = set()
-
- def __setitem__(self, key: int, value: ElementState):
- self._element_diagram_states[key] = value
-
- def __getitem__(self, key: int) -> ElementState:
- return self._element_diagram_states[key]
-
- def __delitem__(self, key: int):
- del self._element_diagram_states[key]
-
- def __contains__(self, key: int):
- return key in self._element_diagram_states
-
- def generate_unnamed(self) -> int:
- """
- Generate a number used in the name of an otherwise unnamed diagram
- """
- self.unnamed_index += 1
- return self.unnamed_index
-
- def generate_index(self) -> int:
- """
- Generate a number used to index a diagram
- """
- self.index += 1
- return self.index
-
- def extract_into_diagram(self, el_id: int):
- """
- Used when we encounter the same token twice in the same tree. When this
- happens, we replace all instances of that token with a terminal, and
- create a new subdiagram for the token
- """
- position = self[el_id]
-
- # Replace the original definition of this element with a regular block
- if position.parent:
- ret = EditablePartial.from_call(railroad.NonTerminal, text=position.name)
- if "item" in position.parent.kwargs:
- position.parent.kwargs["item"] = ret
- elif "items" in position.parent.kwargs:
- position.parent.kwargs["items"][position.parent_index] = ret
-
- # If the element we're extracting is a group, skip to its content but keep the title
- if position.converted.func == railroad.Group:
- content = position.converted.kwargs["item"]
- else:
- content = position.converted
-
- self.diagrams[el_id] = EditablePartial.from_call(
- NamedDiagram,
- name=position.name,
- diagram=EditablePartial.from_call(
- railroad.Diagram, content, **self.diagram_kwargs
- ),
- index=position.number,
- )
-
- del self[el_id]
-
-
-def _worth_extracting(element: pyparsing.ParserElement) -> bool:
- """
- Returns true if this element is worth having its own sub-diagram. Simply, if any of its children
-    themselves have children, then it's complex enough to extract
- """
- children = element.recurse()
- return any(child.recurse() for child in children)
-
-
-def _apply_diagram_item_enhancements(fn):
- """
- decorator to ensure enhancements to a diagram item (such as results name annotations)
- get applied on return from _to_diagram_element (we do this since there are several
- returns in _to_diagram_element)
- """
-
- def _inner(
- element: pyparsing.ParserElement,
- parent: typing.Optional[EditablePartial],
- lookup: ConverterState = None,
- vertical: int = None,
- index: int = 0,
- name_hint: str = None,
- show_results_names: bool = False,
- show_groups: bool = False,
- ) -> typing.Optional[EditablePartial]:
-
- ret = fn(
- element,
- parent,
- lookup,
- vertical,
- index,
- name_hint,
- show_results_names,
- show_groups,
- )
-
- # apply annotation for results name, if present
- if show_results_names and ret is not None:
- element_results_name = element.resultsName
- if element_results_name:
- # add "*" to indicate if this is a "list all results" name
- element_results_name += "" if element.modalResults else "*"
- ret = EditablePartial.from_call(
- railroad.Group, item=ret, label=element_results_name
- )
-
- return ret
-
- return _inner
-
-
-def _visible_exprs(exprs: Iterable[pyparsing.ParserElement]):
- non_diagramming_exprs = (
- pyparsing.ParseElementEnhance,
- pyparsing.PositionToken,
- pyparsing.And._ErrorStop,
- )
- return [
- e
- for e in exprs
- if not (e.customName or e.resultsName or isinstance(e, non_diagramming_exprs))
- ]
-
-
-@_apply_diagram_item_enhancements
-def _to_diagram_element(
- element: pyparsing.ParserElement,
- parent: typing.Optional[EditablePartial],
- lookup: ConverterState = None,
- vertical: int = None,
- index: int = 0,
- name_hint: str = None,
- show_results_names: bool = False,
- show_groups: bool = False,
-) -> typing.Optional[EditablePartial]:
- """
- Recursively converts a PyParsing Element to a railroad Element
- :param lookup: The shared converter state that keeps track of useful things
- :param index: The index of this element within the parent
- :param parent: The parent of this element in the output tree
- :param vertical: Controls at what point we make a list of elements vertical. If this is an integer (the default),
- it sets the threshold of the number of items before we go vertical. If True, always go vertical, if False, never
- do so
- :param name_hint: If provided, this will override the generated name
- :param show_results_names: bool flag indicating whether to add annotations for results names
- :returns: The converted version of the input element, but as a Partial that hasn't yet been constructed
- :param show_groups: bool flag indicating whether to show groups using bounding box
- """
- exprs = element.recurse()
- name = name_hint or element.customName or element.__class__.__name__
-
- # Python's id() is used to provide a unique identifier for elements
- el_id = id(element)
-
- element_results_name = element.resultsName
-
- # Here we basically bypass processing certain wrapper elements if they contribute nothing to the diagram
- if not element.customName:
- if isinstance(
- element,
- (
- # pyparsing.TokenConverter,
- # pyparsing.Forward,
- pyparsing.Located,
- ),
- ):
- # However, if this element has a useful custom name, and its child does not, we can pass it on to the child
- if exprs:
- if not exprs[0].customName:
- propagated_name = name
- else:
- propagated_name = None
-
- return _to_diagram_element(
- element.expr,
- parent=parent,
- lookup=lookup,
- vertical=vertical,
- index=index,
- name_hint=propagated_name,
- show_results_names=show_results_names,
- show_groups=show_groups,
- )
-
-    # If the element isn't worth extracting, we always treat it as the first time we see it
- if _worth_extracting(element):
- if el_id in lookup:
- # If we've seen this element exactly once before, we are only just now finding out that it's a duplicate,
- # so we have to extract it into a new diagram.
- looked_up = lookup[el_id]
- looked_up.mark_for_extraction(el_id, lookup, name=name_hint)
- ret = EditablePartial.from_call(railroad.NonTerminal, text=looked_up.name)
- return ret
-
- elif el_id in lookup.diagrams:
- # If we have seen the element at least twice before, and have already extracted it into a subdiagram, we
- # just put in a marker element that refers to the sub-diagram
- ret = EditablePartial.from_call(
- railroad.NonTerminal, text=lookup.diagrams[el_id].kwargs["name"]
- )
- return ret
-
- # Recursively convert child elements
- # Here we find the most relevant Railroad element for matching pyparsing Element
- # We use ``items=[]`` here to hold the place for where the child elements will go once created
- if isinstance(element, pyparsing.And):
- # detect And's created with ``expr*N`` notation - for these use a OneOrMore with a repeat
- # (all will have the same name, and resultsName)
- if not exprs:
- return None
- if len(set((e.name, e.resultsName) for e in exprs)) == 1:
- ret = EditablePartial.from_call(
- railroad.OneOrMore, item="", repeat=str(len(exprs))
- )
- elif _should_vertical(vertical, exprs):
- ret = EditablePartial.from_call(railroad.Stack, items=[])
- else:
- ret = EditablePartial.from_call(railroad.Sequence, items=[])
- elif isinstance(element, (pyparsing.Or, pyparsing.MatchFirst)):
- if not exprs:
- return None
- if _should_vertical(vertical, exprs):
- ret = EditablePartial.from_call(railroad.Choice, 0, items=[])
- else:
- ret = EditablePartial.from_call(railroad.HorizontalChoice, items=[])
- elif isinstance(element, pyparsing.Each):
- if not exprs:
- return None
- ret = EditablePartial.from_call(EachItem, items=[])
- elif isinstance(element, pyparsing.NotAny):
- ret = EditablePartial.from_call(AnnotatedItem, label="NOT", item="")
- elif isinstance(element, pyparsing.FollowedBy):
- ret = EditablePartial.from_call(AnnotatedItem, label="LOOKAHEAD", item="")
- elif isinstance(element, pyparsing.PrecededBy):
- ret = EditablePartial.from_call(AnnotatedItem, label="LOOKBEHIND", item="")
- elif isinstance(element, pyparsing.Group):
- if show_groups:
- ret = EditablePartial.from_call(AnnotatedItem, label="", item="")
- else:
- ret = EditablePartial.from_call(railroad.Group, label="", item="")
- elif isinstance(element, pyparsing.TokenConverter):
- ret = EditablePartial.from_call(
- AnnotatedItem, label=type(element).__name__.lower(), item=""
- )
- elif isinstance(element, pyparsing.Opt):
- ret = EditablePartial.from_call(railroad.Optional, item="")
- elif isinstance(element, pyparsing.OneOrMore):
- ret = EditablePartial.from_call(railroad.OneOrMore, item="")
- elif isinstance(element, pyparsing.ZeroOrMore):
- ret = EditablePartial.from_call(railroad.ZeroOrMore, item="")
- elif isinstance(element, pyparsing.Group):
- ret = EditablePartial.from_call(
- railroad.Group, item=None, label=element_results_name
- )
- elif isinstance(element, pyparsing.Empty) and not element.customName:
- # Skip unnamed "Empty" elements
- ret = None
- elif len(exprs) > 1:
- ret = EditablePartial.from_call(railroad.Sequence, items=[])
- elif len(exprs) > 0 and not element_results_name:
- ret = EditablePartial.from_call(railroad.Group, item="", label=name)
- else:
- terminal = EditablePartial.from_call(railroad.Terminal, element.defaultName)
- ret = terminal
-
- if ret is None:
- return
-
- # Indicate this element's position in the tree so we can extract it if necessary
- lookup[el_id] = ElementState(
- element=element,
- converted=ret,
- parent=parent,
- parent_index=index,
- number=lookup.generate_index(),
- )
- if element.customName:
- lookup[el_id].mark_for_extraction(el_id, lookup, element.customName)
-
- i = 0
- for expr in exprs:
- # Add a placeholder index in case we have to extract the child before we even add it to the parent
- if "items" in ret.kwargs:
- ret.kwargs["items"].insert(i, None)
-
- item = _to_diagram_element(
- expr,
- parent=ret,
- lookup=lookup,
- vertical=vertical,
- index=i,
- show_results_names=show_results_names,
- show_groups=show_groups,
- )
-
- # Some elements don't need to be shown in the diagram
- if item is not None:
- if "item" in ret.kwargs:
- ret.kwargs["item"] = item
- elif "items" in ret.kwargs:
- # If we've already extracted the child, don't touch this index, since it's occupied by a nonterminal
- ret.kwargs["items"][i] = item
- i += 1
- elif "items" in ret.kwargs:
- # If we're supposed to skip this element, remove it from the parent
- del ret.kwargs["items"][i]
-
-    # If all this item's children are None, skip this item
- if ret and (
- ("items" in ret.kwargs and len(ret.kwargs["items"]) == 0)
- or ("item" in ret.kwargs and ret.kwargs["item"] is None)
- ):
- ret = EditablePartial.from_call(railroad.Terminal, name)
-
- # Mark this element as "complete", ie it has all of its children
- if el_id in lookup:
- lookup[el_id].complete = True
-
- if el_id in lookup and lookup[el_id].extract and lookup[el_id].complete:
- lookup.extract_into_diagram(el_id)
- if ret is not None:
- ret = EditablePartial.from_call(
- railroad.NonTerminal, text=lookup.diagrams[el_id].kwargs["name"]
- )
-
- return ret
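
In practice this module is rarely driven by hand; the standalone `pyparsing` package wraps `to_railroad()`/`railroad_to_html()` behind `ParserElement.create_diagram()`, provided the `railroad-diagrams` and `jinja2` dependencies are installed. A minimal sketch of that entry point (standalone pyparsing assumed, not the pip-vendored copy; the output filename is arbitrary):

```python
# Sketch: generate railroad diagrams for a small grammar via create_diagram(),
# which internally calls to_railroad() and railroad_to_html() as defined above.
import pyparsing as pp

integer = pp.Word(pp.nums).set_name("integer")
arith_expr = pp.infix_notation(
    integer,
    [
        (pp.one_of("* /"), 2, pp.OpAssoc.LEFT),
        (pp.one_of("+ -"), 2, pp.OpAssoc.LEFT),
    ],
).set_name("arith_expr")

# Writes an HTML file with one railroad diagram per named sub-expression.
arith_expr.create_diagram("arith_expr_diagram.html", show_results_names=True)
```
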
diff --git a/spaces/Blessin/yes-and-improv-game/app.py b/spaces/Blessin/yes-and-improv-game/app.py
deleted file mode 100644
index 9bdf81daece9439d99cd8a83bfaf1787c7eb96aa..0000000000000000000000000000000000000000
--- a/spaces/Blessin/yes-and-improv-game/app.py
+++ /dev/null
@@ -1,50 +0,0 @@
-import gradio as gr
-import openai
-
-# Function to extract the last statement from the input
-def extract_last_statement(input_text):
- lines = input_text.strip().split('\n')
- last_line = lines[-1]
- last_statement = last_line.split(':')[-1].strip() if ':' in last_line else last_line
- return last_statement
-
-def yes_and_game(api_key, user_input):
- # Initialize OpenAI API client
- openai.api_key = api_key
-
- # Extract the last statement from the user input
- last_statement = extract_last_statement(user_input)
-
- # Create the prompt for GPT
- gpt_prompt = (f"Play the Yes, And improv game. "
- f"You will start your response with 'Yes, and'. "
- f"Keep your responses short. Not more than one statement. Responses can be funny or absurd. "
- f"The input statement can be a single line or a multi line statement.\n"
- f"Yes, And {last_statement}\n"
- f"Yes, And ")
-
- # Generate GPT response
- gpt_response = openai.Completion.create(
- engine="text-davinci-002",
- prompt=gpt_prompt,
- max_tokens=20,
- temperature=0.9 # Increased temperature for more randomness
- )['choices'][0]['text'].strip()
-
- # Format and return the result
- result = f"{last_statement}\nYes, And {gpt_response}"
- return result
-
-iface = gr.Interface(
- fn=yes_and_game,
- inputs=[
- gr.Textbox(label="OpenAI API Key", type="password"),
- gr.Textbox(lines=5, label="Statement"),
- ],
-    outputs=gr.Textbox(label="Game Transcript"),  # gr.Textbox takes no live/flagging arguments; live updates and flagging are configured on gr.Interface (live=..., allow_flagging=...)
- title="The Yes, And Game" # Adding title here
-)
-
-
-# launch() starts the Gradio app; share=True additionally creates a temporary public link
-iface.launch(share=True)
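
The transcript parsing in the app above can be checked in isolation: `extract_last_statement()` keeps only the text after the final `speaker:` prefix of a multi-line input. A quick sketch, assuming the helper from the app is in scope; the toy transcript is made up.

```python
# Quick check of extract_last_statement() from the app above (toy transcript).
transcript = (
    "A: Let's open a bakery on the moon\n"
    "B: Yes, and we'll deliver the croissants by rover"
)
print(extract_last_statement(transcript))
# -> "Yes, and we'll deliver the croissants by rover"
```
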
diff --git a/spaces/CM-15/NLP-demo/README.md b/spaces/CM-15/NLP-demo/README.md
deleted file mode 100644
index 47a4db86d17ca2f7f73282888fcb4bebbcfb1efa..0000000000000000000000000000000000000000
--- a/spaces/CM-15/NLP-demo/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: NLP Demo
-emoji: 😻
-colorFrom: gray
-colorTo: green
-sdk: gradio
-sdk_version: 3.9.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CVPR/CVPR2022_papers/style.css b/spaces/CVPR/CVPR2022_papers/style.css
deleted file mode 100644
index e2b871457d13980ddfbbc35bf5da02a75ece292e..0000000000000000000000000000000000000000
--- a/spaces/CVPR/CVPR2022_papers/style.css
+++ /dev/null
@@ -1,22 +0,0 @@
-h1 {
- text-align: center;
-}
-table a {
- background-color: transparent;
- color: #58a6ff;
- text-decoration: none;
-}
-a:active,
-a:hover {
- outline-width: 0;
-}
-a:hover {
- text-decoration: underline;
-}
-table, th, td {
- border: 1px solid;
-}
-img#visitor-badge {
- display: block;
- margin: auto;
-}
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mmnasnet/adapter.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mmnasnet/adapter.py
deleted file mode 100644
index 307f85b7236767009b378d3c677c1fd17b1b3e2c..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mmnasnet/adapter.py
+++ /dev/null
@@ -1,120 +0,0 @@
-# --------------------------------------------------------
-# OpenVQA
-# Written by Zhenwei Shao https://github.com/ParadoxZW
-# --------------------------------------------------------
-
-import torch.nn as nn
-import torch
-from openvqa.core.base_dataset import BaseAdapter
-from openvqa.utils.make_mask import make_mask
-
-
-class Adapter(BaseAdapter):
- def __init__(self, __C):
- super(Adapter, self).__init__(__C)
- self.__C = __C
-
-
- def relation_embedding(self, f_g):
- x_min, y_min, x_max, y_max = torch.chunk(f_g, 4, dim=2) # [bs, n_obj, 1]
-
- cx = (x_min + x_max) * 0.5 # [bs, n_obj, 1]
- cy = (y_min + y_max) * 0.5 # [bs, n_obj, 1]
- w = (x_max - x_min) + 1. # [bs, n_obj, 1]
- h = (y_max - y_min) + 1. # [bs, n_obj, 1]
-
- delta_x = cx - cx.transpose(-1, -2)
- delta_x = torch.clamp(torch.abs(delta_x / w), min=1e-3)
- delta_x = torch.log(delta_x) # [bs, n_obj, n_obj]
-
- delta_y = cy - cy.transpose(-1, -2)
- delta_y = torch.clamp(torch.abs(delta_y / h), min=1e-3)
- delta_y = torch.log(delta_y) # [bs, n_obj, n_obj]
-
- delta_w = torch.log(w / w.transpose(-1, -2)) # [bs, n_obj, n_obj]
- delta_h = torch.log(h / h.transpose(-1, -2)) # [bs, n_obj, n_obj]
- size = delta_h.size()
-
- delta_x = delta_x.view(size[0], size[1], size[2], 1)
- delta_y = delta_y.view(size[0], size[1], size[2], 1)
- delta_w = delta_w.view(size[0], size[1], size[2], 1)
- delta_h = delta_h.view(size[0], size[1], size[2], 1) # [bs, n_obj, n_obj, 1]
- position_mat = torch.cat(
- (delta_x, delta_y, delta_w, delta_h), -1) # [bs, n_obj, n_obj, 4]
-
- return position_mat
-
- def vqa_init(self, __C):
- imgfeat_linear_size = __C.FEAT_SIZE['vqa']['FRCN_FEAT_SIZE'][1]
- if __C.USE_BBOX_FEAT:
- self.bbox_linear = nn.Linear(5, __C.BBOXFEAT_EMB_SIZE)
- imgfeat_linear_size += __C.BBOXFEAT_EMB_SIZE
- self.frcn_linear = nn.Linear(imgfeat_linear_size, __C.HIDDEN_SIZE)
-
-
- def gqa_init(self, __C):
- imgfeat_linear_size = __C.FEAT_SIZE['gqa']['FRCN_FEAT_SIZE'][1]
- if __C.USE_BBOX_FEAT:
- self.bbox_linear = nn.Linear(5, __C.BBOXFEAT_EMB_SIZE)
- imgfeat_linear_size += __C.BBOXFEAT_EMB_SIZE
- self.frcn_linear = nn.Linear(imgfeat_linear_size, __C.HIDDEN_SIZE)
-
- if __C.USE_AUX_FEAT:
- self.grid_linear = nn.Linear(__C.FEAT_SIZE['gqa']['GRID_FEAT_SIZE'][1], __C.HIDDEN_SIZE)
-
-
- def clevr_init(self, __C):
- self.grid_linear = nn.Linear(__C.FEAT_SIZE['clevr']['GRID_FEAT_SIZE'][1], __C.HIDDEN_SIZE)
-
-
- def vqa_forward(self, feat_dict):
- frcn_feat = feat_dict['FRCN_FEAT']
- bbox_feat = feat_dict['BBOX_FEAT']
-
- img_feat_mask = make_mask(frcn_feat)
-
- if self.__C.USE_BBOX_FEAT:
- bbox_feat = self.bbox_proc(bbox_feat)
- bbox_feat = self.bbox_linear(bbox_feat)
- frcn_feat = torch.cat((frcn_feat, bbox_feat), dim=-1)
- img_feat = self.frcn_linear(frcn_feat)
- rel_embed = self.relation_embedding(bbox_feat)
-
- return img_feat, rel_embed, img_feat_mask
-
-
- def gqa_forward(self, feat_dict):
- frcn_feat = feat_dict['FRCN_FEAT']
- bbox_feat = feat_dict['BBOX_FEAT']
- grid_feat = feat_dict['GRID_FEAT']
-
- img_feat_mask = make_mask(frcn_feat)
-
- if self.__C.USE_BBOX_FEAT:
- bbox_feat = self.bbox_linear(bbox_feat)
- frcn_feat = torch.cat((frcn_feat, bbox_feat), dim=-1)
- img_feat = self.frcn_linear(frcn_feat)
-
- if self.__C.USE_AUX_FEAT:
- grid_feat_mask = make_mask(grid_feat)
- img_feat_mask = torch.cat((img_feat_mask, grid_feat_mask), dim=-1)
- grid_feat = self.grid_linear(grid_feat)
- img_feat = torch.cat((img_feat, grid_feat), dim=1)
-
- rel_embed = self.relation_embedding(bbox_feat)
-
- return img_feat, rel_embed, img_feat_mask
-
-
- def clevr_forward(self, feat_dict):
- grid_feat = feat_dict['GRID_FEAT']
-
- img_feat_mask = make_mask(grid_feat)
- img_feat = self.grid_linear(grid_feat)
-
-        # CLEVR provides grid features only (no bounding boxes), so there is
-        # nothing to build a relation embedding from; returning None here is
-        # an assumption about what the downstream model expects
-        rel_embed = None
-
- return img_feat, rel_embed, img_feat_mask
-
-
-
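
To make the geometry in `relation_embedding()` concrete, here is a toy computation of two of its four pairwise features (the log of the width-normalized center offset and the log width ratio) for a pair of made-up boxes. It reuses the same tensor operations as the method above; the numbers are illustrative only.

```python
# Toy walk-through of the pairwise box geometry used in relation_embedding().
import torch

boxes = torch.tensor([[[0.0, 0.0, 10.0, 10.0],     # box 0: cx = 5,  w = 11
                       [20.0, 0.0, 40.0, 10.0]]])  # box 1: cx = 30, w = 21
x_min, y_min, x_max, y_max = torch.chunk(boxes, 4, dim=2)
cx = (x_min + x_max) * 0.5
w = (x_max - x_min) + 1.0

# delta_x[b, i, j] = log(max(|cx_i - cx_j| / w_i, 1e-3))
delta_x = torch.log(torch.clamp(torch.abs((cx - cx.transpose(-1, -2)) / w), min=1e-3))
# delta_w[b, i, j] = log(w_i / w_j)
delta_w = torch.log(w / w.transpose(-1, -2))

print(delta_x[0])  # off-diagonal: log(25/11) in row 0, log(25/21) in row 1; diagonal: log(1e-3)
print(delta_w[0])  # off-diagonal: log(11/21) and log(21/11); diagonal: 0
```
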
diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_modules.py b/spaces/CVPR/LIVE/pybind11/tests/test_modules.py
deleted file mode 100644
index 7e2100524506b13a5d3189a3fabb9dead628c2a5..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/tests/test_modules.py
+++ /dev/null
@@ -1,73 +0,0 @@
-# -*- coding: utf-8 -*-
-from pybind11_tests import modules as m
-from pybind11_tests.modules import subsubmodule as ms
-from pybind11_tests import ConstructorStats
-
-
-def test_nested_modules():
- import pybind11_tests
- assert pybind11_tests.__name__ == "pybind11_tests"
- assert pybind11_tests.modules.__name__ == "pybind11_tests.modules"
- assert pybind11_tests.modules.subsubmodule.__name__ == "pybind11_tests.modules.subsubmodule"
- assert m.__name__ == "pybind11_tests.modules"
- assert ms.__name__ == "pybind11_tests.modules.subsubmodule"
-
- assert ms.submodule_func() == "submodule_func()"
-
-
-def test_reference_internal():
- b = ms.B()
- assert str(b.get_a1()) == "A[1]"
- assert str(b.a1) == "A[1]"
- assert str(b.get_a2()) == "A[2]"
- assert str(b.a2) == "A[2]"
-
- b.a1 = ms.A(42)
- b.a2 = ms.A(43)
- assert str(b.get_a1()) == "A[42]"
- assert str(b.a1) == "A[42]"
- assert str(b.get_a2()) == "A[43]"
- assert str(b.a2) == "A[43]"
-
- astats, bstats = ConstructorStats.get(ms.A), ConstructorStats.get(ms.B)
- assert astats.alive() == 2
- assert bstats.alive() == 1
- del b
- assert astats.alive() == 0
- assert bstats.alive() == 0
- assert astats.values() == ['1', '2', '42', '43']
- assert bstats.values() == []
- assert astats.default_constructions == 0
- assert bstats.default_constructions == 1
- assert astats.copy_constructions == 0
- assert bstats.copy_constructions == 0
- # assert astats.move_constructions >= 0 # Don't invoke any
- # assert bstats.move_constructions >= 0 # Don't invoke any
- assert astats.copy_assignments == 2
- assert bstats.copy_assignments == 0
- assert astats.move_assignments == 0
- assert bstats.move_assignments == 0
-
-
-def test_importing():
- from pybind11_tests.modules import OD
- from collections import OrderedDict
-
- assert OD is OrderedDict
- assert str(OD([(1, 'a'), (2, 'b')])) == "OrderedDict([(1, 'a'), (2, 'b')])"
-
-
-def test_pydoc():
- """Pydoc needs to be able to provide help() for everything inside a pybind11 module"""
- import pybind11_tests
- import pydoc
-
- assert pybind11_tests.__name__ == "pybind11_tests"
- assert pybind11_tests.__doc__ == "pybind11 test module"
- assert pydoc.text.docmodule(pybind11_tests)
-
-
-def test_duplicate_registration():
- """Registering two things with the same name"""
-
- assert m.duplicate_registration() == []
diff --git a/spaces/CVPR/WALT/mmdet/datasets/samplers/distributed_sampler.py b/spaces/CVPR/WALT/mmdet/datasets/samplers/distributed_sampler.py
deleted file mode 100644
index cc61019484655ee2829f7908dc442caa20cf1d54..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/datasets/samplers/distributed_sampler.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import math
-
-import torch
-from torch.utils.data import DistributedSampler as _DistributedSampler
-
-
-class DistributedSampler(_DistributedSampler):
-
- def __init__(self,
- dataset,
- num_replicas=None,
- rank=None,
- shuffle=True,
- seed=0):
- super().__init__(
- dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
- # for the compatibility from PyTorch 1.3+
- self.seed = seed if seed is not None else 0
-
- def __iter__(self):
- # deterministically shuffle based on epoch
- if self.shuffle:
- g = torch.Generator()
- g.manual_seed(self.epoch + self.seed)
- indices = torch.randperm(len(self.dataset), generator=g).tolist()
- else:
- indices = torch.arange(len(self.dataset)).tolist()
-
- # add extra samples to make it evenly divisible
- # in case that indices is shorter than half of total_size
- indices = (indices *
- math.ceil(self.total_size / len(indices)))[:self.total_size]
- assert len(indices) == self.total_size
-
- # subsample
- indices = indices[self.rank:self.total_size:self.num_replicas]
- assert len(indices) == self.num_samples
-
- return iter(indices)
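
A small usage sketch of the sampler defined above, assuming a hypothetical two-replica setup. Because `num_replicas` and `rank` are passed explicitly, no process group has to be initialized to try it.

```python
# Usage sketch for the DistributedSampler defined above (rank 0 of 2 replicas).
dataset = list(range(10))  # anything with __len__ works for index generation

sampler = DistributedSampler(dataset, num_replicas=2, rank=0, shuffle=True, seed=42)

for epoch in range(2):
    sampler.set_epoch(epoch)     # changes the deterministic shuffle per epoch
    indices = list(iter(sampler))
    print(epoch, indices)        # 5 indices; rank 1 would receive the complementary 5
```
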
diff --git a/spaces/Candyraider/Proxy4/README.md b/spaces/Candyraider/Proxy4/README.md
deleted file mode 100644
index 0881d5470838143571654518654052ae2eff9dc4..0000000000000000000000000000000000000000
--- a/spaces/Candyraider/Proxy4/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Proxy4
-emoji: 🏢
-colorFrom: purple
-colorTo: purple
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Chris4K/llms_compare/Antares Mic Mod Efx Mac ~UPD~ Crack Torrent.md b/spaces/Chris4K/llms_compare/Antares Mic Mod Efx Mac ~UPD~ Crack Torrent.md
deleted file mode 100644
index ee71c05a939b76b62b1fcd1736b96bbb9eeb8593..0000000000000000000000000000000000000000
--- a/spaces/Chris4K/llms_compare/Antares Mic Mod Efx Mac ~UPD~ Crack Torrent.md
+++ /dev/null
@@ -1,84 +0,0 @@
-## Antares Mic Mod Efx Mac Crack Torrent
-
-**CLICK HERE ->>> [https://www.google.com/url?q=https%3A%2F%2Furlca.com%2F2txP1A&sa=D&sntz=1&usg=AOvVaw2UH1YkG1xYBKItn2Gwxll7](https://www.google.com/url?q=https%3A%2F%2Furlca.com%2F2txP1A&sa=D&sntz=1&usg=AOvVaw2UH1YkG1xYBKItn2Gwxll7)**
-
-# How to Get Antares Mic Mod Efx Mac Crack Torrent for Free
-
-Antares Mic Mod Efx is a popular plugin that allows you to emulate the sound of hundreds of different microphones with your existing mic. Whether you want to record vocals, guitars, drums, or any other instrument, you can use Mic Mod Efx to change the tone and character of your sound. But how can you get this plugin for free without paying the hefty price tag?
-
-One way is to download a cracked version of Antares Mic Mod Efx Mac from a torrent site. A torrent is a file that contains information about other files distributed across a network of computers. By using a torrent client, you can download the files you want from other users who have them. However, this method is not recommended for several reasons.
-
-First of all, downloading cracked software is illegal and unethical. You are violating the copyright and license agreement of the software developer, and you are depriving them of their rightful income. Secondly, downloading cracked software is risky and unsafe. You never know what kind of malware or viruses might be hidden in the files you download, and you could end up infecting your computer or compromising your personal data. Thirdly, downloading cracked software is unreliable and unstable. You might encounter errors, bugs, or compatibility issues that affect the performance or quality of your recordings.
-
-So what is the best way to get Antares Mic Mod Efx Mac for free? The answer is simple: use a trial version. Antares offers a free 14-day trial of Mic Mod Efx on their website. You can download and install the plugin on your Mac and use it for two weeks without any limitations or restrictions. You can try out all the features and functions of the plugin and see how it works for you. You can also compare the sound of different microphones and find the ones that suit your style and preference.
-
-After the trial period is over, you can decide whether or not to buy the full version of Antares Mic Mod Efx Mac. The full version costs $129 and comes with lifetime updates and support. You can also get it as part of the Antares AVOX bundle, which includes other vocal processing plugins such as Auto-Tune, Harmony Engine, Articulator, and more.
-
-If you are serious about your music production and want to get the best sound possible, then investing in Antares Mic Mod Efx Mac is worth it. You will get access to a huge collection of microphone models that will enhance your recordings and give you more creative options. You will also get legal and safe software that works smoothly and reliably on your Mac.
-
-So don't waste your time and risk your security by downloading an Antares Mic Mod Efx Mac crack torrent from shady sites. Instead, go to the official Antares website and download the free trial version of Mic Mod Efx today. You will be amazed by what this plugin can do for your sound.
-
-## What Users Say About Antares Mic Mod Efx Mac
-
-If you are still not convinced of the benefits of Antares Mic Mod Efx Mac, you might want to hear what other users have to say about it. Many users have shared their positive experiences and reviews of this plugin on various platforms and websites. Here are some testimonials from real users who have tried Antares Mic Mod Efx Mac:
-
-- "I was just recording on the Sony C800g not too long ago, and when I use this plugin at home (with my ML 770) and hear myself, it sounds like I'm on the Sony. Blown away by how good this plugin is." - Michael from Newport Beach, CA[^1^]
-
-- "This tool is just that... a tool. I used it alongside my 1977 U87 and my U87ai. I was unable to tell the difference between my Ai and my vintage U87 when I used this plugin to turn one into the other. Like a few others have stated... I'm shocked this tool doesn't get more exposure." - CC from Colorado[^1^]
-
-- "I'm using this plug-in with a Manley Ref Cad; I have no clue what the actual versions of most of these mics are really supposed to sound like. All I know is they sound great!!" - Rony from Philadelphia[^1^]
-
-- "I'm astounded at the lack of credit Mic Mod has gotten. This software is really easy to use and also sounds extremely convincing to my ear. By no means does it sound like my own mic being EQ'ed. What I hear is dynamic frequency response change and saturation as well." - Anthony Lowery from Manteca, CA[^1^]
-
-- "This is clearly not something you could do in the real world, but if it creates a sound that works then it's more than justified. The mic models themselves are stored as separate files which, in the case of Mac users, are located within the Preferences folder in the System folder." - Paul White from Sound On Sound[^3^]
-
-As you can see, Antares Mic Mod Efx Mac has received rave reviews from users who have tried it and loved it. They have praised its ease of use, its realism, its versatility, and its quality. They have also compared it favorably to some of the most expensive and sought-after microphones in the world.
-
diff --git a/spaces/ChrisPreston/diff-svc_minato_aqua/modules/commons/ssim.py b/spaces/ChrisPreston/diff-svc_minato_aqua/modules/commons/ssim.py
deleted file mode 100644
index 3f77c95803206138dd05095a037fed5acb1c4112..0000000000000000000000000000000000000000
--- a/spaces/ChrisPreston/diff-svc_minato_aqua/modules/commons/ssim.py
+++ /dev/null
@@ -1,84 +0,0 @@
-"""
-Adapted from https://github.com/Po-Hsun-Su/pytorch-ssim
-"""
-
-from math import exp
-
-import torch
-import torch.nn.functional as F
-from torch.autograd import Variable
-
-
-def gaussian(window_size, sigma):
- gauss = torch.Tensor([exp(-(x - window_size // 2) ** 2 / float(2 * sigma ** 2)) for x in range(window_size)])
- return gauss / gauss.sum()
-
-
-def create_window(window_size, channel):
- _1D_window = gaussian(window_size, 1.5).unsqueeze(1)
- _2D_window = _1D_window.mm(_1D_window.t()).float().unsqueeze(0).unsqueeze(0)
- window = Variable(_2D_window.expand(channel, 1, window_size, window_size).contiguous())
- return window
-
-
-def _ssim(img1, img2, window, window_size, channel, size_average=True):
- mu1 = F.conv2d(img1, window, padding=window_size // 2, groups=channel)
- mu2 = F.conv2d(img2, window, padding=window_size // 2, groups=channel)
-
- mu1_sq = mu1.pow(2)
- mu2_sq = mu2.pow(2)
- mu1_mu2 = mu1 * mu2
-
- sigma1_sq = F.conv2d(img1 * img1, window, padding=window_size // 2, groups=channel) - mu1_sq
- sigma2_sq = F.conv2d(img2 * img2, window, padding=window_size // 2, groups=channel) - mu2_sq
- sigma12 = F.conv2d(img1 * img2, window, padding=window_size // 2, groups=channel) - mu1_mu2
-
- C1 = 0.01 ** 2
- C2 = 0.03 ** 2
-
- ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2))
-
- if size_average:
- return ssim_map.mean()
- else:
- return ssim_map.mean(1)
-
-
-class SSIM(torch.nn.Module):
- def __init__(self, window_size=11, size_average=True):
- super(SSIM, self).__init__()
- self.window_size = window_size
- self.size_average = size_average
- self.channel = 1
- self.window = create_window(window_size, self.channel)
-
- def forward(self, img1, img2):
- (_, channel, _, _) = img1.size()
-
- if channel == self.channel and self.window.data.type() == img1.data.type():
- window = self.window
- else:
- window = create_window(self.window_size, channel)
-
- if img1.is_cuda:
- window = window.cuda(img1.get_device())
- window = window.type_as(img1)
-
- self.window = window
- self.channel = channel
-
- return _ssim(img1, img2, window, self.window_size, channel, self.size_average)
-
-
-window = None
-
-
-def ssim(img1, img2, window_size=11, size_average=True):
- (_, channel, _, _) = img1.size()
- global window
- if window is None:
- window = create_window(window_size, channel)
- if img1.is_cuda:
- window = window.cuda(img1.get_device())
- window = window.type_as(img1)
- return _ssim(img1, img2, window, window_size, channel, size_average)
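
A brief usage sketch of the module-level `ssim()` helper defined above, on random tensors in the expected `[batch, channel, height, width]` layout: an image compared with itself scores 1.0, while a noisy copy scores noticeably lower.

```python
# Usage sketch for the ssim() helper defined above.
import torch

img = torch.rand(1, 1, 64, 64)                               # [B, C, H, W] in [0, 1]
noisy = (img + 0.1 * torch.randn_like(img)).clamp(0.0, 1.0)  # mildly corrupted copy

print(ssim(img, img).item())    # ~1.0 for identical inputs
print(ssim(img, noisy).item())  # clearly below 1.0
```
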
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/structures/segmentation_mask.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/structures/segmentation_mask.py
deleted file mode 100644
index 5e1ba07767df487c9b4cccca4a87540a4bce3b99..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/structures/segmentation_mask.py
+++ /dev/null
@@ -1,535 +0,0 @@
-import cv2
-import copy
-import torch
-import numpy as np
-from maskrcnn_benchmark.layers.misc import interpolate
-
-import pycocotools.mask as mask_utils
-
-# transpose
-FLIP_LEFT_RIGHT = 0
-FLIP_TOP_BOTTOM = 1
-
-
-""" ABSTRACT
-Segmentations come in either:
-1) Binary masks
-2) Polygons
-
-Binary masks can be represented in a contiguous array
-and operations can be carried out more efficiently,
-therefore BinaryMaskList handles them together.
-
-Polygons are handled separately for each instance,
-by PolygonInstance and instances are handled by
-PolygonList.
-
-SegmentationList is supposed to represent both,
-therefore it wraps the functions of BinaryMaskList
-and PolygonList to make it transparent.
-"""
-
-
-class BinaryMaskList(object):
- """
- This class handles binary masks for all objects in the image
- """
-
- def __init__(self, masks, size):
- """
- Arguments:
- masks: Either torch.tensor of [num_instances, H, W]
- or list of torch.tensors of [H, W] with num_instances elems,
- or RLE (Run Length Encoding) - interpreted as list of dicts,
- or BinaryMaskList.
- size: absolute image size, width first
-
- After initialization, a hard copy will be made, to leave the
- initializing source data intact.
- """
-
- if isinstance(masks, torch.Tensor):
- # The raw data representation is passed as argument
- masks = masks.clone()
- elif isinstance(masks, (list, tuple)):
- if isinstance(masks[0], torch.Tensor):
- masks = torch.stack(masks, dim=2).clone()
-            elif isinstance(masks[0], dict) and "counts" in masks[0]:
-                # RLE (Run Length Encoding) interpretation: decode with
-                # pycocotools into a [H, W, N] array, then move the instance
-                # axis to the front so masks end up as [N, H, W]
-                masks = mask_utils.decode(masks)
-                masks = torch.as_tensor(masks).permute(2, 0, 1).contiguous()
- else:
-                raise RuntimeError(
-                    "Type of `masks[0]` could not be interpreted: %s" % type(masks)
-                )
- elif isinstance(masks, BinaryMaskList):
- # just hard copy the BinaryMaskList instance's underlying data
- masks = masks.masks.clone()
- else:
-            raise RuntimeError(
-                "Type of `masks` argument could not be interpreted: %s" % type(masks)
-            )
-
- if len(masks.shape) == 2:
- # if only a single instance mask is passed
- masks = masks[None]
-
- assert len(masks.shape) == 3
- assert masks.shape[1] == size[1], "%s != %s" % (masks.shape[1], size[1])
- assert masks.shape[2] == size[0], "%s != %s" % (masks.shape[2], size[0])
-
- self.masks = masks
- self.size = tuple(size)
-
- def transpose(self, method):
- dim = 1 if method == FLIP_TOP_BOTTOM else 2
- flipped_masks = self.masks.flip(dim)
- return BinaryMaskList(flipped_masks, self.size)
-
- def crop(self, box):
- assert isinstance(box, (list, tuple, torch.Tensor)), str(type(box))
- # box is assumed to be xyxy
- current_width, current_height = self.size
- xmin, ymin, xmax, ymax = [round(float(b)) for b in box]
-
- assert xmin <= xmax and ymin <= ymax, str(box)
- xmin = min(max(xmin, 0), current_width - 1)
- ymin = min(max(ymin, 0), current_height - 1)
-
- xmax = min(max(xmax, 0), current_width)
- ymax = min(max(ymax, 0), current_height)
-
- xmax = max(xmax, xmin + 1)
- ymax = max(ymax, ymin + 1)
-
- width, height = xmax - xmin, ymax - ymin
- cropped_masks = self.masks[:, ymin:ymax, xmin:xmax]
- cropped_size = width, height
- return BinaryMaskList(cropped_masks, cropped_size)
-
- def resize(self, size):
- try:
- iter(size)
- except TypeError:
- assert isinstance(size, (int, float))
- size = size, size
- width, height = map(int, size)
-
- assert width > 0
- assert height > 0
-
- # Height comes first here!
- resized_masks = torch.nn.functional.interpolate(
- input=self.masks[None].float(),
- size=(height, width),
- mode="bilinear",
- align_corners=False,
- )[0].type_as(self.masks)
- resized_size = width, height
- return BinaryMaskList(resized_masks, resized_size)
-
- def convert_to_polygon(self):
- contours = self._findContours()
- return PolygonList(contours, self.size)
-
- def to(self, *args, **kwargs):
- return self
-
- def _findContours(self):
- contours = []
- masks = self.masks.detach().numpy()
- for mask in masks:
- mask = cv2.UMat(mask)
- contour, hierarchy = cv2.findContours(
- mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_L1
- )
-
- reshaped_contour = []
- for entity in contour:
- assert len(entity.shape) == 3
- assert entity.shape[1] == 1, "Hierarchical contours are not allowed"
- reshaped_contour.append(entity.reshape(-1).tolist())
- contours.append(reshaped_contour)
- return contours
-
- def __len__(self):
- return len(self.masks)
-
- def __getitem__(self, index):
-        # Indexing then cloning may add some overhead,
-        # but it preserves consistency.
- masks = self.masks[index].clone()
- return BinaryMaskList(masks, self.size)
-
- def __iter__(self):
- return iter(self.masks)
-
- def __repr__(self):
- s = self.__class__.__name__ + "("
- s += "num_instances={}, ".format(len(self.masks))
- s += "image_width={}, ".format(self.size[0])
- s += "image_height={})".format(self.size[1])
- return s
-
-
-class PolygonInstance(object):
- """
- This class holds a set of polygons that represents a single instance
- of an object mask. The object can be represented as a set of
- polygons
- """
-
- def __init__(self, polygons, size):
- """
- Arguments:
-            polygons: a list of lists of numbers.
-                The first level refers to all the polygons that compose the
-                object, and the second level to the polygon coordinates.
-            size: absolute image size, width first
- """
- if isinstance(polygons, (list, tuple)):
- valid_polygons = []
- for p in polygons:
- p = torch.as_tensor(p, dtype=torch.float32)
- if len(p) >= 6: # 3 * 2 coordinates
- valid_polygons.append(p)
- polygons = valid_polygons
-
- elif isinstance(polygons, PolygonInstance):
- polygons = copy.copy(polygons.polygons)
- else:
-            raise RuntimeError(
-                "Type of argument `polygons` is not allowed: %s" % (type(polygons))
- )
-
- """ This crashes the training way too many times...
- for p in polygons:
- assert p[::2].min() >= 0
- assert p[::2].max() < size[0]
- assert p[1::2].min() >= 0
-            assert p[1::2].max() < size[1]
- """
-
- self.polygons = polygons
- self.size = tuple(size)
-
- def transpose(self, method):
- if method not in (FLIP_LEFT_RIGHT, FLIP_TOP_BOTTOM):
- raise NotImplementedError(
- "Only FLIP_LEFT_RIGHT and FLIP_TOP_BOTTOM implemented"
- )
-
- flipped_polygons = []
- width, height = self.size
- if method == FLIP_LEFT_RIGHT:
- dim = width
- idx = 0
- elif method == FLIP_TOP_BOTTOM:
- dim = height
- idx = 1
-
- for poly in self.polygons:
- p = poly.clone()
- TO_REMOVE = 1
- p[idx::2] = dim - poly[idx::2] - TO_REMOVE
- flipped_polygons.append(p)
-
- return PolygonInstance(flipped_polygons, size=self.size)
-
- def crop(self, box):
- assert isinstance(box, (list, tuple, torch.Tensor)), str(type(box))
-
- # box is assumed to be xyxy
- current_width, current_height = self.size
- xmin, ymin, xmax, ymax = map(float, box)
-
- assert xmin <= xmax and ymin <= ymax, str(box)
- xmin = min(max(xmin, 0), current_width - 1)
- ymin = min(max(ymin, 0), current_height - 1)
-
- xmax = min(max(xmax, 0), current_width)
- ymax = min(max(ymax, 0), current_height)
-
- xmax = max(xmax, xmin + 1)
- ymax = max(ymax, ymin + 1)
-
- w, h = xmax - xmin, ymax - ymin
-
- cropped_polygons = []
- for poly in self.polygons:
- p = poly.clone()
- p[0::2] = p[0::2] - xmin # .clamp(min=0, max=w)
- p[1::2] = p[1::2] - ymin # .clamp(min=0, max=h)
- cropped_polygons.append(p)
-
- return PolygonInstance(cropped_polygons, size=(w, h))
-
- def resize(self, size):
- try:
- iter(size)
- except TypeError:
- assert isinstance(size, (int, float))
- size = size, size
-
- ratios = tuple(float(s) / float(s_orig) for s, s_orig in zip(size, self.size))
-
- if ratios[0] == ratios[1]:
- ratio = ratios[0]
- scaled_polys = [p * ratio for p in self.polygons]
- return PolygonInstance(scaled_polys, size)
-
- ratio_w, ratio_h = ratios
- scaled_polygons = []
- for poly in self.polygons:
- p = poly.clone()
- p[0::2] *= ratio_w
- p[1::2] *= ratio_h
- scaled_polygons.append(p)
-
- return PolygonInstance(scaled_polygons, size=size)
-
- def convert_to_binarymask(self):
- width, height = self.size
- # formatting for COCO PythonAPI
- polygons = [p.numpy() for p in self.polygons]
- rles = mask_utils.frPyObjects(polygons, height, width)
- rle = mask_utils.merge(rles)
- mask = mask_utils.decode(rle)
- mask = torch.from_numpy(mask)
- return mask
-
- def __len__(self):
- return len(self.polygons)
-
- def __repr__(self):
- s = self.__class__.__name__ + "("
- s += "num_groups={}, ".format(len(self.polygons))
- s += "image_width={}, ".format(self.size[0])
-        s += "image_height={})".format(self.size[1])
- return s
-
-
-class PolygonList(object):
- """
- This class handles PolygonInstances for all objects in the image
- """
-
- def __init__(self, polygons, size):
- """
- Arguments:
- polygons:
-                a list of lists of lists of numbers. The first
-                level of the list corresponds to individual instances,
- the second level to all the polygons that compose the
- object, and the third level to the polygon coordinates.
-
- OR
-
- a list of PolygonInstances.
-
- OR
-
- a PolygonList
-
- size: absolute image size
-
- """
- if isinstance(polygons, (list, tuple)):
- if len(polygons) == 0:
- polygons = [[[]]]
- if isinstance(polygons[0], (list, tuple)):
- assert isinstance(polygons[0][0], (list, tuple)), str(
- type(polygons[0][0])
- )
- else:
- assert isinstance(polygons[0], PolygonInstance), str(type(polygons[0]))
-
- elif isinstance(polygons, PolygonList):
- size = polygons.size
- polygons = polygons.polygons
-
- else:
-            raise RuntimeError(
-                "Type of argument `polygons` is not allowed: %s" % (type(polygons))
- )
-
- assert isinstance(size, (list, tuple)), str(type(size))
-
- self.polygons = []
- for p in polygons:
- p = PolygonInstance(p, size)
- if len(p) > 0:
- self.polygons.append(p)
-
- self.size = tuple(size)
-
- def transpose(self, method):
- if method not in (FLIP_LEFT_RIGHT, FLIP_TOP_BOTTOM):
- raise NotImplementedError(
- "Only FLIP_LEFT_RIGHT and FLIP_TOP_BOTTOM implemented"
- )
-
- flipped_polygons = []
- for polygon in self.polygons:
- flipped_polygons.append(polygon.transpose(method))
-
- return PolygonList(flipped_polygons, size=self.size)
-
- def crop(self, box):
- w, h = box[2] - box[0], box[3] - box[1]
- cropped_polygons = []
- for polygon in self.polygons:
- cropped_polygons.append(polygon.crop(box))
-
- cropped_size = w, h
- return PolygonList(cropped_polygons, cropped_size)
-
- def resize(self, size):
- resized_polygons = []
- for polygon in self.polygons:
- resized_polygons.append(polygon.resize(size))
-
- resized_size = size
- return PolygonList(resized_polygons, resized_size)
-
- def to(self, *args, **kwargs):
- return self
-
- def convert_to_binarymask(self):
- if len(self) > 0:
- masks = torch.stack([p.convert_to_binarymask() for p in self.polygons])
- else:
- size = self.size
- masks = torch.empty([0, size[1], size[0]], dtype=torch.uint8)
-
- return BinaryMaskList(masks, size=self.size)
-
- def __len__(self):
- return len(self.polygons)
-
- def __getitem__(self, item):
- if isinstance(item, int):
- selected_polygons = [self.polygons[item]]
- elif isinstance(item, slice):
- selected_polygons = self.polygons[item]
- else:
- # advanced indexing on a single dimension
- selected_polygons = []
- if isinstance(item, torch.Tensor) and item.dtype == torch.uint8:
- item = item.nonzero()
- item = item.squeeze(1) if item.numel() > 0 else item
- item = item.tolist()
- for i in item:
- selected_polygons.append(self.polygons[i])
- return PolygonList(selected_polygons, size=self.size)
-
- def __iter__(self):
- return iter(self.polygons)
-
- def __repr__(self):
- s = self.__class__.__name__ + "("
- s += "num_instances={}, ".format(len(self.polygons))
- s += "image_width={}, ".format(self.size[0])
- s += "image_height={})".format(self.size[1])
- return s
-
-
-class SegmentationMask(object):
-
- """
- This class stores the segmentations for all objects in the image.
- It wraps BinaryMaskList and PolygonList conveniently.
- """
-
- def __init__(self, instances, size, mode="poly"):
- """
- Arguments:
- instances: two types
- (1) polygon
- (2) binary mask
- size: (width, height)
-            mode: 'poly' or 'mask'. If mode is 'mask', convert masks of any format to binary masks.
- """
-
- assert isinstance(size, (list, tuple))
- assert len(size) == 2
- if isinstance(size[0], torch.Tensor):
- assert isinstance(size[1], torch.Tensor)
- size = size[0].item(), size[1].item()
-
- assert isinstance(size[0], (int, float))
- assert isinstance(size[1], (int, float))
-
- if mode == "poly":
- self.instances = PolygonList(instances, size)
- elif mode == "mask":
- self.instances = BinaryMaskList(instances, size)
- else:
- raise NotImplementedError("Unknown mode: %s" % str(mode))
-
- self.mode = mode
- self.size = tuple(size)
-
- def transpose(self, method):
- flipped_instances = self.instances.transpose(method)
- return SegmentationMask(flipped_instances, self.size, self.mode)
-
- def crop(self, box):
- cropped_instances = self.instances.crop(box)
- cropped_size = cropped_instances.size
- return SegmentationMask(cropped_instances, cropped_size, self.mode)
-
- def resize(self, size, *args, **kwargs):
- resized_instances = self.instances.resize(size)
- resized_size = size
- return SegmentationMask(resized_instances, resized_size, self.mode)
-
- def to(self, *args, **kwargs):
- return self
-
- def convert(self, mode):
- if mode == self.mode:
- return self
-
- if mode == "poly":
- converted_instances = self.instances.convert_to_polygon()
- elif mode == "mask":
- converted_instances = self.instances.convert_to_binarymask()
- else:
- raise NotImplementedError("Unknown mode: %s" % str(mode))
-
- return SegmentationMask(converted_instances, self.size, mode)
-
- def get_mask_tensor(self):
- instances = self.instances
- if self.mode == "poly":
- instances = instances.convert_to_binarymask()
- # If there is only 1 instance
- return instances.masks.squeeze(0)
-
- def __len__(self):
- return len(self.instances)
-
- def __getitem__(self, item):
- selected_instances = self.instances.__getitem__(item)
- return SegmentationMask(selected_instances, self.size, self.mode)
-
- def __iter__(self):
- self.iter_idx = 0
- return self
-
- def __next__(self):
- if self.iter_idx < self.__len__():
- next_segmentation = self.__getitem__(self.iter_idx)
- self.iter_idx += 1
- return next_segmentation
- raise StopIteration()
-
- next = __next__ # Python 2 compatibility
-
- def __repr__(self):
- s = self.__class__.__name__ + "("
- s += "num_instances={}, ".format(len(self.instances))
- s += "image_width={}, ".format(self.size[0])
- s += "image_height={}, ".format(self.size[1])
- s += "mode={})".format(self.mode)
- return s
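The abstract at the top of the deleted segmentation_mask.py describes how SegmentationMask wraps PolygonList and BinaryMaskList. A minimal usage sketch of that wrapper, assuming the module is still importable at its original path and pycocotools is installed; the polygon coordinates and sizes below are illustrative:

```python
from maskrcnn_benchmark.structures.segmentation_mask import (
    FLIP_LEFT_RIGHT,
    SegmentationMask,
)

# One instance made of a single triangle, given as flat (x, y) pixel coordinates.
polygons = [[[10.0, 10.0, 60.0, 10.0, 35.0, 50.0]]]
segm = SegmentationMask(polygons, size=(100, 80), mode="poly")  # size is (width, height)

flipped = segm.transpose(FLIP_LEFT_RIGHT)   # mirror horizontally
cropped = segm.crop([5, 5, 95, 75])         # xyxy box
resized = segm.resize((50, 40))             # new (width, height)
as_mask = segm.convert("mask")              # PolygonList -> BinaryMaskList via pycocotools

print(len(segm), as_mask.get_mask_tensor().shape)  # 1 torch.Size([80, 100])
```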
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/background.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/background.py
deleted file mode 100644
index dd3bbe249130348881331aea569ce3ec3f295128..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/background.py
+++ /dev/null
@@ -1 +0,0 @@
-from starlette.background import BackgroundTasks as BackgroundTasks # noqa
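The deleted fastapi/background.py is a one-line re-export of Starlette's BackgroundTasks under the fastapi namespace. For context, a short sketch of the pattern it enables; the /notify route and the write_log helper are illustrative, not taken from the space:

```python
from fastapi import BackgroundTasks, FastAPI

app = FastAPI()

def write_log(message: str) -> None:
    # Illustrative task body; a real app might append to a file or push to a queue.
    print(message)

@app.post("/notify")
def notify(background_tasks: BackgroundTasks) -> dict:
    # Tasks registered here run after the response has been sent to the client.
    background_tasks.add_task(write_log, "notification sent")
    return {"status": "queued"}
```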
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_C_.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_C_.py
deleted file mode 100644
index 573b3f9c3970766ea817994509f4939ef4f70f0c..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_C_.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from .otBase import BaseTTXConverter
-
-
-class table_T_S_I_C_(BaseTTXConverter):
- pass
diff --git a/spaces/DaleChen/AutoGPT/autogpt/__main__.py b/spaces/DaleChen/AutoGPT/autogpt/__main__.py
deleted file mode 100644
index 128f9eea4900429e88276abdde3419b806001ac7..0000000000000000000000000000000000000000
--- a/spaces/DaleChen/AutoGPT/autogpt/__main__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-"""Auto-GPT: A GPT powered AI Assistant"""
-import autogpt.cli
-
-if __name__ == "__main__":
- autogpt.cli.main()
diff --git a/spaces/DanteOz/Minimal-Endpoint/app.py b/spaces/DanteOz/Minimal-Endpoint/app.py
deleted file mode 100644
index 06d3b5bcbd4eadf5eece2457c5cf4d11556fc628..0000000000000000000000000000000000000000
--- a/spaces/DanteOz/Minimal-Endpoint/app.py
+++ /dev/null
@@ -1,14 +0,0 @@
-from flask import Flask
-
-app = Flask(__name__)
-
-@app.route("/")
-def index():
-    return "Minimal endpoint is running."  # placeholder response body
-
-
-
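A quick way to exercise the endpoint above without starting a server is Flask's built-in test client; this sketch assumes the deleted file is importable as app.py:

```python
from app import app  # the Flask instance defined above

with app.test_client() as client:
    response = client.get("/")
    print(response.status_code, response.get_data(as_text=True))
```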
diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/training_scripts/sg3/training/networks_stylegan2.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/training_scripts/sg3/training/networks_stylegan2.py
deleted file mode 100644
index 832c7faf0baa0ddf6a1d39ad867a0b3d03bb47d2..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/stylegan_human/training_scripts/sg3/training/networks_stylegan2.py
+++ /dev/null
@@ -1,1007 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Network architectures from the paper
-"Analyzing and Improving the Image Quality of StyleGAN".
-Matches the original implementation of configs E-F by Karras et al. at
-https://github.com/NVlabs/stylegan2/blob/master/training/networks_stylegan2.py"""
-
-import numpy as np
-import torch
-from torch_utils import misc
-from torch_utils import persistence
-from torch_utils.ops import conv2d_resample
-from torch_utils.ops import upfirdn2d
-from torch_utils.ops import bias_act
-from torch_utils.ops import fma
-
-# ----------------------------------------------------------------------------
-
-
-@misc.profiled_function
-def normalize_2nd_moment(x, dim=1, eps=1e-8):
- return x * (x.square().mean(dim=dim, keepdim=True) + eps).rsqrt()
-
-# ----------------------------------------------------------------------------
-
-
-@misc.profiled_function
-def modulated_conv2d(
- # Input tensor of shape [batch_size, in_channels, in_height, in_width].
- x,
- # Weight tensor of shape [out_channels, in_channels, kernel_height, kernel_width].
- weight,
- # Modulation coefficients of shape [batch_size, in_channels].
- styles,
- noise=None, # Optional noise tensor to add to the output activations.
- up=1, # Integer upsampling factor.
- down=1, # Integer downsampling factor.
- padding=0, # Padding with respect to the upsampled image.
- # Low-pass filter to apply when resampling activations. Must be prepared beforehand by calling upfirdn2d.setup_filter().
- resample_filter=None,
- demodulate=True, # Apply weight demodulation?
- # False = convolution, True = correlation (matches torch.nn.functional.conv2d).
- flip_weight=True,
- # Perform modulation, convolution, and demodulation as a single fused operation?
- fused_modconv=True,
-):
- batch_size = x.shape[0]
- out_channels, in_channels, kh, kw = weight.shape
- misc.assert_shape(weight, [out_channels, in_channels, kh, kw]) # [OIkk]
- misc.assert_shape(x, [batch_size, in_channels, None, None]) # [NIHW]
- misc.assert_shape(styles, [batch_size, in_channels]) # [NI]
-
- # Pre-normalize inputs to avoid FP16 overflow.
- if x.dtype == torch.float16 and demodulate:
- weight = weight * (1 / np.sqrt(in_channels * kh * kw) /
- weight.norm(float('inf'), dim=[1, 2, 3], keepdim=True)) # max_Ikk
- styles = styles / \
- styles.norm(float('inf'), dim=1, keepdim=True) # max_I
-
- # Calculate per-sample weights and demodulation coefficients.
- w = None
- dcoefs = None
- if demodulate or fused_modconv:
- w = weight.unsqueeze(0) # [NOIkk]
- w = w * styles.reshape(batch_size, 1, -1, 1, 1) # [NOIkk]
- if demodulate:
- dcoefs = (w.square().sum(dim=[2, 3, 4]) + 1e-8).rsqrt() # [NO]
- if demodulate and fused_modconv:
- w = w * dcoefs.reshape(batch_size, -1, 1, 1, 1) # [NOIkk]
-
- # Execute by scaling the activations before and after the convolution.
- if not fused_modconv:
- x = x * styles.to(x.dtype).reshape(batch_size, -1, 1, 1)
- x = conv2d_resample.conv2d_resample(x=x, w=weight.to(
- x.dtype), f=resample_filter, up=up, down=down, padding=padding, flip_weight=flip_weight)
- if demodulate and noise is not None:
- x = fma.fma(x, dcoefs.to(x.dtype).reshape(
- batch_size, -1, 1, 1), noise.to(x.dtype))
- elif demodulate:
- x = x * dcoefs.to(x.dtype).reshape(batch_size, -1, 1, 1)
- elif noise is not None:
- x = x.add_(noise.to(x.dtype))
- return x
-
- # Execute as one fused op using grouped convolution.
- with misc.suppress_tracer_warnings(): # this value will be treated as a constant
- batch_size = int(batch_size)
- misc.assert_shape(x, [batch_size, in_channels, None, None])
- x = x.reshape(1, -1, *x.shape[2:])
- w = w.reshape(-1, in_channels, kh, kw)
- x = conv2d_resample.conv2d_resample(x=x, w=w.to(
- x.dtype), f=resample_filter, up=up, down=down, padding=padding, groups=batch_size, flip_weight=flip_weight)
- x = x.reshape(batch_size, -1, *x.shape[2:])
- if noise is not None:
- x = x.add_(noise)
- return x
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class FullyConnectedLayer(torch.nn.Module):
- def __init__(self,
- in_features, # Number of input features.
- out_features, # Number of output features.
- bias=True, # Apply additive bias before the activation function?
- # Activation function: 'relu', 'lrelu', etc.
- activation='linear',
- lr_multiplier=1, # Learning rate multiplier.
- bias_init=0, # Initial value for the additive bias.
- ):
- super().__init__()
- self.in_features = in_features
- self.out_features = out_features
- self.activation = activation
- self.weight = torch.nn.Parameter(torch.randn(
- [out_features, in_features]) / lr_multiplier)
- self.bias = torch.nn.Parameter(torch.full(
- [out_features], np.float32(bias_init))) if bias else None
- self.weight_gain = lr_multiplier / np.sqrt(in_features)
- self.bias_gain = lr_multiplier
-
- def forward(self, x):
- w = self.weight.to(x.dtype) * self.weight_gain
- b = self.bias
- if b is not None:
- b = b.to(x.dtype)
- if self.bias_gain != 1:
- b = b * self.bias_gain
-
- if self.activation == 'linear' and b is not None:
- x = torch.addmm(b.unsqueeze(0), x, w.t())
- else:
- x = x.matmul(w.t())
- x = bias_act.bias_act(x, b, act=self.activation)
- return x
-
- def extra_repr(self):
- return f'in_features={self.in_features:d}, out_features={self.out_features:d}, activation={self.activation:s}'
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class Conv2dLayer(torch.nn.Module):
- def __init__(self,
- in_channels, # Number of input channels.
- out_channels, # Number of output channels.
- # Width and height of the convolution kernel.
- kernel_size,
- bias=True, # Apply additive bias before the activation function?
- # Activation function: 'relu', 'lrelu', etc.
- activation='linear',
- up=1, # Integer upsampling factor.
- down=1, # Integer downsampling factor.
- # Low-pass filter to apply when resampling activations.
- resample_filter=[1, 3, 3, 1],
- # Clamp the output to +-X, None = disable clamping.
- conv_clamp=None,
- channels_last=False, # Expect the input to have memory_format=channels_last?
- trainable=True, # Update the weights of this layer during training?
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.activation = activation
- self.up = up
- self.down = down
- self.conv_clamp = conv_clamp
- self.register_buffer(
- 'resample_filter', upfirdn2d.setup_filter(resample_filter))
- self.padding = kernel_size // 2
- self.weight_gain = 1 / np.sqrt(in_channels * (kernel_size ** 2))
- self.act_gain = bias_act.activation_funcs[activation].def_gain
-
- memory_format = torch.channels_last if channels_last else torch.contiguous_format
- weight = torch.randn([out_channels, in_channels, kernel_size, kernel_size]).to(
- memory_format=memory_format)
- bias = torch.zeros([out_channels]) if bias else None
- if trainable:
- self.weight = torch.nn.Parameter(weight)
- self.bias = torch.nn.Parameter(bias) if bias is not None else None
- else:
- self.register_buffer('weight', weight)
- if bias is not None:
- self.register_buffer('bias', bias)
- else:
- self.bias = None
-
- def forward(self, x, gain=1):
- w = self.weight * self.weight_gain
- b = self.bias.to(x.dtype) if self.bias is not None else None
- flip_weight = (self.up == 1) # slightly faster
- x = conv2d_resample.conv2d_resample(x=x, w=w.to(
- x.dtype), f=self.resample_filter, up=self.up, down=self.down, padding=self.padding, flip_weight=flip_weight)
-
- act_gain = self.act_gain * gain
- act_clamp = self.conv_clamp * gain if self.conv_clamp is not None else None
- x = bias_act.bias_act(x, b, act=self.activation,
- gain=act_gain, clamp=act_clamp)
- return x
-
- def extra_repr(self):
- return ' '.join([
- f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}, activation={self.activation:s},',
- f'up={self.up}, down={self.down}'])
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class MappingNetwork(torch.nn.Module):
- def __init__(self,
- # Input latent (Z) dimensionality, 0 = no latent.
- z_dim,
- # Conditioning label (C) dimensionality, 0 = no label.
- c_dim,
- # Intermediate latent (W) dimensionality.
- w_dim,
- # Number of intermediate latents to output, None = do not broadcast.
- num_ws,
- num_layers=8, # Number of mapping layers.
- # Label embedding dimensionality, None = same as w_dim.
- embed_features=None,
- # Number of intermediate features in the mapping layers, None = same as w_dim.
- layer_features=None,
- # Activation function: 'relu', 'lrelu', etc.
- activation='lrelu',
- # Learning rate multiplier for the mapping layers.
- lr_multiplier=0.01,
- # Decay for tracking the moving average of W during training, None = do not track.
- w_avg_beta=0.998,
- ):
- super().__init__()
- self.z_dim = z_dim
- self.c_dim = c_dim
- self.w_dim = w_dim
- self.num_ws = num_ws
- self.num_layers = num_layers
- self.w_avg_beta = w_avg_beta
-
- if embed_features is None:
- embed_features = w_dim
- if c_dim == 0:
- embed_features = 0
- if layer_features is None:
- layer_features = w_dim
- features_list = [z_dim + embed_features] + \
- [layer_features] * (num_layers - 1) + [w_dim]
-
- if c_dim > 0:
- self.embed = FullyConnectedLayer(c_dim, embed_features)
- for idx in range(num_layers):
- in_features = features_list[idx]
- out_features = features_list[idx + 1]
- layer = FullyConnectedLayer(
- in_features, out_features, activation=activation, lr_multiplier=lr_multiplier)
- setattr(self, f'fc{idx}', layer)
-
- if num_ws is not None and w_avg_beta is not None:
- self.register_buffer('w_avg', torch.zeros([w_dim]))
-
- def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False):
- # Embed, normalize, and concat inputs.
- x = None
- with torch.autograd.profiler.record_function('input'):
- if self.z_dim > 0:
- misc.assert_shape(z, [None, self.z_dim])
- x = normalize_2nd_moment(z.to(torch.float32))
- if self.c_dim > 0:
- misc.assert_shape(c, [None, self.c_dim])
- y = normalize_2nd_moment(self.embed(c.to(torch.float32)))
- x = torch.cat([x, y], dim=1) if x is not None else y
-
- # Main layers.
- for idx in range(self.num_layers):
- layer = getattr(self, f'fc{idx}')
- x = layer(x)
-
- # Update moving average of W.
- if update_emas and self.w_avg_beta is not None:
- with torch.autograd.profiler.record_function('update_w_avg'):
- self.w_avg.copy_(x.detach().mean(
- dim=0).lerp(self.w_avg, self.w_avg_beta))
-
- # Broadcast.
- if self.num_ws is not None:
- with torch.autograd.profiler.record_function('broadcast'):
- x = x.unsqueeze(1).repeat([1, self.num_ws, 1])
-
- # Apply truncation.
- if truncation_psi != 1:
- with torch.autograd.profiler.record_function('truncate'):
- assert self.w_avg_beta is not None
- if self.num_ws is None or truncation_cutoff is None:
- x = self.w_avg.lerp(x, truncation_psi)
- else:
- x[:, :truncation_cutoff] = self.w_avg.lerp(
- x[:, :truncation_cutoff], truncation_psi)
- return x
-
- def extra_repr(self):
- return f'z_dim={self.z_dim:d}, c_dim={self.c_dim:d}, w_dim={self.w_dim:d}, num_ws={self.num_ws:d}'
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class SynthesisLayer(torch.nn.Module):
- def __init__(self,
- in_channels, # Number of input channels.
- out_channels, # Number of output channels.
- # Intermediate latent (W) dimensionality.
- w_dim,
- resolution, # Resolution of this layer.
- kernel_size=3, # Convolution kernel size.
- up=1, # Integer upsampling factor.
- use_noise=True, # Enable noise input?
- # Activation function: 'relu', 'lrelu', etc.
- activation='lrelu',
- # Low-pass filter to apply when resampling activations.
- resample_filter=[1, 3, 3, 1],
- # Clamp the output of convolution layers to +-X, None = disable clamping.
- conv_clamp=None,
- channels_last=False, # Use channels_last format for the weights?
-                 square=False, # default is for rectangle images
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.w_dim = w_dim
- self.resolution = resolution
- self.up = up
- self.use_noise = use_noise
- self.activation = activation
- self.conv_clamp = conv_clamp
- self.register_buffer(
- 'resample_filter', upfirdn2d.setup_filter(resample_filter))
- self.padding = kernel_size // 2
- self.act_gain = bias_act.activation_funcs[activation].def_gain
- self.square = square
-
- self.affine = FullyConnectedLayer(w_dim, in_channels, bias_init=1)
- memory_format = torch.channels_last if channels_last else torch.contiguous_format
- self.weight = torch.nn.Parameter(torch.randn(
- [out_channels, in_channels, kernel_size, kernel_size]).to(memory_format=memory_format))
- if use_noise:
- if self.square:
- self.register_buffer(
- 'noise_const', torch.randn([resolution, resolution]))
- else:
- self.register_buffer('noise_const', torch.randn(
- [resolution, resolution // 2]))
- self.noise_strength = torch.nn.Parameter(torch.zeros([]))
- self.bias = torch.nn.Parameter(torch.zeros([out_channels]))
-
- def forward(self, x, w, noise_mode='random', fused_modconv=True, gain=1):
- assert noise_mode in ['random', 'const', 'none']
- in_resolution = self.resolution // self.up
- if self.square:
- misc.assert_shape(
- x, [None, self.weight.shape[1], in_resolution, in_resolution])
- else:
- misc.assert_shape(
- x, [None, self.weight.shape[1], in_resolution, in_resolution // 2])
- styles = self.affine(w)
-
- noise = None
- if self.use_noise and noise_mode == 'random':
- if self.square:
- noise = torch.randn(
- [x.shape[0], 1, self.resolution, self.resolution], device=x.device) * self.noise_strength
- else:
- noise = torch.randn(
- [x.shape[0], 1, self.resolution, self.resolution // 2], device=x.device) * self.noise_strength
- if self.use_noise and noise_mode == 'const':
- noise = self.noise_const * self.noise_strength
-
- flip_weight = (self.up == 1) # slightly faster
- x = modulated_conv2d(x=x, weight=self.weight, styles=styles, noise=noise, up=self.up,
- padding=self.padding, resample_filter=self.resample_filter, flip_weight=flip_weight, fused_modconv=fused_modconv)
-
- act_gain = self.act_gain * gain
- act_clamp = self.conv_clamp * gain if self.conv_clamp is not None else None
- x = bias_act.bias_act(x, self.bias.to(
- x.dtype), act=self.activation, gain=act_gain, clamp=act_clamp)
- return x
-
- def extra_repr(self):
- return ' '.join([
- f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}, w_dim={self.w_dim:d},',
- f'resolution={self.resolution:d}, up={self.up}, activation={self.activation:s}'])
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class ToRGBLayer(torch.nn.Module):
- def __init__(self, in_channels, out_channels, w_dim, kernel_size=1, conv_clamp=None, channels_last=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.w_dim = w_dim
- self.conv_clamp = conv_clamp
- self.affine = FullyConnectedLayer(w_dim, in_channels, bias_init=1)
- memory_format = torch.channels_last if channels_last else torch.contiguous_format
- self.weight = torch.nn.Parameter(torch.randn(
- [out_channels, in_channels, kernel_size, kernel_size]).to(memory_format=memory_format))
- self.bias = torch.nn.Parameter(torch.zeros([out_channels]))
- self.weight_gain = 1 / np.sqrt(in_channels * (kernel_size ** 2))
-
- def forward(self, x, w, fused_modconv=True):
- styles = self.affine(w) * self.weight_gain
- x = modulated_conv2d(x=x, weight=self.weight, styles=styles,
- demodulate=False, fused_modconv=fused_modconv)
- x = bias_act.bias_act(x, self.bias.to(x.dtype), clamp=self.conv_clamp)
- return x
-
- def extra_repr(self):
- return f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}, w_dim={self.w_dim:d}'
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class SynthesisBlock(torch.nn.Module):
- def __init__(self,
- # Number of input channels, 0 = first block.
- in_channels,
- # Number of output channels.
- out_channels,
- # Intermediate latent (W) dimensionality.
- w_dim,
- # Resolution of this block.
- resolution,
- # Number of output color channels.
- img_channels,
- is_last, # Is this the last block?
- # Architecture: 'orig', 'skip', 'resnet'.
- architecture='skip',
- # Low-pass filter to apply when resampling activations.
- resample_filter=[1, 3, 3, 1],
- # Clamp the output of convolution layers to +-X, None = disable clamping.
- conv_clamp=256,
- use_fp16=False, # Use FP16 for this block?
- fp16_channels_last=False, # Use channels-last memory format with FP16?
- square=False, # default is for rectangle images
- # Default value of fused_modconv. 'inference_only' = True for inference, False for training.
- fused_modconv_default=True,
- # Arguments for SynthesisLayer.
- **layer_kwargs,
- ):
- assert architecture in ['orig', 'skip', 'resnet']
- super().__init__()
- self.in_channels = in_channels
- self.w_dim = w_dim
- self.resolution = resolution
- self.img_channels = img_channels
- self.is_last = is_last
- self.architecture = architecture
- self.use_fp16 = use_fp16
- self.channels_last = (use_fp16 and fp16_channels_last)
- self.fused_modconv_default = fused_modconv_default
- self.register_buffer(
- 'resample_filter', upfirdn2d.setup_filter(resample_filter))
- self.num_conv = 0
- self.num_torgb = 0
- self.square = square
-
- if in_channels == 0:
- if self.square:
- self.const = torch.nn.Parameter(torch.randn(
- [out_channels, resolution, resolution]))
- else: # rectangle
- self.const = torch.nn.Parameter(torch.randn(
- [out_channels, resolution, resolution // 2]))
-
- if in_channels != 0:
- self.conv0 = SynthesisLayer(in_channels, out_channels, w_dim=w_dim, resolution=resolution, up=2,
- resample_filter=resample_filter, conv_clamp=conv_clamp, channels_last=self.channels_last, square=square, **layer_kwargs)
- self.num_conv += 1
-
- self.conv1 = SynthesisLayer(out_channels, out_channels, w_dim=w_dim, resolution=resolution,
- conv_clamp=conv_clamp, channels_last=self.channels_last, square=square, **layer_kwargs)
- self.num_conv += 1
-
- if is_last or architecture == 'skip':
- self.torgb = ToRGBLayer(out_channels, img_channels, w_dim=w_dim,
- conv_clamp=conv_clamp, channels_last=self.channels_last)
- self.num_torgb += 1
-
- if in_channels != 0 and architecture == 'resnet':
- self.skip = Conv2dLayer(in_channels, out_channels, kernel_size=1, bias=False, up=2,
- resample_filter=resample_filter, channels_last=self.channels_last)
-
- def forward(self, x, img, ws, force_fp32=False, fused_modconv=None, update_emas=False, **layer_kwargs):
- _ = update_emas # unused
- misc.assert_shape(
- ws, [None, self.num_conv + self.num_torgb, self.w_dim])
- w_iter = iter(ws.unbind(dim=1))
- if ws.device.type != 'cuda':
- force_fp32 = True
- dtype = torch.float16 if self.use_fp16 and not force_fp32 else torch.float32
- memory_format = torch.channels_last if self.channels_last and not force_fp32 else torch.contiguous_format
- if fused_modconv is None:
- fused_modconv = self.fused_modconv_default
- if fused_modconv == 'inference_only':
- fused_modconv = (not self.training)
-
- # Input.
- if self.in_channels == 0:
- x = self.const.to(dtype=dtype, memory_format=memory_format)
- x = x.unsqueeze(0).repeat([ws.shape[0], 1, 1, 1])
- else:
- if self.square:
- misc.assert_shape(
- x, [None, self.in_channels, self.resolution // 2, self.resolution // 2])
- else: # rectangle
- misc.assert_shape(
- x, [None, self.in_channels, self.resolution // 2, self.resolution // 4])
- x = x.to(dtype=dtype, memory_format=memory_format)
-
- # Main layers.
- if self.in_channels == 0:
- x = self.conv1(x, next(w_iter),
- fused_modconv=fused_modconv, **layer_kwargs)
- elif self.architecture == 'resnet':
- y = self.skip(x, gain=np.sqrt(0.5))
- x = self.conv0(x, next(w_iter),
- fused_modconv=fused_modconv, **layer_kwargs)
- x = self.conv1(x, next(w_iter), fused_modconv=fused_modconv,
- gain=np.sqrt(0.5), **layer_kwargs)
- x = y.add_(x)
- else:
- x = self.conv0(x, next(w_iter),
- fused_modconv=fused_modconv, **layer_kwargs)
- x = self.conv1(x, next(w_iter),
- fused_modconv=fused_modconv, **layer_kwargs)
-
- # ToRGB.
- if img is not None:
- if self.square:
- misc.assert_shape(
- img, [None, self.img_channels, self.resolution // 2, self.resolution // 2])
- else:
- misc.assert_shape(
- img, [None, self.img_channels, self.resolution // 2, self.resolution // 4])
- img = upfirdn2d.upsample2d(img, self.resample_filter)
- if self.is_last or self.architecture == 'skip':
- y = self.torgb(x, next(w_iter), fused_modconv=fused_modconv)
- y = y.to(dtype=torch.float32,
- memory_format=torch.contiguous_format)
- img = img.add_(y) if img is not None else y
-
- assert x.dtype == dtype
- assert img is None or img.dtype == torch.float32
- return x, img
-
- def extra_repr(self):
- return f'resolution={self.resolution:d}, architecture={self.architecture:s}'
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class SynthesisNetwork(torch.nn.Module):
- def __init__(self,
- # Intermediate latent (W) dimensionality.
- w_dim,
- img_resolution, # Output image resolution.
- img_channels, # Number of color channels.
- square,
- # Overall multiplier for the number of channels.
- channel_base=32768,
- # Maximum number of channels in any layer.
- channel_max=512,
- # Use FP16 for the N highest resolutions.
- num_fp16_res=4,
- **block_kwargs, # Arguments for SynthesisBlock.
- ):
- assert img_resolution >= 4 and img_resolution & (
- img_resolution - 1) == 0
- super().__init__()
- self.w_dim = w_dim
- self.img_resolution = img_resolution
- self.img_resolution_log2 = int(np.log2(img_resolution))
- self.img_channels = img_channels
- self.square = square
- self.num_fp16_res = num_fp16_res
- self.block_resolutions = [
- 2 ** i for i in range(2, self.img_resolution_log2 + 1)]
- channels_dict = {res: min(channel_base // res, channel_max)
- for res in self.block_resolutions}
- fp16_resolution = max(
- 2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8)
-
- self.num_ws = 0
- for res in self.block_resolutions:
- in_channels = channels_dict[res // 2] if res > 4 else 0
- out_channels = channels_dict[res]
- use_fp16 = (res >= fp16_resolution)
- is_last = (res == self.img_resolution)
- block = SynthesisBlock(in_channels, out_channels, w_dim=w_dim, resolution=res,
- img_channels=img_channels, is_last=is_last, use_fp16=use_fp16, square=square, **block_kwargs)
- self.num_ws += block.num_conv
- if is_last:
- self.num_ws += block.num_torgb
- setattr(self, f'b{res}', block)
-
- def forward(self, ws, **block_kwargs):
- block_ws = []
- with torch.autograd.profiler.record_function('split_ws'):
- misc.assert_shape(ws, [None, self.num_ws, self.w_dim])
- ws = ws.to(torch.float32)
- w_idx = 0
- for res in self.block_resolutions:
- block = getattr(self, f'b{res}')
- block_ws.append(
- ws.narrow(1, w_idx, block.num_conv + block.num_torgb))
- w_idx += block.num_conv
-
- x = img = None
- for res, cur_ws in zip(self.block_resolutions, block_ws):
- block = getattr(self, f'b{res}')
- x, img = block(x, img, cur_ws, **block_kwargs)
- return img
-
- def extra_repr(self):
- return ' '.join([
- f'w_dim={self.w_dim:d}, num_ws={self.num_ws:d},',
- f'img_resolution={self.img_resolution:d}, img_channels={self.img_channels:d},',
- f'num_fp16_res={self.num_fp16_res:d}'])
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class Generator(torch.nn.Module):
- def __init__(self,
- z_dim, # Input latent (Z) dimensionality.
- # Conditioning label (C) dimensionality.
- c_dim,
- # Intermediate latent (W) dimensionality.
- w_dim,
- square,
- img_resolution, # Output resolution.
- img_channels, # Number of output color channels.
- mapping_kwargs={}, # Arguments for MappingNetwork.
- **synthesis_kwargs, # Arguments for SynthesisNetwork.
- ):
- super().__init__()
- self.z_dim = z_dim
- self.c_dim = c_dim
- self.w_dim = w_dim
- self.square = square
- self.img_resolution = img_resolution
- self.img_channels = img_channels
- self.synthesis = SynthesisNetwork(
- w_dim=w_dim, img_resolution=img_resolution, img_channels=img_channels, square=square, **synthesis_kwargs)
- self.num_ws = self.synthesis.num_ws
- self.mapping = MappingNetwork(
- z_dim=z_dim, c_dim=c_dim, w_dim=w_dim, num_ws=self.num_ws, **mapping_kwargs)
-
- def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False, **synthesis_kwargs):
- ws = self.mapping(z, c, truncation_psi=truncation_psi,
- truncation_cutoff=truncation_cutoff, update_emas=update_emas)
- img = self.synthesis(ws, update_emas=update_emas, **synthesis_kwargs)
- return img
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class DiscriminatorBlock(torch.nn.Module):
- def __init__(self,
- # Number of input channels, 0 = first block.
- in_channels,
- # Number of intermediate channels.
- tmp_channels,
- # Number of output channels.
- out_channels,
- # Resolution of this block.
- resolution,
- # Number of input color channels.
- img_channels,
- # Index of the first layer.
- first_layer_idx,
- # Architecture: 'orig', 'skip', 'resnet'.
- architecture='resnet',
- # Activation function: 'relu', 'lrelu', etc.
- activation='lrelu',
- # Low-pass filter to apply when resampling activations.
- resample_filter=[1, 3, 3, 1],
- # Clamp the output of convolution layers to +-X, None = disable clamping.
- conv_clamp=None,
- use_fp16=False, # Use FP16 for this block?
- fp16_channels_last=False, # Use channels-last memory format with FP16?
- # Freeze-D: Number of layers to freeze.
- freeze_layers=0,
- square=False,
- ):
- assert in_channels in [0, tmp_channels]
- assert architecture in ['orig', 'skip', 'resnet']
- super().__init__()
- self.in_channels = in_channels
- self.resolution = resolution
- self.img_channels = img_channels
- self.first_layer_idx = first_layer_idx
- self.architecture = architecture
- self.use_fp16 = use_fp16
- self.channels_last = (use_fp16 and fp16_channels_last)
- self.register_buffer(
- 'resample_filter', upfirdn2d.setup_filter(resample_filter))
- self.square = square
-
- self.num_layers = 0
-
- def trainable_gen():
- while True:
- layer_idx = self.first_layer_idx + self.num_layers
- trainable = (layer_idx >= freeze_layers)
- self.num_layers += 1
- yield trainable
- trainable_iter = trainable_gen()
-
- if in_channels == 0 or architecture == 'skip':
- self.fromrgb = Conv2dLayer(img_channels, tmp_channels, kernel_size=1, activation=activation,
- trainable=next(trainable_iter), conv_clamp=conv_clamp, channels_last=self.channels_last)
-
- self.conv0 = Conv2dLayer(tmp_channels, tmp_channels, kernel_size=3, activation=activation,
- trainable=next(trainable_iter), conv_clamp=conv_clamp, channels_last=self.channels_last)
-
- self.conv1 = Conv2dLayer(tmp_channels, out_channels, kernel_size=3, activation=activation, down=2,
- trainable=next(trainable_iter), resample_filter=resample_filter, conv_clamp=conv_clamp, channels_last=self.channels_last)
-
- if architecture == 'resnet':
- self.skip = Conv2dLayer(tmp_channels, out_channels, kernel_size=1, bias=False, down=2,
- trainable=next(trainable_iter), resample_filter=resample_filter, channels_last=self.channels_last)
-
- def forward(self, x, img, force_fp32=False):
- if (x if x is not None else img).device.type != 'cuda':
- force_fp32 = True
- dtype = torch.float16 if self.use_fp16 and not force_fp32 else torch.float32
- memory_format = torch.channels_last if self.channels_last and not force_fp32 else torch.contiguous_format
-
- # Input.
- if x is not None:
- if self.square:
- misc.assert_shape(
- x, [None, self.in_channels, self.resolution, self.resolution])
- else:
- misc.assert_shape(
- x, [None, self.in_channels, self.resolution, self.resolution // 2])
- x = x.to(dtype=dtype, memory_format=memory_format)
-
- # FromRGB.
- if self.in_channels == 0 or self.architecture == 'skip':
- if self.square:
- misc.assert_shape(
- img, [None, self.img_channels, self.resolution, self.resolution])
- else:
- misc.assert_shape(
- img, [None, self.img_channels, self.resolution, self.resolution // 2])
- img = img.to(dtype=dtype, memory_format=memory_format)
- y = self.fromrgb(img)
- x = x + y if x is not None else y
- img = upfirdn2d.downsample2d(
- img, self.resample_filter) if self.architecture == 'skip' else None
-
- # Main layers.
- if self.architecture == 'resnet':
- y = self.skip(x, gain=np.sqrt(0.5))
- x = self.conv0(x)
- x = self.conv1(x, gain=np.sqrt(0.5))
- x = y.add_(x)
- else:
- x = self.conv0(x)
- x = self.conv1(x)
-
- assert x.dtype == dtype
- return x, img
-
- def extra_repr(self):
- return f'resolution={self.resolution:d}, architecture={self.architecture:s}'
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class MinibatchStdLayer(torch.nn.Module):
- def __init__(self, group_size, num_channels=1):
- super().__init__()
- self.group_size = group_size
- self.num_channels = num_channels
-
- def forward(self, x):
- N, C, H, W = x.shape
- with misc.suppress_tracer_warnings(): # as_tensor results are registered as constants
- G = torch.min(torch.as_tensor(self.group_size), torch.as_tensor(
- N)) if self.group_size is not None else N
- F = self.num_channels
- c = C // F
-
- # [GnFcHW] Split minibatch N into n groups of size G, and channels C into F groups of size c.
- y = x.reshape(G, -1, F, c, H, W)
- # [GnFcHW] Subtract mean over group.
- y = y - y.mean(dim=0)
- # [nFcHW] Calc variance over group.
- y = y.square().mean(dim=0)
- y = (y + 1e-8).sqrt() # [nFcHW] Calc stddev over group.
- # [nF] Take average over channels and pixels.
- y = y.mean(dim=[2, 3, 4])
- y = y.reshape(-1, F, 1, 1) # [nF11] Add missing dimensions.
- # [NFHW] Replicate over group and pixels.
- y = y.repeat(G, 1, H, W)
- # [NCHW] Append to input as new channels.
- x = torch.cat([x, y], dim=1)
- return x
-
- def extra_repr(self):
- return f'group_size={self.group_size}, num_channels={self.num_channels:d}'
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class DiscriminatorEpilogue(torch.nn.Module):
- def __init__(self,
- in_channels, # Number of input channels.
- # Dimensionality of mapped conditioning label, 0 = no label.
- cmap_dim,
- resolution, # Resolution of this block.
- # Number of input color channels.
- img_channels,
- # Architecture: 'orig', 'skip', 'resnet'.
- architecture='resnet',
- # Group size for the minibatch standard deviation layer, None = entire minibatch.
- mbstd_group_size=4,
- # Number of features for the minibatch standard deviation layer, 0 = disable.
- mbstd_num_channels=1,
- # Activation function: 'relu', 'lrelu', etc.
- activation='lrelu',
- # Clamp the output of convolution layers to +-X, None = disable clamping.
- conv_clamp=None,
- square=False,
- ):
- assert architecture in ['orig', 'skip', 'resnet']
- super().__init__()
- self.in_channels = in_channels
- self.cmap_dim = cmap_dim
- self.resolution = resolution
- self.img_channels = img_channels
- self.architecture = architecture
- self.square = square
-
- if architecture == 'skip':
- self.fromrgb = Conv2dLayer(
- img_channels, in_channels, kernel_size=1, activation=activation)
- self.mbstd = MinibatchStdLayer(
- group_size=mbstd_group_size, num_channels=mbstd_num_channels) if mbstd_num_channels > 0 else None
- self.conv = Conv2dLayer(in_channels + mbstd_num_channels, in_channels,
- kernel_size=3, activation=activation, conv_clamp=conv_clamp)
-
- if self.square:
- self.fc = FullyConnectedLayer(
- in_channels * (resolution ** 2), in_channels, activation=activation)
- else:
- self.fc = FullyConnectedLayer(
- in_channels * (resolution ** 2 // 2), in_channels, activation=activation)
-
- self.out = FullyConnectedLayer(
- in_channels, 1 if cmap_dim == 0 else cmap_dim)
-
- def forward(self, x, img, cmap, force_fp32=False):
- if self.square:
- misc.assert_shape(x, [None, self.in_channels,
- self.resolution, self.resolution])
- else:
- misc.assert_shape(
- x, [None, self.in_channels, self.resolution, self.resolution // 2]) # [NCHW]
-
- _ = force_fp32 # unused
- dtype = torch.float32
- memory_format = torch.contiguous_format
-
- # FromRGB.
- x = x.to(dtype=dtype, memory_format=memory_format)
- if self.architecture == 'skip':
- if self.square:
- misc.assert_shape(
- img, [None, self.img_channels, self.resolution, self.resolution])
- else:
- misc.assert_shape(
- img, [None, self.img_channels, self.resolution, self.resolution // 2])
-
- img = img.to(dtype=dtype, memory_format=memory_format)
- x = x + self.fromrgb(img)
-
- # Main layers.
- if self.mbstd is not None:
- x = self.mbstd(x)
- x = self.conv(x)
- x = self.fc(x.flatten(1))
- x = self.out(x)
-
- # Conditioning.
- if self.cmap_dim > 0:
- misc.assert_shape(cmap, [None, self.cmap_dim])
- x = (x * cmap).sum(dim=1, keepdim=True) * \
- (1 / np.sqrt(self.cmap_dim))
-
- assert x.dtype == dtype
- return x
-
- def extra_repr(self):
- return f'resolution={self.resolution:d}, architecture={self.architecture:s}'
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class Discriminator(torch.nn.Module):
- def __init__(self,
- # Conditioning label (C) dimensionality.
- c_dim,
- img_resolution, # Input resolution.
- # Number of input color channels.
- img_channels,
- # Architecture: 'orig', 'skip', 'resnet'.
- architecture='resnet',
- # Overall multiplier for the number of channels.
- channel_base=32768,
- # Maximum number of channels in any layer.
- channel_max=512,
- # Use FP16 for the N highest resolutions.
- num_fp16_res=4,
- # Clamp the output of convolution layers to +-X, None = disable clamping.
- conv_clamp=256,
- # Dimensionality of mapped conditioning label, None = default.
- cmap_dim=None,
- square=False, # default for rectangle images
- block_kwargs={}, # Arguments for DiscriminatorBlock.
- mapping_kwargs={}, # Arguments for MappingNetwork.
- # Arguments for DiscriminatorEpilogue.
- epilogue_kwargs={},
- ):
- super().__init__()
- self.c_dim = c_dim
- self.img_resolution = img_resolution
- self.img_resolution_log2 = int(np.log2(img_resolution))
- self.img_channels = img_channels
- self.square = square
- self.block_resolutions = [
- 2 ** i for i in range(self.img_resolution_log2, 2, -1)]
- channels_dict = {res: min(channel_base // res, channel_max)
- for res in self.block_resolutions + [4]}
- fp16_resolution = max(
- 2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8)
-
- if cmap_dim is None:
- cmap_dim = channels_dict[4]
- if c_dim == 0:
- cmap_dim = 0
-
- common_kwargs = dict(img_channels=img_channels,
- architecture=architecture, conv_clamp=conv_clamp)
- cur_layer_idx = 0
- for res in self.block_resolutions:
- in_channels = channels_dict[res] if res < img_resolution else 0
- tmp_channels = channels_dict[res]
- out_channels = channels_dict[res // 2]
- use_fp16 = (res >= fp16_resolution)
- block = DiscriminatorBlock(in_channels, tmp_channels, out_channels, resolution=res,
- first_layer_idx=cur_layer_idx, use_fp16=use_fp16, square=square, **block_kwargs, **common_kwargs)
- setattr(self, f'b{res}', block)
- cur_layer_idx += block.num_layers
- if c_dim > 0:
- self.mapping = MappingNetwork(
- z_dim=0, c_dim=c_dim, w_dim=cmap_dim, num_ws=None, w_avg_beta=None, **mapping_kwargs)
- self.b4 = DiscriminatorEpilogue(
- channels_dict[4], cmap_dim=cmap_dim, resolution=4, square=square, **epilogue_kwargs, **common_kwargs)
-
- def forward(self, img, c, update_emas=False, **block_kwargs):
- _ = update_emas # unused
- x = None
- for res in self.block_resolutions:
- block = getattr(self, f'b{res}')
- x, img = block(x, img, **block_kwargs)
-
- cmap = None
- if self.c_dim > 0:
- cmap = self.mapping(None, c)
- x = self.b4(x, img, cmap)
- return x
-
- def extra_repr(self):
- return f'c_dim={self.c_dim:d}, img_resolution={self.img_resolution:d}, img_channels={self.img_channels:d}'
-
-# ----------------------------------------------------------------------------
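The comments in modulated_conv2d above outline the modulate/convolve/demodulate sequence and its fused execution as a grouped convolution. A self-contained reference sketch of that fused path in plain PyTorch, leaving out up/down-sampling, noise, and the FP16 pre-normalization; the function name and the example shapes are illustrative:

```python
import torch
import torch.nn.functional as F

def modulated_conv2d_reference(x, weight, styles, demodulate=True, padding=1, eps=1e-8):
    # x: [N, I, H, W], weight: [O, I, kh, kw], styles: [N, I]
    batch, in_ch = x.shape[0], x.shape[1]
    out_ch, _, kh, kw = weight.shape

    # Modulate: scale the weight's input channels per sample -> [N, O, I, kh, kw].
    w = weight.unsqueeze(0) * styles.reshape(batch, 1, in_ch, 1, 1)

    # Demodulate: rescale each per-sample output filter to unit L2 norm.
    if demodulate:
        dcoefs = (w.square().sum(dim=[2, 3, 4]) + eps).rsqrt()  # [N, O]
        w = w * dcoefs.reshape(batch, out_ch, 1, 1, 1)

    # Fused execution: fold the batch into the channel axis and use one group per sample.
    x = x.reshape(1, batch * in_ch, *x.shape[2:])
    w = w.reshape(batch * out_ch, in_ch, kh, kw)
    x = F.conv2d(x, w, padding=padding, groups=batch)
    return x.reshape(batch, out_ch, *x.shape[2:])

x = torch.randn(2, 8, 32, 32)
weight = torch.randn(16, 8, 3, 3)
styles = torch.randn(2, 8)
print(modulated_conv2d_reference(x, weight, styles).shape)  # torch.Size([2, 16, 32, 32])
```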
diff --git a/spaces/DragGan/DragGan/stylegan_human/torch_utils/op_edit/fused_act.py b/spaces/DragGan/DragGan/stylegan_human/torch_utils/op_edit/fused_act.py
deleted file mode 100644
index 138f090bc67b94b363c346cbf405990f1bbdff68..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/stylegan_human/torch_utils/op_edit/fused_act.py
+++ /dev/null
@@ -1,99 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-import os
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-from torch.autograd import Function
-from torch.utils.cpp_extension import load
-
-
-module_path = os.path.dirname(__file__)
-fused = load(
- "fused",
- sources=[
- os.path.join(module_path, "fused_bias_act.cpp"),
- os.path.join(module_path, "fused_bias_act_kernel.cu"),
- ],
-)
-
-
-class FusedLeakyReLUFunctionBackward(Function):
- @staticmethod
- def forward(ctx, grad_output, out, negative_slope, scale):
- ctx.save_for_backward(out)
- ctx.negative_slope = negative_slope
- ctx.scale = scale
-
- empty = grad_output.new_empty(0)
-
- grad_input = fused.fused_bias_act(
- grad_output, empty, out, 3, 1, negative_slope, scale
- )
-
- dim = [0]
-
- if grad_input.ndim > 2:
- dim += list(range(2, grad_input.ndim))
-
- grad_bias = grad_input.sum(dim).detach()
-
- return grad_input, grad_bias
-
- @staticmethod
- def backward(ctx, gradgrad_input, gradgrad_bias):
- (out,) = ctx.saved_tensors
- gradgrad_out = fused.fused_bias_act(
- gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale
- )
-
- return gradgrad_out, None, None, None
-
-
-class FusedLeakyReLUFunction(Function):
- @staticmethod
- def forward(ctx, input, bias, negative_slope, scale):
- empty = input.new_empty(0)
- out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale)
- ctx.save_for_backward(out)
- ctx.negative_slope = negative_slope
- ctx.scale = scale
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- (out,) = ctx.saved_tensors
-
- grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply(
- grad_output, out, ctx.negative_slope, ctx.scale
- )
-
- return grad_input, grad_bias, None, None
-
-
-class FusedLeakyReLU(nn.Module):
- def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5):
- super().__init__()
-
- self.bias = nn.Parameter(torch.zeros(channel))
- self.negative_slope = negative_slope
- self.scale = scale
-
- def forward(self, input):
- return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale)
-
-
-def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5):
- if input.device.type == "cpu":
- rest_dim = [1] * (input.ndim - bias.ndim - 1)
- return (
- F.leaky_relu(
-                input + bias.view(1, bias.shape[0], *rest_dim), negative_slope=negative_slope
- )
- * scale
- )
-
- else:
- return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale)
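The CPU branch of fused_leaky_relu above already spells out what the custom CUDA kernel computes in the forward pass: add a per-channel bias, apply leaky ReLU, then rescale to preserve activation magnitude. A standalone sketch of just that forward computation (the kernel additionally supplies matching first- and second-order gradients, omitted here); the function name is illustrative:

```python
import torch
import torch.nn.functional as F

def fused_leaky_relu_reference(x, bias, negative_slope=0.2, scale=2 ** 0.5):
    # Broadcast the per-channel bias over the trailing spatial dimensions.
    rest_dim = [1] * (x.ndim - bias.ndim - 1)
    biased = x + bias.view(1, bias.shape[0], *rest_dim)
    return F.leaky_relu(biased, negative_slope=negative_slope) * scale

x = torch.randn(4, 8, 16, 16)
bias = torch.zeros(8)
print(fused_leaky_relu_reference(x, bias).shape)  # torch.Size([4, 8, 16, 16])
```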
diff --git a/spaces/DreamSunny/stable-diffusion-webui-cpu/app.py b/spaces/DreamSunny/stable-diffusion-webui-cpu/app.py
deleted file mode 100644
index 86d44c530a07a58d5c32663b9c07ecd6310b742c..0000000000000000000000000000000000000000
--- a/spaces/DreamSunny/stable-diffusion-webui-cpu/app.py
+++ /dev/null
@@ -1,165 +0,0 @@
-"""
-Stable Diffusion Webui Version 1.6
-https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.6.0
-
-"""
-commit_id=r"5ef669de080814067961f28357256e8fe27544f4"  # v1.6.0, matching the release noted above
-import os
-from sys import executable
-import subprocess
-import pathlib
-import gc
-
-def Gitclone(URI:str,ClonePath:pathlib.Path ) -> int :
- if pathlib.Path.exists(ClonePath):
- return 0
- for z in range(10):
- i=subprocess.run([r"git",r"clone",str(URI),str(ClonePath)])
- if(i.returncode == 0 ):
- del i
- return 0
- else :
- del i
- raise Exception(str.format("clone \'{0}\' failed",URI))
-
-
-def DownLoad(URI:str,DownloadPath:pathlib.Path,DownLoadFileName:str ) -> int:
- if (DownloadPath / DownLoadFileName).is_file(): return 0
- for z in range(10):
-        i=subprocess.run([r"aria2c",r"-c",r"-x" ,r"16", r"-s",r"16", r"-k" ,r"1M" ,r"-m",r"0",r"--enable-mmap=false",r"--console-log-level=error",r"-d",str(DownloadPath),r"-o",DownLoadFileName,URI])
- if(i.returncode == 0 ):
- del i
- gc.collect()
- return 0
- else :
- del i
- raise Exception(str.format("download \'{0}\' failed",URI))
-
-user_home =pathlib.Path.home().resolve()
-os.chdir(str(user_home))
-#clone stable-diffusion-webui repo
-print("cloning stable-diffusion-webui repo")
-Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui.git",user_home / r"stable-diffusion-webui")
-os.chdir(str(user_home / r"stable-diffusion-webui"))
-os.system("git reset --hard "+commit_id)
-#install extensions
-print("installing extensions")
-Gitclone(r"https://huggingface.co/embed/negative",user_home / r"stable-diffusion-webui" / r"embeddings" / r"negative")
-Gitclone(r"https://huggingface.co/embed/lora",user_home / r"stable-diffusion-webui" / r"models" / r"Lora" / r"positive")
-DownLoad(r"https://huggingface.co/embed/upscale/resolve/main/4x-UltraSharp.pth",user_home / r"stable-diffusion-webui" / r"models" / r"ESRGAN" ,r"4x-UltraSharp.pth")
-while (True):
- i=subprocess.run([r"wget",r"https://raw.githubusercontent.com/camenduru/stable-diffusion-webui-scripts/main/run_n_times.py",r"-O",str(user_home / r"stable-diffusion-webui" / r"scripts" / r"run_n_times.py")])
- if(i.returncode == 0 ):
- del i
- gc.collect()
- break
- else :
- del i
-Gitclone(r"https://github.com/deforum-art/deforum-for-automatic1111-webui",user_home / r"stable-diffusion-webui" / r"extensions" / r"deforum-for-automatic1111-webui" )
-Gitclone(r"https://github.com/AlUlkesh/stable-diffusion-webui-images-browser",user_home / r"stable-diffusion-webui" / r"extensions"/ r"stable-diffusion-webui-images-browser")
-Gitclone(r"https://github.com/camenduru/stable-diffusion-webui-huggingface",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-huggingface")
-Gitclone(r"https://github.com/camenduru/sd-civitai-browser",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-civitai-browser")
-Gitclone(r"https://github.com/kohya-ss/sd-webui-additional-networks",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks")
-Gitclone(r"https://github.com/Mikubill/sd-webui-controlnet",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-controlnet")
-Gitclone(r"https://github.com/fkunn1326/openpose-editor",user_home / r"stable-diffusion-webui" / r"extensions" / r"openpose-editor")
-Gitclone(r"https://github.com/jexom/sd-webui-depth-lib",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-depth-lib")
-Gitclone(r"https://github.com/hnmr293/posex",user_home / r"stable-diffusion-webui" / r"extensions" / r"posex")
-Gitclone(r"https://github.com/nonnonstop/sd-webui-3d-open-pose-editor",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-3d-open-pose-editor")
-# For the Chinese (zh_CN) localization, uncomment the next line
-#Gitclone(r"https://github.com/dtlnor/stable-diffusion-webui-localization-zh_CN.git",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-localization-zh_CN")
-Gitclone(r"https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git" , user_home / r"stable-diffusion-webui" / r"extensions" / r"a1111-sd-webui-tagcomplete")
-Gitclone(r"https://github.com/camenduru/sd-webui-tunnels",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-tunnels")
-Gitclone(r"https://github.com/etherealxx/batchlinks-webui",user_home / r"stable-diffusion-webui" / r"extensions" / r"batchlinks-webui")
-Gitclone(r"https://github.com/catppuccin/stable-diffusion-webui",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-catppuccin")
-Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui-rembg",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-rembg")
-Gitclone(r"https://github.com/ashen-sensored/stable-diffusion-webui-two-shot",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-two-shot")
-Gitclone(r"https://github.com/camenduru/sd_webui_stealth_pnginfo",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd_webui_stealth_pnginfo")
-os.chdir(user_home / r"stable-diffusion-webui")
-#download ControlNet models
-print("extensions dolwnload done .\ndownloading ControlNet models")
-dList =[r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11e_sd15_ip2p_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11e_sd15_shuffle_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_canny_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11f1p_sd15_depth_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_inpaint_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_lineart_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_mlsd_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_normalbae_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_openpose_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_scribble_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_seg_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_softedge_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15s2_lineart_anime_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11f1e_sd15_tile_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11e_sd15_ip2p_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11e_sd15_shuffle_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_canny_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11f1p_sd15_depth_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_inpaint_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_lineart_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_mlsd_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_normalbae_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_openpose_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_scribble_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_seg_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_softedge_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15s2_lineart_anime_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11f1e_sd15_tile_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_style_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_sketch_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_seg_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_openpose_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_keypose_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_canny_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_canny_sd15v2.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd15v2.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_sketch_sd15v2.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_zoedepth_sd15v1.pth"]
-for url in dList:
-  DownLoad(url, user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-controlnet" / r"models", pathlib.Path(url).name)
-del dList
-#download model
-#you can change model download address here
-print("ControlNet models download done.\ndownloading model")
-#Stable Diffusion Checkpoint Model
-#anything version4.5
-DownLoad(r"https://huggingface.co/ckpt/anything-v4.0/resolve/main/anything-v4.5-pruned.ckpt",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion",r"anything-v4.5-pruned.ckpt")
-DownLoad(r"https://huggingface.co/ckpt/anything-v4.0/resolve/main/anything-v4.0.vae.pt",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion",r"anything-v4.0.vae.pt")
-#Counterfeit-V3.0
-DownLoad(r"https://huggingface.co/gsdf/Counterfeit-V3.0/resolve/main/Counterfeit-V3.0_fp16.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion",r"Counterfeit-V3.0_fp16.safetensors")
-#AbyssOrangeMix2 sfw
-DownLoad(r"https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix2/AbyssOrangeMix2_sfw.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion",r"AbyssOrangeMix2_sfw.safetensors")
-DownLoad(r"https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/VAEs/orangemix.vae.pt",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion",r"orangemix.vae.pt")
-#MeinaPastelV5
-DownLoad(r"https://huggingface.co/Meina/MeinaPastel/resolve/main/MeinaPastelV5%20-%20Baked%20VAE.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion",r"MeinaPastelV5_BakedVAE.safetensors")
-DownLoad(r"https://huggingface.co/Meina/MeinaPastel/resolve/main/MeinaPastelV5%20-%20Without%20VAE.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion",r"MeinaPastelV5_WithoutVAE.safetensors")
-
-#Lora Model
-#Better Light
-DownLoad(r"https://civitai.com/api/download/models/39885",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora",r"Better_light.safetensors")
-DownLoad(r"https://civitai.com/api/download/models/39885",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"Better_light.safetensors")
-#LAS
-DownLoad(r"https://civitai.com/api/download/models/21065",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora",r"LAS.safetensors")
-DownLoad(r"https://civitai.com/api/download/models/21065",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"LAS.safetensors")
-#Backlighting
-DownLoad(r"https://civitai.com/api/download/models/39164",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora",r"backlighting.safetensors")
-DownLoad(r"https://civitai.com/api/download/models/39164",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"backlighting.safetensors")
-#GFPGAN Model
-#detection Resnet50
-DownLoad(r"https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth",user_home / r"stable-diffusion-webui"/r"models"/r"GFPGAN",r"detection_Resnet50_Final.pth")
-#parsing_parsenet
-DownLoad(r"https://github.com/xinntao/facexlib/releases/download/v0.2.2/parsing_parsenet.pth",user_home / r"stable-diffusion-webui"/r"models"/r"GFPGAN",r"parsing_parsenet.pth")
-#GFPGANv1.4
-DownLoad(r"https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth",user_home / r"stable-diffusion-webui"/r"models"/r"GFPGAN",r"GFPGANv1.4.pth")
-#start Stable Diffusion Webui
-print("Done\nStarting Webui...")
-os.chdir(user_home / r"stable-diffusion-webui")
-gc.collect()
-while True:
- ret=subprocess.run([executable ,user_home / r"stable-diffusion-webui" / r"launch.py",r"--precision",r"full",r"--no-half",r"--no-half-vae",r"--enable-insecure-extension-access",r"--medvram",r"--skip-torch-cuda-test",r"--enable-console-prompts",r"--ui-settings-file="+str(pathlib.Path(__file__).parent /r"config.json")])
- if(ret.returncode == 0 ):
- del ret
- gc.collect()
- else :
- del ret
-del os, user_home, executable, subprocess
\ No newline at end of file
diff --git a/spaces/Egrt/MaskGAN/models/resnest/ablation.py b/spaces/Egrt/MaskGAN/models/resnest/ablation.py
deleted file mode 100644
index 00743ccdcf8c909b262c37476488c92ba947fde5..0000000000000000000000000000000000000000
--- a/spaces/Egrt/MaskGAN/models/resnest/ablation.py
+++ /dev/null
@@ -1,106 +0,0 @@
-##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-## Created by: Hang Zhang
-## Email: zhanghang0704@gmail.com
-## Copyright (c) 2020
-##
-## LICENSE file in the root directory of this source tree
-##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-"""ResNeSt ablation study models"""
-
-import torch
-from .resnet import ResNet, Bottleneck
-
-__all__ = ['resnest50_fast_1s1x64d', 'resnest50_fast_2s1x64d', 'resnest50_fast_4s1x64d',
- 'resnest50_fast_1s2x40d', 'resnest50_fast_2s2x40d', 'resnest50_fast_4s2x40d',
- 'resnest50_fast_1s4x24d']
-
-_url_format = 'https://s3.us-west-1.wasabisys.com/resnest/torch/{}-{}.pth'
-
-_model_sha256 = {name: checksum for checksum, name in [
- ('d8fbf808', 'resnest50_fast_1s1x64d'),
- ('44938639', 'resnest50_fast_2s1x64d'),
- ('f74f3fc3', 'resnest50_fast_4s1x64d'),
- ('32830b84', 'resnest50_fast_1s2x40d'),
- ('9d126481', 'resnest50_fast_2s2x40d'),
- ('41d14ed0', 'resnest50_fast_4s2x40d'),
- ('d4a4f76f', 'resnest50_fast_1s4x24d'),
- ]}
-
-def short_hash(name):
- if name not in _model_sha256:
- raise ValueError('Pretrained model for {name} is not available.'.format(name=name))
- return _model_sha256[name][:8]
-
-resnest_model_urls = {name: _url_format.format(name, short_hash(name)) for
- name in _model_sha256.keys()
-}
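For clarity, the comprehension above simply pairs each model name with the first eight characters of its checksum inside `_url_format`; a small sketch of the URL it yields for one entry, derivable directly from the tables above (assumes the definitions above are in scope):

```python
print(resnest_model_urls["resnest50_fast_1s1x64d"])
# -> https://s3.us-west-1.wasabisys.com/resnest/torch/resnest50_fast_1s1x64d-d8fbf808.pth
```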
-
-def resnest50_fast_1s1x64d(pretrained=False, root='~/.encoding/models', **kwargs):
- model = ResNet(Bottleneck, [3, 4, 6, 3],
- radix=1, groups=1, bottleneck_width=64,
- deep_stem=True, stem_width=32, avg_down=True,
- avd=True, avd_first=True, **kwargs)
- if pretrained:
- model.load_state_dict(torch.hub.load_state_dict_from_url(
- resnest_model_urls['resnest50_fast_1s1x64d'], progress=True, check_hash=True))
- return model
-
-def resnest50_fast_2s1x64d(pretrained=False, root='~/.encoding/models', **kwargs):
- model = ResNet(Bottleneck, [3, 4, 6, 3],
- radix=2, groups=1, bottleneck_width=64,
- deep_stem=True, stem_width=32, avg_down=True,
- avd=True, avd_first=True, **kwargs)
- if pretrained:
- model.load_state_dict(torch.hub.load_state_dict_from_url(
- resnest_model_urls['resnest50_fast_2s1x64d'], progress=True, check_hash=True))
- return model
-
-def resnest50_fast_4s1x64d(pretrained=False, root='~/.encoding/models', **kwargs):
- model = ResNet(Bottleneck, [3, 4, 6, 3],
- radix=4, groups=1, bottleneck_width=64,
- deep_stem=True, stem_width=32, avg_down=True,
- avd=True, avd_first=True, **kwargs)
- if pretrained:
- model.load_state_dict(torch.hub.load_state_dict_from_url(
- resnest_model_urls['resnest50_fast_4s1x64d'], progress=True, check_hash=True))
- return model
-
-def resnest50_fast_1s2x40d(pretrained=False, root='~/.encoding/models', **kwargs):
- model = ResNet(Bottleneck, [3, 4, 6, 3],
- radix=1, groups=2, bottleneck_width=40,
- deep_stem=True, stem_width=32, avg_down=True,
- avd=True, avd_first=True, **kwargs)
- if pretrained:
- model.load_state_dict(torch.hub.load_state_dict_from_url(
- resnest_model_urls['resnest50_fast_1s2x40d'], progress=True, check_hash=True))
- return model
-
-def resnest50_fast_2s2x40d(pretrained=False, root='~/.encoding/models', **kwargs):
- model = ResNet(Bottleneck, [3, 4, 6, 3],
- radix=2, groups=2, bottleneck_width=40,
- deep_stem=True, stem_width=32, avg_down=True,
- avd=True, avd_first=True, **kwargs)
- if pretrained:
- model.load_state_dict(torch.hub.load_state_dict_from_url(
- resnest_model_urls['resnest50_fast_2s2x40d'], progress=True, check_hash=True))
- return model
-
-def resnest50_fast_4s2x40d(pretrained=False, root='~/.encoding/models', **kwargs):
- model = ResNet(Bottleneck, [3, 4, 6, 3],
- radix=4, groups=2, bottleneck_width=40,
- deep_stem=True, stem_width=32, avg_down=True,
- avd=True, avd_first=True, **kwargs)
- if pretrained:
- model.load_state_dict(torch.hub.load_state_dict_from_url(
- resnest_model_urls['resnest50_fast_4s2x40d'], progress=True, check_hash=True))
- return model
-
-def resnest50_fast_1s4x24d(pretrained=False, root='~/.encoding/models', **kwargs):
- model = ResNet(Bottleneck, [3, 4, 6, 3],
- radix=1, groups=4, bottleneck_width=24,
- deep_stem=True, stem_width=32, avg_down=True,
- avd=True, avd_first=True, **kwargs)
- if pretrained:
- model.load_state_dict(torch.hub.load_state_dict_from_url(
- resnest_model_urls['resnest50_fast_1s4x24d'], progress=True, check_hash=True))
- return model
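A minimal usage sketch for these constructors (a sketch only, assuming the `ResNet` base class from the sibling `resnet` module keeps its default 1000-class classification head):

```python
import torch

# Build the 1s1x64d ablation variant without fetching pretrained weights.
model = resnest50_fast_1s1x64d(pretrained=False)
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # expected torch.Size([1, 1000]) under the assumption above
```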
diff --git a/spaces/Epoching/GLIDE_Inpaint/glide_text2im/tokenizer/simple_tokenizer.py b/spaces/Epoching/GLIDE_Inpaint/glide_text2im/tokenizer/simple_tokenizer.py
deleted file mode 100644
index c84cc8fb3adff99225d3e3a75b2a3d81564adcef..0000000000000000000000000000000000000000
--- a/spaces/Epoching/GLIDE_Inpaint/glide_text2im/tokenizer/simple_tokenizer.py
+++ /dev/null
@@ -1,163 +0,0 @@
-"""
-Copied from: https://github.com/openai/CLIP/blob/573315e83f07b53a61ff5098757e8fc885f1703e/clip/simple_tokenizer.py
-"""
-
-import gzip
-import html
-import os
-from functools import lru_cache
-from typing import List, Tuple
-
-import ftfy
-import regex as re
-
-
-@lru_cache()
-def default_bpe():
- return os.path.join(os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz")
-
-
-@lru_cache()
-def bytes_to_unicode():
- """
- Returns list of utf-8 byte and a corresponding list of unicode strings.
- The reversible bpe codes work on unicode strings.
- This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
- When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
- This is a signficant percentage of your normal, say, 32K bpe vocab.
- To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
- And avoids mapping to whitespace/control characters the bpe code barfs on.
- """
- bs = (
- list(range(ord("!"), ord("~") + 1))
- + list(range(ord("¡"), ord("¬") + 1))
- + list(range(ord("®"), ord("ÿ") + 1))
- )
- cs = bs[:]
- n = 0
- for b in range(2 ** 8):
- if b not in bs:
- bs.append(b)
- cs.append(2 ** 8 + n)
- n += 1
- cs = [chr(n) for n in cs]
- return dict(zip(bs, cs))
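To make the mapping concrete, a few spot checks of the table this function builds (these follow directly from the code above):

```python
b2u = bytes_to_unicode()
assert len(b2u) == 256              # every byte value gets a printable stand-in
assert b2u[ord("A")] == "A"         # printable ASCII maps to itself
assert b2u[ord(" ")] == "\u0120"    # space becomes 'Ġ', the usual GPT-2/CLIP convention
assert b2u[0] == "\u0100"           # non-printable bytes are shifted past 255
```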
-
-
-def get_pairs(word):
- """Return set of symbol pairs in a word.
- Word is represented as tuple of symbols (symbols being variable-length strings).
- """
- pairs = set()
- prev_char = word[0]
- for char in word[1:]:
- pairs.add((prev_char, char))
- prev_char = char
- return pairs
-
-
-def basic_clean(text):
- text = ftfy.fix_text(text)
- text = html.unescape(html.unescape(text))
- return text.strip()
-
-
-def whitespace_clean(text):
- text = re.sub(r"\s+", " ", text)
- text = text.strip()
- return text
-
-
-class SimpleTokenizer(object):
- def __init__(self, bpe_path: str = default_bpe()):
- self.byte_encoder = bytes_to_unicode()
- self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
- merges = gzip.open(bpe_path).read().decode("utf-8").split("\n")
- merges = merges[1 : 49152 - 256 - 2 + 1]
- merges = [tuple(merge.split()) for merge in merges]
- vocab = list(bytes_to_unicode().values())
-        vocab = vocab + [v + "</w>" for v in vocab]
- for merge in merges:
- vocab.append("".join(merge))
- vocab.extend(["<|startoftext|>", "<|endoftext|>"])
- self.encoder = dict(zip(vocab, range(len(vocab))))
- self.decoder = {v: k for k, v in self.encoder.items()}
- self.bpe_ranks = dict(zip(merges, range(len(merges))))
- self.cache = {"<|startoftext|>": "<|startoftext|>", "<|endoftext|>": "<|endoftext|>"}
- self.pat = re.compile(
- r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""",
- re.IGNORECASE,
- )
-
- @property
- def start_token(self):
- return self.encoder["<|startoftext|>"]
-
- @property
- def end_token(self):
- return self.encoder["<|endoftext|>"]
-
- def padded_tokens_and_len(self, tokens: List[int], text_ctx: int) -> Tuple[List[int], int]:
- tokens = [self.start_token] + tokens[: text_ctx - 2] + [self.end_token]
- text_len = len(tokens)
- padding = text_ctx - len(tokens)
- padded_tokens = tokens + [0] * padding
- return padded_tokens, text_len
-
- def bpe(self, token):
- if token in self.cache:
- return self.cache[token]
-        word = tuple(token[:-1]) + (token[-1] + "</w>",)
- pairs = get_pairs(word)
-
- if not pairs:
-            return token + "</w>"
-
- while True:
- bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf")))
- if bigram not in self.bpe_ranks:
- break
- first, second = bigram
- new_word = []
- i = 0
- while i < len(word):
- try:
- j = word.index(first, i)
- new_word.extend(word[i:j])
- i = j
- except: # pylint: disable=bare-except
- new_word.extend(word[i:])
- break
-
- if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
- new_word.append(first + second)
- i += 2
- else:
- new_word.append(word[i])
- i += 1
- new_word = tuple(new_word)
- word = new_word
- if len(word) == 1:
- break
- else:
- pairs = get_pairs(word)
- word = " ".join(word)
- self.cache[token] = word
- return word
-
- def encode(self, text):
- bpe_tokens = []
- text = whitespace_clean(basic_clean(text)).lower()
- for token in re.findall(self.pat, text):
- token = "".join(self.byte_encoder[b] for b in token.encode("utf-8"))
- bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(" "))
- return bpe_tokens
-
- def decode(self, tokens):
- text = "".join([self.decoder[token] for token in tokens])
- text = (
- bytearray([self.byte_decoder[c] for c in text])
- .decode("utf-8", errors="replace")
-            .replace("</w>", " ")
- )
- return text
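A minimal round-trip sketch, assuming the default `bpe_simple_vocab_16e6.txt.gz` vocabulary file ships next to this module as `default_bpe()` expects:

```python
tokenizer = SimpleTokenizer()                    # loads the gzipped BPE merges
ids = tokenizer.encode("a corgi in a field")
padded, length = tokenizer.padded_tokens_and_len(ids, text_ctx=128)
assert padded[0] == tokenizer.start_token
assert padded[length - 1] == tokenizer.end_token
print(tokenizer.decode(ids))                     # "a corgi in a field " (the </w> markers become spaces)
```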
diff --git a/spaces/EsoCode/text-generation-webui/extensions/ngrok/README.md b/spaces/EsoCode/text-generation-webui/extensions/ngrok/README.md
deleted file mode 100644
index 0324bf9852408d9d2b86cc0165c2d548996f9c94..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/extensions/ngrok/README.md
+++ /dev/null
@@ -1,69 +0,0 @@
-# Adding an ingress URL through the ngrok Agent SDK for Python
-
-[ngrok](https://ngrok.com) is a globally distributed reverse proxy commonly used for quickly getting a public URL to a
-service running inside a private network, such as on your local laptop. The ngrok agent is usually
-deployed inside a private network and is used to communicate with the ngrok cloud service.
-
-By default, the authtoken in the NGROK_AUTHTOKEN environment variable is used. Alternatively, one may be specified in
-the `settings.json` file; see the examples below. Retrieve your authtoken on the [Auth Token page of your ngrok dashboard](https://dashboard.ngrok.com/get-started/your-authtoken); signing up is free.
-
-# Documentation
-
-For a list of all available options, see [the configuration documentation](https://ngrok.com/docs/ngrok-agent/config/) or [the connect example](https://github.com/ngrok/ngrok-py/blob/main/examples/ngrok-connect-full.py).
-
-The ngrok Python SDK is [on github here](https://github.com/ngrok/ngrok-py). A quickstart guide and a full API reference are included in the [ngrok-py Python API documentation](https://ngrok.github.io/ngrok-py/).
-
-# Running
-
-To enable ngrok, install the requirements and then add `--extension ngrok` to the command-line options, for instance:
-
-```bash
-pip install -r extensions/ngrok/requirements.txt
-python server.py --extension ngrok
-```
-
-In the output you should then see something like this:
-
-```bash
-INFO:Loading the extension "ngrok"...
-INFO:Session created
-INFO:Created tunnel "9d9d0944dc75ff9d3aae653e5eb29fe9" with url "https://d83706cf7be7.ngrok.app"
-INFO:Tunnel "9d9d0944dc75ff9d3aae653e5eb29fe9" TCP forwarding to "localhost:7860"
-INFO:Ingress established at https://d83706cf7be7.ngrok.app
-```
-
-You can now access the webui via the URL shown, in this case `https://d83706cf7be7.ngrok.app`. It is recommended to add some authentication to the ingress; see below.
-
-# Example Settings
-
-In `settings.json` add a `ngrok` key with a dictionary of options, for instance:
-
-To enable basic authentication:
-```json
-{
- "ngrok": {
- "basic_auth": "user:password"
- }
-}
-```
-
-To enable OAuth authentication:
-```json
-{
- "ngrok": {
- "oauth_provider": "google",
- "oauth_allow_domains": "asdf.com",
- "oauth_allow_emails": "asdf@asdf.com"
- }
-}
-```
-
-To add an authtoken instead of using the NGROK_AUTHTOKEN environment variable:
-```json
-{
- "ngrok": {
- "authtoken": "",
- "authtoken_from_env":false
- }
-}
-```
\ No newline at end of file
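For orientation only: the extension presumably opens the tunnel through the ngrok-py SDK documented above. The sketch below is an assumption modelled on that SDK's connect examples; the function names (`ngrok.connect`, `listener.url()`) and the `authtoken_from_env` / `basic_auth` keyword arguments are assumptions, not taken from this extension's code:

```python
import ngrok  # the ngrok-py SDK referenced above

# Assumed API: forward the local Gradio port, reading NGROK_AUTHTOKEN from the
# environment and protecting the ingress with basic auth, as in the settings example.
listener = ngrok.connect(7860, authtoken_from_env=True, basic_auth="user:password")
print("Ingress established at", listener.url())
```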
diff --git a/spaces/EuroSciPy2022/arxiv-cards/arxiv_util.py b/spaces/EuroSciPy2022/arxiv-cards/arxiv_util.py
deleted file mode 100644
index 7414683a2bf10c65dc85dcdacdcae799cbd9fe0e..0000000000000000000000000000000000000000
--- a/spaces/EuroSciPy2022/arxiv-cards/arxiv_util.py
+++ /dev/null
@@ -1,58 +0,0 @@
-from collections import namedtuple # later use py3.7 dataclasses
-import urllib.request
-import urllib.error
-import feedparser
-
-ArxivPaper = namedtuple("ArxivPaper", ["title", "authors", "abstract", "linktopdf", "linktoabs", "arxiv_id"])
-
-def arxiv_url_sanitizer(url):
- """
- as of now, just converts
- arxiv.org/pdf/ to arxiv.org/abs
- """
- # if its an arxiv pdf url then
- if url.find("pdf") != -1:
- url = url.replace("/pdf","/abs")
- url = url.replace(".pdf","")
- return url
-
-def get_paper_info(url):
- """
- Given an arxiv url returns
- a ArxivPaper object with fields
- title : str
- authors : str
- abstract : str
- linktopdf : str
- linktoabs : str
- arxiv_id : str
- """
- arxiv_id = url.split("/")[-1]
- arxiv_searchurl = "http://export.arxiv.org/api/query?id_list={}".format(arxiv_id)
-
- try:
- atom_feed = urllib.request.urlopen(arxiv_searchurl)
- except urllib.error.HTTPError as e:
- # print("Couldn't retrieve : {}".format(arxiv_searchurl))
- raise RuntimeError("Trouble fetching ArXiv Id : {}".format(arxiv_id))
-
- parsed_feed = feedparser.parse(atom_feed)
- paper = parsed_feed["entries"][0]
-
- title = paper["title"]
- authors = paper["authors"]
- if len(authors)>5:
- authors = authors[:6]
- authors[5] = {'name': 'and others...'}
- abstract = paper["summary"]
- linktopdf = None
- linktoabs = None
- for link_dict in paper["links"]:
- if link_dict["type"].find("html") != -1:
- linktoabs = link_dict["href"]
-
- elif link_dict["type"].find("pdf")!= -1:
- linktopdf = link_dict["href"]
-
- # comment = paper["arxiv_comment"] # Not there in all arxiv pages.
- return ArxivPaper(title, authors, abstract, linktopdf, linktoabs, arxiv_id)
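A short usage sketch of the helpers above (the arXiv URL is only an illustrative placeholder):

```python
url = arxiv_url_sanitizer("https://arxiv.org/pdf/1706.03762.pdf")   # -> https://arxiv.org/abs/1706.03762
paper = get_paper_info(url)
print(paper.title)
print(len(paper.authors), "author entries (truncated to 'and others...' past five)")
print(paper.linktopdf, paper.linktoabs)
```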
diff --git a/spaces/EzioArno/Goofy/README.md b/spaces/EzioArno/Goofy/README.md
deleted file mode 100644
index fdc2cc5e22c4c7bc7e1e94f05f1098ec795a2378..0000000000000000000000000000000000000000
--- a/spaces/EzioArno/Goofy/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Goofy
-emoji: 📉
-colorFrom: indigo
-colorTo: blue
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/FL33TW00D/whisper-turbo/_next/static/chunks/pages/index-a8066808bfe4a082.js b/spaces/FL33TW00D/whisper-turbo/_next/static/chunks/pages/index-a8066808bfe4a082.js
deleted file mode 100644
index 301882b860b10139dff21afca8685f66de01060d..0000000000000000000000000000000000000000
--- a/spaces/FL33TW00D/whisper-turbo/_next/static/chunks/pages/index-a8066808bfe4a082.js
+++ /dev/null
@@ -1 +0,0 @@
-(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[405],{8477:function(e,t,l){(window.__NEXT_P=window.__NEXT_P||[]).push(["/",function(){return l(9942)}])},9942:function(e,t,l){"use strict";l.r(t),l.d(t,{default:function(){return v}});var s=l(1527),a=l(9172),i=l.n(a),r=l(959),n=l(6980),o=l.n(n),c=l(1953);function x(e){return(0,s.jsxs)("div",{className:"flex h-full min-h-screen bg-sky-500 -z-20 antialiased",style:{backgroundColor:"#38bdf8",backgroundImage:"url(\"data:image/svg+xml,%3Csvg width='30' height='30' opacity='0.4' viewBox='0 0 30 30' xmlns='http://www.w3.org/2000/svg'%3E%3Cpath d='M0 10h10v10H0V10zM10 0h10v10H10V0z' fill='%23bae6fd' fill-opacity='0.4' fill-rule='evenodd'/%3E%3C/svg%3E\")"},children:[(0,s.jsxs)(o(),{children:[(0,s.jsx)("title",{children:e.title}),(0,s.jsx)("meta",{property:"og:title",content:e.title}),(0,s.jsx)("meta",{name:"description",content:"Transcribe any audio file - completely free!"}),(0,s.jsx)("meta",{property:"og:description",content:"Transcribe any audio file - completely free!"})]}),(0,s.jsxs)("main",{className:"flex flex-1 flex-col",children:[(0,s.jsx)(c.x7,{}),(0,s.jsx)("div",{className:"flex-1",children:e.children})]})]})}var d=l(7632);let h=["byte","kilobyte","megabyte","gigabyte","terabyte","petabyte"];function u(e){let t=Math.abs(Number(e)),l=0;for(;t>=1e3&&l{let{progress:t,loaded:l}=e;return(0,s.jsx)(s.Fragment,{children:t>0&&t<100&&!l&&(0,s.jsx)("div",{className:"flex flex-col gap-2",children:(0,s.jsx)("div",{className:"h-3 outline outline-white bg-gray-200",children:(0,s.jsx)("div",{className:"bg-emerald-500 h-3",style:{width:"".concat(t,"%")}})})})})},f=e=>{let{selectedModel:t,setSelectedModel:l,loaded:a,progress:i}=e,[n,o]=(0,r.useState)(!1),c=e=>e.charAt(0).toUpperCase()+e.slice(1);return(0,s.jsxs)(s.Fragment,{children:[(0,s.jsxs)("div",{className:"flex flex-row justify-between",children:[(0,s.jsx)("label",{className:"text-white text-xl font-semibold",children:"Select Model"}),i>0&&!a&&(0,s.jsxs)("label",{className:"text-white text-xl font-semibold text-right",children:[i.toFixed(2),"%"]})]}),(0,s.jsxs)("div",{className:"group inline-block relative w-full",children:[(0,s.jsxs)("button",{className:"bg-pop-orange text-white font-semibold text-xl py-2.5 px-8 w-full inline-flex items-center outline outline-white",onClick:()=>o(!n),children:[(0,s.jsx)("span",{className:"mr-1",children:t?c(t):"Select Model"}),(0,s.jsx)("svg",{className:"fill-current h-4 w-4",xmlns:"http://www.w3.org/2000/svg",viewBox:"0 0 20 20",children:(0,s.jsx)("path",{d:"M9.293 12.95l.707.707L15.657 8l-1.414-1.414L10 10.828 5.757 6.586 4.343 8z"})})]}),(0,s.jsx)("ul",{className:"absolute text-white group-hover:block z-10 w-full",style:{display:n?"block":"none"},children:(()=>{let e=Object.values(d.ko).slice(0,-1),t=Array.from(d.Fd.values()).slice(0,-1),a=e.map((e,l)=>[e,t[l]]);return a.map((e,t)=>(0,s.jsx)("li",{children:(0,s.jsxs)("a",{className:"bg-orange-500 hover:bg-pop-orange py-2 px-8 font-semibold text-xl block whitespace-no-wrap cursor-pointer ".concat(t===a.length-1?"rounded-b-md":""),onClick:()=>{l(e[0]),o(!1)},children:[c(e[0])," ",u(e[1])]})},e[0]))})()})]})]})},w=e=>{let[t,l]=(0,r.useState)(null),[a,i]=(0,r.useState)(!1),n=async()=>{l(await d.tX.start())},o=async()=>{if(!t)return;let s=await t.stop(),a=(await new AudioContext({sampleRate:16e3}).decodeAudioData(s.buffer)).getChannelData(0);e.setAudioData(new Uint8Array(a.buffer));let i=s.blob;e.setAudioMetadata({file:new 
File([i],"recording.wav"),fromMic:!0}),e.setBlobUrl(URL.createObjectURL(i)),l(null)},c=async()=>{a?await o():await n(),i(!a)};return(0,s.jsxs)("div",{className:"flex flex-col",children:[(0,s.jsx)("label",{className:"text-white text-xl font-semibold",children:"Record"}),(0,s.jsx)("button",{className:"bg-pop-orange text-xl outline outline-white text-white font-semibold px-6 mx-auto cursor-pointer active:bg-pop-orange-dark h-full",onClick:c,children:a?(0,s.jsx)("svg",{xmlns:"http://www.w3.org/2000/svg",viewBox:"0 0 24 24",fill:"currentColor",className:"w-6 h-6",children:(0,s.jsx)("path",{fillRule:"evenodd",d:"M4.5 7.5a3 3 0 013-3h9a3 3 0 013 3v9a3 3 0 01-3 3h-9a3 3 0 01-3-3v-9z",clipRule:"evenodd"})}):(0,s.jsx)("svg",{xmlns:"http://www.w3.org/2000/svg",fill:"none",viewBox:"0 0 24 24",strokeWidth:1.5,stroke:"currentColor",className:"w-8 h-8",children:(0,s.jsx)("path",{strokeLinecap:"round",strokeLinejoin:"round",d:"M12 18.75a6 6 0 006-6v-1.5m-6 7.5a6 6 0 01-6-6v-1.5m6 7.5v3.75m-3.75 0h7.5M12 15.75a3 3 0 01-3-3V4.5a3 3 0 116 0v8.25a3 3 0 01-3 3z"})})})]})},p=e=>{let t=(0,r.useRef)(null),[l,a]=(0,r.useState)(null),[i,n]=(0,r.useState)(!1),[o,x]=(0,r.useState)(null),[h,p]=(0,r.useState)(null),[g,j]=(0,r.useState)(null),[b,v]=(0,r.useState)(null),[y,N]=(0,r.useState)(!1),[k,S]=(0,r.useState)(0),[_,C]=(0,r.useState)(!1);(0,r.useEffect)(()=>{o&&l!=o&&!_&&(N(!1),S(0))},[l]);let F=async()=>{if(t.current&&t.current.destroy(),i)return;if(!l){console.error("No model selected");return}n(!0);let e=new d.Sj,s=await e.loadModel(l,()=>{N(!0),x(l)},e=>S(e));s.isErr?c.ZP.error(s.error.message):(n(!1),t.current=s.value)},A=async()=>{if(!t.current){c.ZP.error("No model loaded");return}if(!h){c.ZP.error("No audio file loaded");return}e.setTranscript(e=>({...e,segments:[]})),C(!0),await t.current.transcribe(h,g.fromMic,t=>{if(t.last){C(!1),e.setDownloadAvailable(!0);return}e.setTranscript(e=>({...e,segments:[...e.segments,t]}))})};return(0,s.jsxs)("div",{className:"flex-1 w-1/2 h-full flex flex-col relative z-10 overflow-hidden",children:[(0,s.jsxs)("div",{className:"h-full px-4 xl:pl-32 my-4",children:[(0,s.jsx)("img",{src:"/whisper-turbo.png",className:"w-full xl:w-3/4 2xl:w-1/2 mx-auto pt-8 pb-4 cursor-pointer",onClick:()=>window.open("https://github.com/FL33TW00D/whisper-turbo","_blank")}),(0,s.jsxs)("div",{className:"flex flex-col mx-auto gap-6",children:[(0,s.jsxs)("div",{children:[(0,s.jsx)(f,{selectedModel:l,setSelectedModel:a,loaded:y,progress:k}),(0,s.jsx)(m,{progress:k,loaded:y}),l!=o&&0==k&&(0,s.jsx)("div",{className:"flex flex-row justify-end",children:(0,s.jsx)("button",{className:"outline text-white text-2xl font-semibold mt-2 px-3 bg-pop-orange",onClick:F,children:i?"Loading...":"Load"})})]}),(0,s.jsxs)("div",{className:"flex flex-row gap-4",children:[(0,s.jsxs)("div",{className:"flex flex-col w-full",children:[(0,s.jsx)("label",{className:"text-white text-xl font-semibold",children:"Upload Audio"}),(0,s.jsx)("label",{className:"bg-pop-orange text-xl outline outline-white w-full text-white font-semibold py-2.5 px-8 mx-auto cursor-pointer w-full",htmlFor:"audioFile",children:(0,s.jsxs)("div",{className:"flex flex-row justify-between",children:[(0,s.jsx)("span",{className:"",children:h&&g?g.file.name:"Select Audio File"}),(0,s.jsx)("span",{className:"my-auto",children:h?u(h.length):""})]})}),(0,s.jsx)("input",{type:"file",className:"hidden",name:"audioFile",id:"audioFile",onChange:async e=>{let t=e.target.files[0];if(!t)return;let l=new FileReader;l.onload=()=>{p(new 
Uint8Array(l.result)),j({file:t,fromMic:!1}),v(URL.createObjectURL(t))},l.readAsArrayBuffer(t)},accept:".wav,.aac,.m4a,.mp4,.mp3"})]}),(0,s.jsx)(w,{setBlobUrl:v,setAudioData:p,setAudioMetadata:j})]}),b&&(0,s.jsxs)("div",{children:[(0,s.jsx)("label",{className:"text-white text-xl font-semibold",children:"Your Audio"}),(0,s.jsx)("audio",{controls:!0,className:"mx-auto w-full",style:{fontFamily:"__VT323_2a9463"},children:(0,s.jsx)("source",{src:b,type:"audio/wav"},b)},b)]})]}),(0,s.jsx)("div",{className:"flex flex-row pt-8 gap-4 mx-auto",children:(0,s.jsx)("button",{className:"bg-pop-orange text-2xl outline outline-white text-white font-semibold py-3 px-8 mx-auto cursor-pointer active:bg-pop-orange-dark",onClick:A,disabled:_,children:_?(0,s.jsx)("div",{className:"flex p-4",children:(0,s.jsx)("span",{className:"loader"})}):"Transcribe"})})]}),(0,s.jsx)("div",{className:"absolute bottom-0 w-full text-center px-4 xl:pl-32",children:(0,s.jsxs)("p",{className:"text-2xl text-white mx-auto",children:["Built by"," ",(0,s.jsx)("a",{href:"https://twitter.com/fleetwood___",className:"hover:underline hover:text-blue-600",children:"@fleetwood"})]})})]})};var g=l(5084);let j=()=>{let[e,t]=(0,r.useState)(!1),[l,a]=(0,r.useState)(!0);r.useRef(null),(0,r.useEffect)(()=>{if(!navigator.gpu){a(!0);return}t(!0)},[]);let i=()=>{a(!1)},n=(0,s.jsx)("svg",{xmlns:"http://www.w3.org/2000/svg",version:"1.1",width:"50",height:"50",viewBox:"0 0 78 97.5",fill:"currentColor",children:(0,s.jsxs)("g",{children:[(0,s.jsx)("rect",{x:"54",y:"54",width:"6",height:"6"}),(0,s.jsx)("rect",{x:"36",y:"36",width:"6",height:"6"}),(0,s.jsx)("rect",{x:"30",y:"42",width:"6",height:"6"}),(0,s.jsx)("rect",{x:"24",y:"48",width:"6",height:"6"}),(0,s.jsx)("rect",{x:"18",y:"54",width:"6",height:"6"}),(0,s.jsx)("rect",{x:"42",y:"30",width:"6",height:"6"}),(0,s.jsx)("rect",{x:"48",y:"24",width:"6",height:"6"}),(0,s.jsx)("rect",{x:"54",y:"18",width:"6",height:"6"}),(0,s.jsx)("rect",{x:"42",y:"42",width:"6",height:"6"}),(0,s.jsx)("rect",{x:"48",y:"48",width:"6",height:"6"}),(0,s.jsx)("rect",{x:"30",y:"30",width:"6",height:"6"}),(0,s.jsx)("rect",{x:"18",y:"18",width:"6",height:"6"}),(0,s.jsx)("rect",{x:"24",y:"24",width:"6",height:"6"})]})});return(0,s.jsx)(s.Fragment,{children:e?(0,s.jsx)(s.Fragment,{}):(0,s.jsx)(g.Z,{classNames:{modal:"!bg-pop-orange !outline w-1/2 md:w-1/2 xl:w-1/3 2xl:w-1/4 overflow-x-hidden !text-white"},open:l,onClose:i,center:!0,closeIcon:n,children:(0,s.jsx)("div",{className:"flex flex-col text-2xl h-full text-center",style:{fontFamily:"__VT323_2a9463"},children:(0,s.jsx)("div",{className:"mx-8 mt-8 text-stone-50",children:(0,s.jsx)("p",{children:"Uh oh! It looks like your browser doesn't support WebGPU. 
Please try again in a different browser."})})})})})},b=()=>{let[e,t]=(0,r.useState)({segments:[]}),[l,a]=(0,r.useState)(!1),n=()=>{let t=JSON.stringify(e),l=new Blob([t],{type:"application/json"}),s=URL.createObjectURL(l),a=document.createElement("a");a.download="transcript.json",a.href=s,a.click(),a.remove()};return(0,s.jsxs)(x,{title:"Whisper Turbo",children:[(0,s.jsx)("div",{className:"p-0 ".concat(i().className),children:(0,s.jsxs)("div",{className:"flex gap-8 flex-row h-screen",children:[(0,s.jsx)(p,{transcript:e,setTranscript:t,setDownloadAvailable:a}),(0,s.jsx)("div",{className:"flex-1 w-1/2 h-full flex flex-col relative z-10",children:(0,s.jsx)("div",{className:"h-full flex flex-col mx-auto px-4 xl:pr-32 overflow-scroll py-12 w-full",children:(0,s.jsxs)("div",{className:"flex flex-col h-full",children:[e&&e.segments.map(e=>(0,s.jsx)("div",{className:"flex w-full py-4",children:(0,s.jsxs)("div",{className:"rounded p-4 bg-white outline outline-2 outline-black shadow-lg align-right",children:[(0,s.jsx)("div",{className:"font-bold text-lg text-green-700 mb-2",children:e.start}),(0,s.jsx)("div",{className:"mb-2 text-2xl text-slate-900 text-right",children:e.text}),(0,s.jsx)("div",{className:"font-bold text-lg text-red-700",children:e.stop})]})},e.start)),l?(0,s.jsx)("div",{className:"flex flex-row justify-end py-4",children:(0,s.jsx)("button",{className:"bg-green-500 outline hover:bg-green-700 text-white font-bold py-2 px-4",onClick:n,children:"Download"})}):(0,s.jsx)(s.Fragment,{})]})})})]})}),(0,s.jsx)(j,{})]})};var v=b}},function(e){e.O(0,[398,639,774,888,179],function(){return e(e.s=8477)}),_N_E=e.O()}]);
\ No newline at end of file
diff --git a/spaces/FangLee/Generate-Music-in-Time-Series/Deploy_gradio.py b/spaces/FangLee/Generate-Music-in-Time-Series/Deploy_gradio.py
deleted file mode 100644
index e728b224e839f7c6cde72c1faf9404f90daf0837..0000000000000000000000000000000000000000
--- a/spaces/FangLee/Generate-Music-in-Time-Series/Deploy_gradio.py
+++ /dev/null
@@ -1,213 +0,0 @@
-# -*- coding: utf-8 -*-
-"""Time Series Music Generation.ipynb
-
-Automatically generated by Colaboratory.
-
-Original file is located at
- https://colab.research.google.com/drive/1XQiDakUozsDA7psZg7Bkwak3ZbaB33gQ
-
-# Setup
-
-[LSTM Music Generation Tutorial Series](https://youtube.com/playlist?list=PL-wATfeyAMNr0KMutwtbeDCmpwvtul-Xz)
-"""
-
-# !pip install music21
-# !pip install numpy
-# !pip install tensorflow
-# !pip install keras
-# !pip install matplotlib
-# !apt install fluidsynth #Pip does not work for some reason. Only apt works
-# !pip install midi2audio
-# !apt-get install musescore3
-
-import os
-import json
-import music21 as m21
-import numpy as np
-from tensorflow import keras
-from tqdm import tqdm
-from midi2audio import FluidSynth
-from IPython.display import Audio, display
-import gradio as gr
-
-# Data source: http://www.esac-data.org
-
-# MUSIC_GENRE = st.selectbox("Please choose your favorite music genre", (os.listdir("./raw_dataset/deutschl")))
-# KERN_DATASET_PATH = "./raw_dataset/deutschl/" + MUSIC_GENRE
-
-# m21.environment.set('musescoreDirectPNGPath', 'C:\\Program Files\\MuseScore 3\\bin\\MuseScore3.exe')
-
-mapping_path = "./mapping.json"
-save_model_path = "./model/cpu_model.h5"
-output_midi_path = "./output/melody.mid"
-output_audio_path = "./output/melody.wav"
-output_image_path = "./output/melody.png"
-
-sequence_length = 64
-
-# durations are expressed in quarter length
-acceptable_durations = [
- 0.25, # 16th note
- 0.5, # 8th note
- 0.75,
- 1.0, # quarter note
- 1.5,
- 2, # half note
- 3,
- 4 # whole note
-]
-
-with open(mapping_path, "r") as fp:
- dictionary = json.load(fp)
-
-"""# Generate"""
-def convert_songs_to_int(dictionary, songs):
- int_songs = []
-
- # transform songs string to list
- songs = songs.split()
-
- # map songs to int
- for symbol in songs:
- int_songs.append(dictionary[symbol])
-
- return int_songs
-
-def generate_melody(seed, max_sequence_length, song_length, dictionary):
- melody = seed.split()
- seed = convert_songs_to_int(dictionary, seed)
- model = keras.models.load_model(save_model_path)
- """
- Example: seed = [44, 50, 64, 73], max_sequence_length = 3.
- seed[-max_sequence_length:] = seed[-3:] = [50, 64, 73]
- seed.append(67) -> seed = [50, 64, 73, 67]
- seed[-3:] = [64, 73, 67].
- """
- for _ in range(song_length):
- seed = seed[-max_sequence_length:] # Example: seed[-10:] means get the last 10 elements
- onehot_seed = keras.utils.to_categorical(seed, num_classes=len(dictionary)) # one-hot encode the sequences
-
- onehot_seed = onehot_seed[np.newaxis,...] # add new axis to onehot_seed matrix. shape = (64, 28) -> (1, 64, 28)
- """ Because Keras expects a batch of samples, so we have to use 3-dimensional array although there is only one 2-dimensional element.
- Example: [[1, 3],[2, 4]] -> [[[1, 3],[2, 4]]]."""
-
-    probabilities = model.predict(onehot_seed)[0]
-    """ Returns the probability of each music symbol.
-    Example: prob = [[0.1, 0.2]] -> remove the new axis with prob[0] = [0.1, 0.2]"""
-
-    max_probability = max(probabilities) # the highest probability value
-    max_probability_index = probabilities.argmax() # index of the most likely symbol
-    predicted_symbol = list(dictionary.keys())[max_probability_index]
-    print("Predicted symbol:", predicted_symbol, "\nProbability:", max_probability)
-
- seed.append(max_probability_index)
-
- if predicted_symbol == "/":
- break
-
- melody.append(predicted_symbol)
- # print(melody)
-
- return melody
-
-def save_melody(melody, midi_path, image_path, step_duration=0.25):
- stream = m21.stream.Stream()
-
- pre_symbol = None
- step_counter = 1
-
- for i, symbol in enumerate(melody):
-
- if symbol == "_" and i + 1 < len(melody):
- step_counter += 1
-
- else:
- if pre_symbol is not None:
- quarter_length = step_duration * step_counter # Example: ["60", "_", "_", "_"] -> quarter_length = 0.25 * 4 = 1 (a quarter note C)
-
- if pre_symbol == "r":
- m21_event = m21.note.Rest(quarterLength = quarter_length)
- else:
- m21_event = m21.note.Note(int(pre_symbol), quarterLength = quarter_length)
-
- stream.append(m21_event)
- step_counter = 1
-
- pre_symbol = symbol
-
- stream.write("midi", midi_path)
-
- print("\nMelody sheet:\n")
- stream.show(fmt="musicxml.png", fp = output_image_path) # fmt: format, fp: file path
-
-def play_melody(melody_path, audio_path):
- FluidSynth(sound_font="./sounds/sf2/default-GM.sf2", sample_rate=16000).midi_to_audio(melody_path, audio_path)
- print("\nPlay melody.wav:\n")
- display(Audio(audio_path, rate=16000))
-
-seed = "67 _ 67 _ 67 _ _ 65 64 _ 64 _ 64 _ _"
-
-symbol_pitch_list = ["r"]
-name_pitch_list = ["Rest"]
-
-for x in dictionary:
- if x.isdigit():
- symbol_pitch_list.append(x)
- name_pitch_list.append(m21.note.Note(int(x)).nameWithOctave)
-
-def add_symbol(symbol, duration):
- global seed
- seed += symbol_pitch_list[name_pitch_list.index(symbol)] + " "
-
- duration = float(duration)
- if duration > 0.25:
- for i in range(int((duration-0.25)/0.25)):
- seed += "_ "
-
- return seed
-
-def clear_symbol():
- global seed
- seed = ""
-
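To make the seed encoding concrete: each entry is a MIDI pitch (or "r" for a rest) followed by enough "_" hold symbols to fill the chosen duration in 0.25-beat steps. A small worked sketch, assuming pitch 72 (C5) appears in `mapping.json` so that "C5" is offered in the dropdown:

```python
clear_symbol()            # reset the global seed melody to an empty string
add_symbol("C5", 1.0)     # appends "72 " plus three "_" holds -> "72 _ _ _ "
add_symbol("Rest", 0.5)   # appends "r " plus one "_" hold     -> "72 _ _ _ r _ "
```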
-def generate_symbol(melody_length):
- melody = generate_melody(seed, sequence_length, melody_length, dictionary)
- print("\nMelody symbols:", melody)
-
- save_melody(melody, output_midi_path, output_image_path)
- play_melody(output_midi_path, output_audio_path)
-
- return "./output/melody-1.png", output_audio_path
-
-with gr.Blocks(title="Generate music in time series") as music_generation:
- gr.Markdown("""
- # Generate music in time series
- """)
- with gr.Box():
- with gr.Column():
- with gr.Row():
- symbol = gr.Dropdown(choices = name_pitch_list, label="Pitch of note")
- duration = gr.Dropdown(choices = acceptable_durations, label="Duration of note")
-
- seed_melody = gr.Textbox(value = seed, label="Seed melody")
-
- with gr.Row():
- add_symbol_btn = gr.Button(value="Add symbol")
- clear_symbol_btn = gr.Button(value="Clear symbol")
-
- add_symbol_btn.click(fn=add_symbol, inputs=[symbol, duration], outputs=seed_melody)
- clear_symbol_btn.click(fn = clear_symbol, outputs=seed_melody)
-
- with gr.Box():
- with gr.Column():
- with gr.Row():
- melody_length = gr.Slider(minimum=100, maximum=1000, label="Melody length")
- generate_btn = gr.Button(value="Generate melody")
-
- with gr.Row():
- melody_image = gr.Image(value = output_image_path, label="Melody sheet")
- melody_audio = gr.Audio(value = output_audio_path, label="Melody audio")
-
- generate_btn.click(fn=generate_symbol, inputs=melody_length, outputs=[melody_image, melody_audio])
-
-music_generation.launch()
\ No newline at end of file
diff --git a/spaces/FlippFuzz/whisper-webui/app-network.py b/spaces/FlippFuzz/whisper-webui/app-network.py
deleted file mode 100644
index 4f0e565b9029761d4b995fe32a65c58d1de55f53..0000000000000000000000000000000000000000
--- a/spaces/FlippFuzz/whisper-webui/app-network.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Run the app with no audio file restrictions, and make it available on the network
-from app import create_ui
-from src.config import ApplicationConfig
-
-create_ui(ApplicationConfig.create_default(input_audio_max_duration=-1, server_name="0.0.0.0"))
\ No newline at end of file
diff --git a/spaces/FridaZuley/RVC_HFKawaii/infer/modules/train/preprocess.py b/spaces/FridaZuley/RVC_HFKawaii/infer/modules/train/preprocess.py
deleted file mode 100644
index fbe81307ee661a95b2ac479336671a44ee02151a..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/infer/modules/train/preprocess.py
+++ /dev/null
@@ -1,147 +0,0 @@
-import multiprocessing
-import os
-import sys
-
-from scipy import signal
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-print(sys.argv)
-inp_root = sys.argv[1]
-sr = int(sys.argv[2])
-n_p = int(sys.argv[3])
-exp_dir = sys.argv[4]
-noparallel = sys.argv[5] == "True"
-per = float(sys.argv[6])
-import multiprocessing
-import os
-import traceback
-
-import librosa
-import numpy as np
-from scipy.io import wavfile
-
-from infer.lib.audio import load_audio
-from infer.lib.slicer2 import Slicer
-
-mutex = multiprocessing.Lock()
-f = open("%s/preprocess.log" % exp_dir, "a+")
-
-
-def println(strr):
- mutex.acquire()
- print(strr)
- f.write("%s\n" % strr)
- f.flush()
- mutex.release()
-
-
-class PreProcess:
- def __init__(self, sr, exp_dir, per=3.7):
- self.slicer = Slicer(
- sr=sr,
- threshold=-42,
- min_length=1500,
- min_interval=400,
- hop_size=15,
- max_sil_kept=500,
- )
- self.sr = sr
- self.bh, self.ah = signal.butter(N=5, Wn=48, btype="high", fs=self.sr)
- self.per = per
- self.overlap = 0.3
- self.tail = self.per + self.overlap
- self.max = 0.9
- self.alpha = 0.75
- self.exp_dir = exp_dir
- self.gt_wavs_dir = "%s/0_gt_wavs" % exp_dir
- self.wavs16k_dir = "%s/1_16k_wavs" % exp_dir
- os.makedirs(self.exp_dir, exist_ok=True)
- os.makedirs(self.gt_wavs_dir, exist_ok=True)
- os.makedirs(self.wavs16k_dir, exist_ok=True)
-
-    def norm_write(self, tmp_audio, idx0, idx1):
-        tmp_max = np.abs(tmp_audio).max()
-        if tmp_max > 2.5:
-            # Discard clipped/abnormally loud slices instead of writing them out.
-            print("%s-%s-%s-filtered" % (idx0, idx1, tmp_max))
-            return
-        # Soft peak normalization: blend the peak-normalized signal (scaled to
-        # self.max) with the original, weighted by self.alpha.
-        tmp_audio = (tmp_audio / tmp_max * (self.max * self.alpha)) + (
-            1 - self.alpha
-        ) * tmp_audio
- wavfile.write(
- "%s/%s_%s.wav" % (self.gt_wavs_dir, idx0, idx1),
- self.sr,
- tmp_audio.astype(np.float32),
- )
- tmp_audio = librosa.resample(
- tmp_audio, orig_sr=self.sr, target_sr=16000
- ) # , res_type="soxr_vhq"
- wavfile.write(
- "%s/%s_%s.wav" % (self.wavs16k_dir, idx0, idx1),
- 16000,
- tmp_audio.astype(np.float32),
- )
-
- def pipeline(self, path, idx0):
- try:
- audio = load_audio(path, self.sr)
- # zero phased digital filter cause pre-ringing noise...
- # audio = signal.filtfilt(self.bh, self.ah, audio)
- audio = signal.lfilter(self.bh, self.ah, audio)
-
- idx1 = 0
- for audio in self.slicer.slice(audio):
- i = 0
- while 1:
- start = int(self.sr * (self.per - self.overlap) * i)
- i += 1
- if len(audio[start:]) > self.tail * self.sr:
- tmp_audio = audio[start : start + int(self.per * self.sr)]
- self.norm_write(tmp_audio, idx0, idx1)
- idx1 += 1
- else:
- tmp_audio = audio[start:]
- idx1 += 1
- break
- self.norm_write(tmp_audio, idx0, idx1)
- println("%s->Suc." % path)
- except:
- println("%s->%s" % (path, traceback.format_exc()))
-
- def pipeline_mp(self, infos):
- for path, idx0 in infos:
- self.pipeline(path, idx0)
-
- def pipeline_mp_inp_dir(self, inp_root, n_p):
- try:
- infos = [
- ("%s/%s" % (inp_root, name), idx)
- for idx, name in enumerate(sorted(list(os.listdir(inp_root))))
- ]
- if noparallel:
- for i in range(n_p):
- self.pipeline_mp(infos[i::n_p])
- else:
- ps = []
- for i in range(n_p):
- p = multiprocessing.Process(
- target=self.pipeline_mp, args=(infos[i::n_p],)
- )
- ps.append(p)
- p.start()
- for i in range(n_p):
- ps[i].join()
- except:
- println("Fail. %s" % traceback.format_exc())
-
-
-def preprocess_trainset(inp_root, sr, n_p, exp_dir, per):
- pp = PreProcess(sr, exp_dir, per)
- println("start preprocess")
- println(sys.argv)
- pp.pipeline_mp_inp_dir(inp_root, n_p)
- println("end preprocess")
-
-
-if __name__ == "__main__":
- preprocess_trainset(inp_root, sr, n_p, exp_dir, per)
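To make the slicing arithmetic in `pipeline` concrete: with the defaults per=3.7 and overlap=0.3, consecutive training segments start every per - overlap = 3.4 s and are per = 3.7 s long, so neighbouring segments share 0.3 s of audio. A small sketch of the boundaries this produces (times in seconds, independent of sample rate):

```python
per, overlap = 3.7, 0.3
hop = per - overlap                       # 3.4 s between segment starts
for i in range(4):
    start = hop * i
    print(f"segment {i}: {start:.1f}s -> {start + per:.1f}s")
# segment 0: 0.0s -> 3.7s
# segment 1: 3.4s -> 7.1s
# segment 2: 6.8s -> 10.5s
# segment 3: 10.2s -> 13.9s
```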
diff --git a/spaces/FridaZuley/RVC_HFKawaii/lib/infer_pack/models_dml.py b/spaces/FridaZuley/RVC_HFKawaii/lib/infer_pack/models_dml.py
deleted file mode 100644
index 958d7b29259763d2fea94caf8ba7e314c4a77d05..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/lib/infer_pack/models_dml.py
+++ /dev/null
@@ -1,1124 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from lib.infer_pack import modules
-from lib.infer_pack import attentions
-from lib.infer_pack import commons
-from lib.infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from lib.infer_pack.commons import init_weights
-import numpy as np
-from lib.infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch == None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder768(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(768, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch == None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of the sine waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: whether this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv.float()
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
-            rad_values = (f0_buf / self.sampling_rate) % 1  # the % 1 means the per-harmonic products cannot be optimized away in later post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  # a % 1 here would keep the following cumsum from being optimized
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
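A minimal usage sketch for the SineGen module above (assuming only the class definition and torch; the 220 Hz contour, samp_rate=16000, and upp=160 are illustrative values, not taken from the repository):

```python
import torch

sine_gen = SineGen(samp_rate=16000, harmonic_num=0)
f0 = torch.full((1, 100), 220.0)          # 100 F0 frames of a 220 Hz tone, shape [B, T]
f0[:, 50:] = 0.0                          # second half unvoiced (F0 = 0)
sine, uv, noise = sine_gen(f0, upp=160)   # upp = output samples per F0 frame (hop length)
print(sine.shape, uv.shape, noise.shape)  # each is torch.Size([1, 16000, 1])
```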
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
-    voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
-    noise_source (batchsize, length, 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
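A companion sketch for SourceModuleHnNSF (same assumptions as above; is_half=False is passed so the merging linear layer runs in float32 on CPU):

```python
import torch

source = SourceModuleHnNSF(sampling_rate=16000, harmonic_num=0, is_half=False)
f0 = torch.full((1, 100), 220.0)        # [B, T] F0 contour in Hz
sine_merge, _, _ = source(f0, upp=160)  # harmonics merged into a single excitation signal
print(sine_merge.shape)                 # torch.Size([1, 16000, 1])
```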
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
-        if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
-    ):  # ds is the speaker id, shape [bs, 1]
- # print(1,pitch.shape)#[bs,t]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis and is broadcast
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs768NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
-        if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
-    ):  # ds is the speaker id, shape [bs, 1]
- # print(1,pitch.shape)#[bs,t]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis and is broadcast
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
-    def forward(self, phone, phone_lengths, y, y_lengths, ds):  # ds is the speaker id, shape [bs, 1]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis and is broadcast
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs768NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
-    def forward(self, phone, phone_lengths, y, y_lengths, ds):  # ds is the speaker id, shape [bs, 1]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis and is broadcast
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class MultiPeriodDiscriminatorV2(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminatorV2, self).__init__()
- # periods = [2, 3, 5, 7, 11, 17]
- periods = [2, 3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
diff --git a/spaces/GT4SD/paccmann_gp/model_cards/article.md b/spaces/GT4SD/paccmann_gp/model_cards/article.md
deleted file mode 100644
index bfdb8e90f4cf31be3ecd6dc9931dd28b77cc3493..0000000000000000000000000000000000000000
--- a/spaces/GT4SD/paccmann_gp/model_cards/article.md
+++ /dev/null
@@ -1,89 +0,0 @@
-# Model documentation & parameters
-
-**Algorithm Version**: Which model version to use.
-
-**Property goals**: One or multiple properties that will be optimized.
-
-**Protein target**: An amino acid sequence (AAS) of a protein target used for conditioning. Leave blank unless you use `affinity` as a `property goal`.
-
-**Decoding temperature**: The temperature parameter in the SMILES/SELFIES decoder. Higher values lead to more explorative sampling, while smaller values push the decoder toward mode collapse (see the short illustration after this parameter list).
-
-**Maximal sequence length**: The maximal number of SMILES tokens in the generated molecule.
-
-**Number of samples**: How many samples should be generated (between 1 and 50).
-
-**Limit**: Hypercube limits in the latent space.
-
-**Number of steps**: Number of steps per GP optimization round. More steps take longer. Has to be at least `Number of initial points`.
-
-**Number of initial points**: Number of initial points evaluated. More points take longer.
-
-**Number of optimization rounds**: Maximum number of optimization rounds.
-
-**Sampling variance**: Variance of the Gaussian noise applied during sampling from the optimal point.
-
-**Samples for evaluation**: Number of samples averaged for each minimization function evaluation.
-
-**Max. sampling steps**: Maximum number of sampling steps in an optimization round.
-
-**Seed**: The random seed used for initialization.
-
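To make the **Decoding temperature** behaviour concrete, the sketch below shows generic softmax-with-temperature scaling; it illustrates the general mechanism only and is not code taken from PaccMannGP:

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    z = np.asarray(logits, dtype=np.float64) / temperature
    z -= z.max()                 # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, 0.5))  # low temperature: mass piles onto the top token (toward mode collapse)
print(softmax_with_temperature(logits, 2.0))  # high temperature: flatter distribution, more explorative sampling
```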
-
-
-# Model card -- PaccMannGP
-
-**Model Details**: [PaccMannGP](https://github.com/PaccMann/paccmann_gp) is a language-based Variational Autoencoder that is coupled with a Gaussian Process for controlled sampling. This model systematically explores the latent space of a trained molecular VAE.
-
-**Developers**: Jannis Born, Matteo Manica and colleagues from IBM Research.
-
-**Distributors**: Original authors' code wrapped and distributed by GT4SD Team (2023) from IBM Research.
-
-**Model date**: Published in 2022.
-
-**Model version**: A molecular VAE trained on 1.5M molecules from ChEMBL.
-
-**Model type**: A language-based molecular generative model that can be explored with Gaussian Processes to generate molecules with desired properties.
-
-**Information about training algorithms, parameters, fairness constraints or other applied approaches, and features**:
-Described in the [original paper](https://pubs.acs.org/doi/10.1021/acs.jcim.1c00889).
-
-**Paper or other resource for more information**:
-[Active Site Sequence Representations of Human Kinases Outperform Full Sequence Representations for Affinity Prediction and Inhibitor Generation: 3D Effects in a 1D Model (2022; *Journal of Chemical Information & Modeling*)](https://pubs.acs.org/doi/10.1021/acs.jcim.1c00889).
-
-**License**: MIT
-
-**Where to send questions or comments about the model**: Open an issue on [GT4SD repository](https://github.com/GT4SD/gt4sd-core).
-
-**Intended Use. Use cases that were envisioned during development**: Chemical research, in particular drug discovery.
-
-**Primary intended uses/users**: Researchers and computational chemists using the model for model comparison or research exploration purposes.
-
-**Out-of-scope use cases**: Production-level inference, producing molecules with harmful properties.
-
-**Factors**: Not applicable.
-
-**Metrics**: High reward on generating molecules with desired properties.
-
-**Datasets**: ChEMBL.
-
-**Ethical Considerations**: Unclear, please consult with original authors in case of questions.
-
-**Caveats and Recommendations**: Unclear, please consult with original authors in case of questions.
-
-Model card prototype inspired by [Mitchell et al. (2019)](https://dl.acm.org/doi/abs/10.1145/3287560.3287596?casa_token=XD4eHiE2cRUAAAAA:NL11gMa1hGPOUKTAbtXnbVQBDBbjxwcjGECF_i-WC_3g1aBgU1Hbz_f2b4kI_m1in-w__1ztGeHnwHs)
-
-## Citation
-```bib
-@article{born2022active,
- author = {Born, Jannis and Huynh, Tien and Stroobants, Astrid and Cornell, Wendy D. and Manica, Matteo},
- title = {Active Site Sequence Representations of Human Kinases Outperform Full Sequence Representations for Affinity Prediction and Inhibitor Generation: 3D Effects in a 1D Model},
- journal = {Journal of Chemical Information and Modeling},
- volume = {62},
- number = {2},
- pages = {240-257},
- year = {2022},
- doi = {10.1021/acs.jcim.1c00889},
- note ={PMID: 34905358},
- URL = {https://doi.org/10.1021/acs.jcim.1c00889}
-}
-```
\ No newline at end of file
diff --git a/spaces/Gaeomg/Kaludi-chatgpt-gpt4-prompts-bart-large-cnn-samsum/README.md b/spaces/Gaeomg/Kaludi-chatgpt-gpt4-prompts-bart-large-cnn-samsum/README.md
deleted file mode 100644
index cb288c124e424be0e48d9e2f671acd9c1edb0587..0000000000000000000000000000000000000000
--- a/spaces/Gaeomg/Kaludi-chatgpt-gpt4-prompts-bart-large-cnn-samsum/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Kaludi Chatgpt Gpt4 Prompts Bart Large Cnn Samsum
-emoji: 👁
-colorFrom: green
-colorTo: blue
-sdk: gradio
-sdk_version: 3.28.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/GenerationsAI/GenAi-Pix2Pix-Video/app.py b/spaces/GenerationsAI/GenAi-Pix2Pix-Video/app.py
deleted file mode 100644
index 50254353b0ed70e4f40808d942f8948f7728f59e..0000000000000000000000000000000000000000
--- a/spaces/GenerationsAI/GenAi-Pix2Pix-Video/app.py
+++ /dev/null
@@ -1,236 +0,0 @@
-import gradio as gr
-import os
-import cv2
-import numpy as np
-from moviepy.editor import *
-from share_btn import community_icon_html, loading_icon_html, share_js
-
-from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler
-import torch
-from PIL import Image
-import time
-import psutil
-import random
-
-
-pipe = DiffusionPipeline.from_pretrained("timbrooks/instruct-pix2pix", torch_dtype=torch.float16, safety_checker=None)
-pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
-pipe.enable_xformers_memory_efficient_attention()
-pipe.unet.to(memory_format=torch.channels_last)
-
-device = "GPU 🔥" if torch.cuda.is_available() else "CPU 🥶"
-
-if torch.cuda.is_available():
- pipe = pipe.to("cuda")
-
-def pix2pix(
- prompt,
- text_guidance_scale,
- image_guidance_scale,
- image,
- steps,
- neg_prompt="",
- width=512,
- height=512,
- seed=0,
-):
- print(psutil.virtual_memory()) # print memory usage
-
- if seed == 0:
- seed = random.randint(0, 2147483647)
-
- generator = torch.Generator("cuda").manual_seed(seed)
-
- try:
- image = Image.open(image)
- ratio = min(height / image.height, width / image.width)
- image = image.resize((int(image.width * ratio), int(image.height * ratio)), Image.LANCZOS)
-
- result = pipe(
- prompt,
- negative_prompt=neg_prompt,
- image=image,
- num_inference_steps=int(steps),
- image_guidance_scale=image_guidance_scale,
- guidance_scale=text_guidance_scale,
- generator=generator,
- )
-
- # return replace_nsfw_images(result)
- return result.images, result.nsfw_content_detected, seed
- except Exception as e:
- return None, None, error_str(e)
-
-def error_str(error, title="Error"):
- return (
- f"""#### {title}
- {error}"""
- if error
- else ""
- )
-
-def get_frames(video_in):
- frames = []
- #resize the video
- clip = VideoFileClip(video_in)
-
- #check fps
- if clip.fps > 30:
-        print("video frame rate is over 30, resetting to 30")
- clip_resized = clip.resize(height=512)
- clip_resized.write_videofile("video_resized.mp4", fps=30)
- else:
- print("video rate is OK")
- clip_resized = clip.resize(height=512)
- clip_resized.write_videofile("video_resized.mp4", fps=clip.fps)
-
- print("video resized to 512 height")
-
- # Opens the Video file with CV2
- cap= cv2.VideoCapture("video_resized.mp4")
-
- fps = cap.get(cv2.CAP_PROP_FPS)
- print("video fps: " + str(fps))
- i=0
- while(cap.isOpened()):
- ret, frame = cap.read()
- if ret == False:
- break
- cv2.imwrite('kang'+str(i)+'.jpg',frame)
- frames.append('kang'+str(i)+'.jpg')
- i+=1
-
- cap.release()
- cv2.destroyAllWindows()
- print("broke the video into frames")
-
- return frames, fps
-
-
-def create_video(frames, fps):
- print("building video result")
- clip = ImageSequenceClip(frames, fps=fps)
- clip.write_videofile("movie.mp4", fps=fps)
-
- return 'movie.mp4'
-
-
-def infer(prompt,video_in, seed_in, trim_value):
- print(prompt)
- break_vid = get_frames(video_in)
-
- frames_list= break_vid[0]
- fps = break_vid[1]
- n_frame = int(trim_value*fps)
-
- if n_frame >= len(frames_list):
- print("video is shorter than the cut value")
- n_frame = len(frames_list)
-
- result_frames = []
- print("set stop frames to: " + str(n_frame))
-
- for i in frames_list[0:int(n_frame)]:
- pix2pix_img = pix2pix(prompt,5.5,1.5,i,15,"",512,512,seed_in)
- images = pix2pix_img[0]
- rgb_im = images[0].convert("RGB")
-
- # exporting the image
- rgb_im.save(f"result_img-{i}.jpg")
- result_frames.append(f"result_img-{i}.jpg")
- print("frame " + i + "/" + str(n_frame) + ": done;")
-
- final_vid = create_video(result_frames, fps)
- print("finished !")
-
- return final_vid, gr.Group.update(visible=True)
-
-title = """
-Pix2Pix Video
-Apply Instruct Pix2Pix Diffusion to a video
-"""
-
-article = """
-You may also like:
-"""
-
-with gr.Blocks(css='style.css') as demo:
- with gr.Column(elem_id="col-container"):
- gr.HTML(title)
- with gr.Row():
- with gr.Column():
- video_inp = gr.Video(label="Video source", source="upload", type="filepath", elem_id="input-vid")
- prompt = gr.Textbox(label="Prompt", placeholder="enter prompt", show_label=False, elem_id="prompt-in")
- with gr.Row():
- seed_inp = gr.Slider(label="Seed", minimum=0, maximum=2147483647, step=1, value=123456)
-                    trim_in = gr.Slider(label="Cut video at (s)", minimum=1, maximum=3, step=1, value=1)
- with gr.Column():
- video_out = gr.Video(label="Pix2pix video result", elem_id="video-output")
- gr.HTML("""
-
- work with longer videos / skip the queue:
- """, elem_id="duplicate-container")
- submit_btn = gr.Button("Generate Pix2Pix video")
-
- with gr.Group(elem_id="share-btn-container", visible=False) as share_group:
- community_icon = gr.HTML(community_icon_html)
- loading_icon = gr.HTML(loading_icon_html)
- share_button = gr.Button("Share to community", elem_id="share-btn")
-
- inputs = [prompt,video_inp,seed_inp, trim_in]
- outputs = [video_out, share_group]
-
- ex = gr.Examples(
- [
- ["Make it a marble sculpture", "./examples/pexels-jill-burrow-7665249_512x512.mp4", 422112651, 4],
- ["Make it molten lava", "./examples/Ocean_Pexels_ 8953474_512x512.mp4", 43571876, 4]
- ],
- inputs=inputs,
- outputs=outputs,
- fn=infer,
- cache_examples=True,
- )
-
- gr.HTML(article)
-
- submit_btn.click(infer, inputs, outputs)
- share_button.click(None, [], [], _js=share_js)
-
-
-
-demo.queue(max_size=12).launch()
diff --git a/spaces/GipAdonimus/Real-Time-Voice-Cloning/encoder/data_objects/random_cycler.py b/spaces/GipAdonimus/Real-Time-Voice-Cloning/encoder/data_objects/random_cycler.py
deleted file mode 100644
index c405db6b27f46d874d8feb37e3f9c1e12c251109..0000000000000000000000000000000000000000
--- a/spaces/GipAdonimus/Real-Time-Voice-Cloning/encoder/data_objects/random_cycler.py
+++ /dev/null
@@ -1,37 +0,0 @@
-import random
-
-class RandomCycler:
- """
- Creates an internal copy of a sequence and allows access to its items in a constrained random
- order. For a source sequence of n items and one or several consecutive queries of a total
- of m items, the following guarantees hold (one implies the other):
- - Each item will be returned between m // n and ((m - 1) // n) + 1 times.
- - Between two appearances of the same item, there may be at most 2 * (n - 1) other items.
- """
-
- def __init__(self, source):
- if len(source) == 0:
- raise Exception("Can't create RandomCycler from an empty collection")
- self.all_items = list(source)
- self.next_items = []
-
- def sample(self, count: int):
- shuffle = lambda l: random.sample(l, len(l))
-
- out = []
- while count > 0:
- if count >= len(self.all_items):
- out.extend(shuffle(list(self.all_items)))
- count -= len(self.all_items)
- continue
- n = min(count, len(self.next_items))
- out.extend(self.next_items[:n])
- count -= n
- self.next_items = self.next_items[n:]
- if len(self.next_items) == 0:
- self.next_items = shuffle(list(self.all_items))
- return out
-
- def __next__(self):
- return self.sample(1)[0]
-
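A quick check of the guarantees stated in the docstring above (assuming only the RandomCycler class and the standard library; the seed and items are arbitrary):

```python
import random

random.seed(0)
cycler = RandomCycler(["a", "b", "c"])
draws = cycler.sample(7)  # m = 7 draws from n = 3 items
print(draws)              # each item appears 7 // 3 = 2 or ((7 - 1) // 3) + 1 = 3 times
print(next(cycler))       # __next__ yields one item at a time
```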
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fcos/fcos_r101_caffe_fpn_gn-head_mstrain_640-800_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/fcos/fcos_r101_caffe_fpn_gn-head_mstrain_640-800_2x_coco.py
deleted file mode 100644
index 81f61c6ee136628940e8bcc146d785840ac83c38..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fcos/fcos_r101_caffe_fpn_gn-head_mstrain_640-800_2x_coco.py
+++ /dev/null
@@ -1,44 +0,0 @@
-_base_ = './fcos_r50_caffe_fpn_gn-head_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://detectron/resnet101_caffe',
- backbone=dict(depth=101))
-img_norm_cfg = dict(
- mean=[102.9801, 115.9465, 122.7717], std=[1.0, 1.0, 1.0], to_rgb=False)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True),
- dict(
- type='Resize',
- img_scale=[(1333, 640), (1333, 800)],
- multiscale_mode='value',
- keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- samples_per_gpu=2,
- workers_per_gpu=2,
- train=dict(pipeline=train_pipeline),
- val=dict(pipeline=test_pipeline),
- test=dict(pipeline=test_pipeline))
-# learning policy
-lr_config = dict(step=[16, 22])
-runner = dict(type='EpochBasedRunner', max_epochs=24)
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gfl/gfl_r50_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/gfl/gfl_r50_fpn_1x_coco.py
deleted file mode 100644
index 29fb077369977688174a4c5e2a0cda548e8e3931..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gfl/gfl_r50_fpn_1x_coco.py
+++ /dev/null
@@ -1,57 +0,0 @@
-_base_ = [
- '../_base_/datasets/coco_detection.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-model = dict(
- type='GFL',
- pretrained='torchvision://resnet50',
- backbone=dict(
- type='ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch'),
- neck=dict(
- type='FPN',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- start_level=1,
- add_extra_convs='on_output',
- num_outs=5),
- bbox_head=dict(
- type='GFLHead',
- num_classes=80,
- in_channels=256,
- stacked_convs=4,
- feat_channels=256,
- anchor_generator=dict(
- type='AnchorGenerator',
- ratios=[1.0],
- octave_base_scale=8,
- scales_per_octave=1,
- strides=[8, 16, 32, 64, 128]),
- loss_cls=dict(
- type='QualityFocalLoss',
- use_sigmoid=True,
- beta=2.0,
- loss_weight=1.0),
- loss_dfl=dict(type='DistributionFocalLoss', loss_weight=0.25),
- reg_max=16,
- loss_bbox=dict(type='GIoULoss', loss_weight=2.0)),
- # training and testing settings
- train_cfg=dict(
- assigner=dict(type='ATSSAssigner', topk=9),
- allowed_border=-1,
- pos_weight=-1,
- debug=False),
- test_cfg=dict(
- nms_pre=1000,
- min_bbox_size=0,
- score_thr=0.05,
- nms=dict(type='nms', iou_threshold=0.6),
- max_per_img=100))
-# optimizer
-optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/mobilenet_v2/pspnet_m-v2-d8_512x512_160k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/mobilenet_v2/pspnet_m-v2-d8_512x512_160k_ade20k.py
deleted file mode 100644
index 5b72ac830be29b865ed52adaf41f2fe800f252cc..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/mobilenet_v2/pspnet_m-v2-d8_512x512_160k_ade20k.py
+++ /dev/null
@@ -1,12 +0,0 @@
-_base_ = '../pspnet/pspnet_r101-d8_512x512_160k_ade20k.py'
-model = dict(
- pretrained='mmcls://mobilenet_v2',
- backbone=dict(
- _delete_=True,
- type='MobileNetV2',
- widen_factor=1.,
- strides=(1, 2, 2, 1, 1, 1, 1),
- dilations=(1, 1, 1, 2, 2, 4, 4),
- out_indices=(1, 2, 4, 6)),
- decode_head=dict(in_channels=320),
- auxiliary_head=dict(in_channels=96))
diff --git a/spaces/GuXiaoBei/wechat-chatbot/docker/build.alpine.sh b/spaces/GuXiaoBei/wechat-chatbot/docker/build.alpine.sh
deleted file mode 100644
index 6fda600d2d6cac087c5798a53788e8d3da8e17d8..0000000000000000000000000000000000000000
--- a/spaces/GuXiaoBei/wechat-chatbot/docker/build.alpine.sh
+++ /dev/null
@@ -1,10 +0,0 @@
-#!/bin/bash
-
-CHATGPT_ON_WECHAT_TAG=1.0.2
-
-docker build -f Dockerfile.alpine \
- --build-arg CHATGPT_ON_WECHAT_VER=$CHATGPT_ON_WECHAT_TAG \
- -t zhayujie/chatgpt-on-wechat .
-
-docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:$CHATGPT_ON_WECHAT_TAG-alpine
-
\ No newline at end of file
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/fs_zen2_large_ocnli.sh b/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/fs_zen2_large_ocnli.sh
deleted file mode 100644
index 5598ee8027a9bc41c4c196d71d98341557e0f4eb..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/fs_zen2_large_ocnli.sh
+++ /dev/null
@@ -1,93 +0,0 @@
-#!/bin/bash
-#SBATCH --job-name=zen2_large_ocnli # create a short name for your job
-#SBATCH --nodes=1 # node count
-#SBATCH --ntasks=1 # total number of tasks across all nodes
-#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks)
-#SBATCH --gres=gpu:1 # number of gpus per node
-#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc.
-#SBATCH -o %x-%j.log # output and error file name (%x=job name, %j=job id)
-
-
-export CUDA_VISIBLE_DEVICES='6'
-export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions
-
-MODEL_NAME=zen2_large
-
-TASK=ocnli
-
-ZERO_STAGE=1
-STRATEGY=deepspeed_stage_${ZERO_STAGE}
-
-ROOT_DIR=/cognitive_comp/ganruyi/experiments/classification_finetune/${MODEL_NAME}_${TASK}
-if [ ! -d ${ROOT_DIR} ];then
- mkdir -p ${ROOT_DIR}
- echo ${ROOT_DIR} created!!!!!!!!!!!!!!
-else
- echo ${ROOT_DIR} exist!!!!!!!!!!!!!!!
-fi
-
-DATA_DIR=/cognitive_comp/yangping/data/ChineseCLUE_DATA/${TASK}_public/
-PRETRAINED_MODEL_PATH=/cognitive_comp/ganruyi/hf_models/zen/zh_zen_large_2.0
-
-CHECKPOINT_PATH=${ROOT_DIR}/ckpt/
-OUTPUT_PATH=${ROOT_DIR}/predict.json
-
-DATA_ARGS="\
- --data_dir $DATA_DIR \
- --train_data train.json \
- --valid_data dev.json \
- --test_data test.json \
- --train_batchsize 32 \
- --valid_batchsize 16 \
- --max_seq_length 128 \
- --texta_name sentence \
- --label_name label \
- --id_name id \
- --task_name ocnli \
- "
-
-MODEL_ARGS="\
- --learning_rate 2e-5 \
- --weight_decay 0.1 \
- --warmup_ratio 0.01 \
- --num_labels 3 \
- "
-
-MODEL_CHECKPOINT_ARGS="\
- --monitor val_acc \
- --save_top_k 3 \
- --mode max \
- --every_n_train_steps 100 \
- --save_weights_only True \
- --dirpath $CHECKPOINT_PATH \
- --filename model-{epoch:02d}-{val_acc:.4f} \
- "
-
-TRAINER_ARGS="\
- --max_epochs 10 \
- --gpus 1 \
- --check_val_every_n_epoch 1 \
- --val_check_interval 100 \
- --default_root_dir $ROOT_DIR \
- "
-
-
-options=" \
- --pretrained_model_path $PRETRAINED_MODEL_PATH \
- --vocab_file $PRETRAINED_MODEL_PATH/vocab.txt \
- --do_lower_case \
- --output_save_path $OUTPUT_PATH \
- $DATA_ARGS \
- $MODEL_ARGS \
- $MODEL_CHECKPOINT_ARGS \
- $TRAINER_ARGS \
-"
-SCRIPT_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/zen2_finetune/fengshen_sequence_level_ft_task.py
-/home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options
-
-# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif
-# python3 $SCRIPT_PATH $options
-# source activate base
-# singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options
-# /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options
-
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Harshveer/Finetuned_Diffusion_Max/style.css b/spaces/Harshveer/Finetuned_Diffusion_Max/style.css
deleted file mode 100644
index 9bfa78cc983f84693cf7cbab1e3bfd0e0d36c944..0000000000000000000000000000000000000000
--- a/spaces/Harshveer/Finetuned_Diffusion_Max/style.css
+++ /dev/null
@@ -1,24 +0,0 @@
-.finetuned-diffusion-div div{
- display:inline-flex;
- align-items:center;
- gap:.8rem;
- font-size:1.75rem
-}
-.finetuned-diffusion-div div h1{
- font-weight:900;
- margin-bottom:7px
-}
-.finetuned-diffusion-div p{
- margin-bottom:10px;
- font-size:94%
-}
-a{
- text-decoration:underline
-}
-.tabs{
- margin-top:0;
- margin-bottom:0
-}
-#gallery{
- min-height:20rem
-}
diff --git a/spaces/HighCWu/GPEN/retinaface/data/wider_face.py b/spaces/HighCWu/GPEN/retinaface/data/wider_face.py
deleted file mode 100644
index e1862d5bc432566a57c10b90412929b881bb9447..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/GPEN/retinaface/data/wider_face.py
+++ /dev/null
@@ -1,101 +0,0 @@
-import os
-import os.path
-import sys
-import torch
-import torch.utils.data as data
-import cv2
-import numpy as np
-
-class WiderFaceDetection(data.Dataset):
- def __init__(self, txt_path, preproc=None):
- self.preproc = preproc
- self.imgs_path = []
- self.words = []
- f = open(txt_path,'r')
- lines = f.readlines()
- isFirst = True
- labels = []
- for line in lines:
- line = line.rstrip()
- if line.startswith('#'):
- if isFirst==True:
- isFirst = False
- else:
- labels_copy = labels.copy()
- self.words.append(labels_copy)
- labels.clear()
- path = line[2:]
- path = txt_path.replace('label.txt','images/') + path
- self.imgs_path.append(path)
- else:
- line = line.split(' ')
- label = [float(x) for x in line]
- labels.append(label)
-
- self.words.append(labels)
-
- def __len__(self):
- return len(self.imgs_path)
-
- def __getitem__(self, index):
- img = cv2.imread(self.imgs_path[index])
- height, width, _ = img.shape
-
- labels = self.words[index]
- annotations = np.zeros((0, 15))
- if len(labels) == 0:
- return annotations
- for idx, label in enumerate(labels):
- annotation = np.zeros((1, 15))
- # bbox
- annotation[0, 0] = label[0] # x1
- annotation[0, 1] = label[1] # y1
- annotation[0, 2] = label[0] + label[2] # x2
- annotation[0, 3] = label[1] + label[3] # y2
-
- # landmarks
- annotation[0, 4] = label[4] # l0_x
- annotation[0, 5] = label[5] # l0_y
- annotation[0, 6] = label[7] # l1_x
- annotation[0, 7] = label[8] # l1_y
- annotation[0, 8] = label[10] # l2_x
- annotation[0, 9] = label[11] # l2_y
- annotation[0, 10] = label[13] # l3_x
- annotation[0, 11] = label[14] # l3_y
- annotation[0, 12] = label[16] # l4_x
- annotation[0, 13] = label[17] # l4_y
- if (annotation[0, 4]<0):
- annotation[0, 14] = -1
- else:
- annotation[0, 14] = 1
-
- annotations = np.append(annotations, annotation, axis=0)
- target = np.array(annotations)
- if self.preproc is not None:
- img, target = self.preproc(img, target)
-
- return torch.from_numpy(img), target
-
-def detection_collate(batch):
- """Custom collate fn for dealing with batches of images that have a different
- number of associated object annotations (bounding boxes).
-
- Arguments:
- batch: (tuple) A tuple of tensor images and lists of annotations
-
- Return:
- A tuple containing:
- 1) (tensor) batch of images stacked on their 0 dim
- 2) (list of tensors) annotations for a given image are stacked on 0 dim
- """
- targets = []
- imgs = []
- for _, sample in enumerate(batch):
- for _, tup in enumerate(sample):
- if torch.is_tensor(tup):
- imgs.append(tup)
- elif isinstance(tup, type(np.empty(0))):
- annos = torch.from_numpy(tup).float()
- targets.append(annos)
-
- return (torch.stack(imgs, 0), targets)
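A minimal sketch of detection_collate on synthetic samples (assuming only the function above, numpy, and torch; in real use it is passed as collate_fn to a DataLoader together with a preproc that resizes images to a common size so they can be stacked):

```python
import numpy as np
import torch

batch = [
    (torch.zeros(3, 640, 640), np.zeros((2, 15), dtype=np.float32)),  # image + 2 face annotations
    (torch.zeros(3, 640, 640), np.zeros((5, 15), dtype=np.float32)),  # image + 5 face annotations
]
images, targets = detection_collate(batch)
print(images.shape)                # torch.Size([2, 3, 640, 640])
print([t.shape for t in targets])  # [torch.Size([2, 15]), torch.Size([5, 15])]
```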
diff --git a/spaces/Hoodady/3DFuse/ldm/modules/midas/midas/__init__.py b/spaces/Hoodady/3DFuse/ldm/modules/midas/midas/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/HuggingFaceH4/falcon-chat/README.md b/spaces/HuggingFaceH4/falcon-chat/README.md
deleted file mode 100644
index 0b9fc060fc23144275a9c8130e9d8a019c9b5a3b..0000000000000000000000000000000000000000
--- a/spaces/HuggingFaceH4/falcon-chat/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Falcon-Chat
-emoji: 💬
-colorFrom: indigo
-colorTo: purple
-sdk: gradio
-sdk_version: 3.33.1
-app_file: app.py
-pinned: true
-license: apache-2.0
----
diff --git a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/cache_dir/HuggingFaceM4/OBELICS_default_train_texts/text_duplicates/text_duplicates.html b/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/cache_dir/HuggingFaceM4/OBELICS_default_train_texts/text_duplicates/text_duplicates.html
deleted file mode 100644
index 72a73cb3f6bd75f13e687ee364303cfc8a971362..0000000000000000000000000000000000000000
--- a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/cache_dir/HuggingFaceM4/OBELICS_default_train_texts/text_duplicates/text_duplicates.html
+++ /dev/null
@@ -1,110 +0,0 @@
-
duplicate_fraction
0.0011676271846117192
duplicates_dict
Church of the Holy Sepulchre
2
Get fresh music recommendations delivered to your inbox every Friday.
-We've updated our Terms of Use. You can review the changes here.
4
The Batman – watch the Bat and the Cat trailer
2
END_OF_DOCUMENT_TOKEN_TO_BE_REPLACED
140
My name is Geoff Le Pard. Once I was a lawyer; now I am a writer. I've published four books - Dead Flies and Sherry Trifle, My Father and Other Liars, Salisbury Square and Buster & Moo. In addition I have published three anthologies of short stories and a memoir of my mother. More will appear soon. I will try and continue to blog regularly at geofflepard.com about whatever takes my fancy. I hope it does yours too. These are my thoughts and no one else is to blame. If you want to nab anything I post, please acknowledge where it came from.
-View all posts by TanGental →
-This entry was posted in #writephoto, flash fiction, miscellany and tagged #writephoto, flash fiction. Bookmark the permalink.
2
Community content is available under CC-BY-SA unless otherwise noted.
-Advertisement
2
Save products on your wishlist to buy them later or share with your friends.
2
A €500m aid package for EU farmers, a derogation from greening obligations and supports for feed and fertiliser are being considered by the European Commission.
2
An 11-Year-Old Girl Advises Her Teacher On Punishment Methods – And...
2
Molly grew up in California but now lives in the oh-so-amazing state of Texas with her husband, daughter, and fur babies. When she’s not diving into the world of her characters, some of her hobbies include hiking, snowboarding, traveling, and long walks on the beach … which roughly translates to being a homebody with her hubby and dishing out movie quotes. She has a weakness for crude-humored movies and fried pickles, and loves curling up in a fluffy comforter during a thunderstorm … or under one in a bathtub if there are tornados. That way she can pretend they aren’t really happening.
2
The 9-year-old got into character, pairing her leather jacket and pants with Jackson’s own “Smooth Criminal” hat.
2
Highland's Maddie Dortch runs at the start of the race during the Triad Invitational on Wednesday, September 30, 2020 at Triad High School in Troy, Ill. Paul Halfacre, STLhighschoolsports.com
2
After excellent first-cut silage crops, it is a case of keeping the shoulder to the wheel to ensure fodder reserves are met for the coming winter. Declan Marren reports.
2
Scroll back to top
3
Already got the injury now what ☺️
-
-Suffer till it's better jk lol
2
We will write the formula as below:
2
There was an error retrieving images from Instagram. An attempt will be remade in a few minutes.
3
You can find out more about which cookies we are using or switch them off in settings.
-
-This website uses cookies so that we can provide you with the best user experience possible. Cookie information is stored in your browser and performs functions such as recognising you when you return to our website and helping our team to understand which sections of the website you find most interesting and useful.
-
-Strictly Necessary Cookie should be enabled at all times so that we can save your preferences for cookie settings.
-
-If you disable this cookie, we will not be able to save your preferences. This means that every time you visit this website you will need to enable or disable cookies again.
2
In the meantime, learn about Mobile Workers Compensation below through our articles and write-up!
2
Lowe's in south Fort Myers is one of several area stores that have restocked on essentials to include water, gas containers and generators in preparation for Hurricane Dorian. A manager at the Lowe's said, if needed, they will ship supplies to stores in areas hardest hit by Hurricane Dorian. Kinfay Moroti/The News-Press USA Today Network-Florida
-Fullscreen
2
There are no reviews yet.
2
80 Hindu couples tie the knot at mass wedding in Karachi
2
This website uses cookies to improve your experience while you navigate through the website. Out of these, the cookies that are categorized as necessary are stored on your browser as they are essential for the working of basic functionalities of the website. We also use third-party cookies that help us analyze and understand how you use this website. These cookies will be stored in your browser only with your consent. You also have the option to opt-out of these cookies. But opting out of some of these cookies may affect your browsing experience.
-Necessary Always Enabled
-
-Any cookies that may not be particularly necessary for the website to function and is used specifically to collect user personal data via analytics, ads, other embedded contents are termed as non-necessary cookies. It is mandatory to procure user consent prior to running these cookies on your website.
2
This site uses Akismet to reduce spam. Learn how your comment data is processed.
8
SEE ALL OF VELOCITY’S SUPERCARS AT PUKEKOHE HERE
2
skip to main | skip to sidebar
3
Posted 3 years ago by Yahoo
2
Not since van Gogh lopped off his ear has an artist’s knife been put to such good use.—Tessa Laird
-
-New Zealand collage artist Peter Madden draws much of his imagery from old issues of National Geographic. He plunders and reworks the magazine’s discredited ’empire of signs’ to forge his own. His surrealistic pictures, objects, and installations—with their watchmaker detail and intensity—have been described as ‘microcosms’ and ‘intricate kingdoms of flying forms’ Madden has one foot in the vanitas still-life tradition and the other in new-age thinking. On the one hand, he is death obsessed: a master of morbid decoupage. (Moths and butterflies—symbols of transient life—abound. His assemblages in bell jars suggest some Victorian taxidermist killing time in his parlour.) On the other hand, with his flocks, schools, and swarms of quivering animal energy, he revels in biodiversity and magic. Madden’s works manage to be at once morbid and abundant, rotting and blooming, creepy and fey. This book serveys Madden’s work of the last ten years
2
Fallout 4: How to Get Vertibird Support
2
For Fallout 4 on the PlayStation 4, a GameFAQs message board topic titled "Vertibirds going down constantly?".
2
I am a committed Piano tutor and composer with over 15 years experience teaching a wide range of pupils from children to...
2
We use cookies on our website to give you the most relevant experience by remembering your preferences and repeat visits. By clicking “Accept All”, you consent to the use of ALL the cookies. However, you may visit "Cookie Settings" to provide a controlled consent.
-Cookie SettingsAccept All
-Manage consent
-
-This website uses cookies to improve your experience while you navigate through the website. Out of these, the cookies that are categorized as necessary are stored on your browser as they are essential for the working of basic functionalities of the website. We also use third-party cookies that help us analyze and understand how you use this website. These cookies will be stored in your browser only with your consent. You also have the option to opt-out of these cookies. But opting out of some of these cookies may affect your browsing experience.
-Necessary Always Enabled
-Necessary cookies are absolutely essential for the website to function properly. These cookies ensure basic functionalities and security features of the website, anonymously.
-Functional
-Functional cookies help to perform certain functionalities like sharing the content of the website on social media platforms, collect feedbacks, and other third-party features.
-Performance
-Performance cookies are used to understand and analyze the key performance indexes of the website which helps in delivering a better user experience for the visitors.
-Analytics
-Analytical cookies are used to understand how visitors interact with the website. These cookies help provide information on metrics the number of visitors, bounce rate, traffic source, etc.
-Advertisement
-Advertisement cookies are used to provide visitors with relevant ads and marketing campaigns. These cookies track visitors across websites and collect information to provide customized ads.
-Others
-Other uncategorized cookies are those that are being analyzed and have not been classified into a category as yet.
-SAVE & ACCEPT
3
Serbia signs Memorandum of Understanding with USAID on energy efficiency
-
-Keep up with the latest trends and news of the CEE energy market! Sign up for our newsletters to receive curated news across the energy agenda in 20+ countries in Central and South-eastern Europe.
2
Concerns over effect of Rotorua plan
2
Jet skier in our wake
2
You may have missed
2
Showing posts from July, 2018
-Show all
2
EXCERPT
-As the band played, the dance floor filled. Nate looked over the top of his beer bottle as Rachel asked Grant to dance. It was shaping up to be a line dance and Grant, not looking like the cowboy boogie-type, begged off a second time.
-She flashed Caroline a hopeful grin. “Do you want to dance?”
-Caroline’s eyes darted to the dance floor. “I don’t know how to do that.”
-Rachel set her hands on her hips. She cocked her head toward the line forming behind them. “Come on. I’ll teach you.”
-Caroline shot Nate a pleading look as if asking him to save her. He bumped her shoulder instead. “Go ahead. Knock ’em dead.”
-And damn, if she didn’t. She picked up the steps quickly, laughing every time she turned the wrong way or kicked out the opposite foot. It wasn’t long before she was rocking the arms and rolling her hips, but with an ethereal quality Nate had never witnessed in a country line dance before. Beside her, Rachel moved to the music a little differently, more seductive, less inhibited. Side by side with Caroline, he began to suspect Rachel wasn’t as innocent and naive as her older brother wanted to believe. Nate continued to watch her dance, enthralled. He’d just as soon imagine his sisters naked as he would Caroline, but Rachel? She conjured up fantasies even he’d never imagined before.
-Grant paid no mind to Nate. His eyes were locked on Rachel’s long lithe body on the dance floor. She had a type, and this guy was it—tall, fair-haired, destined for a corner office. Nate brushed a hand over his scruffy face. Rachel could look him square in the eye when she wore heels. The only office he hoped to get was a concrete box with a pushout window.
-Jealousy spiked in his chest before he finally pushed back from the table and headed back to the bar.
-Faces flushed and smiling, Rachel and Caroline wove their way back to the table after he returned. He set a glass of water in front of Caroline, relieved to see Rachel drinking water, too.
-Good. He preferred her date tonight ended with her sober.
-Grant looked down at his phone as the band took a break and then leaned sideways to say something to Rachel. Nate sent her a curious look after Grant passed the bouncer and went outside.
-Rachel shrugged and set down her glass as recorded music started to play over the loudspeakers. “He said he had to take a call for work.”
-Caroline touched Nate’s shoulder. “Do you know which way is the toilet?”
-Rachel smiled when he pointed to the far end of the bar.
-Caroline stood. “I’ll be right back.”
-“It’s just called the toilet in Ireland,” Nate explained after Caroline disappeared into the crowd. “Tell me more about Kieran. How does he like his new home?”
-Rachel leaned her elbows on the table, her expression turning all sweet and sappy. “I think he’s happy. He meets me at the door every day when I get home and he likes to sleep in bed with me at night.”
-“Hmmm,” was the best Nate could do.
-She dropped her chin into her hands. “Can I ask you something?”
-“Sure.”
-“How much Irish do you speak?”
-He grinned, assuming cussing didn’t count. “I only know a few words that my father taught me.”
-Rachel’s lips twitched.
-“What?”
-“Your accent. You’re starting to sound a little bit like your girlfriend.”
-He could tell she was teasing him, but he still felt the color rising in his cheeks. “I told you, Caroline and I are friends.”
-She sat back and laughed as Lonestar’s “Amazed” began to play. “Matt’s right. Your Irish does come out when you’ve been drinking.”
-Nate just shrugged. His accent was a byproduct of parents born and raised in Ireland. His father was proud of his thick Irish accent. His mother tried not to speak with any accent at all, but sometimes it would sneak out when one of her four kids got her riled up. It snuck out on him, too, sometimes, and not just while he was drinking. Times Matt didn’t know about. Moments Nate wished Rachel did.
-Leaning closer, enough so that he could feel her warm breath on his cheek, she looked at him. “I have to ask you…did that kiss mean anything at all to you?”
-He didn’t know how to answer. He thought about lying or twisting the truth. Or just brushing her off altogether. But he couldn’t do it. “Of course it meant something to me. But it can’t happen again.”
-She let out a short laugh. “Then it didn’t mean much at all, did it?”
-He stared at her, his throat so tight he could barely breathe. He told himself to keep his mouth shut. Put her first. Forget her.
-But no, he looked over his shoulder for Caroline instead and then damn near lost his head. “Rachel, I’m crazy about you.” I love you! He clenched his jaw, determined to salvage the big fat mess he’d made. “But be realistic. I’m not the right guy for you.”
-She eased back with defiance. “Who says?”
-“How about we start with your brother?”
-Her lips pinched together. He’d hit a nerve. “Who says I’m looking for Mr. Right?”
-“What is that supposed to mean?”
-“It means I’m not looking for a ring, Nate. I want to go out, have fun, blow off a little steam. That doesn’t work for you, so I won’t bother you again.”
2
AUTHOR BIO
-Suzanne Winslow writes the kind of stories she loves to read—contemporary romance with relatable characters, unsung heroes and heroines, and true-to-life stories. Nurses, teachers, firefighters, and Marines top her list of champions. Give her a book about strong, brave characters with hidden vulnerabilities and a secret passion, and she’ll binge read to the end!
-Suzanne and her husband, along with their rescue dog, Murphy, call Upstate New York home. When she’s not reading or writing, she’s often planning a road trip, or if it’s summertime, hanging out at the lake. Connecting with readers through Instagram, Facebook, and newsletters is a favorite pastime.
-AUTHOR LINKS
-WEBSITE
-INSTAGRAM
-FACEBOOK
-GOODREADS
-AMAZON
2
After breaking the partition, a sturdy metal frame is placed to ensure the upper part of the wall is safely supported and to facilitate access to the roof.
2
From the window situated over the release module and behind glass we can watch the chicks without them seeing us.
2
During the release process, a young one-year-old male from the wild population visited the release module, attracted by the Colony Environment effect. It is probably an individual from the urban centre of San Vicente, where at least two pairs of lesser kestrels breed.
2
I’ve had a long love of books, and some of my most prized books are art books. This is a review of books from my collection that can be found on shelves in my studio. I will provide links when possible.
2
The Fairy Tales of Oscar Wilde
2
Just added to your cart
2
The West Side Lofts, a mixed-use development in the heart of Red Bank's antique district, brought a fresh infusion of downtown residents when it opened about four years ago. Tanya Breen
-Fullscreen
2
Interior of one of the apartments during the opening of Element, a new high-end 35 unit apartment complex along the Navesink River in Red Bank, NJ Wednesday May 29, 2019. Tanya Breen
-Fullscreen
2
How To Responsibly Donate To Ukrainian Causes
2
The Subtle Violence Of So...
2
Coronavirus: Fun things to do while social distancing
2
Barcelona try to make up for Messi’s lost time
2
The Milton and Tamar Maltz Performing Arts Center, located on East 105th Street and Ansel Road in Cleveland. Prior to being used by Case Western Reserve University, the building was The Temple-Tifereth Israel’s home until the 1970s.
2
It was all over before I knew it and I just could not believe I could see almost perfectly straight after the surgery. Read more...
2
Watch music on TV: AXS TV programming highlights for the week of April 15-21
2
The BL King’s Topographical Collection: "THE NORTH-EAST VIEW OF SCALEBY-CASTLE, IN THE COUNTY OF CUMBERLAND. "
2
Welcome to our store
2
We seek to promote lively discussion and debate. We believe that our users have the right to express themselves freely in a manner that is courteous and respectful of others' point of view and sensibility.
-
-Your message may be removed if we consider it to be:
-
-Repeated violations may lead to suspension and/or termination of your message posting privileges.
-
-www.tributes.in takes abuse very seriously: We will co-operate fully with law enforcement, including disclosure of your user ID, IP address and messaging history.
-
-Please review your message, you cannot delete/edit once it has been posted.
-
-READY TO GIVE THE MOST MEANINGFUL GIFT TO YOUR FAMILY?
-
-Give a Tribute to someone special and see how your family and friends react - it'll be priceless (trust us)!
2
How to start? Making a plan …
2
Victorian Fashion This era in fashion ranged primarily from the mid-1800s to the early 1900s. It' PowerPoint Presentation
2
How did the crisis grow between 1900-1914? PowerPoint Presentation
2
Meet Your Match on Dating Site with
2
Data Beams Down to Planet Comicon 2020
2
NBA Scoring Title Should Go To Durant Over Carmelo
2
Commendation: Made in Australia: The Future of Australian Cities by Dr Julian Bolleter and Professor Richard Weller (Perth).
2
WINNER: Dune
-Nightmare Alley
-The Power of the Dog
-The Tragedy of Macbeth
-West Side Story
2
Everything Women Need to Know About Triathlon
2
Police keep people away from the Century 16 theater in Aurora, CO, just outside Denver, after a shooting at the midnight premiere of The Dark Knight Rises where 12 people are confirmed dead and many more injured.
2
You don't have permission to register
2
Geoff Neal believes he “shut people up” by knocking out Vicente Luque, expects “everybody is going to try to wrestle me now”
2
The Great Famine and the Irish Diaspora in America ebook
2
Demystifying the Role of AI in Cybersecurity
-
-There's a lot of anticipation and expectation in business around the role of artificial...
2
“Pale Blue Dot” by The NaveBlues
2
D-Day for R. Kelly as sex-crimes trial gets underway
-1 month ago
-1 month ago
2
Culture Current: Teenagers Are Hosed, Here’s What We Can Do
2
Winter camouflage in the BC Cariboo!
2
How Science Denial Happens and What You Can Do About it
2
Processed with VSCO with c1 preset
2
The Late Late Show with James Corden on Carpool karaoke
2
PAUL HINCE AND NEIL YOUNG GRAB ALL THE POINTS FOR CITY
2
Details Taking place between the 1st May and the 31st October 2010, the Shanghai World Expo was the largest Expo the world had ever seen. Represent.....
2
The office buildings contrast with the old design from Tokyo Station.
2
The most northerly point of our road trip.
2
Pin On Anniversary Quotes And Wishes
2
Longeveron up 100% after FDA approves its Lomecel-B medical product
2
↓ Download Image
-Caption: Paul Medlock-Walton demonstrates Gameblox, which was developed by researchers at the Education Arcade, and allows users to create their own games.
-Credits: Photo: Casey Atkins
2
Is Buying Gold a Good Investment?
2
Team 2 – work together on this collaborative puzzle game
2
Meredith Rosenthal (center) spoke about pharmaceutical marketing's role in the opioid crisis. She is Gray professor of health economics at the Harvard T. H. Chan School of Public Health.
2
Rehabilitated borehole in use
2
This image from video provided by the FBI, shows Aaron Alexis moves through the hallways of Building #197 at the Washington Navy Yard on Sept. 16 in Washington, carrying a Remington 870 shotgun. Alexis, a 34-year-old former Navy reservist and IT contractor, shot and killed 12 people inside a Navy Yard building last week before being killed in a shootout with police. (AP Photo/FBI)
2
Pin by Ryann McBride on Humanoids in 2021 Character art
2
The Lebanese tourist was spared serious harm thanks to the rescue by local surfer Alik Reyes Narag and a French lifeguard ‘hero’. Photo: Pavida Anantarasmi
2
YOUTH MUST “I DO CARE”
2
Is GameStop the Next RadioShack?
2
\ No newline at end of file
diff --git a/spaces/ICML2022/ICML2022_papers/app.py b/spaces/ICML2022/ICML2022_papers/app.py
deleted file mode 100644
index f1327974d910e2334a34c0ee34e796acc0beeae4..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/ICML2022_papers/app.py
+++ /dev/null
@@ -1,65 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import gradio as gr
-
-from paper_list import PaperList
-
-DESCRIPTION = '# ICML 2022 Papers'
-NOTES = '''
-- [ICML 2022](https://icml.cc/Conferences/2022/)
-- [Proceedings](https://proceedings.mlr.press/v162/)
-'''
-
-paper_list = PaperList()
-
-with gr.Blocks(css='style.css') as demo:
- gr.Markdown(DESCRIPTION)
-
- search_box = gr.Textbox(
- label='Search Title',
- placeholder=
- 'You can search for titles with regular expressions. e.g. (? 0 else ""
- trailing_space = " " if len(after) > 0 else ""
-
- # detokenize
- before = detok.detokenize(before, return_str=True)
- pronoun = detok.detokenize([pronoun], return_str=True)
- after = detok.detokenize(after, return_str=True)
-
- # hack: when the pronoun ends in a period (or comma), move the
- # punctuation to the "after" part
- if pronoun.endswith(".") or pronoun.endswith(","):
- after = pronoun[-1] + trailing_space + after
- pronoun = pronoun[:-1]
-
- # hack: when the "after" part begins with a comma or period, remove
- # the trailing space
- if after.startswith(".") or after.startswith(","):
- trailing_space = ""
-
- # parse sentence with spacy
- sentence = nlp(before + leading_space + pronoun + trailing_space + after)
-
- # find pronoun span
- start = len(before + leading_space)
- first_pronoun_tok = find_token(sentence, start_pos=start)
- pronoun_span = find_span(sentence, pronoun, start=first_pronoun_tok.i)
- assert pronoun_span.text == pronoun
-
- if eval:
- # convert to format where pronoun is surrounded by "[]" and
- # query is surrounded by "_"
- query_span = find_span(sentence, query)
- query_with_ws = "_{}_{}".format(
- query_span.text,
- (" " if query_span.text_with_ws.endswith(" ") else ""),
- )
- pronoun_with_ws = "[{}]{}".format(
- pronoun_span.text,
- (" " if pronoun_span.text_with_ws.endswith(" ") else ""),
- )
- if query_span.start < pronoun_span.start:
- first = (query_span, query_with_ws)
- second = (pronoun_span, pronoun_with_ws)
- else:
- first = (pronoun_span, pronoun_with_ws)
- second = (query_span, query_with_ws)
- sentence = (
- sentence[: first[0].start].text_with_ws
- + first[1]
- + sentence[first[0].end : second[0].start].text_with_ws
- + second[1]
- + sentence[second[0].end :].text
- )
- yield sentence, sample.get("label", None)
- else:
- yield sentence, pronoun_span, query, sample.get("label", None)
-
-
-def winogrande_jsonl_iterator(input_fname, eval=False):
- with open(input_fname) as fin:
- for line in fin:
- sample = json.loads(line.strip())
- sentence, option1, option2 = (
- sample["sentence"],
- sample["option1"],
- sample["option2"],
- )
-
- pronoun_span = (sentence.index("_"), sentence.index("_") + 1)
-
- if eval:
- query, cand = option1, option2
- else:
- query = option1 if sample["answer"] == "1" else option2
- cand = option2 if sample["answer"] == "1" else option1
- yield sentence, pronoun_span, query, cand
-
-
-def filter_noun_chunks(
- chunks, exclude_pronouns=False, exclude_query=None, exact_match=False
-):
- if exclude_pronouns:
- chunks = [
- np
- for np in chunks
- if (np.lemma_ != "-PRON-" and not all(tok.pos_ == "PRON" for tok in np))
- ]
-
- if exclude_query is not None:
- excl_txt = [exclude_query.lower()]
- filtered_chunks = []
- for chunk in chunks:
- lower_chunk = chunk.text.lower()
- found = False
- for excl in excl_txt:
- if (
- not exact_match and (lower_chunk in excl or excl in lower_chunk)
- ) or lower_chunk == excl:
- found = True
- break
- if not found:
- filtered_chunks.append(chunk)
- chunks = filtered_chunks
-
- return chunks
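To make the intended use of `filter_noun_chunks` concrete, here is a minimal, hypothetical sketch: it assumes the helpers above are importable from a local module (the module name `wsc_utils` and the spaCy model `en_core_web_sm` are assumptions, not taken from this file).

```python
import spacy

# Assumption: the helpers above live in a local module named wsc_utils.
from wsc_utils import filter_noun_chunks

nlp = spacy.load("en_core_web_sm")  # small English model, installed separately
doc = nlp("The trophy would not fit in the suitcase because it was too big.")

# Drop pronoun-only chunks and any chunk overlapping the query mention.
candidates = filter_noun_chunks(
    list(doc.noun_chunks),
    exclude_pronouns=True,
    exclude_query="the trophy",
    exact_match=False,
)
print([chunk.text for chunk in candidates])  # expected to keep "the suitcase"
```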
diff --git a/spaces/IDKiro/DehazeFormer_Demo/models/dehazeformer.py b/spaces/IDKiro/DehazeFormer_Demo/models/dehazeformer.py
deleted file mode 100644
index 11be0da4ae5bae5ceeb463ee4cd3b3d7ee0f00c7..0000000000000000000000000000000000000000
--- a/spaces/IDKiro/DehazeFormer_Demo/models/dehazeformer.py
+++ /dev/null
@@ -1,474 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-class RLN(nn.Module):
- r"""Revised LayerNorm"""
- def __init__(self, dim, eps=1e-5, detach_grad=False):
- super(RLN, self).__init__()
- self.eps = eps
- self.detach_grad = detach_grad
-
- self.weight = nn.Parameter(torch.ones((1, dim, 1, 1)))
- self.bias = nn.Parameter(torch.zeros((1, dim, 1, 1)))
-
- self.meta1 = nn.Conv2d(1, dim, 1)
- self.meta2 = nn.Conv2d(1, dim, 1)
-
- def forward(self, input):
- mean = torch.mean(input, dim=(1, 2, 3), keepdim=True)
- std = torch.sqrt((input - mean).pow(2).mean(dim=(1, 2, 3), keepdim=True) + self.eps)
-
- normalized_input = (input - mean) / std
-
- if self.detach_grad:
- rescale, rebias = self.meta1(std.detach()), self.meta2(mean.detach())
- else:
- rescale, rebias = self.meta1(std), self.meta2(mean)
-
- out = normalized_input * self.weight + self.bias
- return out, rescale, rebias
-
-
-class Mlp(nn.Module):
- def __init__(self, network_depth, in_features, hidden_features=None, out_features=None):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
-
- self.network_depth = network_depth
-
- self.mlp = nn.Sequential(
- nn.Conv2d(in_features, hidden_features, 1),
- nn.ReLU(True),
- nn.Conv2d(hidden_features, out_features, 1)
- )
-
- def forward(self, x):
- return self.mlp(x)
-
-
-def window_partition(x, window_size):
- B, H, W, C = x.shape
- x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
- windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size**2, C)
- return windows
-
-
-def window_reverse(windows, window_size, H, W):
- B = int(windows.shape[0] / (H * W / window_size / window_size))
- x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
- x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
- return x
-
-
-def get_relative_positions(window_size):
- coords_h = torch.arange(window_size)
- coords_w = torch.arange(window_size)
-
- coords = torch.stack(torch.meshgrid([coords_h, coords_w], indexing="ij")) # 2, Wh, Ww
- coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
- relative_positions = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
-
- relative_positions = relative_positions.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
- relative_positions_log = torch.sign(relative_positions) * torch.log(1. + relative_positions.abs())
-
- return relative_positions_log
-
-
-class WindowAttention(nn.Module):
- def __init__(self, dim, window_size, num_heads):
-
- super().__init__()
- self.dim = dim
- self.window_size = window_size # Wh, Ww
- self.num_heads = num_heads
- head_dim = dim // num_heads
- self.scale = head_dim ** -0.5
-
- relative_positions = get_relative_positions(self.window_size)
- self.register_buffer("relative_positions", relative_positions)
- self.meta = nn.Sequential(
- nn.Linear(2, 256, bias=True),
- nn.ReLU(True),
- nn.Linear(256, num_heads, bias=True)
- )
-
- self.softmax = nn.Softmax(dim=-1)
-
- def forward(self, qkv):
- B_, N, _ = qkv.shape
-
- qkv = qkv.reshape(B_, N, 3, self.num_heads, self.dim // self.num_heads).permute(2, 0, 3, 1, 4)
-
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- q = q * self.scale
- attn = (q @ k.transpose(-2, -1))
-
- relative_position_bias = self.meta(self.relative_positions)
- relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
- attn = attn + relative_position_bias.unsqueeze(0)
-
- attn = self.softmax(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B_, N, self.dim)
- return x
-
-
-class Attention(nn.Module):
- def __init__(self, network_depth, dim, num_heads, window_size, shift_size, use_attn=False, conv_type=None):
- super().__init__()
- self.dim = dim
- self.head_dim = int(dim // num_heads)
- self.num_heads = num_heads
-
- self.window_size = window_size
- self.shift_size = shift_size
-
- self.network_depth = network_depth
- self.use_attn = use_attn
- self.conv_type = conv_type
-
- if self.conv_type == 'Conv':
- self.conv = nn.Sequential(
- nn.Conv2d(dim, dim, kernel_size=3, padding=1, padding_mode='reflect'),
- nn.ReLU(True),
- nn.Conv2d(dim, dim, kernel_size=3, padding=1, padding_mode='reflect')
- )
-
- if self.conv_type == 'DWConv':
- self.conv = nn.Conv2d(dim, dim, kernel_size=5, padding=2, groups=dim, padding_mode='reflect')
-
- if self.conv_type == 'DWConv' or self.use_attn:
- self.V = nn.Conv2d(dim, dim, 1)
- self.proj = nn.Conv2d(dim, dim, 1)
-
- if self.use_attn:
- self.QK = nn.Conv2d(dim, dim * 2, 1)
- self.attn = WindowAttention(dim, window_size, num_heads)
-
- def check_size(self, x, shift=False):
- _, _, h, w = x.size()
- mod_pad_h = (self.window_size - h % self.window_size) % self.window_size
- mod_pad_w = (self.window_size - w % self.window_size) % self.window_size
-
- if shift:
- x = F.pad(x, (self.shift_size, (self.window_size-self.shift_size+mod_pad_w) % self.window_size,
- self.shift_size, (self.window_size-self.shift_size+mod_pad_h) % self.window_size), mode='reflect')
- else:
- x = F.pad(x, (0, mod_pad_w, 0, mod_pad_h), 'reflect')
- return x
-
- def forward(self, X):
- B, C, H, W = X.shape
-
- if self.conv_type == 'DWConv' or self.use_attn:
- V = self.V(X)
-
- if self.use_attn:
- QK = self.QK(X)
- QKV = torch.cat([QK, V], dim=1)
-
- # shift
- shifted_QKV = self.check_size(QKV, self.shift_size > 0)
- Ht, Wt = shifted_QKV.shape[2:]
-
- # partition windows
- shifted_QKV = shifted_QKV.permute(0, 2, 3, 1)
- qkv = window_partition(shifted_QKV, self.window_size) # nW*B, window_size**2, C
-
- attn_windows = self.attn(qkv)
-
- # merge windows
- shifted_out = window_reverse(attn_windows, self.window_size, Ht, Wt) # B H' W' C
-
- # reverse cyclic shift
- out = shifted_out[:, self.shift_size:(self.shift_size+H), self.shift_size:(self.shift_size+W), :]
- attn_out = out.permute(0, 3, 1, 2)
-
- if self.conv_type in ['Conv', 'DWConv']:
- conv_out = self.conv(V)
- out = self.proj(conv_out + attn_out)
- else:
- out = self.proj(attn_out)
-
- else:
- if self.conv_type == 'Conv':
- out = self.conv(X) # no attention and use conv, no projection
- elif self.conv_type == 'DWConv':
- out = self.proj(self.conv(V))
-
- return out
-
-
-class TransformerBlock(nn.Module):
- def __init__(self, network_depth, dim, num_heads, mlp_ratio=4.,
- norm_layer=nn.LayerNorm, mlp_norm=False,
- window_size=8, shift_size=0, use_attn=True, conv_type=None):
- super().__init__()
- self.use_attn = use_attn
- self.mlp_norm = mlp_norm
-
- self.norm1 = norm_layer(dim) if use_attn else nn.Identity()
- self.attn = Attention(network_depth, dim, num_heads=num_heads, window_size=window_size,
- shift_size=shift_size, use_attn=use_attn, conv_type=conv_type)
-
- self.norm2 = norm_layer(dim) if use_attn and mlp_norm else nn.Identity()
- self.mlp = Mlp(network_depth, dim, hidden_features=int(dim * mlp_ratio))
-
- def forward(self, x):
- identity = x
- if self.use_attn: x, rescale, rebias = self.norm1(x)
- x = self.attn(x)
- if self.use_attn: x = x * rescale + rebias
- x = identity + x
-
- identity = x
- if self.use_attn and self.mlp_norm: x, rescale, rebias = self.norm2(x)
- x = self.mlp(x)
- if self.use_attn and self.mlp_norm: x = x * rescale + rebias
- x = identity + x
- return x
-
-
-class BasicLayer(nn.Module):
- def __init__(self, network_depth, dim, depth, num_heads, mlp_ratio=4.,
- norm_layer=nn.LayerNorm, window_size=8,
- attn_ratio=0., attn_loc='last', conv_type=None):
-
- super().__init__()
- self.dim = dim
- self.depth = depth
-
- attn_depth = attn_ratio * depth
-
- if attn_loc == 'last':
- use_attns = [i >= depth-attn_depth for i in range(depth)]
- elif attn_loc == 'first':
- use_attns = [i < attn_depth for i in range(depth)]
- elif attn_loc == 'middle':
- use_attns = [i >= (depth-attn_depth)//2 and i < (depth+attn_depth)//2 for i in range(depth)]
-
- # build blocks
- self.blocks = nn.ModuleList([
- TransformerBlock(network_depth=network_depth,
- dim=dim,
- num_heads=num_heads,
- mlp_ratio=mlp_ratio,
- norm_layer=norm_layer,
- window_size=window_size,
- shift_size=0 if (i % 2 == 0) else window_size // 2,
- use_attn=use_attns[i], conv_type=conv_type)
- for i in range(depth)])
-
- def forward(self, x):
- for blk in self.blocks:
- x = blk(x)
- return x
-
-
-class PatchEmbed(nn.Module):
- def __init__(self, patch_size=4, in_chans=3, embed_dim=96, kernel_size=None):
- super().__init__()
- self.in_chans = in_chans
- self.embed_dim = embed_dim
-
- if kernel_size is None:
- kernel_size = patch_size
-
- self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=kernel_size, stride=patch_size,
- padding=(kernel_size-patch_size+1)//2, padding_mode='reflect')
-
- def forward(self, x):
- x = self.proj(x)
- return x
-
-
-class PatchUnEmbed(nn.Module):
- def __init__(self, patch_size=4, out_chans=3, embed_dim=96, kernel_size=None):
- super().__init__()
- self.out_chans = out_chans
- self.embed_dim = embed_dim
-
- if kernel_size is None:
- kernel_size = 1
-
- self.proj = nn.Sequential(
- nn.Conv2d(embed_dim, out_chans*patch_size**2, kernel_size=kernel_size,
- padding=kernel_size//2, padding_mode='reflect'),
- nn.PixelShuffle(patch_size)
- )
-
- def forward(self, x):
- x = self.proj(x)
- return x
-
-
-class SKFusion(nn.Module):
- def __init__(self, dim, height=2, reduction=8):
- super(SKFusion, self).__init__()
-
- self.height = height
- d = max(int(dim/reduction), 4)
-
- self.avg_pool = nn.AdaptiveAvgPool2d(1)
- self.mlp = nn.Sequential(
- nn.Conv2d(dim, d, 1, bias=False),
- nn.ReLU(),
- nn.Conv2d(d, dim*height, 1, bias=False)
- )
-
- self.softmax = nn.Softmax(dim=1)
-
- def forward(self, in_feats):
- B, C, H, W = in_feats[0].shape
-
- in_feats = torch.cat(in_feats, dim=1)
- in_feats = in_feats.view(B, self.height, C, H, W)
-
- feats_sum = torch.sum(in_feats, dim=1)
- attn = self.mlp(self.avg_pool(feats_sum))
- attn = self.softmax(attn.view(B, self.height, C, 1, 1))
-
- out = torch.sum(in_feats*attn, dim=1)
- return out
-
-
-class DehazeFormer(nn.Module):
- def __init__(self, in_chans=3, out_chans=3, window_size=8,
- embed_dims=[24, 48, 96, 48, 24],
- mlp_ratios=[2., 2., 4., 2., 2.],
- depths=[4, 4, 8, 4, 4],
- num_heads=[2, 4, 6, 4, 2],
- attn_ratio=[1., 1., 1., 1., 1.],
- conv_type=['DWConv', 'DWConv', 'DWConv', 'DWConv', 'DWConv'],
- norm_layer=[RLN, RLN, RLN, RLN, RLN]):
- super(DehazeFormer, self).__init__()
-
- # setting
- self.patch_size = 4
- self.window_size = window_size
- self.mlp_ratios = mlp_ratios
-
- # split image into non-overlapping patches
- self.patch_embed = PatchEmbed(
- patch_size=1, in_chans=in_chans, embed_dim=embed_dims[0], kernel_size=3)
-
- # backbone
- self.layer1 = BasicLayer(network_depth=sum(depths), dim=embed_dims[0], depth=depths[0],
- num_heads=num_heads[0], mlp_ratio=mlp_ratios[0],
- norm_layer=norm_layer[0], window_size=window_size,
- attn_ratio=attn_ratio[0], attn_loc='last', conv_type=conv_type[0])
-
- self.patch_merge1 = PatchEmbed(
- patch_size=2, in_chans=embed_dims[0], embed_dim=embed_dims[1])
-
- self.skip1 = nn.Conv2d(embed_dims[0], embed_dims[0], 1)
-
- self.layer2 = BasicLayer(network_depth=sum(depths), dim=embed_dims[1], depth=depths[1],
- num_heads=num_heads[1], mlp_ratio=mlp_ratios[1],
- norm_layer=norm_layer[1], window_size=window_size,
- attn_ratio=attn_ratio[1], attn_loc='last', conv_type=conv_type[1])
-
- self.patch_merge2 = PatchEmbed(
- patch_size=2, in_chans=embed_dims[1], embed_dim=embed_dims[2])
-
- self.skip2 = nn.Conv2d(embed_dims[1], embed_dims[1], 1)
-
- self.layer3 = BasicLayer(network_depth=sum(depths), dim=embed_dims[2], depth=depths[2],
- num_heads=num_heads[2], mlp_ratio=mlp_ratios[2],
- norm_layer=norm_layer[2], window_size=window_size,
- attn_ratio=attn_ratio[2], attn_loc='last', conv_type=conv_type[2])
-
- self.patch_split1 = PatchUnEmbed(
- patch_size=2, out_chans=embed_dims[3], embed_dim=embed_dims[2])
-
- assert embed_dims[1] == embed_dims[3]
- self.fusion1 = SKFusion(embed_dims[3])
-
- self.layer4 = BasicLayer(network_depth=sum(depths), dim=embed_dims[3], depth=depths[3],
- num_heads=num_heads[3], mlp_ratio=mlp_ratios[3],
- norm_layer=norm_layer[3], window_size=window_size,
- attn_ratio=attn_ratio[3], attn_loc='last', conv_type=conv_type[3])
-
- self.patch_split2 = PatchUnEmbed(
- patch_size=2, out_chans=embed_dims[4], embed_dim=embed_dims[3])
-
- assert embed_dims[0] == embed_dims[4]
- self.fusion2 = SKFusion(embed_dims[4])
-
- self.layer5 = BasicLayer(network_depth=sum(depths), dim=embed_dims[4], depth=depths[4],
- num_heads=num_heads[4], mlp_ratio=mlp_ratios[4],
- norm_layer=norm_layer[4], window_size=window_size,
- attn_ratio=attn_ratio[4], attn_loc='last', conv_type=conv_type[4])
-
- # merge non-overlapping patches into image
- self.patch_unembed = PatchUnEmbed(
- patch_size=1, out_chans=out_chans, embed_dim=embed_dims[4], kernel_size=3)
-
- def forward(self, x):
- x = self.patch_embed(x)
- x = self.layer1(x)
- skip1 = x
-
- x = self.patch_merge1(x)
- x = self.layer2(x)
- skip2 = x
-
- x = self.patch_merge2(x)
- x = self.layer3(x)
- x = self.patch_split1(x)
-
- x = self.fusion1([x, self.skip2(skip2)]) + x
- x = self.layer4(x)
- x = self.patch_split2(x)
-
- x = self.fusion2([x, self.skip1(skip1)]) + x
- x = self.layer5(x)
- x = self.patch_unembed(x)
- return x
-
-
-class MCT(nn.Module):
- def __init__(self):
- super(MCT, self).__init__()
- self.ts = 256
- self.l = 8
-
- self.dims = 3 * 3 * self.l
-
- self.basenet = DehazeFormer(3, self.dims)
-
- def get_coord(self, x):
- B, _, H, W = x.size()
-
- coordh, coordw = torch.meshgrid([torch.linspace(-1,1,H), torch.linspace(-1,1,W)], indexing="ij")
- coordh = coordh.unsqueeze(0).unsqueeze(1).repeat(B,1,1,1)
- coordw = coordw.unsqueeze(0).unsqueeze(1).repeat(B,1,1,1)
-
- return coordw.detach(), coordh.detach()
-
- def mapping(self, x, param):
- # curves
- curve = torch.stack(torch.chunk(param, 3, dim=1), dim=1)
- curve_list = list(torch.chunk(curve, 3, dim=2))
-
- # grid: x, y, z -> w, h, d ~[-1 ,1]
- x_list = list(torch.chunk(x.detach(), 3, dim=1))
- coordw, coordh = self.get_coord(x)
- grid_list = [torch.stack([coordw, coordh, x_i], dim=4) for x_i in x_list]
-
- # mapping
- out = sum([F.grid_sample(curve_i, grid_i, 'bilinear', 'border', True) \
- for curve_i, grid_i in zip(curve_list, grid_list)]).squeeze(2)
-
- return out # no Tanh is much better than using Tanh
-
- def forward(self, x):
- # param input
- x_d = F.interpolate(x, (self.ts, self.ts), mode='area')
- param = self.basenet(x_d)
- out = self.mapping(x, param)
- return out
diff --git a/spaces/Ikaros521/so-vits-svc-4.0-ikaros/spec_gen.py b/spaces/Ikaros521/so-vits-svc-4.0-ikaros/spec_gen.py
deleted file mode 100644
index 9476395adab6fa841fde10c05fbb92902310ebd4..0000000000000000000000000000000000000000
--- a/spaces/Ikaros521/so-vits-svc-4.0-ikaros/spec_gen.py
+++ /dev/null
@@ -1,22 +0,0 @@
-from data_utils import TextAudioSpeakerLoader
-import json
-from tqdm import tqdm
-
-from utils import HParams
-
-config_path = 'configs/config.json'
-with open(config_path, "r") as f:
- data = f.read()
-config = json.loads(data)
-hps = HParams(**config)
-
-train_dataset = TextAudioSpeakerLoader("filelists/train.txt", hps)
-test_dataset = TextAudioSpeakerLoader("filelists/test.txt", hps)
-eval_dataset = TextAudioSpeakerLoader("filelists/val.txt", hps)
-
-for _ in tqdm(train_dataset):
- pass
-for _ in tqdm(eval_dataset):
- pass
-for _ in tqdm(test_dataset):
- pass
\ No newline at end of file
diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/README.md b/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/README.md
deleted file mode 100644
index 4f5efb986bae5f1d93cb2862e677672ec42954cd..0000000000000000000000000000000000000000
--- a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/README.md
+++ /dev/null
@@ -1,171 +0,0 @@
-# Segment Anything
-
-**[Meta AI Research, FAIR](https://ai.facebook.com/research/)**
-
-[Alexander Kirillov](https://alexander-kirillov.github.io/), [Eric Mintun](https://ericmintun.github.io/), [Nikhila Ravi](https://nikhilaravi.com/), [Hanzi Mao](https://hanzimao.me/), Chloe Rolland, Laura Gustafson, [Tete Xiao](https://tetexiao.com), [Spencer Whitehead](https://www.spencerwhitehead.com/), Alex Berg, Wan-Yen Lo, [Piotr Dollar](https://pdollar.github.io/), [Ross Girshick](https://www.rossgirshick.info/)
-
-[[`Paper`](https://ai.facebook.com/research/publications/segment-anything/)] [[`Project`](https://segment-anything.com/)] [[`Demo`](https://segment-anything.com/demo)] [[`Dataset`](https://segment-anything.com/dataset/index.html)] [[`Blog`](https://ai.facebook.com/blog/segment-anything-foundation-model-image-segmentation/)] [[`BibTeX`](#citing-segment-anything)]
-
-
-
-The **Segment Anything Model (SAM)** produces high quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. It has been trained on a [dataset](https://segment-anything.com/dataset/index.html) of 11 million images and 1.1 billion masks, and has strong zero-shot performance on a variety of segmentation tasks.
-
-
-
-
-
-
-## Installation
-
-The code requires `python>=3.8`, as well as `pytorch>=1.7` and `torchvision>=0.8`. Please follow the instructions [here](https://pytorch.org/get-started/locally/) to install both PyTorch and TorchVision dependencies. Installing both PyTorch and TorchVision with CUDA support is strongly recommended.
-
-Install Segment Anything:
-
-```
-pip install git+https://github.com/facebookresearch/segment-anything.git
-```
-
-or clone the repository locally and install with
-
-```
-git clone git@github.com:facebookresearch/segment-anything.git
-cd segment-anything; pip install -e .
-```
-
-The following optional dependencies are necessary for mask post-processing, saving masks in COCO format, the example notebooks, and exporting the model in ONNX format. `jupyter` is also required to run the example notebooks.
-
-```
-pip install opencv-python pycocotools matplotlib onnxruntime onnx
-```
-
-## Getting Started
-
-First download a [model checkpoint](#model-checkpoints). Then the model can be used in just a few lines to get masks from a given prompt:
-
-```
-from segment_anything import SamPredictor, sam_model_registry
-sam = sam_model_registry["<model_type>"](checkpoint="<path/to/checkpoint>")
-predictor = SamPredictor(sam)
-predictor.set_image(<your_image>)
-masks, _, _ = predictor.predict(<input_prompts>)
-```
-
-or generate masks for an entire image:
-
-```
-from segment_anything import SamAutomaticMaskGenerator, sam_model_registry
-sam = sam_model_registry["<model_type>"](checkpoint="<path/to/checkpoint>")
-mask_generator = SamAutomaticMaskGenerator(sam)
-masks = mask_generator.generate(<your_image>)
-```
-
-Additionally, masks can be generated for images from the command line:
-
-```
-python scripts/amg.py --checkpoint <path/to/checkpoint> --model-type <model_type> --input <image_or_folder> --output <path/to/output>
-```
-
-See the examples notebooks on [using SAM with prompts](/notebooks/predictor_example.ipynb) and [automatically generating masks](/notebooks/automatic_mask_generator_example.ipynb) for more details.
-
-
-
-
-
-
-## ONNX Export
-
-SAM's lightweight mask decoder can be exported to ONNX format so that it can be run in any environment that supports ONNX runtime, such as in-browser as showcased in the [demo](https://segment-anything.com/demo). Export the model with
-
-```
-python scripts/export_onnx_model.py --checkpoint <path/to/checkpoint> --model-type <model_type> --output <path/to/output>
-```
-
-See the [example notebook](https://github.com/facebookresearch/segment-anything/blob/main/notebooks/onnx_model_example.ipynb) for details on how to combine image preprocessing via SAM's backbone with mask prediction using the ONNX model. It is recommended to use the latest stable version of PyTorch for ONNX export.
-
-### Web demo
-
-The `demo/` folder has a simple one page React app which shows how to run mask prediction with the exported ONNX model in a web browser with multithreading. Please see [`demo/README.md`](https://github.com/facebookresearch/segment-anything/blob/main/demo/README.md) for more details.
-
-## Model Checkpoints
-
-Three versions of the model are available with different backbone sizes. These models can be instantiated by running
-
-```
-from segment_anything import sam_model_registry
-sam = sam_model_registry["<model_type>"](checkpoint="<path/to/checkpoint>")
-```
-
-Click the links below to download the checkpoint for the corresponding model type.
-
-- **`default` or `vit_h`: [ViT-H SAM model.](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth)**
-- `vit_l`: [ViT-L SAM model.](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_l_0b3195.pth)
-- `vit_b`: [ViT-B SAM model.](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth)
-
-## Dataset
-
-See [here](https://ai.facebook.com/datasets/segment-anything/) for an overview of the dataset. The dataset can be downloaded [here](https://ai.facebook.com/datasets/segment-anything-downloads/). By downloading the datasets you agree that you have read and accepted the terms of the SA-1B Dataset Research License.
-
-We save masks per image as a json file. It can be loaded as a dictionary in python in the below format.
-
-```python
-{
- "image" : image_info,
- "annotations" : [annotation],
-}
-
-image_info {
- "image_id" : int, # Image id
- "width" : int, # Image width
- "height" : int, # Image height
- "file_name" : str, # Image filename
-}
-
-annotation {
- "id" : int, # Annotation id
- "segmentation" : dict, # Mask saved in COCO RLE format.
- "bbox" : [x, y, w, h], # The box around the mask, in XYWH format
- "area" : int, # The area in pixels of the mask
- "predicted_iou" : float, # The model's own prediction of the mask's quality
- "stability_score" : float, # A measure of the mask's quality
- "crop_box" : [x, y, w, h], # The crop of the image used to generate the mask, in XYWH format
- "point_coords" : [[x, y]], # The point coordinates input to the model to generate the mask
-}
-```
-
-Image ids can be found in sa_images_ids.txt which can be downloaded using the above [link](https://ai.facebook.com/datasets/segment-anything-downloads/) as well.
-
-To decode a mask in COCO RLE format into binary:
-
-```
-from pycocotools import mask as mask_utils
-mask = mask_utils.decode(annotation["segmentation"])
-```
-
-See [here](https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/mask.py) for more instructions to manipulate masks stored in RLE format.
-
-## License
-
-The model is licensed under the [Apache 2.0 license](LICENSE).
-
-## Contributing
-
-See [contributing](CONTRIBUTING.md) and the [code of conduct](CODE_OF_CONDUCT.md).
-
-## Contributors
-
-The Segment Anything project was made possible with the help of many contributors (alphabetical):
-
-Aaron Adcock, Vaibhav Aggarwal, Morteza Behrooz, Cheng-Yang Fu, Ashley Gabriel, Ahuva Goldstand, Allen Goodman, Sumanth Gurram, Jiabo Hu, Somya Jain, Devansh Kukreja, Robert Kuo, Joshua Lane, Yanghao Li, Lilian Luong, Jitendra Malik, Mallika Malhotra, William Ngan, Omkar Parkhi, Nikhil Raina, Dirk Rowe, Neil Sejoor, Vanessa Stark, Bala Varadarajan, Bram Wasti, Zachary Winstrom
-
-## Citing Segment Anything
-
-If you use SAM or SA-1B in your research, please use the following BibTeX entry.
-
-```
-@article{kirillov2023segany,
- title={Segment Anything},
- author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
- journal={arXiv:2304.02643},
- year={2023}
-}
-```
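Putting the snippets above together, a hedged end-to-end sketch with a single point prompt might look like the following; the checkpoint filename, image path, and point coordinates are placeholders, and the keyword arguments follow the predictor interface shown in the example notebooks.

```python
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Placeholder paths: download the ViT-H checkpoint from the links above.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# SamPredictor expects an RGB image as an HxWx3 uint8 array.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One foreground point (x, y) labeled 1; multimask_output returns 3 candidates.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
print(masks.shape, scores)  # (3, H, W) boolean masks with quality scores
```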
diff --git a/spaces/Jack7510/trychatgpt/app.py b/spaces/Jack7510/trychatgpt/app.py
deleted file mode 100644
index 0659e6c8ade4da4cca313bc4bc00db215632feb6..0000000000000000000000000000000000000000
--- a/spaces/Jack7510/trychatgpt/app.py
+++ /dev/null
@@ -1,62 +0,0 @@
-# example of chat with openAI
-
-import gradio as gr
-import openai
-import datetime
-import os
-
-# openAI Python program guide
-# https://github.com/openai/openai-python
-
-# Set the OpenAI API key
-openai.api_key = os.getenv("OPENAI_API_KEY")
-MODEL = "gpt-3.5-turbo"
-
-# Chat history log file name
-FILE_NAME = "chat_history.log"
-
-# Define the chat function
-def chat(question):
- try:
- # Send the API request
- completion = openai.ChatCompletion.create(
- model=MODEL,
- messages=[
- {"role": "system", "content": "You are a helpful assistant."},
- {"role": "user", "content": question},
- ],
- temperature=0.8,
- )
-
- response = completion.choices[0].message.content
-
- except openai.error.OpenAIError as e:  # base exception class of the openai 0.x SDK
- response = f"OpenAI API error: {e}"
-
- # Get the current date and time
- now = datetime.datetime.now()
-
- # Convert the date and time to a string
- date_string = now.strftime('%Y-%m-%d %H:%M:%S')
-
- # Append the question and answer to the chat history
- # Open the log file in append mode
- with open(FILE_NAME, 'a') as f:
- f.write(f'\n{date_string}\n')
- f.write('You: ' + question + '\n')
- f.write('chatGPT: ' + response + '\n')
-
- return response
-
-
-if __name__ == '__main__':
- # Create the Gradio app interface
- iface = gr.Interface(
- fn=chat,
- inputs="text",
- outputs='text',
- title="Chat with OpenAI 3.5",
- #description="Talk to an AI powered by OpenAI's GPT language model.",
- )
-
- iface.launch()
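For a quick smoke test outside the Gradio UI, a minimal sketch (assuming `OPENAI_API_KEY` is exported and the file above is saved as `app.py`) can call `chat` directly and then read the log it appends to:

```python
# Assumption: the file above is saved as app.py in the current directory.
from app import chat, FILE_NAME

reply = chat("In one sentence, what is Gradio?")
print(reply)

# Each call also appends a timestamped question/answer pair to the log file.
with open(FILE_NAME) as f:
    print(f.read())
```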
diff --git a/spaces/JoYCC/ICBU-NPU-FashionGPT-70B-V1.1/app.py b/spaces/JoYCC/ICBU-NPU-FashionGPT-70B-V1.1/app.py
deleted file mode 100644
index bfc1f9cfdb2775bcfae84f72f3bb3caefd451327..0000000000000000000000000000000000000000
--- a/spaces/JoYCC/ICBU-NPU-FashionGPT-70B-V1.1/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/ICBU-NPU/FashionGPT-70B-V1.1").launch()
\ No newline at end of file
diff --git a/spaces/KPCGD/bingo/src/pages/api/image.ts b/spaces/KPCGD/bingo/src/pages/api/image.ts
deleted file mode 100644
index 4b894bea86050c0f3888cc56f60c0cb7f8b57cfc..0000000000000000000000000000000000000000
--- a/spaces/KPCGD/bingo/src/pages/api/image.ts
+++ /dev/null
@@ -1,40 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import { debug } from '@/lib/isomorphic'
-import { createHeaders } from '@/lib/utils'
-import { createImage } from '@/lib/bots/bing/utils'
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- const { prompt, id } = req.query
- if (!prompt) {
- return res.json({
- result: {
- value: 'Image',
- message: 'No Prompt'
- }
- })
- }
- try {
- const headers = createHeaders(req.cookies, {
- IMAGE_BING_COOKIE: process.env.IMAGE_BING_COOKIE
- })
-
- debug('headers', headers)
- const response = await createImage(String(prompt), String(id), {
- ...headers,
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
- })
- res.writeHead(200, {
- 'Content-Type': 'text/plain; charset=UTF-8',
- })
- return res.end(response)
- } catch (e) {
- return res.json({
- result: {
- value: 'Error',
- message: `${e}`
- }
- })
- }
-}
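Since the handler above only reads `prompt` and `id` from the query string and streams back plain text, it can be exercised with a small client sketch like the one below; the base URL is a placeholder for a locally running instance, and the server still needs the Bing image cookie configured for the call to succeed.

```python
import requests

# Placeholder base URL for a local instance of the Next.js app.
BASE_URL = "http://localhost:3000"

resp = requests.get(
    f"{BASE_URL}/api/image",
    params={"prompt": "a watercolor lighthouse at dusk", "id": "demo-1"},
    timeout=60,
)
resp.raise_for_status()
print(resp.text)  # plain-text body produced by createImage (or a JSON error)
```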
diff --git a/spaces/Kaludi/Food-Category-Classification-And-Recipes-Recommender_App/pages/Food_Recipes.py b/spaces/Kaludi/Food-Category-Classification-And-Recipes-Recommender_App/pages/Food_Recipes.py
deleted file mode 100644
index 41ed59f87e587ae5268e012d2dcd00f635151050..0000000000000000000000000000000000000000
--- a/spaces/Kaludi/Food-Category-Classification-And-Recipes-Recommender_App/pages/Food_Recipes.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import streamlit as st
-import requests
-import json
-import random
-import re
-
-def main():
- st.title("Food Recipes")
- st.markdown("Food Recipe recommendation system based on user input for any food and maximum calories.")
- # Textbox for Food Type Input
- food_type = st.text_input('Enter Any Food')
-
- # Slider for Calories
- calories = st.slider("Select Max Calories", 25, 1000, 500)
- st.write("Selected: **{}** Max Calories.".format(calories))
- if st.button("Submit"):
- url = "https://alcksyjrmd.execute-api.us-east-2.amazonaws.com/default/nutrients_response"
-
- params = {"f": food_type.capitalize(), "k": str(calories)}
-
- response = requests.get(url, params=params)
- response_json = json.loads(response.content)
-
- # Convert response_json to a list
- response_json = list(response_json)
-
- # Randomly select a recipe
- st.markdown("## Recommended Recipe")
- if len(response_json) > 0:
- random_recipe = random.choice(response_json)
- recipe_calories = random_recipe['Calories']
- st.write("**Title:** ", random_recipe['Title'])
- st.write("**Calories:** ", recipe_calories)
- st.write("**Total Fat:** ", random_recipe['Total Fat'])
- st.write("**Total Carbohydrate:** ", random_recipe['Total Carbohydrate'])
- st.write("**Protein:** ", random_recipe['Protein'])
- st.write("**Tags:** ", random_recipe['Tags'])
- if random_recipe['Image Link'].endswith(".jpg") or random_recipe['Image Link'].endswith(".jpeg") or random_recipe['Image Link'].endswith(".png"):
- st.image(random_recipe['Image Link'], width=300)
- else:
- st.write("**Image Link:** ", random_recipe['Image Link'])
- st.write("**Recipe URL:** ", random_recipe['Recipe URLs'])
- st.write("*To download this recipe as a PDF, open the hamburger menu on the top right and click on Print.*")
- else:
- st.markdown("### No Recipes Found:")
- st.write("**No recipes found that match your search criteria. Please try a different food type.**")
-
-if __name__ == '__main__':
- main()
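The page above is essentially one GET request plus a random pick, so the same lookup can be sketched as a plain script; the endpoint URL, parameter names, and field names are taken from the code above, while the response being a JSON list of recipe objects is assumed from how it is consumed.

```python
import json
import random
import requests

URL = "https://alcksyjrmd.execute-api.us-east-2.amazonaws.com/default/nutrients_response"

# Same query parameters the Streamlit page builds: food name and max calories.
params = {"f": "Pasta", "k": "500"}

response = requests.get(URL, params=params, timeout=30)
recipes = list(json.loads(response.content))

if recipes:
    recipe = random.choice(recipes)
    print(recipe["Title"], "-", recipe["Calories"], "calories")
    print(recipe["Recipe URLs"])
else:
    print("No recipes found for these criteria.")
```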
diff --git a/spaces/Katsuki098/test03/Dockerfile b/spaces/Katsuki098/test03/Dockerfile
deleted file mode 100644
index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000
--- a/spaces/Katsuki098/test03/Dockerfile
+++ /dev/null
@@ -1,21 +0,0 @@
-FROM node:18-bullseye-slim
-
-RUN apt-get update && \
-    apt-get install -y git
-
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-
-WORKDIR /app
-
-RUN npm install
-
-COPY Dockerfile greeting.md* .env* ./
-
-RUN npm run build
-
-EXPOSE 7860
-
-ENV NODE_ENV=production
-
-CMD [ "npm", "start" ]
\ No newline at end of file
diff --git a/spaces/Kedareeshwar/Dental-Caries-Diagnosis/README.md b/spaces/Kedareeshwar/Dental-Caries-Diagnosis/README.md
deleted file mode 100644
index 8c22c5fa648b41cbe5ea506d654ccea6ebcb21c6..0000000000000000000000000000000000000000
--- a/spaces/Kedareeshwar/Dental-Caries-Diagnosis/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Dental Caries Diagnosis
-emoji: 🚀
-colorFrom: indigo
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.19.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/utils/logmmse.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/utils/logmmse.py
deleted file mode 100644
index 58cc4502fa5ba0670678c3edaf5ba1587b8b58ea..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/utils/logmmse.py
+++ /dev/null
@@ -1,247 +0,0 @@
-# The MIT License (MIT)
-#
-# Copyright (c) 2015 braindead
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy
-# of this software and associated documentation files (the "Software"), to deal
-# in the Software without restriction, including without limitation the rights
-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-# copies of the Software, and to permit persons to whom the Software is
-# furnished to do so, subject to the following conditions:
-#
-# The above copyright notice and this permission notice shall be included in all
-# copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-#
-#
-# This code was extracted from the logmmse package (https://pypi.org/project/logmmse/) and I
-# simply modified the interface to meet my needs.
-
-
-import numpy as np
-import math
-from scipy.special import expn
-from collections import namedtuple
-
-NoiseProfile = namedtuple("NoiseProfile", "sampling_rate window_size len1 len2 win n_fft noise_mu2")
-
-
-def profile_noise(noise, sampling_rate, window_size=0):
- """
- Creates a profile of the noise in a given waveform.
-
- :param noise: a waveform containing noise ONLY, as a numpy array of floats or ints.
- :param sampling_rate: the sampling rate of the audio
- :param window_size: the size of the window the logmmse algorithm operates on. A default value
- will be picked if left as 0.
- :return: a NoiseProfile object
- """
- noise, dtype = to_float(noise)
- noise += np.finfo(np.float64).eps
-
- if window_size == 0:
- window_size = int(math.floor(0.02 * sampling_rate))
-
- if window_size % 2 == 1:
- window_size = window_size + 1
-
- perc = 50
- len1 = int(math.floor(window_size * perc / 100))
- len2 = int(window_size - len1)
-
- win = np.hanning(window_size)
- win = win * len2 / np.sum(win)
- n_fft = 2 * window_size
-
- noise_mean = np.zeros(n_fft)
- n_frames = len(noise) // window_size
- for j in range(0, window_size * n_frames, window_size):
- noise_mean += np.absolute(np.fft.fft(win * noise[j:j + window_size], n_fft, axis=0))
- noise_mu2 = (noise_mean / n_frames) ** 2
-
- return NoiseProfile(sampling_rate, window_size, len1, len2, win, n_fft, noise_mu2)
-
-
-def denoise(wav, noise_profile: NoiseProfile, eta=0.15):
- """
- Cleans the noise from a speech waveform given a noise profile. The waveform must have the
- same sampling rate as the one used to create the noise profile.
-
- :param wav: a speech waveform as a numpy array of floats or ints.
- :param noise_profile: a NoiseProfile object that was created from a similar (or a segment of
- the same) waveform.
- :param eta: voice threshold for noise update. While the voice activation detection value is
- below this threshold, the noise profile will be continuously updated throughout the audio.
- Set to 0 to disable updating the noise profile.
- :return: the clean wav as a numpy array of floats or ints of the same length.
- """
- wav, dtype = to_float(wav)
- wav += np.finfo(np.float64).eps
- p = noise_profile
-
- nframes = int(math.floor(len(wav) / p.len2) - math.floor(p.window_size / p.len2))
- x_final = np.zeros(nframes * p.len2)
-
- aa = 0.98
- mu = 0.98
- ksi_min = 10 ** (-25 / 10)
-
- x_old = np.zeros(p.len1)
- xk_prev = np.zeros(p.len1)
- noise_mu2 = p.noise_mu2
- for k in range(0, nframes * p.len2, p.len2):
- insign = p.win * wav[k:k + p.window_size]
-
- spec = np.fft.fft(insign, p.n_fft, axis=0)
- sig = np.absolute(spec)
- sig2 = sig ** 2
-
- gammak = np.minimum(sig2 / noise_mu2, 40)
-
- if xk_prev.all() == 0:
- ksi = aa + (1 - aa) * np.maximum(gammak - 1, 0)
- else:
- ksi = aa * xk_prev / noise_mu2 + (1 - aa) * np.maximum(gammak - 1, 0)
- ksi = np.maximum(ksi_min, ksi)
-
- log_sigma_k = gammak * ksi/(1 + ksi) - np.log(1 + ksi)
- vad_decision = np.sum(log_sigma_k) / p.window_size
- if vad_decision < eta:
- noise_mu2 = mu * noise_mu2 + (1 - mu) * sig2
-
- a = ksi / (1 + ksi)
- vk = a * gammak
- ei_vk = 0.5 * expn(1, np.maximum(vk, 1e-8))
- hw = a * np.exp(ei_vk)
- sig = sig * hw
- xk_prev = sig ** 2
- xi_w = np.fft.ifft(hw * spec, p.n_fft, axis=0)
- xi_w = np.real(xi_w)
-
- x_final[k:k + p.len2] = x_old + xi_w[0:p.len1]
- x_old = xi_w[p.len1:p.window_size]
-
- output = from_float(x_final, dtype)
- output = np.pad(output, (0, len(wav) - len(output)), mode="constant")
- return output
-
-
-## Alternative VAD algorithm to webrctvad. It has the advantage of not requiring to install that
-## darn package and it also works for any sampling rate. Maybe I'll eventually use it instead of
-## webrctvad
-# def vad(wav, sampling_rate, eta=0.15, window_size=0):
-# """
-# TODO: fix doc
-# Creates a profile of the noise in a given waveform.
-#
-# :param wav: a waveform containing noise ONLY, as a numpy array of floats or ints.
-# :param sampling_rate: the sampling rate of the audio
-# :param window_size: the size of the window the logmmse algorithm operates on. A default value
-# will be picked if left as 0.
-# :param eta: voice threshold for noise update. While the voice activation detection value is
-# below this threshold, the noise profile will be continuously updated throughout the audio.
-# Set to 0 to disable updating the noise profile.
-# """
-# wav, dtype = to_float(wav)
-# wav += np.finfo(np.float64).eps
-#
-# if window_size == 0:
-# window_size = int(math.floor(0.02 * sampling_rate))
-#
-# if window_size % 2 == 1:
-# window_size = window_size + 1
-#
-# perc = 50
-# len1 = int(math.floor(window_size * perc / 100))
-# len2 = int(window_size - len1)
-#
-# win = np.hanning(window_size)
-# win = win * len2 / np.sum(win)
-# n_fft = 2 * window_size
-#
-# wav_mean = np.zeros(n_fft)
-# n_frames = len(wav) // window_size
-# for j in range(0, window_size * n_frames, window_size):
-# wav_mean += np.absolute(np.fft.fft(win * wav[j:j + window_size], n_fft, axis=0))
-# noise_mu2 = (wav_mean / n_frames) ** 2
-#
-# wav, dtype = to_float(wav)
-# wav += np.finfo(np.float64).eps
-#
-# nframes = int(math.floor(len(wav) / len2) - math.floor(window_size / len2))
-# vad = np.zeros(nframes * len2, dtype=np.bool)
-#
-# aa = 0.98
-# mu = 0.98
-# ksi_min = 10 ** (-25 / 10)
-#
-# xk_prev = np.zeros(len1)
-# noise_mu2 = noise_mu2
-# for k in range(0, nframes * len2, len2):
-# insign = win * wav[k:k + window_size]
-#
-# spec = np.fft.fft(insign, n_fft, axis=0)
-# sig = np.absolute(spec)
-# sig2 = sig ** 2
-#
-# gammak = np.minimum(sig2 / noise_mu2, 40)
-#
-# if xk_prev.all() == 0:
-# ksi = aa + (1 - aa) * np.maximum(gammak - 1, 0)
-# else:
-# ksi = aa * xk_prev / noise_mu2 + (1 - aa) * np.maximum(gammak - 1, 0)
-# ksi = np.maximum(ksi_min, ksi)
-#
-# log_sigma_k = gammak * ksi / (1 + ksi) - np.log(1 + ksi)
-# vad_decision = np.sum(log_sigma_k) / window_size
-# if vad_decision < eta:
-# noise_mu2 = mu * noise_mu2 + (1 - mu) * sig2
-# print(vad_decision)
-#
-# a = ksi / (1 + ksi)
-# vk = a * gammak
-# ei_vk = 0.5 * expn(1, np.maximum(vk, 1e-8))
-# hw = a * np.exp(ei_vk)
-# sig = sig * hw
-# xk_prev = sig ** 2
-#
-# vad[k:k + len2] = vad_decision >= eta
-#
-# vad = np.pad(vad, (0, len(wav) - len(vad)), mode="constant")
-# return vad
-
-
-def to_float(_input):
- if _input.dtype == np.float64:
- return _input, _input.dtype
- elif _input.dtype == np.float32:
- return _input.astype(np.float64), _input.dtype
- elif _input.dtype == np.uint8:
- return (_input - 128) / 128., _input.dtype
- elif _input.dtype == np.int16:
- return _input / 32768., _input.dtype
- elif _input.dtype == np.int32:
- return _input / 2147483648., _input.dtype
- raise ValueError('Unsupported wave file format')
-
-
-def from_float(_input, dtype):
- if dtype == np.float64:
- return _input  # float64 input needs no conversion; return the array itself
- elif dtype == np.float32:
- return _input.astype(np.float32)
- elif dtype == np.uint8:
- return ((_input * 128) + 128).astype(np.uint8)
- elif dtype == np.int16:
- return (_input * 32768).astype(np.int16)
- elif dtype == np.int32:
- print(_input)
- return (_input * 2147483648).astype(np.int32)
- raise ValueError('Unsupported wave file format')
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/ga_retina_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/ga_retina_head.py
deleted file mode 100644
index 569910b365126e90638256f0d10addfa230fd141..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/ga_retina_head.py
+++ /dev/null
@@ -1,120 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Tuple
-
-import torch.nn as nn
-from mmcv.cnn import ConvModule
-from mmcv.ops import MaskedConv2d
-from torch import Tensor
-
-from mmdet.registry import MODELS
-from mmdet.utils import OptConfigType, OptMultiConfig
-from .guided_anchor_head import FeatureAdaption, GuidedAnchorHead
-
-
-@MODELS.register_module()
-class GARetinaHead(GuidedAnchorHead):
- """Guided-Anchor-based RetinaNet head."""
-
- def __init__(self,
- num_classes: int,
- in_channels: int,
- stacked_convs: int = 4,
- conv_cfg: OptConfigType = None,
- norm_cfg: OptConfigType = None,
- init_cfg: OptMultiConfig = None,
- **kwargs) -> None:
- if init_cfg is None:
- init_cfg = dict(
- type='Normal',
- layer='Conv2d',
- std=0.01,
- override=[
- dict(
- type='Normal',
- name='conv_loc',
- std=0.01,
- bias_prob=0.01),
- dict(
- type='Normal',
- name='retina_cls',
- std=0.01,
- bias_prob=0.01)
- ])
- self.stacked_convs = stacked_convs
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- super().__init__(
- num_classes=num_classes,
- in_channels=in_channels,
- init_cfg=init_cfg,
- **kwargs)
-
- def _init_layers(self) -> None:
- """Initialize layers of the head."""
- self.relu = nn.ReLU(inplace=True)
- self.cls_convs = nn.ModuleList()
- self.reg_convs = nn.ModuleList()
- for i in range(self.stacked_convs):
- chn = self.in_channels if i == 0 else self.feat_channels
- self.cls_convs.append(
- ConvModule(
- chn,
- self.feat_channels,
- 3,
- stride=1,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg))
- self.reg_convs.append(
- ConvModule(
- chn,
- self.feat_channels,
- 3,
- stride=1,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg))
-
- self.conv_loc = nn.Conv2d(self.feat_channels, 1, 1)
- num_anchors = self.square_anchor_generator.num_base_priors[0]
- self.conv_shape = nn.Conv2d(self.feat_channels, num_anchors * 2, 1)
- self.feature_adaption_cls = FeatureAdaption(
- self.feat_channels,
- self.feat_channels,
- kernel_size=3,
- deform_groups=self.deform_groups)
- self.feature_adaption_reg = FeatureAdaption(
- self.feat_channels,
- self.feat_channels,
- kernel_size=3,
- deform_groups=self.deform_groups)
- self.retina_cls = MaskedConv2d(
- self.feat_channels,
- self.num_base_priors * self.cls_out_channels,
- 3,
- padding=1)
- self.retina_reg = MaskedConv2d(
- self.feat_channels, self.num_base_priors * 4, 3, padding=1)
-
- def forward_single(self, x: Tensor) -> Tuple[Tensor]:
- """Forward feature map of a single scale level."""
- cls_feat = x
- reg_feat = x
- for cls_conv in self.cls_convs:
- cls_feat = cls_conv(cls_feat)
- for reg_conv in self.reg_convs:
- reg_feat = reg_conv(reg_feat)
-
- loc_pred = self.conv_loc(cls_feat)
- shape_pred = self.conv_shape(reg_feat)
-
- cls_feat = self.feature_adaption_cls(cls_feat, shape_pred)
- reg_feat = self.feature_adaption_reg(reg_feat, shape_pred)
-
- if not self.training:
- mask = loc_pred.sigmoid()[0] >= self.loc_filter_thr
- else:
- mask = None
- cls_score = self.retina_cls(cls_feat, mask)
- bbox_pred = self.retina_reg(reg_feat, mask)
- return cls_score, bbox_pred, shape_pred, loc_pred
diff --git a/spaces/Lalo42/hassanblend-HassanBlend1.5.1.2/app.py b/spaces/Lalo42/hassanblend-HassanBlend1.5.1.2/app.py
deleted file mode 100644
index b7e8364d8c652e112c2298a87a324457694060f5..0000000000000000000000000000000000000000
--- a/spaces/Lalo42/hassanblend-HassanBlend1.5.1.2/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/hassanblend/HassanBlend1.5.1.2").launch()
\ No newline at end of file
diff --git a/spaces/Liu-LAB/GPT-academic/request_llm/bridge_newbing.py b/spaces/Liu-LAB/GPT-academic/request_llm/bridge_newbing.py
deleted file mode 100644
index 2136f01beb3edd25b94dd8048c20b63a14ef905e..0000000000000000000000000000000000000000
--- a/spaces/Liu-LAB/GPT-academic/request_llm/bridge_newbing.py
+++ /dev/null
@@ -1,254 +0,0 @@
-"""
-========================================================================
-第一部分:来自EdgeGPT.py
-https://github.com/acheong08/EdgeGPT
-========================================================================
-"""
-from .edge_gpt import NewbingChatbot
-load_message = "等待NewBing响应。"
-
-"""
-========================================================================
-第二部分:子进程Worker(调用主体)
-========================================================================
-"""
-import time
-import json
-import re
-import logging
-import asyncio
-import importlib
-import threading
-from toolbox import update_ui, get_conf, trimmed_format_exc
-from multiprocessing import Process, Pipe
-
-def preprocess_newbing_out(s):
-    pattern = r'\^(\d+)\^' # match ^number^ citations
-    sub = lambda m: '('+m.group(1)+')' # use the matched number, wrapped in parentheses, as the replacement
-    result = re.sub(pattern, sub, s) # perform the substitution
- if '[1]' in result:
- result += '\n\n```reference\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n'
- return result
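-# Example: the superscript citation "fact^1^" becomes "fact(1)", and any reference lines
-# beginning with "[" are additionally collected into a trailing ```reference``` code block.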
-
-def preprocess_newbing_out_simple(result):
- if '[1]' in result:
- result += '\n\n```reference\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n'
- return result
-
-class NewBingHandle(Process):
- def __init__(self):
- super().__init__(daemon=True)
- self.parent, self.child = Pipe()
- self.newbing_model = None
- self.info = ""
- self.success = True
- self.local_history = []
- self.check_dependency()
- self.start()
- self.threadLock = threading.Lock()
-
- def check_dependency(self):
- try:
- self.success = False
- import certifi, httpx, rich
- self.info = "依赖检测通过,等待NewBing响应。注意目前不能多人同时调用NewBing接口(有线程锁),否则将导致每个人的NewBing问询历史互相渗透。调用NewBing时,会自动使用已配置的代理。"
- self.success = True
-        except ImportError:
- self.info = "缺少的依赖,如果要使用Newbing,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_newbing.txt`安装Newbing的依赖。"
- self.success = False
-
- def ready(self):
- return self.newbing_model is not None
-
- async def async_run(self):
-        # Read configuration
- NEWBING_STYLE, = get_conf('NEWBING_STYLE')
- from request_llm.bridge_all import model_info
- endpoint = model_info['newbing']['endpoint']
- while True:
-            # Wait for a request from the main process
- kwargs = self.child.recv()
- question=kwargs['query']
- history=kwargs['history']
- system_prompt=kwargs['system_prompt']
-
-            # Reset the conversation if needed
- if len(self.local_history) > 0 and len(history)==0:
- await self.newbing_model.reset()
- self.local_history = []
-
-            # Start building the question
- prompt = ""
- if system_prompt not in self.local_history:
- self.local_history.append(system_prompt)
- prompt += system_prompt + '\n'
-
-            # Append the chat history
- for ab in history:
- a, b = ab
- if a not in self.local_history:
- self.local_history.append(a)
- prompt += a + '\n'
- # if b not in self.local_history:
- # self.local_history.append(b)
- # prompt += b + '\n'
-
-            # The question itself
- prompt += question
- self.local_history.append(question)
- print('question:', prompt)
-            # Submit the request
- async for final, response in self.newbing_model.ask_stream(
- prompt=question,
- conversation_style=NEWBING_STYLE, # ["creative", "balanced", "precise"]
- wss_link=endpoint, # "wss://sydney.bing.com/sydney/ChatHub"
- ):
- if not final:
- print(response)
- self.child.send(str(response))
- else:
- print('-------- receive final ---------')
- self.child.send('[Finish]')
- # self.local_history.append(response)
-
-
- def run(self):
- """
- 这个函数运行在子进程
- """
- # 第一次运行,加载参数
- self.success = False
- self.local_history = []
- if (self.newbing_model is None) or (not self.success):
-            # Proxy settings
- proxies, = get_conf('proxies')
- if proxies is None:
- self.proxies_https = None
- else:
- self.proxies_https = proxies['https']
- # cookie
- NEWBING_COOKIES, = get_conf('NEWBING_COOKIES')
- try:
- cookies = json.loads(NEWBING_COOKIES)
-            except Exception:
- self.success = False
- tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
- self.child.send(f'[Local Message] 不能加载Newbing组件。NEWBING_COOKIES未填写或有格式错误。')
- self.child.send('[Fail]')
- self.child.send('[Finish]')
- raise RuntimeError(f"不能加载Newbing组件。NEWBING_COOKIES未填写或有格式错误。")
-
- try:
- self.newbing_model = NewbingChatbot(proxy=self.proxies_https, cookies=cookies)
-        except Exception:
- self.success = False
- tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
- self.child.send(f'[Local Message] 不能加载Newbing组件。{tb_str}')
- self.child.send('[Fail]')
- self.child.send('[Finish]')
- raise RuntimeError(f"不能加载Newbing组件。")
-
- self.success = True
- try:
-            # Enter the task-waiting loop
- asyncio.run(self.async_run())
- except Exception:
- tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
- self.child.send(f'[Local Message] Newbing失败 {tb_str}.')
- self.child.send('[Fail]')
- self.child.send('[Finish]')
-
- def stream_chat(self, **kwargs):
- """
- 这个函数运行在主进程
- """
- self.threadLock.acquire()
-        self.parent.send(kwargs) # send the request to the child process
- while True:
-            res = self.parent.recv() # wait for a NewBing response fragment
- if res == '[Finish]':
-                break # finished
- elif res == '[Fail]':
- self.success = False
- break
- else:
-                yield res # a NewBing response fragment
- self.threadLock.release()
-
-
-"""
-========================================================================
-第三部分:主进程统一调用函数接口
-========================================================================
-"""
-global newbing_handle
-newbing_handle = None
-
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
- """
- 多线程方法
- 函数的说明请见 request_llm/bridge_all.py
- """
- global newbing_handle
- if (newbing_handle is None) or (not newbing_handle.success):
- newbing_handle = NewBingHandle()
- observe_window[0] = load_message + "\n\n" + newbing_handle.info
- if not newbing_handle.success:
- error = newbing_handle.info
- newbing_handle = None
- raise RuntimeError(error)
-
-    # There is no sys_prompt interface, so the prompt is folded into the history instead
- history_feedin = []
- for i in range(len(history)//2):
- history_feedin.append([history[2*i], history[2*i+1]] )
-
-    watch_dog_patience = 5 # watchdog patience; 5 seconds is enough
- response = ""
- observe_window[0] = "[Local Message]: 等待NewBing响应中 ..."
- for response in newbing_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
- observe_window[0] = preprocess_newbing_out_simple(response)
- if len(observe_window) >= 2:
- if (time.time()-observe_window[1]) > watch_dog_patience:
- raise RuntimeError("程序终止。")
- return preprocess_newbing_out_simple(response)
-
-def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
- """
- 单线程方法
- 函数的说明请见 request_llm/bridge_all.py
- """
- chatbot.append((inputs, "[Local Message]: 等待NewBing响应中 ..."))
-
- global newbing_handle
- if (newbing_handle is None) or (not newbing_handle.success):
- newbing_handle = NewBingHandle()
- chatbot[-1] = (inputs, load_message + "\n\n" + newbing_handle.info)
- yield from update_ui(chatbot=chatbot, history=[])
- if not newbing_handle.success:
- newbing_handle = None
- return
-
- if additional_fn is not None:
- import core_functional
-        importlib.reload(core_functional)    # hot-reload the prompt definitions
- core_functional = core_functional.get_core_functions()
- if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话)
- inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
-
- history_feedin = []
- for i in range(len(history)//2):
- history_feedin.append([history[2*i], history[2*i+1]] )
-
- chatbot[-1] = (inputs, "[Local Message]: 等待NewBing响应中 ...")
- response = "[Local Message]: 等待NewBing响应中 ..."
- yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")
- for response in newbing_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
- chatbot[-1] = (inputs, preprocess_newbing_out(response))
- yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")
- if response == "[Local Message]: 等待NewBing响应中 ...": response = "[Local Message]: NewBing响应异常,请刷新界面重试 ..."
- history.extend([inputs, response])
- logging.info(f'[raw_input] {inputs}')
- logging.info(f'[response] {response}')
- yield from update_ui(chatbot=chatbot, history=history, msg="完成全部响应,请提交新问题。")
-
diff --git a/spaces/LuChengTHU/dpmsolver_sdm/app.py b/spaces/LuChengTHU/dpmsolver_sdm/app.py
deleted file mode 100644
index 46536e1ba06ca1004295ce45e15e6c39d5c38560..0000000000000000000000000000000000000000
--- a/spaces/LuChengTHU/dpmsolver_sdm/app.py
+++ /dev/null
@@ -1,277 +0,0 @@
-from diffusers import AutoencoderKL, UNet2DConditionModel, StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler
-import gradio as gr
-import torch
-from PIL import Image
-import os
-
-scheduler = DPMSolverMultistepScheduler(
- beta_start=0.00085,
- beta_end=0.012,
- beta_schedule="scaled_linear",
- num_train_timesteps=1000,
- trained_betas=None,
- prediction_type="epsilon",
- thresholding=False,
- algorithm_type="dpmsolver++",
- solver_type="midpoint",
- lower_order_final=True,
-)
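-# Note: algorithm_type="dpmsolver++" with the 2nd-order midpoint multistep solver is what
-# lets this demo reach good quality in roughly 20-25 sampling steps instead of PNDM's ~50.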
-
-class Model:
- def __init__(self, name, path, prefix):
- self.name = name
- self.path = path
- self.prefix = prefix
- self.pipe_t2i = None
- self.pipe_i2i = None
-
-models = [
- Model("Stable-Diffusion-v1.4", "CompVis/stable-diffusion-v1-4", "The 1.4 version of official stable-diffusion"),
- Model("Waifu", "hakurei/waifu-diffusion", "anime style"),
-]
-
-last_mode = "txt2img"
-current_model = models[0]
-current_model_path = current_model.path
-
-auth_token = os.getenv("HUGGING_FACE_HUB_TOKEN")
-
-print(f"Is CUDA available: {torch.cuda.is_available()}")
-
-if torch.cuda.is_available():
- vae = AutoencoderKL.from_pretrained(current_model.path, subfolder="vae", torch_dtype=torch.float16, use_auth_token=auth_token)
-    for model in list(models): # iterate over a copy so failed models can be removed safely
- try:
- unet = UNet2DConditionModel.from_pretrained(model.path, subfolder="unet", torch_dtype=torch.float16, use_auth_token=auth_token)
- model.pipe_t2i = StableDiffusionPipeline.from_pretrained(model.path, unet=unet, vae=vae, torch_dtype=torch.float16, scheduler=scheduler, use_auth_token=auth_token)
- model.pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained(model.path, unet=unet, vae=vae, torch_dtype=torch.float16, scheduler=scheduler, use_auth_token=auth_token)
-        except Exception:
- models.remove(model)
- pipe = models[0].pipe_t2i
- pipe = pipe.to("cuda")
-
-else:
- vae = AutoencoderKL.from_pretrained(current_model.path, subfolder="vae", use_auth_token=auth_token)
-    for model in list(models): # iterate over a copy so failed models can be removed safely
- try:
- unet = UNet2DConditionModel.from_pretrained(model.path, subfolder="unet", use_auth_token=auth_token)
- model.pipe_t2i = StableDiffusionPipeline.from_pretrained(model.path, unet=unet, vae=vae, scheduler=scheduler, use_auth_token=auth_token)
- model.pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained(model.path, unet=unet, vae=vae, scheduler=scheduler, use_auth_token=auth_token)
-        except Exception:
- models.remove(model)
- pipe = models[0].pipe_t2i
- pipe = pipe.to("cpu")
-
-device = "GPU 🔥" if torch.cuda.is_available() else "CPU 🥶"
-
-def inference(model_name, prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt=""):
-
- global current_model
- for model in models:
- if model.name == model_name:
- current_model = model
- model_path = current_model.path
-
- generator = torch.Generator('cuda' if torch.cuda.is_available() else 'cpu').manual_seed(seed) if seed != 0 else None
-
- if img is not None:
- return img_to_img(model_path, prompt, neg_prompt, img, strength, guidance, steps, width, height, generator)
- else:
- return txt_to_img(model_path, prompt, neg_prompt, guidance, steps, width, height, generator)
-
-def txt_to_img(model_path, prompt, neg_prompt, guidance, steps, width, height, generator=None):
-
- global last_mode
- global pipe
- global current_model_path
- if model_path != current_model_path or last_mode != "txt2img":
- current_model_path = model_path
-
- pipe.to("cpu")
- pipe = current_model.pipe_t2i
-
- if torch.cuda.is_available():
- pipe = pipe.to("cuda")
- last_mode = "txt2img"
-
- prompt = current_model.prefix + prompt
- result = pipe(
- prompt,
- negative_prompt = neg_prompt,
- # num_images_per_prompt=n_images,
- num_inference_steps = int(steps),
- guidance_scale = guidance,
- width = width,
- height = height,
- generator = generator)
-
- return replace_nsfw_images(result)
-
-def img_to_img(model_path, prompt, neg_prompt, img, strength, guidance, steps, width, height, generator=None):
-
- global last_mode
- global pipe
- global current_model_path
- if model_path != current_model_path or last_mode != "img2img":
- current_model_path = model_path
-
- pipe.to("cpu")
- pipe = current_model.pipe_i2i
-
- if torch.cuda.is_available():
- pipe = pipe.to("cuda")
- last_mode = "img2img"
-
- prompt = current_model.prefix + prompt
- ratio = min(height / img.height, width / img.width)
- img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS)
- result = pipe(
- prompt,
- negative_prompt = neg_prompt,
- # num_images_per_prompt=n_images,
- init_image = img,
- num_inference_steps = int(steps),
- strength = strength,
- guidance_scale = guidance,
- #width = width,
- #height = height,
- generator = generator)
-
- return replace_nsfw_images(result)
-
-def replace_nsfw_images(results):
- for i in range(len(results.images)):
- if results.nsfw_content_detected[i]:
- results.images[i] = Image.open("nsfw.png")
- return results.images[0]
-
-css = """
-
-"""
-with gr.Blocks(css=css) as demo:
- gr.HTML(
- f"""
-
-
-
-      Stable-Diffusion with DPM-Solver (fastest sampler for diffusion models)
-
-
-
- ❤️ Acknowledgement: Hardware resources of this demo are supported by HuggingFace 🤗 . Many thanks for the help!
-
-
-
- This is a demo of sampling by DPM-Solver with two variants of Stable Diffusion models, including Stable-Diffusion-v1.4 and Waifu.
-
-
-
- DPM-Solver (Neurips 2022 Oral) is a fast high-order solver customized for diffusion ODEs, which can generate high-quality samples by diffusion models within only 10-25 steps. DPM-Solver has an analytical formulation and is very easy to use for all types of Gaussian diffusion models, and includes DDIM as a first-order special case.
-
-
- We use Diffusers 🧨 to implement this demo, which currently supports the multistep DPM-Solver scheduler. For more details of DPM-Solver with Diffusers, check this pull request.
-
-
-
- Currently, the default sampler of stable-diffusion is PNDM, which needs 50 steps to generate high-quality samples. However, DPM-Solver can generate high-quality samples within only 20-25 steps, and for some samples even within 10-15 steps.
-
-
-
- Running on {device}
-
-
- """
- )
-
- with gr.Row():
-
- with gr.Column(scale=55):
- with gr.Group():
- model_name = gr.Dropdown(label="Model", choices=[m.name for m in models], value=current_model.name)
- with gr.Row():
- prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder="Enter prompt. Style applied automatically").style(container=False)
- generate = gr.Button(value="Generate").style(rounded=(False, True, True, False))
-
-
- image_out = gr.Image(height=512)
- # gallery = gr.Gallery(
- # label="Generated images", show_label=False, elem_id="gallery"
- # ).style(grid=[1], height="auto")
-
- with gr.Column(scale=45):
- with gr.Tab("Options"):
- with gr.Group():
- neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image")
-
- # n_images = gr.Slider(label="Images", value=1, minimum=1, maximum=4, step=1)
-
- with gr.Row():
- guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15)
- steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=100, step=1)
-
- with gr.Row():
- width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8)
- height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8)
-
- seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1)
-
- with gr.Tab("Image to image"):
- with gr.Group():
- image = gr.Image(label="Image", height=256, tool="editor", type="pil")
- strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5)
-
- # model_name.change(lambda x: gr.update(visible = x == models[0].name), inputs=model_name, outputs=custom_model_group)
-
- inputs = [model_name, prompt, guidance, steps, width, height, seed, image, strength, neg_prompt]
- prompt.submit(inference, inputs=inputs, outputs=image_out)
-
- generate.click(inference, inputs=inputs, outputs=image_out)
-
-
- gr.Markdown('''
-    Stable-diffusion Models by [CompVis](https://huggingface.co/CompVis) and [stabilityai](https://huggingface.co/stabilityai), Waifu-diffusion models by [@hakurei](https://huggingface.co/hakurei). Most of the code of this demo is copied from [@anzorq's finetuned-diffusion](https://huggingface.co/spaces/anzorq/finetuned_diffusion/tree/main) ❤️
- Space by [Cheng Lu](https://github.com/LuChengTHU). [](https://twitter.com/ChengLu05671218)
-
- 
- ''')
-
-demo.queue(concurrency_count=1)
-demo.launch(debug=False, share=False)
diff --git a/spaces/Matthijs/image2reverb/image2reverb/util.py b/spaces/Matthijs/image2reverb/image2reverb/util.py
deleted file mode 100644
index b37bd91a2f8d73e368234285d69c521f907e24a1..0000000000000000000000000000000000000000
--- a/spaces/Matthijs/image2reverb/image2reverb/util.py
+++ /dev/null
@@ -1,167 +0,0 @@
-import os
-import math
-import numpy
-import torch
-import torch.fft
-from PIL import Image
-
-
-def compare_t60(a, b, sr=86):
- try:
- a = a.detach().clone().abs()
- b = b.detach().clone().abs()
- a = (a - a.min())/(a.max() - a.min())
- b = (b - b.min())/(b.max() - b.min())
- t_a = estimate_t60(a, sr)
- t_b = estimate_t60(b, sr)
- return abs((t_b - t_a)/t_a) * 100
-    except Exception:
-        return 100
-
-
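-# estimate_t60 uses Schroeder backward integration: it measures how long the energy decay
-# curve takes to fall from -5 dB to -25 dB below the initial level and extrapolates that
-# slope to a full 60 dB decay (RT60).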
-def estimate_t60(audio, sr):
- fs = float(sr)
- audio = audio.detach().clone()
-
- decay_db = 20
-
- # The power of the impulse response in dB
- power = audio ** 2
- energy = torch.flip(torch.cumsum(torch.flip(power, [0]), 0), [0]) # Integration according to Schroeder
-
- # remove the possibly all zero tail
- i_nz = torch.max(torch.where(energy > 0)[0])
- n = energy[:i_nz]
- db = 10 * torch.log10(n)
- db = db - db[0]
-
- # -5 dB headroom
- i_5db = torch.min(torch.where(-5 - db > 0)[0])
- e_5db = db[i_5db]
- t_5db = i_5db / fs
-
- # after decay
- i_decay = torch.min(torch.where(-5 - decay_db - db > 0)[0])
- t_decay = i_decay / fs
-
- # compute the decay time
- decay_time = t_decay - t_5db
- est_rt60 = (60 / decay_db) * decay_time
-
- return est_rt60
-
-def hilbert(x): #hilbert transform
- N = x.shape[1]
- Xf = torch.fft.fft(x, n=None, dim=-1)
- h = torch.zeros(N)
- if N % 2 == 0:
- h[0] = h[N//2] = 1
- h[1:N//2] = 2
- else:
- h[0] = 1
- h[1:(N + 1)//2] = 2
- x = torch.fft.ifft(Xf * h)
- return x
-
-
-def spectral_centroid(x): #calculate the spectral centroid "brightness" of an audio input
- Xf = torch.abs(torch.fft.fft(x,n=None,dim=-1)) #take fft and abs of x
- norm_Xf = Xf / sum(sum(Xf)) # like probability mass function
- norm_freqs = torch.linspace(0, 1, Xf.shape[1])
- spectral_centroid = sum(sum(norm_freqs * norm_Xf))
- return spectral_centroid
-
-
-# Converts a Tensor into a Numpy array
-# |imtype|: the desired type of the converted numpy array
-def tensor2im(image_tensor, imtype=numpy.uint8, normalize=True):
- if isinstance(image_tensor, list):
- image_numpy = []
- for i in range(len(image_tensor)):
- image_numpy.append(tensor2im(image_tensor[i], imtype, normalize))
- return image_numpy
- image_numpy = image_tensor.cpu().float().numpy()
- if normalize:
- image_numpy = (numpy.transpose(image_numpy, (1, 2, 0)) + 1) / 2.0 * 255.0
- else:
- image_numpy = numpy.transpose(image_numpy, (1, 2, 0)) * 255.0
- image_numpy = numpy.clip(image_numpy, 0, 255)
- if image_numpy.shape[2] == 1 or image_numpy.shape[2] > 3:
- image_numpy = image_numpy[:,:,0]
- return image_numpy.astype(imtype)
-
-# Converts a one-hot tensor into a colorful label map
-def tensor2label(label_tensor, n_label, imtype=numpy.uint8):
- if n_label == 0:
- return tensor2im(label_tensor, imtype)
- label_tensor = label_tensor.cpu().float()
- if label_tensor.size()[0] > 1:
- label_tensor = label_tensor.max(0, keepdim=True)[1]
- label_tensor = Colorize(n_label)(label_tensor)
- label_numpy = numpy.transpose(label_tensor.numpy(), (1, 2, 0))
- return label_numpy.astype(imtype)
-
-def save_image(image_numpy, image_path):
- image_pil = Image.fromarray(image_numpy)
- image_pil.save(image_path)
-
-def mkdirs(paths):
- if isinstance(paths, list) and not isinstance(paths, str):
- for path in paths:
- mkdir(path)
- else:
- mkdir(paths)
-
-def mkdir(path):
- if not os.path.exists(path):
- os.makedirs(path)
-
-###############################################################################
-# Code from
-# https://github.com/ycszen/pytorch-seg/blob/master/transform.py
-# Modified so it complies with the Cityscapes label map colors
-###############################################################################
-def uint82bin(n, count=8):
- """returns the binary of integer n, count refers to amount of bits"""
- return ''.join([str((n >> y) & 1) for y in range(count-1, -1, -1)])
-
-def labelcolormap(N):
- if N == 35: # cityscape
- cmap = numpy.array([( 0, 0, 0), ( 0, 0, 0), ( 0, 0, 0), ( 0, 0, 0), ( 0, 0, 0), (111, 74, 0), ( 81, 0, 81),
- (128, 64,128), (244, 35,232), (250,170,160), (230,150,140), ( 70, 70, 70), (102,102,156), (190,153,153),
- (180,165,180), (150,100,100), (150,120, 90), (153,153,153), (153,153,153), (250,170, 30), (220,220, 0),
- (107,142, 35), (152,251,152), ( 70,130,180), (220, 20, 60), (255, 0, 0), ( 0, 0,142), ( 0, 0, 70),
- ( 0, 60,100), ( 0, 0, 90), ( 0, 0,110), ( 0, 80,100), ( 0, 0,230), (119, 11, 32), ( 0, 0,142)],
- dtype=numpy.uint8)
- else:
- cmap = numpy.zeros((N, 3), dtype=numpy.uint8)
- for i in range(N):
- r, g, b = 0, 0, 0
- id = i
- for j in range(7):
- str_id = uint82bin(id)
- r = r ^ (numpy.uint8(str_id[-1]) << (7-j))
- g = g ^ (numpy.uint8(str_id[-2]) << (7-j))
- b = b ^ (numpy.uint8(str_id[-3]) << (7-j))
- id = id >> 3
- cmap[i, 0] = r
- cmap[i, 1] = g
- cmap[i, 2] = b
- return cmap
-
-class Colorize(object):
- def __init__(self, n=35):
- self.cmap = labelcolormap(n)
- self.cmap = torch.from_numpy(self.cmap[:n])
-
- def __call__(self, gray_image):
- size = gray_image.size()
- color_image = torch.ByteTensor(3, size[1], size[2]).fill_(0)
-
- for label in range(0, len(self.cmap)):
- mask = (label == gray_image[0]).cpu()
- color_image[0][mask] = self.cmap[label][0]
- color_image[1][mask] = self.cmap[label][1]
- color_image[2][mask] = self.cmap[label][2]
-
- return color_image
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/base_runner.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/base_runner.py
deleted file mode 100644
index 4928db0a73b56fe0218a4bf66ec4ffa082d31ccc..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/base_runner.py
+++ /dev/null
@@ -1,542 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import copy
-import logging
-import os.path as osp
-import warnings
-from abc import ABCMeta, abstractmethod
-
-import torch
-from torch.optim import Optimizer
-
-import annotator.uniformer.mmcv as mmcv
-from ..parallel import is_module_wrapper
-from .checkpoint import load_checkpoint
-from .dist_utils import get_dist_info
-from .hooks import HOOKS, Hook
-from .log_buffer import LogBuffer
-from .priority import Priority, get_priority
-from .utils import get_time_str
-
-
-class BaseRunner(metaclass=ABCMeta):
- """The base class of Runner, a training helper for PyTorch.
-
- All subclasses should implement the following APIs:
-
- - ``run()``
- - ``train()``
- - ``val()``
- - ``save_checkpoint()``
-
- Args:
- model (:obj:`torch.nn.Module`): The model to be run.
- batch_processor (callable): A callable method that process a data
- batch. The interface of this method should be
- `batch_processor(model, data, train_mode) -> dict`
- optimizer (dict or :obj:`torch.optim.Optimizer`): It can be either an
- optimizer (in most cases) or a dict of optimizers (in models that
- requires more than one optimizer, e.g., GAN).
- work_dir (str, optional): The working directory to save checkpoints
- and logs. Defaults to None.
- logger (:obj:`logging.Logger`): Logger used during training.
- Defaults to None. (The default value is just for backward
- compatibility)
-        meta (dict | None): A dict that records some important information such as
- environment info and seed, which will be logged in logger hook.
- Defaults to None.
- max_epochs (int, optional): Total training epochs.
- max_iters (int, optional): Total training iterations.
- """
-
- def __init__(self,
- model,
- batch_processor=None,
- optimizer=None,
- work_dir=None,
- logger=None,
- meta=None,
- max_iters=None,
- max_epochs=None):
- if batch_processor is not None:
- if not callable(batch_processor):
- raise TypeError('batch_processor must be callable, '
- f'but got {type(batch_processor)}')
- warnings.warn('batch_processor is deprecated, please implement '
- 'train_step() and val_step() in the model instead.')
-            # raise an error if `batch_processor` is not None and
- # `model.train_step()` exists.
- if is_module_wrapper(model):
- _model = model.module
- else:
- _model = model
- if hasattr(_model, 'train_step') or hasattr(_model, 'val_step'):
- raise RuntimeError(
- 'batch_processor and model.train_step()/model.val_step() '
- 'cannot be both available.')
- else:
- assert hasattr(model, 'train_step')
-
- # check the type of `optimizer`
- if isinstance(optimizer, dict):
- for name, optim in optimizer.items():
- if not isinstance(optim, Optimizer):
- raise TypeError(
- f'optimizer must be a dict of torch.optim.Optimizers, '
- f'but optimizer["{name}"] is a {type(optim)}')
- elif not isinstance(optimizer, Optimizer) and optimizer is not None:
- raise TypeError(
- f'optimizer must be a torch.optim.Optimizer object '
- f'or dict or None, but got {type(optimizer)}')
-
- # check the type of `logger`
- if not isinstance(logger, logging.Logger):
- raise TypeError(f'logger must be a logging.Logger object, '
- f'but got {type(logger)}')
-
- # check the type of `meta`
- if meta is not None and not isinstance(meta, dict):
- raise TypeError(
- f'meta must be a dict or None, but got {type(meta)}')
-
- self.model = model
- self.batch_processor = batch_processor
- self.optimizer = optimizer
- self.logger = logger
- self.meta = meta
- # create work_dir
- if mmcv.is_str(work_dir):
- self.work_dir = osp.abspath(work_dir)
- mmcv.mkdir_or_exist(self.work_dir)
- elif work_dir is None:
- self.work_dir = None
- else:
- raise TypeError('"work_dir" must be a str or None')
-
- # get model name from the model class
- if hasattr(self.model, 'module'):
- self._model_name = self.model.module.__class__.__name__
- else:
- self._model_name = self.model.__class__.__name__
-
- self._rank, self._world_size = get_dist_info()
- self.timestamp = get_time_str()
- self.mode = None
- self._hooks = []
- self._epoch = 0
- self._iter = 0
- self._inner_iter = 0
-
- if max_epochs is not None and max_iters is not None:
- raise ValueError(
- 'Only one of `max_epochs` or `max_iters` can be set.')
-
- self._max_epochs = max_epochs
- self._max_iters = max_iters
- # TODO: Redesign LogBuffer, it is not flexible and elegant enough
- self.log_buffer = LogBuffer()
-
- @property
- def model_name(self):
- """str: Name of the model, usually the module class name."""
- return self._model_name
-
- @property
- def rank(self):
- """int: Rank of current process. (distributed training)"""
- return self._rank
-
- @property
- def world_size(self):
- """int: Number of processes participating in the job.
- (distributed training)"""
- return self._world_size
-
- @property
- def hooks(self):
- """list[:obj:`Hook`]: A list of registered hooks."""
- return self._hooks
-
- @property
- def epoch(self):
- """int: Current epoch."""
- return self._epoch
-
- @property
- def iter(self):
- """int: Current iteration."""
- return self._iter
-
- @property
- def inner_iter(self):
- """int: Iteration in an epoch."""
- return self._inner_iter
-
- @property
- def max_epochs(self):
- """int: Maximum training epochs."""
- return self._max_epochs
-
- @property
- def max_iters(self):
- """int: Maximum training iterations."""
- return self._max_iters
-
- @abstractmethod
- def train(self):
- pass
-
- @abstractmethod
- def val(self):
- pass
-
- @abstractmethod
- def run(self, data_loaders, workflow, **kwargs):
- pass
-
- @abstractmethod
- def save_checkpoint(self,
- out_dir,
- filename_tmpl,
- save_optimizer=True,
- meta=None,
- create_symlink=True):
- pass
-
- def current_lr(self):
- """Get current learning rates.
-
- Returns:
- list[float] | dict[str, list[float]]: Current learning rates of all
- param groups. If the runner has a dict of optimizers, this
- method will return a dict.
- """
- if isinstance(self.optimizer, torch.optim.Optimizer):
- lr = [group['lr'] for group in self.optimizer.param_groups]
- elif isinstance(self.optimizer, dict):
- lr = dict()
- for name, optim in self.optimizer.items():
- lr[name] = [group['lr'] for group in optim.param_groups]
- else:
- raise RuntimeError(
- 'lr is not applicable because optimizer does not exist.')
- return lr
-
- def current_momentum(self):
- """Get current momentums.
-
- Returns:
- list[float] | dict[str, list[float]]: Current momentums of all
- param groups. If the runner has a dict of optimizers, this
- method will return a dict.
- """
-
- def _get_momentum(optimizer):
- momentums = []
- for group in optimizer.param_groups:
- if 'momentum' in group.keys():
- momentums.append(group['momentum'])
- elif 'betas' in group.keys():
- momentums.append(group['betas'][0])
- else:
- momentums.append(0)
- return momentums
-
- if self.optimizer is None:
- raise RuntimeError(
- 'momentum is not applicable because optimizer does not exist.')
- elif isinstance(self.optimizer, torch.optim.Optimizer):
- momentums = _get_momentum(self.optimizer)
- elif isinstance(self.optimizer, dict):
- momentums = dict()
- for name, optim in self.optimizer.items():
- momentums[name] = _get_momentum(optim)
- return momentums
-
- def register_hook(self, hook, priority='NORMAL'):
- """Register a hook into the hook list.
-
- The hook will be inserted into a priority queue, with the specified
- priority (See :class:`Priority` for details of priorities).
- For hooks with the same priority, they will be triggered in the same
- order as they are registered.
-
- Args:
- hook (:obj:`Hook`): The hook to be registered.
- priority (int or str or :obj:`Priority`): Hook priority.
- Lower value means higher priority.
- """
- assert isinstance(hook, Hook)
- if hasattr(hook, 'priority'):
- raise ValueError('"priority" is a reserved attribute for hooks')
- priority = get_priority(priority)
- hook.priority = priority
- # insert the hook to a sorted list
- inserted = False
- for i in range(len(self._hooks) - 1, -1, -1):
- if priority >= self._hooks[i].priority:
- self._hooks.insert(i + 1, hook)
- inserted = True
- break
- if not inserted:
- self._hooks.insert(0, hook)
-
- def register_hook_from_cfg(self, hook_cfg):
- """Register a hook from its cfg.
-
- Args:
- hook_cfg (dict): Hook config. It should have at least keys 'type'
- and 'priority' indicating its type and priority.
-
- Notes:
- The specific hook class to register should not use 'type' and
- 'priority' arguments during initialization.
- """
- hook_cfg = hook_cfg.copy()
- priority = hook_cfg.pop('priority', 'NORMAL')
- hook = mmcv.build_from_cfg(hook_cfg, HOOKS)
- self.register_hook(hook, priority=priority)
-
- def call_hook(self, fn_name):
- """Call all hooks.
-
- Args:
- fn_name (str): The function name in each hook to be called, such as
- "before_train_epoch".
- """
- for hook in self._hooks:
- getattr(hook, fn_name)(self)
-
- def get_hook_info(self):
- # Get hooks info in each stage
- stage_hook_map = {stage: [] for stage in Hook.stages}
- for hook in self.hooks:
- try:
- priority = Priority(hook.priority).name
- except ValueError:
- priority = hook.priority
- classname = hook.__class__.__name__
- hook_info = f'({priority:<12}) {classname:<35}'
- for trigger_stage in hook.get_triggered_stages():
- stage_hook_map[trigger_stage].append(hook_info)
-
- stage_hook_infos = []
- for stage in Hook.stages:
- hook_infos = stage_hook_map[stage]
- if len(hook_infos) > 0:
- info = f'{stage}:\n'
- info += '\n'.join(hook_infos)
- info += '\n -------------------- '
- stage_hook_infos.append(info)
- return '\n'.join(stage_hook_infos)
-
- def load_checkpoint(self,
- filename,
- map_location='cpu',
- strict=False,
- revise_keys=[(r'^module.', '')]):
- return load_checkpoint(
- self.model,
- filename,
- map_location,
- strict,
- self.logger,
- revise_keys=revise_keys)
-
- def resume(self,
- checkpoint,
- resume_optimizer=True,
- map_location='default'):
- if map_location == 'default':
- if torch.cuda.is_available():
- device_id = torch.cuda.current_device()
- checkpoint = self.load_checkpoint(
- checkpoint,
- map_location=lambda storage, loc: storage.cuda(device_id))
- else:
- checkpoint = self.load_checkpoint(checkpoint)
- else:
- checkpoint = self.load_checkpoint(
- checkpoint, map_location=map_location)
-
- self._epoch = checkpoint['meta']['epoch']
- self._iter = checkpoint['meta']['iter']
- if self.meta is None:
- self.meta = {}
- self.meta.setdefault('hook_msgs', {})
- # load `last_ckpt`, `best_score`, `best_ckpt`, etc. for hook messages
- self.meta['hook_msgs'].update(checkpoint['meta'].get('hook_msgs', {}))
-
- # Re-calculate the number of iterations when resuming
- # models with different number of GPUs
- if 'config' in checkpoint['meta']:
- config = mmcv.Config.fromstring(
- checkpoint['meta']['config'], file_format='.py')
- previous_gpu_ids = config.get('gpu_ids', None)
- if previous_gpu_ids and len(previous_gpu_ids) > 0 and len(
- previous_gpu_ids) != self.world_size:
- self._iter = int(self._iter * len(previous_gpu_ids) /
- self.world_size)
- self.logger.info('the iteration number is changed due to '
- 'change of GPU number')
-
-        # resume the meta information from the checkpoint
- self.meta = checkpoint['meta']
-
- if 'optimizer' in checkpoint and resume_optimizer:
- if isinstance(self.optimizer, Optimizer):
- self.optimizer.load_state_dict(checkpoint['optimizer'])
- elif isinstance(self.optimizer, dict):
- for k in self.optimizer.keys():
- self.optimizer[k].load_state_dict(
- checkpoint['optimizer'][k])
- else:
- raise TypeError(
- 'Optimizer should be dict or torch.optim.Optimizer '
- f'but got {type(self.optimizer)}')
-
- self.logger.info('resumed epoch %d, iter %d', self.epoch, self.iter)
-
- def register_lr_hook(self, lr_config):
- if lr_config is None:
- return
- elif isinstance(lr_config, dict):
- assert 'policy' in lr_config
- policy_type = lr_config.pop('policy')
- # If the type of policy is all in lower case, e.g., 'cyclic',
- # then its first letter will be capitalized, e.g., to be 'Cyclic'.
- # This is for the convenient usage of Lr updater.
- # Since this is not applicable for `
- # CosineAnnealingLrUpdater`,
- # the string will not be changed if it contains capital letters.
- if policy_type == policy_type.lower():
- policy_type = policy_type.title()
- hook_type = policy_type + 'LrUpdaterHook'
- lr_config['type'] = hook_type
- hook = mmcv.build_from_cfg(lr_config, HOOKS)
- else:
- hook = lr_config
- self.register_hook(hook, priority='VERY_HIGH')
-
- def register_momentum_hook(self, momentum_config):
- if momentum_config is None:
- return
- if isinstance(momentum_config, dict):
- assert 'policy' in momentum_config
- policy_type = momentum_config.pop('policy')
- # If the type of policy is all in lower case, e.g., 'cyclic',
- # then its first letter will be capitalized, e.g., to be 'Cyclic'.
- # This is for the convenient usage of momentum updater.
- # Since this is not applicable for
- # `CosineAnnealingMomentumUpdater`,
- # the string will not be changed if it contains capital letters.
- if policy_type == policy_type.lower():
- policy_type = policy_type.title()
- hook_type = policy_type + 'MomentumUpdaterHook'
- momentum_config['type'] = hook_type
- hook = mmcv.build_from_cfg(momentum_config, HOOKS)
- else:
- hook = momentum_config
- self.register_hook(hook, priority='HIGH')
-
- def register_optimizer_hook(self, optimizer_config):
- if optimizer_config is None:
- return
- if isinstance(optimizer_config, dict):
- optimizer_config.setdefault('type', 'OptimizerHook')
- hook = mmcv.build_from_cfg(optimizer_config, HOOKS)
- else:
- hook = optimizer_config
- self.register_hook(hook, priority='ABOVE_NORMAL')
-
- def register_checkpoint_hook(self, checkpoint_config):
- if checkpoint_config is None:
- return
- if isinstance(checkpoint_config, dict):
- checkpoint_config.setdefault('type', 'CheckpointHook')
- hook = mmcv.build_from_cfg(checkpoint_config, HOOKS)
- else:
- hook = checkpoint_config
- self.register_hook(hook, priority='NORMAL')
-
- def register_logger_hooks(self, log_config):
- if log_config is None:
- return
- log_interval = log_config['interval']
- for info in log_config['hooks']:
- logger_hook = mmcv.build_from_cfg(
- info, HOOKS, default_args=dict(interval=log_interval))
- self.register_hook(logger_hook, priority='VERY_LOW')
-
- def register_timer_hook(self, timer_config):
- if timer_config is None:
- return
- if isinstance(timer_config, dict):
- timer_config_ = copy.deepcopy(timer_config)
- hook = mmcv.build_from_cfg(timer_config_, HOOKS)
- else:
- hook = timer_config
- self.register_hook(hook, priority='LOW')
-
- def register_custom_hooks(self, custom_config):
- if custom_config is None:
- return
-
- if not isinstance(custom_config, list):
- custom_config = [custom_config]
-
- for item in custom_config:
- if isinstance(item, dict):
- self.register_hook_from_cfg(item)
- else:
- self.register_hook(item, priority='NORMAL')
-
- def register_profiler_hook(self, profiler_config):
- if profiler_config is None:
- return
- if isinstance(profiler_config, dict):
- profiler_config.setdefault('type', 'ProfilerHook')
- hook = mmcv.build_from_cfg(profiler_config, HOOKS)
- else:
- hook = profiler_config
- self.register_hook(hook)
-
- def register_training_hooks(self,
- lr_config,
- optimizer_config=None,
- checkpoint_config=None,
- log_config=None,
- momentum_config=None,
- timer_config=dict(type='IterTimerHook'),
- custom_hooks_config=None):
- """Register default and custom hooks for training.
-
- Default and custom hooks include:
-
- +----------------------+-------------------------+
- | Hooks | Priority |
- +======================+=========================+
- | LrUpdaterHook | VERY_HIGH (10) |
- +----------------------+-------------------------+
- | MomentumUpdaterHook | HIGH (30) |
- +----------------------+-------------------------+
- | OptimizerStepperHook | ABOVE_NORMAL (40) |
- +----------------------+-------------------------+
- | CheckpointSaverHook | NORMAL (50) |
- +----------------------+-------------------------+
- | IterTimerHook | LOW (70) |
- +----------------------+-------------------------+
- | LoggerHook(s) | VERY_LOW (90) |
- +----------------------+-------------------------+
- | CustomHook(s) | defaults to NORMAL (50) |
- +----------------------+-------------------------+
-
- If custom hooks have same priority with default hooks, custom hooks
- will be triggered after default hooks.
- """
- self.register_lr_hook(lr_config)
- self.register_momentum_hook(momentum_config)
- self.register_optimizer_hook(optimizer_config)
- self.register_checkpoint_hook(checkpoint_config)
- self.register_timer_hook(timer_config)
- self.register_logger_hooks(log_config)
- self.register_custom_hooks(custom_hooks_config)
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/midas/midas/blocks.py b/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/midas/midas/blocks.py
deleted file mode 100644
index 2145d18fa98060a618536d9a64fe6589e9be4f78..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/midas/midas/blocks.py
+++ /dev/null
@@ -1,342 +0,0 @@
-import torch
-import torch.nn as nn
-
-from .vit import (
- _make_pretrained_vitb_rn50_384,
- _make_pretrained_vitl16_384,
- _make_pretrained_vitb16_384,
- forward_vit,
-)
-
-def _make_encoder(backbone, features, use_pretrained, groups=1, expand=False, exportable=True, hooks=None, use_vit_only=False, use_readout="ignore",):
- if backbone == "vitl16_384":
- pretrained = _make_pretrained_vitl16_384(
- use_pretrained, hooks=hooks, use_readout=use_readout
- )
- scratch = _make_scratch(
- [256, 512, 1024, 1024], features, groups=groups, expand=expand
- ) # ViT-L/16 - 85.0% Top1 (backbone)
- elif backbone == "vitb_rn50_384":
- pretrained = _make_pretrained_vitb_rn50_384(
- use_pretrained,
- hooks=hooks,
- use_vit_only=use_vit_only,
- use_readout=use_readout,
- )
- scratch = _make_scratch(
- [256, 512, 768, 768], features, groups=groups, expand=expand
- ) # ViT-H/16 - 85.0% Top1 (backbone)
- elif backbone == "vitb16_384":
- pretrained = _make_pretrained_vitb16_384(
- use_pretrained, hooks=hooks, use_readout=use_readout
- )
- scratch = _make_scratch(
- [96, 192, 384, 768], features, groups=groups, expand=expand
- ) # ViT-B/16 - 84.6% Top1 (backbone)
- elif backbone == "resnext101_wsl":
- pretrained = _make_pretrained_resnext101_wsl(use_pretrained)
-        scratch = _make_scratch([256, 512, 1024, 2048], features, groups=groups, expand=expand) # resnext101_wsl
- elif backbone == "efficientnet_lite3":
- pretrained = _make_pretrained_efficientnet_lite3(use_pretrained, exportable=exportable)
- scratch = _make_scratch([32, 48, 136, 384], features, groups=groups, expand=expand) # efficientnet_lite3
- else:
- print(f"Backbone '{backbone}' not implemented")
- assert False
-
- return pretrained, scratch
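-# `pretrained` wraps the selected backbone that produces multi-scale features, while
-# `scratch` holds the 3x3 convolutions (layer*_rn) that project each backbone stage to the
-# decoder's channel width (optionally doubling per stage when expand=True).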
-
-
-def _make_scratch(in_shape, out_shape, groups=1, expand=False):
- scratch = nn.Module()
-
- out_shape1 = out_shape
- out_shape2 = out_shape
- out_shape3 = out_shape
- out_shape4 = out_shape
- if expand==True:
- out_shape1 = out_shape
- out_shape2 = out_shape*2
- out_shape3 = out_shape*4
- out_shape4 = out_shape*8
-
- scratch.layer1_rn = nn.Conv2d(
- in_shape[0], out_shape1, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
- )
- scratch.layer2_rn = nn.Conv2d(
- in_shape[1], out_shape2, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
- )
- scratch.layer3_rn = nn.Conv2d(
- in_shape[2], out_shape3, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
- )
- scratch.layer4_rn = nn.Conv2d(
- in_shape[3], out_shape4, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
- )
-
- return scratch
-
-
-def _make_pretrained_efficientnet_lite3(use_pretrained, exportable=False):
- efficientnet = torch.hub.load(
- "rwightman/gen-efficientnet-pytorch",
- "tf_efficientnet_lite3",
- pretrained=use_pretrained,
- exportable=exportable
- )
- return _make_efficientnet_backbone(efficientnet)
-
-
-def _make_efficientnet_backbone(effnet):
- pretrained = nn.Module()
-
- pretrained.layer1 = nn.Sequential(
- effnet.conv_stem, effnet.bn1, effnet.act1, *effnet.blocks[0:2]
- )
- pretrained.layer2 = nn.Sequential(*effnet.blocks[2:3])
- pretrained.layer3 = nn.Sequential(*effnet.blocks[3:5])
- pretrained.layer4 = nn.Sequential(*effnet.blocks[5:9])
-
- return pretrained
-
-
-def _make_resnet_backbone(resnet):
- pretrained = nn.Module()
- pretrained.layer1 = nn.Sequential(
- resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool, resnet.layer1
- )
-
- pretrained.layer2 = resnet.layer2
- pretrained.layer3 = resnet.layer3
- pretrained.layer4 = resnet.layer4
-
- return pretrained
-
-
-def _make_pretrained_resnext101_wsl(use_pretrained):
- resnet = torch.hub.load("facebookresearch/WSL-Images", "resnext101_32x8d_wsl")
- return _make_resnet_backbone(resnet)
-
-
-
-class Interpolate(nn.Module):
- """Interpolation module.
- """
-
- def __init__(self, scale_factor, mode, align_corners=False):
- """Init.
-
- Args:
- scale_factor (float): scaling
- mode (str): interpolation mode
- """
- super(Interpolate, self).__init__()
-
- self.interp = nn.functional.interpolate
- self.scale_factor = scale_factor
- self.mode = mode
- self.align_corners = align_corners
-
- def forward(self, x):
- """Forward pass.
-
- Args:
- x (tensor): input
-
- Returns:
- tensor: interpolated data
- """
-
- x = self.interp(
- x, scale_factor=self.scale_factor, mode=self.mode, align_corners=self.align_corners
- )
-
- return x
-
-
-class ResidualConvUnit(nn.Module):
- """Residual convolution module.
- """
-
- def __init__(self, features):
- """Init.
-
- Args:
- features (int): number of features
- """
- super().__init__()
-
- self.conv1 = nn.Conv2d(
- features, features, kernel_size=3, stride=1, padding=1, bias=True
- )
-
- self.conv2 = nn.Conv2d(
- features, features, kernel_size=3, stride=1, padding=1, bias=True
- )
-
- self.relu = nn.ReLU(inplace=True)
-
- def forward(self, x):
- """Forward pass.
-
- Args:
- x (tensor): input
-
- Returns:
- tensor: output
- """
- out = self.relu(x)
- out = self.conv1(out)
- out = self.relu(out)
- out = self.conv2(out)
-
- return out + x
-
-
-class FeatureFusionBlock(nn.Module):
- """Feature fusion block.
- """
-
- def __init__(self, features):
- """Init.
-
- Args:
- features (int): number of features
- """
- super(FeatureFusionBlock, self).__init__()
-
- self.resConfUnit1 = ResidualConvUnit(features)
- self.resConfUnit2 = ResidualConvUnit(features)
-
- def forward(self, *xs):
- """Forward pass.
-
- Returns:
- tensor: output
- """
- output = xs[0]
-
- if len(xs) == 2:
- output += self.resConfUnit1(xs[1])
-
- output = self.resConfUnit2(output)
-
- output = nn.functional.interpolate(
- output, scale_factor=2, mode="bilinear", align_corners=True
- )
-
- return output
-
-
-
-
-class ResidualConvUnit_custom(nn.Module):
- """Residual convolution module.
- """
-
- def __init__(self, features, activation, bn):
- """Init.
-
- Args:
- features (int): number of features
- """
- super().__init__()
-
- self.bn = bn
-
- self.groups=1
-
- self.conv1 = nn.Conv2d(
- features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups
- )
-
- self.conv2 = nn.Conv2d(
- features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups
- )
-
- if self.bn==True:
- self.bn1 = nn.BatchNorm2d(features)
- self.bn2 = nn.BatchNorm2d(features)
-
- self.activation = activation
-
- self.skip_add = nn.quantized.FloatFunctional()
-
- def forward(self, x):
- """Forward pass.
-
- Args:
- x (tensor): input
-
- Returns:
- tensor: output
- """
-
- out = self.activation(x)
- out = self.conv1(out)
- if self.bn==True:
- out = self.bn1(out)
-
- out = self.activation(out)
- out = self.conv2(out)
- if self.bn==True:
- out = self.bn2(out)
-
- if self.groups > 1:
- out = self.conv_merge(out)
-
- return self.skip_add.add(out, x)
-
- # return out + x
-
-
-class FeatureFusionBlock_custom(nn.Module):
- """Feature fusion block.
- """
-
- def __init__(self, features, activation, deconv=False, bn=False, expand=False, align_corners=True):
- """Init.
-
- Args:
- features (int): number of features
- """
- super(FeatureFusionBlock_custom, self).__init__()
-
- self.deconv = deconv
- self.align_corners = align_corners
-
- self.groups=1
-
- self.expand = expand
- out_features = features
- if self.expand==True:
- out_features = features//2
-
- self.out_conv = nn.Conv2d(features, out_features, kernel_size=1, stride=1, padding=0, bias=True, groups=1)
-
- self.resConfUnit1 = ResidualConvUnit_custom(features, activation, bn)
- self.resConfUnit2 = ResidualConvUnit_custom(features, activation, bn)
-
- self.skip_add = nn.quantized.FloatFunctional()
-
- def forward(self, *xs):
- """Forward pass.
-
- Returns:
- tensor: output
- """
- output = xs[0]
-
- if len(xs) == 2:
- res = self.resConfUnit1(xs[1])
- output = self.skip_add.add(output, res)
- # output += res
-
- output = self.resConfUnit2(output)
-
- output = nn.functional.interpolate(
- output, scale_factor=2, mode="bilinear", align_corners=self.align_corners
- )
-
- output = self.out_conv(output)
-
- return output
-
diff --git a/spaces/MirageML/sjc/sd1/ldm/modules/diffusionmodules/__init__.py b/spaces/MirageML/sjc/sd1/ldm/modules/diffusionmodules/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Monster/Llama-2-13B-chat/Dockerfile b/spaces/Monster/Llama-2-13B-chat/Dockerfile
deleted file mode 100644
index 2febaaadf56245aec758f041ef8519da43b17db7..0000000000000000000000000000000000000000
--- a/spaces/Monster/Llama-2-13B-chat/Dockerfile
+++ /dev/null
@@ -1,5 +0,0 @@
-# Monster/Llama-2-13B-chat
-FROM ghcr.io/ggerganov/llama.cpp:full
-RUN apt update && apt upgrade -y && apt install wget -y
-RUN wget "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/resolve/main/llama-2-13b-chat.Q4_K_M.gguf" -O llama-2-13b-chat.Q4_K_M.gguf
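-# Launch llama.cpp's built-in HTTP server on 0.0.0.0:7860 (the port Hugging Face Spaces
-# expects), serving the quantized GGUF model downloaded above with 2 CPU threads (-t 2).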
-CMD ["--server", "-m", "llama-2-13b-chat.Q4_K_M.gguf", "--port", "7860", "--host", "0.0.0.0", "-t", "2"]
\ No newline at end of file
diff --git a/spaces/NeuralInternet/InfiniteGPT/README.md b/spaces/NeuralInternet/InfiniteGPT/README.md
deleted file mode 100644
index c9095ec1cdff08a20ae5164548655114afab1897..0000000000000000000000000000000000000000
--- a/spaces/NeuralInternet/InfiniteGPT/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: InfiniteGPT
-emoji: 📚
-colorFrom: indigo
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.19.0
-app_file: app.py
-pinned: false
-duplicated_from: asifhugs/InfiniteGPT
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/NimaBoscarino/aot-gan-inpainting/app.py b/spaces/NimaBoscarino/aot-gan-inpainting/app.py
deleted file mode 100644
index ad375cea000649e1fd732e4cc9433124144f7020..0000000000000000000000000000000000000000
--- a/spaces/NimaBoscarino/aot-gan-inpainting/app.py
+++ /dev/null
@@ -1,74 +0,0 @@
-from PIL import Image
-import streamlit as st
-from streamlit_drawable_canvas import st_canvas
-from torchvision.transforms import ToTensor
-import torch
-import numpy as np
-import cv2
-import aotgan.model.aotgan as net
-
-@st.cache
-def load_model(model_name):
- model = net.InpaintGenerator.from_pretrained(model_name)
- return model
-
-def postprocess(image):
- image = torch.clamp(image, -1., 1.)
- image = (image + 1) / 2.0 * 255.0
- image = image.permute(1, 2, 0)
- image = image.cpu().numpy().astype(np.uint8)
- return image
-
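-# infer() resizes the input to 512x512, masks out the drawn region, runs the AOT-GAN
-# generator on the masked image plus the mask, and composites the prediction back into
-# the unmasked pixels before converting to uint8 for display.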
-def infer(img, mask):
- with torch.no_grad():
- img_cv = cv2.resize(np.array(img), (512, 512)) # Fixing everything to 512 x 512 for this demo.
- img_tensor = (ToTensor()(img_cv) * 2.0 - 1.0).unsqueeze(0)
- mask_tensor = (ToTensor()(mask.astype(np.uint8))).unsqueeze(0)
- masked_tensor = (img_tensor * (1 - mask_tensor).float()) + mask_tensor
- pred_tensor = model(masked_tensor, mask_tensor)
- comp_tensor = (pred_tensor * mask_tensor + img_tensor * (1 - mask_tensor))
- comp_np = postprocess(comp_tensor[0])
-
- return comp_np
-
-stroke_width = 8
-stroke_color = "#FFF"
-bg_color = "#000"
-bg_image = st.sidebar.file_uploader("Image:", type=["png", "jpg", "jpeg"])
-sample_bg_image = st.sidebar.radio('Sample Images', [
- "man.png",
- "pexels-ike-louie-natividad-2709388.jpg",
- "pexels-christina-morillo-1181686.jpg",
- "pexels-italo-melo-2379005.jpg",
- "rainbow.jpeg",
- "kitty.jpg",
- "kitty_on_chair.jpeg",
-])
-drawing_mode = st.sidebar.selectbox(
- "Drawing tool:", ("freedraw", "rect", "circle")
-)
-
-model_name = st.sidebar.selectbox(
- "Select model:", ("NimaBoscarino/aot-gan-celebahq", "NimaBoscarino/aot-gan-places2")
-)
-model = load_model(model_name)
-
-bg_image = Image.open(bg_image) if bg_image else Image.open(f"./pictures/{sample_bg_image}")
-
-st.subheader("Draw on the image to erase features. The inpainted result will be generated and displayed below.")
-canvas_result = st_canvas(
- fill_color="rgb(255, 255, 255)",
- stroke_width=stroke_width,
- stroke_color=stroke_color,
- background_color=bg_color,
- background_image=bg_image,
- update_streamlit=True,
- height=512,
- width=512,
- drawing_mode=drawing_mode,
- key="canvas",
-)
-
-if canvas_result.image_data is not None and bg_image and len(canvas_result.json_data["objects"]) > 0:
- result = infer(bg_image, canvas_result.image_data[:, :, 3])
- st.image(result)
diff --git a/spaces/OAOA/DifFace/basicsr/models/srgan_model.py b/spaces/OAOA/DifFace/basicsr/models/srgan_model.py
deleted file mode 100644
index 45387ca7908e3f38f59a605adb8242ad12fcf1a1..0000000000000000000000000000000000000000
--- a/spaces/OAOA/DifFace/basicsr/models/srgan_model.py
+++ /dev/null
@@ -1,149 +0,0 @@
-import torch
-from collections import OrderedDict
-
-from basicsr.archs import build_network
-from basicsr.losses import build_loss
-from basicsr.utils import get_root_logger
-from basicsr.utils.registry import MODEL_REGISTRY
-from .sr_model import SRModel
-
-
-@MODEL_REGISTRY.register()
-class SRGANModel(SRModel):
- """SRGAN model for single image super-resolution."""
-
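-    # Training alternates between generator and discriminator: net_g is optimized with the
-    # configured pixel / perceptual / GAN losses, then net_d is optimized on real (gt) and
-    # fake (generator output) predictions; see optimize_parameters() below.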
- def init_training_settings(self):
- train_opt = self.opt['train']
-
- self.ema_decay = train_opt.get('ema_decay', 0)
- if self.ema_decay > 0:
- logger = get_root_logger()
- logger.info(f'Use Exponential Moving Average with decay: {self.ema_decay}')
- # define network net_g with Exponential Moving Average (EMA)
- # net_g_ema is used only for testing on one GPU and saving
- # There is no need to wrap with DistributedDataParallel
- self.net_g_ema = build_network(self.opt['network_g']).to(self.device)
- # load pretrained model
- load_path = self.opt['path'].get('pretrain_network_g', None)
- if load_path is not None:
- self.load_network(self.net_g_ema, load_path, self.opt['path'].get('strict_load_g', True), 'params_ema')
- else:
- self.model_ema(0) # copy net_g weight
- self.net_g_ema.eval()
-
- # define network net_d
- self.net_d = build_network(self.opt['network_d'])
- self.net_d = self.model_to_device(self.net_d)
- self.print_network(self.net_d)
-
- # load pretrained models
- load_path = self.opt['path'].get('pretrain_network_d', None)
- if load_path is not None:
- param_key = self.opt['path'].get('param_key_d', 'params')
- self.load_network(self.net_d, load_path, self.opt['path'].get('strict_load_d', True), param_key)
-
- self.net_g.train()
- self.net_d.train()
-
- # define losses
- if train_opt.get('pixel_opt'):
- self.cri_pix = build_loss(train_opt['pixel_opt']).to(self.device)
- else:
- self.cri_pix = None
-
- if train_opt.get('ldl_opt'):
- self.cri_ldl = build_loss(train_opt['ldl_opt']).to(self.device)
- else:
- self.cri_ldl = None
-
- if train_opt.get('perceptual_opt'):
- self.cri_perceptual = build_loss(train_opt['perceptual_opt']).to(self.device)
- else:
- self.cri_perceptual = None
-
- if train_opt.get('gan_opt'):
- self.cri_gan = build_loss(train_opt['gan_opt']).to(self.device)
-
- self.net_d_iters = train_opt.get('net_d_iters', 1)
- self.net_d_init_iters = train_opt.get('net_d_init_iters', 0)
-
- # set up optimizers and schedulers
- self.setup_optimizers()
- self.setup_schedulers()
-
- def setup_optimizers(self):
- train_opt = self.opt['train']
- # optimizer g
- optim_type = train_opt['optim_g'].pop('type')
- self.optimizer_g = self.get_optimizer(optim_type, self.net_g.parameters(), **train_opt['optim_g'])
- self.optimizers.append(self.optimizer_g)
- # optimizer d
- optim_type = train_opt['optim_d'].pop('type')
- self.optimizer_d = self.get_optimizer(optim_type, self.net_d.parameters(), **train_opt['optim_d'])
- self.optimizers.append(self.optimizer_d)
-
- def optimize_parameters(self, current_iter):
- # optimize net_g
- for p in self.net_d.parameters():
- p.requires_grad = False
-
- self.optimizer_g.zero_grad()
- self.output = self.net_g(self.lq)
-
- l_g_total = 0
- loss_dict = OrderedDict()
- if (current_iter % self.net_d_iters == 0 and current_iter > self.net_d_init_iters):
- # pixel loss
- if self.cri_pix:
- l_g_pix = self.cri_pix(self.output, self.gt)
- l_g_total += l_g_pix
- loss_dict['l_g_pix'] = l_g_pix
- # perceptual loss
- if self.cri_perceptual:
- l_g_percep, l_g_style = self.cri_perceptual(self.output, self.gt)
- if l_g_percep is not None:
- l_g_total += l_g_percep
- loss_dict['l_g_percep'] = l_g_percep
- if l_g_style is not None:
- l_g_total += l_g_style
- loss_dict['l_g_style'] = l_g_style
- # gan loss
- fake_g_pred = self.net_d(self.output)
- l_g_gan = self.cri_gan(fake_g_pred, True, is_disc=False)
- l_g_total += l_g_gan
- loss_dict['l_g_gan'] = l_g_gan
-
- l_g_total.backward()
- self.optimizer_g.step()
-
- # optimize net_d
- for p in self.net_d.parameters():
- p.requires_grad = True
-
- self.optimizer_d.zero_grad()
- # real
- real_d_pred = self.net_d(self.gt)
- l_d_real = self.cri_gan(real_d_pred, True, is_disc=True)
- loss_dict['l_d_real'] = l_d_real
- loss_dict['out_d_real'] = torch.mean(real_d_pred.detach())
- l_d_real.backward()
- # fake
- fake_d_pred = self.net_d(self.output.detach())
- l_d_fake = self.cri_gan(fake_d_pred, False, is_disc=True)
- loss_dict['l_d_fake'] = l_d_fake
- loss_dict['out_d_fake'] = torch.mean(fake_d_pred.detach())
- l_d_fake.backward()
- self.optimizer_d.step()
-
- self.log_dict = self.reduce_loss_dict(loss_dict)
-
- if self.ema_decay > 0:
- self.model_ema(decay=self.ema_decay)
-
- def save(self, epoch, current_iter):
- if hasattr(self, 'net_g_ema'):
- self.save_network([self.net_g, self.net_g_ema], 'net_g', current_iter, param_key=['params', 'params_ema'])
- else:
- self.save_network(self.net_g, 'net_g', current_iter)
- self.save_network(self.net_d, 'net_d', current_iter)
- self.save_training_state(epoch, current_iter)
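
A condensed sketch of the alternating optimization pattern that optimize_parameters() implements above. The tiny linear "generator" and "discriminator" and the plain BCE-with-logits loss are hypothetical stand-ins for the networks and GAN loss that SRGANModel builds from its config.

import torch
import torch.nn as nn

net_g = nn.Linear(8, 8)   # stand-in generator
net_d = nn.Linear(8, 1)   # stand-in discriminator
opt_g = torch.optim.Adam(net_g.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(net_d.parameters(), lr=1e-4)
gan_loss = nn.BCEWithLogitsLoss()
lq, gt = torch.randn(4, 8), torch.randn(4, 8)

# 1) Generator step: freeze D so only G receives gradients.
for p in net_d.parameters():
    p.requires_grad = False
opt_g.zero_grad()
output = net_g(lq)
l_g = gan_loss(net_d(output), torch.ones(4, 1))  # try to fool the discriminator
l_g.backward()
opt_g.step()

# 2) Discriminator step: unfreeze D; detach the fake so G is not updated.
for p in net_d.parameters():
    p.requires_grad = True
opt_d.zero_grad()
gan_loss(net_d(gt), torch.ones(4, 1)).backward()                 # real samples
gan_loss(net_d(output.detach()), torch.zeros(4, 1)).backward()   # fake samples
opt_d.step()
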
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/simultaneous_translation/models/convtransformer_simul_trans.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/simultaneous_translation/models/convtransformer_simul_trans.py
deleted file mode 100644
index 4a26422f650cf13ee7d4e8d2228b50ec49876fb8..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/simultaneous_translation/models/convtransformer_simul_trans.py
+++ /dev/null
@@ -1,204 +0,0 @@
-# Copyright (c) 2017-present, Facebook, Inc.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the LICENSE file in
-# the root directory of this source tree. An additional grant of patent rights
-# can be found in the PATENTS file in the same directory.
-
-from fairseq import checkpoint_utils
-from fairseq.models import (
- register_model,
- register_model_architecture,
-)
-from fairseq.models.speech_to_text import (
- ConvTransformerModel,
- convtransformer_espnet,
- ConvTransformerEncoder,
-)
-from fairseq.models.speech_to_text.modules.augmented_memory_attention import (
- augmented_memory,
- SequenceEncoder,
- AugmentedMemoryConvTransformerEncoder,
-)
-
-from torch import nn, Tensor
-from typing import Dict, List
-from fairseq.models.speech_to_text.modules.emformer import NoSegAugmentedMemoryTransformerEncoderLayer
-
-@register_model("convtransformer_simul_trans")
-class SimulConvTransformerModel(ConvTransformerModel):
- """
- Implementation of the paper:
-
- SimulMT to SimulST: Adapting Simultaneous Text Translation to
- End-to-End Simultaneous Speech Translation
-
- https://www.aclweb.org/anthology/2020.aacl-main.58.pdf
- """
-
- @staticmethod
- def add_args(parser):
- super(SimulConvTransformerModel, SimulConvTransformerModel).add_args(parser)
- parser.add_argument(
- "--train-monotonic-only",
- action="store_true",
- default=False,
- help="Only train monotonic attention",
- )
-
- @classmethod
- def build_decoder(cls, args, task, embed_tokens):
- tgt_dict = task.tgt_dict
-
- from examples.simultaneous_translation.models.transformer_monotonic_attention import (
- TransformerMonotonicDecoder,
- )
-
- decoder = TransformerMonotonicDecoder(args, tgt_dict, embed_tokens)
-
- if getattr(args, "load_pretrained_decoder_from", None):
- decoder = checkpoint_utils.load_pretrained_component_from_model(
- component=decoder, checkpoint=args.load_pretrained_decoder_from
- )
- return decoder
-
-
-@register_model_architecture(
- "convtransformer_simul_trans", "convtransformer_simul_trans_espnet"
-)
-def convtransformer_simul_trans_espnet(args):
- convtransformer_espnet(args)
-
-
-@register_model("convtransformer_augmented_memory")
-@augmented_memory
-class AugmentedMemoryConvTransformerModel(SimulConvTransformerModel):
- @classmethod
- def build_encoder(cls, args):
- encoder = SequenceEncoder(args, AugmentedMemoryConvTransformerEncoder(args))
-
- if getattr(args, "load_pretrained_encoder_from", None) is not None:
- encoder = checkpoint_utils.load_pretrained_component_from_model(
- component=encoder, checkpoint=args.load_pretrained_encoder_from
- )
-
- return encoder
-
-
-@register_model_architecture(
- "convtransformer_augmented_memory", "convtransformer_augmented_memory"
-)
-def augmented_memory_convtransformer_espnet(args):
- convtransformer_espnet(args)
-
-
-# ============================================================================ #
-# Convtransformer
-# with monotonic attention decoder
-# with emformer encoder
-# ============================================================================ #
-
-
-class ConvTransformerEmformerEncoder(ConvTransformerEncoder):
- def __init__(self, args):
- super().__init__(args)
- stride = self.conv_layer_stride(args)
- trf_left_context = args.segment_left_context // stride
- trf_right_context = args.segment_right_context // stride
- context_config = [trf_left_context, trf_right_context]
- self.transformer_layers = nn.ModuleList(
- [
- NoSegAugmentedMemoryTransformerEncoderLayer(
- input_dim=args.encoder_embed_dim,
- num_heads=args.encoder_attention_heads,
- ffn_dim=args.encoder_ffn_embed_dim,
- num_layers=args.encoder_layers,
- dropout_in_attn=args.dropout,
- dropout_on_attn=args.dropout,
- dropout_on_fc1=args.dropout,
- dropout_on_fc2=args.dropout,
- activation_fn=args.activation_fn,
- context_config=context_config,
- segment_size=args.segment_length,
- max_memory_size=args.max_memory_size,
- scaled_init=True, # TODO: use constant for now.
- tanh_on_mem=args.amtrf_tanh_on_mem,
- )
- ]
- )
- self.conv_transformer_encoder = ConvTransformerEncoder(args)
-
- def forward(self, src_tokens, src_lengths):
- encoder_out: Dict[str, List[Tensor]] = self.conv_transformer_encoder(src_tokens, src_lengths.to(src_tokens.device))
- output = encoder_out["encoder_out"][0]
- encoder_padding_masks = encoder_out["encoder_padding_mask"]
-
- return {
- "encoder_out": [output],
- # The mask is truncated because, in the original implementation,
- # the output did not treat the last segment as right context.
- "encoder_padding_mask": [encoder_padding_masks[0][:, : output.size(0)]] if len(encoder_padding_masks) > 0
- else [],
- "encoder_embedding": [],
- "encoder_states": [],
- "src_tokens": [],
- "src_lengths": [],
- }
-
- @staticmethod
- def conv_layer_stride(args):
- # TODO: make it configurable from the args
- return 4
-
-
-@register_model("convtransformer_emformer")
-class ConvtransformerEmformer(SimulConvTransformerModel):
- @staticmethod
- def add_args(parser):
- super(ConvtransformerEmformer, ConvtransformerEmformer).add_args(parser)
-
- parser.add_argument(
- "--segment-length",
- type=int,
- metavar="N",
- help="length of each segment (not including left context / right context)",
- )
- parser.add_argument(
- "--segment-left-context",
- type=int,
- help="length of left context in a segment",
- )
- parser.add_argument(
- "--segment-right-context",
- type=int,
- help="length of right context in a segment",
- )
- parser.add_argument(
- "--max-memory-size",
- type=int,
- default=-1,
- help="Right context for the segment.",
- )
- parser.add_argument(
- "--amtrf-tanh-on-mem",
- default=False,
- action="store_true",
- help="whether to use tanh on memory vector",
- )
-
- @classmethod
- def build_encoder(cls, args):
- encoder = ConvTransformerEmformerEncoder(args)
- if getattr(args, "load_pretrained_encoder_from", None):
- encoder = checkpoint_utils.load_pretrained_component_from_model(
- component=encoder, checkpoint=args.load_pretrained_encoder_from
- )
- return encoder
-
-
-@register_model_architecture(
- "convtransformer_emformer",
- "convtransformer_emformer",
-)
-def convtransformer_emformer_base(args):
- convtransformer_espnet(args)
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/config/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/config/__init__.py
deleted file mode 100644
index 6264236915a7269a4d920ee8213004374dd86a9a..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/config/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_amp_optimizer.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_amp_optimizer.py
deleted file mode 100644
index 3a785e1830e91b7e090e841d428fe4ea61f3a65c..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_amp_optimizer.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import copy
-import unittest
-
-import torch
-from torch.cuda.amp import autocast, GradScaler
-from fairseq.optim import build_optimizer
-
-
-@unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU")
-class TestGradientScalingAMP(unittest.TestCase):
- def setUp(self):
- self.x = torch.tensor([2.0]).cuda().half()
- weight = 3.0
- bias = 5.0
- self.error = 1.0
- self.target = torch.tensor([self.x * weight + bias + self.error]).cuda()
- self.loss_fn = torch.nn.L1Loss()
-
- self.model = torch.nn.Linear(1, 1)
- self.model.weight.data = torch.tensor([[weight]])
- self.model.bias.data = torch.tensor([bias])
- self.model.cuda()
- self.params = list(self.model.parameters())
-
- self.namespace_dls = argparse.Namespace(
- optimizer="adam",
- lr=[0.1],
- adam_betas="(0.9, 0.999)",
- adam_eps=1e-8,
- weight_decay=0.0,
- threshold_loss_scale=1,
- min_loss_scale=1e-4,
- )
- self.scaler = GradScaler(
- init_scale=1,
- growth_interval=1,
- )
-
- def run_iter(self, model, params, optimizer):
- optimizer.zero_grad()
- with autocast():
- y = model(self.x)
- loss = self.loss_fn(y, self.target)
- self.scaler.scale(loss).backward()
- self.assertEqual(loss, torch.tensor(1.0, device="cuda:0", dtype=torch.float16))
-
- self.scaler.unscale_(optimizer)
- grad_norm = optimizer.clip_grad_norm(0)
- self.assertAlmostEqual(grad_norm.item(), 2.2361, 4)
-
- self.scaler.step(optimizer)
- self.scaler.update()
- self.assertEqual(
- model.weight,
- torch.tensor(
- [[3.1]], device="cuda:0", requires_grad=True
- ),
- )
- self.assertEqual(
- model.bias,
- torch.tensor(
- [5.1], device="cuda:0", requires_grad=True
- ),
- )
- self.assertEqual(self.scaler.get_scale(), 2.0)
-
- def test_automatic_mixed_precision(self):
- model = copy.deepcopy(self.model)
- params = list(model.parameters())
- optimizer = build_optimizer(self.namespace_dls, params)
-
- self.run_iter(model, params, optimizer)
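
For reference, the plain-PyTorch AMP loop that the test above exercises through fairseq's optimizer wrapper looks roughly like this. It is a sketch only: the model, data, and hyperparameters are made up, and it assumes a CUDA device is available.

import torch
from torch.cuda.amp import autocast, GradScaler

if torch.cuda.is_available():
    model = torch.nn.Linear(1, 1).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    scaler = GradScaler(init_scale=1, growth_interval=1)
    x = torch.randn(8, 1, device="cuda")
    target = torch.randn(8, 1, device="cuda")

    optimizer.zero_grad()
    with autocast():                       # forward pass in mixed precision
        loss = torch.nn.functional.l1_loss(model(x), target)
    scaler.scale(loss).backward()          # scale the loss to avoid fp16 underflow
    scaler.unscale_(optimizer)             # unscale before clipping gradients
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
    scaler.step(optimizer)                 # skipped automatically on inf/NaN grads
    scaler.update()                        # adjust the loss scale for the next step
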
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_ema.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_ema.py
deleted file mode 100644
index 88ea65a434e49775d40f2b08ce6df0f8d9929c18..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_ema.py
+++ /dev/null
@@ -1,199 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import unittest
-from copy import deepcopy
-from dataclasses import dataclass
-from typing import Optional
-
-import torch
-from fairseq.models.ema import EMA
-
-
-class DummyModule(torch.nn.Module):
- def __init__(self) -> None:
- """LightningModule for testing purposes
-
- Args:
- epoch_min_loss_override (int, optional): Pass in an epoch that will be set to the minimum
- validation loss for testing purposes (zero based). If None this is ignored. Defaults to None.
- """
- super().__init__()
- self.layer = torch.nn.Linear(in_features=32, out_features=2)
- self.another_layer = torch.nn.Linear(in_features=2, out_features=2)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = self.layer(x)
- return self.another_layer(x)
-
-
-@dataclass
-class EMAConfig(object):
- ema_decay: float = 0.99
- ema_start_update: int = 0
- ema_fp32: bool = False
- ema_seed_model: Optional[str] = None
-
-
-class TestEMAGPU(unittest.TestCase):
- def assertTorchAllClose(self, x, y, atol=1e-8, rtol=1e-5, msg=None):
- diff = x.float() - y.float()
- diff_norm = torch.norm(diff)
- other_norm = torch.norm(y.float())
-
- if msg is None:
- msg = "|input - other| > {} + {} * |other|".format(
- atol, rtol
- )
-
- self.assertLessEqual(
- diff_norm,
- atol + rtol * other_norm,
- msg=msg,
- )
-
- def test_ema(self):
- model = DummyModule()
- optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
- state = deepcopy(model.state_dict())
- config = EMAConfig()
- ema = EMA(model, config)
-
- # set decay
- ema._set_decay(config.ema_decay)
- self.assertEqual(ema.get_decay(), config.ema_decay)
-
- # get model
- self.assertEqual(ema.get_model(), ema.model)
-
- # Since fp32 params is not used, it should be of size 0
- self.assertEqual(len(ema.fp32_params), 0)
-
- # EMA step
- x = torch.randn(32)
- y = model(x)
- loss = y.sum()
- loss.backward()
- optimizer.step()
-
- ema.step(model)
-
- ema_state_dict = ema.get_model().state_dict()
-
- for key, param in model.state_dict().items():
- prev_param = state[key]
- ema_param = ema_state_dict[key]
-
- if "version" in key:
- # Do not decay a model.version pytorch param
- continue
- self.assertTorchAllClose(
- ema_param,
- config.ema_decay * prev_param + (1 - config.ema_decay) * param,
- )
-
- # Since fp32 params is not used, it should be of size 0
- self.assertEqual(len(ema.fp32_params), 0)
-
- # Load EMA into model
- model2 = DummyModule()
- ema.reverse(model2)
-
- for key, param in model2.state_dict().items():
- ema_param = ema_state_dict[key]
- self.assertTrue(
- torch.allclose(ema_param, param)
- )
-
- def test_ema_fp32(self):
- model = DummyModule().half()
- optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
- state = deepcopy(model.state_dict())
- config = EMAConfig(ema_fp32=True)
- ema = EMA(model, config)
-
- x = torch.randn(32)
- y = model(x.half())
- loss = y.sum()
- loss.backward()
- optimizer.step()
-
- ema.step(model)
-
- for key, param in model.state_dict().items():
- prev_param = state[key]
- ema_param = ema.get_model().state_dict()[key]
-
- if "version" in key:
- # Do not decay a model.version pytorch param
- continue
- self.assertIn(key, ema.fp32_params)
-
- # EMA update is done in fp32, and hence the EMA param must be
- # closer to the EMA update done in fp32 than in fp16.
- self.assertLessEqual(
- torch.norm(
- ema_param.float() -
- (config.ema_decay * prev_param.float() + (1 - config.ema_decay) * param.float()).half().float()
- ),
- torch.norm(
- ema_param.float() -
- (config.ema_decay * prev_param + (1 - config.ema_decay) * param).float()
- ),
- )
- self.assertTorchAllClose(
- ema_param,
- (config.ema_decay * prev_param.float() + (1 - config.ema_decay) * param.float()).half(),
- )
-
- def test_ema_fp16(self):
- model = DummyModule().half()
- optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
- state = deepcopy(model.state_dict())
- config = EMAConfig(ema_fp32=False)
- ema = EMA(model, config)
-
- # Since fp32 params is not used, it should be of size 0
- self.assertEqual(len(ema.fp32_params), 0)
-
- x = torch.randn(32)
- y = model(x.half())
- loss = y.sum()
- loss.backward()
- optimizer.step()
-
- ema.step(model)
-
- for key, param in model.state_dict().items():
- prev_param = state[key]
- ema_param = ema.get_model().state_dict()[key]
-
- if "version" in key:
- # Do not decay a model.version pytorch param
- continue
-
- # EMA update is done in fp16, and hence the EMA param must be
- # closer to the EMA update done in fp16 than in fp32.
- self.assertLessEqual(
- torch.norm(
- ema_param.float() -
- (config.ema_decay * prev_param + (1 - config.ema_decay) * param).float()
- ),
- torch.norm(
- ema_param.float() -
- (config.ema_decay * prev_param.float() + (1 - config.ema_decay) * param.float()).half().float()
- ),
- )
- self.assertTorchAllClose(
- ema_param,
- config.ema_decay * prev_param + (1 - config.ema_decay) * param,
- )
-
- # Since fp32 params is not used, it should be of size 0
- self.assertEqual(len(ema.fp32_params), 0)
-
-
-if __name__ == "__main__":
- unittest.main()
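
The quantity these tests check is the standard exponential moving average of the parameters. A minimal sketch of that update, independent of the fairseq EMA class (the decay value and model are illustrative):

import copy
import torch

decay = 0.99
model = torch.nn.Linear(4, 2)
ema_model = copy.deepcopy(model)  # shadow copy holding the averaged weights

with torch.no_grad():
    # ...after each optimizer step on `model`...
    for ema_p, p in zip(ema_model.parameters(), model.parameters()):
        # ema = decay * ema + (1 - decay) * param
        ema_p.mul_(decay).add_(p, alpha=1.0 - decay)
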
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/models/sequence_generator.py b/spaces/OFA-Sys/OFA-Image_Caption/models/sequence_generator.py
deleted file mode 100644
index 7afe0757e38603740f7c2186d5410f9346e6b568..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/models/sequence_generator.py
+++ /dev/null
@@ -1,1053 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-from typing import Dict, List, Optional
-import sys
-
-import torch
-import torch.nn as nn
-from fairseq import search, utils
-from fairseq.models import FairseqIncrementalDecoder
-from torch import Tensor
-from fairseq.ngram_repeat_block import NGramRepeatBlock
-
-from data import data_utils
-
-class SequenceGenerator(nn.Module):
- def __init__(
- self,
- models,
- tgt_dict,
- beam_size=1,
- max_len_a=0,
- max_len_b=200,
- max_len=0,
- min_len=1,
- normalize_scores=True,
- len_penalty=1.0,
- unk_penalty=0.0,
- temperature=1.0,
- match_source_len=False,
- no_repeat_ngram_size=0,
- search_strategy=None,
- eos=None,
- symbols_to_strip_from_output=None,
- lm_model=None,
- lm_weight=1.0,
- constraint_trie=None,
- constraint_range=None,
- gen_code=False,
- gen_box=False,
- ignore_eos=False,
- zero_shot=False
- ):
- """Generates translations of a given source sentence.
-
- Args:
- models (List[~fairseq.models.FairseqModel]): ensemble of models,
- currently support fairseq.models.TransformerModel for scripting
- beam_size (int, optional): beam width (default: 1)
- max_len_a/b (int, optional): generate sequences of maximum length
- ax + b, where x is the source length
- max_len (int, optional): the maximum length of the generated output
- (not including end-of-sentence)
- min_len (int, optional): the minimum length of the generated output
- (not including end-of-sentence)
- normalize_scores (bool, optional): normalize scores by the length
- of the output (default: True)
- len_penalty (float, optional): length penalty, where <1.0 favors
- shorter, >1.0 favors longer sentences (default: 1.0)
- unk_penalty (float, optional): unknown word penalty, where <0
- produces more unks, >0 produces fewer (default: 0.0)
- temperature (float, optional): temperature, where values
- >1.0 produce more uniform samples and values <1.0 produce
- sharper samples (default: 1.0)
- match_source_len (bool, optional): outputs should match the source
- length (default: False)
- """
- super().__init__()
- if isinstance(models, EnsembleModel):
- self.model = models
- else:
- self.model = EnsembleModel(models)
- self.gen_code = gen_code
- self.gen_box = gen_box
- self.ignore_eos = ignore_eos
- self.tgt_dict = tgt_dict
- self.pad = tgt_dict.pad()
- self.unk = tgt_dict.unk()
- self.bos = tgt_dict.bos()
- self.eos = tgt_dict.eos() if eos is None else eos
- self.symbols_to_strip_from_output = (
- symbols_to_strip_from_output.union({self.eos})
- if symbols_to_strip_from_output is not None
- else {self.bos, self.eos}
- )
- self.vocab_size = len(tgt_dict)
- self.beam_size = beam_size
- # the max beam size is the dictionary size - 1, since we never select pad
- self.beam_size = min(beam_size, self.vocab_size - 1)
- self.max_len_a = max_len_a
- self.max_len_b = max_len_b
- self.min_len = min_len
- self.max_len = max_len or self.model.max_decoder_positions()
-
- self.normalize_scores = normalize_scores
- self.len_penalty = len_penalty
- self.unk_penalty = unk_penalty
- self.temperature = temperature
- self.match_source_len = match_source_len
- self.zero_shot = zero_shot
-
- if no_repeat_ngram_size > 0:
- self.repeat_ngram_blocker = NGramRepeatBlock(no_repeat_ngram_size)
- else:
- self.repeat_ngram_blocker = None
-
- assert temperature > 0, "--temperature must be greater than 0"
-
- self.search = (
- search.BeamSearch(tgt_dict) if search_strategy is None else search_strategy
- )
- # We only need to set src_lengths in LengthConstrainedBeamSearch.
- # As a module attribute, setting it would break in multithread
- # settings when the model is shared.
- self.should_set_src_lengths = (
- hasattr(self.search, "needs_src_lengths") and self.search.needs_src_lengths
- )
-
- self.model.eval()
-
- self.lm_model = lm_model
- self.lm_weight = lm_weight
- if self.lm_model is not None:
- self.lm_model.eval()
-
- self.constraint_trie = constraint_trie
-
- self.constraint_start = None
- self.constraint_end = None
- if constraint_range is not None:
- constraint_start, constraint_end = constraint_range.split(',')
- self.constraint_start = int(constraint_start)
- self.constraint_end = int(constraint_end)
-
- def cuda(self):
- self.model.cuda()
- return self
-
- @torch.no_grad()
- def forward(
- self,
- sample: Dict[str, Dict[str, Tensor]],
- prefix_tokens: Optional[Tensor] = None,
- bos_token: Optional[int] = None,
- ):
- """Generate a batch of translations.
-
- Args:
- sample (dict): batch
- prefix_tokens (torch.LongTensor, optional): force decoder to begin
- with these tokens
- bos_token (int, optional): beginning of sentence token
- (default: self.eos)
- """
- return self._generate(sample, prefix_tokens, bos_token=bos_token)
-
- # TODO(myleott): unused, deprecate after pytorch-translate migration
- def generate_batched_itr(self, data_itr, beam_size=None, cuda=False, timer=None):
- """Iterate over a batched dataset and yield individual translations.
- Args:
- cuda (bool, optional): use GPU for generation
- timer (StopwatchMeter, optional): time generations
- """
- for sample in data_itr:
- s = utils.move_to_cuda(sample) if cuda else sample
- if "net_input" not in s:
- continue
- input = s["net_input"]
- # model.forward normally channels prev_output_tokens into the decoder
- # separately, but SequenceGenerator directly calls model.encoder
- encoder_input = {
- k: v for k, v in input.items() if k != "prev_output_tokens"
- }
- if timer is not None:
- timer.start()
- with torch.no_grad():
- hypos = self.generate(encoder_input)
- if timer is not None:
- timer.stop(sum(len(h[0]["tokens"]) for h in hypos))
- for i, id in enumerate(s["id"].data):
- # remove padding
- src = utils.strip_pad(input["src_tokens"].data[i, :], self.pad)
- ref = (
- utils.strip_pad(s["target"].data[i, :], self.pad)
- if s["target"] is not None
- else None
- )
- yield id, src, ref, hypos[i]
-
- @torch.no_grad()
- def generate(self, models, sample: Dict[str, Dict[str, Tensor]], **kwargs) -> List[List[Dict[str, Tensor]]]:
- """Generate translations. Match the api of other fairseq generators.
-
- Args:
- models (List[~fairseq.models.FairseqModel]): ensemble of models
- sample (dict): batch
- prefix_tokens (torch.LongTensor, optional): force decoder to begin
- with these tokens
- constraints (torch.LongTensor, optional): force decoder to include
- the list of constraints
- bos_token (int, optional): beginning of sentence token
- (default: self.eos)
- """
- return self._generate(models, sample, **kwargs)
-
- def _generate(
- self,
- models,
- sample: Dict[str, Dict[str, Tensor]],
- prefix_tokens: Optional[Tensor] = None,
- constraints: Optional[Tensor] = None,
- bos_token: Optional[int] = None,
- ):
- model = EnsembleModel(models)
- incremental_states = torch.jit.annotate(
- List[Dict[str, Dict[str, Optional[Tensor]]]],
- [
- torch.jit.annotate(Dict[str, Dict[str, Optional[Tensor]]], {})
- for i in range(model.models_size)
- ],
- )
- net_input = sample["net_input"]
-
- if "src_tokens" in net_input:
- src_tokens = net_input["src_tokens"]
- # source length is the number of tokens, excluding end-of-sentence and padding
- src_lengths = (
- (src_tokens.ne(self.eos) & src_tokens.ne(self.pad)).long().sum(dim=1)
- )
- elif "source" in net_input:
- src_tokens = net_input["source"]
- src_lengths = (
- net_input["padding_mask"].size(-1) - net_input["padding_mask"].sum(-1)
- if net_input["padding_mask"] is not None
- else torch.tensor(src_tokens.size(-1)).to(src_tokens)
- )
- elif "features" in net_input:
- src_tokens = net_input["features"]
- src_lengths = (
- net_input["padding_mask"].size(-1) - net_input["padding_mask"].sum(-1)
- if net_input["padding_mask"] is not None
- else torch.tensor(src_tokens.size(-1)).to(src_tokens)
- )
- else:
- raise Exception("expected src_tokens or source in net input. input keys: " + str(net_input.keys()))
-
- # bsz: total number of sentences in beam
- # Note that src_tokens may have more than 2 dimensions (e.g. audio features)
- bsz, src_len = src_tokens.size()[:2]
- beam_size = self.beam_size
-
- if constraints is not None and not self.search.supports_constraints:
- raise NotImplementedError(
- "Target-side constraints were provided, but search method doesn't support them"
- )
-
- # Initialize constraints, when active
- self.search.init_constraints(constraints, beam_size)
-
- max_len: int = -1
- if self.match_source_len:
- max_len = src_lengths.max().item()
- else:
- max_len = int(self.max_len_a * src_len + self.max_len_b)
- assert (
- self.min_len <= max_len
- ), "min_len cannot be larger than max_len, please adjust these!"
- # compute the encoder output for each beam
- with torch.autograd.profiler.record_function("EnsembleModel: forward_encoder"):
- encoder_outs = model.forward_encoder(net_input)
-
- # placeholder of indices for bsz * beam_size to hold tokens and accumulative scores
- new_order = torch.arange(bsz).view(-1, 1).repeat(1, beam_size).view(-1)
- new_order = new_order.to(src_tokens.device).long()
- encoder_outs = model.reorder_encoder_out(encoder_outs, new_order)
- # ensure encoder_outs is a List.
- assert encoder_outs is not None
-
- # initialize buffers
- scores = (
- torch.zeros(bsz * beam_size, max_len + 1).to(src_tokens).float()
- ) # +1 for eos; pad is never chosen for scoring
- tokens = (
- torch.zeros(bsz * beam_size, max_len + 2)
- .to(src_tokens)
- .long()
- .fill_(self.pad)
- ) # +2 for eos and pad
- # tokens[:, 0] = self.eos if bos_token is None else bos_token
- tokens[:, 0] = self.bos
- attn: Optional[Tensor] = None
-
- # A list that indicates candidates that should be ignored.
- # For example, suppose we're sampling and have already finalized 2/5
- # samples. Then cands_to_ignore would mark 2 positions as being ignored,
- # so that we only finalize the remaining 3 samples.
- cands_to_ignore = (
- torch.zeros(bsz, beam_size).to(src_tokens).eq(-1)
- ) # forward and backward-compatible False mask
-
- # list of completed sentences
- finalized = torch.jit.annotate(
- List[List[Dict[str, Tensor]]],
- [torch.jit.annotate(List[Dict[str, Tensor]], []) for i in range(bsz)],
- ) # contains lists of dictionaries of information about the hypotheses being finalized at each step
-
- # a boolean array indicating if the sentence at the index is finished or not
- finished = [False for i in range(bsz)]
- num_remaining_sent = bsz # number of sentences remaining
-
- # number of candidate hypos per step
- cand_size = 2 * beam_size # 2 x beam size in case half are EOS
-
- # offset arrays for converting between different indexing schemes
- bbsz_offsets = (
- (torch.arange(0, bsz) * beam_size)
- .unsqueeze(1)
- .type_as(tokens)
- .to(src_tokens.device)
- )
- cand_offsets = torch.arange(0, cand_size).type_as(tokens).to(src_tokens.device)
-
- reorder_state: Optional[Tensor] = None
- batch_idxs: Optional[Tensor] = None
-
- original_batch_idxs: Optional[Tensor] = None
- if "id" in sample and isinstance(sample["id"], Tensor):
- original_batch_idxs = sample["id"]
- else:
- original_batch_idxs = torch.arange(0, bsz).type_as(tokens)
-
- for step in range(max_len + 1): # one extra step for EOS marker
- # reorder decoder internal states based on the prev choice of beams
- if reorder_state is not None:
- if batch_idxs is not None:
- # update beam indices to take into account removed sentences
- corr = batch_idxs - torch.arange(batch_idxs.numel()).type_as(
- batch_idxs
- )
- reorder_state.view(-1, beam_size).add_(
- corr.unsqueeze(-1) * beam_size
- )
- original_batch_idxs = original_batch_idxs[batch_idxs]
- model.reorder_incremental_state(incremental_states, reorder_state)
- encoder_outs = model.reorder_encoder_out(
- encoder_outs, reorder_state
- )
- with torch.autograd.profiler.record_function("EnsembleModel: forward_decoder"):
- lprobs, avg_attn_scores = model.forward_decoder(
- tokens[:, : step + 1],
- encoder_outs,
- incremental_states,
- self.temperature,
- constraint_trie=self.constraint_trie,
- constraint_start=self.constraint_start,
- constraint_end=self.constraint_end,
- gen_code=self.gen_code,
- zero_shot=self.zero_shot,
- prefix_tokens=prefix_tokens
- )
-
- if self.lm_model is not None:
- lm_out = self.lm_model(tokens[:, : step + 1])
- probs = self.lm_model.get_normalized_probs(
- lm_out, log_probs=True, sample=None
- )
- probs = probs[:, -1, :] * self.lm_weight
- lprobs += probs
- # handle prefix tokens (possibly with different lengths)
- if (
- prefix_tokens is not None
- and step < prefix_tokens.size(1)
- and step < max_len
- ):
- lprobs, tokens, scores = self._prefix_tokens(
- step, lprobs, scores, tokens, prefix_tokens, beam_size
- )
- elif step < self.min_len:
- # minimum length constraint (does not apply if using prefix_tokens)
- lprobs[:, self.eos] = -math.inf
-
- lprobs[lprobs != lprobs] = torch.tensor(-math.inf).to(lprobs)  # replace NaN scores (x != x) with -inf
-
- lprobs[:, self.pad] = -math.inf # never select pad
- lprobs[:, self.unk] -= self.unk_penalty # apply unk penalty
-
- if (self.gen_code or self.gen_box) and step < max_len:
- lprobs[:, :4] = -math.inf
- if self.gen_box:
- lprobs[:, -1] = -math.inf
- if (step + 1) % 5 == 0:
- lprobs[:, self.constraint_start:59457] = -math.inf
- else:
- lprobs[:, 59457:] = -math.inf
-
- # handle max length constraint
- if step >= max_len:
- lprobs[:, : self.eos] = -math.inf
- lprobs[:, self.eos + 1 :] = -math.inf
- if self.ignore_eos:
- lprobs[:, self.eos] = 1
-
- # Record attention scores, only support avg_attn_scores is a Tensor
- if avg_attn_scores is not None:
- if attn is None:
- attn = torch.empty(
- bsz * beam_size, avg_attn_scores.size(1), max_len + 2
- ).to(scores)
- attn[:, :, step + 1].copy_(avg_attn_scores)
-
- scores = scores.type_as(lprobs)
- eos_bbsz_idx = torch.empty(0).to(
- tokens
- ) # indices of hypothesis ending with eos (finished sentences)
- eos_scores = torch.empty(0).to(
- scores
- ) # scores of hypothesis ending with eos (finished sentences)
-
- if self.should_set_src_lengths:
- self.search.set_src_lengths(src_lengths)
-
- if self.repeat_ngram_blocker is not None:
- lprobs = self.repeat_ngram_blocker(tokens, lprobs, bsz, beam_size, step)
-
- # Shape: (batch, cand_size)
- cand_scores, cand_indices, cand_beams = self.search.step(
- step,
- lprobs.view(bsz, -1, self.vocab_size),
- scores.view(bsz, beam_size, -1)[:, :, :step],
- tokens[:, : step + 1],
- original_batch_idxs,
- )
-
- # cand_bbsz_idx contains beam indices for the top candidate
- # hypotheses, with a range of values: [0, bsz*beam_size),
- # and dimensions: [bsz, cand_size]
- cand_bbsz_idx = cand_beams.add(bbsz_offsets)
-
- # finalize hypotheses that end in eos
- # Shape of eos_mask: (batch size, beam size)
- eos_mask = cand_indices.eq(self.eos) & cand_scores.ne(-math.inf)
- eos_mask[:, :beam_size][cands_to_ignore] = torch.tensor(0).to(eos_mask)
-
- # only consider eos when it's among the top beam_size indices
- # Now we know what beam item(s) to finish
- # Shape: 1d list of absolute-numbered
- eos_bbsz_idx = torch.masked_select(
- cand_bbsz_idx[:, :beam_size], mask=eos_mask[:, :beam_size]
- )
-
- finalized_sents: List[int] = []
- if eos_bbsz_idx.numel() > 0:
- eos_scores = torch.masked_select(
- cand_scores[:, :beam_size], mask=eos_mask[:, :beam_size]
- )
-
- finalized_sents = self.finalize_hypos(
- step,
- eos_bbsz_idx,
- eos_scores,
- tokens,
- scores,
- finalized,
- finished,
- beam_size,
- attn,
- src_lengths,
- max_len,
- )
- num_remaining_sent -= len(finalized_sents)
-
- assert num_remaining_sent >= 0
- if num_remaining_sent == 0:
- break
- if self.search.stop_on_max_len and step >= max_len:
- break
- assert step < max_len, f"{step} < {max_len}"
-
- # Remove finalized sentences (ones for which {beam_size}
- # finished hypotheses have been generated) from the batch.
- if len(finalized_sents) > 0:
- new_bsz = bsz - len(finalized_sents)
-
- # construct batch_idxs which holds indices of batches to keep for the next pass
- batch_mask = torch.ones(
- bsz, dtype=torch.bool, device=cand_indices.device
- )
- batch_mask[finalized_sents] = False
- # TODO replace `nonzero(as_tuple=False)` after TorchScript supports it
- batch_idxs = torch.arange(
- bsz, device=cand_indices.device
- ).masked_select(batch_mask)
-
- # Choose the subset of the hypothesized constraints that will continue
- self.search.prune_sentences(batch_idxs)
-
- eos_mask = eos_mask[batch_idxs]
- cand_beams = cand_beams[batch_idxs]
- bbsz_offsets.resize_(new_bsz, 1)
- cand_bbsz_idx = cand_beams.add(bbsz_offsets)
- cand_scores = cand_scores[batch_idxs]
- cand_indices = cand_indices[batch_idxs]
-
- if prefix_tokens is not None:
- prefix_tokens = prefix_tokens[batch_idxs]
- src_lengths = src_lengths[batch_idxs]
- cands_to_ignore = cands_to_ignore[batch_idxs]
-
- scores = scores.view(bsz, -1)[batch_idxs].view(new_bsz * beam_size, -1)
- tokens = tokens.view(bsz, -1)[batch_idxs].view(new_bsz * beam_size, -1)
- if attn is not None:
- attn = attn.view(bsz, -1)[batch_idxs].view(
- new_bsz * beam_size, attn.size(1), -1
- )
- bsz = new_bsz
- else:
- batch_idxs = None
-
- # Set active_mask so that values > cand_size indicate eos hypos
- # and values < cand_size indicate candidate active hypos.
- # After, the min values per row are the top candidate active hypos
-
- # Rewrite the operator since element-wise "or" is not supported in TorchScript.
-
- eos_mask[:, :beam_size] = ~((~cands_to_ignore) & (~eos_mask[:, :beam_size]))
- active_mask = torch.add(
- eos_mask.type_as(cand_offsets) * cand_size,
- cand_offsets[: eos_mask.size(1)],
- )
-
- # get the top beam_size active hypotheses, which are just
- # the hypos with the smallest values in active_mask.
- # {active_hypos} indicates which {beam_size} hypotheses
- # from the list of {2 * beam_size} candidates were
- # selected. Shapes: (batch size, beam size)
- new_cands_to_ignore, active_hypos = torch.topk(
- active_mask, k=beam_size, dim=1, largest=False
- )
-
- # update cands_to_ignore to ignore any finalized hypos.
- cands_to_ignore = new_cands_to_ignore.ge(cand_size)[:, :beam_size]
- # Make sure there is at least one active item for each sentence in the batch.
- assert (~cands_to_ignore).any(dim=1).all()
-
- # {active_bbsz_idx} denotes which beam number is continued for each new hypothesis (a beam
- # can be selected more than once).
- active_bbsz_idx = torch.gather(cand_bbsz_idx, dim=1, index=active_hypos)
- active_scores = torch.gather(cand_scores, dim=1, index=active_hypos)
-
- active_bbsz_idx = active_bbsz_idx.view(-1)
- active_scores = active_scores.view(-1)
-
- # copy tokens and scores for active hypotheses
-
- # Set the tokens for each beam (can select the same row more than once)
- tokens[:, : step + 1] = torch.index_select(
- tokens[:, : step + 1], dim=0, index=active_bbsz_idx
- )
- # Select the next token for each of them
- tokens.view(bsz, beam_size, -1)[:, :, step + 1] = torch.gather(
- cand_indices, dim=1, index=active_hypos
- )
- if step > 0:
- scores[:, :step] = torch.index_select(
- scores[:, :step], dim=0, index=active_bbsz_idx
- )
- scores.view(bsz, beam_size, -1)[:, :, step] = torch.gather(
- cand_scores, dim=1, index=active_hypos
- )
-
- # Update constraints based on which candidates were selected for the next beam
- self.search.update_constraints(active_hypos)
-
- # copy attention for active hypotheses
- if attn is not None:
- attn[:, :, : step + 2] = torch.index_select(
- attn[:, :, : step + 2], dim=0, index=active_bbsz_idx
- )
-
- # reorder incremental state in decoder
- reorder_state = active_bbsz_idx
-
- # sort by score descending
- for sent in range(len(finalized)):
- scores = torch.tensor(
- [float(elem["score"].item()) for elem in finalized[sent]]
- )
- _, sorted_scores_indices = torch.sort(scores, descending=True)
- finalized[sent] = [finalized[sent][ssi] for ssi in sorted_scores_indices]
- finalized[sent] = torch.jit.annotate(
- List[Dict[str, Tensor]], finalized[sent]
- )
- return finalized
-
- def _prefix_tokens(
- self, step: int, lprobs, scores, tokens, prefix_tokens, beam_size: int
- ):
- """Handle prefix tokens"""
- prefix_toks = prefix_tokens[:, step].unsqueeze(-1).repeat(1, beam_size).view(-1)
- prefix_lprobs = lprobs.gather(-1, prefix_toks.unsqueeze(-1))
- prefix_mask = prefix_toks.ne(self.pad)
- if self.constraint_trie is None:
- lprobs[prefix_mask] = torch.min(prefix_lprobs) - 1
- else:
- lprobs[prefix_mask] = -math.inf
- lprobs[prefix_mask] = lprobs[prefix_mask].scatter(
- -1, prefix_toks[prefix_mask].unsqueeze(-1), prefix_lprobs[prefix_mask]
- )
- # if prefix includes eos, then we should make sure tokens and
- # scores are the same across all beams
- eos_mask = prefix_toks.eq(self.eos)
- if eos_mask.any():
- # validate that the first beam matches the prefix
- first_beam = tokens[eos_mask].view(-1, beam_size, tokens.size(-1))[
- :, 0, 1 : step + 1
- ]
- eos_mask_batch_dim = eos_mask.view(-1, beam_size)[:, 0]
- target_prefix = prefix_tokens[eos_mask_batch_dim][:, :step]
- assert (first_beam == target_prefix).all()
-
- # copy tokens, scores and lprobs from the first beam to all beams
- tokens = self.replicate_first_beam(tokens, eos_mask_batch_dim, beam_size)
- scores = self.replicate_first_beam(scores, eos_mask_batch_dim, beam_size)
- lprobs = self.replicate_first_beam(lprobs, eos_mask_batch_dim, beam_size)
- return lprobs, tokens, scores
-
- def replicate_first_beam(self, tensor, mask, beam_size: int):
- tensor = tensor.view(-1, beam_size, tensor.size(-1))
- tensor[mask] = tensor[mask][:, :1, :]
- return tensor.view(-1, tensor.size(-1))
-
- def finalize_hypos(
- self,
- step: int,
- bbsz_idx,
- eos_scores,
- tokens,
- scores,
- finalized: List[List[Dict[str, Tensor]]],
- finished: List[bool],
- beam_size: int,
- attn: Optional[Tensor],
- src_lengths,
- max_len: int,
- ):
- """Finalize hypothesis, store finalized information in `finalized`, and change `finished` accordingly.
- A sentence is finalized when {beam_size} finished items have been collected for it.
-
- Returns number of sentences (not beam items) being finalized.
- These will be removed from the batch and not processed further.
- Args:
- bbsz_idx (Tensor):
- """
- assert bbsz_idx.numel() == eos_scores.numel()
-
- # clone relevant token and attention tensors.
- # tokens is (batch * beam, max_len). So the index_select
- # gets the rows that just ended in EOS, then selects cols 1..{step + 2}
- tokens_clone = tokens.index_select(0, bbsz_idx)[
- :, 1 : step + 2
- ] # skip the first index, which is BOS (tokens[:, 0] is set to self.bos)
-
- tokens_clone[:, step] = self.eos
- attn_clone = (
- attn.index_select(0, bbsz_idx)[:, :, 1 : step + 2]
- if attn is not None
- else None
- )
-
- # compute scores per token position
- pos_scores = scores.index_select(0, bbsz_idx)[:, : step + 1]
- pos_scores[:, step] = eos_scores
- # convert from cumulative to per-position scores
- pos_scores[:, 1:] = pos_scores[:, 1:] - pos_scores[:, :-1]
-
- # normalize sentence-level scores
- if self.normalize_scores:
- eos_scores /= (step + 1) ** self.len_penalty
-
- # cum_unfin records which sentences in the batch are finished.
- # It helps match indexing between (a) the original sentences
- # in the batch and (b) the current, possibly-reduced set of
- # sentences.
- cum_unfin: List[int] = []
- prev = 0
- for f in finished:
- if f:
- prev += 1
- else:
- cum_unfin.append(prev)
- cum_fin_tensor = torch.tensor(cum_unfin, dtype=torch.int).to(bbsz_idx)
-
- unfin_idx = bbsz_idx // beam_size
- sent = unfin_idx + torch.index_select(cum_fin_tensor, 0, unfin_idx)
-
- # For every finished beam item, build a combined key "{sent}{unfin_idx}",
- # where "unfin_idx" is the sentence index in the current (possibly reduced)
- # batch and "sent" is its index in the original, unreduced batch.
- seen = (sent << 32) + unfin_idx
- unique_seen: List[int] = torch.unique(seen).tolist()
-
- if self.match_source_len:
- condition = step > torch.index_select(src_lengths, 0, unfin_idx)
- eos_scores = torch.where(condition, torch.tensor(-math.inf), eos_scores)
- sent_list: List[int] = sent.tolist()
- for i in range(bbsz_idx.size()[0]):
- # An input sentence (among those in a batch) is finished when
- # beam_size hypotheses have been collected for it
- if len(finalized[sent_list[i]]) < beam_size:
- if attn_clone is not None:
- # remove padding tokens from attn scores
- hypo_attn = attn_clone[i]
- else:
- hypo_attn = torch.empty(0)
-
- finalized[sent_list[i]].append(
- {
- "tokens": tokens_clone[i],
- "score": eos_scores[i],
- "attention": hypo_attn, # src_len x tgt_len
- "alignment": torch.empty(0),
- "positional_scores": pos_scores[i],
- }
- )
-
- newly_finished: List[int] = []
- for unique_s in unique_seen:
- # check termination conditions for this sentence
- unique_sent: int = unique_s >> 32
- unique_unfin_idx: int = unique_s - (unique_sent << 32)
-
- if not finished[unique_sent] and self.is_finished(
- step, unique_unfin_idx, max_len, len(finalized[unique_sent]), beam_size
- ):
- finished[unique_sent] = True
- newly_finished.append(unique_unfin_idx)
-
- return newly_finished
-
- def is_finished(
- self,
- step: int,
- unfin_idx: int,
- max_len: int,
- finalized_sent_len: int,
- beam_size: int,
- ):
- """
- Check whether decoding for a sentence is finished, which
- occurs when the list of finalized sentences has reached the
- beam size, or when we reach the maximum length.
- """
- assert finalized_sent_len <= beam_size
- if finalized_sent_len == beam_size or step == max_len:
- return True
- return False
-
-
-class EnsembleModel(nn.Module):
- """A wrapper around an ensemble of models."""
-
- def __init__(self, models):
- super().__init__()
- self.models_size = len(models)
- # method '__len__' is not supported in ModuleList for torch script
- self.single_model = models[0]
- self.models = nn.ModuleList(models)
-
- self.has_incremental: bool = False
- if all(
- hasattr(m, "decoder") and isinstance(m.decoder, FairseqIncrementalDecoder)
- for m in models
- ):
- self.has_incremental = True
-
- def forward(self):
- pass
-
- def has_encoder(self):
- return hasattr(self.single_model, "encoder")
-
- def has_incremental_states(self):
- return self.has_incremental
-
- def max_decoder_positions(self):
- return min([m.max_decoder_positions() for m in self.models if hasattr(m, "max_decoder_positions")] + [sys.maxsize])
-
- @torch.jit.export
- def forward_encoder(self, net_input: Dict[str, Tensor]):
- if not self.has_encoder():
- return None
- return [model.encoder.forward_torchscript(net_input) for model in self.models]
-
- @torch.jit.export
- def forward_decoder(
- self,
- tokens,
- encoder_outs: List[Dict[str, List[Tensor]]],
- incremental_states: List[Dict[str, Dict[str, Optional[Tensor]]]],
- temperature: float = 1.0,
- constraint_trie=None,
- constraint_start=None,
- constraint_end=None,
- gen_code=False,
- zero_shot=False,
- prefix_tokens=None
- ):
- log_probs = []
- avg_attn: Optional[Tensor] = None
- encoder_out: Optional[Dict[str, List[Tensor]]] = None
- code_mask = (tokens.new_ones(tokens.size(0))*gen_code).bool()
- for i, model in enumerate(self.models):
- if self.has_encoder():
- encoder_out = encoder_outs[i]
- # decode each model
- if self.has_incremental_states():
- decoder_out = model.decoder.forward(
- tokens,
- code_masks=code_mask,
- encoder_out=encoder_out,
- incremental_state=incremental_states[i],
- )
- else:
- if hasattr(model, "decoder"):
- decoder_out = model.decoder.forward(tokens, code_masks=code_mask, encoder_out=encoder_out)
- else:
- decoder_out = model.forward(tokens)
-
- attn: Optional[Tensor] = None
- decoder_len = len(decoder_out)
- if decoder_len > 1 and decoder_out[1] is not None:
- if isinstance(decoder_out[1], Tensor):
- attn = decoder_out[1]
- else:
- attn_holder = decoder_out[1]["attn"]
- if isinstance(attn_holder, Tensor):
- attn = attn_holder
- elif attn_holder is not None:
- attn = attn_holder[0]
- if attn is not None:
- attn = attn[:, -1, :]
-
- decoder_out_tuple = (
- decoder_out[0][:, -1:, :].div_(temperature),
- None if decoder_len <= 1 else decoder_out[1],
- )
-
- beam_size = decoder_out_tuple[0].size(0) // prefix_tokens.size(0) if prefix_tokens is not None else 0
- if constraint_trie is not None and not zero_shot:
- assert constraint_start is None and constraint_end is None
- constraint_masks = decoder_out_tuple[0].new_zeros(decoder_out_tuple[0].size()).bool()
- constraint_prefix_tokens = tokens.tolist()
- for token_index, constraint_prefix_token in enumerate(constraint_prefix_tokens):
- prefix_len = prefix_tokens[token_index // beam_size].ne(1).sum().item() if prefix_tokens is not None else 0
- if len(constraint_prefix_token) > prefix_len:
- constraint_prefix_token = [0] + constraint_prefix_token[prefix_len+1:]
- constraint_nodes = constraint_trie.get_next_layer(constraint_prefix_token)
- constraint_masks[token_index][:, constraint_nodes] = True
- else:
- constraint_masks[token_index] = True
- decoder_out_tuple[0].masked_fill_(~constraint_masks, -math.inf)
- if constraint_start is not None and constraint_end is not None and not zero_shot:
- assert constraint_trie is None
- decoder_out_tuple[0][:, :, 4:constraint_start] = -math.inf
- decoder_out_tuple[0][:, :, constraint_end:] = -math.inf
-
- probs = model.get_normalized_probs(
- decoder_out_tuple, log_probs=True, sample=None
- )
- if constraint_trie is not None and zero_shot:
- assert constraint_start is None and constraint_end is None
- constraint_masks = decoder_out_tuple[0].new_zeros(decoder_out_tuple[0].size()).bool()
- constraint_prefix_tokens = tokens.tolist()
- for token_index, constraint_prefix_token in enumerate(constraint_prefix_tokens):
- constraint_nodes = constraint_trie.get_next_layer(constraint_prefix_token)
- constraint_masks[token_index][:, constraint_nodes] = True
- probs.masked_fill_(~constraint_masks, -math.inf)
- if constraint_start is not None and constraint_end is not None and zero_shot:
- assert constraint_trie is None
- probs[:, :, 4:constraint_start] = -math.inf
- probs[:, :, constraint_end:] = -math.inf
- probs = probs[:, -1, :]
- if self.models_size == 1:
- return probs, attn
-
- log_probs.append(probs)
- if attn is not None:
- if avg_attn is None:
- avg_attn = attn
- else:
- avg_attn.add_(attn)
-
- avg_probs = torch.logsumexp(torch.stack(log_probs, dim=0), dim=0) - math.log(
- self.models_size
- )
-
- if avg_attn is not None:
- avg_attn.div_(self.models_size)
- return avg_probs, avg_attn
-
- @torch.jit.export
- def reorder_encoder_out(
- self, encoder_outs: Optional[List[Dict[str, List[Tensor]]]], new_order
- ):
- """
- Reorder encoder output according to *new_order*.
-
- Args:
- encoder_out: output from the ``forward()`` method
- new_order (LongTensor): desired order
-
- Returns:
- *encoder_out* rearranged according to *new_order*
- """
- new_outs: List[Dict[str, List[Tensor]]] = []
- if not self.has_encoder():
- return new_outs
- for i, model in enumerate(self.models):
- assert encoder_outs is not None
- new_outs.append(
- model.encoder.reorder_encoder_out(encoder_outs[i], new_order)
- )
- return new_outs
-
- @torch.jit.export
- def reorder_incremental_state(
- self,
- incremental_states: List[Dict[str, Dict[str, Optional[Tensor]]]],
- new_order,
- ):
- if not self.has_incremental_states():
- return
- for i, model in enumerate(self.models):
- model.decoder.reorder_incremental_state_scripting(
- incremental_states[i], new_order
- )
-
-
-class SequenceGeneratorWithAlignment(SequenceGenerator):
- def __init__(
- self, models, tgt_dict, left_pad_target=False, print_alignment="hard", **kwargs
- ):
- """Generates translations of a given source sentence.
-
- Produces alignments following "Jointly Learning to Align and
- Translate with Transformer Models" (Garg et al., EMNLP 2019).
-
- Args:
- left_pad_target (bool, optional): whether the hypotheses
- should be left-padded when they are teacher-forced for
- generating alignments.
- """
- super().__init__(EnsembleModelWithAlignment(models), tgt_dict, **kwargs)
- self.left_pad_target = left_pad_target
-
- if print_alignment == "hard":
- self.extract_alignment = utils.extract_hard_alignment
- elif print_alignment == "soft":
- self.extract_alignment = utils.extract_soft_alignment
-
- @torch.no_grad()
- def generate(self, models, sample, **kwargs):
- finalized = super()._generate(sample, **kwargs)
-
- src_tokens = sample["net_input"]["src_tokens"]
- bsz = src_tokens.shape[0]
- beam_size = self.beam_size
- (
- src_tokens,
- src_lengths,
- prev_output_tokens,
- tgt_tokens,
- ) = self._prepare_batch_for_alignment(sample, finalized)
- if any(getattr(m, "full_context_alignment", False) for m in self.model.models):
- attn = self.model.forward_align(src_tokens, src_lengths, prev_output_tokens)
- else:
- attn = [
- finalized[i // beam_size][i % beam_size]["attention"].transpose(1, 0)
- for i in range(bsz * beam_size)
- ]
-
- if src_tokens.device != "cpu":
- src_tokens = src_tokens.to("cpu")
- tgt_tokens = tgt_tokens.to("cpu")
- attn = [i.to("cpu") for i in attn]
-
- # Process the attn matrix to extract hard alignments.
- for i in range(bsz * beam_size):
- alignment = self.extract_alignment(
- attn[i], src_tokens[i], tgt_tokens[i], self.pad, self.eos
- )
- finalized[i // beam_size][i % beam_size]["alignment"] = alignment
- return finalized
-
- def _prepare_batch_for_alignment(self, sample, hypothesis):
- src_tokens = sample["net_input"]["src_tokens"]
- bsz = src_tokens.shape[0]
- src_tokens = (
- src_tokens[:, None, :]
- .expand(-1, self.beam_size, -1)
- .contiguous()
- .view(bsz * self.beam_size, -1)
- )
- src_lengths = sample["net_input"]["src_lengths"]
- src_lengths = (
- src_lengths[:, None]
- .expand(-1, self.beam_size)
- .contiguous()
- .view(bsz * self.beam_size)
- )
- prev_output_tokens = data_utils.collate_tokens(
- [beam["tokens"] for example in hypothesis for beam in example],
- self.pad,
- self.eos,
- self.left_pad_target,
- move_eos_to_beginning=True,
- )
- tgt_tokens = data_utils.collate_tokens(
- [beam["tokens"] for example in hypothesis for beam in example],
- self.pad,
- self.eos,
- self.left_pad_target,
- move_eos_to_beginning=False,
- )
- return src_tokens, src_lengths, prev_output_tokens, tgt_tokens
-
-
-class EnsembleModelWithAlignment(EnsembleModel):
- """A wrapper around an ensemble of models."""
-
- def __init__(self, models):
- super().__init__(models)
-
- def forward_align(self, src_tokens, src_lengths, prev_output_tokens):
- avg_attn = None
- for model in self.models:
- decoder_out = model(src_tokens, src_lengths, prev_output_tokens)
- attn = decoder_out[1]["attn"][0]
- if avg_attn is None:
- avg_attn = attn
- else:
- avg_attn.add_(attn)
- if len(self.models) > 1:
- avg_attn.div_(len(self.models))
- return avg_attn
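
Two scoring details of the generator above in compact form: the length-penalty normalization applied when a hypothesis is finalized, and the conversion from cumulative to per-position scores in finalize_hypos(). The numbers are made up for illustration.

import torch

len_penalty = 1.0
step = 3                                              # hypothesis ends after 4 tokens
cum_scores = torch.tensor([-0.5, -1.2, -2.0, -2.6])   # cumulative log-probs per position

# Sentence-level score, normalized by length as in finalize_hypos():
eos_score = cum_scores[-1] / ((step + 1) ** len_penalty)   # -2.6 / 4 = -0.65

# Per-position scores: difference of consecutive cumulative scores.
pos_scores = cum_scores.clone()
pos_scores[1:] = pos_scores[1:] - pos_scores[:-1]          # ~[-0.5, -0.7, -0.8, -0.6]
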
diff --git a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/criss/sentence_retrieval/encoder_analysis.py b/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/criss/sentence_retrieval/encoder_analysis.py
deleted file mode 100644
index b41bfbe38789ba14e6a5ea938c75d761424c00ab..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/criss/sentence_retrieval/encoder_analysis.py
+++ /dev/null
@@ -1,92 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-import argparse
-import glob
-
-import numpy as np
-
-
-DIM = 1024
-
-
-def compute_dist(source_embs, target_embs, k=5, return_sim_mat=False):
- target_ids = [tid for tid in target_embs]
- source_mat = np.stack(source_embs.values(), axis=0)
- normalized_source_mat = source_mat / np.linalg.norm(
- source_mat, axis=1, keepdims=True
- )
- target_mat = np.stack(target_embs.values(), axis=0)
- normalized_target_mat = target_mat / np.linalg.norm(
- target_mat, axis=1, keepdims=True
- )
- sim_mat = normalized_source_mat.dot(normalized_target_mat.T)
- if return_sim_mat:
- return sim_mat
- neighbors_map = {}
- for i, sentence_id in enumerate(source_embs):
- idx = np.argsort(sim_mat[i, :])[::-1][:k]
- neighbors_map[sentence_id] = [target_ids[tid] for tid in idx]
- return neighbors_map
-
-
-def load_embeddings(directory, LANGS):
- sentence_embeddings = {}
- sentence_texts = {}
- for lang in LANGS:
- sentence_embeddings[lang] = {}
- sentence_texts[lang] = {}
- lang_dir = f"{directory}/{lang}"
- embedding_files = glob.glob(f"{lang_dir}/all_avg_pool.{lang}.*")
- for embed_file in embedding_files:
- shard_id = embed_file.split(".")[-1]
- embeddings = np.fromfile(embed_file, dtype=np.float32)
- num_rows = embeddings.shape[0] // DIM
- embeddings = embeddings.reshape((num_rows, DIM))
-
- with open(f"{lang_dir}/sentences.{lang}.{shard_id}") as sentence_file:
- for idx, line in enumerate(sentence_file):
- sentence_id, sentence = line.strip().split("\t")
- sentence_texts[lang][sentence_id] = sentence
- sentence_embeddings[lang][sentence_id] = embeddings[idx, :]
-
- return sentence_embeddings, sentence_texts
-
-
-def compute_accuracy(directory, LANGS):
- sentence_embeddings, sentence_texts = load_embeddings(directory, LANGS)
-
- top_1_accuracy = {}
-
- top1_str = " ".join(LANGS) + "\n"
- for source_lang in LANGS:
- top_1_accuracy[source_lang] = {}
- top1_str += f"{source_lang} "
- for target_lang in LANGS:
- top1 = 0
- top5 = 0
- neighbors_map = compute_dist(
- sentence_embeddings[source_lang], sentence_embeddings[target_lang]
- )
- for sentence_id, neighbors in neighbors_map.items():
- if sentence_id == neighbors[0]:
- top1 += 1
- if sentence_id in neighbors[:5]:
- top5 += 1
- n = len(sentence_embeddings[target_lang])
- top1_str += f"{top1/n} "
- top1_str += "\n"
-
- print(top1_str)
- print(top1_str, file=open(f"{directory}/accuracy", "w"))
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser(description="Analyze encoder outputs")
- parser.add_argument("directory", help="Source language corpus")
- parser.add_argument("--langs", help="List of langs")
- args = parser.parse_args()
- langs = args.langs.split(",")
- compute_accuracy(args.directory, langs)
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/modules/transformer_layer.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/modules/transformer_layer.py
deleted file mode 100644
index 7ab53c6e5f12f15562717effb86ab8cb8d6b4fa3..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/modules/transformer_layer.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from fairseq.model_parallel.modules import ModelParallelMultiheadAttention
-from fairseq.modules import TransformerDecoderLayer, TransformerEncoderLayer
-
-
-try:
- from fairseq.model_parallel.megatron.mpu import (
- ColumnParallelLinear,
- RowParallelLinear,
- )
-
- has_megatron_submodule = True
-except (ImportError, ModuleNotFoundError):
- has_megatron_submodule = False
-
-
-class ModelParallelTransformerEncoderLayer(TransformerEncoderLayer):
- """Encoder layer block over multiple gpus.
-
- See "Megatron-LM: https://arxiv.org/pdf/1909.08053.pdf" for more details.
- """
-
- def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size):
- if q_noise > 0:
- raise NotImplementedError
- return ColumnParallelLinear(input_dim, output_dim, gather_output=False)
-
- def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size):
- if q_noise > 0:
- raise NotImplementedError
- return RowParallelLinear(input_dim, output_dim, input_is_parallel=True)
-
- def build_self_attention(self, embed_dim, args, **unused_kwargs):
- return ModelParallelMultiheadAttention(
- embed_dim,
- args.encoder_attention_heads,
- dropout=args.attention_dropout,
- self_attention=True,
- )
-
-
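-# Note on the tensor-parallel layout used by the encoder layer above and the decoder
-# layer below: fc1 is a ColumnParallelLinear with gather_output=False and fc2 is a
-# RowParallelLinear with input_is_parallel=True, so each rank applies the activation
-# to its local shard of the hidden features and the shards are combined only by a
-# single all-reduce on the fc2 output; schematically (hypothetical pseudo-formula,
-# W1 split by columns, W2 by rows): ffn(x) = all_reduce(act(x @ W1_shard) @ W2_shard).
-
-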
-class ModelParallelTransformerDecoderLayer(TransformerDecoderLayer):
- """Decoder layer block.
-
- See "Megatron-LM: https://arxiv.org/pdf/1909.08053.pdf" for more details.
- """
-
- def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size):
- if q_noise > 0:
- raise NotImplementedError
- return ColumnParallelLinear(input_dim, output_dim, gather_output=False)
-
- def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size):
- if q_noise > 0:
- raise NotImplementedError
- return RowParallelLinear(input_dim, output_dim, input_is_parallel=True)
-
- def build_self_attention(self, embed_dim, args, **unused_kwargs):
- return ModelParallelMultiheadAttention(
- embed_dim=embed_dim,
- num_heads=args.decoder_attention_heads,
- dropout=args.attention_dropout,
- self_attention=not getattr(args, "cross_self_attention", False),
- )
-
- def build_encoder_attention(self, embed_dim, args, **unused_kwargs):
- return ModelParallelMultiheadAttention(
- embed_dim=embed_dim,
- num_heads=args.decoder_attention_heads,
- kdim=getattr(args, "encoder_embed_dim", None),
- vdim=getattr(args, "encoder_embed_dim", None),
- dropout=args.attention_dropout,
- encoder_decoder_attention=True,
- )
diff --git a/spaces/OlaWod/FreeVC/speaker_encoder/params_data.py b/spaces/OlaWod/FreeVC/speaker_encoder/params_data.py
deleted file mode 100644
index bdb1716ed45617f2b127a7fb8885afe6cc74fb71..0000000000000000000000000000000000000000
--- a/spaces/OlaWod/FreeVC/speaker_encoder/params_data.py
+++ /dev/null
@@ -1,29 +0,0 @@
-
-## Mel-filterbank
-mel_window_length = 25 # In milliseconds
-mel_window_step = 10 # In milliseconds
-mel_n_channels = 40
-
-
-## Audio
-sampling_rate = 16000
-# Number of spectrogram frames in a partial utterance
-partials_n_frames = 160 # 1600 ms
-# Number of spectrogram frames at inference
-inference_n_frames = 80 # 800 ms
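-# (The millisecond figures above follow from the hop size: one frame is produced
-# every mel_window_step = 10 ms, so 160 frames span roughly 1600 ms and 80 frames
-# roughly 800 ms.)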
-
-
-## Voice Activity Detection
-# Window size of the VAD. Must be either 10, 20 or 30 milliseconds.
-# This sets the granularity of the VAD. Should not need to be changed.
-vad_window_length = 30 # In milliseconds
-# Number of frames to average together when performing the moving average smoothing.
-# The larger this value, the larger the VAD variations must be to not get smoothed out.
-vad_moving_average_width = 8
-# Maximum number of consecutive silent frames a segment can have.
-vad_max_silence_length = 6
-
-
-## Audio volume normalization
-audio_norm_target_dBFS = -30
-
diff --git a/spaces/Omnibus/2-button-Story-Board/README.md b/spaces/Omnibus/2-button-Story-Board/README.md
deleted file mode 100644
index 84bd079a6c839b3fe58cfc4e45efb054e8b6dca1..0000000000000000000000000000000000000000
--- a/spaces/Omnibus/2-button-Story-Board/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: 2 Button Story Book
-emoji: 🌖
-colorFrom: yellow
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.16.1
-app_file: app.py
-duplicated_from: Omnibus/2-button-Story-Book
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/utils/testing.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/utils/testing.py
deleted file mode 100644
index 161fa6b80845ecabb6f71f28aa3333c3178c8756..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/utils/testing.py
+++ /dev/null
@@ -1,137 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import io
-import numpy as np
-import torch
-
-from detectron2 import model_zoo
-from detectron2.data import DatasetCatalog
-from detectron2.data.detection_utils import read_image
-from detectron2.modeling import build_model
-from detectron2.structures import Boxes, Instances, ROIMasks
-from detectron2.utils.file_io import PathManager
-
-
-"""
-Internal utilities for tests. Don't use except for writing tests.
-"""
-
-
-def get_model_no_weights(config_path):
- """
- Like model_zoo.get, but do not load any weights (even pretrained)
- """
- cfg = model_zoo.get_config(config_path)
- if not torch.cuda.is_available():
- cfg.MODEL.DEVICE = "cpu"
- return build_model(cfg)
-
-
-def random_boxes(num_boxes, max_coord=100, device="cpu"):
- """
- Create a random Nx4 boxes tensor, with coordinates < max_coord.
- """
- boxes = torch.rand(num_boxes, 4, device=device) * (max_coord * 0.5)
- boxes.clamp_(min=1.0) # tiny boxes cause numerical instability in box regression
- # Note: the implementation of this function in torchvision is:
- # boxes[:, 2:] += torch.rand(N, 2) * 100
- # but it does not guarantee non-negative widths/heights constraints:
- # boxes[:, 2] >= boxes[:, 0] and boxes[:, 3] >= boxes[:, 1]:
- boxes[:, 2:] += boxes[:, :2]
- return boxes
-
-
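-def _random_boxes_validity_sketch(num_boxes=16):
-    """Hypothetical check, for illustration only: because the last two columns
-    returned by :func:`random_boxes` are positive offsets added to the first two,
-    every generated box satisfies x2 >= x1 and y2 >= y1 (unlike the torchvision
-    implementation referenced in the comments above)."""
-    boxes = random_boxes(num_boxes)
-    assert (boxes[:, 2] >= boxes[:, 0]).all()
-    assert (boxes[:, 3] >= boxes[:, 1]).all()
-
-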
-def get_sample_coco_image(tensor=True):
- """
- Args:
- tensor (bool): if True, returns 3xHxW tensor.
- else, returns a HxWx3 numpy array.
-
- Returns:
- an image, in BGR color.
- """
- try:
- file_name = DatasetCatalog.get("coco_2017_val_100")[0]["file_name"]
- if not PathManager.exists(file_name):
- raise FileNotFoundError()
- except IOError:
- # for public CI to run
- file_name = PathManager.get_local_path(
- "http://images.cocodataset.org/train2017/000000000009.jpg"
- )
- ret = read_image(file_name, format="BGR")
- if tensor:
- ret = torch.from_numpy(np.ascontiguousarray(ret.transpose(2, 0, 1)))
- return ret
-
-
-def convert_scripted_instances(instances):
- """
- Convert a scripted Instances object to a regular :class:`Instances` object
- """
- assert hasattr(
- instances, "image_size"
- ), f"Expect an Instances object, but got {type(instances)}!"
- ret = Instances(instances.image_size)
- for name in instances._field_names:
- val = getattr(instances, "_" + name, None)
- if val is not None:
- ret.set(name, val)
- return ret
-
-
-def assert_instances_allclose(input, other, *, rtol=1e-5, msg="", size_as_tensor=False):
- """
- Args:
- input, other (Instances):
- size_as_tensor: compare image_size of the Instances as tensors (instead of tuples).
- Useful for comparing outputs of tracing.
- """
- if not isinstance(input, Instances):
- input = convert_scripted_instances(input)
- if not isinstance(other, Instances):
- other = convert_scripted_instances(other)
-
- if not msg:
- msg = "Two Instances are different! "
- else:
- msg = msg.rstrip() + " "
-
- size_error_msg = msg + f"image_size is {input.image_size} vs. {other.image_size}!"
- if size_as_tensor:
- assert torch.equal(
- torch.tensor(input.image_size), torch.tensor(other.image_size)
- ), size_error_msg
- else:
- assert input.image_size == other.image_size, size_error_msg
- fields = sorted(input.get_fields().keys())
- fields_other = sorted(other.get_fields().keys())
- assert fields == fields_other, msg + f"Fields are {fields} vs {fields_other}!"
-
- for f in fields:
- val1, val2 = input.get(f), other.get(f)
- if isinstance(val1, (Boxes, ROIMasks)):
-            # boxes are in the range of O(100), so they can have a larger tolerance
- assert torch.allclose(val1.tensor, val2.tensor, atol=100 * rtol), (
- msg + f"Field {f} differs too much!"
- )
- elif isinstance(val1, torch.Tensor):
- if val1.dtype.is_floating_point:
- mag = torch.abs(val1).max().cpu().item()
- assert torch.allclose(val1, val2, atol=mag * rtol), (
- msg + f"Field {f} differs too much!"
- )
- else:
- assert torch.equal(val1, val2), msg + f"Field {f} is different!"
- else:
- raise ValueError(f"Don't know how to compare type {type(val1)}")
-
-
-def reload_script_model(module):
- """
- Save a jit module and load it back.
- Similar to the `getExportImportCopy` function in torch/testing/
- """
- buffer = io.BytesIO()
- torch.jit.save(module, buffer)
- buffer.seek(0)
- return torch.jit.load(buffer)
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/point_sample.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/point_sample.py
deleted file mode 100644
index 267f4b3c56630acd85f9bdc630b7be09abab0aba..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/point_sample.py
+++ /dev/null
@@ -1,336 +0,0 @@
-# Modified from https://github.com/facebookresearch/detectron2/tree/master/projects/PointRend # noqa
-
-from os import path as osp
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.nn.modules.utils import _pair
-from torch.onnx.operators import shape_as_tensor
-
-
-def bilinear_grid_sample(im, grid, align_corners=False):
- """Given an input and a flow-field grid, computes the output using input
-    values and pixel locations from grid. Only the bilinear interpolation
-    method is supported for sampling the input pixels.
-
- Args:
- im (torch.Tensor): Input feature map, shape (N, C, H, W)
- grid (torch.Tensor): Point coordinates, shape (N, Hg, Wg, 2)
- align_corners {bool}: If set to True, the extrema (-1 and 1) are
- considered as referring to the center points of the input’s
- corner pixels. If set to False, they are instead considered as
- referring to the corner points of the input’s corner pixels,
- making the sampling more resolution agnostic.
- Returns:
- torch.Tensor: A tensor with sampled points, shape (N, C, Hg, Wg)
- """
- n, c, h, w = im.shape
- gn, gh, gw, _ = grid.shape
- assert n == gn
-
- x = grid[:, :, :, 0]
- y = grid[:, :, :, 1]
-
- if align_corners:
- x = ((x + 1) / 2) * (w - 1)
- y = ((y + 1) / 2) * (h - 1)
- else:
- x = ((x + 1) * w - 1) / 2
- y = ((y + 1) * h - 1) / 2
-
- x = x.view(n, -1)
- y = y.view(n, -1)
-
- x0 = torch.floor(x).long()
- y0 = torch.floor(y).long()
- x1 = x0 + 1
- y1 = y0 + 1
-
- wa = ((x1 - x) * (y1 - y)).unsqueeze(1)
- wb = ((x1 - x) * (y - y0)).unsqueeze(1)
- wc = ((x - x0) * (y1 - y)).unsqueeze(1)
- wd = ((x - x0) * (y - y0)).unsqueeze(1)
-
- # Apply default for grid_sample function zero padding
- im_padded = F.pad(im, pad=[1, 1, 1, 1], mode='constant', value=0)
- padded_h = h + 2
- padded_w = w + 2
- # save points positions after padding
- x0, x1, y0, y1 = x0 + 1, x1 + 1, y0 + 1, y1 + 1
-
- # Clip coordinates to padded image size
- x0 = torch.where(x0 < 0, torch.tensor(0), x0)
- x0 = torch.where(x0 > padded_w - 1, torch.tensor(padded_w - 1), x0)
- x1 = torch.where(x1 < 0, torch.tensor(0), x1)
- x1 = torch.where(x1 > padded_w - 1, torch.tensor(padded_w - 1), x1)
- y0 = torch.where(y0 < 0, torch.tensor(0), y0)
- y0 = torch.where(y0 > padded_h - 1, torch.tensor(padded_h - 1), y0)
- y1 = torch.where(y1 < 0, torch.tensor(0), y1)
- y1 = torch.where(y1 > padded_h - 1, torch.tensor(padded_h - 1), y1)
-
- im_padded = im_padded.view(n, c, -1)
-
- x0_y0 = (x0 + y0 * padded_w).unsqueeze(1).expand(-1, c, -1)
- x0_y1 = (x0 + y1 * padded_w).unsqueeze(1).expand(-1, c, -1)
- x1_y0 = (x1 + y0 * padded_w).unsqueeze(1).expand(-1, c, -1)
- x1_y1 = (x1 + y1 * padded_w).unsqueeze(1).expand(-1, c, -1)
-
- Ia = torch.gather(im_padded, 2, x0_y0)
- Ib = torch.gather(im_padded, 2, x0_y1)
- Ic = torch.gather(im_padded, 2, x1_y0)
- Id = torch.gather(im_padded, 2, x1_y1)
-
- return (Ia * wa + Ib * wb + Ic * wc + Id * wd).reshape(n, c, gh, gw)
-
-
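-def _bilinear_grid_sample_sketch():
-    """Hypothetical consistency check, for illustration only: the pure-PyTorch
-    sampler above is meant to reproduce ``F.grid_sample`` with bilinear
-    interpolation and zero padding, which is why it can stand in for it when
-    exporting to ONNX without custom ops."""
-    im = torch.rand(1, 3, 8, 8)
-    grid = torch.rand(1, 4, 4, 2) * 2 - 1  # normalized sampling locations in [-1, 1]
-    out = bilinear_grid_sample(im, grid, align_corners=False)
-    ref = F.grid_sample(im, grid, mode='bilinear', padding_mode='zeros',
-                        align_corners=False)
-    assert torch.allclose(out, ref, atol=1e-5)
-
-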
-def is_in_onnx_export_without_custom_ops():
- from annotator.uniformer.mmcv.ops import get_onnxruntime_op_path
- ort_custom_op_path = get_onnxruntime_op_path()
- return torch.onnx.is_in_onnx_export(
- ) and not osp.exists(ort_custom_op_path)
-
-
-def normalize(grid):
- """Normalize input grid from [-1, 1] to [0, 1]
- Args:
- grid (Tensor): The grid to be normalize, range [-1, 1].
- Returns:
- Tensor: Normalized grid, range [0, 1].
- """
-
- return (grid + 1.0) / 2.0
-
-
-def denormalize(grid):
- """Denormalize input grid from range [0, 1] to [-1, 1]
- Args:
- grid (Tensor): The grid to be denormalize, range [0, 1].
- Returns:
- Tensor: Denormalized grid, range [-1, 1].
- """
-
- return grid * 2.0 - 1.0
-
-
-def generate_grid(num_grid, size, device):
- """Generate regular square grid of points in [0, 1] x [0, 1] coordinate
- space.
-
- Args:
- num_grid (int): The number of grids to sample, one for each region.
- size (tuple(int, int)): The side size of the regular grid.
- device (torch.device): Desired device of returned tensor.
-
- Returns:
- (torch.Tensor): A tensor of shape (num_grid, size[0]*size[1], 2) that
- contains coordinates for the regular grids.
- """
-
- affine_trans = torch.tensor([[[1., 0., 0.], [0., 1., 0.]]], device=device)
- grid = F.affine_grid(
- affine_trans, torch.Size((1, 1, *size)), align_corners=False)
- grid = normalize(grid)
- return grid.view(1, -1, 2).expand(num_grid, -1, -1)
-
-
-def rel_roi_point_to_abs_img_point(rois, rel_roi_points):
- """Convert roi based relative point coordinates to image based absolute
- point coordinates.
-
- Args:
- rois (Tensor): RoIs or BBoxes, shape (N, 4) or (N, 5)
- rel_roi_points (Tensor): Point coordinates inside RoI, relative to
- RoI, location, range (0, 1), shape (N, P, 2)
- Returns:
- Tensor: Image based absolute point coordinates, shape (N, P, 2)
- """
-
- with torch.no_grad():
- assert rel_roi_points.size(0) == rois.size(0)
- assert rois.dim() == 2
- assert rel_roi_points.dim() == 3
- assert rel_roi_points.size(2) == 2
- # remove batch idx
- if rois.size(1) == 5:
- rois = rois[:, 1:]
- abs_img_points = rel_roi_points.clone()
- # To avoid an error during exporting to onnx use independent
- # variables instead inplace computation
- xs = abs_img_points[:, :, 0] * (rois[:, None, 2] - rois[:, None, 0])
- ys = abs_img_points[:, :, 1] * (rois[:, None, 3] - rois[:, None, 1])
- xs += rois[:, None, 0]
- ys += rois[:, None, 1]
- abs_img_points = torch.stack([xs, ys], dim=2)
- return abs_img_points
-
-
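-# Worked example for the conversion above (illustration only): for a RoI
-# [x1, y1, x2, y2] = [10, 20, 30, 60] and a relative point (0.5, 0.25), the absolute
-# image coordinate is (10 + 0.5 * (30 - 10), 20 + 0.25 * (60 - 20)) = (20, 30).
-
-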
-def get_shape_from_feature_map(x):
- """Get spatial resolution of input feature map considering exporting to
- onnx mode.
-
- Args:
- x (torch.Tensor): Input tensor, shape (N, C, H, W)
- Returns:
- torch.Tensor: Spatial resolution (width, height), shape (1, 1, 2)
- """
- if torch.onnx.is_in_onnx_export():
- img_shape = shape_as_tensor(x)[2:].flip(0).view(1, 1, 2).to(
- x.device).float()
- else:
- img_shape = torch.tensor(x.shape[2:]).flip(0).view(1, 1, 2).to(
- x.device).float()
- return img_shape
-
-
-def abs_img_point_to_rel_img_point(abs_img_points, img, spatial_scale=1.):
- """Convert image based absolute point coordinates to image based relative
- coordinates for sampling.
-
- Args:
- abs_img_points (Tensor): Image based absolute point coordinates,
- shape (N, P, 2)
- img (tuple/Tensor): (height, width) of image or feature map.
- spatial_scale (float): Scale points by this factor. Default: 1.
-
- Returns:
- Tensor: Image based relative point coordinates for sampling,
- shape (N, P, 2)
- """
-
- assert (isinstance(img, tuple) and len(img) == 2) or \
- (isinstance(img, torch.Tensor) and len(img.shape) == 4)
-
- if isinstance(img, tuple):
- h, w = img
- scale = torch.tensor([w, h],
- dtype=torch.float,
- device=abs_img_points.device)
- scale = scale.view(1, 1, 2)
- else:
- scale = get_shape_from_feature_map(img)
-
- return abs_img_points / scale * spatial_scale
-
-
-def rel_roi_point_to_rel_img_point(rois,
- rel_roi_points,
- img,
- spatial_scale=1.):
-    """Convert roi based relative point coordinates to image based relative
-    point coordinates for sampling.
-
- Args:
- rois (Tensor): RoIs or BBoxes, shape (N, 4) or (N, 5)
- rel_roi_points (Tensor): Point coordinates inside RoI, relative to
- RoI, location, range (0, 1), shape (N, P, 2)
- img (tuple/Tensor): (height, width) of image or feature map.
- spatial_scale (float): Scale points by this factor. Default: 1.
-
- Returns:
- Tensor: Image based relative point coordinates for sampling,
- shape (N, P, 2)
- """
-
- abs_img_point = rel_roi_point_to_abs_img_point(rois, rel_roi_points)
- rel_img_point = abs_img_point_to_rel_img_point(abs_img_point, img,
- spatial_scale)
-
- return rel_img_point
-
-
-def point_sample(input, points, align_corners=False, **kwargs):
- """A wrapper around :func:`grid_sample` to support 3D point_coords tensors
- Unlike :func:`torch.nn.functional.grid_sample` it assumes point_coords to
- lie inside ``[0, 1] x [0, 1]`` square.
-
- Args:
- input (Tensor): Feature map, shape (N, C, H, W).
- points (Tensor): Image based absolute point coordinates (normalized),
- range [0, 1] x [0, 1], shape (N, P, 2) or (N, Hgrid, Wgrid, 2).
- align_corners (bool): Whether align_corners. Default: False
-
- Returns:
- Tensor: Features of `point` on `input`, shape (N, C, P) or
- (N, C, Hgrid, Wgrid).
- """
-
- add_dim = False
- if points.dim() == 3:
- add_dim = True
- points = points.unsqueeze(2)
- if is_in_onnx_export_without_custom_ops():
- # If custom ops for onnx runtime not compiled use python
- # implementation of grid_sample function to make onnx graph
- # with supported nodes
- output = bilinear_grid_sample(
- input, denormalize(points), align_corners=align_corners)
- else:
- output = F.grid_sample(
- input, denormalize(points), align_corners=align_corners, **kwargs)
- if add_dim:
- output = output.squeeze(3)
- return output
-
-
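-def _point_sample_sketch():
-    """Hypothetical usage, for illustration only: sample per-point features from a
-    feature map at normalized [0, 1] x [0, 1] coordinates."""
-    feats = torch.rand(2, 16, 32, 32)   # (N, C, H, W)
-    points = torch.rand(2, 100, 2)      # (N, P, 2), values in [0, 1]
-    out = point_sample(feats, points)   # -> (N, C, P)
-    assert out.shape == (2, 16, 100)
-
-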
-class SimpleRoIAlign(nn.Module):
-
- def __init__(self, output_size, spatial_scale, aligned=True):
- """Simple RoI align in PointRend, faster than standard RoIAlign.
-
- Args:
- output_size (tuple[int]): h, w
- spatial_scale (float): scale the input boxes by this number
- aligned (bool): if False, use the legacy implementation in
- MMDetection, align_corners=True will be used in F.grid_sample.
-                If True, the results are aligned more precisely.
- """
-
- super(SimpleRoIAlign, self).__init__()
- self.output_size = _pair(output_size)
- self.spatial_scale = float(spatial_scale)
- # to be consistent with other RoI ops
- self.use_torchvision = False
- self.aligned = aligned
-
- def forward(self, features, rois):
- num_imgs = features.size(0)
- num_rois = rois.size(0)
- rel_roi_points = generate_grid(
- num_rois, self.output_size, device=rois.device)
-
- if torch.onnx.is_in_onnx_export():
- rel_img_points = rel_roi_point_to_rel_img_point(
- rois, rel_roi_points, features, self.spatial_scale)
- rel_img_points = rel_img_points.reshape(num_imgs, -1,
- *rel_img_points.shape[1:])
- point_feats = point_sample(
- features, rel_img_points, align_corners=not self.aligned)
- point_feats = point_feats.transpose(1, 2)
- else:
- point_feats = []
- for batch_ind in range(num_imgs):
- # unravel batch dim
- feat = features[batch_ind].unsqueeze(0)
- inds = (rois[:, 0].long() == batch_ind)
- if inds.any():
- rel_img_points = rel_roi_point_to_rel_img_point(
- rois[inds], rel_roi_points[inds], feat,
- self.spatial_scale).unsqueeze(0)
- point_feat = point_sample(
- feat, rel_img_points, align_corners=not self.aligned)
- point_feat = point_feat.squeeze(0).transpose(0, 1)
- point_feats.append(point_feat)
-
- point_feats = torch.cat(point_feats, dim=0)
-
- channels = features.size(1)
- roi_feats = point_feats.reshape(num_rois, channels, *self.output_size)
-
- return roi_feats
-
- def __repr__(self):
- format_str = self.__class__.__name__
- format_str += '(output_size={}, spatial_scale={}'.format(
- self.output_size, self.spatial_scale)
- return format_str
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/debug.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/debug.go
deleted file mode 100644
index 144b588c51b385b9900c9041da4ba20d76c4d8e9..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/debug.go and /dev/null differ
diff --git a/spaces/Pengyey/bingo-chuchu/src/app/page.tsx b/spaces/Pengyey/bingo-chuchu/src/app/page.tsx
deleted file mode 100644
index 0dff3431b098ce4fe282cc83fc87a93a28a43090..0000000000000000000000000000000000000000
--- a/spaces/Pengyey/bingo-chuchu/src/app/page.tsx
+++ /dev/null
@@ -1,15 +0,0 @@
-import dynamic from 'next/dynamic'
-
-const DynamicComponentWithNoSSR = dynamic(
- () => import('../components/chat'),
- { ssr: false }
-)
-
-export default function IndexPage() {
- return (
- <>
-      <DynamicComponentWithNoSSR />
-
-    </>
- )
-}
diff --git a/spaces/Pie31415/control-animation/text_to_animation/model_flax.py b/spaces/Pie31415/control-animation/text_to_animation/model_flax.py
deleted file mode 100644
index 8b50766a24994557a065157883679fa0aa63f382..0000000000000000000000000000000000000000
--- a/spaces/Pie31415/control-animation/text_to_animation/model_flax.py
+++ /dev/null
@@ -1,191 +0,0 @@
-import torch
-from enum import Enum
-import gc
-import numpy as np
-import jax.numpy as jnp
-import jax
-
-from PIL import Image
-from typing import List
-
-from flax.training.common_utils import shard
-from flax.jax_utils import replicate
-from flax import jax_utils
-import einops
-
-from transformers import CLIPTokenizer, CLIPFeatureExtractor, FlaxCLIPTextModel
-from diffusers import (
- FlaxDDIMScheduler,
- FlaxAutoencoderKL,
- FlaxUNet2DConditionModel as VanillaFlaxUNet2DConditionModel,
-)
-from text_to_animation.models.unet_2d_condition_flax import FlaxUNet2DConditionModel
-from diffusers import FlaxControlNetModel
-
-from text_to_animation.pipelines.text_to_video_pipeline_flax import (
- FlaxTextToVideoPipeline,
-)
-
-import utils.utils as utils
-import utils.gradio_utils as gradio_utils
-import os
-
-on_huggingspace = os.environ.get("SPACE_AUTHOR_NAME") == "PAIR"
-
-unshard = lambda x: einops.rearrange(x, "d b ... -> (d b) ...")
-
-
-class ModelType(Enum):
- Text2Video = 1
- ControlNetPose = 2
- StableDiffusion = 3
-
-
-def replicate_devices(array):
- return jnp.expand_dims(array, 0).repeat(jax.device_count(), 0)
-
-
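-# Hypothetical illustration of the helper above: replicate_devices prepends a device
-# axis, so an array of shape (77,) becomes (jax.device_count(), 77); this is the
-# per-device layout expected by the replicated pipeline calls further down.
-
-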
-class ControlAnimationModel:
- def __init__(self, dtype, **kwargs):
- self.dtype = dtype
- self.rng = jax.random.PRNGKey(0)
- self.pipe = None
- self.model_type = None
-
- self.states = {}
- self.model_name = ""
-
- def set_model(
- self,
- model_id: str,
- **kwargs,
- ):
- if hasattr(self, "pipe") and self.pipe is not None:
- del self.pipe
- self.pipe = None
- gc.collect()
-
- controlnet, controlnet_params = FlaxControlNetModel.from_pretrained(
- "fusing/stable-diffusion-v1-5-controlnet-openpose",
- from_pt=True,
- dtype=jnp.float16,
- )
-
- scheduler, scheduler_state = FlaxDDIMScheduler.from_pretrained(
- model_id, subfolder="scheduler", from_pt=True
- )
- tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
- feature_extractor = CLIPFeatureExtractor.from_pretrained(
- model_id, subfolder="feature_extractor"
- )
- unet, unet_params = FlaxUNet2DConditionModel.from_pretrained(
- model_id, subfolder="unet", from_pt=True, dtype=self.dtype
- )
- unet_vanilla = VanillaFlaxUNet2DConditionModel.from_config(
- model_id, subfolder="unet", from_pt=True, dtype=self.dtype
- )
- vae, vae_params = FlaxAutoencoderKL.from_pretrained(
- model_id, subfolder="vae", from_pt=True, dtype=self.dtype
- )
- text_encoder = FlaxCLIPTextModel.from_pretrained(
- model_id, subfolder="text_encoder", from_pt=True, dtype=self.dtype
- )
- self.pipe = FlaxTextToVideoPipeline(
- vae=vae,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- unet_vanilla=unet_vanilla,
- controlnet=controlnet,
- scheduler=scheduler,
- safety_checker=None,
- feature_extractor=feature_extractor,
- )
- self.params = {
- "unet": unet_params,
- "vae": vae_params,
- "scheduler": scheduler_state,
- "controlnet": controlnet_params,
- "text_encoder": text_encoder.params,
- }
- self.p_params = jax_utils.replicate(self.params)
- self.model_name = model_id
-
- def generate_initial_frames(
- self,
- prompt: str,
- video_path: str,
- n_prompt: str = "",
- num_imgs: int = 4,
- resolution: int = 512,
- model_id: str = "runwayml/stable-diffusion-v1-5",
- ) -> List[Image.Image]:
- self.set_model(model_id=model_id)
-
- video_path = gradio_utils.motion_to_video_path(video_path)
-
- added_prompt = "high quality, best quality, HD, clay stop-motion, claymation, HQ, masterpiece, art, smooth"
- prompts = added_prompt + ", " + prompt
-
-        added_n_prompt = "longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, deformed body, bloated, ugly"
- negative_prompts = added_n_prompt + ", " + n_prompt
-
- video, fps = utils.prepare_video(
- video_path, resolution, None, self.dtype, False, output_fps=4
- )
- control = utils.pre_process_pose(video, apply_pose_detect=False)
-
- seeds = [seed for seed in jax.random.randint(self.rng, [num_imgs], 0, 65536)]
- prngs = [jax.random.PRNGKey(seed) for seed in seeds]
- print(seeds)
- images = self.pipe.generate_starting_frames(
- params=self.p_params,
- prngs=prngs,
- controlnet_image=control,
- prompt=prompts,
- neg_prompt=negative_prompts,
- )
-
- images = [np.array(images[i]) for i in range(images.shape[0])]
-
- return images
-
- def generate_video_from_frame(self, controlnet_video, prompt, seed, neg_prompt=""):
- # generate a video using the seed provided
- prng_seed = jax.random.PRNGKey(seed)
- len_vid = controlnet_video.shape[0]
- # print(f"Generating video from prompt {' style '+ prompt}, with {controlnet_video.shape[0]} frames and prng seed {seed}")
- added_prompt = "high quality, best quality, HD, clay stop-motion, claymation, HQ, masterpiece, art, smooth"
- prompts = added_prompt + ", " + prompt
-
-        added_n_prompt = "longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, deformed body, bloated, ugly"
- negative_prompts = added_n_prompt + ", " + neg_prompt
-
- # prompt_ids = self.pipe.prepare_text_inputs(["aardman style "+ prompt]*len_vid)
- # n_prompt_ids = self.pipe.prepare_text_inputs([neg_prompt]*len_vid)
-
- prompt_ids = self.pipe.prepare_text_inputs([prompts] * len_vid)
- n_prompt_ids = self.pipe.prepare_text_inputs([negative_prompts] * len_vid)
- prng = replicate_devices(
- prng_seed
- ) # jax.random.split(prng, jax.device_count())
- image = replicate_devices(controlnet_video)
- prompt_ids = replicate_devices(prompt_ids)
- n_prompt_ids = replicate_devices(n_prompt_ids)
- motion_field_strength_x = replicate_devices(jnp.array(3))
- motion_field_strength_y = replicate_devices(jnp.array(4))
- smooth_bg_strength = replicate_devices(jnp.array(0.8))
- vid = (
- self.pipe(
- image=image,
- prompt_ids=prompt_ids,
- neg_prompt_ids=n_prompt_ids,
- params=self.p_params,
- prng_seed=prng,
- jit=True,
- smooth_bg_strength=smooth_bg_strength,
- motion_field_strength_x=motion_field_strength_x,
- motion_field_strength_y=motion_field_strength_y,
- ).images
- )[0]
- return utils.create_gif(np.array(vid), 4, path=None, watermark=None)
diff --git a/spaces/Plachta/VALL-E-X/models/__init__.py b/spaces/Plachta/VALL-E-X/models/__init__.py
deleted file mode 100644
index 3964a73a02c98de656da931b2c3f6121dbad7a28..0000000000000000000000000000000000000000
--- a/spaces/Plachta/VALL-E-X/models/__init__.py
+++ /dev/null
@@ -1,126 +0,0 @@
-import argparse
-
-import torch.nn as nn
-# from icefall.utils import AttributeDict, str2bool
-
-from .macros import (
- NUM_AUDIO_TOKENS,
- NUM_MEL_BINS,
- NUM_SPEAKER_CLASSES,
- NUM_TEXT_TOKENS,
- SPEAKER_EMBEDDING_DIM,
-)
-from .vallex import VALLE, VALLF
-
-
-def add_model_arguments(parser: argparse.ArgumentParser):
- parser.add_argument(
- "--model-name",
- type=str,
- default="VALL-E",
- help="VALL-E, VALL-F, Transformer.",
- )
- parser.add_argument(
- "--decoder-dim",
- type=int,
- default=1024,
- help="Embedding dimension in the decoder model.",
- )
- parser.add_argument(
- "--nhead",
- type=int,
- default=16,
- help="Number of attention heads in the Decoder layers.",
- )
- parser.add_argument(
- "--num-decoder-layers",
- type=int,
- default=12,
- help="Number of Decoder layers.",
- )
- parser.add_argument(
- "--scale-factor",
- type=float,
- default=1.0,
- help="Model scale factor which will be assigned different meanings in different models.",
- )
- parser.add_argument(
- "--norm-first",
- type=bool,
- default=True,
- help="Pre or Post Normalization.",
- )
- parser.add_argument(
- "--add-prenet",
- type=bool,
- default=False,
- help="Whether add PreNet after Inputs.",
- )
-
- # VALL-E & F
- parser.add_argument(
- "--prefix-mode",
- type=int,
- default=1,
- help="The mode for how to prefix VALL-E NAR Decoder, "
- "0: no prefix, 1: 0 to random, 2: random to random, 4: chunk of pre or post utterance.",
- )
- parser.add_argument(
- "--share-embedding",
- type=bool,
- default=True,
- help="Share the parameters of the output projection layer with the parameters of the acoustic embedding.",
- )
- parser.add_argument(
- "--prepend-bos",
- type=bool,
- default=False,
-        help="Whether prepend <BOS> to the acoustic tokens -> AR Decoder inputs.",
- )
- parser.add_argument(
- "--num-quantizers",
- type=int,
- default=8,
- help="Number of Audio/Semantic quantization layers.",
- )
-
- # Transformer
- parser.add_argument(
- "--scaling-xformers",
- type=bool,
- default=False,
- help="Apply Reworked Conformer scaling on Transformers.",
- )
-
-
-def get_model(params) -> nn.Module:
- if params.model_name.lower() in ["vall-f", "vallf"]:
- model = VALLF(
- params.decoder_dim,
- params.nhead,
- params.num_decoder_layers,
- norm_first=params.norm_first,
- add_prenet=params.add_prenet,
- prefix_mode=params.prefix_mode,
- share_embedding=params.share_embedding,
- nar_scale_factor=params.scale_factor,
- prepend_bos=params.prepend_bos,
- num_quantizers=params.num_quantizers,
- )
- elif params.model_name.lower() in ["vall-e", "valle"]:
- model = VALLE(
- params.decoder_dim,
- params.nhead,
- params.num_decoder_layers,
- norm_first=params.norm_first,
- add_prenet=params.add_prenet,
- prefix_mode=params.prefix_mode,
- share_embedding=params.share_embedding,
- nar_scale_factor=params.scale_factor,
- prepend_bos=params.prepend_bos,
- num_quantizers=params.num_quantizers,
- )
- else:
- raise ValueError("No such model")
-
- return model
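-
-
-def _example_build_model():
-    """Hypothetical usage sketch, for illustration only: ``get_model`` expects a
-    namespace carrying the fields registered by ``add_model_arguments``."""
-    parser = argparse.ArgumentParser()
-    add_model_arguments(parser)
-    params = parser.parse_args(["--model-name", "VALL-E"])
-    return get_model(params)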
diff --git a/spaces/Qiukai/gpt/crazy_functions/test_project/cpp/longcode/jpgd.cpp b/spaces/Qiukai/gpt/crazy_functions/test_project/cpp/longcode/jpgd.cpp
deleted file mode 100644
index 36d06c8e9068570c3e7624895d474f33dbfe3d29..0000000000000000000000000000000000000000
--- a/spaces/Qiukai/gpt/crazy_functions/test_project/cpp/longcode/jpgd.cpp
+++ /dev/null
@@ -1,3276 +0,0 @@
-// jpgd.cpp - C++ class for JPEG decompression.
-// Public domain, Rich Geldreich
-// Last updated Apr. 16, 2011
-// Alex Evans: Linear memory allocator (taken from jpge.h).
-//
-// Supports progressive and baseline sequential JPEG image files, and the most common chroma subsampling factors: Y, H1V1, H2V1, H1V2, and H2V2.
-//
-// Chroma upsampling quality: H2V2 is upsampled in the frequency domain, H2V1 and H1V2 are upsampled using point sampling.
-// Chroma upsampling reference: "Fast Scheme for Image Size Change in the Compressed Domain"
-// http://vision.ai.uiuc.edu/~dugad/research/dct/index.html
-
-#include "jpgd.h"
-#include <string.h>
-
-#include <assert.h>
-// BEGIN EPIC MOD
-#define JPGD_ASSERT(x) { assert(x); CA_ASSUME(x); } (void)0
-// END EPIC MOD
-
-#ifdef _MSC_VER
-#pragma warning (disable : 4611) // warning C4611: interaction between '_setjmp' and C++ object destruction is non-portable
-#endif
-
-// Set to 1 to enable freq. domain chroma upsampling on images using H2V2 subsampling (0=faster nearest neighbor sampling).
-// This is slower, but results in higher quality on images with highly saturated colors.
-#define JPGD_SUPPORT_FREQ_DOMAIN_UPSAMPLING 1
-
-#define JPGD_TRUE (1)
-#define JPGD_FALSE (0)
-
-#define JPGD_MAX(a,b) (((a)>(b)) ? (a) : (b))
-#define JPGD_MIN(a,b) (((a)<(b)) ? (a) : (b))
-
-namespace jpgd {
-
- static inline void *jpgd_malloc(size_t nSize) { return FMemory::Malloc(nSize); }
- static inline void jpgd_free(void *p) { FMemory::Free(p); }
-
-// BEGIN EPIC MOD
-//@UE3 - use UE3 BGRA encoding instead of assuming RGBA
- // stolen from IImageWrapper.h
- enum ERGBFormatJPG
- {
- Invalid = -1,
- RGBA = 0,
- BGRA = 1,
- Gray = 2,
- };
- static ERGBFormatJPG jpg_format;
-// END EPIC MOD
-
- // DCT coefficients are stored in this sequence.
- static int g_ZAG[64] = { 0,1,8,16,9,2,3,10,17,24,32,25,18,11,4,5,12,19,26,33,40,48,41,34,27,20,13,6,7,14,21,28,35,42,49,56,57,50,43,36,29,22,15,23,30,37,44,51,58,59,52,45,38,31,39,46,53,60,61,54,47,55,62,63 };
-
- enum JPEG_MARKER
- {
- M_SOF0 = 0xC0, M_SOF1 = 0xC1, M_SOF2 = 0xC2, M_SOF3 = 0xC3, M_SOF5 = 0xC5, M_SOF6 = 0xC6, M_SOF7 = 0xC7, M_JPG = 0xC8,
- M_SOF9 = 0xC9, M_SOF10 = 0xCA, M_SOF11 = 0xCB, M_SOF13 = 0xCD, M_SOF14 = 0xCE, M_SOF15 = 0xCF, M_DHT = 0xC4, M_DAC = 0xCC,
- M_RST0 = 0xD0, M_RST1 = 0xD1, M_RST2 = 0xD2, M_RST3 = 0xD3, M_RST4 = 0xD4, M_RST5 = 0xD5, M_RST6 = 0xD6, M_RST7 = 0xD7,
- M_SOI = 0xD8, M_EOI = 0xD9, M_SOS = 0xDA, M_DQT = 0xDB, M_DNL = 0xDC, M_DRI = 0xDD, M_DHP = 0xDE, M_EXP = 0xDF,
- M_APP0 = 0xE0, M_APP15 = 0xEF, M_JPG0 = 0xF0, M_JPG13 = 0xFD, M_COM = 0xFE, M_TEM = 0x01, M_ERROR = 0x100, RST0 = 0xD0
- };
-
- enum JPEG_SUBSAMPLING { JPGD_GRAYSCALE = 0, JPGD_YH1V1, JPGD_YH2V1, JPGD_YH1V2, JPGD_YH2V2 };
-
-#define CONST_BITS 13
-#define PASS1_BITS 2
-#define SCALEDONE ((int32)1)
-
-#define FIX_0_298631336 ((int32)2446) /* FIX(0.298631336) */
-#define FIX_0_390180644 ((int32)3196) /* FIX(0.390180644) */
-#define FIX_0_541196100 ((int32)4433) /* FIX(0.541196100) */
-#define FIX_0_765366865 ((int32)6270) /* FIX(0.765366865) */
-#define FIX_0_899976223 ((int32)7373) /* FIX(0.899976223) */
-#define FIX_1_175875602 ((int32)9633) /* FIX(1.175875602) */
-#define FIX_1_501321110 ((int32)12299) /* FIX(1.501321110) */
-#define FIX_1_847759065 ((int32)15137) /* FIX(1.847759065) */
-#define FIX_1_961570560 ((int32)16069) /* FIX(1.961570560) */
-#define FIX_2_053119869 ((int32)16819) /* FIX(2.053119869) */
-#define FIX_2_562915447 ((int32)20995) /* FIX(2.562915447) */
-#define FIX_3_072711026 ((int32)25172) /* FIX(3.072711026) */
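-// Note: FIX(x) above means round(x * (1 << CONST_BITS)) = round(x * 8192); for
-// example 1.175875602 * 8192 ~= 9632.8, stored as 9633. DESCALE() below undoes the
-// scaling with rounding by adding half of 2^n before the right shift.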
-
-#define DESCALE(x,n) (((x) + (SCALEDONE << ((n)-1))) >> (n))
-#define DESCALE_ZEROSHIFT(x,n) (((x) + (128 << (n)) + (SCALEDONE << ((n)-1))) >> (n))
-
-#define MULTIPLY(var, cnst) ((var) * (cnst))
-
-#define CLAMP(i) ((static_cast<uint>(i) > 255) ? (((~i) >> 31) & 0xFF) : (i))
-
- // Compiler creates a fast path 1D IDCT for X non-zero columns
-  template <int NONZERO_COLS>
- struct Row
- {
- static void idct(int* pTemp, const jpgd_block_t* pSrc)
- {
- // ACCESS_COL() will be optimized at compile time to either an array access, or 0.
-#define ACCESS_COL(x) (((x) < NONZERO_COLS) ? (int)pSrc[x] : 0)
-
- const int z2 = ACCESS_COL(2), z3 = ACCESS_COL(6);
-
- const int z1 = MULTIPLY(z2 + z3, FIX_0_541196100);
- const int tmp2 = z1 + MULTIPLY(z3, - FIX_1_847759065);
- const int tmp3 = z1 + MULTIPLY(z2, FIX_0_765366865);
-
- const int tmp0 = (ACCESS_COL(0) + ACCESS_COL(4)) << CONST_BITS;
- const int tmp1 = (ACCESS_COL(0) - ACCESS_COL(4)) << CONST_BITS;
-
- const int tmp10 = tmp0 + tmp3, tmp13 = tmp0 - tmp3, tmp11 = tmp1 + tmp2, tmp12 = tmp1 - tmp2;
-
- const int atmp0 = ACCESS_COL(7), atmp1 = ACCESS_COL(5), atmp2 = ACCESS_COL(3), atmp3 = ACCESS_COL(1);
-
- const int bz1 = atmp0 + atmp3, bz2 = atmp1 + atmp2, bz3 = atmp0 + atmp2, bz4 = atmp1 + atmp3;
- const int bz5 = MULTIPLY(bz3 + bz4, FIX_1_175875602);
-
- const int az1 = MULTIPLY(bz1, - FIX_0_899976223);
- const int az2 = MULTIPLY(bz2, - FIX_2_562915447);
- const int az3 = MULTIPLY(bz3, - FIX_1_961570560) + bz5;
- const int az4 = MULTIPLY(bz4, - FIX_0_390180644) + bz5;
-
- const int btmp0 = MULTIPLY(atmp0, FIX_0_298631336) + az1 + az3;
- const int btmp1 = MULTIPLY(atmp1, FIX_2_053119869) + az2 + az4;
- const int btmp2 = MULTIPLY(atmp2, FIX_3_072711026) + az2 + az3;
- const int btmp3 = MULTIPLY(atmp3, FIX_1_501321110) + az1 + az4;
-
- pTemp[0] = DESCALE(tmp10 + btmp3, CONST_BITS-PASS1_BITS);
- pTemp[7] = DESCALE(tmp10 - btmp3, CONST_BITS-PASS1_BITS);
- pTemp[1] = DESCALE(tmp11 + btmp2, CONST_BITS-PASS1_BITS);
- pTemp[6] = DESCALE(tmp11 - btmp2, CONST_BITS-PASS1_BITS);
- pTemp[2] = DESCALE(tmp12 + btmp1, CONST_BITS-PASS1_BITS);
- pTemp[5] = DESCALE(tmp12 - btmp1, CONST_BITS-PASS1_BITS);
- pTemp[3] = DESCALE(tmp13 + btmp0, CONST_BITS-PASS1_BITS);
- pTemp[4] = DESCALE(tmp13 - btmp0, CONST_BITS-PASS1_BITS);
- }
- };
-
- template <>
- struct Row<0>
- {
- static void idct(int* pTemp, const jpgd_block_t* pSrc)
- {
-#ifdef _MSC_VER
- pTemp; pSrc;
-#endif
- }
- };
-
- template <>
- struct Row<1>
- {
- static void idct(int* pTemp, const jpgd_block_t* pSrc)
- {
- const int dcval = (pSrc[0] << PASS1_BITS);
-
- pTemp[0] = dcval;
- pTemp[1] = dcval;
- pTemp[2] = dcval;
- pTemp[3] = dcval;
- pTemp[4] = dcval;
- pTemp[5] = dcval;
- pTemp[6] = dcval;
- pTemp[7] = dcval;
- }
- };
-
- // Compiler creates a fast path 1D IDCT for X non-zero rows
-  template <int NONZERO_ROWS>
- struct Col
- {
- static void idct(uint8* pDst_ptr, const int* pTemp)
- {
- // ACCESS_ROW() will be optimized at compile time to either an array access, or 0.
-#define ACCESS_ROW(x) (((x) < NONZERO_ROWS) ? pTemp[x * 8] : 0)
-
- const int z2 = ACCESS_ROW(2);
- const int z3 = ACCESS_ROW(6);
-
- const int z1 = MULTIPLY(z2 + z3, FIX_0_541196100);
- const int tmp2 = z1 + MULTIPLY(z3, - FIX_1_847759065);
- const int tmp3 = z1 + MULTIPLY(z2, FIX_0_765366865);
-
- const int tmp0 = (ACCESS_ROW(0) + ACCESS_ROW(4)) << CONST_BITS;
- const int tmp1 = (ACCESS_ROW(0) - ACCESS_ROW(4)) << CONST_BITS;
-
- const int tmp10 = tmp0 + tmp3, tmp13 = tmp0 - tmp3, tmp11 = tmp1 + tmp2, tmp12 = tmp1 - tmp2;
-
- const int atmp0 = ACCESS_ROW(7), atmp1 = ACCESS_ROW(5), atmp2 = ACCESS_ROW(3), atmp3 = ACCESS_ROW(1);
-
- const int bz1 = atmp0 + atmp3, bz2 = atmp1 + atmp2, bz3 = atmp0 + atmp2, bz4 = atmp1 + atmp3;
- const int bz5 = MULTIPLY(bz3 + bz4, FIX_1_175875602);
-
- const int az1 = MULTIPLY(bz1, - FIX_0_899976223);
- const int az2 = MULTIPLY(bz2, - FIX_2_562915447);
- const int az3 = MULTIPLY(bz3, - FIX_1_961570560) + bz5;
- const int az4 = MULTIPLY(bz4, - FIX_0_390180644) + bz5;
-
- const int btmp0 = MULTIPLY(atmp0, FIX_0_298631336) + az1 + az3;
- const int btmp1 = MULTIPLY(atmp1, FIX_2_053119869) + az2 + az4;
- const int btmp2 = MULTIPLY(atmp2, FIX_3_072711026) + az2 + az3;
- const int btmp3 = MULTIPLY(atmp3, FIX_1_501321110) + az1 + az4;
-
- int i = DESCALE_ZEROSHIFT(tmp10 + btmp3, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*0] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp10 - btmp3, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*7] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp11 + btmp2, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*1] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp11 - btmp2, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*6] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp12 + btmp1, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*2] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp12 - btmp1, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*5] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp13 + btmp0, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*3] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp13 - btmp0, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*4] = (uint8)CLAMP(i);
- }
- };
-
- template <>
- struct Col<1>
- {
- static void idct(uint8* pDst_ptr, const int* pTemp)
- {
- int dcval = DESCALE_ZEROSHIFT(pTemp[0], PASS1_BITS+3);
- const uint8 dcval_clamped = (uint8)CLAMP(dcval);
- pDst_ptr[0*8] = dcval_clamped;
- pDst_ptr[1*8] = dcval_clamped;
- pDst_ptr[2*8] = dcval_clamped;
- pDst_ptr[3*8] = dcval_clamped;
- pDst_ptr[4*8] = dcval_clamped;
- pDst_ptr[5*8] = dcval_clamped;
- pDst_ptr[6*8] = dcval_clamped;
- pDst_ptr[7*8] = dcval_clamped;
- }
- };
-
- static const uint8 s_idct_row_table[] =
- {
- 1,0,0,0,0,0,0,0, 2,0,0,0,0,0,0,0, 2,1,0,0,0,0,0,0, 2,1,1,0,0,0,0,0, 2,2,1,0,0,0,0,0, 3,2,1,0,0,0,0,0, 4,2,1,0,0,0,0,0, 4,3,1,0,0,0,0,0,
- 4,3,2,0,0,0,0,0, 4,3,2,1,0,0,0,0, 4,3,2,1,1,0,0,0, 4,3,2,2,1,0,0,0, 4,3,3,2,1,0,0,0, 4,4,3,2,1,0,0,0, 5,4,3,2,1,0,0,0, 6,4,3,2,1,0,0,0,
- 6,5,3,2,1,0,0,0, 6,5,4,2,1,0,0,0, 6,5,4,3,1,0,0,0, 6,5,4,3,2,0,0,0, 6,5,4,3,2,1,0,0, 6,5,4,3,2,1,1,0, 6,5,4,3,2,2,1,0, 6,5,4,3,3,2,1,0,
- 6,5,4,4,3,2,1,0, 6,5,5,4,3,2,1,0, 6,6,5,4,3,2,1,0, 7,6,5,4,3,2,1,0, 8,6,5,4,3,2,1,0, 8,7,5,4,3,2,1,0, 8,7,6,4,3,2,1,0, 8,7,6,5,3,2,1,0,
- 8,7,6,5,4,2,1,0, 8,7,6,5,4,3,1,0, 8,7,6,5,4,3,2,0, 8,7,6,5,4,3,2,1, 8,7,6,5,4,3,2,2, 8,7,6,5,4,3,3,2, 8,7,6,5,4,4,3,2, 8,7,6,5,5,4,3,2,
- 8,7,6,6,5,4,3,2, 8,7,7,6,5,4,3,2, 8,8,7,6,5,4,3,2, 8,8,8,6,5,4,3,2, 8,8,8,7,5,4,3,2, 8,8,8,7,6,4,3,2, 8,8,8,7,6,5,3,2, 8,8,8,7,6,5,4,2,
- 8,8,8,7,6,5,4,3, 8,8,8,7,6,5,4,4, 8,8,8,7,6,5,5,4, 8,8,8,7,6,6,5,4, 8,8,8,7,7,6,5,4, 8,8,8,8,7,6,5,4, 8,8,8,8,8,6,5,4, 8,8,8,8,8,7,5,4,
- 8,8,8,8,8,7,6,4, 8,8,8,8,8,7,6,5, 8,8,8,8,8,7,6,6, 8,8,8,8,8,7,7,6, 8,8,8,8,8,8,7,6, 8,8,8,8,8,8,8,6, 8,8,8,8,8,8,8,7, 8,8,8,8,8,8,8,8,
- };
-
- static const uint8 s_idct_col_table[] = { 1, 1, 2, 3, 3, 3, 3, 3, 3, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8 };
-
- void idct(const jpgd_block_t* pSrc_ptr, uint8* pDst_ptr, int block_max_zag)
- {
- JPGD_ASSERT(block_max_zag >= 1);
- JPGD_ASSERT(block_max_zag <= 64);
-
- if (block_max_zag == 1)
- {
- int k = ((pSrc_ptr[0] + 4) >> 3) + 128;
- k = CLAMP(k);
- k = k | (k<<8);
- k = k | (k<<16);
-
- for (int i = 8; i > 0; i--)
- {
- *(int*)&pDst_ptr[0] = k;
- *(int*)&pDst_ptr[4] = k;
- pDst_ptr += 8;
- }
- return;
- }
-
- int temp[64];
-
- const jpgd_block_t* pSrc = pSrc_ptr;
- int* pTemp = temp;
-
- const uint8* pRow_tab = &s_idct_row_table[(block_max_zag - 1) * 8];
- int i;
- for (i = 8; i > 0; i--, pRow_tab++)
- {
- switch (*pRow_tab)
- {
- case 0: Row<0>::idct(pTemp, pSrc); break;
- case 1: Row<1>::idct(pTemp, pSrc); break;
- case 2: Row<2>::idct(pTemp, pSrc); break;
- case 3: Row<3>::idct(pTemp, pSrc); break;
- case 4: Row<4>::idct(pTemp, pSrc); break;
- case 5: Row<5>::idct(pTemp, pSrc); break;
- case 6: Row<6>::idct(pTemp, pSrc); break;
- case 7: Row<7>::idct(pTemp, pSrc); break;
- case 8: Row<8>::idct(pTemp, pSrc); break;
- }
-
- pSrc += 8;
- pTemp += 8;
- }
-
- pTemp = temp;
-
- const int nonzero_rows = s_idct_col_table[block_max_zag - 1];
- for (i = 8; i > 0; i--)
- {
- switch (nonzero_rows)
- {
- case 1: Col<1>::idct(pDst_ptr, pTemp); break;
- case 2: Col<2>::idct(pDst_ptr, pTemp); break;
- case 3: Col<3>::idct(pDst_ptr, pTemp); break;
- case 4: Col<4>::idct(pDst_ptr, pTemp); break;
- case 5: Col<5>::idct(pDst_ptr, pTemp); break;
- case 6: Col<6>::idct(pDst_ptr, pTemp); break;
- case 7: Col<7>::idct(pDst_ptr, pTemp); break;
- case 8: Col<8>::idct(pDst_ptr, pTemp); break;
- }
-
- pTemp++;
- pDst_ptr++;
- }
- }
-
- void idct_4x4(const jpgd_block_t* pSrc_ptr, uint8* pDst_ptr)
- {
- int temp[64];
- int* pTemp = temp;
- const jpgd_block_t* pSrc = pSrc_ptr;
-
- for (int i = 4; i > 0; i--)
- {
- Row<4>::idct(pTemp, pSrc);
- pSrc += 8;
- pTemp += 8;
- }
-
- pTemp = temp;
- for (int i = 8; i > 0; i--)
- {
- Col<4>::idct(pDst_ptr, pTemp);
- pTemp++;
- pDst_ptr++;
- }
- }
-
- // Retrieve one character from the input stream.
- inline uint jpeg_decoder::get_char()
- {
- // Any bytes remaining in buffer?
- if (!m_in_buf_left)
- {
- // Try to get more bytes.
- prep_in_buffer();
- // Still nothing to get?
- if (!m_in_buf_left)
- {
- // Pad the end of the stream with 0xFF 0xD9 (EOI marker)
- int t = m_tem_flag;
- m_tem_flag ^= 1;
- if (t)
- return 0xD9;
- else
- return 0xFF;
- }
- }
-
- uint c = *m_pIn_buf_ofs++;
- m_in_buf_left--;
-
- return c;
- }
-
- // Same as previous method, except can indicate if the character is a pad character or not.
- inline uint jpeg_decoder::get_char(bool *pPadding_flag)
- {
- if (!m_in_buf_left)
- {
- prep_in_buffer();
- if (!m_in_buf_left)
- {
- *pPadding_flag = true;
- int t = m_tem_flag;
- m_tem_flag ^= 1;
- if (t)
- return 0xD9;
- else
- return 0xFF;
- }
- }
-
- *pPadding_flag = false;
-
- uint c = *m_pIn_buf_ofs++;
- m_in_buf_left--;
-
- return c;
- }
-
- // Inserts a previously retrieved character back into the input buffer.
- inline void jpeg_decoder::stuff_char(uint8 q)
- {
- *(--m_pIn_buf_ofs) = q;
- m_in_buf_left++;
- }
-
- // Retrieves one character from the input stream, but does not read past markers. Will continue to return 0xFF when a marker is encountered.
- inline uint8 jpeg_decoder::get_octet()
- {
- bool padding_flag;
- int c = get_char(&padding_flag);
-
- if (c == 0xFF)
- {
- if (padding_flag)
- return 0xFF;
-
- c = get_char(&padding_flag);
- if (padding_flag)
- {
- stuff_char(0xFF);
- return 0xFF;
- }
-
- if (c == 0x00)
- return 0xFF;
- else
- {
-        stuff_char(static_cast<uint8>(c));
- stuff_char(0xFF);
- return 0xFF;
- }
- }
-
-    return static_cast<uint8>(c);
- }
-
- // Retrieves a variable number of bits from the input stream. Does not recognize markers.
- inline uint jpeg_decoder::get_bits(int num_bits)
- {
- if (!num_bits)
- return 0;
-
- uint i = m_bit_buf >> (32 - num_bits);
-
- if ((m_bits_left -= num_bits) <= 0)
- {
- m_bit_buf <<= (num_bits += m_bits_left);
-
- uint c1 = get_char();
- uint c2 = get_char();
- m_bit_buf = (m_bit_buf & 0xFFFF0000) | (c1 << 8) | c2;
-
- m_bit_buf <<= -m_bits_left;
-
- m_bits_left += 16;
-
- JPGD_ASSERT(m_bits_left >= 0);
- }
- else
- m_bit_buf <<= num_bits;
-
- return i;
- }
-
- // Retrieves a variable number of bits from the input stream. Markers will not be read into the input bit buffer. Instead, an infinite number of all 1's will be returned when a marker is encountered.
- inline uint jpeg_decoder::get_bits_no_markers(int num_bits)
- {
- if (!num_bits)
- return 0;
-
- uint i = m_bit_buf >> (32 - num_bits);
-
- if ((m_bits_left -= num_bits) <= 0)
- {
- m_bit_buf <<= (num_bits += m_bits_left);
-
- if ((m_in_buf_left < 2) || (m_pIn_buf_ofs[0] == 0xFF) || (m_pIn_buf_ofs[1] == 0xFF))
- {
- uint c1 = get_octet();
- uint c2 = get_octet();
- m_bit_buf |= (c1 << 8) | c2;
- }
- else
- {
- m_bit_buf |= ((uint)m_pIn_buf_ofs[0] << 8) | m_pIn_buf_ofs[1];
- m_in_buf_left -= 2;
- m_pIn_buf_ofs += 2;
- }
-
- m_bit_buf <<= -m_bits_left;
-
- m_bits_left += 16;
-
- JPGD_ASSERT(m_bits_left >= 0);
- }
- else
- m_bit_buf <<= num_bits;
-
- return i;
- }
-
- // Decodes a Huffman encoded symbol.
- inline int jpeg_decoder::huff_decode(huff_tables *pH)
- {
- int symbol;
-
- // Check first 8-bits: do we have a complete symbol?
- if ((symbol = pH->look_up[m_bit_buf >> 24]) < 0)
- {
- // Decode more bits, use a tree traversal to find symbol.
- int ofs = 23;
- do
- {
- symbol = pH->tree[-(int)(symbol + ((m_bit_buf >> ofs) & 1))];
- ofs--;
- } while (symbol < 0);
-
- get_bits_no_markers(8 + (23 - ofs));
- }
- else
- get_bits_no_markers(pH->code_size[symbol]);
-
- return symbol;
- }
-
- // Decodes a Huffman encoded symbol.
- inline int jpeg_decoder::huff_decode(huff_tables *pH, int& extra_bits)
- {
- int symbol;
-
- // Check first 8-bits: do we have a complete symbol?
- if ((symbol = pH->look_up2[m_bit_buf >> 24]) < 0)
- {
- // Use a tree traversal to find symbol.
- int ofs = 23;
- do
- {
- symbol = pH->tree[-(int)(symbol + ((m_bit_buf >> ofs) & 1))];
- ofs--;
- } while (symbol < 0);
-
- get_bits_no_markers(8 + (23 - ofs));
-
- extra_bits = get_bits_no_markers(symbol & 0xF);
- }
- else
- {
- JPGD_ASSERT(((symbol >> 8) & 31) == pH->code_size[symbol & 255] + ((symbol & 0x8000) ? (symbol & 15) : 0));
-
- if (symbol & 0x8000)
- {
- get_bits_no_markers((symbol >> 8) & 31);
- extra_bits = symbol >> 16;
- }
- else
- {
- int code_size = (symbol >> 8) & 31;
- int num_extra_bits = symbol & 0xF;
- int bits = code_size + num_extra_bits;
- if (bits <= (m_bits_left + 16))
- extra_bits = get_bits_no_markers(bits) & ((1 << num_extra_bits) - 1);
- else
- {
- get_bits_no_markers(code_size);
- extra_bits = get_bits_no_markers(num_extra_bits);
- }
- }
-
- symbol &= 0xFF;
- }
-
- return symbol;
- }
-
- // Tables and macro used to fully decode the DPCM differences.
- static const int s_extend_test[16] = { 0, 0x0001, 0x0002, 0x0004, 0x0008, 0x0010, 0x0020, 0x0040, 0x0080, 0x0100, 0x0200, 0x0400, 0x0800, 0x1000, 0x2000, 0x4000 };
- static const int s_extend_offset[16] = { 0, -1, -3, -7, -15, -31, -63, -127, -255, -511, -1023, -2047, -4095, -8191, -16383, -32767 };
- static const int s_extend_mask[] = { 0, (1<<0), (1<<1), (1<<2), (1<<3), (1<<4), (1<<5), (1<<6), (1<<7), (1<<8), (1<<9), (1<<10), (1<<11), (1<<12), (1<<13), (1<<14), (1<<15), (1<<16) };
-#define HUFF_EXTEND(x,s) ((x) < s_extend_test[s] ? (x) + s_extend_offset[s] : (x))
-
- // Clamps a value between 0-255.
- inline uint8 jpeg_decoder::clamp(int i)
- {
-    if (static_cast<uint>(i) > 255)
- i = (((~i) >> 31) & 0xFF);
-
-    return static_cast<uint8>(i);
- }
-
- namespace DCT_Upsample
- {
- struct Matrix44
- {
- typedef int Element_Type;
- enum { NUM_ROWS = 4, NUM_COLS = 4 };
-
- Element_Type v[NUM_ROWS][NUM_COLS];
-
- inline int rows() const { return NUM_ROWS; }
- inline int cols() const { return NUM_COLS; }
-
- inline const Element_Type & at(int r, int c) const { return v[r][c]; }
- inline Element_Type & at(int r, int c) { return v[r][c]; }
-
- inline Matrix44() { }
-
- inline Matrix44& operator += (const Matrix44& a)
- {
- for (int r = 0; r < NUM_ROWS; r++)
- {
- at(r, 0) += a.at(r, 0);
- at(r, 1) += a.at(r, 1);
- at(r, 2) += a.at(r, 2);
- at(r, 3) += a.at(r, 3);
- }
- return *this;
- }
-
- inline Matrix44& operator -= (const Matrix44& a)
- {
- for (int r = 0; r < NUM_ROWS; r++)
- {
- at(r, 0) -= a.at(r, 0);
- at(r, 1) -= a.at(r, 1);
- at(r, 2) -= a.at(r, 2);
- at(r, 3) -= a.at(r, 3);
- }
- return *this;
- }
-
- friend inline Matrix44 operator + (const Matrix44& a, const Matrix44& b)
- {
- Matrix44 ret;
- for (int r = 0; r < NUM_ROWS; r++)
- {
- ret.at(r, 0) = a.at(r, 0) + b.at(r, 0);
- ret.at(r, 1) = a.at(r, 1) + b.at(r, 1);
- ret.at(r, 2) = a.at(r, 2) + b.at(r, 2);
- ret.at(r, 3) = a.at(r, 3) + b.at(r, 3);
- }
- return ret;
- }
-
- friend inline Matrix44 operator - (const Matrix44& a, const Matrix44& b)
- {
- Matrix44 ret;
- for (int r = 0; r < NUM_ROWS; r++)
- {
- ret.at(r, 0) = a.at(r, 0) - b.at(r, 0);
- ret.at(r, 1) = a.at(r, 1) - b.at(r, 1);
- ret.at(r, 2) = a.at(r, 2) - b.at(r, 2);
- ret.at(r, 3) = a.at(r, 3) - b.at(r, 3);
- }
- return ret;
- }
-
- static inline void add_and_store(jpgd_block_t* pDst, const Matrix44& a, const Matrix44& b)
- {
- for (int r = 0; r < 4; r++)
- {
-        pDst[0*8 + r] = static_cast<jpgd_block_t>(a.at(r, 0) + b.at(r, 0));
-        pDst[1*8 + r] = static_cast<jpgd_block_t>(a.at(r, 1) + b.at(r, 1));
-        pDst[2*8 + r] = static_cast<jpgd_block_t>(a.at(r, 2) + b.at(r, 2));
-        pDst[3*8 + r] = static_cast<jpgd_block_t>(a.at(r, 3) + b.at(r, 3));
- }
- }
-
- static inline void sub_and_store(jpgd_block_t* pDst, const Matrix44& a, const Matrix44& b)
- {
- for (int r = 0; r < 4; r++)
- {
-        pDst[0*8 + r] = static_cast<jpgd_block_t>(a.at(r, 0) - b.at(r, 0));
-        pDst[1*8 + r] = static_cast<jpgd_block_t>(a.at(r, 1) - b.at(r, 1));
-        pDst[2*8 + r] = static_cast<jpgd_block_t>(a.at(r, 2) - b.at(r, 2));
-        pDst[3*8 + r] = static_cast<jpgd_block_t>(a.at(r, 3) - b.at(r, 3));
- }
- }
- };
-
- const int FRACT_BITS = 10;
- const int SCALE = 1 << FRACT_BITS;
-
- typedef int Temp_Type;
-#define D(i) (((i) + (SCALE >> 1)) >> FRACT_BITS)
-#define F(i) ((int)((i) * SCALE + .5f))
-
- // Any decent C++ compiler will optimize this at compile time to a 0, or an array access.
-#define AT(c, r) ((((c)>=NUM_COLS)||((r)>=NUM_ROWS)) ? 0 : pSrc[(c)+(r)*8])
-
- // NUM_ROWS/NUM_COLS = # of non-zero rows/cols in input matrix
-    template <int NUM_ROWS, int NUM_COLS>
- struct P_Q
- {
- static void calc(Matrix44& P, Matrix44& Q, const jpgd_block_t* pSrc)
- {
- // 4x8 = 4x8 times 8x8, matrix 0 is constant
- const Temp_Type X000 = AT(0, 0);
- const Temp_Type X001 = AT(0, 1);
- const Temp_Type X002 = AT(0, 2);
- const Temp_Type X003 = AT(0, 3);
- const Temp_Type X004 = AT(0, 4);
- const Temp_Type X005 = AT(0, 5);
- const Temp_Type X006 = AT(0, 6);
- const Temp_Type X007 = AT(0, 7);
- const Temp_Type X010 = D(F(0.415735f) * AT(1, 0) + F(0.791065f) * AT(3, 0) + F(-0.352443f) * AT(5, 0) + F(0.277785f) * AT(7, 0));
- const Temp_Type X011 = D(F(0.415735f) * AT(1, 1) + F(0.791065f) * AT(3, 1) + F(-0.352443f) * AT(5, 1) + F(0.277785f) * AT(7, 1));
- const Temp_Type X012 = D(F(0.415735f) * AT(1, 2) + F(0.791065f) * AT(3, 2) + F(-0.352443f) * AT(5, 2) + F(0.277785f) * AT(7, 2));
- const Temp_Type X013 = D(F(0.415735f) * AT(1, 3) + F(0.791065f) * AT(3, 3) + F(-0.352443f) * AT(5, 3) + F(0.277785f) * AT(7, 3));
- const Temp_Type X014 = D(F(0.415735f) * AT(1, 4) + F(0.791065f) * AT(3, 4) + F(-0.352443f) * AT(5, 4) + F(0.277785f) * AT(7, 4));
- const Temp_Type X015 = D(F(0.415735f) * AT(1, 5) + F(0.791065f) * AT(3, 5) + F(-0.352443f) * AT(5, 5) + F(0.277785f) * AT(7, 5));
- const Temp_Type X016 = D(F(0.415735f) * AT(1, 6) + F(0.791065f) * AT(3, 6) + F(-0.352443f) * AT(5, 6) + F(0.277785f) * AT(7, 6));
- const Temp_Type X017 = D(F(0.415735f) * AT(1, 7) + F(0.791065f) * AT(3, 7) + F(-0.352443f) * AT(5, 7) + F(0.277785f) * AT(7, 7));
- const Temp_Type X020 = AT(4, 0);
- const Temp_Type X021 = AT(4, 1);
- const Temp_Type X022 = AT(4, 2);
- const Temp_Type X023 = AT(4, 3);
- const Temp_Type X024 = AT(4, 4);
- const Temp_Type X025 = AT(4, 5);
- const Temp_Type X026 = AT(4, 6);
- const Temp_Type X027 = AT(4, 7);
- const Temp_Type X030 = D(F(0.022887f) * AT(1, 0) + F(-0.097545f) * AT(3, 0) + F(0.490393f) * AT(5, 0) + F(0.865723f) * AT(7, 0));
- const Temp_Type X031 = D(F(0.022887f) * AT(1, 1) + F(-0.097545f) * AT(3, 1) + F(0.490393f) * AT(5, 1) + F(0.865723f) * AT(7, 1));
- const Temp_Type X032 = D(F(0.022887f) * AT(1, 2) + F(-0.097545f) * AT(3, 2) + F(0.490393f) * AT(5, 2) + F(0.865723f) * AT(7, 2));
- const Temp_Type X033 = D(F(0.022887f) * AT(1, 3) + F(-0.097545f) * AT(3, 3) + F(0.490393f) * AT(5, 3) + F(0.865723f) * AT(7, 3));
- const Temp_Type X034 = D(F(0.022887f) * AT(1, 4) + F(-0.097545f) * AT(3, 4) + F(0.490393f) * AT(5, 4) + F(0.865723f) * AT(7, 4));
- const Temp_Type X035 = D(F(0.022887f) * AT(1, 5) + F(-0.097545f) * AT(3, 5) + F(0.490393f) * AT(5, 5) + F(0.865723f) * AT(7, 5));
- const Temp_Type X036 = D(F(0.022887f) * AT(1, 6) + F(-0.097545f) * AT(3, 6) + F(0.490393f) * AT(5, 6) + F(0.865723f) * AT(7, 6));
- const Temp_Type X037 = D(F(0.022887f) * AT(1, 7) + F(-0.097545f) * AT(3, 7) + F(0.490393f) * AT(5, 7) + F(0.865723f) * AT(7, 7));
-
- // 4x4 = 4x8 times 8x4, matrix 1 is constant
- P.at(0, 0) = X000;
- P.at(0, 1) = D(X001 * F(0.415735f) + X003 * F(0.791065f) + X005 * F(-0.352443f) + X007 * F(0.277785f));
- P.at(0, 2) = X004;
- P.at(0, 3) = D(X001 * F(0.022887f) + X003 * F(-0.097545f) + X005 * F(0.490393f) + X007 * F(0.865723f));
- P.at(1, 0) = X010;
- P.at(1, 1) = D(X011 * F(0.415735f) + X013 * F(0.791065f) + X015 * F(-0.352443f) + X017 * F(0.277785f));
- P.at(1, 2) = X014;
- P.at(1, 3) = D(X011 * F(0.022887f) + X013 * F(-0.097545f) + X015 * F(0.490393f) + X017 * F(0.865723f));
- P.at(2, 0) = X020;
- P.at(2, 1) = D(X021 * F(0.415735f) + X023 * F(0.791065f) + X025 * F(-0.352443f) + X027 * F(0.277785f));
- P.at(2, 2) = X024;
- P.at(2, 3) = D(X021 * F(0.022887f) + X023 * F(-0.097545f) + X025 * F(0.490393f) + X027 * F(0.865723f));
- P.at(3, 0) = X030;
- P.at(3, 1) = D(X031 * F(0.415735f) + X033 * F(0.791065f) + X035 * F(-0.352443f) + X037 * F(0.277785f));
- P.at(3, 2) = X034;
- P.at(3, 3) = D(X031 * F(0.022887f) + X033 * F(-0.097545f) + X035 * F(0.490393f) + X037 * F(0.865723f));
- // 40 muls 24 adds
-
- // 4x4 = 4x8 times 8x4, matrix 1 is constant
- Q.at(0, 0) = D(X001 * F(0.906127f) + X003 * F(-0.318190f) + X005 * F(0.212608f) + X007 * F(-0.180240f));
- Q.at(0, 1) = X002;
- Q.at(0, 2) = D(X001 * F(-0.074658f) + X003 * F(0.513280f) + X005 * F(0.768178f) + X007 * F(-0.375330f));
- Q.at(0, 3) = X006;
- Q.at(1, 0) = D(X011 * F(0.906127f) + X013 * F(-0.318190f) + X015 * F(0.212608f) + X017 * F(-0.180240f));
- Q.at(1, 1) = X012;
- Q.at(1, 2) = D(X011 * F(-0.074658f) + X013 * F(0.513280f) + X015 * F(0.768178f) + X017 * F(-0.375330f));
- Q.at(1, 3) = X016;
- Q.at(2, 0) = D(X021 * F(0.906127f) + X023 * F(-0.318190f) + X025 * F(0.212608f) + X027 * F(-0.180240f));
- Q.at(2, 1) = X022;
- Q.at(2, 2) = D(X021 * F(-0.074658f) + X023 * F(0.513280f) + X025 * F(0.768178f) + X027 * F(-0.375330f));
- Q.at(2, 3) = X026;
- Q.at(3, 0) = D(X031 * F(0.906127f) + X033 * F(-0.318190f) + X035 * F(0.212608f) + X037 * F(-0.180240f));
- Q.at(3, 1) = X032;
- Q.at(3, 2) = D(X031 * F(-0.074658f) + X033 * F(0.513280f) + X035 * F(0.768178f) + X037 * F(-0.375330f));
- Q.at(3, 3) = X036;
- // 40 muls 24 adds
- }
- };
-
- template <int NUM_ROWS, int NUM_COLS>
- struct R_S
- {
- static void calc(Matrix44& R, Matrix44& S, const jpgd_block_t* pSrc)
- {
- // 4x8 = 4x8 times 8x8, matrix 0 is constant
- const Temp_Type X100 = D(F(0.906127f) * AT(1, 0) + F(-0.318190f) * AT(3, 0) + F(0.212608f) * AT(5, 0) + F(-0.180240f) * AT(7, 0));
- const Temp_Type X101 = D(F(0.906127f) * AT(1, 1) + F(-0.318190f) * AT(3, 1) + F(0.212608f) * AT(5, 1) + F(-0.180240f) * AT(7, 1));
- const Temp_Type X102 = D(F(0.906127f) * AT(1, 2) + F(-0.318190f) * AT(3, 2) + F(0.212608f) * AT(5, 2) + F(-0.180240f) * AT(7, 2));
- const Temp_Type X103 = D(F(0.906127f) * AT(1, 3) + F(-0.318190f) * AT(3, 3) + F(0.212608f) * AT(5, 3) + F(-0.180240f) * AT(7, 3));
- const Temp_Type X104 = D(F(0.906127f) * AT(1, 4) + F(-0.318190f) * AT(3, 4) + F(0.212608f) * AT(5, 4) + F(-0.180240f) * AT(7, 4));
- const Temp_Type X105 = D(F(0.906127f) * AT(1, 5) + F(-0.318190f) * AT(3, 5) + F(0.212608f) * AT(5, 5) + F(-0.180240f) * AT(7, 5));
- const Temp_Type X106 = D(F(0.906127f) * AT(1, 6) + F(-0.318190f) * AT(3, 6) + F(0.212608f) * AT(5, 6) + F(-0.180240f) * AT(7, 6));
- const Temp_Type X107 = D(F(0.906127f) * AT(1, 7) + F(-0.318190f) * AT(3, 7) + F(0.212608f) * AT(5, 7) + F(-0.180240f) * AT(7, 7));
- const Temp_Type X110 = AT(2, 0);
- const Temp_Type X111 = AT(2, 1);
- const Temp_Type X112 = AT(2, 2);
- const Temp_Type X113 = AT(2, 3);
- const Temp_Type X114 = AT(2, 4);
- const Temp_Type X115 = AT(2, 5);
- const Temp_Type X116 = AT(2, 6);
- const Temp_Type X117 = AT(2, 7);
- const Temp_Type X120 = D(F(-0.074658f) * AT(1, 0) + F(0.513280f) * AT(3, 0) + F(0.768178f) * AT(5, 0) + F(-0.375330f) * AT(7, 0));
- const Temp_Type X121 = D(F(-0.074658f) * AT(1, 1) + F(0.513280f) * AT(3, 1) + F(0.768178f) * AT(5, 1) + F(-0.375330f) * AT(7, 1));
- const Temp_Type X122 = D(F(-0.074658f) * AT(1, 2) + F(0.513280f) * AT(3, 2) + F(0.768178f) * AT(5, 2) + F(-0.375330f) * AT(7, 2));
- const Temp_Type X123 = D(F(-0.074658f) * AT(1, 3) + F(0.513280f) * AT(3, 3) + F(0.768178f) * AT(5, 3) + F(-0.375330f) * AT(7, 3));
- const Temp_Type X124 = D(F(-0.074658f) * AT(1, 4) + F(0.513280f) * AT(3, 4) + F(0.768178f) * AT(5, 4) + F(-0.375330f) * AT(7, 4));
- const Temp_Type X125 = D(F(-0.074658f) * AT(1, 5) + F(0.513280f) * AT(3, 5) + F(0.768178f) * AT(5, 5) + F(-0.375330f) * AT(7, 5));
- const Temp_Type X126 = D(F(-0.074658f) * AT(1, 6) + F(0.513280f) * AT(3, 6) + F(0.768178f) * AT(5, 6) + F(-0.375330f) * AT(7, 6));
- const Temp_Type X127 = D(F(-0.074658f) * AT(1, 7) + F(0.513280f) * AT(3, 7) + F(0.768178f) * AT(5, 7) + F(-0.375330f) * AT(7, 7));
- const Temp_Type X130 = AT(6, 0);
- const Temp_Type X131 = AT(6, 1);
- const Temp_Type X132 = AT(6, 2);
- const Temp_Type X133 = AT(6, 3);
- const Temp_Type X134 = AT(6, 4);
- const Temp_Type X135 = AT(6, 5);
- const Temp_Type X136 = AT(6, 6);
- const Temp_Type X137 = AT(6, 7);
- // 80 muls 48 adds
-
- // 4x4 = 4x8 times 8x4, matrix 1 is constant
- R.at(0, 0) = X100;
- R.at(0, 1) = D(X101 * F(0.415735f) + X103 * F(0.791065f) + X105 * F(-0.352443f) + X107 * F(0.277785f));
- R.at(0, 2) = X104;
- R.at(0, 3) = D(X101 * F(0.022887f) + X103 * F(-0.097545f) + X105 * F(0.490393f) + X107 * F(0.865723f));
- R.at(1, 0) = X110;
- R.at(1, 1) = D(X111 * F(0.415735f) + X113 * F(0.791065f) + X115 * F(-0.352443f) + X117 * F(0.277785f));
- R.at(1, 2) = X114;
- R.at(1, 3) = D(X111 * F(0.022887f) + X113 * F(-0.097545f) + X115 * F(0.490393f) + X117 * F(0.865723f));
- R.at(2, 0) = X120;
- R.at(2, 1) = D(X121 * F(0.415735f) + X123 * F(0.791065f) + X125 * F(-0.352443f) + X127 * F(0.277785f));
- R.at(2, 2) = X124;
- R.at(2, 3) = D(X121 * F(0.022887f) + X123 * F(-0.097545f) + X125 * F(0.490393f) + X127 * F(0.865723f));
- R.at(3, 0) = X130;
- R.at(3, 1) = D(X131 * F(0.415735f) + X133 * F(0.791065f) + X135 * F(-0.352443f) + X137 * F(0.277785f));
- R.at(3, 2) = X134;
- R.at(3, 3) = D(X131 * F(0.022887f) + X133 * F(-0.097545f) + X135 * F(0.490393f) + X137 * F(0.865723f));
- // 40 muls 24 adds
- // 4x4 = 4x8 times 8x4, matrix 1 is constant
- S.at(0, 0) = D(X101 * F(0.906127f) + X103 * F(-0.318190f) + X105 * F(0.212608f) + X107 * F(-0.180240f));
- S.at(0, 1) = X102;
- S.at(0, 2) = D(X101 * F(-0.074658f) + X103 * F(0.513280f) + X105 * F(0.768178f) + X107 * F(-0.375330f));
- S.at(0, 3) = X106;
- S.at(1, 0) = D(X111 * F(0.906127f) + X113 * F(-0.318190f) + X115 * F(0.212608f) + X117 * F(-0.180240f));
- S.at(1, 1) = X112;
- S.at(1, 2) = D(X111 * F(-0.074658f) + X113 * F(0.513280f) + X115 * F(0.768178f) + X117 * F(-0.375330f));
- S.at(1, 3) = X116;
- S.at(2, 0) = D(X121 * F(0.906127f) + X123 * F(-0.318190f) + X125 * F(0.212608f) + X127 * F(-0.180240f));
- S.at(2, 1) = X122;
- S.at(2, 2) = D(X121 * F(-0.074658f) + X123 * F(0.513280f) + X125 * F(0.768178f) + X127 * F(-0.375330f));
- S.at(2, 3) = X126;
- S.at(3, 0) = D(X131 * F(0.906127f) + X133 * F(-0.318190f) + X135 * F(0.212608f) + X137 * F(-0.180240f));
- S.at(3, 1) = X132;
- S.at(3, 2) = D(X131 * F(-0.074658f) + X133 * F(0.513280f) + X135 * F(0.768178f) + X137 * F(-0.375330f));
- S.at(3, 3) = X136;
- // 40 muls 24 adds
- }
- };
- } // end namespace DCT_Upsample
-
- // Unconditionally frees all allocated m_blocks.
- void jpeg_decoder::free_all_blocks()
- {
- m_pStream = NULL;
- for (mem_block *b = m_pMem_blocks; b; )
- {
- mem_block *n = b->m_pNext;
- jpgd_free(b);
- b = n;
- }
- m_pMem_blocks = NULL;
- }
-
- // This method handles all errors.
- // It could easily be changed to use C++ exceptions.
- void jpeg_decoder::stop_decoding(jpgd_status status)
- {
- m_error_code = status;
- free_all_blocks();
- longjmp(m_jmp_state, status);
-
- // we shouldn't get here as longjmp shouldn't return, but we put it here to make it explicit
- // that this function doesn't return, otherwise we get this error:
- //
- // error : function declared 'noreturn' should not return
- exit(1);
- }
-
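- // Allocates nSize bytes (rounded up to a multiple of 4) from the decoder's internal pool of mem_blocks, growing the pool when no existing block has room.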
- void *jpeg_decoder::alloc(size_t nSize, bool zero)
- {
- nSize = (JPGD_MAX(nSize, 1) + 3) & ~3;
- char *rv = NULL;
- for (mem_block *b = m_pMem_blocks; b; b = b->m_pNext)
- {
- if ((b->m_used_count + nSize) <= b->m_size)
- {
- rv = b->m_data + b->m_used_count;
- b->m_used_count += nSize;
- break;
- }
- }
- if (!rv)
- {
- int capacity = JPGD_MAX(32768 - 256, (nSize + 2047) & ~2047);
- mem_block *b = (mem_block*)jpgd_malloc(sizeof(mem_block) + capacity);
- if (!b) stop_decoding(JPGD_NOTENOUGHMEM);
- b->m_pNext = m_pMem_blocks; m_pMem_blocks = b;
- b->m_used_count = nSize;
- b->m_size = capacity;
- rv = b->m_data;
- }
- if (zero) memset(rv, 0, nSize);
- return rv;
- }
-
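- // Writes the 16-bit value c (low byte first) n times starting at p.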
- void jpeg_decoder::word_clear(void *p, uint16 c, uint n)
- {
- uint8 *pD = (uint8*)p;
- const uint8 l = c & 0xFF, h = (c >> 8) & 0xFF;
- while (n)
- {
- pD[0] = l; pD[1] = h; pD += 2;
- n--;
- }
- }
-
- // Refill the input buffer.
- // This method will sit in a loop until (A) the buffer is full or (B)
- // the stream's read() method reports an end-of-file condition.
- void jpeg_decoder::prep_in_buffer()
- {
- m_in_buf_left = 0;
- m_pIn_buf_ofs = m_in_buf;
-
- if (m_eof_flag)
- return;
-
- do
- {
- int bytes_read = m_pStream->read(m_in_buf + m_in_buf_left, JPGD_IN_BUF_SIZE - m_in_buf_left, &m_eof_flag);
- if (bytes_read == -1)
- stop_decoding(JPGD_STREAM_READ);
-
- m_in_buf_left += bytes_read;
- } while ((m_in_buf_left < JPGD_IN_BUF_SIZE) && (!m_eof_flag));
-
- m_total_bytes_read += m_in_buf_left;
-
- // Pad the end of the block with M_EOI (prevents the decompressor from going off the rails if the stream is invalid).
- // (This dates way back to when this decompressor was written in C/asm, and the all-asm Huffman decoder did some fancy things to increase perf.)
- word_clear(m_pIn_buf_ofs + m_in_buf_left, 0xD9FF, 64);
- }
-
- // Read a Huffman code table.
- void jpeg_decoder::read_dht_marker()
- {
- int i, index, count;
- uint8 huff_num[17];
- uint8 huff_val[256];
-
- uint num_left = get_bits(16);
-
- if (num_left < 2)
- stop_decoding(JPGD_BAD_DHT_MARKER);
-
- num_left -= 2;
-
- while (num_left)
- {
- index = get_bits(8);
-
- huff_num[0] = 0;
-
- count = 0;
-
- for (i = 1; i <= 16; i++)
- {
- huff_num[i] = static_cast<uint8>(get_bits(8));
- count += huff_num[i];
- }
-
- if (count > 255)
- stop_decoding(JPGD_BAD_DHT_COUNTS);
-
- for (i = 0; i < count; i++)
- huff_val[i] = static_cast<uint8>(get_bits(8));
-
- i = 1 + 16 + count;
-
- if (num_left < (uint)i)
- stop_decoding(JPGD_BAD_DHT_MARKER);
-
- num_left -= i;
-
- if ((index & 0x10) > 0x10)
- stop_decoding(JPGD_BAD_DHT_INDEX);
-
- index = (index & 0x0F) + ((index & 0x10) >> 4) * (JPGD_MAX_HUFF_TABLES >> 1);
-
- if (index >= JPGD_MAX_HUFF_TABLES)
- stop_decoding(JPGD_BAD_DHT_INDEX);
-
- if (!m_huff_num[index])
- m_huff_num[index] = (uint8 *)alloc(17);
-
- if (!m_huff_val[index])
- m_huff_val[index] = (uint8 *)alloc(256);
-
- m_huff_ac[index] = (index & 0x10) != 0;
- memcpy(m_huff_num[index], huff_num, 17);
- memcpy(m_huff_val[index], huff_val, 256);
- }
- }
-
- // Read a quantization table.
- void jpeg_decoder::read_dqt_marker()
- {
- int n, i, prec;
- uint num_left;
- uint temp;
-
- num_left = get_bits(16);
-
- if (num_left < 2)
- stop_decoding(JPGD_BAD_DQT_MARKER);
-
- num_left -= 2;
-
- while (num_left)
- {
- n = get_bits(8);
- prec = n >> 4;
- n &= 0x0F;
-
- if (n >= JPGD_MAX_QUANT_TABLES)
- stop_decoding(JPGD_BAD_DQT_TABLE);
-
- if (!m_quant[n])
- m_quant[n] = (jpgd_quant_t *)alloc(64 * sizeof(jpgd_quant_t));
-
- // read quantization entries, in zag order
- for (i = 0; i < 64; i++)
- {
- temp = get_bits(8);
-
- if (prec)
- temp = (temp << 8) + get_bits(8);
-
- m_quant[n][i] = static_cast<jpgd_quant_t>(temp);
- }
-
- i = 64 + 1;
-
- if (prec)
- i += 64;
-
- if (num_left < (uint)i)
- stop_decoding(JPGD_BAD_DQT_LENGTH);
-
- num_left -= i;
- }
- }
-
- // Read the start of frame (SOF) marker.
- void jpeg_decoder::read_sof_marker()
- {
- int i;
- uint num_left;
-
- num_left = get_bits(16);
-
- if (get_bits(8) != 8) /* precision: sorry, only 8-bit precision is supported right now */
- stop_decoding(JPGD_BAD_PRECISION);
-
- m_image_y_size = get_bits(16);
-
- if ((m_image_y_size < 1) || (m_image_y_size > JPGD_MAX_HEIGHT))
- stop_decoding(JPGD_BAD_HEIGHT);
-
- m_image_x_size = get_bits(16);
-
- if ((m_image_x_size < 1) || (m_image_x_size > JPGD_MAX_WIDTH))
- stop_decoding(JPGD_BAD_WIDTH);
-
- m_comps_in_frame = get_bits(8);
-
- if (m_comps_in_frame > JPGD_MAX_COMPONENTS)
- stop_decoding(JPGD_TOO_MANY_COMPONENTS);
-
- if (num_left != (uint)(m_comps_in_frame * 3 + 8))
- stop_decoding(JPGD_BAD_SOF_LENGTH);
-
- for (i = 0; i < m_comps_in_frame; i++)
- {
- m_comp_ident[i] = get_bits(8);
- m_comp_h_samp[i] = get_bits(4);
- m_comp_v_samp[i] = get_bits(4);
- m_comp_quant[i] = get_bits(8);
- }
- }
-
- // Used to skip unrecognized markers.
- void jpeg_decoder::skip_variable_marker()
- {
- uint num_left;
-
- num_left = get_bits(16);
-
- if (num_left < 2)
- stop_decoding(JPGD_BAD_VARIABLE_MARKER);
-
- num_left -= 2;
-
- while (num_left)
- {
- get_bits(8);
- num_left--;
- }
- }
-
- // Read a define restart interval (DRI) marker.
- void jpeg_decoder::read_dri_marker()
- {
- if (get_bits(16) != 4)
- stop_decoding(JPGD_BAD_DRI_LENGTH);
-
- m_restart_interval = get_bits(16);
- }
-
- // Read a start of scan (SOS) marker.
- void jpeg_decoder::read_sos_marker()
- {
- uint num_left;
- int i, ci, n, c, cc;
-
- num_left = get_bits(16);
-
- n = get_bits(8);
-
- m_comps_in_scan = n;
-
- num_left -= 3;
-
- if ( (num_left != (uint)(n * 2 + 3)) || (n < 1) || (n > JPGD_MAX_COMPS_IN_SCAN) )
- stop_decoding(JPGD_BAD_SOS_LENGTH);
-
- for (i = 0; i < n; i++)
- {
- cc = get_bits(8);
- c = get_bits(8);
- num_left -= 2;
-
- for (ci = 0; ci < m_comps_in_frame; ci++)
- if (cc == m_comp_ident[ci])
- break;
-
- if (ci >= m_comps_in_frame)
- stop_decoding(JPGD_BAD_SOS_COMP_ID);
-
- m_comp_list[i] = ci;
- m_comp_dc_tab[ci] = (c >> 4) & 15;
- m_comp_ac_tab[ci] = (c & 15) + (JPGD_MAX_HUFF_TABLES >> 1);
- }
-
- m_spectral_start = get_bits(8);
- m_spectral_end = get_bits(8);
- m_successive_high = get_bits(4);
- m_successive_low = get_bits(4);
-
- if (!m_progressive_flag)
- {
- m_spectral_start = 0;
- m_spectral_end = 63;
- }
-
- num_left -= 3;
-
- while (num_left) /* read past whatever is num_left */
- {
- get_bits(8);
- num_left--;
- }
- }
-
- // Finds the next marker.
- int jpeg_decoder::next_marker()
- {
- uint c, bytes;
-
- bytes = 0;
-
- do
- {
- do
- {
- bytes++;
- c = get_bits(8);
- } while (c != 0xFF);
-
- do
- {
- c = get_bits(8);
- } while (c == 0xFF);
-
- } while (c == 0);
-
- // If bytes > 0 here, there were extra bytes before the marker (not good).
-
- return c;
- }
-
- // Process markers. Returns when an SOFx, SOI, EOI, or SOS marker is
- // encountered.
- int jpeg_decoder::process_markers()
- {
- int c;
-
- for ( ; ; )
- {
- c = next_marker();
-
- switch (c)
- {
- case M_SOF0:
- case M_SOF1:
- case M_SOF2:
- case M_SOF3:
- case M_SOF5:
- case M_SOF6:
- case M_SOF7:
- // case M_JPG:
- case M_SOF9:
- case M_SOF10:
- case M_SOF11:
- case M_SOF13:
- case M_SOF14:
- case M_SOF15:
- case M_SOI:
- case M_EOI:
- case M_SOS:
- {
- return c;
- }
- case M_DHT:
- {
- read_dht_marker();
- break;
- }
- // No arithmetic support - dumb patents!
- case M_DAC:
- {
- stop_decoding(JPGD_NO_ARITHMITIC_SUPPORT);
- break;
- }
- case M_DQT:
- {
- read_dqt_marker();
- break;
- }
- case M_DRI:
- {
- read_dri_marker();
- break;
- }
- //case M_APP0: /* no need to read the JFIF marker */
-
- case M_JPG:
- case M_RST0: /* no parameters */
- case M_RST1:
- case M_RST2:
- case M_RST3:
- case M_RST4:
- case M_RST5:
- case M_RST6:
- case M_RST7:
- case M_TEM:
- {
- stop_decoding(JPGD_UNEXPECTED_MARKER);
- break;
- }
- default: /* must be DNL, DHP, EXP, APPn, JPGn, COM, or RESn or APP0 */
- {
- skip_variable_marker();
- break;
- }
- }
- }
- }
-
- // Finds the start of image (SOI) marker.
- // This code is rather defensive: it only checks the first 4096 bytes to avoid
- // false positives.
- void jpeg_decoder::locate_soi_marker()
- {
- uint lastchar, thischar;
- uint bytesleft;
-
- lastchar = get_bits(8);
-
- thischar = get_bits(8);
-
- /* ok if it's a normal JPEG file without a special header */
-
- if ((lastchar == 0xFF) && (thischar == M_SOI))
- return;
-
- bytesleft = 4096; //512;
-
- for ( ; ; )
- {
- if (--bytesleft == 0)
- stop_decoding(JPGD_NOT_JPEG);
-
- lastchar = thischar;
-
- thischar = get_bits(8);
-
- if (lastchar == 0xFF)
- {
- if (thischar == M_SOI)
- break;
- else if (thischar == M_EOI) // get_bits will keep returning M_EOI if we read past the end
- stop_decoding(JPGD_NOT_JPEG);
- }
- }
-
- // Check the next character after marker: if it's not 0xFF, it can't be the start of the next marker, so the file is bad.
- thischar = (m_bit_buf >> 24) & 0xFF;
-
- if (thischar != 0xFF)
- stop_decoding(JPGD_NOT_JPEG);
- }
-
- // Find a start of frame (SOF) marker.
- void jpeg_decoder::locate_sof_marker()
- {
- locate_soi_marker();
-
- int c = process_markers();
-
- switch (c)
- {
- case M_SOF2:
- m_progressive_flag = JPGD_TRUE;
- case M_SOF0: /* baseline DCT */
- case M_SOF1: /* extended sequential DCT */
- {
- read_sof_marker();
- break;
- }
- case M_SOF9: /* Arithmetic coding */
- {
- stop_decoding(JPGD_NO_ARITHMITIC_SUPPORT);
- break;
- }
- default:
- {
- stop_decoding(JPGD_UNSUPPORTED_MARKER);
- break;
- }
- }
- }
-
- // Find a start of scan (SOS) marker.
- int jpeg_decoder::locate_sos_marker()
- {
- int c;
-
- c = process_markers();
-
- if (c == M_EOI)
- return JPGD_FALSE;
- else if (c != M_SOS)
- stop_decoding(JPGD_UNEXPECTED_MARKER);
-
- read_sos_marker();
-
- return JPGD_TRUE;
- }
-
- // Reset everything to default/uninitialized state.
- void jpeg_decoder::init(jpeg_decoder_stream *pStream)
- {
- m_pMem_blocks = NULL;
- m_error_code = JPGD_SUCCESS;
- m_ready_flag = false;
- m_image_x_size = m_image_y_size = 0;
- m_pStream = pStream;
- m_progressive_flag = JPGD_FALSE;
-
- memset(m_huff_ac, 0, sizeof(m_huff_ac));
- memset(m_huff_num, 0, sizeof(m_huff_num));
- memset(m_huff_val, 0, sizeof(m_huff_val));
- memset(m_quant, 0, sizeof(m_quant));
-
- m_scan_type = 0;
- m_comps_in_frame = 0;
-
- memset(m_comp_h_samp, 0, sizeof(m_comp_h_samp));
- memset(m_comp_v_samp, 0, sizeof(m_comp_v_samp));
- memset(m_comp_quant, 0, sizeof(m_comp_quant));
- memset(m_comp_ident, 0, sizeof(m_comp_ident));
- memset(m_comp_h_blocks, 0, sizeof(m_comp_h_blocks));
- memset(m_comp_v_blocks, 0, sizeof(m_comp_v_blocks));
-
- m_comps_in_scan = 0;
- memset(m_comp_list, 0, sizeof(m_comp_list));
- memset(m_comp_dc_tab, 0, sizeof(m_comp_dc_tab));
- memset(m_comp_ac_tab, 0, sizeof(m_comp_ac_tab));
-
- m_spectral_start = 0;
- m_spectral_end = 0;
- m_successive_low = 0;
- m_successive_high = 0;
- m_max_mcu_x_size = 0;
- m_max_mcu_y_size = 0;
- m_blocks_per_mcu = 0;
- m_max_blocks_per_row = 0;
- m_mcus_per_row = 0;
- m_mcus_per_col = 0;
- m_expanded_blocks_per_component = 0;
- m_expanded_blocks_per_mcu = 0;
- m_expanded_blocks_per_row = 0;
- m_freq_domain_chroma_upsample = false;
-
- memset(m_mcu_org, 0, sizeof(m_mcu_org));
-
- m_total_lines_left = 0;
- m_mcu_lines_left = 0;
- m_real_dest_bytes_per_scan_line = 0;
- m_dest_bytes_per_scan_line = 0;
- m_dest_bytes_per_pixel = 0;
-
- memset(m_pHuff_tabs, 0, sizeof(m_pHuff_tabs));
-
- memset(m_dc_coeffs, 0, sizeof(m_dc_coeffs));
- memset(m_ac_coeffs, 0, sizeof(m_ac_coeffs));
- memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu));
-
- m_eob_run = 0;
-
- memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu));
-
- m_pIn_buf_ofs = m_in_buf;
- m_in_buf_left = 0;
- m_eof_flag = false;
- m_tem_flag = 0;
-
- memset(m_in_buf_pad_start, 0, sizeof(m_in_buf_pad_start));
- memset(m_in_buf, 0, sizeof(m_in_buf));
- memset(m_in_buf_pad_end, 0, sizeof(m_in_buf_pad_end));
-
- m_restart_interval = 0;
- m_restarts_left = 0;
- m_next_restart_num = 0;
-
- m_max_mcus_per_row = 0;
- m_max_blocks_per_mcu = 0;
- m_max_mcus_per_col = 0;
-
- memset(m_last_dc_val, 0, sizeof(m_last_dc_val));
- m_pMCU_coefficients = NULL;
- m_pSample_buf = NULL;
-
- m_total_bytes_read = 0;
-
- m_pScan_line_0 = NULL;
- m_pScan_line_1 = NULL;
-
- // Ready the input buffer.
- prep_in_buffer();
-
- // Prime the bit buffer.
- m_bits_left = 16;
- m_bit_buf = 0;
-
- get_bits(16);
- get_bits(16);
-
- for (int i = 0; i < JPGD_MAX_BLOCKS_PER_MCU; i++)
- m_mcu_block_max_zag[i] = 64;
- }
-
-#define SCALEBITS 16
-#define ONE_HALF ((int) 1 << (SCALEBITS-1))
-#define FIX(x) ((int) ((x) * (1L<<SCALEBITS) + 0.5f))
-
- // Create a few tables that allow us to quickly convert YCbCr to RGB.
- void jpeg_decoder::create_look_ups()
- {
- for (int i = 0; i <= 255; i++)
- {
- int k = i - 128;
- m_crr[i] = ( FIX(1.40200f) * k + ONE_HALF) >> SCALEBITS;
- m_cbb[i] = ( FIX(1.77200f) * k + ONE_HALF) >> SCALEBITS;
- m_crg[i] = (-FIX(0.71414f)) * k;
- m_cbg[i] = (-FIX(0.34414f)) * k + ONE_HALF;
- }
- }
-
- // This method throws back into the stream any bytes that were read
- // into the bit buffer during initial marker scanning.
- void jpeg_decoder::fix_in_buffer()
- {
- // In case any 0xFF's were pulled into the buffer during marker scanning.
- JPGD_ASSERT((m_bits_left & 7) == 0);
-
- if (m_bits_left == 16)
- stuff_char( (uint8)(m_bit_buf & 0xFF));
-
- if (m_bits_left >= 8)
- stuff_char( (uint8)((m_bit_buf >> 8) & 0xFF));
-
- stuff_char((uint8)((m_bit_buf >> 16) & 0xFF));
- stuff_char((uint8)((m_bit_buf >> 24) & 0xFF));
-
- m_bits_left = 16;
- get_bits_no_markers(16);
- get_bits_no_markers(16);
- }
-
- void jpeg_decoder::transform_mcu(int mcu_row)
- {
- jpgd_block_t* pSrc_ptr = m_pMCU_coefficients;
- uint8* pDst_ptr = m_pSample_buf + mcu_row * m_blocks_per_mcu * 64;
-
- for (int mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++)
- {
- idct(pSrc_ptr, pDst_ptr, m_mcu_block_max_zag[mcu_block]);
- pSrc_ptr += 64;
- pDst_ptr += 64;
- }
- }
-
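- // Indexed by (max nonzero zig-zag position - 1) of a block, this table packs an upper bound on the nonzero coefficient rows and columns as (rows * 16 + cols); it selects the DCT_Upsample template specialization used below.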
- static const uint8 s_max_rc[64] =
- {
- 17, 18, 34, 50, 50, 51, 52, 52, 52, 68, 84, 84, 84, 84, 85, 86, 86, 86, 86, 86,
- 102, 118, 118, 118, 118, 118, 118, 119, 120, 120, 120, 120, 120, 120, 120, 136,
- 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136,
- 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136
- };
-
- void jpeg_decoder::transform_mcu_expand(int mcu_row)
- {
- jpgd_block_t* pSrc_ptr = m_pMCU_coefficients;
- uint8* pDst_ptr = m_pSample_buf + mcu_row * m_expanded_blocks_per_mcu * 64;
-
- // Y IDCT
- int mcu_block;
- for (mcu_block = 0; mcu_block < m_expanded_blocks_per_component; mcu_block++)
- {
- idct(pSrc_ptr, pDst_ptr, m_mcu_block_max_zag[mcu_block]);
- pSrc_ptr += 64;
- pDst_ptr += 64;
- }
-
- // Chroma IDCT, with upsampling
- jpgd_block_t temp_block[64];
-
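- // Each of the two chroma blocks is upsampled 2x2 in the frequency domain: its 8x8 coefficients are folded into the 4x4 matrices P, Q, R, S, and their sums/differences are run through idct_4x4() to produce four 8x8 sample blocks.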
- for (int i = 0; i < 2; i++)
- {
- DCT_Upsample::Matrix44 P, Q, R, S;
-
- JPGD_ASSERT(m_mcu_block_max_zag[mcu_block] >= 1);
- JPGD_ASSERT(m_mcu_block_max_zag[mcu_block] <= 64);
-
- switch (s_max_rc[m_mcu_block_max_zag[mcu_block++] - 1])
- {
- case 1*16+1:
- DCT_Upsample::P_Q<1, 1>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<1, 1>::calc(R, S, pSrc_ptr);
- break;
- case 1*16+2:
- DCT_Upsample::P_Q<1, 2>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<1, 2>::calc(R, S, pSrc_ptr);
- break;
- case 2*16+2:
- DCT_Upsample::P_Q<2, 2>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<2, 2>::calc(R, S, pSrc_ptr);
- break;
- case 3*16+2:
- DCT_Upsample::P_Q<3, 2>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<3, 2>::calc(R, S, pSrc_ptr);
- break;
- case 3*16+3:
- DCT_Upsample::P_Q<3, 3>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<3, 3>::calc(R, S, pSrc_ptr);
- break;
- case 3*16+4:
- DCT_Upsample::P_Q<3, 4>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<3, 4>::calc(R, S, pSrc_ptr);
- break;
- case 4*16+4:
- DCT_Upsample::P_Q<4, 4>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<4, 4>::calc(R, S, pSrc_ptr);
- break;
- case 5*16+4:
- DCT_Upsample::P_Q<5, 4>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<5, 4>::calc(R, S, pSrc_ptr);
- break;
- case 5*16+5:
- DCT_Upsample::P_Q<5, 5>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<5, 5>::calc(R, S, pSrc_ptr);
- break;
- case 5*16+6:
- DCT_Upsample::P_Q<5, 6>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<5, 6>::calc(R, S, pSrc_ptr);
- break;
- case 6*16+6:
- DCT_Upsample::P_Q<6, 6>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<6, 6>::calc(R, S, pSrc_ptr);
- break;
- case 7*16+6:
- DCT_Upsample::P_Q<7, 6>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<7, 6>::calc(R, S, pSrc_ptr);
- break;
- case 7*16+7:
- DCT_Upsample::P_Q<7, 7>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<7, 7>::calc(R, S, pSrc_ptr);
- break;
- case 7*16+8:
- DCT_Upsample::P_Q<7, 8>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<7, 8>::calc(R, S, pSrc_ptr);
- break;
- case 8*16+8:
- DCT_Upsample::P_Q<8, 8>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<8, 8>::calc(R, S, pSrc_ptr);
- break;
- default:
- JPGD_ASSERT(false);
- }
-
- DCT_Upsample::Matrix44 a(P + Q); P -= Q;
- DCT_Upsample::Matrix44& b = P;
- DCT_Upsample::Matrix44 c(R + S); R -= S;
- DCT_Upsample::Matrix44& d = R;
-
- DCT_Upsample::Matrix44::add_and_store(temp_block, a, c);
- idct_4x4(temp_block, pDst_ptr);
- pDst_ptr += 64;
-
- DCT_Upsample::Matrix44::sub_and_store(temp_block, a, c);
- idct_4x4(temp_block, pDst_ptr);
- pDst_ptr += 64;
-
- DCT_Upsample::Matrix44::add_and_store(temp_block, b, d);
- idct_4x4(temp_block, pDst_ptr);
- pDst_ptr += 64;
-
- DCT_Upsample::Matrix44::sub_and_store(temp_block, b, d);
- idct_4x4(temp_block, pDst_ptr);
- pDst_ptr += 64;
-
- pSrc_ptr += 64;
- }
- }
-
- // Loads and dequantizes the next row of (already decoded) coefficients.
- // Progressive images only.
- void jpeg_decoder::load_next_row()
- {
- int i;
- jpgd_block_t *p;
- jpgd_quant_t *q;
- int mcu_row, mcu_block, row_block = 0;
- int component_num, component_id;
- int block_x_mcu[JPGD_MAX_COMPONENTS];
-
- memset(block_x_mcu, 0, JPGD_MAX_COMPONENTS * sizeof(int));
-
- for (mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++)
- {
- int block_x_mcu_ofs = 0, block_y_mcu_ofs = 0;
-
- for (mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++)
- {
- component_id = m_mcu_org[mcu_block];
- q = m_quant[m_comp_quant[component_id]];
-
- p = m_pMCU_coefficients + 64 * mcu_block;
-
- jpgd_block_t* pAC = coeff_buf_getp(m_ac_coeffs[component_id], block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs);
- jpgd_block_t* pDC = coeff_buf_getp(m_dc_coeffs[component_id], block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs);
- p[0] = pDC[0];
- memcpy(&p[1], &pAC[1], 63 * sizeof(jpgd_block_t));
-
- for (i = 63; i > 0; i--)
- if (p[g_ZAG[i]])
- break;
-
- m_mcu_block_max_zag[mcu_block] = i + 1;
-
- for ( ; i >= 0; i--)
- if (p[g_ZAG[i]])
- p[g_ZAG[i]] = static_cast<jpgd_block_t>(p[g_ZAG[i]] * q[i]);
-
- row_block++;
-
- if (m_comps_in_scan == 1)
- block_x_mcu[component_id]++;
- else
- {
- if (++block_x_mcu_ofs == m_comp_h_samp[component_id])
- {
- block_x_mcu_ofs = 0;
-
- if (++block_y_mcu_ofs == m_comp_v_samp[component_id])
- {
- block_y_mcu_ofs = 0;
-
- block_x_mcu[component_id] += m_comp_h_samp[component_id];
- }
- }
- }
- }
-
- if (m_freq_domain_chroma_upsample)
- transform_mcu_expand(mcu_row);
- else
- transform_mcu(mcu_row);
- }
-
- if (m_comps_in_scan == 1)
- m_block_y_mcu[m_comp_list[0]]++;
- else
- {
- for (component_num = 0; component_num < m_comps_in_scan; component_num++)
- {
- component_id = m_comp_list[component_num];
-
- m_block_y_mcu[component_id] += m_comp_v_samp[component_id];
- }
- }
- }
-
- // Restart interval processing.
- void jpeg_decoder::process_restart()
- {
- int i;
- int c = 0;
-
- // Align to a byte boundary
- // FIXME: Is this really necessary? get_bits_no_markers() never reads in markers!
- //get_bits_no_markers(m_bits_left & 7);
-
- // Let's scan a little bit to find the marker, but not _too_ far.
- // 1536 is a "fudge factor" that determines how much to scan.
- for (i = 1536; i > 0; i--)
- if (get_char() == 0xFF)
- break;
-
- if (i == 0)
- stop_decoding(JPGD_BAD_RESTART_MARKER);
-
- for ( ; i > 0; i--)
- if ((c = get_char()) != 0xFF)
- break;
-
- if (i == 0)
- stop_decoding(JPGD_BAD_RESTART_MARKER);
-
- // Is it the expected marker? If not, something bad happened.
- if (c != (m_next_restart_num + M_RST0))
- stop_decoding(JPGD_BAD_RESTART_MARKER);
-
- // Reset each component's DC prediction values.
- memset(&m_last_dc_val, 0, m_comps_in_frame * sizeof(uint));
-
- m_eob_run = 0;
-
- m_restarts_left = m_restart_interval;
-
- m_next_restart_num = (m_next_restart_num + 1) & 7;
-
- // Get the bit buffer going again...
-
- m_bits_left = 16;
- get_bits_no_markers(16);
- get_bits_no_markers(16);
- }
-
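- // Dequantizes a single AC coefficient by multiplying it with its quantization table entry.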
- static inline int dequantize_ac(int c, int q) { c *= q; return c; }
-
- // Decodes and dequantizes the next row of coefficients.
- void jpeg_decoder::decode_next_row()
- {
- int row_block = 0;
-
- for (int mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++)
- {
- if ((m_restart_interval) && (m_restarts_left == 0))
- process_restart();
-
- jpgd_block_t* p = m_pMCU_coefficients;
- for (int mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++, p += 64)
- {
- int component_id = m_mcu_org[mcu_block];
- jpgd_quant_t* q = m_quant[m_comp_quant[component_id]];
-
- int r, s;
- s = huff_decode(m_pHuff_tabs[m_comp_dc_tab[component_id]], r);
- s = HUFF_EXTEND(r, s);
-
- m_last_dc_val[component_id] = (s += m_last_dc_val[component_id]);
-
- p[0] = static_cast<jpgd_block_t>(s * q[0]);
-
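- // The coefficient buffer is reused for every MCU, so remember how many entries were set for this block slot last time and zero any stale ones we skip over.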
- int prev_num_set = m_mcu_block_max_zag[mcu_block];
-
- huff_tables *pH = m_pHuff_tabs[m_comp_ac_tab[component_id]];
-
- int k;
- for (k = 1; k < 64; k++)
- {
- int extra_bits;
- s = huff_decode(pH, extra_bits);
-
- r = s >> 4;
- s &= 15;
-
- if (s)
- {
- if (r)
- {
- if ((k + r) > 63)
- stop_decoding(JPGD_DECODE_ERROR);
-
- if (k < prev_num_set)
- {
- int n = JPGD_MIN(r, prev_num_set - k);
- int kt = k;
- while (n--)
- p[g_ZAG[kt++]] = 0;
- }
-
- k += r;
- }
-
- s = HUFF_EXTEND(extra_bits, s);
-
- JPGD_ASSERT(k < 64);
-
- p[g_ZAG[k]] = static_cast<jpgd_block_t>(dequantize_ac(s, q[k])); //s * q[k];
- }
- else
- {
- if (r == 15)
- {
- if ((k + 16) > 64)
- stop_decoding(JPGD_DECODE_ERROR);
-
- if (k < prev_num_set)
- {
- int n = JPGD_MIN(16, prev_num_set - k);
- int kt = k;
- while (n--)
- {
- JPGD_ASSERT(kt <= 63);
- p[g_ZAG[kt++]] = 0;
- }
- }
-
- k += 16 - 1; // - 1 because the loop counter is k
- // BEGIN EPIC MOD
- JPGD_ASSERT(k < 64 && p[g_ZAG[k]] == 0);
- // END EPIC MOD
- }
- else
- break;
- }
- }
-
- if (k < prev_num_set)
- {
- int kt = k;
- while (kt < prev_num_set)
- p[g_ZAG[kt++]] = 0;
- }
-
- m_mcu_block_max_zag[mcu_block] = k;
-
- row_block++;
- }
-
- if (m_freq_domain_chroma_upsample)
- transform_mcu_expand(mcu_row);
- else
- transform_mcu(mcu_row);
-
- m_restarts_left--;
- }
- }
-
- // YCbCr H1V1 (1x1:1:1, 3 m_blocks per MCU) to RGB
- void jpeg_decoder::H1V1Convert()
- {
- int row = m_max_mcu_y_size - m_mcu_lines_left;
- uint8 *d = m_pScan_line_0;
- uint8 *s = m_pSample_buf + row * 8;
-
- for (int i = m_max_mcus_per_row; i > 0; i--)
- {
- for (int j = 0; j < 8; j++)
- {
- int y = s[j];
- int cb = s[64+j];
- int cr = s[128+j];
-
- if (jpg_format == ERGBFormatJPG::BGRA)
- {
- d[0] = clamp(y + m_cbb[cb]);
- d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16));
- d[2] = clamp(y + m_crr[cr]);
- d[3] = 255;
- }
- else
- {
- d[0] = clamp(y + m_crr[cr]);
- d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16));
- d[2] = clamp(y + m_cbb[cb]);
- d[3] = 255;
- }
- d += 4;
- }
-
- s += 64*3;
- }
- }
-
- // YCbCr H2V1 (2x1:1:1, 4 m_blocks per MCU) to RGB
- void jpeg_decoder::H2V1Convert()
- {
- int row = m_max_mcu_y_size - m_mcu_lines_left;
- uint8 *d0 = m_pScan_line_0;
- uint8 *y = m_pSample_buf + row * 8;
- uint8 *c = m_pSample_buf + 2*64 + row * 8;
-
- for (int i = m_max_mcus_per_row; i > 0; i--)
- {
- for (int l = 0; l < 2; l++)
- {
- for (int j = 0; j < 4; j++)
- {
- int cb = c[0];
- int cr = c[64];
-
- int rc = m_crr[cr];
- int gc = ((m_crg[cr] + m_cbg[cb]) >> 16);
- int bc = m_cbb[cb];
-
- int yy = y[j<<1];
- if (jpg_format == ERGBFormatJPG::BGRA)
- {
- d0[0] = clamp(yy+bc);
- d0[1] = clamp(yy+gc);
- d0[2] = clamp(yy+rc);
- d0[3] = 255;
- yy = y[(j<<1)+1];
- d0[4] = clamp(yy+bc);
- d0[5] = clamp(yy+gc);
- d0[6] = clamp(yy+rc);
- d0[7] = 255;
- }
- else
- {
- d0[0] = clamp(yy+rc);
- d0[1] = clamp(yy+gc);
- d0[2] = clamp(yy+bc);
- d0[3] = 255;
- yy = y[(j<<1)+1];
- d0[4] = clamp(yy+rc);
- d0[5] = clamp(yy+gc);
- d0[6] = clamp(yy+bc);
- d0[7] = 255;
- }
-
- d0 += 8;
-
- c++;
- }
- y += 64;
- }
-
- y += 64*4 - 64*2;
- c += 64*4 - 8;
- }
- }
-
- // YCbCr H1V2 (1x2:1:1, 4 m_blocks per MCU) to RGB
- void jpeg_decoder::H1V2Convert()
- {
- int row = m_max_mcu_y_size - m_mcu_lines_left;
- uint8 *d0 = m_pScan_line_0;
- uint8 *d1 = m_pScan_line_1;
- uint8 *y;
- uint8 *c;
-
- if (row < 8)
- y = m_pSample_buf + row * 8;
- else
- y = m_pSample_buf + 64*1 + (row & 7) * 8;
-
- c = m_pSample_buf + 64*2 + (row >> 1) * 8;
-
- for (int i = m_max_mcus_per_row; i > 0; i--)
- {
- for (int j = 0; j < 8; j++)
- {
- int cb = c[0+j];
- int cr = c[64+j];
-
- int rc = m_crr[cr];
- int gc = ((m_crg[cr] + m_cbg[cb]) >> 16);
- int bc = m_cbb[cb];
-
- int yy = y[j];
- if (jpg_format == ERGBFormatJPG::BGRA)
- {
- d0[0] = clamp(yy+bc);
- d0[1] = clamp(yy+gc);
- d0[2] = clamp(yy+rc);
- d0[3] = 255;
- yy = y[8+j];
- d1[0] = clamp(yy+bc);
- d1[1] = clamp(yy+gc);
- d1[2] = clamp(yy+rc);
- d1[3] = 255;
- }
- else
- {
- d0[0] = clamp(yy+rc);
- d0[1] = clamp(yy+gc);
- d0[2] = clamp(yy+bc);
- d0[3] = 255;
- yy = y[8+j];
- d1[0] = clamp(yy+rc);
- d1[1] = clamp(yy+gc);
- d1[2] = clamp(yy+bc);
- d1[3] = 255;
- }
-
- d0 += 4;
- d1 += 4;
- }
-
- y += 64*4;
- c += 64*4;
- }
- }
-
- // YCbCr H2V2 (2x2:1:1, 6 m_blocks per MCU) to RGB
- void jpeg_decoder::H2V2Convert()
- {
- int row = m_max_mcu_y_size - m_mcu_lines_left;
- uint8 *d0 = m_pScan_line_0;
- uint8 *d1 = m_pScan_line_1;
- uint8 *y;
- uint8 *c;
-
- if (row < 8)
- y = m_pSample_buf + row * 8;
- else
- y = m_pSample_buf + 64*2 + (row & 7) * 8;
-
- c = m_pSample_buf + 64*4 + (row >> 1) * 8;
-
- for (int i = m_max_mcus_per_row; i > 0; i--)
- {
- for (int l = 0; l < 2; l++)
- {
- for (int j = 0; j < 8; j += 2)
- {
- int cb = c[0];
- int cr = c[64];
-
- int rc = m_crr[cr];
- int gc = ((m_crg[cr] + m_cbg[cb]) >> 16);
- int bc = m_cbb[cb];
-
- int yy = y[j];
- if (jpg_format == ERGBFormatJPG::BGRA)
- {
- d0[0] = clamp(yy+bc);
- d0[1] = clamp(yy+gc);
- d0[2] = clamp(yy+rc);
- d0[3] = 255;
- yy = y[j+1];
- d0[4] = clamp(yy+bc);
- d0[5] = clamp(yy+gc);
- d0[6] = clamp(yy+rc);
- d0[7] = 255;
- yy = y[j+8];
- d1[0] = clamp(yy+bc);
- d1[1] = clamp(yy+gc);
- d1[2] = clamp(yy+rc);
- d1[3] = 255;
- yy = y[j+8+1];
- d1[4] = clamp(yy+bc);
- d1[5] = clamp(yy+gc);
- d1[6] = clamp(yy+rc);
- d1[7] = 255;
- }
- else
- {
- d0[0] = clamp(yy+rc);
- d0[1] = clamp(yy+gc);
- d0[2] = clamp(yy+bc);
- d0[3] = 255;
- yy = y[j+1];
- d0[4] = clamp(yy+rc);
- d0[5] = clamp(yy+gc);
- d0[6] = clamp(yy+bc);
- d0[7] = 255;
- yy = y[j+8];
- d1[0] = clamp(yy+rc);
- d1[1] = clamp(yy+gc);
- d1[2] = clamp(yy+bc);
- d1[3] = 255;
- yy = y[j+8+1];
- d1[4] = clamp(yy+rc);
- d1[5] = clamp(yy+gc);
- d1[6] = clamp(yy+bc);
- d1[7] = 255;
- }
-
- d0 += 8;
- d1 += 8;
-
- c++;
- }
- y += 64;
- }
-
- y += 64*6 - 64*2;
- c += 64*6 - 8;
- }
- }
-
- // Y (1 block per MCU) to 8-bit grayscale
- void jpeg_decoder::gray_convert()
- {
- int row = m_max_mcu_y_size - m_mcu_lines_left;
- uint8 *d = m_pScan_line_0;
- uint8 *s = m_pSample_buf + row * 8;
-
- for (int i = m_max_mcus_per_row; i > 0; i--)
- {
- *(uint *)d = *(uint *)s;
- *(uint *)(&d[4]) = *(uint *)(&s[4]);
-
- s += 64;
- d += 8;
- }
- }
-
- void jpeg_decoder::expanded_convert()
- {
- int row = m_max_mcu_y_size - m_mcu_lines_left;
-
- uint8* Py = m_pSample_buf + (row / 8) * 64 * m_comp_h_samp[0] + (row & 7) * 8;
-
- uint8* d = m_pScan_line_0;
-
- for (int i = m_max_mcus_per_row; i > 0; i--)
- {
- for (int k = 0; k < m_max_mcu_x_size; k += 8)
- {
- const int Y_ofs = k * 8;
- const int Cb_ofs = Y_ofs + 64 * m_expanded_blocks_per_component;
- const int Cr_ofs = Y_ofs + 64 * m_expanded_blocks_per_component * 2;
- for (int j = 0; j < 8; j++)
- {
- int y = Py[Y_ofs + j];
- int cb = Py[Cb_ofs + j];
- int cr = Py[Cr_ofs + j];
-
- if (jpg_format == ERGBFormatJPG::BGRA)
- {
- d[0] = clamp(y + m_cbb[cb]);
- d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16));
- d[2] = clamp(y + m_crr[cr]);
- d[3] = 255;
- }
- else
- {
- d[0] = clamp(y + m_crr[cr]);
- d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16));
- d[2] = clamp(y + m_cbb[cb]);
- d[3] = 255;
- }
-
- d += 4;
- }
- }
-
- Py += 64 * m_expanded_blocks_per_mcu;
- }
- }
-
- // Find end of image (EOI) marker, so we can return to the user the exact size of the input stream.
- void jpeg_decoder::find_eoi()
- {
- if (!m_progressive_flag)
- {
- // Attempt to read the EOI marker.
- //get_bits_no_markers(m_bits_left & 7);
-
- // Prime the bit buffer
- m_bits_left = 16;
- get_bits(16);
- get_bits(16);
-
- // The next marker _should_ be EOI
- process_markers();
- }
-
- m_total_bytes_read -= m_in_buf_left;
- }
-
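- // Decodes one scan line per call; on success *pScan_line points at the converted row and *pScan_line_len gives its length in bytes.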
- int jpeg_decoder::decode(const void** pScan_line, uint* pScan_line_len)
- {
- if ((m_error_code) || (!m_ready_flag))
- return JPGD_FAILED;
-
- if (m_total_lines_left == 0)
- return JPGD_DONE;
-
- if (m_mcu_lines_left == 0)
- {
- if (setjmp(m_jmp_state))
- return JPGD_FAILED;
-
- if (m_progressive_flag)
- load_next_row();
- else
- decode_next_row();
-
- // Find the EOI marker if that was the last row.
- if (m_total_lines_left <= m_max_mcu_y_size)
- find_eoi();
-
- m_mcu_lines_left = m_max_mcu_y_size;
- }
-
- if (m_freq_domain_chroma_upsample)
- {
- expanded_convert();
- *pScan_line = m_pScan_line_0;
- }
- else
- {
- switch (m_scan_type)
- {
- case JPGD_YH2V2:
- {
- if ((m_mcu_lines_left & 1) == 0)
- {
- H2V2Convert();
- *pScan_line = m_pScan_line_0;
- }
- else
- *pScan_line = m_pScan_line_1;
-
- break;
- }
- case JPGD_YH2V1:
- {
- H2V1Convert();
- *pScan_line = m_pScan_line_0;
- break;
- }
- case JPGD_YH1V2:
- {
- if ((m_mcu_lines_left & 1) == 0)
- {
- H1V2Convert();
- *pScan_line = m_pScan_line_0;
- }
- else
- *pScan_line = m_pScan_line_1;
-
- break;
- }
- case JPGD_YH1V1:
- {
- H1V1Convert();
- *pScan_line = m_pScan_line_0;
- break;
- }
- case JPGD_GRAYSCALE:
- {
- gray_convert();
- *pScan_line = m_pScan_line_0;
-
- break;
- }
- }
- }
-
- *pScan_line_len = m_real_dest_bytes_per_scan_line;
-
- m_mcu_lines_left--;
- m_total_lines_left--;
-
- return JPGD_SUCCESS;
- }
-
- // Creates the tables needed for efficient Huffman decoding.
- void jpeg_decoder::make_huff_table(int index, huff_tables *pH)
- {
- int p, i, l, si;
- uint8 huffsize[257];
- uint huffcode[257];
- uint code;
- uint subtree;
- int code_size;
- int lastp;
- int nextfreeentry;
- int currententry;
-
- pH->ac_table = m_huff_ac[index] != 0;
-
- p = 0;
-
- for (l = 1; l <= 16; l++)
- {
- for (i = 1; i <= m_huff_num[index][l]; i++)
- huffsize[p++] = static_cast<uint8>(l);
- }
-
- huffsize[p] = 0;
-
- lastp = p;
-
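- // Generate the canonical Huffman codes: codes of equal length are consecutive, and the running code is left-shifted each time the code length grows.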
- code = 0;
- si = huffsize[0];
- p = 0;
-
- while (huffsize[p])
- {
- while (huffsize[p] == si)
- {
- huffcode[p++] = code;
- code++;
- }
-
- code <<= 1;
- si++;
- }
-
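- // Build a fast 8-bit lookup table for short codes, plus an overflow binary tree for codes longer than 8 bits.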
- memset(pH->look_up, 0, sizeof(pH->look_up));
- memset(pH->look_up2, 0, sizeof(pH->look_up2));
- memset(pH->tree, 0, sizeof(pH->tree));
- memset(pH->code_size, 0, sizeof(pH->code_size));
-
- nextfreeentry = -1;
-
- p = 0;
-
- while (p < lastp)
- {
- i = m_huff_val[index][p];
- code = huffcode[p];
- code_size = huffsize[p];
-
- pH->code_size[i] = static_cast<uint8>(code_size);
-
- if (code_size <= 8)
- {
- code <<= (8 - code_size);
-
- for (l = 1 << (8 - code_size); l > 0; l--)
- {
- JPGD_ASSERT(i < 256);
-
- pH->look_up[code] = i;
-
- bool has_extrabits = false;
- int extra_bits = 0;
- int num_extra_bits = i & 15;
-
- int bits_to_fetch = code_size;
- if (num_extra_bits)
- {
- int total_codesize = code_size + num_extra_bits;
- if (total_codesize <= 8)
- {
- has_extrabits = true;
- extra_bits = ((1 << num_extra_bits) - 1) & (code >> (8 - total_codesize));
- JPGD_ASSERT(extra_bits <= 0x7FFF);
- bits_to_fetch += num_extra_bits;
- }
- }
-
- if (!has_extrabits)
- pH->look_up2[code] = i | (bits_to_fetch << 8);
- else
- pH->look_up2[code] = i | 0x8000 | (extra_bits << 16) | (bits_to_fetch << 8);
-
- code++;
- }
- }
- else
- {
- subtree = (code >> (code_size - 8)) & 0xFF;
-
- currententry = pH->look_up[subtree];
-
- if (currententry == 0)
- {
- pH->look_up[subtree] = currententry = nextfreeentry;
- pH->look_up2[subtree] = currententry = nextfreeentry;
-
- nextfreeentry -= 2;
- }
-
- code <<= (16 - (code_size - 8));
-
- for (l = code_size; l > 9; l--)
- {
- if ((code & 0x8000) == 0)
- currententry--;
-
- if (pH->tree[-currententry - 1] == 0)
- {
- pH->tree[-currententry - 1] = nextfreeentry;
-
- currententry = nextfreeentry;
-
- nextfreeentry -= 2;
- }
- else
- currententry = pH->tree[-currententry - 1];
-
- code <<= 1;
- }
-
- if ((code & 0x8000) == 0)
- currententry--;
-
- pH->tree[-currententry - 1] = i;
- }
-
- p++;
- }
- }
-
- // Verifies the quantization tables needed for this scan are available.
- void jpeg_decoder::check_quant_tables()
- {
- for (int i = 0; i < m_comps_in_scan; i++)
- if (m_quant[m_comp_quant[m_comp_list[i]]] == NULL)
- stop_decoding(JPGD_UNDEFINED_QUANT_TABLE);
- }
-
- // Verifies that all the Huffman tables needed for this scan are available.
- void jpeg_decoder::check_huff_tables()
- {
- for (int i = 0; i < m_comps_in_scan; i++)
- {
- if ((m_spectral_start == 0) && (m_huff_num[m_comp_dc_tab[m_comp_list[i]]] == NULL))
- stop_decoding(JPGD_UNDEFINED_HUFF_TABLE);
-
- if ((m_spectral_end > 0) && (m_huff_num[m_comp_ac_tab[m_comp_list[i]]] == NULL))
- stop_decoding(JPGD_UNDEFINED_HUFF_TABLE);
- }
-
- for (int i = 0; i < JPGD_MAX_HUFF_TABLES; i++)
- if (m_huff_num[i])
- {
- if (!m_pHuff_tabs[i])
- m_pHuff_tabs[i] = (huff_tables *)alloc(sizeof(huff_tables));
-
- make_huff_table(i, m_pHuff_tabs[i]);
- }
- }
-
- // Determines the component order inside each MCU.
- // Also calculates how many MCUs are on each row, etc.
- void jpeg_decoder::calc_mcu_block_order()
- {
- int component_num, component_id;
- int max_h_samp = 0, max_v_samp = 0;
-
- for (component_id = 0; component_id < m_comps_in_frame; component_id++)
- {
- if (m_comp_h_samp[component_id] > max_h_samp)
- max_h_samp = m_comp_h_samp[component_id];
-
- if (m_comp_v_samp[component_id] > max_v_samp)
- max_v_samp = m_comp_v_samp[component_id];
- }
-
- for (component_id = 0; component_id < m_comps_in_frame; component_id++)
- {
- m_comp_h_blocks[component_id] = ((((m_image_x_size * m_comp_h_samp[component_id]) + (max_h_samp - 1)) / max_h_samp) + 7) / 8;
- m_comp_v_blocks[component_id] = ((((m_image_y_size * m_comp_v_samp[component_id]) + (max_v_samp - 1)) / max_v_samp) + 7) / 8;
- }
-
- if (m_comps_in_scan == 1)
- {
- m_mcus_per_row = m_comp_h_blocks[m_comp_list[0]];
- m_mcus_per_col = m_comp_v_blocks[m_comp_list[0]];
- }
- else
- {
- m_mcus_per_row = (((m_image_x_size + 7) / 8) + (max_h_samp - 1)) / max_h_samp;
- m_mcus_per_col = (((m_image_y_size + 7) / 8) + (max_v_samp - 1)) / max_v_samp;
- }
-
- if (m_comps_in_scan == 1)
- {
- m_mcu_org[0] = m_comp_list[0];
-
- m_blocks_per_mcu = 1;
- }
- else
- {
- m_blocks_per_mcu = 0;
-
- for (component_num = 0; component_num < m_comps_in_scan; component_num++)
- {
- int num_blocks;
-
- component_id = m_comp_list[component_num];
-
- num_blocks = m_comp_h_samp[component_id] * m_comp_v_samp[component_id];
-
- while (num_blocks--)
- m_mcu_org[m_blocks_per_mcu++] = component_id;
- }
- }
- }
-
- // Starts a new scan.
- int jpeg_decoder::init_scan()
- {
- if (!locate_sos_marker())
- return JPGD_FALSE;
-
- calc_mcu_block_order();
-
- check_huff_tables();
-
- check_quant_tables();
-
- memset(m_last_dc_val, 0, m_comps_in_frame * sizeof(uint));
-
- m_eob_run = 0;
-
- if (m_restart_interval)
- {
- m_restarts_left = m_restart_interval;
- m_next_restart_num = 0;
- }
-
- fix_in_buffer();
-
- return JPGD_TRUE;
- }
-
- // Starts a frame. Determines whether the number of components and the sampling
- // factors are supported.
- void jpeg_decoder::init_frame()
- {
- int i;
-
- if (m_comps_in_frame == 1)
- {
- if ((m_comp_h_samp[0] != 1) || (m_comp_v_samp[0] != 1))
- stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS);
-
- m_scan_type = JPGD_GRAYSCALE;
- m_max_blocks_per_mcu = 1;
- m_max_mcu_x_size = 8;
- m_max_mcu_y_size = 8;
- }
- else if (m_comps_in_frame == 3)
- {
- if ( ((m_comp_h_samp[1] != 1) || (m_comp_v_samp[1] != 1)) ||
- ((m_comp_h_samp[2] != 1) || (m_comp_v_samp[2] != 1)) )
- stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS);
-
- if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 1))
- {
- m_scan_type = JPGD_YH1V1;
-
- m_max_blocks_per_mcu = 3;
- m_max_mcu_x_size = 8;
- m_max_mcu_y_size = 8;
- }
- else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 1))
- {
- m_scan_type = JPGD_YH2V1;
- m_max_blocks_per_mcu = 4;
- m_max_mcu_x_size = 16;
- m_max_mcu_y_size = 8;
- }
- else if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 2))
- {
- m_scan_type = JPGD_YH1V2;
- m_max_blocks_per_mcu = 4;
- m_max_mcu_x_size = 8;
- m_max_mcu_y_size = 16;
- }
- else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 2))
- {
- m_scan_type = JPGD_YH2V2;
- m_max_blocks_per_mcu = 6;
- m_max_mcu_x_size = 16;
- m_max_mcu_y_size = 16;
- }
- else
- stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS);
- }
- else
- stop_decoding(JPGD_UNSUPPORTED_COLORSPACE);
-
- m_max_mcus_per_row = (m_image_x_size + (m_max_mcu_x_size - 1)) / m_max_mcu_x_size;
- m_max_mcus_per_col = (m_image_y_size + (m_max_mcu_y_size - 1)) / m_max_mcu_y_size;
-
- // These values are for the *destination* pixels: after conversion.
- if (m_scan_type == JPGD_GRAYSCALE)
- m_dest_bytes_per_pixel = 1;
- else
- m_dest_bytes_per_pixel = 4;
-
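- // The output scan line is padded up to a multiple of 16 pixels; m_real_dest_bytes_per_scan_line below is the unpadded size.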
- m_dest_bytes_per_scan_line = ((m_image_x_size + 15) & 0xFFF0) * m_dest_bytes_per_pixel;
-
- m_real_dest_bytes_per_scan_line = (m_image_x_size * m_dest_bytes_per_pixel);
-
- // Initialize two scan line buffers.
- m_pScan_line_0 = (uint8 *)alloc(m_dest_bytes_per_scan_line, true);
- if ((m_scan_type == JPGD_YH1V2) || (m_scan_type == JPGD_YH2V2))
- m_pScan_line_1 = (uint8 *)alloc(m_dest_bytes_per_scan_line, true);
-
- m_max_blocks_per_row = m_max_mcus_per_row * m_max_blocks_per_mcu;
-
- // Should never happen
- if (m_max_blocks_per_row > JPGD_MAX_BLOCKS_PER_ROW)
- stop_decoding(JPGD_ASSERTION_ERROR);
-
- // Allocate the coefficient buffer, enough for one MCU
- m_pMCU_coefficients = (jpgd_block_t*)alloc(m_max_blocks_per_mcu * 64 * sizeof(jpgd_block_t));
-
- for (i = 0; i < m_max_blocks_per_mcu; i++)
- m_mcu_block_max_zag[i] = 64;
-
- m_expanded_blocks_per_component = m_comp_h_samp[0] * m_comp_v_samp[0];
- m_expanded_blocks_per_mcu = m_expanded_blocks_per_component * m_comps_in_frame;
- m_expanded_blocks_per_row = m_max_mcus_per_row * m_expanded_blocks_per_mcu;
- // Freq. domain chroma upsampling is only supported for H2V2 subsampling factor.
-// BEGIN EPIC MOD
-#if JPGD_SUPPORT_FREQ_DOMAIN_UPSAMPLING
- m_freq_domain_chroma_upsample = (m_expanded_blocks_per_mcu == 4*3);
-#else
- m_freq_domain_chroma_upsample = 0;
-#endif
-// END EPIC MOD
-
- if (m_freq_domain_chroma_upsample)
- m_pSample_buf = (uint8 *)alloc(m_expanded_blocks_per_row * 64);
- else
- m_pSample_buf = (uint8 *)alloc(m_max_blocks_per_row * 64);
-
- m_total_lines_left = m_image_y_size;
-
- m_mcu_lines_left = 0;
-
- create_look_ups();
- }
-
- // The coeff_buf series of methods originally stored the coefficients
- // into a "virtual" file which was located in EMS, XMS, or a disk file. A cache
- // was used to make this process more efficient. Now, we can store the entire
- // thing in RAM.
- jpeg_decoder::coeff_buf* jpeg_decoder::coeff_buf_open(int block_num_x, int block_num_y, int block_len_x, int block_len_y)
- {
- coeff_buf* cb = (coeff_buf*)alloc(sizeof(coeff_buf));
-
- cb->block_num_x = block_num_x;
- cb->block_num_y = block_num_y;
- cb->block_len_x = block_len_x;
- cb->block_len_y = block_len_y;
- cb->block_size = (block_len_x * block_len_y) * sizeof(jpgd_block_t);
- cb->pData = (uint8 *)alloc(cb->block_size * block_num_x * block_num_y, true);
- return cb;
- }
-
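- // Returns a pointer to the coefficient block at (block_x, block_y) inside a coeff_buf.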
- inline jpgd_block_t *jpeg_decoder::coeff_buf_getp(coeff_buf *cb, int block_x, int block_y)
- {
- JPGD_ASSERT((block_x < cb->block_num_x) && (block_y < cb->block_num_y));
- return (jpgd_block_t *)(cb->pData + block_x * cb->block_size + block_y * (cb->block_size * cb->block_num_x));
- }
-
- // The following methods decode the various types of m_blocks encountered
- // in progressively encoded images.
- void jpeg_decoder::decode_block_dc_first(jpeg_decoder *pD, int component_id, int block_x, int block_y)
- {
- int s, r;
- jpgd_block_t *p = pD->coeff_buf_getp(pD->m_dc_coeffs[component_id], block_x, block_y);
-
- if ((s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_dc_tab[component_id]])) != 0)
- {
- r = pD->get_bits_no_markers(s);
- s = HUFF_EXTEND(r, s);
- }
-
- pD->m_last_dc_val[component_id] = (s += pD->m_last_dc_val[component_id]);
-
- p[0] = static_cast<jpgd_block_t>(s << pD->m_successive_low);
- }
-
- void jpeg_decoder::decode_block_dc_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y)
- {
- if (pD->get_bits_no_markers(1))
- {
- jpgd_block_t *p = pD->coeff_buf_getp(pD->m_dc_coeffs[component_id], block_x, block_y);
-
- p[0] |= (1 << pD->m_successive_low);
- }
- }
-
- void jpeg_decoder::decode_block_ac_first(jpeg_decoder *pD, int component_id, int block_x, int block_y)
- {
- int k, s, r;
-
- if (pD->m_eob_run)
- {
- pD->m_eob_run--;
- return;
- }
-
- jpgd_block_t *p = pD->coeff_buf_getp(pD->m_ac_coeffs[component_id], block_x, block_y);
-
- for (k = pD->m_spectral_start; k <= pD->m_spectral_end; k++)
- {
- s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_ac_tab[component_id]]);
-
- r = s >> 4;
- s &= 15;
-
- if (s)
- {
- if ((k += r) > 63)
- pD->stop_decoding(JPGD_DECODE_ERROR);
-
- r = pD->get_bits_no_markers(s);
- s = HUFF_EXTEND(r, s);
-
- p[g_ZAG[k]] = static_cast<jpgd_block_t>(s << pD->m_successive_low);
- }
- else
- {
- if (r == 15)
- {
- if ((k += 15) > 63)
- pD->stop_decoding(JPGD_DECODE_ERROR);
- }
- else
- {
- pD->m_eob_run = 1 << r;
-
- if (r)
- pD->m_eob_run += pD->get_bits_no_markers(r);
-
- pD->m_eob_run--;
-
- break;
- }
- }
- }
- }
-
- void jpeg_decoder::decode_block_ac_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y)
- {
- int s, k, r;
- int p1 = 1 << pD->m_successive_low;
- int m1 = (-1) << pD->m_successive_low;
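- // p1/m1 are +1/-1 scaled to the bit position being refined in this successive-approximation pass.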
- jpgd_block_t *p = pD->coeff_buf_getp(pD->m_ac_coeffs[component_id], block_x, block_y);
-
- k = pD->m_spectral_start;
-
- if (pD->m_eob_run == 0)
- {
- for ( ; k <= pD->m_spectral_end; k++)
- {
- s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_ac_tab[component_id]]);
-
- r = s >> 4;
- s &= 15;
-
- if (s)
- {
- if (s != 1)
- pD->stop_decoding(JPGD_DECODE_ERROR);
-
- if (pD->get_bits_no_markers(1))
- s = p1;
- else
- s = m1;
- }
- else
- {
- if (r != 15)
- {
- pD->m_eob_run = 1 << r;
-
- if (r)
- pD->m_eob_run += pD->get_bits_no_markers(r);
-
- break;
- }
- }
-
- do
- {
- // BEGIN EPIC MOD
- JPGD_ASSERT(k < 64);
- // END EPIC MOD
-
- jpgd_block_t *this_coef = p + g_ZAG[k];
-
- if (*this_coef != 0)
- {
- if (pD->get_bits_no_markers(1))
- {
- if ((*this_coef & p1) == 0)
- {
- if (*this_coef >= 0)
- *this_coef = static_cast<jpgd_block_t>(*this_coef + p1);
- else
- *this_coef = static_cast<jpgd_block_t>(*this_coef + m1);
- }
- }
- }
- else
- {
- if (--r < 0)
- break;
- }
-
- k++;
-
- } while (k <= pD->m_spectral_end);
-
- if ((s) && (k < 64))
- {
- p[g_ZAG[k]] = static_cast<jpgd_block_t>(s);
- }
- }
- }
-
- if (pD->m_eob_run > 0)
- {
- for ( ; k <= pD->m_spectral_end; k++)
- {
- // BEGIN EPIC MOD
- JPGD_ASSERT(k < 64);
- // END EPIC MOD
-
- jpgd_block_t *this_coef = p + g_ZAG[k];
-
- if (*this_coef != 0)
- {
- if (pD->get_bits_no_markers(1))
- {
- if ((*this_coef & p1) == 0)
- {
- if (*this_coef >= 0)
- *this_coef = static_cast<jpgd_block_t>(*this_coef + p1);
- else
- *this_coef = static_cast<jpgd_block_t>(*this_coef + m1);
- }
- }
- }
- }
-
- pD->m_eob_run--;
- }
- }
-
- // Decode a scan in a progressively encoded image.
- void jpeg_decoder::decode_scan(pDecode_block_func decode_block_func)
- {
- int mcu_row, mcu_col, mcu_block;
- int block_x_mcu[JPGD_MAX_COMPONENTS], m_block_y_mcu[JPGD_MAX_COMPONENTS];
-
- memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu));
-
- for (mcu_col = 0; mcu_col < m_mcus_per_col; mcu_col++)
- {
- int component_num, component_id;
-
- memset(block_x_mcu, 0, sizeof(block_x_mcu));
-
- for (mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++)
- {
- int block_x_mcu_ofs = 0, block_y_mcu_ofs = 0;
-
- if ((m_restart_interval) && (m_restarts_left == 0))
- process_restart();
-
- for (mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++)
- {
- component_id = m_mcu_org[mcu_block];
-
- decode_block_func(this, component_id, block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs);
-
- if (m_comps_in_scan == 1)
- block_x_mcu[component_id]++;
- else
- {
- if (++block_x_mcu_ofs == m_comp_h_samp[component_id])
- {
- block_x_mcu_ofs = 0;
-
- if (++block_y_mcu_ofs == m_comp_v_samp[component_id])
- {
- block_y_mcu_ofs = 0;
- block_x_mcu[component_id] += m_comp_h_samp[component_id];
- }
- }
- }
- }
-
- m_restarts_left--;
- }
-
- if (m_comps_in_scan == 1)
- m_block_y_mcu[m_comp_list[0]]++;
- else
- {
- for (component_num = 0; component_num < m_comps_in_scan; component_num++)
- {
- component_id = m_comp_list[component_num];
- m_block_y_mcu[component_id] += m_comp_v_samp[component_id];
- }
- }
- }
- }
-
- // Decode a progressively encoded image.
- void jpeg_decoder::init_progressive()
- {
- int i;
-
- if (m_comps_in_frame == 4)
- stop_decoding(JPGD_UNSUPPORTED_COLORSPACE);
-
- // Allocate the coefficient buffers.
- for (i = 0; i < m_comps_in_frame; i++)
- {
- m_dc_coeffs[i] = coeff_buf_open(m_max_mcus_per_row * m_comp_h_samp[i], m_max_mcus_per_col * m_comp_v_samp[i], 1, 1);
- m_ac_coeffs[i] = coeff_buf_open(m_max_mcus_per_row * m_comp_h_samp[i], m_max_mcus_per_col * m_comp_v_samp[i], 8, 8);
- }
-
- for ( ; ; )
- {
- int dc_only_scan, refinement_scan;
- pDecode_block_func decode_block_func;
-
- if (!init_scan())
- break;
-
- dc_only_scan = (m_spectral_start == 0);
- refinement_scan = (m_successive_high != 0);
-
- if ((m_spectral_start > m_spectral_end) || (m_spectral_end > 63))
- stop_decoding(JPGD_BAD_SOS_SPECTRAL);
-
- if (dc_only_scan)
- {
- if (m_spectral_end)
- stop_decoding(JPGD_BAD_SOS_SPECTRAL);
- }
- else if (m_comps_in_scan != 1) /* AC scans can only contain one component */
- stop_decoding(JPGD_BAD_SOS_SPECTRAL);
-
- if ((refinement_scan) && (m_successive_low != m_successive_high - 1))
- stop_decoding(JPGD_BAD_SOS_SUCCESSIVE);
-
- if (dc_only_scan)
- {
- if (refinement_scan)
- decode_block_func = decode_block_dc_refine;
- else
- decode_block_func = decode_block_dc_first;
- }
- else
- {
- if (refinement_scan)
- decode_block_func = decode_block_ac_refine;
- else
- decode_block_func = decode_block_ac_first;
- }
-
- decode_scan(decode_block_func);
-
- m_bits_left = 16;
- get_bits(16);
- get_bits(16);
- }
-
- m_comps_in_scan = m_comps_in_frame;
-
- for (i = 0; i < m_comps_in_frame; i++)
- m_comp_list[i] = i;
-
- calc_mcu_block_order();
- }
-
- void jpeg_decoder::init_sequential()
- {
- if (!init_scan())
- stop_decoding(JPGD_UNEXPECTED_MARKER);
- }
-
- void jpeg_decoder::decode_start()
- {
- init_frame();
-
- if (m_progressive_flag)
- init_progressive();
- else
- init_sequential();
- }
-
- void jpeg_decoder::decode_init(jpeg_decoder_stream *pStream)
- {
- init(pStream);
- locate_sof_marker();
- }
-
- jpeg_decoder::jpeg_decoder(jpeg_decoder_stream *pStream)
- {
- if (setjmp(m_jmp_state))
- return;
- decode_init(pStream);
- }
-
- int jpeg_decoder::begin_decoding()
- {
- if (m_ready_flag)
- return JPGD_SUCCESS;
-
- if (m_error_code)
- return JPGD_FAILED;
-
- if (setjmp(m_jmp_state))
- return JPGD_FAILED;
-
- decode_start();
-
- m_ready_flag = true;
-
- return JPGD_SUCCESS;
- }
-
- jpeg_decoder::~jpeg_decoder()
- {
- free_all_blocks();
- }
-
- jpeg_decoder_file_stream::jpeg_decoder_file_stream()
- {
- m_pFile = NULL;
- m_eof_flag = false;
- m_error_flag = false;
- }
-
- void jpeg_decoder_file_stream::close()
- {
- if (m_pFile)
- {
- fclose(m_pFile);
- m_pFile = NULL;
- }
-
- m_eof_flag = false;
- m_error_flag = false;
- }
-
- jpeg_decoder_file_stream::~jpeg_decoder_file_stream()
- {
- close();
- }
-
- bool jpeg_decoder_file_stream::open(const char *Pfilename)
- {
- close();
-
- m_eof_flag = false;
- m_error_flag = false;
-
-#if defined(_MSC_VER)
- m_pFile = NULL;
- fopen_s(&m_pFile, Pfilename, "rb");
-#else
- m_pFile = fopen(Pfilename, "rb");
-#endif
- return m_pFile != NULL;
- }
-
- int jpeg_decoder_file_stream::read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag)
- {
- if (!m_pFile)
- return -1;
-
- if (m_eof_flag)
- {
- *pEOF_flag = true;
- return 0;
- }
-
- if (m_error_flag)
- return -1;
-
- int bytes_read = static_cast<int>(fread(pBuf, 1, max_bytes_to_read, m_pFile));
- if (bytes_read < max_bytes_to_read)
- {
- if (ferror(m_pFile))
- {
- m_error_flag = true;
- return -1;
- }
-
- m_eof_flag = true;
- *pEOF_flag = true;
- }
-
- return bytes_read;
- }
-
- bool jpeg_decoder_mem_stream::open(const uint8 *pSrc_data, uint size)
- {
- close();
- m_pSrc_data = pSrc_data;
- m_ofs = 0;
- m_size = size;
- return true;
- }
-
- int jpeg_decoder_mem_stream::read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag)
- {
- *pEOF_flag = false;
-
- if (!m_pSrc_data)
- return -1;
-
- uint bytes_remaining = m_size - m_ofs;
- if ((uint)max_bytes_to_read > bytes_remaining)
- {
- max_bytes_to_read = bytes_remaining;
- *pEOF_flag = true;
- }
-
- memcpy(pBuf, m_pSrc_data + m_ofs, max_bytes_to_read);
- m_ofs += max_bytes_to_read;
-
- return max_bytes_to_read;
- }
-
- unsigned char *decompress_jpeg_image_from_stream(jpeg_decoder_stream *pStream, int *width, int *height, int *actual_comps, int req_comps)
- {
- if (!actual_comps)
- return NULL;
- *actual_comps = 0;
-
- if ((!pStream) || (!width) || (!height) || (!req_comps))
- return NULL;
-
- if ((req_comps != 1) && (req_comps != 3) && (req_comps != 4))
- return NULL;
-
- jpeg_decoder decoder(pStream);
- if (decoder.get_error_code() != JPGD_SUCCESS)
- return NULL;
-
- const int image_width = decoder.get_width(), image_height = decoder.get_height();
- *width = image_width;
- *height = image_height;
- *actual_comps = decoder.get_num_components();
-
- if (decoder.begin_decoding() != JPGD_SUCCESS)
- return NULL;
-
- const int dst_bpl = image_width * req_comps;
-
- uint8 *pImage_data = (uint8*)jpgd_malloc(dst_bpl * image_height);
- if (!pImage_data)
- return NULL;
-
- for (int y = 0; y < image_height; y++)
- {
- const uint8* pScan_line = 0;
- uint scan_line_len;
- if (decoder.decode((const void**)&pScan_line, &scan_line_len) != JPGD_SUCCESS)
- {
- jpgd_free(pImage_data);
- return NULL;
- }
-
- uint8 *pDst = pImage_data + y * dst_bpl;
-
- if (((req_comps == 4) && (decoder.get_num_components() == 3)) ||
- ((req_comps == 1) && (decoder.get_num_components() == 1)))
- {
- memcpy(pDst, pScan_line, dst_bpl);
- }
- else if (decoder.get_num_components() == 1)
- {
- if (req_comps == 3)
- {
- for (int x = 0; x < image_width; x++)
- {
- uint8 luma = pScan_line[x];
- pDst[0] = luma;
- pDst[1] = luma;
- pDst[2] = luma;
- pDst += 3;
- }
- }
- else
- {
- for (int x = 0; x < image_width; x++)
- {
- uint8 luma = pScan_line[x];
- pDst[0] = luma;
- pDst[1] = luma;
- pDst[2] = luma;
- pDst[3] = 255;
- pDst += 4;
- }
- }
- }
- else if (decoder.get_num_components() == 3)
- {
- if (req_comps == 1)
- {
- const int YR = 19595, YG = 38470, YB = 7471;
- for (int x = 0; x < image_width; x++)
- {
- int r = pScan_line[x*4+0];
- int g = pScan_line[x*4+1];
- int b = pScan_line[x*4+2];
-     *pDst++ = static_cast<uint8>((r * YR + g * YG + b * YB + 32768) >> 16);
- }
- }
- else
- {
- for (int x = 0; x < image_width; x++)
- {
- pDst[0] = pScan_line[x*4+0];
- pDst[1] = pScan_line[x*4+1];
- pDst[2] = pScan_line[x*4+2];
- pDst += 3;
- }
- }
- }
- }
-
- return pImage_data;
- }
-
-// BEGIN EPIC MOD
- unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps, int format)
- {
- jpg_format = (ERGBFormatJPG)format;
-// END EPIC MOD
- jpgd::jpeg_decoder_mem_stream mem_stream(pSrc_data, src_data_size);
- return decompress_jpeg_image_from_stream(&mem_stream, width, height, actual_comps, req_comps);
- }
-
- unsigned char *decompress_jpeg_image_from_file(const char *pSrc_filename, int *width, int *height, int *actual_comps, int req_comps)
- {
- jpgd::jpeg_decoder_file_stream file_stream;
- if (!file_stream.open(pSrc_filename))
- return NULL;
- return decompress_jpeg_image_from_stream(&file_stream, width, height, actual_comps, req_comps);
- }
-
-} // namespace jpgd
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/distributions/sdist.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/distributions/sdist.py
deleted file mode 100644
index 4c25647930c6557d10e8a3ee92b68cfe3a07f7d7..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/distributions/sdist.py
+++ /dev/null
@@ -1,150 +0,0 @@
-import logging
-from typing import Iterable, Set, Tuple
-
-from pip._internal.build_env import BuildEnvironment
-from pip._internal.distributions.base import AbstractDistribution
-from pip._internal.exceptions import InstallationError
-from pip._internal.index.package_finder import PackageFinder
-from pip._internal.metadata import BaseDistribution
-from pip._internal.utils.subprocess import runner_with_spinner_message
-
-logger = logging.getLogger(__name__)
-
-
-class SourceDistribution(AbstractDistribution):
- """Represents a source distribution.
-
- The preparation step for these needs metadata for the packages to be
- generated, either using PEP 517 or using the legacy `setup.py egg_info`.
- """
-
- def get_metadata_distribution(self) -> BaseDistribution:
- return self.req.get_dist()
-
- def prepare_distribution_metadata(
- self,
- finder: PackageFinder,
- build_isolation: bool,
- check_build_deps: bool,
- ) -> None:
- # Load pyproject.toml, to determine whether PEP 517 is to be used
- self.req.load_pyproject_toml()
-
- # Set up the build isolation, if this requirement should be isolated
- should_isolate = self.req.use_pep517 and build_isolation
- if should_isolate:
-            # Set up an isolated environment and install the build backend static
- # requirements in it.
- self._prepare_build_backend(finder)
- # Check that if the requirement is editable, it either supports PEP 660 or
- # has a setup.py or a setup.cfg. This cannot be done earlier because we need
- # to setup the build backend to verify it supports build_editable, nor can
-            # to set up the build backend to verify it supports build_editable, nor can
- # needlessly. Doing it here also works around setuptools generating
- # UNKNOWN.egg-info when running get_requires_for_build_wheel on a directory
- # without setup.py nor setup.cfg.
- self.req.isolated_editable_sanity_check()
- # Install the dynamic build requirements.
- self._install_build_reqs(finder)
- # Check if the current environment provides build dependencies
- should_check_deps = self.req.use_pep517 and check_build_deps
- if should_check_deps:
- pyproject_requires = self.req.pyproject_requires
- assert pyproject_requires is not None
- conflicting, missing = self.req.build_env.check_requirements(
- pyproject_requires
- )
- if conflicting:
- self._raise_conflicts("the backend dependencies", conflicting)
- if missing:
- self._raise_missing_reqs(missing)
- self.req.prepare_metadata()
-
- def _prepare_build_backend(self, finder: PackageFinder) -> None:
- # Isolate in a BuildEnvironment and install the build-time
- # requirements.
- pyproject_requires = self.req.pyproject_requires
- assert pyproject_requires is not None
-
- self.req.build_env = BuildEnvironment()
- self.req.build_env.install_requirements(
- finder, pyproject_requires, "overlay", kind="build dependencies"
- )
- conflicting, missing = self.req.build_env.check_requirements(
- self.req.requirements_to_check
- )
- if conflicting:
- self._raise_conflicts("PEP 517/518 supported requirements", conflicting)
- if missing:
- logger.warning(
- "Missing build requirements in pyproject.toml for %s.",
- self.req,
- )
- logger.warning(
- "The project does not specify a build backend, and "
- "pip cannot fall back to setuptools without %s.",
- " and ".join(map(repr, sorted(missing))),
- )
-
- def _get_build_requires_wheel(self) -> Iterable[str]:
- with self.req.build_env:
- runner = runner_with_spinner_message("Getting requirements to build wheel")
- backend = self.req.pep517_backend
- assert backend is not None
- with backend.subprocess_runner(runner):
- return backend.get_requires_for_build_wheel()
-
- def _get_build_requires_editable(self) -> Iterable[str]:
- with self.req.build_env:
- runner = runner_with_spinner_message(
- "Getting requirements to build editable"
- )
- backend = self.req.pep517_backend
- assert backend is not None
- with backend.subprocess_runner(runner):
- return backend.get_requires_for_build_editable()
-
- def _install_build_reqs(self, finder: PackageFinder) -> None:
- # Install any extra build dependencies that the backend requests.
- # This must be done in a second pass, as the pyproject.toml
- # dependencies must be installed before we can call the backend.
- if (
- self.req.editable
- and self.req.permit_editable_wheels
- and self.req.supports_pyproject_editable()
- ):
- build_reqs = self._get_build_requires_editable()
- else:
- build_reqs = self._get_build_requires_wheel()
- conflicting, missing = self.req.build_env.check_requirements(build_reqs)
- if conflicting:
- self._raise_conflicts("the backend dependencies", conflicting)
- self.req.build_env.install_requirements(
- finder, missing, "normal", kind="backend dependencies"
- )
-
- def _raise_conflicts(
- self, conflicting_with: str, conflicting_reqs: Set[Tuple[str, str]]
- ) -> None:
- format_string = (
- "Some build dependencies for {requirement} "
- "conflict with {conflicting_with}: {description}."
- )
- error_message = format_string.format(
- requirement=self.req,
- conflicting_with=conflicting_with,
- description=", ".join(
- f"{installed} is incompatible with {wanted}"
- for installed, wanted in sorted(conflicting_reqs)
- ),
- )
- raise InstallationError(error_message)
-
- def _raise_missing_reqs(self, missing: Set[str]) -> None:
- format_string = (
- "Some build dependencies for {requirement} are missing: {missing}."
- )
- error_message = format_string.format(
- requirement=self.req, missing=", ".join(map(repr, sorted(missing)))
- )
- raise InstallationError(error_message)
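For reference, a minimal standalone sketch of the error message that `_raise_conflicts` above formats, using hypothetical requirement names rather than pip's real call path:

```python
# Standalone sketch of the message construction in _raise_conflicts above;
# the requirement names below are hypothetical examples.
conflicting_reqs = {
    ("setuptools 58.0.0", "setuptools>=61"),
    ("wheel 0.30.0", "wheel>=0.36"),
}
format_string = (
    "Some build dependencies for {requirement} "
    "conflict with {conflicting_with}: {description}."
)
print(format_string.format(
    requirement="example-pkg==1.0",
    conflicting_with="PEP 517/518 supported requirements",
    description=", ".join(
        f"{installed} is incompatible with {wanted}"
        for installed, wanted in sorted(conflicting_reqs)
    ),
))
```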
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/resolvelib/providers.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/resolvelib/providers.py
deleted file mode 100644
index 7d0a9c22a4656951910a9fbb70af59a0706cadde..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/resolvelib/providers.py
+++ /dev/null
@@ -1,133 +0,0 @@
-class AbstractProvider(object):
- """Delegate class to provide requirement interface for the resolver."""
-
- def identify(self, requirement_or_candidate):
- """Given a requirement, return an identifier for it.
-
- This is used to identify a requirement, e.g. whether two requirements
- should have their specifier parts merged.
- """
- raise NotImplementedError
-
- def get_preference(
- self,
- identifier,
- resolutions,
- candidates,
- information,
- backtrack_causes,
- ):
- """Produce a sort key for given requirement based on preference.
-
- The preference is defined as "I think this requirement should be
- resolved first". The lower the return value is, the more preferred
- this group of arguments is.
-
- :param identifier: An identifier as returned by ``identify()``. This
- identifies the dependency matches of which should be returned.
- :param resolutions: Mapping of candidates currently pinned by the
- resolver. Each key is an identifier, and the value a candidate.
- The candidate may conflict with requirements from ``information``.
- :param candidates: Mapping of each dependency's possible candidates.
- Each value is an iterator of candidates.
- :param information: Mapping of requirement information of each package.
- Each value is an iterator of *requirement information*.
- :param backtrack_causes: Sequence of requirement information that were
- the requirements that caused the resolver to most recently backtrack.
-
- A *requirement information* instance is a named tuple with two members:
-
- * ``requirement`` specifies a requirement contributing to the current
- list of candidates.
-        * ``parent`` specifies the candidate that provides (depends on) the
- requirement, or ``None`` to indicate a root requirement.
-
-        The preference could depend on various issues, including (not
- necessarily in this order):
-
- * Is this package pinned in the current resolution result?
- * How relaxed is the requirement? Stricter ones should probably be
- worked on first? (I don't know, actually.)
- * How many possibilities are there to satisfy this requirement? Those
- with few left should likely be worked on first, I guess?
- * Are there any known conflicts for this requirement? We should
- probably work on those with the most known conflicts.
-
- A sortable value should be returned (this will be used as the ``key``
- parameter of the built-in sorting function). The smaller the value is,
- the more preferred this requirement is (i.e. the sorting function
- is called with ``reverse=False``).
- """
- raise NotImplementedError
-
- def find_matches(self, identifier, requirements, incompatibilities):
- """Find all possible candidates that satisfy given constraints.
-
- :param identifier: An identifier as returned by ``identify()``. This
- identifies the dependency matches of which should be returned.
- :param requirements: A mapping of requirements that all returned
- candidates must satisfy. Each key is an identifier, and the value
- an iterator of requirements for that dependency.
- :param incompatibilities: A mapping of known incompatibilities of
- each dependency. Each key is an identifier, and the value an
- iterator of incompatibilities known to the resolver. All
- incompatibilities *must* be excluded from the return value.
-
- This should try to get candidates based on the requirements' types.
- For VCS, local, and archive requirements, the one-and-only match is
- returned, and for a "named" requirement, the index(es) should be
- consulted to find concrete candidates for this requirement.
-
- The return value should produce candidates ordered by preference; the
- most preferred candidate should come first. The return type may be one
- of the following:
-
- * A callable that returns an iterator that yields candidates.
-        * A collection of candidates.
- * An iterable of candidates. This will be consumed immediately into a
- list of candidates.
- """
- raise NotImplementedError
-
- def is_satisfied_by(self, requirement, candidate):
- """Whether the given requirement can be satisfied by a candidate.
-
-        The candidate is guaranteed to have been generated from the
- requirement.
-
- A boolean should be returned to indicate whether ``candidate`` is a
- viable solution to the requirement.
- """
- raise NotImplementedError
-
- def get_dependencies(self, candidate):
- """Get dependencies of a candidate.
-
- This should return a collection of requirements that `candidate`
- specifies as its dependencies.
- """
- raise NotImplementedError
-
-
-class AbstractResolver(object):
- """The thing that performs the actual resolution work."""
-
- base_exception = Exception
-
- def __init__(self, provider, reporter):
- self.provider = provider
- self.reporter = reporter
-
- def resolve(self, requirements, **kwargs):
- """Take a collection of constraints, spit out the resolution result.
-
- This returns a representation of the final resolution state, with one
-        guaranteed attribute ``mapping`` that contains resolved candidates as
- values. The keys are their respective identifiers.
-
- :param requirements: A collection of constraints.
- :param kwargs: Additional keyword arguments that subclasses may accept.
-
- :raises: ``self.base_exception`` or its subclass.
- """
- raise NotImplementedError
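To make the interface above concrete, here is a minimal sketch of a provider for a toy dependency graph in which requirements and candidates are plain strings; it only illustrates the method contracts documented above and is not wired into pip.

```python
# Toy provider sketch: requirements/candidates are plain strings, the "index"
# is a hard-coded dict. Only the AbstractProvider contract is illustrated.
TOY_INDEX = {
    "app": ["app-1.0"],
    "lib": ["lib-2.0", "lib-1.0"],  # already ordered by preference
}
TOY_DEPS = {"app-1.0": ["lib"], "lib-2.0": [], "lib-1.0": []}


class ToyProvider:
    def identify(self, requirement_or_candidate):
        # "lib-2.0" -> "lib", "lib" -> "lib"
        return requirement_or_candidate.split("-")[0]

    def get_preference(self, identifier, resolutions, candidates,
                       information, backtrack_causes):
        # Smaller is more preferred: work on the identifier with the
        # fewest remaining candidates first (fail fast).
        return sum(1 for _ in candidates[identifier])

    def find_matches(self, identifier, requirements, incompatibilities):
        bad = {c for c in incompatibilities[identifier]}
        return [c for c in TOY_INDEX[identifier] if c not in bad]

    def is_satisfied_by(self, requirement, candidate):
        return candidate.startswith(requirement + "-")

    def get_dependencies(self, candidate):
        return TOY_DEPS[candidate]

# With resolvelib installed this could be driven by its resolver, roughly:
#   from resolvelib import Resolver, BaseReporter
#   result = Resolver(ToyProvider(), BaseReporter()).resolve(["app"])
#   result.mapping -> {"app": "app-1.0", "lib": "lib-2.0"}
```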
diff --git a/spaces/Realcat/image-matching-webui/third_party/DKM/dkm/datasets/megadepth.py b/spaces/Realcat/image-matching-webui/third_party/DKM/dkm/datasets/megadepth.py
deleted file mode 100644
index c580607e910ce1926b7711b5473aa82b20865369..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/DKM/dkm/datasets/megadepth.py
+++ /dev/null
@@ -1,177 +0,0 @@
-import os
-import random
-from PIL import Image
-import h5py
-import numpy as np
-import torch
-from torch.utils.data import Dataset, DataLoader, ConcatDataset
-
-from dkm.utils import get_depth_tuple_transform_ops, get_tuple_transform_ops
-import torchvision.transforms.functional as tvf
-from dkm.utils.transforms import GeometricSequential
-import kornia.augmentation as K
-
-
-class MegadepthScene:
- def __init__(
- self,
- data_root,
- scene_info,
- ht=384,
- wt=512,
- min_overlap=0.0,
- shake_t=0,
- rot_prob=0.0,
- normalize=True,
- ) -> None:
- self.data_root = data_root
- self.image_paths = scene_info["image_paths"]
- self.depth_paths = scene_info["depth_paths"]
- self.intrinsics = scene_info["intrinsics"]
- self.poses = scene_info["poses"]
- self.pairs = scene_info["pairs"]
- self.overlaps = scene_info["overlaps"]
- threshold = self.overlaps > min_overlap
- self.pairs = self.pairs[threshold]
- self.overlaps = self.overlaps[threshold]
- if len(self.pairs) > 100000:
- pairinds = np.random.choice(
- np.arange(0, len(self.pairs)), 100000, replace=False
- )
- self.pairs = self.pairs[pairinds]
- self.overlaps = self.overlaps[pairinds]
- # counts, bins = np.histogram(self.overlaps,20)
- # print(counts)
- self.im_transform_ops = get_tuple_transform_ops(
- resize=(ht, wt), normalize=normalize
- )
- self.depth_transform_ops = get_depth_tuple_transform_ops(
- resize=(ht, wt), normalize=False
- )
- self.wt, self.ht = wt, ht
- self.shake_t = shake_t
- self.H_generator = GeometricSequential(K.RandomAffine(degrees=90, p=rot_prob))
-
- def load_im(self, im_ref, crop=None):
- im = Image.open(im_ref)
- return im
-
- def load_depth(self, depth_ref, crop=None):
- depth = np.array(h5py.File(depth_ref, "r")["depth"])
- return torch.from_numpy(depth)
-
- def __len__(self):
- return len(self.pairs)
-
- def scale_intrinsic(self, K, wi, hi):
- sx, sy = self.wt / wi, self.ht / hi
- sK = torch.tensor([[sx, 0, 0], [0, sy, 0], [0, 0, 1]])
- return sK @ K
-
- def rand_shake(self, *things):
- t = np.random.choice(range(-self.shake_t, self.shake_t + 1), size=2)
- return [
- tvf.affine(thing, angle=0.0, translate=list(t), scale=1.0, shear=[0.0, 0.0])
- for thing in things
- ], t
-
- def __getitem__(self, pair_idx):
- # read intrinsics of original size
- idx1, idx2 = self.pairs[pair_idx]
- K1 = torch.tensor(self.intrinsics[idx1].copy(), dtype=torch.float).reshape(3, 3)
- K2 = torch.tensor(self.intrinsics[idx2].copy(), dtype=torch.float).reshape(3, 3)
-
- # read and compute relative poses
- T1 = self.poses[idx1]
- T2 = self.poses[idx2]
- T_1to2 = torch.tensor(np.matmul(T2, np.linalg.inv(T1)), dtype=torch.float)[
- :4, :4
- ] # (4, 4)
-
- # Load positive pair data
- im1, im2 = self.image_paths[idx1], self.image_paths[idx2]
- depth1, depth2 = self.depth_paths[idx1], self.depth_paths[idx2]
- im_src_ref = os.path.join(self.data_root, im1)
- im_pos_ref = os.path.join(self.data_root, im2)
- depth_src_ref = os.path.join(self.data_root, depth1)
- depth_pos_ref = os.path.join(self.data_root, depth2)
- # return torch.randn((1000,1000))
- im_src = self.load_im(im_src_ref)
- im_pos = self.load_im(im_pos_ref)
- depth_src = self.load_depth(depth_src_ref)
- depth_pos = self.load_depth(depth_pos_ref)
-
- # Recompute camera intrinsic matrix due to the resize
- K1 = self.scale_intrinsic(K1, im_src.width, im_src.height)
- K2 = self.scale_intrinsic(K2, im_pos.width, im_pos.height)
- # Process images
- im_src, im_pos = self.im_transform_ops((im_src, im_pos))
- depth_src, depth_pos = self.depth_transform_ops(
- (depth_src[None, None], depth_pos[None, None])
- )
- [im_src, im_pos, depth_src, depth_pos], t = self.rand_shake(
- im_src, im_pos, depth_src, depth_pos
- )
- im_src, Hq = self.H_generator(im_src[None])
- depth_src = self.H_generator.apply_transform(depth_src, Hq)
- K1[:2, 2] += t
- K2[:2, 2] += t
- K1 = Hq[0] @ K1
- data_dict = {
- "query": im_src[0],
- "query_identifier": self.image_paths[idx1].split("/")[-1].split(".jpg")[0],
- "support": im_pos,
- "support_identifier": self.image_paths[idx2]
- .split("/")[-1]
- .split(".jpg")[0],
- "query_depth": depth_src[0, 0],
- "support_depth": depth_pos[0, 0],
- "K1": K1,
- "K2": K2,
- "T_1to2": T_1to2,
- }
- return data_dict
-
-
-class MegadepthBuilder:
- def __init__(self, data_root="data/megadepth") -> None:
- self.data_root = data_root
- self.scene_info_root = os.path.join(data_root, "prep_scene_info")
- self.all_scenes = os.listdir(self.scene_info_root)
- self.test_scenes = ["0017.npy", "0004.npy", "0048.npy", "0013.npy"]
- self.test_scenes_loftr = ["0015.npy", "0022.npy"]
-
- def build_scenes(self, split="train", min_overlap=0.0, **kwargs):
- if split == "train":
- scene_names = set(self.all_scenes) - set(self.test_scenes)
- elif split == "train_loftr":
- scene_names = set(self.all_scenes) - set(self.test_scenes_loftr)
- elif split == "test":
- scene_names = self.test_scenes
- elif split == "test_loftr":
- scene_names = self.test_scenes_loftr
- else:
- raise ValueError(f"Split {split} not available")
- scenes = []
- for scene_name in scene_names:
- scene_info = np.load(
- os.path.join(self.scene_info_root, scene_name), allow_pickle=True
- ).item()
- scenes.append(
- MegadepthScene(
- self.data_root, scene_info, min_overlap=min_overlap, **kwargs
- )
- )
- return scenes
-
- def weight_scenes(self, concat_dataset, alpha=0.5):
- ns = []
- for d in concat_dataset.datasets:
- ns.append(len(d))
- ws = torch.cat([torch.ones(n) / n**alpha for n in ns])
- return ws
-
-
-if __name__ == "__main__":
- mega_test = ConcatDataset(MegadepthBuilder().build_scenes(split="train"))
- mega_test[0]
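As a quick numeric check of `scale_intrinsic` above: resizing an image from (wi, hi) to (wt, ht) scales fx/cx by wt/wi and fy/cy by ht/hi. The intrinsics below are made-up values.

```python
# Worked example of the intrinsic rescaling in MegadepthScene.scale_intrinsic.
import torch

K = torch.tensor([[1000.0, 0.0, 800.0],
                  [0.0, 1000.0, 600.0],
                  [0.0, 0.0, 1.0]])
wi, hi, wt, ht = 1600, 1200, 512, 384
sx, sy = wt / wi, ht / hi
sK = torch.tensor([[sx, 0.0, 0.0], [0.0, sy, 0.0], [0.0, 0.0, 1.0]])
print(sK @ K)
# tensor([[320.,   0., 256.],
#         [  0., 320., 192.],
#         [  0.,   0.,   1.]])
```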
diff --git a/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/train.py b/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/train.py
deleted file mode 100644
index 2572e3a726d16ffef1bb734feeba0a7a19f4d354..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/train.py
+++ /dev/null
@@ -1,84 +0,0 @@
-import torch
-from tqdm import tqdm
-from DeDoDe.utils import to_cuda
-
-
-def train_step(train_batch, model, objective, optimizer, grad_scaler=None, **kwargs):
- optimizer.zero_grad()
- out = model(train_batch)
- l = objective(out, train_batch)
- if grad_scaler is not None:
- grad_scaler.scale(l).backward()
- grad_scaler.unscale_(optimizer)
- torch.nn.utils.clip_grad_norm_(model.parameters(), 0.01)
- grad_scaler.step(optimizer)
- grad_scaler.update()
- else:
- l.backward()
- optimizer.step()
- return {"train_out": out, "train_loss": l.item()}
-
-
-def train_k_steps(
- n_0,
- k,
- dataloader,
- model,
- objective,
- optimizer,
- lr_scheduler,
- grad_scaler=None,
- progress_bar=True,
-):
- for n in tqdm(range(n_0, n_0 + k), disable=not progress_bar, mininterval=10.0):
- batch = next(dataloader)
- model.train(True)
- batch = to_cuda(batch)
- train_step(
- train_batch=batch,
- model=model,
- objective=objective,
- optimizer=optimizer,
- lr_scheduler=lr_scheduler,
- n=n,
- grad_scaler=grad_scaler,
- )
- lr_scheduler.step()
-
-
-def train_epoch(
- dataloader=None,
- model=None,
- objective=None,
- optimizer=None,
- lr_scheduler=None,
- epoch=None,
-):
- model.train(True)
- print(f"At epoch {epoch}")
- for batch in tqdm(dataloader, mininterval=5.0):
- batch = to_cuda(batch)
- train_step(
- train_batch=batch, model=model, objective=objective, optimizer=optimizer
- )
- lr_scheduler.step()
- return {
- "model": model,
- "optimizer": optimizer,
- "lr_scheduler": lr_scheduler,
- "epoch": epoch,
- }
-
-
-def train_k_epochs(
- start_epoch, end_epoch, dataloader, model, objective, optimizer, lr_scheduler
-):
- for epoch in range(start_epoch, end_epoch + 1):
- train_epoch(
- dataloader=dataloader,
- model=model,
- objective=objective,
- optimizer=optimizer,
- lr_scheduler=lr_scheduler,
- epoch=epoch,
- )
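The `train_step` above follows the usual PyTorch AMP pattern (scale, unscale, clip, step, update). Below is a self-contained sketch of that pattern with a dummy model, treating the GradScaler as optional exactly as the code does.

```python
# Self-contained sketch of the optimizer-step pattern used in train_step above,
# with a dummy model and loss; the scaler branch mirrors the AMP path.
import torch

model = torch.nn.Linear(8, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
grad_scaler = torch.cuda.amp.GradScaler() if torch.cuda.is_available() else None

x = torch.randn(4, 8)
loss = model(x).pow(2).mean()

optimizer.zero_grad()
if grad_scaler is not None:
    grad_scaler.scale(loss).backward()
    grad_scaler.unscale_(optimizer)        # clip on true (unscaled) gradients
    torch.nn.utils.clip_grad_norm_(model.parameters(), 0.01)
    grad_scaler.step(optimizer)
    grad_scaler.update()
else:
    loss.backward()
    optimizer.step()
print(loss.item())
```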
diff --git a/spaces/Redgon/bingo/src/components/ui/badge.tsx b/spaces/Redgon/bingo/src/components/ui/badge.tsx
deleted file mode 100644
index d9a84b394090e5b4b3bd34f6135b9a2f2ead0aa2..0000000000000000000000000000000000000000
--- a/spaces/Redgon/bingo/src/components/ui/badge.tsx
+++ /dev/null
@@ -1,36 +0,0 @@
-import * as React from 'react'
-import { cva, type VariantProps } from 'class-variance-authority'
-
-import { cn } from '@/lib/utils'
-
-const badgeVariants = cva(
- 'inline-flex items-center rounded-full border px-2.5 py-0.5 text-xs font-semibold transition-colors focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2',
- {
- variants: {
- variant: {
- default:
- 'border-transparent bg-primary text-primary-foreground hover:bg-primary/80',
- secondary:
- 'border-transparent bg-secondary text-secondary-foreground hover:bg-secondary/80',
- destructive:
- 'border-transparent bg-destructive text-destructive-foreground hover:bg-destructive/80',
- outline: 'text-foreground'
- }
- },
- defaultVariants: {
- variant: 'default'
- }
- }
-)
-
-export interface BadgeProps
-  extends React.HTMLAttributes<HTMLDivElement>,
-    VariantProps<typeof badgeVariants> {}
-
-function Badge({ className, variant, ...props }: BadgeProps) {
-  return (
-    <div className={cn(badgeVariants({ variant }), className)} {...props} />
-  )
-}
-
-export { Badge, badgeVariants }
diff --git a/spaces/Redgon/bingo/src/components/ui/button.tsx b/spaces/Redgon/bingo/src/components/ui/button.tsx
deleted file mode 100644
index 281da005124fa94c89a9a9db7605748a92b60865..0000000000000000000000000000000000000000
--- a/spaces/Redgon/bingo/src/components/ui/button.tsx
+++ /dev/null
@@ -1,57 +0,0 @@
-import * as React from 'react'
-import { Slot } from '@radix-ui/react-slot'
-import { cva, type VariantProps } from 'class-variance-authority'
-
-import { cn } from '@/lib/utils'
-
-const buttonVariants = cva(
- 'inline-flex items-center justify-center rounded-md text-sm font-medium shadow ring-offset-background transition-colors outline-none disabled:pointer-events-none disabled:opacity-50',
- {
- variants: {
- variant: {
- default:
- 'bg-primary text-primary-foreground shadow-md hover:bg-primary/90',
- destructive:
- 'bg-destructive text-destructive-foreground hover:bg-destructive/90',
- outline:
- 'border border-input hover:bg-accent hover:text-accent-foreground',
- secondary:
- 'bg-secondary text-secondary-foreground hover:bg-secondary/80',
- ghost: 'shadow-none hover:bg-accent hover:text-accent-foreground',
- link: 'text-primary underline-offset-4 shadow-none hover:underline'
- },
- size: {
- default: 'h-8 px-4 py-2',
- sm: 'h-8 rounded-md px-3',
- lg: 'h-11 rounded-md px-8',
- icon: 'h-8 w-8 p-0'
- }
- },
- defaultVariants: {
- variant: 'default',
- size: 'default'
- }
- }
-)
-
-export interface ButtonProps
-  extends React.ButtonHTMLAttributes<HTMLButtonElement>,
-    VariantProps<typeof buttonVariants> {
-  asChild?: boolean
-}
-
-const Button = React.forwardRef<HTMLButtonElement, ButtonProps>(
-  ({ className, variant, size, asChild = false, ...props }, ref) => {
-    const Comp = asChild ? Slot : 'button'
-    return (
-      <Comp
-        className={cn(buttonVariants({ variant, size, className }))}
-        ref={ref}
-        {...props}
-      />
-    )
- }
-)
-Button.displayName = 'Button'
-
-export { Button, buttonVariants }
diff --git a/spaces/Ritori/TTS_Yui/waveglow/mel2samp.py b/spaces/Ritori/TTS_Yui/waveglow/mel2samp.py
deleted file mode 100644
index f13f4af8a7a0d624010a0eb11e885830fed22b54..0000000000000000000000000000000000000000
--- a/spaces/Ritori/TTS_Yui/waveglow/mel2samp.py
+++ /dev/null
@@ -1,142 +0,0 @@
-# *****************************************************************************
-# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are met:
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution.
-# * Neither the name of the NVIDIA CORPORATION nor the
-# names of its contributors may be used to endorse or promote products
-# derived from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
-# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-# DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
-# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
-# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-#
-# *****************************************************************************\
-import os
-import random
-import argparse
-import json
-import torch
-import torch.utils.data
-import sys
-from scipy.io.wavfile import read
-
-# We're using the audio processing from TacoTron2 to make sure it matches
-sys.path.insert(0, 'tacotron2')
-from tacotron2.layers import TacotronSTFT
-
-MAX_WAV_VALUE = 32768.0
-
-def files_to_list(filename):
- """
- Takes a text file of filenames and makes a list of filenames
- """
- with open(filename, encoding='utf-8') as f:
- files = f.readlines()
-
- files = [f.rstrip() for f in files]
- return files
-
-def load_wav_to_torch(full_path):
- """
- Loads wavdata into torch array
- """
- sampling_rate, data = read(full_path)
- return torch.from_numpy(data).float(), sampling_rate
-
-
-class Mel2Samp(torch.utils.data.Dataset):
- """
- This is the main class that calculates the spectrogram and returns the
- spectrogram, audio pair.
- """
- def __init__(self, training_files, segment_length, filter_length,
- hop_length, win_length, sampling_rate, mel_fmin, mel_fmax):
- self.audio_files = files_to_list(training_files)
- random.seed(1234)
- random.shuffle(self.audio_files)
- self.stft = TacotronSTFT(filter_length=filter_length,
- hop_length=hop_length,
- win_length=win_length,
- sampling_rate=sampling_rate,
- mel_fmin=mel_fmin, mel_fmax=mel_fmax)
- self.segment_length = segment_length
- self.sampling_rate = sampling_rate
-
- def get_mel(self, audio):
- audio_norm = audio / MAX_WAV_VALUE
- audio_norm = audio_norm.unsqueeze(0)
- audio_norm = torch.autograd.Variable(audio_norm, requires_grad=False)
- melspec = self.stft.mel_spectrogram(audio_norm)
- melspec = torch.squeeze(melspec, 0)
- return melspec
-
- def __getitem__(self, index):
- # Read audio
- filename = self.audio_files[index]
- audio, sampling_rate = load_wav_to_torch(filename)
- if sampling_rate != self.sampling_rate:
- raise ValueError("{} SR doesn't match target {} SR".format(
- sampling_rate, self.sampling_rate))
-
- # Take segment
- if audio.size(0) >= self.segment_length:
- max_audio_start = audio.size(0) - self.segment_length
- audio_start = random.randint(0, max_audio_start)
- audio = audio[audio_start:audio_start+self.segment_length]
- else:
- audio = torch.nn.functional.pad(audio, (0, self.segment_length - audio.size(0)), 'constant').data
-
- mel = self.get_mel(audio)
- audio = audio / MAX_WAV_VALUE
-
- return (mel, audio)
-
- def __len__(self):
- return len(self.audio_files)
-
-# ===================================================================
-# Takes directory of clean audio and makes directory of spectrograms
-# Useful for making test sets
-# ===================================================================
-if __name__ == "__main__":
- # Get defaults so it can work with no Sacred
- parser = argparse.ArgumentParser()
- parser.add_argument('-f', "--filelist_path", required=True)
- parser.add_argument('-c', '--config', type=str,
- help='JSON file for configuration')
- parser.add_argument('-o', '--output_dir', type=str,
- help='Output directory')
- args = parser.parse_args()
-
- with open(args.config) as f:
- data = f.read()
- data_config = json.loads(data)["data_config"]
- mel2samp = Mel2Samp(**data_config)
-
- filepaths = files_to_list(args.filelist_path)
-
- # Make directory if it doesn't exist
- if not os.path.isdir(args.output_dir):
- os.makedirs(args.output_dir)
- os.chmod(args.output_dir, 0o775)
-
- for filepath in filepaths:
- audio, sr = load_wav_to_torch(filepath)
- melspectrogram = mel2samp.get_mel(audio)
- filename = os.path.basename(filepath)
- new_filepath = args.output_dir + '/' + filename + '.pt'
- print(new_filepath)
- torch.save(melspectrogram, new_filepath)
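A small sketch of the fixed-length segmenting done in `__getitem__` above: clips longer than `segment_length` are randomly cropped, shorter ones are zero-padded.

```python
# Sketch of the random-crop / zero-pad segmenting used in Mel2Samp.__getitem__.
import random
import torch
import torch.nn.functional as F

def take_segment(audio: torch.Tensor, segment_length: int) -> torch.Tensor:
    if audio.size(0) >= segment_length:
        start = random.randint(0, audio.size(0) - segment_length)
        return audio[start:start + segment_length]
    return F.pad(audio, (0, segment_length - audio.size(0)), 'constant')

print(take_segment(torch.randn(20000), 16000).shape)  # torch.Size([16000])
print(take_segment(torch.randn(8000), 16000).shape)   # torch.Size([16000])
```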
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/three_interpolate.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/three_interpolate.py
deleted file mode 100644
index 203f47f05d58087e034fb3cd8cd6a09233947b4a..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/three_interpolate.py
+++ /dev/null
@@ -1,68 +0,0 @@
-from typing import Tuple
-
-import torch
-from torch.autograd import Function
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext(
- '_ext', ['three_interpolate_forward', 'three_interpolate_backward'])
-
-
-class ThreeInterpolate(Function):
- """Performs weighted linear interpolation on 3 features.
-
-    Please refer to `Paper of PointNet++ <https://arxiv.org/abs/1706.02413>`_
- for more details.
- """
-
- @staticmethod
- def forward(ctx, features: torch.Tensor, indices: torch.Tensor,
- weight: torch.Tensor) -> torch.Tensor:
- """
- Args:
- features (Tensor): (B, C, M) Features descriptors to be
- interpolated
- indices (Tensor): (B, n, 3) index three nearest neighbors
- of the target features in features
- weight (Tensor): (B, n, 3) weights of interpolation
-
- Returns:
- Tensor: (B, C, N) tensor of the interpolated features
- """
- assert features.is_contiguous()
- assert indices.is_contiguous()
- assert weight.is_contiguous()
-
- B, c, m = features.size()
- n = indices.size(1)
- ctx.three_interpolate_for_backward = (indices, weight, m)
- output = torch.cuda.FloatTensor(B, c, n)
-
- ext_module.three_interpolate_forward(
- features, indices, weight, output, b=B, c=c, m=m, n=n)
- return output
-
- @staticmethod
- def backward(
- ctx, grad_out: torch.Tensor
- ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
- """
- Args:
- grad_out (Tensor): (B, C, N) tensor with gradients of outputs
-
- Returns:
- Tensor: (B, C, M) tensor with gradients of features
- """
- idx, weight, m = ctx.three_interpolate_for_backward
- B, c, n = grad_out.size()
-
- grad_features = torch.cuda.FloatTensor(B, c, m).zero_()
- grad_out_data = grad_out.data.contiguous()
-
- ext_module.three_interpolate_backward(
- grad_out_data, idx, weight, grad_features.data, b=B, c=c, n=n, m=m)
- return grad_features, None, None
-
-
-three_interpolate = ThreeInterpolate.apply
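For shape and semantics checking, a pure-PyTorch reference of the weighted 3-nearest-neighbor interpolation that the CUDA kernel above implements; it is a CPU sketch, not a drop-in replacement for the extension.

```python
# output[b, c, n] = sum_k features[b, c, indices[b, n, k]] * weight[b, n, k]
import torch

def three_interpolate_reference(features, indices, weight):
    # features: (B, C, M), indices: (B, N, 3) int64, weight: (B, N, 3)
    B, C, M = features.shape
    N = indices.shape[1]
    idx = indices.view(B, 1, N * 3).expand(B, C, N * 3)        # (B, C, N*3)
    gathered = torch.gather(features, 2, idx).view(B, C, N, 3)
    return (gathered * weight.view(B, 1, N, 3)).sum(dim=-1)    # (B, C, N)

feats = torch.randn(2, 16, 100)
idx = torch.randint(0, 100, (2, 50, 3))
w = torch.rand(2, 50, 3)
w = w / w.sum(dim=-1, keepdim=True)
print(three_interpolate_reference(feats, idx, w).shape)  # torch.Size([2, 16, 50])
```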
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/free_anchor_retina_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/free_anchor_retina_head.py
deleted file mode 100644
index 79879fdc3171b8e34b606b27eb1ceb67f4473e3e..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/free_anchor_retina_head.py
+++ /dev/null
@@ -1,270 +0,0 @@
-import torch
-import torch.nn.functional as F
-
-from mmdet.core import bbox_overlaps
-from ..builder import HEADS
-from .retina_head import RetinaHead
-
-EPS = 1e-12
-
-
-@HEADS.register_module()
-class FreeAnchorRetinaHead(RetinaHead):
- """FreeAnchor RetinaHead used in https://arxiv.org/abs/1909.02466.
-
- Args:
- num_classes (int): Number of categories excluding the background
- category.
- in_channels (int): Number of channels in the input feature map.
- stacked_convs (int): Number of conv layers in cls and reg tower.
- Default: 4.
- conv_cfg (dict): dictionary to construct and config conv layer.
- Default: None.
- norm_cfg (dict): dictionary to construct and config norm layer.
- Default: norm_cfg=dict(type='GN', num_groups=32,
- requires_grad=True).
-        pre_anchor_topk (int): Number of boxes that are taken in each bag.
- bbox_thr (float): The threshold of the saturated linear function. It is
-            usually the same as the IoU threshold used in NMS.
- gamma (float): Gamma parameter in focal loss.
- alpha (float): Alpha parameter in focal loss.
- """ # noqa: W605
-
- def __init__(self,
- num_classes,
- in_channels,
- stacked_convs=4,
- conv_cfg=None,
- norm_cfg=None,
- pre_anchor_topk=50,
- bbox_thr=0.6,
- gamma=2.0,
- alpha=0.5,
- **kwargs):
- super(FreeAnchorRetinaHead,
- self).__init__(num_classes, in_channels, stacked_convs, conv_cfg,
- norm_cfg, **kwargs)
-
- self.pre_anchor_topk = pre_anchor_topk
- self.bbox_thr = bbox_thr
- self.gamma = gamma
- self.alpha = alpha
-
- def loss(self,
- cls_scores,
- bbox_preds,
- gt_bboxes,
- gt_labels,
- img_metas,
- gt_bboxes_ignore=None):
- """Compute losses of the head.
-
- Args:
- cls_scores (list[Tensor]): Box scores for each scale level
- Has shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]): Box energies / deltas for each scale
- level with shape (N, num_anchors * 4, H, W)
- gt_bboxes (list[Tensor]): each item are the truth boxes for each
- image in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]): class indices corresponding to each box
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- gt_bboxes_ignore (None | list[Tensor]): specify which bounding
- boxes can be ignored when computing the loss.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components.
- """
- featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
- assert len(featmap_sizes) == len(self.anchor_generator.base_anchors)
-
- anchor_list, _ = self.get_anchors(featmap_sizes, img_metas)
- anchors = [torch.cat(anchor) for anchor in anchor_list]
-
- # concatenate each level
- cls_scores = [
- cls.permute(0, 2, 3,
- 1).reshape(cls.size(0), -1, self.cls_out_channels)
- for cls in cls_scores
- ]
- bbox_preds = [
- bbox_pred.permute(0, 2, 3, 1).reshape(bbox_pred.size(0), -1, 4)
- for bbox_pred in bbox_preds
- ]
- cls_scores = torch.cat(cls_scores, dim=1)
- bbox_preds = torch.cat(bbox_preds, dim=1)
-
- cls_prob = torch.sigmoid(cls_scores)
- box_prob = []
- num_pos = 0
- positive_losses = []
- for _, (anchors_, gt_labels_, gt_bboxes_, cls_prob_,
- bbox_preds_) in enumerate(
- zip(anchors, gt_labels, gt_bboxes, cls_prob, bbox_preds)):
-
- with torch.no_grad():
- if len(gt_bboxes_) == 0:
- image_box_prob = torch.zeros(
- anchors_.size(0),
- self.cls_out_channels).type_as(bbox_preds_)
- else:
- # box_localization: a_{j}^{loc}, shape: [j, 4]
- pred_boxes = self.bbox_coder.decode(anchors_, bbox_preds_)
-
- # object_box_iou: IoU_{ij}^{loc}, shape: [i, j]
- object_box_iou = bbox_overlaps(gt_bboxes_, pred_boxes)
-
- # object_box_prob: P{a_{j} -> b_{i}}, shape: [i, j]
- t1 = self.bbox_thr
- t2 = object_box_iou.max(
- dim=1, keepdim=True).values.clamp(min=t1 + 1e-12)
- object_box_prob = ((object_box_iou - t1) /
- (t2 - t1)).clamp(
- min=0, max=1)
-
- # object_cls_box_prob: P{a_{j} -> b_{i}}, shape: [i, c, j]
- num_obj = gt_labels_.size(0)
- indices = torch.stack([
- torch.arange(num_obj).type_as(gt_labels_), gt_labels_
- ],
- dim=0)
- object_cls_box_prob = torch.sparse_coo_tensor(
- indices, object_box_prob)
-
- # image_box_iou: P{a_{j} \in A_{+}}, shape: [c, j]
- """
- from "start" to "end" implement:
- image_box_iou = torch.sparse.max(object_cls_box_prob,
- dim=0).t()
-
- """
- # start
- box_cls_prob = torch.sparse.sum(
- object_cls_box_prob, dim=0).to_dense()
-
- indices = torch.nonzero(box_cls_prob, as_tuple=False).t_()
- if indices.numel() == 0:
- image_box_prob = torch.zeros(
- anchors_.size(0),
- self.cls_out_channels).type_as(object_box_prob)
- else:
- nonzero_box_prob = torch.where(
- (gt_labels_.unsqueeze(dim=-1) == indices[0]),
- object_box_prob[:, indices[1]],
- torch.tensor([
- 0
- ]).type_as(object_box_prob)).max(dim=0).values
-
- # upmap to shape [j, c]
- image_box_prob = torch.sparse_coo_tensor(
- indices.flip([0]),
- nonzero_box_prob,
- size=(anchors_.size(0),
- self.cls_out_channels)).to_dense()
- # end
-
- box_prob.append(image_box_prob)
-
- # construct bags for objects
- match_quality_matrix = bbox_overlaps(gt_bboxes_, anchors_)
- _, matched = torch.topk(
- match_quality_matrix,
- self.pre_anchor_topk,
- dim=1,
- sorted=False)
- del match_quality_matrix
-
- # matched_cls_prob: P_{ij}^{cls}
- matched_cls_prob = torch.gather(
- cls_prob_[matched], 2,
- gt_labels_.view(-1, 1, 1).repeat(1, self.pre_anchor_topk,
- 1)).squeeze(2)
-
- # matched_box_prob: P_{ij}^{loc}
- matched_anchors = anchors_[matched]
- matched_object_targets = self.bbox_coder.encode(
- matched_anchors,
- gt_bboxes_.unsqueeze(dim=1).expand_as(matched_anchors))
- loss_bbox = self.loss_bbox(
- bbox_preds_[matched],
- matched_object_targets,
- reduction_override='none').sum(-1)
- matched_box_prob = torch.exp(-loss_bbox)
-
- # positive_losses: {-log( Mean-max(P_{ij}^{cls} * P_{ij}^{loc}) )}
- num_pos += len(gt_bboxes_)
- positive_losses.append(
- self.positive_bag_loss(matched_cls_prob, matched_box_prob))
- positive_loss = torch.cat(positive_losses).sum() / max(1, num_pos)
-
- # box_prob: P{a_{j} \in A_{+}}
- box_prob = torch.stack(box_prob, dim=0)
-
- # negative_loss:
- # \sum_{j}{ FL((1 - P{a_{j} \in A_{+}}) * (1 - P_{j}^{bg})) } / n||B||
- negative_loss = self.negative_bag_loss(cls_prob, box_prob).sum() / max(
- 1, num_pos * self.pre_anchor_topk)
-
- # avoid the absence of gradients in regression subnet
- # when no ground-truth in a batch
- if num_pos == 0:
- positive_loss = bbox_preds.sum() * 0
-
- losses = {
- 'positive_bag_loss': positive_loss,
- 'negative_bag_loss': negative_loss
- }
- return losses
-
- def positive_bag_loss(self, matched_cls_prob, matched_box_prob):
- """Compute positive bag loss.
-
- :math:`-log( Mean-max(P_{ij}^{cls} * P_{ij}^{loc}) )`.
-
- :math:`P_{ij}^{cls}`: matched_cls_prob, classification probability of matched samples.
-
- :math:`P_{ij}^{loc}`: matched_box_prob, box probability of matched samples.
-
- Args:
-            matched_cls_prob (Tensor): Classification probability of matched
- samples in shape (num_gt, pre_anchor_topk).
- matched_box_prob (Tensor): BBox probability of matched samples,
- in shape (num_gt, pre_anchor_topk).
-
- Returns:
- Tensor: Positive bag loss in shape (num_gt,).
- """ # noqa: E501, W605
- # bag_prob = Mean-max(matched_prob)
- matched_prob = matched_cls_prob * matched_box_prob
- weight = 1 / torch.clamp(1 - matched_prob, 1e-12, None)
- weight /= weight.sum(dim=1).unsqueeze(dim=-1)
- bag_prob = (weight * matched_prob).sum(dim=1)
- # positive_bag_loss = -self.alpha * log(bag_prob)
- return self.alpha * F.binary_cross_entropy(
- bag_prob, torch.ones_like(bag_prob), reduction='none')
-
- def negative_bag_loss(self, cls_prob, box_prob):
- """Compute negative bag loss.
-
- :math:`FL((1 - P_{a_{j} \in A_{+}}) * (1 - P_{j}^{bg}))`.
-
- :math:`P_{a_{j} \in A_{+}}`: Box_probability of matched samples.
-
- :math:`P_{j}^{bg}`: Classification probability of negative samples.
-
- Args:
- cls_prob (Tensor): Classification probability, in shape
- (num_img, num_anchors, num_classes).
- box_prob (Tensor): Box probability, in shape
- (num_img, num_anchors, num_classes).
-
- Returns:
- Tensor: Negative bag loss in shape (num_img, num_anchors, num_classes).
- """ # noqa: E501, W605
- prob = cls_prob * (1 - box_prob)
- # There are some cases when neg_prob = 0.
- # This will cause the neg_prob.log() to be inf without clamp.
- prob = prob.clamp(min=EPS, max=1 - EPS)
- negative_bag_loss = prob**self.gamma * F.binary_cross_entropy(
- prob, torch.zeros_like(prob), reduction='none')
- return (1 - self.alpha) * negative_bag_loss
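A small numeric illustration of the "Mean-max" weighting used in `positive_bag_loss` above: probabilities close to 1 receive most of the weight, so the bag probability lands between the plain mean and the max.

```python
import torch

matched_prob = torch.tensor([[0.9, 0.5, 0.1]])               # (num_gt=1, topk=3)
weight = 1.0 / torch.clamp(1.0 - matched_prob, 1e-12, None)
weight = weight / weight.sum(dim=1, keepdim=True)
bag_prob = (weight * matched_prob).sum(dim=1)
print(weight)    # ~[[0.763, 0.153, 0.085]] -> dominated by the 0.9 entry
print(bag_prob)  # ~[0.771], between the mean (0.5) and the max (0.9)
```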
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/gfl.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/gfl.py
deleted file mode 100644
index 64d65cb2dfb7a56f57e08c3fcad67e1539e1e841..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/gfl.py
+++ /dev/null
@@ -1,16 +0,0 @@
-from ..builder import DETECTORS
-from .single_stage import SingleStageDetector
-
-
-@DETECTORS.register_module()
-class GFL(SingleStageDetector):
-
- def __init__(self,
- backbone,
- neck,
- bbox_head,
- train_cfg=None,
- test_cfg=None,
- pretrained=None):
- super(GFL, self).__init__(backbone, neck, bbox_head, train_cfg,
- test_cfg, pretrained)
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/losses/varifocal_loss.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/losses/varifocal_loss.py
deleted file mode 100644
index 7f00bd6916c04fef45a9aeecb50888266420daf9..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/losses/varifocal_loss.py
+++ /dev/null
@@ -1,133 +0,0 @@
-import mmcv
-import torch.nn as nn
-import torch.nn.functional as F
-
-from ..builder import LOSSES
-from .utils import weight_reduce_loss
-
-
-@mmcv.jit(derivate=True, coderize=True)
-def varifocal_loss(pred,
- target,
- weight=None,
- alpha=0.75,
- gamma=2.0,
- iou_weighted=True,
- reduction='mean',
- avg_factor=None):
-    """`Varifocal Loss <https://arxiv.org/abs/2008.13367>`_
-
- Args:
- pred (torch.Tensor): The prediction with shape (N, C), C is the
- number of classes
- target (torch.Tensor): The learning target of the iou-aware
- classification score with shape (N, C), C is the number of classes.
- weight (torch.Tensor, optional): The weight of loss for each
- prediction. Defaults to None.
- alpha (float, optional): A balance factor for the negative part of
- Varifocal Loss, which is different from the alpha of Focal Loss.
- Defaults to 0.75.
- gamma (float, optional): The gamma for calculating the modulating
- factor. Defaults to 2.0.
- iou_weighted (bool, optional): Whether to weight the loss of the
- positive example with the iou target. Defaults to True.
- reduction (str, optional): The method used to reduce the loss into
- a scalar. Defaults to 'mean'. Options are "none", "mean" and
- "sum".
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
- """
- # pred and target should be of the same size
- assert pred.size() == target.size()
- pred_sigmoid = pred.sigmoid()
- target = target.type_as(pred)
- if iou_weighted:
- focal_weight = target * (target > 0.0).float() + \
- alpha * (pred_sigmoid - target).abs().pow(gamma) * \
- (target <= 0.0).float()
- else:
- focal_weight = (target > 0.0).float() + \
- alpha * (pred_sigmoid - target).abs().pow(gamma) * \
- (target <= 0.0).float()
- loss = F.binary_cross_entropy_with_logits(
- pred, target, reduction='none') * focal_weight
- loss = weight_reduce_loss(loss, weight, reduction, avg_factor)
- return loss
-
-
-@LOSSES.register_module()
-class VarifocalLoss(nn.Module):
-
- def __init__(self,
- use_sigmoid=True,
- alpha=0.75,
- gamma=2.0,
- iou_weighted=True,
- reduction='mean',
- loss_weight=1.0):
-        """`Varifocal Loss <https://arxiv.org/abs/2008.13367>`_
-
- Args:
- use_sigmoid (bool, optional): Whether the prediction is
- used for sigmoid or softmax. Defaults to True.
- alpha (float, optional): A balance factor for the negative part of
- Varifocal Loss, which is different from the alpha of Focal
- Loss. Defaults to 0.75.
- gamma (float, optional): The gamma for calculating the modulating
- factor. Defaults to 2.0.
- iou_weighted (bool, optional): Whether to weight the loss of the
- positive examples with the iou target. Defaults to True.
- reduction (str, optional): The method used to reduce the loss into
- a scalar. Defaults to 'mean'. Options are "none", "mean" and
- "sum".
- loss_weight (float, optional): Weight of loss. Defaults to 1.0.
- """
- super(VarifocalLoss, self).__init__()
- assert use_sigmoid is True, \
- 'Only sigmoid varifocal loss supported now.'
- assert alpha >= 0.0
- self.use_sigmoid = use_sigmoid
- self.alpha = alpha
- self.gamma = gamma
- self.iou_weighted = iou_weighted
- self.reduction = reduction
- self.loss_weight = loss_weight
-
- def forward(self,
- pred,
- target,
- weight=None,
- avg_factor=None,
- reduction_override=None):
- """Forward function.
-
- Args:
- pred (torch.Tensor): The prediction.
- target (torch.Tensor): The learning target of the prediction.
- weight (torch.Tensor, optional): The weight of loss for each
- prediction. Defaults to None.
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
- reduction_override (str, optional): The reduction method used to
- override the original reduction method of the loss.
- Options are "none", "mean" and "sum".
-
- Returns:
- torch.Tensor: The calculated loss
- """
- assert reduction_override in (None, 'none', 'mean', 'sum')
- reduction = (
- reduction_override if reduction_override else self.reduction)
- if self.use_sigmoid:
- loss_cls = self.loss_weight * varifocal_loss(
- pred,
- target,
- weight,
- alpha=self.alpha,
- gamma=self.gamma,
- iou_weighted=self.iou_weighted,
- reduction=reduction,
- avg_factor=avg_factor)
- else:
- raise NotImplementedError
- return loss_cls
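A small numeric check of the weighting inside `varifocal_loss` above: a positive location is weighted by its IoU target, while a negative is down-weighted by alpha * |p|^gamma.

```python
import torch

pred = torch.tensor([2.0, -1.0])      # logits: one positive, one negative location
target = torch.tensor([0.8, 0.0])     # IoU-aware targets
alpha, gamma = 0.75, 2.0
p = pred.sigmoid()
focal_weight = target * (target > 0.0).float() + \
    alpha * (p - target).abs().pow(gamma) * (target <= 0.0).float()
print(focal_weight)  # approximately tensor([0.8000, 0.0542])
```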
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/log_buffer.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/log_buffer.py
deleted file mode 100644
index d949e2941c5400088c7cd8a1dc893d8b233ae785..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/log_buffer.py
+++ /dev/null
@@ -1,41 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from collections import OrderedDict
-
-import numpy as np
-
-
-class LogBuffer:
-
- def __init__(self):
- self.val_history = OrderedDict()
- self.n_history = OrderedDict()
- self.output = OrderedDict()
- self.ready = False
-
- def clear(self):
- self.val_history.clear()
- self.n_history.clear()
- self.clear_output()
-
- def clear_output(self):
- self.output.clear()
- self.ready = False
-
- def update(self, vars, count=1):
- assert isinstance(vars, dict)
- for key, var in vars.items():
- if key not in self.val_history:
- self.val_history[key] = []
- self.n_history[key] = []
- self.val_history[key].append(var)
- self.n_history[key].append(count)
-
- def average(self, n=0):
- """Average latest n values or all values."""
- assert n >= 0
- for key in self.val_history:
- values = np.array(self.val_history[key][-n:])
- nums = np.array(self.n_history[key][-n:])
- avg = np.sum(values * nums) / np.sum(nums)
- self.output[key] = avg
- self.ready = True
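A short usage sketch for the class above; the import path is assumed from mmcv's public API (`mmcv.runner`), adjust it to wherever this copy of the module lives.

```python
from mmcv.runner import LogBuffer  # assumed import path for this module

buf = LogBuffer()
buf.update({'loss': 2.0, 'acc': 0.50}, count=4)   # e.g. averaged over 4 samples
buf.update({'loss': 1.0, 'acc': 0.75}, count=4)
buf.average()               # n=0 -> count-weighted mean over the full history
print(buf.output)           # loss -> 1.5, acc -> 0.625
print(buf.ready)            # True
```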
diff --git a/spaces/Rongjiehuang/GenerSpeech/vocoders/pwg.py b/spaces/Rongjiehuang/GenerSpeech/vocoders/pwg.py
deleted file mode 100644
index ca9b6891ab2ba5cb413eeca97a41534e5db129d5..0000000000000000000000000000000000000000
--- a/spaces/Rongjiehuang/GenerSpeech/vocoders/pwg.py
+++ /dev/null
@@ -1,137 +0,0 @@
-import glob
-import re
-import librosa
-import torch
-import yaml
-from sklearn.preprocessing import StandardScaler
-from torch import nn
-from modules.parallel_wavegan.models import ParallelWaveGANGenerator
-from modules.parallel_wavegan.utils import read_hdf5
-from utils.hparams import hparams
-from utils.pitch_utils import f0_to_coarse
-from vocoders.base_vocoder import BaseVocoder, register_vocoder
-import numpy as np
-
-
-def load_pwg_model(config_path, checkpoint_path, stats_path):
- # load config
- with open(config_path) as f:
- config = yaml.load(f, Loader=yaml.Loader)
-
- # setup
- if torch.cuda.is_available():
- device = torch.device("cuda")
- else:
- device = torch.device("cpu")
- model = ParallelWaveGANGenerator(**config["generator_params"])
-
- ckpt_dict = torch.load(checkpoint_path, map_location="cpu")
- if 'state_dict' not in ckpt_dict: # official vocoder
- model.load_state_dict(torch.load(checkpoint_path, map_location="cpu")["model"]["generator"])
- scaler = StandardScaler()
- if config["format"] == "hdf5":
- scaler.mean_ = read_hdf5(stats_path, "mean")
- scaler.scale_ = read_hdf5(stats_path, "scale")
- elif config["format"] == "npy":
- scaler.mean_ = np.load(stats_path)[0]
- scaler.scale_ = np.load(stats_path)[1]
- else:
- raise ValueError("support only hdf5 or npy format.")
- else: # custom PWG vocoder
- fake_task = nn.Module()
- fake_task.model_gen = model
- fake_task.load_state_dict(torch.load(checkpoint_path, map_location="cpu")["state_dict"], strict=False)
- scaler = None
-
- model.remove_weight_norm()
- model = model.eval().to(device)
- print(f"| Loaded model parameters from {checkpoint_path}.")
- print(f"| PWG device: {device}.")
- return model, scaler, config, device
-
-
-@register_vocoder
-class PWG(BaseVocoder):
- def __init__(self):
- if hparams['vocoder_ckpt'] == '': # load LJSpeech PWG pretrained model
- base_dir = 'wavegan_pretrained'
- ckpts = glob.glob(f'{base_dir}/checkpoint-*steps.pkl')
- ckpt = sorted(ckpts, key=
- lambda x: int(re.findall(f'{base_dir}/checkpoint-(\d+)steps.pkl', x)[0]))[-1]
- config_path = f'{base_dir}/config.yaml'
- print('| load PWG: ', ckpt)
- self.model, self.scaler, self.config, self.device = load_pwg_model(
- config_path=config_path,
- checkpoint_path=ckpt,
- stats_path=f'{base_dir}/stats.h5',
- )
- else:
- base_dir = hparams['vocoder_ckpt']
- print(base_dir)
- config_path = f'{base_dir}/config.yaml'
- ckpt = sorted(glob.glob(f'{base_dir}/model_ckpt_steps_*.ckpt'), key=
- lambda x: int(re.findall(f'{base_dir}/model_ckpt_steps_(\d+).ckpt', x)[0]))[-1]
- print('| load PWG: ', ckpt)
- self.scaler = None
- self.model, _, self.config, self.device = load_pwg_model(
- config_path=config_path,
- checkpoint_path=ckpt,
- stats_path=f'{base_dir}/stats.h5',
- )
-
- def spec2wav(self, mel, **kwargs):
- # start generation
- config = self.config
- device = self.device
- pad_size = (config["generator_params"]["aux_context_window"],
- config["generator_params"]["aux_context_window"])
- c = mel
- if self.scaler is not None:
- c = self.scaler.transform(c)
-
- with torch.no_grad():
- z = torch.randn(1, 1, c.shape[0] * config["hop_size"]).to(device)
- c = np.pad(c, (pad_size, (0, 0)), "edge")
- c = torch.FloatTensor(c).unsqueeze(0).transpose(2, 1).to(device)
- p = kwargs.get('f0')
- if p is not None:
- p = f0_to_coarse(p)
- p = np.pad(p, (pad_size,), "edge")
- p = torch.LongTensor(p[None, :]).to(device)
- y = self.model(z, c, p).view(-1)
- wav_out = y.cpu().numpy()
- return wav_out
-
- @staticmethod
- def wav2spec(wav_fn, return_linear=False):
- from data_gen.tts.data_gen_utils import process_utterance
- res = process_utterance(
- wav_fn, fft_size=hparams['fft_size'],
- hop_size=hparams['hop_size'],
- win_length=hparams['win_size'],
- num_mels=hparams['audio_num_mel_bins'],
- fmin=hparams['fmin'],
- fmax=hparams['fmax'],
- sample_rate=hparams['audio_sample_rate'],
- loud_norm=hparams['loud_norm'],
- min_level_db=hparams['min_level_db'],
- return_linear=return_linear, vocoder='pwg', eps=float(hparams.get('wav2spec_eps', 1e-10)))
- if return_linear:
- return res[0], res[1].T, res[2].T # [T, 80], [T, n_fft]
- else:
- return res[0], res[1].T
-
- @staticmethod
- def wav2mfcc(wav_fn):
- fft_size = hparams['fft_size']
- hop_size = hparams['hop_size']
- win_length = hparams['win_size']
- sample_rate = hparams['audio_sample_rate']
- wav, _ = librosa.core.load(wav_fn, sr=sample_rate)
- mfcc = librosa.feature.mfcc(y=wav, sr=sample_rate, n_mfcc=13,
- n_fft=fft_size, hop_length=hop_size,
- win_length=win_length, pad_mode="constant", power=1.0)
- mfcc_delta = librosa.feature.delta(mfcc, order=1)
- mfcc_delta_delta = librosa.feature.delta(mfcc, order=2)
- mfcc = np.concatenate([mfcc, mfcc_delta, mfcc_delta_delta]).T
- return mfcc
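A standalone sketch of the MFCC + delta feature stacking done in `wav2mfcc` above; the frame parameters and the input file name are placeholders, not the project's hparams.

```python
import librosa
import numpy as np

wav, sr = librosa.load("example.wav", sr=22050)   # hypothetical input file
mfcc = librosa.feature.mfcc(y=wav, sr=sr, n_mfcc=13,
                            n_fft=1024, hop_length=256, win_length=1024,
                            pad_mode="constant", power=1.0)
mfcc_delta = librosa.feature.delta(mfcc, order=1)
mfcc_delta2 = librosa.feature.delta(mfcc, order=2)
features = np.concatenate([mfcc, mfcc_delta, mfcc_delta2]).T  # (T, 39)
print(features.shape)
```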
diff --git a/spaces/SQSora/VITS-Umamusume-voice-synthesizer/ONNXVITS_models.py b/spaces/SQSora/VITS-Umamusume-voice-synthesizer/ONNXVITS_models.py
deleted file mode 100644
index acd00238895d57ba878fd0211d5654250fb10061..0000000000000000000000000000000000000000
--- a/spaces/SQSora/VITS-Umamusume-voice-synthesizer/ONNXVITS_models.py
+++ /dev/null
@@ -1,509 +0,0 @@
-import copy
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import ONNXVITS_modules as modules
-import attentions
-import monotonic_align
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from commons import init_weights, get_padding
-
-
-class StochasticDurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
- super().__init__()
-    filter_channels = in_channels  # this needs to be removed in a future version.
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.log_flow = modules.Log()
- self.flows = nn.ModuleList()
- self.flows.append(modules.ElementwiseAffine(2))
- for i in range(n_flows):
- self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.flows.append(modules.Flip())
-
- self.post_pre = nn.Conv1d(1, filter_channels, 1)
- self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- self.post_flows = nn.ModuleList()
- self.post_flows.append(modules.ElementwiseAffine(2))
- for i in range(4):
- self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.post_flows.append(modules.Flip())
-
- self.pre = nn.Conv1d(in_channels, filter_channels, 1)
- self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
-
- self.w = None
- self.reverse = None
- self.noise_scale = None
- def forward(self, x, x_mask, g=None):
- w = self.w
- reverse = self.reverse
- noise_scale = self.noise_scale
-
- x = torch.detach(x)
- x = self.pre(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.convs(x, x_mask)
- x = self.proj(x) * x_mask
-
- if not reverse:
- flows = self.flows
- assert w is not None
-
- logdet_tot_q = 0
- h_w = self.post_pre(w)
- h_w = self.post_convs(h_w, x_mask)
- h_w = self.post_proj(h_w) * x_mask
- e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
- z_q = e_q
- for flow in self.post_flows:
- z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
- logdet_tot_q += logdet_q
- z_u, z1 = torch.split(z_q, [1, 1], 1)
- u = torch.sigmoid(z_u) * x_mask
- z0 = (w - u) * x_mask
- logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2])
- logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q
-
- logdet_tot = 0
- z0, logdet = self.log_flow(z0, x_mask)
- logdet_tot += logdet
- z = torch.cat([z0, z1], 1)
- for flow in flows:
- z, logdet = flow(z, x_mask, g=x, reverse=reverse)
- logdet_tot = logdet_tot + logdet
- nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot
- return nll + logq # [b]
- else:
- flows = list(reversed(self.flows))
- flows = flows[:-2] + [flows[-1]] # remove a useless vflow
- z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
- for flow in flows:
- z = flow(z, x_mask, g=x, reverse=reverse)
- z0, z1 = torch.split(z, [1, 1], 1)
- logw = z0
- return logw
-
-
-class TextEncoder(nn.Module):
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout):
- super().__init__()
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
-
- self.emb = nn.Embedding(n_vocab, hidden_channels)
- nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5)
-
- self.encoder = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths):
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return x, m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True))
- self.flows.append(modules.Flip())
-
- self.reverse = None
- def forward(self, x, x_mask, g=None):
- reverse = self.reverse
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask # x_in : [b, c, t] -> [b, h, t]
- x = self.enc(x, x_mask, g=g) # x_in : [b, h, t], g : [b, h, 1], x = x_in + g
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask # z, m, logs : [b, h, t]
-
-
-class Generator(torch.nn.Module):
- def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
- resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(weight_norm(
- ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)),
- k, u, padding=(k-u)//2)))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel//(2**(i+1))
- for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i*self.num_kernels+j](x)
- else:
- xs += self.resblocks[i*self.num_kernels+j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2,3,5,7,11]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-
-class SynthesizerTrn(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- n_vocab,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- n_speakers=0,
- gin_channels=0,
- use_sdp=True,
- **kwargs):
-
- super().__init__()
- self.n_vocab = n_vocab
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.n_speakers = n_speakers
- self.gin_channels = gin_channels
-
- self.use_sdp = use_sdp
-
- self.enc_p = TextEncoder(n_vocab,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
- self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
- self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
-
- self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
-
- if n_speakers > 0:
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
-
- def forward(self, x, x_lengths, sid=None, noise_scale=.667, length_scale=1, noise_scale_w=.8, max_len=None):
- torch.onnx.export(
- self.enc_p,
- (x, x_lengths),
- "ONNX_net/enc_p.onnx",
- input_names=["x", "x_lengths"],
- output_names=["xout", "m_p", "logs_p", "x_mask"],
- dynamic_axes={
- "x" : [1],
- "xout" : [2],
- "m_p" : [2],
- "logs_p" : [2],
- "x_mask" : [2]
- },
- verbose=True,
- )
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
-
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- self.dp.reverse = True
- self.dp.noise_scale = noise_scale_w
- torch.onnx.export(
- self.dp,
- (x, x_mask, g),
- "ONNX_net/dp.onnx",
- input_names=["x", "x_mask", "g"],
- output_names=["logw"],
- dynamic_axes={
- "x" : [2],
- "x_mask" : [2],
- "logw" : [2]
- },
- verbose=True,
- )
- logw = self.dp(x, x_mask, g=g)
- w = torch.exp(logw) * x_mask * length_scale
- w_ceil = torch.ceil(w)
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = commons.generate_path(w_ceil, attn_mask)
-
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
-
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
-
- self.flow.reverse = True
- torch.onnx.export(
- self.flow,
- (z_p, y_mask, g),
- "ONNX_net/flow.onnx",
- input_names=["z_p", "y_mask", "g"],
- output_names=["z"],
- dynamic_axes={
- "z_p" : [2],
- "y_mask" : [2],
- "z" : [2]
- },
- verbose=True,
- )
- z = self.flow(z_p, y_mask, g=g)
- z_in = (z * y_mask)[:,:,:max_len]
-
- torch.onnx.export(
- self.dec,
- (z_in, g),
- "ONNX_net/dec.onnx",
- input_names=["z_in", "g"],
- output_names=["o"],
- dynamic_axes={
- "z_in" : [2],
- "o" : [2]
- },
- verbose=True,
- )
- o = self.dec(z_in, g=g)
- return o
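
For context on the file removed above: its `SynthesizerTrn.forward` exports four ONNX graphs (`enc_p.onnx`, `dp.onnx`, `flow.onnx`, `dec.onnx`) with the input/output names shown. The sketch below is a hedged illustration of how those graphs could be chained with onnxruntime at inference time; it assumes a single-speaker model (so no `g` input) and batch size 1, and it mirrors the duration-to-alignment expansion done in the PyTorch forward. It is not code from the deleted repository.

```python
# Hedged inference sketch for the four graphs exported by SynthesizerTrn.forward
# above. Assumes a single-speaker model (no "g" input) and batch size 1; the
# input/output names match the export calls in the deleted file.
import numpy as np
import onnxruntime as ort

enc_p = ort.InferenceSession("ONNX_net/enc_p.onnx")
dp = ort.InferenceSession("ONNX_net/dp.onnx")
flow = ort.InferenceSession("ONNX_net/flow.onnx")
dec = ort.InferenceSession("ONNX_net/dec.onnx")

def synthesize(phoneme_ids, noise_scale=0.667, length_scale=1.0):
    x = np.asarray([phoneme_ids], dtype=np.int64)                 # [1, t_x]
    x_lengths = np.asarray([x.shape[1]], dtype=np.int64)
    xout, m_p, logs_p, x_mask = enc_p.run(
        ["xout", "m_p", "logs_p", "x_mask"], {"x": x, "x_lengths": x_lengths})

    # The stochastic duration predictor runs on the encoder output.
    logw = dp.run(["logw"], {"x": xout, "x_mask": x_mask})[0]
    w_ceil = np.ceil(np.exp(logw) * x_mask * length_scale)[0, 0]  # frames per phoneme
    t_y = int(max(w_ceil.sum(), 1))

    # Hard monotonic alignment: frame i attends to the phoneme whose cumulative
    # duration interval contains i (mirrors commons.generate_path above).
    attn = np.zeros((t_y, w_ceil.shape[0]), dtype=np.float32)
    start = 0
    for j, d in enumerate(w_ceil.astype(np.int64)):
        attn[start:start + d, j] = 1.0
        start += d

    m_p_exp = m_p[0] @ attn.T                                     # [d, t_y]
    logs_p_exp = logs_p[0] @ attn.T
    z_p = m_p_exp + np.random.randn(*m_p_exp.shape) * np.exp(logs_p_exp) * noise_scale
    z_p = z_p[None].astype(np.float32)                            # [1, d, t_y]
    y_mask = np.ones((1, 1, t_y), dtype=np.float32)

    z = flow.run(["z"], {"z_p": z_p, "y_mask": y_mask})[0]        # inverse flow
    audio = dec.run(["o"], {"z_in": (z * y_mask).astype(np.float32)})[0]
    return audio[0, 0]                                            # waveform samples
```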
diff --git a/spaces/SRDdev/EchoSense/app.py b/spaces/SRDdev/EchoSense/app.py
deleted file mode 100644
index c20703da46eda6e8a96cec7e09a1fc9da8633840..0000000000000000000000000000000000000000
--- a/spaces/SRDdev/EchoSense/app.py
+++ /dev/null
@@ -1,50 +0,0 @@
-import torch
-import gradio as gr
-from PIL import Image
-from gtts import gTTS
-from transformers import BlipProcessor, BlipForConditionalGeneration
-
-model = "Salesforce/blip-image-captioning-large"
-processor = BlipProcessor.from_pretrained(model)
-head = BlipForConditionalGeneration.from_pretrained(model)
-
-def predict(image):
- inputs = processor(image, return_tensors="pt")
- output = head.generate(**inputs)
- caption = processor.decode(output[0], skip_special_tokens=True)
- audio = gTTS(caption, lang="en", tld="co.in")
- audio.save('caption.mp3')
- filepath = 'caption.mp3'
- return caption, filepath
-
-inputs = gr.inputs.Image(label="Upload any Image")
-outputs = [
- gr.components.Textbox(type="text",label="Captions"),
- gr.components.Audio(type="filepath",label="audio")
-]
-
-description = """
-🔉 EchoSense Image to Audio Playground
-
-This space helps generate audio descriptions for input images.
-
-Please note: This space is for demonstration purposes only.
-
-Visit Shreyas Dixit's personal website for more information about the creator.
-"""
-
-article="""Echo Sense is an innovative image captioning application that utilizes cutting-edge technology, specifically the powerful Transformer Model Architecture. This state-of-the-art approach has revolutionized Natural Language Processing (NLP) tasks, including image captioning, making it highly accurate and efficient. By leveraging pretrained models from Hugging Face and fine-tuning them on the COCO dataset, Echo Sense achieves exceptional performance while significantly reducing the computational cost and training time. The result is a versatile and reliable solution that not only produces accurate image captions but also generalizes well across various tasks. Experience the power of Echo Sense and witness firsthand the remarkable capabilities of the Transformer Model Architecture."""
-
-interface = gr.Interface(
- fn=predict,
- inputs=inputs,
- outputs=outputs,
- title="",
- description=description,
- article=article,
- theme="grass",
- font=[
- gr.themes.GoogleFont("Open Sans"),
- "ui-sans-serif",
- "system-ui",
- "sans-serif",
- ],
-)
-interface.launch()
\ No newline at end of file
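
A side note on the interface removed above: it uses the legacy `gr.inputs` namespace, which was deprecated in Gradio 3 and removed in Gradio 4. The following is a hedged sketch of the same wiring with current component names; the stub `predict` stands in for the BLIP + gTTS captioning function defined in that file, and the component choices are illustrative assumptions, not the original author's code.

```python
# Hedged sketch: the same interface wired with current Gradio component names
# (the legacy gr.inputs namespace used above was removed in Gradio 4.x).
import gradio as gr

def predict(image):
    # Placeholder for the BLIP + gTTS captioning function in the deleted app
    # above; it returns (caption_text, path_to_mp3).
    return "a caption", None

demo = gr.Interface(
    fn=predict,
    inputs=gr.Image(type="pil", label="Upload any Image"),
    outputs=[gr.Textbox(label="Captions"), gr.Audio(type="filepath", label="audio")],
)

if __name__ == "__main__":
    demo.launch()
```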
diff --git a/spaces/Sapiensia/diffuse-the-rest/build/_app/immutable/assets/+page-376b236d.css b/spaces/Sapiensia/diffuse-the-rest/build/_app/immutable/assets/+page-376b236d.css
deleted file mode 100644
index 54f1eed0ee54d701018006d3764fc3323df69aa7..0000000000000000000000000000000000000000
--- a/spaces/Sapiensia/diffuse-the-rest/build/_app/immutable/assets/+page-376b236d.css
+++ /dev/null
@@ -1 +0,0 @@
-span[contenteditable].svelte-1wfa7x9:empty:before{content:var(--placeholder);color:#9ca3af}
diff --git a/spaces/SarthakSidhant/Go-Cattle/diseases/bovine besnoitiosis.md b/spaces/SarthakSidhant/Go-Cattle/diseases/bovine besnoitiosis.md
deleted file mode 100644
index acc020d23baf7857d45a91376bc5d59b1ff35e7f..0000000000000000000000000000000000000000
--- a/spaces/SarthakSidhant/Go-Cattle/diseases/bovine besnoitiosis.md
+++ /dev/null
@@ -1,38 +0,0 @@
-## Bovine besnoitiosis
-
-**Information**: Bovine besnoitiosis is a parasitic disease of cattle caused by a protozoan parasite called Besnoitia besnoiti. The parasite is spread through the bite of infected biting flies, such as the stable fly (Stomoxys calcitrans) and the horn fly (Haematobia irritans).
-
-**Symptoms**
-
-The symptoms of bovine besnoitiosis can vary depending on the severity of the infection and the animal's individual immune response. Some infected cattle may show no symptoms at all, while others may develop a range of symptoms, including:
-
-* Fever
-* Depression
-* Weight loss
-* Anemia
-* Enlarged lymph nodes
-* Lameness
-* Skin lesions
-* Abortion
-* Death
-
-**Remedies**
-
-There is no specific treatment for bovine besnoitiosis. Treatment is usually supportive and may include:
-
-* Administering fluids and electrolytes
-* Treating secondary bacterial infections
-
-**Causes**
-
-Bovine besnoitiosis is caused by a protozoan parasite called Besnoitia besnoiti. The parasite is spread through the bite of infected biting flies, such as the stable fly (Stomoxys calcitrans) and the horn fly (Haematobia irritans).
-
-**Prevention**
-
-There is no vaccine available for bovine besnoitiosis. However, there are some preventive measures that can be taken to reduce the risk of infection, such as:
-
-* Controlling biting flies
-* Vaccinating cattle against other diseases that can weaken the immune system, such as bovine viral diarrhea virus (BVDV) and rotavirus
-* Testing cattle for bovine besnoitiosis
-* Isolating infected animals from healthy animals
-* Treating contaminated feed and water
diff --git a/spaces/Saturdays/ClassificationPeripheralBloodCell/about_pj.py b/spaces/Saturdays/ClassificationPeripheralBloodCell/about_pj.py
deleted file mode 100644
index bb2006e2661450dcb05a23d3dd748306c547a44c..0000000000000000000000000000000000000000
--- a/spaces/Saturdays/ClassificationPeripheralBloodCell/about_pj.py
+++ /dev/null
@@ -1,89 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-Created on Tue Dec 27 16:16:06 2022
-
-@author: Usuario
-"""
-import streamlit as st
-import imagen_subida as ims
-
-
-#ABout the project!
-#def add_bg_from_url():
-# st.markdown(
-# f"""
-#
-# """,
-# unsafe_allow_html=True
-# )
-
-#add_bg_from_url()
-
-def textito(idioma):
-
- if idioma == 1:
- st.title('About the project')
- container = st.container()
- st.markdown('This project on Peripheral Blood Cell Classification has been made by Silvia García, María Ortiz and Jorge González (more information in the About us button) as the Final Master\'s Thesis of the 3rd edition of the master\'s degree in Deep Learning from SaturdaysAI.', unsafe_allow_html=True)
-
- st.markdown('In this project, attention has been focused on automating the classification of peripheral blood cells using the Transfer Learning methodology, which consists of taking a pre-trained artificial intelligence model, in this case the vgg19 model, and training it with an image dataset composed of 8 different classes (basophils, eosinophils, erythroblasts, immature granulocytes, lymphocytes, monocytes, neutrophils, and platelets) of different cell types.', unsafe_allow_html=True)
- st.markdown('The vgg19 pre-trained network architecture is a variant of the vgg model, consisting of 19 layers (16 convolutional and 3 fully connected), plus 5 MaxPool layers and one Softmax layer. The following image represents the structure of this network. This confusion matrix indicates the accuracy of the model when classifying cell types. As can be seen, the vgg19 model predicts the different images with great accuracy.', unsafe_allow_html=True)
-
-
- st.markdown('Tensorflow Projector (https://projector.tensorflow.org/) is a visual tool that allows us to interact with and analyze multidimensional data (embeddings) and project them into a two- or three-dimensional space. Each embedding is represented by a point with a certain position in space, and these points form clusters based on a similarity score. Thanks to this tool, we can observe how the model distinguishes the different classes (ig, leukocytes, etc.), and where it has the greatest problems distinguishing them, shown by points of one class appearing inside a cluster of a different class.', unsafe_allow_html=True)
- st.markdown('Dimensionality reduction methods such as t-stochastic neighbor embedding (t-SNE) allow us to visualize our embeddings in three dimensions, constructing a probability distribution over pairs of embeddings in space such that the most similar ones are more likely to be included in the same cluster, reducing the dimensionality of the sample. As can be seen in this figure, there are various insertions of certain groups within clusters belonging to other classes. In this case, the model has the most trouble giving a correct classification when dealing with neutrophils and immature granulocytes. Other notable insertions are erythroblasts, which are confused with platelets, neutrophils with basophils, and immature granulocytes with monocytes. Even so, the precision of the model when classifying the different cell types is very high.', unsafe_allow_html=True)
-
-
-
- else:
- st.title('Acerca del proyecto')
- container = st.container()
- #text_ini = '**Este trabajo de clasificación de células sanguíneas periféricas es un proyecto realizado por Silvia García, María Ortiz y Jorge González (más información en el apartado *Sobre nosotros*), para el Trabajo de Fin de Máster de la tercera edición del máster en Deep Learning de SaturdaysAI.**'
- st.markdown('Este trabajo de clasificación de células sanguíneas periféricas es un proyecto realizado por Silvia García, María Ortiz y Jorge González (más información en el apartado Sobre nosotros), para el Trabajo de Fin de Máster de la tercera edición del máster en Deep Learning de SaturdaysAI.', unsafe_allow_html=True)
-
- st.markdown('En este proyecto, se ha centrado la atención a la automatización de la clasificación de células sanguíneas periféricas utilizando la metodología de Transfer Learning, la cual consiste en utilizar un modelo de inteligencia artificial pre-entrenado, en este caso el modelo vgg19, y entrenarlo con un dataset de imágenes compuesto por 8 clases diferentes (basófilos, eosinófilos, eritroblastos, granulocitos inmaduros, linfocitos, monocitos, neutrófilos y plaquetas) de diferentes tipos celulares.', unsafe_allow_html=True)
- st.markdown('La arquitectura de red pre-entrenada vgg19; una variante del modelo vgg, que consta de 19 capas (16 de convolución y 3 capas conectadas, 5 capas de MaxPool y una de Softmax). La siguiente imagen representa la estructura de esta red:', unsafe_allow_html=True)
-
- #st.write(text_ini)
- # text1 = 'En este proyecto, se ha centrado la atención a la automatización de la clasificación de células sanguíneas periféricas utilizando la metodología de *Transfer Learning*, la cual consiste en utilizar un modelo de inteligencia artificial pre-entrenado, en este caso el modelo *vgg19*, y entrenarlo con un dataset de imágenes compuesto por 8 clases diferentes (basófilos, eosinófilos, eritroblastos, granulocitos inmaduros, linfocitos, monocitos, neutrófilos y plaquetas) de diferentes tipos celulares.'
- # = 'La arquitectura de red pre-entrenada *vgg19*; una variante del modelo *vgg*, que consta de 19 capas (16 de convolución y 3 capas conectadas, 5 capas de MaxPool y una de Softmax). La siguiente imagen representa la estructura de esta red:'
- # st.write(text1)
- #st.write(text2)
- st.image('./images/vgg19.png', use_column_width= True)
- st.markdown('Los resultados obtenidos, fueron bastante prometedores con un porcentaje de precisión en la clasificación superior al 99% en todas las clases.', unsafe_allow_html=True)
-
- #text3 = 'Los resultados obtenidos, fueron bastante prometedores con un porcentaje de precisión en la clasificación superior al 99% en todas las clases.'
- #st.write(text3)
- st.image('./images/confusion_matrix.png', use_column_width= True)
- st.markdown('Esta matriz de confusión nos indica la precisión del modelo a la hora de clasificar los tipos celulares. Como se puede observar, el modelo vgg19 predice con gran exactitud las diferentes imágenes.', unsafe_allow_html=True)
-
- st.markdown('Tensorflow Projector (https://projector.tensorflow.org/) es una herramienta visual que nos permite interactuar y analizar datos multidimensionales (embeddings) y proyectarlos en un espacio bi o tridimensional. Cada embedding es representado por un punto que tiene una posición determinada en el espacio y estos formarán determinados clusters basándose en una puntuación de similitud. Gracias a esta herramienta, somos capaces de observar cómo el modelo es capaz de distinguir las diferentes clases (ig, leucocitos, etc), y dónde tiene los mayores problemas para distinguirlas mediante la aparición de ciertos puntos de diferentes clases dentro de un cluster de una clase diferente.', unsafe_allow_html=True)
- st.markdown('Métodos de reducción de dimensionalidad como t-stochastic neighbor embedding (t-SNE) nos permiten visualizar nuestros embeddings de manera tridimensional, construyendo una distribución de probabilidad sobre parejas de embeddings en el espacio, de forma que los más similares son más probables de incluirse en un mismo cluster, reduciendo la dimensionalidad de la muestra.', unsafe_allow_html=True)
-
- #text4 = 'Esta matriz de confusión nos indica la precisión del modelo a la hora de clasificar los tipos celulares. Como se puede observar, el modelo *vgg19* predice con gran exactitud las diferentes imágenes.'
- #st.write(text4)
- #text5 = 'Tensorflow Projector (https://projector.tensorflow.org/) es una herramienta visual que nos permite interactuar y analizar datos multidimensionales (embeddings) y proyectarlos en un espacio bi o tridimensional. Cada embedding es representado por un punto que tiene una posición determinada en el espacio y estos formarán determinados clusters basándose en una puntuación de similitud. Gracias a esta herramienta, somos capaces de observar cómo el modelo es capaz de distinguir las diferentes clases (ig, leucocitos, etc), y dónde tiene los mayores problemas para distinguirlas mediante la aparición de ciertos puntos de diferentes clases dentro de un cluster de una clase diferente. '
- #st.write(text5)
- #text6 = 'Métodos de reducción de dimensionalidad como t-stochastic neighbor embedding (t-SNE) nos permiten visualizar nuestros embeddings de manera tridimensional, construyendo una distribución de probabilidad sobre parejas de embeddings en el espacio, de forma que los más similares son más probables de incluirse en un mismo cluster, reduciendo la dimensionalidad de la muestra. '
- #st.write(text6)
- st.image('./images/tensor.png', use_column_width= True)
- st.markdown('Como se puede observar en esta figura, existen diversas inserciones de ciertos grupos dentro de clusters pertenecientes a otras clases. En este caso, el modelo se encuentra más confuso dando una clasificación correcta cuando se trata de neutrófilos y granulocitos inmaduros. Otras inserciones destacables son los eritroblastos, que son confundidos con plaquetas, los neutrófilos con basófilos, y los granulocitos inmaduros con monocitos. Aun así, la precisión del modelo a la hora de clasificar los diferentes tipos celulares es muy alta.', unsafe_allow_html=True)
-
- #text7 = 'Como se puede observar en esta figura, existen diversas inserciones de ciertos grupos dentro de clusters pertenecientes a otras clases. En este caso, el modelo se encuentra más confuso dando una clasificación correcta cuando se trata de neutrófilos y granulocitos inmaduros. Otras inserciones destacables son los eritroblastos, que son confundidos con plaquetas, los neutrófilos con basófilos, y los granulocitos inmaduros con monocitos. Aun así, la precisión del modelo a la hora de clasificar los diferentes tipos celulares es muy alta.'
- #st.write(text7)
\ No newline at end of file
diff --git a/spaces/SeViLA/SeViLA/lavis/models/blip2_models/modeling_t5.py b/spaces/SeViLA/SeViLA/lavis/models/blip2_models/modeling_t5.py
deleted file mode 100644
index 10e4d56f2c21b0cbe639e0f568bd352a6cb76351..0000000000000000000000000000000000000000
--- a/spaces/SeViLA/SeViLA/lavis/models/blip2_models/modeling_t5.py
+++ /dev/null
@@ -1,2063 +0,0 @@
-# coding=utf-8
-# Copyright 2018 Mesh TensorFlow authors, T5 Authors and HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" PyTorch T5 model."""
-
-
-import copy
-import math
-import os
-import warnings
-from typing import Optional, Tuple, Union
-
-import torch
-from torch import nn
-from torch.nn import CrossEntropyLoss
-from torch.utils.checkpoint import checkpoint
-
-from transformers.activations import ACT2FN
-from transformers.modeling_outputs import (
- BaseModelOutput,
- BaseModelOutputWithPastAndCrossAttentions,
- Seq2SeqLMOutput,
- Seq2SeqModelOutput,
-)
-from transformers.modeling_utils import PreTrainedModel
-from transformers.pytorch_utils import (
- ALL_LAYERNORM_LAYERS,
- find_pruneable_heads_and_indices,
- prune_linear_layer,
-)
-from transformers.utils import (
- DUMMY_INPUTS,
- DUMMY_MASK,
- add_start_docstrings,
- add_start_docstrings_to_model_forward,
- is_torch_fx_proxy,
- logging,
- replace_return_docstrings,
-)
-from transformers.utils.model_parallel_utils import assert_device_map, get_device_map
-from transformers.models.t5.configuration_t5 import T5Config
-
-
-logger = logging.get_logger(__name__)
-
-_CONFIG_FOR_DOC = "T5Config"
-_TOKENIZER_FOR_DOC = "T5Tokenizer"
-_CHECKPOINT_FOR_DOC = "t5-small"
-
-####################################################
-# This dict contains ids and associated url
-# for the pretrained weights provided with the models
-####################################################
-T5_PRETRAINED_MODEL_ARCHIVE_LIST = [
- "t5-small",
- "t5-base",
- "t5-large",
- "t5-3b",
- "t5-11b",
- # See all T5 models at https://huggingface.co/models?filter=t5
-]
-
-
-####################################################
-# This is a conversion method from TF 1.0 to PyTorch
-# More details: https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28
-####################################################
-def load_tf_weights_in_t5(model, config, tf_checkpoint_path):
- """Load tf checkpoints in a pytorch model."""
- try:
- import re
-
- import numpy as np
- import tensorflow as tf
- except ImportError:
- logger.error(
- "Loading a TensorFlow model in PyTorch, requires TensorFlow to be installed. Please see "
- "https://www.tensorflow.org/install/ for installation instructions."
- )
- raise
- tf_path = os.path.abspath(tf_checkpoint_path)
- logger.info(f"Converting TensorFlow checkpoint from {tf_path}")
- # Load weights from TF model
- init_vars = tf.train.list_variables(tf_path)
- names = []
- tf_weights = {}
- for name, shape in init_vars:
- logger.info(f"Loading TF weight {name} with shape {shape}")
- array = tf.train.load_variable(tf_path, name)
- names.append(name)
- tf_weights[name] = array
-
- for txt_name in names:
- name = txt_name.split("/")
- # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculate m and v
- # which are not required for using pretrained model
- if any(
- n
- in [
- "adam_v",
- "adam_m",
- "AdamWeightDecayOptimizer",
- "AdamWeightDecayOptimizer_1",
- "global_step",
- ]
- for n in name
- ):
- logger.info(f"Skipping {'/'.join(name)}")
- tf_weights.pop(txt_name, None)
- continue
- if "_slot_" in name[-1]:
- logger.info(f"Skipping {'/'.join(name)}")
- tf_weights.pop(txt_name, None)
- continue
- pointer = model
- array = tf_weights[txt_name]
-
- for m_name in name:
- if re.fullmatch(r"[A-Za-z]+_\d+", m_name):
- scope_names = re.split(r"_(\d+)", m_name)
- else:
- scope_names = [m_name]
- if scope_names[0] in ["kernel", "scale", "embedding"]:
- pointer = getattr(pointer, "weight")
- elif scope_names[0] == "self_attention":
- pointer = getattr(pointer, "layer")
- pointer = pointer[0]
- elif scope_names[0] == "enc_dec_attention":
- pointer = getattr(pointer, "layer")
- pointer = pointer[1]
- elif scope_names[0] == "dense_relu_dense":
- pointer = getattr(pointer, "layer")
- pointer = pointer[2]
- elif scope_names[0] == "rms_norm":
- if hasattr(pointer, "layer_norm"):
- pointer = getattr(pointer, "layer_norm")
- elif hasattr(pointer, "final_layer_norm"):
- pointer = getattr(pointer, "final_layer_norm")
- elif scope_names[0] == "scale":
- pointer = getattr(pointer, "weight")
- elif scope_names[0] == "output_bias" or scope_names[0] == "beta":
- pointer = getattr(pointer, "bias")
- elif scope_names[0] == "squad":
- pointer = getattr(pointer, "classifier")
- elif scope_names[0] == "decoder" and name[1] == "logits":
- continue
- elif scope_names[0] == "logits":
- pointer = getattr(pointer, "lm_head")
- elif (
- scope_names[0] == "wi"
- and len(scope_names) > 1
- and scope_names[1].isdigit()
- ):
- pointer = getattr(pointer, f"wi_{scope_names[1]}")
- continue
- else:
- try:
- pointer = getattr(pointer, scope_names[0])
- except AttributeError:
- logger.info(f"Skipping {'/'.join(name)}")
- continue
- if len(scope_names) >= 2:
- num = int(scope_names[1])
- pointer = pointer[num]
- if scope_names[0] not in ["kernel", "scale", "embedding"]:
- pointer = getattr(pointer, "weight")
- if scope_names[0] != "embedding":
- logger.info(f"Transposing numpy weight of shape {array.shape} for {name}")
- array = np.transpose(array)
- try:
- assert (
- pointer.shape == array.shape
- ), f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched"
- except AssertionError as e:
- e.args += (pointer.shape, array.shape)
- raise
- logger.info(f"Initialize PyTorch weight {name}")
- pointer.data = torch.from_numpy(array.astype(np.float32))
- tf_weights.pop(txt_name, None)
-
- logger.info(f"Weights not copied to PyTorch model: {', '.join(tf_weights.keys())}.")
- return model
-
-
-####################################################
-# PyTorch Models are constructed by sub-classing
-# - torch.nn.Module for the layers and
-# - PreTrainedModel for the models (it-self a sub-class of nn.Module)
-####################################################
-PARALLELIZE_DOCSTRING = r"""
- This is an experimental feature and is subject to change at a moment's notice.
-
- Uses a device map to distribute attention modules of the model across several devices. If no device map is given,
- it will evenly distribute blocks across all devices.
-
- Args:
- device_map (`Dict[int, list]`, optional, defaults to None):
- A dictionary that maps attention modules to devices. Note that the embedding module and LMHead are always
- automatically mapped to the first device (for esoteric reasons). That means that the first device should
- have fewer attention modules mapped to it than other devices. For reference, the t5 models have the
- following number of attention modules:
-
- - t5-small: 6
- - t5-base: 12
- - t5-large: 24
- - t5-3b: 24
- - t5-11b: 24
-
- Example:
-
- ```python
- # Here is an example of a device map on a machine with 4 GPUs using t5-3b, which has a total of 24 attention modules:
- model = T5ForConditionalGeneration.from_pretrained("t5-3b")
- device_map = {
- 0: [0, 1, 2],
- 1: [3, 4, 5, 6, 7, 8, 9],
- 2: [10, 11, 12, 13, 14, 15, 16],
- 3: [17, 18, 19, 20, 21, 22, 23],
- }
- model.parallelize(device_map)
- ```
-"""
-DEPARALLELIZE_DOCSTRING = r"""
- Moves the model to cpu from a model parallel state.
-
- Example:
-
- ```python
- # On a 4 GPU machine with t5-3b:
- model = T5ForConditionalGeneration.from_pretrained("t5-3b")
- device_map = {
- 0: [0, 1, 2],
- 1: [3, 4, 5, 6, 7, 8, 9],
- 2: [10, 11, 12, 13, 14, 15, 16],
- 3: [17, 18, 19, 20, 21, 22, 23],
- }
- model.parallelize(device_map) # Splits the model across several devices
- model.deparallelize() # Put the model back on cpu and cleans memory by calling torch.cuda.empty_cache()
- ```
-"""
-
-
-class T5LayerNorm(nn.Module):
- def __init__(self, hidden_size, eps=1e-6):
- """
- Construct a layernorm module in the T5 style. No bias and no subtraction of mean.
- """
- super().__init__()
- self.weight = nn.Parameter(torch.ones(hidden_size))
- self.variance_epsilon = eps
-
- def forward(self, hidden_states):
-
- # T5 uses a layer_norm which only scales and doesn't shift, which is also known as Root Mean
- # Square Layer Normalization https://arxiv.org/abs/1910.07467 thus variance is calculated
- # w/o mean and there is no bias. Additionally we want to make sure that the accumulation for
- # half-precision inputs is done in fp32
-
- variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
- hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
-
- # convert into half-precision if necessary
- if self.weight.dtype in [torch.float16, torch.bfloat16]:
- hidden_states = hidden_states.to(self.weight.dtype)
-
- return self.weight * hidden_states
-
-
-try:
- from apex.normalization import FusedRMSNorm
-
- T5LayerNorm = FusedRMSNorm # noqa
-
- logger.info(
- "Discovered apex.normalization.FusedRMSNorm - will use it instead of T5LayerNorm"
- )
-except ImportError:
- # using the normal T5LayerNorm
- pass
-except Exception:
- logger.warning("discovered apex but it failed to load, falling back to T5LayerNorm")
- pass
-
-ALL_LAYERNORM_LAYERS.append(T5LayerNorm)
-
-
-class T5DenseActDense(nn.Module):
- def __init__(self, config: T5Config):
- super().__init__()
- self.wi = nn.Linear(config.d_model, config.d_ff, bias=False)
- self.wo = nn.Linear(config.d_ff, config.d_model, bias=False)
- self.dropout = nn.Dropout(config.dropout_rate)
- self.act = ACT2FN[config.dense_act_fn]
-
- def forward(self, hidden_states):
- hidden_states = self.wi(hidden_states)
- hidden_states = self.act(hidden_states)
- hidden_states = self.dropout(hidden_states)
- hidden_states = self.wo(hidden_states)
- return hidden_states
-
-
-class T5DenseGatedActDense(nn.Module):
- def __init__(self, config: T5Config):
- super().__init__()
- self.wi_0 = nn.Linear(config.d_model, config.d_ff, bias=False)
- self.wi_1 = nn.Linear(config.d_model, config.d_ff, bias=False)
- self.wo = nn.Linear(config.d_ff, config.d_model, bias=False)
- self.dropout = nn.Dropout(config.dropout_rate)
- self.act = ACT2FN[config.dense_act_fn]
-
- def forward(self, hidden_states):
- hidden_gelu = self.act(self.wi_0(hidden_states))
- hidden_linear = self.wi_1(hidden_states)
- hidden_states = hidden_gelu * hidden_linear
- hidden_states = self.dropout(hidden_states)
- hidden_states = self.wo(hidden_states)
- return hidden_states
-
-
-class T5LayerFF(nn.Module):
- def __init__(self, config: T5Config):
- super().__init__()
- if config.is_gated_act:
- self.DenseReluDense = T5DenseGatedActDense(config)
- else:
- self.DenseReluDense = T5DenseActDense(config)
-
- self.layer_norm = T5LayerNorm(config.d_model, eps=config.layer_norm_epsilon)
- self.dropout = nn.Dropout(config.dropout_rate)
-
- def forward(self, hidden_states):
- forwarded_states = self.layer_norm(hidden_states)
- forwarded_states = self.DenseReluDense(forwarded_states)
- hidden_states = hidden_states + self.dropout(forwarded_states)
- return hidden_states
-
-
-class T5Attention(nn.Module):
- def __init__(self, config: T5Config, has_relative_attention_bias=False):
- super().__init__()
- self.is_decoder = config.is_decoder
- self.has_relative_attention_bias = has_relative_attention_bias
- self.relative_attention_num_buckets = config.relative_attention_num_buckets
- self.relative_attention_max_distance = config.relative_attention_max_distance
- self.d_model = config.d_model
- self.key_value_proj_dim = config.d_kv
- self.n_heads = config.num_heads
- self.dropout = config.dropout_rate
- self.inner_dim = self.n_heads * self.key_value_proj_dim
-
- # Mesh TensorFlow initialization to avoid scaling before softmax
- self.q = nn.Linear(self.d_model, self.inner_dim, bias=False)
- self.k = nn.Linear(self.d_model, self.inner_dim, bias=False)
- self.v = nn.Linear(self.d_model, self.inner_dim, bias=False)
- self.o = nn.Linear(self.inner_dim, self.d_model, bias=False)
-
- if self.has_relative_attention_bias:
- self.relative_attention_bias = nn.Embedding(
- self.relative_attention_num_buckets, self.n_heads
- )
- self.pruned_heads = set()
- self.gradient_checkpointing = False
-
- def prune_heads(self, heads):
- if len(heads) == 0:
- return
- heads, index = find_pruneable_heads_and_indices(
- heads, self.n_heads, self.key_value_proj_dim, self.pruned_heads
- )
- # Prune linear layers
- self.q = prune_linear_layer(self.q, index)
- self.k = prune_linear_layer(self.k, index)
- self.v = prune_linear_layer(self.v, index)
- self.o = prune_linear_layer(self.o, index, dim=1)
- # Update hyper params
- self.n_heads = self.n_heads - len(heads)
- self.inner_dim = self.key_value_proj_dim * self.n_heads
- self.pruned_heads = self.pruned_heads.union(heads)
-
- @staticmethod
- def _relative_position_bucket(
- relative_position, bidirectional=True, num_buckets=32, max_distance=128
- ):
- """
- Adapted from Mesh Tensorflow:
- https://github.com/tensorflow/mesh/blob/0cb87fe07da627bf0b7e60475d59f95ed6b5be3d/mesh_tensorflow/transformer/transformer_layers.py#L593
-
- Translate relative position to a bucket number for relative attention. The relative position is defined as
- memory_position - query_position, i.e. the distance in tokens from the attending position to the attended-to
- position. If bidirectional=False, then positive relative positions are invalid. We use smaller buckets for
- small absolute relative_position and larger buckets for larger absolute relative_positions. All relative
- positions >=max_distance map to the same bucket. All relative positions <=-max_distance map to the same bucket.
- This should allow for more graceful generalization to longer sequences than the model has been trained on
-
- Args:
- relative_position: an int32 Tensor
- bidirectional: a boolean - whether the attention is bidirectional
- num_buckets: an integer
- max_distance: an integer
-
- Returns:
- a Tensor with the same shape as relative_position, containing int32 values in the range [0, num_buckets)
- """
- relative_buckets = 0
- if bidirectional:
- num_buckets //= 2
- relative_buckets += (relative_position > 0).to(torch.long) * num_buckets
- relative_position = torch.abs(relative_position)
- else:
- relative_position = -torch.min(
- relative_position, torch.zeros_like(relative_position)
- )
- # now relative_position is in the range [0, inf)
-
- # half of the buckets are for exact increments in positions
- max_exact = num_buckets // 2
- is_small = relative_position < max_exact
-
- # The other half of the buckets are for logarithmically bigger bins in positions up to max_distance
- relative_position_if_large = max_exact + (
- torch.log(relative_position.float() / max_exact)
- / math.log(max_distance / max_exact)
- * (num_buckets - max_exact)
- ).to(torch.long)
- relative_position_if_large = torch.min(
- relative_position_if_large,
- torch.full_like(relative_position_if_large, num_buckets - 1),
- )
-
- relative_buckets += torch.where(
- is_small, relative_position, relative_position_if_large
- )
- return relative_buckets
-
- def compute_bias(self, query_length, key_length, device=None):
- """Compute binned relative position bias"""
- if device is None:
- device = self.relative_attention_bias.weight.device
- context_position = torch.arange(query_length, dtype=torch.long, device=device)[
- :, None
- ]
- memory_position = torch.arange(key_length, dtype=torch.long, device=device)[
- None, :
- ]
- relative_position = (
- memory_position - context_position
- ) # shape (query_length, key_length)
- relative_position_bucket = self._relative_position_bucket(
- relative_position, # shape (query_length, key_length)
- bidirectional=(not self.is_decoder),
- num_buckets=self.relative_attention_num_buckets,
- max_distance=self.relative_attention_max_distance,
- )
- values = self.relative_attention_bias(
- relative_position_bucket
- ) # shape (query_length, key_length, num_heads)
- values = values.permute([2, 0, 1]).unsqueeze(
- 0
- ) # shape (1, num_heads, query_length, key_length)
- return values
-
- def forward(
- self,
- hidden_states,
- mask=None,
- key_value_states=None,
- position_bias=None,
- past_key_value=None,
- layer_head_mask=None,
- query_length=None,
- use_cache=False,
- output_attentions=False,
- ):
- """
- Self-attention (if key_value_states is None) or attention over source sentence (provided by key_value_states).
- """
- # Input is (batch_size, seq_length, dim)
- # Mask is (batch_size, key_length) (non-causal) or (batch_size, key_length, key_length)
- # past_key_value[0] is (batch_size, n_heads, q_len - 1, dim_per_head)
- batch_size, seq_length = hidden_states.shape[:2]
-
- real_seq_length = seq_length
-
- if past_key_value is not None:
- assert (
- len(past_key_value) == 2
- ), f"past_key_value should have 2 past states: keys and values. Got { len(past_key_value)} past states"
- real_seq_length += (
- past_key_value[0].shape[2] if query_length is None else query_length
- )
-
- key_length = (
- real_seq_length if key_value_states is None else key_value_states.shape[1]
- )
-
- def shape(states):
- """projection"""
- return states.view(
- batch_size, -1, self.n_heads, self.key_value_proj_dim
- ).transpose(1, 2)
-
- def unshape(states):
- """reshape"""
- return (
- states.transpose(1, 2).contiguous().view(batch_size, -1, self.inner_dim)
- )
-
- def project(hidden_states, proj_layer, key_value_states, past_key_value):
- """projects hidden states correctly to key/query states"""
- if key_value_states is None:
- # self-attn
- # (batch_size, n_heads, seq_length, dim_per_head)
- hidden_states = shape(proj_layer(hidden_states))
- elif past_key_value is None:
- # cross-attn
- # (batch_size, n_heads, seq_length, dim_per_head)
- hidden_states = shape(proj_layer(key_value_states))
-
- if past_key_value is not None:
- if key_value_states is None:
- # self-attn
- # (batch_size, n_heads, key_length, dim_per_head)
- hidden_states = torch.cat([past_key_value, hidden_states], dim=2)
- else:
- # cross-attn
- hidden_states = past_key_value
- return hidden_states
-
- # get query states
- query_states = shape(
- self.q(hidden_states)
- ) # (batch_size, n_heads, seq_length, dim_per_head)
-
- # get key/value states
- key_states = project(
- hidden_states,
- self.k,
- key_value_states,
- past_key_value[0] if past_key_value is not None else None,
- )
- value_states = project(
- hidden_states,
- self.v,
- key_value_states,
- past_key_value[1] if past_key_value is not None else None,
- )
-
- # compute scores
- scores = torch.matmul(
- query_states, key_states.transpose(3, 2)
- ) # equivalent of torch.einsum("bnqd,bnkd->bnqk", query_states, key_states), compatible with onnx op>9
-
- if position_bias is None:
- if not self.has_relative_attention_bias:
- position_bias = torch.zeros(
- (1, self.n_heads, real_seq_length, key_length),
- device=scores.device,
- dtype=scores.dtype,
- )
- if self.gradient_checkpointing and self.training:
- position_bias.requires_grad = True
- else:
- position_bias = self.compute_bias(
- real_seq_length, key_length, device=scores.device
- )
-
- # if key and values are already calculated
- # we want only the last query position bias
- if past_key_value is not None:
- position_bias = position_bias[:, :, -hidden_states.size(1) :, :]
-
- if mask is not None:
- position_bias = (
- position_bias + mask
- ) # (batch_size, n_heads, seq_length, key_length)
-
- if self.pruned_heads:
- mask = torch.ones(position_bias.shape[1])
- mask[list(self.pruned_heads)] = 0
- position_bias_masked = position_bias[:, mask.bool()]
- else:
- position_bias_masked = position_bias
-
- scores += position_bias_masked
- attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(
- scores
- ) # (batch_size, n_heads, seq_length, key_length)
- attn_weights = nn.functional.dropout(
- attn_weights, p=self.dropout, training=self.training
- ) # (batch_size, n_heads, seq_length, key_length)
-
- # Mask heads if we want to
- if layer_head_mask is not None:
- attn_weights = attn_weights * layer_head_mask
-
- attn_output = unshape(
- torch.matmul(attn_weights, value_states)
- ) # (batch_size, seq_length, dim)
- attn_output = self.o(attn_output)
-
- present_key_value_state = (
- (key_states, value_states) if (self.is_decoder and use_cache) else None
- )
- outputs = (attn_output,) + (present_key_value_state,) + (position_bias,)
-
- if output_attentions:
- outputs = outputs + (attn_weights,)
- return outputs
-
-
-class T5LayerSelfAttention(nn.Module):
- def __init__(self, config, has_relative_attention_bias=False):
- super().__init__()
- self.SelfAttention = T5Attention(
- config, has_relative_attention_bias=has_relative_attention_bias
- )
- self.layer_norm = T5LayerNorm(config.d_model, eps=config.layer_norm_epsilon)
- self.dropout = nn.Dropout(config.dropout_rate)
-
- def forward(
- self,
- hidden_states,
- attention_mask=None,
- position_bias=None,
- layer_head_mask=None,
- past_key_value=None,
- use_cache=False,
- output_attentions=False,
- ):
- normed_hidden_states = self.layer_norm(hidden_states)
- attention_output = self.SelfAttention(
- normed_hidden_states,
- mask=attention_mask,
- position_bias=position_bias,
- layer_head_mask=layer_head_mask,
- past_key_value=past_key_value,
- use_cache=use_cache,
- output_attentions=output_attentions,
- )
- hidden_states = hidden_states + self.dropout(attention_output[0])
- outputs = (hidden_states,) + attention_output[
- 1:
- ] # add attentions if we output them
- return outputs
-
-
-class T5LayerCrossAttention(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.EncDecAttention = T5Attention(config, has_relative_attention_bias=False)
- self.layer_norm = T5LayerNorm(config.d_model, eps=config.layer_norm_epsilon)
- self.dropout = nn.Dropout(config.dropout_rate)
-
- def forward(
- self,
- hidden_states,
- key_value_states,
- attention_mask=None,
- position_bias=None,
- layer_head_mask=None,
- past_key_value=None,
- use_cache=False,
- query_length=None,
- output_attentions=False,
- ):
- normed_hidden_states = self.layer_norm(hidden_states)
- attention_output = self.EncDecAttention(
- normed_hidden_states,
- mask=attention_mask,
- key_value_states=key_value_states,
- position_bias=position_bias,
- layer_head_mask=layer_head_mask,
- past_key_value=past_key_value,
- use_cache=use_cache,
- query_length=query_length,
- output_attentions=output_attentions,
- )
- layer_output = hidden_states + self.dropout(attention_output[0])
- outputs = (layer_output,) + attention_output[
- 1:
- ] # add attentions if we output them
- return outputs
-
-
-class T5Block(nn.Module):
- def __init__(self, config, has_relative_attention_bias=False):
- super().__init__()
- self.is_decoder = config.is_decoder
- self.layer = nn.ModuleList()
- self.layer.append(
- T5LayerSelfAttention(
- config, has_relative_attention_bias=has_relative_attention_bias
- )
- )
- if self.is_decoder:
- self.layer.append(T5LayerCrossAttention(config))
-
- self.layer.append(T5LayerFF(config))
-
- def forward(
- self,
- hidden_states,
- attention_mask=None,
- position_bias=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- encoder_decoder_position_bias=None,
- layer_head_mask=None,
- cross_attn_layer_head_mask=None,
- past_key_value=None,
- use_cache=False,
- output_attentions=False,
- return_dict=True,
- ):
-
- if past_key_value is not None:
- if not self.is_decoder:
- logger.warning(
- "`past_key_values` is passed to the encoder. Please make sure this is intended."
- )
- expected_num_past_key_values = 2 if encoder_hidden_states is None else 4
-
- if len(past_key_value) != expected_num_past_key_values:
- raise ValueError(
- f"There should be {expected_num_past_key_values} past states. "
- f"{'2 (past / key) for cross attention. ' if expected_num_past_key_values == 4 else ''}"
- f"Got {len(past_key_value)} past key / value states"
- )
-
- self_attn_past_key_value = past_key_value[:2]
- cross_attn_past_key_value = past_key_value[2:]
- else:
- self_attn_past_key_value, cross_attn_past_key_value = None, None
-
- self_attention_outputs = self.layer[0](
- hidden_states,
- attention_mask=attention_mask,
- position_bias=position_bias,
- layer_head_mask=layer_head_mask,
- past_key_value=self_attn_past_key_value,
- use_cache=use_cache,
- output_attentions=output_attentions,
- )
- hidden_states, present_key_value_state = self_attention_outputs[:2]
- attention_outputs = self_attention_outputs[
- 2:
- ] # Keep self-attention outputs and relative position weights
-
- # clamp inf values to enable fp16 training
- if hidden_states.dtype == torch.float16 and torch.isinf(hidden_states).any():
- clamp_value = torch.finfo(hidden_states.dtype).max - 1000
- hidden_states = torch.clamp(
- hidden_states, min=-clamp_value, max=clamp_value
- )
-
- do_cross_attention = self.is_decoder and encoder_hidden_states is not None
- if do_cross_attention:
- # the actual query length is unknown for cross attention
- # if using past key value states. Need to inject it here
- if present_key_value_state is not None:
- query_length = present_key_value_state[0].shape[2]
- else:
- query_length = None
-
- cross_attention_outputs = self.layer[1](
- hidden_states,
- key_value_states=encoder_hidden_states,
- attention_mask=encoder_attention_mask,
- position_bias=encoder_decoder_position_bias,
- layer_head_mask=cross_attn_layer_head_mask,
- past_key_value=cross_attn_past_key_value,
- query_length=query_length,
- use_cache=use_cache,
- output_attentions=output_attentions,
- )
- hidden_states = cross_attention_outputs[0]
-
- # clamp inf values to enable fp16 training
- if (
- hidden_states.dtype == torch.float16
- and torch.isinf(hidden_states).any()
- ):
- clamp_value = torch.finfo(hidden_states.dtype).max - 1000
- hidden_states = torch.clamp(
- hidden_states, min=-clamp_value, max=clamp_value
- )
-
- # Combine self attn and cross attn key value states
- if present_key_value_state is not None:
- present_key_value_state = (
- present_key_value_state + cross_attention_outputs[1]
- )
-
- # Keep cross-attention outputs and relative position weights
- attention_outputs = attention_outputs + cross_attention_outputs[2:]
-
- # Apply Feed Forward layer
- hidden_states = self.layer[-1](hidden_states)
-
- # clamp inf values to enable fp16 training
- if hidden_states.dtype == torch.float16 and torch.isinf(hidden_states).any():
- clamp_value = torch.finfo(hidden_states.dtype).max - 1000
- hidden_states = torch.clamp(
- hidden_states, min=-clamp_value, max=clamp_value
- )
-
- outputs = (hidden_states,)
-
- if use_cache:
- outputs = outputs + (present_key_value_state,) + attention_outputs
- else:
- outputs = outputs + attention_outputs
-
- return outputs # hidden-states, present_key_value_states, (self-attention position bias), (self-attention weights), (cross-attention position bias), (cross-attention weights)
-
-
-class T5PreTrainedModel(PreTrainedModel):
- """
- An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
- models.
- """
-
- config_class = T5Config
- load_tf_weights = load_tf_weights_in_t5
- base_model_prefix = "transformer"
- is_parallelizable = True
- supports_gradient_checkpointing = True
- _no_split_modules = ["T5Block"]
-
- @property
- def dummy_inputs(self):
- input_ids = torch.tensor(DUMMY_INPUTS)
- input_mask = torch.tensor(DUMMY_MASK)
- dummy_inputs = {
- "decoder_input_ids": input_ids,
- "input_ids": input_ids,
- "decoder_attention_mask": input_mask,
- }
- return dummy_inputs
-
- def _init_weights(self, module):
- """Initialize the weights"""
- factor = (
- self.config.initializer_factor
- ) # Used for testing weights initialization
- if isinstance(module, T5LayerNorm):
- module.weight.data.fill_(factor * 1.0)
- elif isinstance(module, (T5Model, T5ForConditionalGeneration, T5EncoderModel)):
- # Mesh TensorFlow embeddings initialization
- # See https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/layers.py#L1624
- module.shared.weight.data.normal_(mean=0.0, std=factor * 1.0)
- if hasattr(module, "lm_head") and not self.config.tie_word_embeddings:
- module.lm_head.weight.data.normal_(mean=0.0, std=factor * 1.0)
- elif isinstance(module, T5DenseActDense):
- # Mesh TensorFlow FF initialization
- # See https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/transformer/transformer_layers.py#L56
- # and https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/layers.py#L89
- module.wi.weight.data.normal_(
- mean=0.0, std=factor * ((self.config.d_model) ** -0.5)
- )
- if hasattr(module.wi, "bias") and module.wi.bias is not None:
- module.wi.bias.data.zero_()
- module.wo.weight.data.normal_(
- mean=0.0, std=factor * ((self.config.d_ff) ** -0.5)
- )
- if hasattr(module.wo, "bias") and module.wo.bias is not None:
- module.wo.bias.data.zero_()
- elif isinstance(module, T5DenseGatedActDense):
- module.wi_0.weight.data.normal_(
- mean=0.0, std=factor * ((self.config.d_model) ** -0.5)
- )
- if hasattr(module.wi_0, "bias") and module.wi_0.bias is not None:
- module.wi_0.bias.data.zero_()
- module.wi_1.weight.data.normal_(
- mean=0.0, std=factor * ((self.config.d_model) ** -0.5)
- )
- if hasattr(module.wi_1, "bias") and module.wi_1.bias is not None:
- module.wi_1.bias.data.zero_()
- module.wo.weight.data.normal_(
- mean=0.0, std=factor * ((self.config.d_ff) ** -0.5)
- )
- if hasattr(module.wo, "bias") and module.wo.bias is not None:
- module.wo.bias.data.zero_()
- elif isinstance(module, T5Attention):
- # Mesh TensorFlow attention initialization to avoid scaling before softmax
- # See https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/transformer/attention.py#L136
- d_model = self.config.d_model
- key_value_proj_dim = self.config.d_kv
- n_heads = self.config.num_heads
- module.q.weight.data.normal_(
- mean=0.0, std=factor * ((d_model * key_value_proj_dim) ** -0.5)
- )
- module.k.weight.data.normal_(mean=0.0, std=factor * (d_model**-0.5))
- module.v.weight.data.normal_(mean=0.0, std=factor * (d_model**-0.5))
- module.o.weight.data.normal_(
- mean=0.0, std=factor * ((n_heads * key_value_proj_dim) ** -0.5)
- )
- if module.has_relative_attention_bias:
- module.relative_attention_bias.weight.data.normal_(
- mean=0.0, std=factor * ((d_model) ** -0.5)
- )
-
- def _set_gradient_checkpointing(self, module, value=False):
- if isinstance(module, (T5Attention, T5Stack)):
- module.gradient_checkpointing = value
-
- def _shift_right(self, input_ids):
- decoder_start_token_id = self.config.decoder_start_token_id
- pad_token_id = self.config.pad_token_id
-
- assert decoder_start_token_id is not None, (
- "self.model.config.decoder_start_token_id has to be defined. In T5 it is usually set to the pad_token_id."
- " See T5 docs for more information"
- )
-
- # shift inputs to the right
- if is_torch_fx_proxy(input_ids):
- # Item assignment is not supported natively for proxies.
- shifted_input_ids = torch.full(
- input_ids.shape[:-1] + (1,), decoder_start_token_id
- )
- shifted_input_ids = torch.cat(
- [shifted_input_ids, input_ids[..., :-1]], dim=-1
- )
- else:
- shifted_input_ids = input_ids.new_zeros(input_ids.shape)
- shifted_input_ids[..., 1:] = input_ids[..., :-1].clone()
- shifted_input_ids[..., 0] = decoder_start_token_id
-
- assert (
- pad_token_id is not None
- ), "self.model.config.pad_token_id has to be defined."
- # replace possible -100 values in labels by `pad_token_id`
- shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id)
-
- return shifted_input_ids
-
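For reference, here is a minimal standalone sketch of what `_shift_right` produces for T5, where `pad_token_id` and `decoder_start_token_id` are both 0: the labels are shifted one position to the right, the start token is prepended, and any remaining `-100` ignore markers are replaced with the pad id. The token ids below are made up for illustration.

```python
import torch

labels = torch.tensor([[8774, 32099, 1, -100]])  # hypothetical label ids; -100 marks an ignored position

pad_token_id = 0                # T5's pad id
decoder_start_token_id = 0      # T5 starts decoding from the pad token

shifted = labels.new_zeros(labels.shape)
shifted[..., 1:] = labels[..., :-1].clone()           # shift right by one position
shifted[..., 0] = decoder_start_token_id              # prepend the start token
shifted.masked_fill_(shifted == -100, pad_token_id)   # -100 is only a loss marker, not a real token

print(shifted)  # tensor([[0, 8774, 32099, 1]])
```

This mirrors the regular branch above; the `torch.fx` branch builds the same result with `torch.full` and `torch.cat` because proxies do not support item assignment.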
-
-class T5Stack(T5PreTrainedModel):
- def __init__(self, config, embed_tokens=None):
- super().__init__(config)
-
- self.embed_tokens = embed_tokens
- self.is_decoder = config.is_decoder
-
- self.block = nn.ModuleList(
- [
- T5Block(config, has_relative_attention_bias=bool(i == 0))
- for i in range(config.num_layers)
- ]
- )
- self.final_layer_norm = T5LayerNorm(
- config.d_model, eps=config.layer_norm_epsilon
- )
- self.dropout = nn.Dropout(config.dropout_rate)
-
- # Initialize weights and apply final processing
- self.post_init()
- # Model parallel
- self.model_parallel = False
- self.device_map = None
- self.gradient_checkpointing = False
-
- @add_start_docstrings(PARALLELIZE_DOCSTRING)
- def parallelize(self, device_map=None):
- # Check validity of device_map
- self.device_map = (
- get_device_map(len(self.block), range(torch.cuda.device_count()))
- if device_map is None
- else device_map
- )
- assert_device_map(self.device_map, len(self.block))
- self.model_parallel = True
- self.first_device = (
- "cpu"
- if "cpu" in self.device_map.keys()
- else "cuda:" + str(min(self.device_map.keys()))
- )
- self.last_device = "cuda:" + str(max(self.device_map.keys()))
- # Load onto devices
- for k, v in self.device_map.items():
- for layer in v:
- cuda_device = "cuda:" + str(k)
- self.block[layer] = self.block[layer].to(cuda_device)
-
- # Set embed_tokens to first layer
- self.embed_tokens = self.embed_tokens.to(self.first_device)
- # Set final layer norm to last device
- self.final_layer_norm = self.final_layer_norm.to(self.last_device)
-
- @add_start_docstrings(PARALLELIZE_DOCSTRING)
- def deparallelize(self):
- self.model_parallel = False
- self.device_map = None
- self.first_device = "cpu"
- self.last_device = "cpu"
- for i in range(len(self.block)):
- self.block[i] = self.block[i].to("cpu")
- self.embed_tokens = self.embed_tokens.to("cpu")
- self.final_layer_norm = self.final_layer_norm.to("cpu")
- torch.cuda.empty_cache()
-
- def get_input_embeddings(self):
- return self.embed_tokens
-
- def set_input_embeddings(self, new_embeddings):
- self.embed_tokens = new_embeddings
-
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- inputs_embeds=None,
- head_mask=None,
- cross_attn_head_mask=None,
- past_key_values=None,
- use_cache=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- # Model parallel
- if self.model_parallel:
- torch.cuda.set_device(self.first_device)
- self.embed_tokens = self.embed_tokens.to(self.first_device)
- use_cache = use_cache if use_cache is not None else self.config.use_cache
- output_attentions = (
- output_attentions
- if output_attentions is not None
- else self.config.output_attentions
- )
- output_hidden_states = (
- output_hidden_states
- if output_hidden_states is not None
- else self.config.output_hidden_states
- )
- return_dict = (
- return_dict if return_dict is not None else self.config.use_return_dict
- )
-
- if input_ids is not None and inputs_embeds is not None:
- err_msg_prefix = "decoder_" if self.is_decoder else ""
- raise ValueError(
- f"You cannot specify both {err_msg_prefix}input_ids and {err_msg_prefix}inputs_embeds at the same time"
- )
- elif input_ids is not None:
- input_shape = input_ids.size()
- input_ids = input_ids.view(-1, input_shape[-1])
- elif inputs_embeds is not None:
- input_shape = inputs_embeds.size()[:-1]
- else:
- err_msg_prefix = "decoder_" if self.is_decoder else ""
- raise ValueError(
- f"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds"
- )
-
- if inputs_embeds is None:
- assert (
- self.embed_tokens is not None
- ), "You have to initialize the model with valid token embeddings"
- inputs_embeds = self.embed_tokens(input_ids)
-
- batch_size, seq_length = input_shape
-
- # required mask seq length can be calculated via length of past
- mask_seq_length = (
- past_key_values[0][0].shape[2] + seq_length
- if past_key_values is not None
- else seq_length
- )
-
- if use_cache is True:
- assert (
- self.is_decoder
- ), f"`use_cache` can only be set to `True` if {self} is used as a decoder"
-
- if attention_mask is None:
- attention_mask = torch.ones(
- batch_size, mask_seq_length, device=inputs_embeds.device
- )
- if (
- self.is_decoder
- and encoder_attention_mask is None
- and encoder_hidden_states is not None
- ):
- encoder_seq_length = encoder_hidden_states.shape[1]
- encoder_attention_mask = torch.ones(
- batch_size,
- encoder_seq_length,
- device=inputs_embeds.device,
- dtype=torch.long,
- )
-
- # initialize past_key_values with `None` if past does not exist
- if past_key_values is None:
- past_key_values = [None] * len(self.block)
-
- # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
- # ourselves in which case we just need to make it broadcastable to all heads.
- extended_attention_mask = self.get_extended_attention_mask(
- attention_mask, input_shape
- )
-
- # If a 2D or 3D attention mask is provided for the cross-attention
- # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
- if self.is_decoder and encoder_hidden_states is not None:
- (
- encoder_batch_size,
- encoder_sequence_length,
- _,
- ) = encoder_hidden_states.size()
- encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
- if encoder_attention_mask is None:
- encoder_attention_mask = torch.ones(
- encoder_hidden_shape, device=inputs_embeds.device
- )
- encoder_extended_attention_mask = self.invert_attention_mask(
- encoder_attention_mask
- )
- else:
- encoder_extended_attention_mask = None
-
- # Prepare head mask if needed
- head_mask = self.get_head_mask(head_mask, self.config.num_layers)
- cross_attn_head_mask = self.get_head_mask(
- cross_attn_head_mask, self.config.num_layers
- )
- present_key_value_states = () if use_cache else None
- all_hidden_states = () if output_hidden_states else None
- all_attentions = () if output_attentions else None
- all_cross_attentions = () if (output_attentions and self.is_decoder) else None
- position_bias = None
- encoder_decoder_position_bias = None
-
- hidden_states = self.dropout(inputs_embeds)
-
- for i, (layer_module, past_key_value) in enumerate(
- zip(self.block, past_key_values)
- ):
- layer_head_mask = head_mask[i]
- cross_attn_layer_head_mask = cross_attn_head_mask[i]
- # Model parallel
- if self.model_parallel:
- torch.cuda.set_device(hidden_states.device)
- # Ensure that attention_mask is always on the same device as hidden_states
- if attention_mask is not None:
- attention_mask = attention_mask.to(hidden_states.device)
- if position_bias is not None:
- position_bias = position_bias.to(hidden_states.device)
- if encoder_hidden_states is not None:
- encoder_hidden_states = encoder_hidden_states.to(
- hidden_states.device
- )
- if encoder_extended_attention_mask is not None:
- encoder_extended_attention_mask = (
- encoder_extended_attention_mask.to(hidden_states.device)
- )
- if encoder_decoder_position_bias is not None:
- encoder_decoder_position_bias = encoder_decoder_position_bias.to(
- hidden_states.device
- )
- if layer_head_mask is not None:
- layer_head_mask = layer_head_mask.to(hidden_states.device)
- if cross_attn_layer_head_mask is not None:
- cross_attn_layer_head_mask = cross_attn_layer_head_mask.to(
- hidden_states.device
- )
- if output_hidden_states:
- all_hidden_states = all_hidden_states + (hidden_states,)
-
- if self.gradient_checkpointing and self.training:
- if use_cache:
- logger.warning(
- "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
- )
- use_cache = False
-
- def create_custom_forward(module):
- def custom_forward(*inputs):
- return tuple(module(*inputs, use_cache, output_attentions))
-
- return custom_forward
-
- layer_outputs = checkpoint(
- create_custom_forward(layer_module),
- hidden_states,
- extended_attention_mask,
- position_bias,
- encoder_hidden_states,
- encoder_extended_attention_mask,
- encoder_decoder_position_bias,
- layer_head_mask,
- cross_attn_layer_head_mask,
- None, # past_key_value is always None with gradient checkpointing
- )
- else:
- layer_outputs = layer_module(
- hidden_states,
- attention_mask=extended_attention_mask,
- position_bias=position_bias,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_extended_attention_mask,
- encoder_decoder_position_bias=encoder_decoder_position_bias,
- layer_head_mask=layer_head_mask,
- cross_attn_layer_head_mask=cross_attn_layer_head_mask,
- past_key_value=past_key_value,
- use_cache=use_cache,
- output_attentions=output_attentions,
- )
-
- # layer_outputs is a tuple with:
- # hidden-states, key-value-states, (self-attention position bias), (self-attention weights), (cross-attention position bias), (cross-attention weights)
- if use_cache is False:
- layer_outputs = layer_outputs[:1] + (None,) + layer_outputs[1:]
-
- hidden_states, present_key_value_state = layer_outputs[:2]
-
- # We share the position biases between the layers - the first layer stores them
- # layer_outputs = hidden-states, key-value-states (self-attention position bias), (self-attention weights),
- # (cross-attention position bias), (cross-attention weights)
- position_bias = layer_outputs[2]
- if self.is_decoder and encoder_hidden_states is not None:
- encoder_decoder_position_bias = layer_outputs[
- 4 if output_attentions else 3
- ]
- # append next layer key value states
- if use_cache:
- present_key_value_states = present_key_value_states + (
- present_key_value_state,
- )
-
- if output_attentions:
- all_attentions = all_attentions + (layer_outputs[3],)
- if self.is_decoder:
- all_cross_attentions = all_cross_attentions + (layer_outputs[5],)
-
- # Model Parallel: If it's the last layer for that device, put things on the next device
- if self.model_parallel:
- for k, v in self.device_map.items():
- if i == v[-1] and "cuda:" + str(k) != self.last_device:
- hidden_states = hidden_states.to("cuda:" + str(k + 1))
-
- hidden_states = self.final_layer_norm(hidden_states)
- hidden_states = self.dropout(hidden_states)
-
- # Add last layer
- if output_hidden_states:
- all_hidden_states = all_hidden_states + (hidden_states,)
-
- if not return_dict:
- return tuple(
- v
- for v in [
- hidden_states,
- present_key_value_states,
- all_hidden_states,
- all_attentions,
- all_cross_attentions,
- ]
- if v is not None
- )
- return BaseModelOutputWithPastAndCrossAttentions(
- last_hidden_state=hidden_states,
- past_key_values=present_key_value_states,
- hidden_states=all_hidden_states,
- attentions=all_attentions,
- cross_attentions=all_cross_attentions,
- )
-
-
-T5_START_DOCSTRING = r"""
-
- The T5 model was proposed in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text
- Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
- Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It's an encoder-decoder transformer pre-trained in a
- text-to-text denoising generative setting.
-
- This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
- library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
- etc.)
-
- This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
- Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
- and behavior.
-
- Parameters:
- config ([`T5Config`]): Model configuration class with all the parameters of the model.
- Initializing with a config file does not load the weights associated with the model, only the
- configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
-"""
-
-T5_INPUTS_DOCSTRING = r"""
- Args:
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
- Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
- should be able to pad the inputs on both the right and the left.
-
- Indices can be obtained using [`T5Tokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
-
- [What are input IDs?](../glossary#input-ids)
-
- To learn more about how to prepare `input_ids` for pretraining, take a look at [T5 Training](./t5#training).
- attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
-
- [What are attention masks?](../glossary#attention-mask)
- decoder_input_ids (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
- Indices of decoder input sequence tokens in the vocabulary.
-
- Indices can be obtained using [`T5Tokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
-
- [What are decoder input IDs?](../glossary#decoder-input-ids)
-
- T5 uses the `pad_token_id` as the starting token for `decoder_input_ids` generation. If `past_key_values`
- is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`).
-
- To know more on how to prepare `decoder_input_ids` for pretraining take a look at [T5
- Training](./t5#training).
- decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
- Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also
- be used by default.
- head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*):
- Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in `[0,
- 1]`:
-
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
-
- decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*):
- Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0,
- 1]`:
-
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
-
- cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*):
- Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in
- `[0, 1]`:
-
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
-
- encoder_outputs (`tuple(tuple(torch.FloatTensor))`, *optional*):
- Tuple consists of (`last_hidden_state`, `optional`: *hidden_states*, `optional`: *attentions*)
- `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)` is a sequence of hidden states at
- the output of the last layer of the encoder. Used in the cross-attention of the decoder.
- past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
- Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
-
- If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
- don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
- `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
- Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
- is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
- model's internal embedding lookup matrix.
- decoder_inputs_embeds (`torch.FloatTensor` of shape `(batch_size, target_sequence_length, hidden_size)`, *optional*):
- Optionally, instead of passing `decoder_input_ids` you can choose to directly pass an embedded
- representation. If `past_key_values` is used, optionally only the last `decoder_inputs_embeds` have to be
- input (see `past_key_values`). This is useful if you want more control over how to convert
- `decoder_input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
-
- If `decoder_input_ids` and `decoder_inputs_embeds` are both unset, `decoder_inputs_embeds` takes the value
- of `inputs_embeds`.
-
- use_cache (`bool`, *optional*):
- If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
- `past_key_values`).
-
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
- tensors for more detail.
- output_hidden_states (`bool`, *optional*):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
- more detail.
- return_dict (`bool`, *optional*):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
-"""
-
-T5_ENCODER_INPUTS_DOCSTRING = r"""
- Args:
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
- Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
- should be able to pad the inputs on both the right and the left.
-
- Indices can be obtained using [`T5Tokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
-
- To learn more about how to prepare `input_ids` for pretraining, take a look at [T5 Training](./t5#training).
- attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
-
- [What are attention masks?](../glossary#attention-mask)
- head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*):
- Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
-
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
-
- inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
- Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
- is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
- model's internal embedding lookup matrix.
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
- tensors for more detail.
- output_hidden_states (`bool`, *optional*):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
- more detail.
- return_dict (`bool`, *optional*):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
-"""
-
-# Warning message for FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask.
-# The single leading underscore avoids Python's class-private name mangling when this constant is referenced inside the model classes below.
-_HEAD_MASK_WARNING_MSG = """
-The input argument `head_mask` was split into two arguments `head_mask` and `decoder_head_mask`. Currently,
-`decoder_head_mask` is set to copy `head_mask`, but this feature is deprecated and will be removed in future versions.
-If you do not want to use any `decoder_head_mask` now, please set `decoder_head_mask = torch.ones(num_layers,
-num_heads)`.
-"""
-
-
-@add_start_docstrings(
- "The bare T5 Model transformer outputting raw hidden-states without any specific head on top.",
- T5_START_DOCSTRING,
-)
-class T5Model(T5PreTrainedModel):
- _keys_to_ignore_on_load_missing = [
- r"encoder.embed_tokens.weight",
- r"decoder.embed_tokens.weight",
- ]
- _keys_to_ignore_on_load_unexpected = [
- r"decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight",
- ]
-
- def __init__(self, config: T5Config):
- super().__init__(config)
- self.shared = nn.Embedding(config.vocab_size, config.d_model)
-
- encoder_config = copy.deepcopy(config)
- encoder_config.is_decoder = False
- encoder_config.use_cache = False
- encoder_config.is_encoder_decoder = False
- self.encoder = T5Stack(encoder_config, self.shared)
-
- decoder_config = copy.deepcopy(config)
- decoder_config.is_decoder = True
- decoder_config.is_encoder_decoder = False
- decoder_config.num_layers = config.num_decoder_layers
- self.decoder = T5Stack(decoder_config, self.shared)
-
- # Initialize weights and apply final processing
- self.post_init()
-
- # Model parallel
- self.model_parallel = False
- self.device_map = None
-
- @add_start_docstrings(PARALLELIZE_DOCSTRING)
- def parallelize(self, device_map=None):
- self.device_map = (
- get_device_map(len(self.encoder.block), range(torch.cuda.device_count()))
- if device_map is None
- else device_map
- )
- assert_device_map(self.device_map, len(self.encoder.block))
- self.encoder.parallelize(self.device_map)
- self.decoder.parallelize(self.device_map)
- self.model_parallel = True
-
- @add_start_docstrings(DEPARALLELIZE_DOCSTRING)
- def deparallelize(self):
- self.encoder.deparallelize()
- self.decoder.deparallelize()
- self.encoder = self.encoder.to("cpu")
- self.decoder = self.decoder.to("cpu")
- self.model_parallel = False
- self.device_map = None
- torch.cuda.empty_cache()
-
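A hedged sketch of how `parallelize` and `deparallelize` are typically driven. It assumes a machine with at least two CUDA devices and the 6-block `t5-small` checkpoint; the `device_map` maps a GPU index to the block indices it should host:

```python
from transformers import T5Model

model = T5Model.from_pretrained("t5-small")  # 6 encoder blocks and 6 decoder blocks

device_map = {0: [0, 1, 2], 1: [3, 4, 5]}    # GPU 0 hosts blocks 0-2, GPU 1 hosts blocks 3-5
model.parallelize(device_map)                # spread the encoder and decoder stacks across the GPUs
# ... run forward passes as usual ...
model.deparallelize()                        # move everything back to CPU and free CUDA memory
```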
- def get_input_embeddings(self):
- return self.shared
-
- def set_input_embeddings(self, new_embeddings):
- self.shared = new_embeddings
- self.encoder.set_input_embeddings(new_embeddings)
- self.decoder.set_input_embeddings(new_embeddings)
-
- def get_encoder(self):
- return self.encoder
-
- def get_decoder(self):
- return self.decoder
-
- def _prune_heads(self, heads_to_prune):
- """
- Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base
- class PreTrainedModel
- """
- for layer, heads in heads_to_prune.items():
- self.encoder.block[layer].layer[0].SelfAttention.prune_heads(heads)
-
- @add_start_docstrings_to_model_forward(T5_INPUTS_DOCSTRING)
- @replace_return_docstrings(
- output_type=Seq2SeqModelOutput, config_class=_CONFIG_FOR_DOC
- )
- def forward(
- self,
- input_ids: Optional[torch.LongTensor] = None,
- attention_mask: Optional[torch.FloatTensor] = None,
- decoder_input_ids: Optional[torch.LongTensor] = None,
- decoder_attention_mask: Optional[torch.BoolTensor] = None,
- head_mask: Optional[torch.FloatTensor] = None,
- decoder_head_mask: Optional[torch.FloatTensor] = None,
- cross_attn_head_mask: Optional[torch.Tensor] = None,
- encoder_outputs: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
- past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
- inputs_embeds: Optional[torch.Tensor] = None,
- decoder_inputs_embeds: Optional[torch.Tensor] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple[torch.FloatTensor], Seq2SeqModelOutput]:
- r"""
- Returns:
-
- Example:
-
- ```python
- >>> from transformers import T5Tokenizer, T5Model
-
- >>> tokenizer = T5Tokenizer.from_pretrained("t5-small")
- >>> model = T5Model.from_pretrained("t5-small")
-
- >>> input_ids = tokenizer(
- ... "Studies have been shown that owning a dog is good for you", return_tensors="pt"
- ... ).input_ids # Batch size 1
- >>> decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1
-
- >>> # preprocess: Prepend decoder_input_ids with start token which is pad token for T5Model.
- >>> # This is not needed for torch's T5ForConditionalGeneration as it does this internally using labels arg.
- >>> decoder_input_ids = model._shift_right(decoder_input_ids)
-
- >>> # forward pass
- >>> outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
- >>> last_hidden_states = outputs.last_hidden_state
- ```"""
- use_cache = use_cache if use_cache is not None else self.config.use_cache
- return_dict = (
- return_dict if return_dict is not None else self.config.use_return_dict
- )
-
- # FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask
- if head_mask is not None and decoder_head_mask is None:
- if self.config.num_layers == self.config.num_decoder_layers:
- warnings.warn(_HEAD_MASK_WARNING_MSG, FutureWarning)
- decoder_head_mask = head_mask
-
- # Encode if needed (training, first prediction pass)
- if encoder_outputs is None:
- encoder_outputs = self.encoder(
- input_ids=input_ids,
- attention_mask=attention_mask,
- inputs_embeds=inputs_embeds,
- head_mask=head_mask,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
- elif return_dict and not isinstance(encoder_outputs, BaseModelOutput):
- encoder_outputs = BaseModelOutput(
- last_hidden_state=encoder_outputs[0],
- hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
- attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
- )
-
- hidden_states = encoder_outputs[0]
-
- # Set device for model parallelism
- if self.model_parallel:
- torch.cuda.set_device(self.decoder.first_device)
- hidden_states = hidden_states.to(self.decoder.first_device)
- if decoder_input_ids is not None:
- decoder_input_ids = decoder_input_ids.to(self.decoder.first_device)
- if attention_mask is not None:
- attention_mask = attention_mask.to(self.decoder.first_device)
- if decoder_attention_mask is not None:
- decoder_attention_mask = decoder_attention_mask.to(
- self.decoder.first_device
- )
-
- # Decode
- decoder_outputs = self.decoder(
- input_ids=decoder_input_ids,
- attention_mask=decoder_attention_mask,
- inputs_embeds=decoder_inputs_embeds,
- past_key_values=past_key_values,
- encoder_hidden_states=hidden_states,
- encoder_attention_mask=attention_mask,
- head_mask=decoder_head_mask,
- cross_attn_head_mask=cross_attn_head_mask,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- if not return_dict:
- return decoder_outputs + encoder_outputs
-
- return Seq2SeqModelOutput(
- last_hidden_state=decoder_outputs.last_hidden_state,
- past_key_values=decoder_outputs.past_key_values,
- decoder_hidden_states=decoder_outputs.hidden_states,
- decoder_attentions=decoder_outputs.attentions,
- cross_attentions=decoder_outputs.cross_attentions,
- encoder_last_hidden_state=encoder_outputs.last_hidden_state,
- encoder_hidden_states=encoder_outputs.hidden_states,
- encoder_attentions=encoder_outputs.attentions,
- )
-
-
-@add_start_docstrings(
- """T5 Model with a `language modeling` head on top.""", T5_START_DOCSTRING
-)
-class T5ForConditionalGeneration(T5PreTrainedModel):
- _keys_to_ignore_on_load_missing = [
- r"encoder.embed_tokens.weight",
- r"decoder.embed_tokens.weight",
- r"lm_head.weight",
- ]
- _keys_to_ignore_on_load_unexpected = [
- r"decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight",
- ]
-
- def __init__(self, config: T5Config):
- super().__init__(config)
- self.model_dim = config.d_model
-
- self.shared = nn.Embedding(config.vocab_size, config.d_model)
-
- encoder_config = copy.deepcopy(config)
- encoder_config.is_decoder = False
- encoder_config.use_cache = False
- encoder_config.is_encoder_decoder = False
- self.encoder = T5Stack(encoder_config, self.shared)
-
- decoder_config = copy.deepcopy(config)
- decoder_config.is_decoder = True
- decoder_config.is_encoder_decoder = False
- decoder_config.num_layers = config.num_decoder_layers
- self.decoder = T5Stack(decoder_config, self.shared)
-
- self.lm_head = nn.Linear(config.d_model, config.vocab_size, bias=False)
-
- # Initialize weights and apply final processing
- self.post_init()
-
- # Model parallel
- self.model_parallel = False
- self.device_map = None
-
- @add_start_docstrings(PARALLELIZE_DOCSTRING)
- def parallelize(self, device_map=None):
- self.device_map = (
- get_device_map(len(self.encoder.block), range(torch.cuda.device_count()))
- if device_map is None
- else device_map
- )
- assert_device_map(self.device_map, len(self.encoder.block))
- self.encoder.parallelize(self.device_map)
- self.decoder.parallelize(self.device_map)
- self.lm_head = self.lm_head.to(self.decoder.first_device)
- self.model_parallel = True
-
- @add_start_docstrings(DEPARALLELIZE_DOCSTRING)
- def deparallelize(self):
- self.encoder.deparallelize()
- self.decoder.deparallelize()
- self.encoder = self.encoder.to("cpu")
- self.decoder = self.decoder.to("cpu")
- self.lm_head = self.lm_head.to("cpu")
- self.model_parallel = False
- self.device_map = None
- torch.cuda.empty_cache()
-
- def get_input_embeddings(self):
- return self.shared
-
- def set_input_embeddings(self, new_embeddings):
- self.shared = new_embeddings
- self.encoder.set_input_embeddings(new_embeddings)
- self.decoder.set_input_embeddings(new_embeddings)
-
- def set_output_embeddings(self, new_embeddings):
- self.lm_head = new_embeddings
-
- def get_output_embeddings(self):
- return self.lm_head
-
- def get_encoder(self):
- return self.encoder
-
- def get_decoder(self):
- return self.decoder
-
- @add_start_docstrings_to_model_forward(T5_INPUTS_DOCSTRING)
- @replace_return_docstrings(
- output_type=Seq2SeqLMOutput, config_class=_CONFIG_FOR_DOC
- )
- def forward(
- self,
- input_ids: Optional[torch.LongTensor] = None,
- attention_mask: Optional[torch.FloatTensor] = None,
- decoder_input_ids: Optional[torch.LongTensor] = None,
- decoder_attention_mask: Optional[torch.BoolTensor] = None,
- head_mask: Optional[torch.FloatTensor] = None,
- decoder_head_mask: Optional[torch.FloatTensor] = None,
- cross_attn_head_mask: Optional[torch.Tensor] = None,
- encoder_outputs: Optional[Tuple[Tuple[torch.Tensor]]] = None,
- past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- decoder_inputs_embeds: Optional[torch.FloatTensor] = None,
- labels: Optional[torch.LongTensor] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- reduction: Optional[str] = "mean",
- ) -> Union[Tuple[torch.FloatTensor], Seq2SeqLMOutput]:
- r"""
- labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Labels for computing the language modeling loss. Indices should be in `[-100, 0, ...,
- config.vocab_size - 1]`. All labels set to `-100` are ignored (masked); the loss is only computed for
- labels in `[0, ..., config.vocab_size - 1]`
-
- Returns:
-
- Examples:
-
- ```python
- >>> from transformers import T5Tokenizer, T5ForConditionalGeneration
-
- >>> tokenizer = T5Tokenizer.from_pretrained("t5-small")
- >>> model = T5ForConditionalGeneration.from_pretrained("t5-small")
-
- >>> # training
- >>> input_ids = tokenizer("The walks in park", return_tensors="pt").input_ids
- >>> labels = tokenizer(" cute dog the ", return_tensors="pt").input_ids
- >>> outputs = model(input_ids=input_ids, labels=labels)
- >>> loss = outputs.loss
- >>> logits = outputs.logits
-
- >>> # inference
- >>> input_ids = tokenizer(
- ... "summarize: studies have shown that owning a dog is good for you", return_tensors="pt"
- ... ).input_ids # Batch size 1
- >>> outputs = model.generate(input_ids)
- >>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))
- >>> # studies have shown that owning a dog is good for you.
- ```"""
- use_cache = use_cache if use_cache is not None else self.config.use_cache
- return_dict = (
- return_dict if return_dict is not None else self.config.use_return_dict
- )
-
- # FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask
- if head_mask is not None and decoder_head_mask is None:
- if self.config.num_layers == self.config.num_decoder_layers:
- warnings.warn(_HEAD_MASK_WARNING_MSG, FutureWarning)
- decoder_head_mask = head_mask
-
- # Encode if needed (training, first prediction pass)
- if encoder_outputs is None:
- # Convert encoder inputs in embeddings if needed
- encoder_outputs = self.encoder(
- input_ids=input_ids,
- attention_mask=attention_mask,
- inputs_embeds=inputs_embeds,
- head_mask=head_mask,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
- elif return_dict and not isinstance(encoder_outputs, BaseModelOutput):
- encoder_outputs = BaseModelOutput(
- last_hidden_state=encoder_outputs[0],
- hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
- attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
- )
-
- hidden_states = encoder_outputs[0]
-
- if self.model_parallel:
- torch.cuda.set_device(self.decoder.first_device)
-
- if (
- labels is not None
- and decoder_input_ids is None
- and decoder_inputs_embeds is None
- ):
- # get decoder inputs from shifting lm labels to the right
- decoder_input_ids = self._shift_right(labels)
-
- # Set device for model parallelism
- if self.model_parallel:
- torch.cuda.set_device(self.decoder.first_device)
- hidden_states = hidden_states.to(self.decoder.first_device)
- if decoder_input_ids is not None:
- decoder_input_ids = decoder_input_ids.to(self.decoder.first_device)
- if attention_mask is not None:
- attention_mask = attention_mask.to(self.decoder.first_device)
- if decoder_attention_mask is not None:
- decoder_attention_mask = decoder_attention_mask.to(
- self.decoder.first_device
- )
-
- # Decode
- decoder_outputs = self.decoder(
- input_ids=decoder_input_ids,
- attention_mask=decoder_attention_mask,
- inputs_embeds=decoder_inputs_embeds,
- past_key_values=past_key_values,
- encoder_hidden_states=hidden_states,
- encoder_attention_mask=attention_mask,
- head_mask=decoder_head_mask,
- cross_attn_head_mask=cross_attn_head_mask,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- sequence_output = decoder_outputs[0]
-
- # Set device for model parallelism
- if self.model_parallel:
- torch.cuda.set_device(self.encoder.first_device)
- self.lm_head = self.lm_head.to(self.encoder.first_device)
- sequence_output = sequence_output.to(self.lm_head.weight.device)
-
- if self.config.tie_word_embeddings:
- # Rescale output before projecting on vocab
- # See https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/transformer/transformer.py#L586
- sequence_output = sequence_output * (self.model_dim**-0.5)
-
- lm_logits = self.lm_head(sequence_output)
-
- loss = None
- if labels is not None:
- loss_fct = CrossEntropyLoss(ignore_index=-100, reduction=reduction)
- loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1))
- if reduction == "none":
- loss = loss.view(lm_logits.size(0), -1).sum(1)
-
- if not return_dict:
- output = (lm_logits,) + decoder_outputs[1:] + encoder_outputs
- return ((loss,) + output) if loss is not None else output
-
- return Seq2SeqLMOutput(
- loss=loss,
- logits=lm_logits,
- past_key_values=decoder_outputs.past_key_values,
- decoder_hidden_states=decoder_outputs.hidden_states,
- decoder_attentions=decoder_outputs.attentions,
- cross_attentions=decoder_outputs.cross_attentions,
- encoder_last_hidden_state=encoder_outputs.last_hidden_state,
- encoder_hidden_states=encoder_outputs.hidden_states,
- encoder_attentions=encoder_outputs.attentions,
- )
-
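Unlike the stock transformers implementation, this `forward` accepts a `reduction` argument: with `reduction="none"` the returned loss is the token-summed cross-entropy per example instead of a scalar mean. A hedged sketch of that use (the `modeling_t5` import path is hypothetical and stands for this file; padded label positions are set to `-100` so they are ignored):

```python
from transformers import T5Tokenizer

from modeling_t5 import T5ForConditionalGeneration  # hypothetical import path for the class defined above

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

enc = tokenizer(
    ["translate English to German: Hello", "summarize: A long article."],
    return_tensors="pt",
    padding=True,
)
labels = tokenizer(["Hallo", "A short summary."], return_tensors="pt", padding=True).input_ids
labels[labels == tokenizer.pad_token_id] = -100  # ignore padding positions in the loss

out = model(
    input_ids=enc.input_ids,
    attention_mask=enc.attention_mask,
    labels=labels,
    reduction="none",
)
print(out.loss.shape)  # torch.Size([2]): one summed cross-entropy per example
```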
- def prepare_inputs_for_generation(
- self,
- input_ids,
- past=None,
- attention_mask=None,
- head_mask=None,
- decoder_head_mask=None,
- cross_attn_head_mask=None,
- use_cache=None,
- encoder_outputs=None,
- **kwargs,
- ):
-
- # cut decoder_input_ids if past is used
- if past is not None:
- input_ids = input_ids[:, -1:]
-
- return {
- "decoder_input_ids": input_ids,
- "past_key_values": past,
- "encoder_outputs": encoder_outputs,
- "attention_mask": attention_mask,
- "head_mask": head_mask,
- "decoder_head_mask": decoder_head_mask,
- "cross_attn_head_mask": cross_attn_head_mask,
- "use_cache": use_cache,
- }
-
- def prepare_decoder_input_ids_from_labels(self, labels: torch.Tensor):
- return self._shift_right(labels)
-
- def _reorder_cache(self, past, beam_idx):
- # if decoder past is not included in output
- # speedy decoding is disabled and no need to reorder
- if past is None:
- logger.warning(
- "You might want to consider setting `use_cache=True` to speed up decoding"
- )
- return past
-
- reordered_decoder_past = ()
- for layer_past_states in past:
- # get the correct batch idx from layer past batch dim
- # batch dim of `past` is at 2nd position
- reordered_layer_past_states = ()
- for layer_past_state in layer_past_states:
- # need to set correct `past` for each of the four key / value states
- reordered_layer_past_states = reordered_layer_past_states + (
- layer_past_state.index_select(
- 0, beam_idx.to(layer_past_state.device)
- ),
- )
-
- assert reordered_layer_past_states[0].shape == layer_past_states[0].shape
- assert len(reordered_layer_past_states) == len(layer_past_states)
-
- reordered_decoder_past = reordered_decoder_past + (
- reordered_layer_past_states,
- )
- return reordered_decoder_past
-
-
-@add_start_docstrings(
- "The bare T5 Model transformer outputting encoder's raw hidden-states without any specific head on top.",
- T5_START_DOCSTRING,
-)
-class T5EncoderModel(T5PreTrainedModel):
- authorized_missing_keys = [
- r"encoder.embed_tokens.weight",
- ]
-
- def __init__(self, config: T5Config):
- super().__init__(config)
- self.shared = nn.Embedding(config.vocab_size, config.d_model)
-
- encoder_config = copy.deepcopy(config)
- encoder_config.use_cache = False
- encoder_config.is_encoder_decoder = False
- self.encoder = T5Stack(encoder_config, self.shared)
-
- # Initialize weights and apply final processing
- self.post_init()
-
- # Model parallel
- self.model_parallel = False
- self.device_map = None
-
- @add_start_docstrings(PARALLELIZE_DOCSTRING)
- def parallelize(self, device_map=None):
- self.device_map = (
- get_device_map(len(self.encoder.block), range(torch.cuda.device_count()))
- if device_map is None
- else device_map
- )
- assert_device_map(self.device_map, len(self.encoder.block))
- self.encoder.parallelize(self.device_map)
- self.model_parallel = True
-
- @add_start_docstrings(DEPARALLELIZE_DOCSTRING)
- def deparallelize(self):
- self.encoder.deparallelize()
- self.encoder = self.encoder.to("cpu")
- self.model_parallel = False
- self.device_map = None
- torch.cuda.empty_cache()
-
- def get_input_embeddings(self):
- return self.shared
-
- def set_input_embeddings(self, new_embeddings):
- self.shared = new_embeddings
- self.encoder.set_input_embeddings(new_embeddings)
-
- def get_encoder(self):
- return self.encoder
-
- def _prune_heads(self, heads_to_prune):
- """
- Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base
- class PreTrainedModel
- """
- for layer, heads in heads_to_prune.items():
- self.encoder.block[layer].layer[0].SelfAttention.prune_heads(heads)
-
- @add_start_docstrings_to_model_forward(T5_ENCODER_INPUTS_DOCSTRING)
- @replace_return_docstrings(
- output_type=BaseModelOutput, config_class=_CONFIG_FOR_DOC
- )
- def forward(
- self,
- input_ids: Optional[torch.LongTensor] = None,
- attention_mask: Optional[torch.FloatTensor] = None,
- head_mask: Optional[torch.FloatTensor] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple[torch.FloatTensor], BaseModelOutput]:
- r"""
- Returns:
-
- Example:
-
- ```python
- >>> from transformers import T5Tokenizer, T5EncoderModel
-
- >>> tokenizer = T5Tokenizer.from_pretrained("t5-small")
- >>> model = T5EncoderModel.from_pretrained("t5-small")
- >>> input_ids = tokenizer(
- ... "Studies have been shown that owning a dog is good for you", return_tensors="pt"
- ... ).input_ids # Batch size 1
- >>> outputs = model(input_ids=input_ids)
- >>> last_hidden_states = outputs.last_hidden_state
- ```"""
- return_dict = (
- return_dict if return_dict is not None else self.config.use_return_dict
- )
-
- encoder_outputs = self.encoder(
- input_ids=input_ids,
- attention_mask=attention_mask,
- inputs_embeds=inputs_embeds,
- head_mask=head_mask,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- return encoder_outputs
diff --git a/spaces/SeyedAli/Persian-Speech-Emotion-Detection/README.md b/spaces/SeyedAli/Persian-Speech-Emotion-Detection/README.md
deleted file mode 100644
index 16f5aaadfb9c8ce86f2da3bb2234a723fc3681bf..0000000000000000000000000000000000000000
--- a/spaces/SeyedAli/Persian-Speech-Emotion-Detection/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Persian Speech Emotion Detection
-emoji: 🔊
-colorFrom: yellow
-colorTo: gray
-sdk: gradio
-sdk_version: 3.44.4
-app_file: app.py
-pinned: false
-license: mit
----
-
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Shrikrishna/Stock_Market_Trend_Prediction/README.md b/spaces/Shrikrishna/Stock_Market_Trend_Prediction/README.md
deleted file mode 100644
index ea7a419028181b3014a4010d1ea4de8390b011db..0000000000000000000000000000000000000000
--- a/spaces/Shrikrishna/Stock_Market_Trend_Prediction/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Stock Market Trend Prediction
-emoji: 📈
-colorFrom: purple
-colorTo: red
-sdk: streamlit
-sdk_version: 1.26.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/StarFox7/Llama-2-ko-7B-chat-ggml/README.md b/spaces/StarFox7/Llama-2-ko-7B-chat-ggml/README.md
deleted file mode 100644
index 765b19d57923d49a048b28e96d70503d1fed889f..0000000000000000000000000000000000000000
--- a/spaces/StarFox7/Llama-2-ko-7B-chat-ggml/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Llama 2 Ko 7B Chat Ggml
-emoji: 📈
-colorFrom: purple
-colorTo: pink
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/SudharsanSundar/token_edit_distance/token_edit_distance.py b/spaces/SudharsanSundar/token_edit_distance/token_edit_distance.py
deleted file mode 100644
index d4b2acdff0d1daa65ebe8f327d05cc596e5fbbf4..0000000000000000000000000000000000000000
--- a/spaces/SudharsanSundar/token_edit_distance/token_edit_distance.py
+++ /dev/null
@@ -1,75 +0,0 @@
-import datasets
-import evaluate
-import numpy as np
-from Levenshtein import distance as lev_dist
-
-
-_DESCRIPTION = """
-TokenEditDistance: This is an NLP evaluation metric that records the minimum number of token edits
-(insertions, deletions, and replacements, all weighted equally) to the prediction string in order
-to make it exactly match the reference string. Uses identical logic to Levenshtein Edit Distance,
-except applied to tokens (i.e. individual ints in a list) as opposed to individual characters in a string.
-"""
-
-_CITATION = "Man of a thousand and eight names"
-
-_KWARGS_DESCRIPTION = """
-TokenEditDistance:
-
-Args:
- predictions: list of predictions to score.
- Each prediction should be tokenized into a list of tokens.
- references: list of references/ground truth output to score against.
- Each reference should be tokenized into a list of tokens.
-
-Returns:
- "avg_token_edit_distance": Float, average Token Edit Distance for all inputted predictions and references
- "token_edit_distances": List[Int], the Token Edit Distance for each inputted prediction and reference
-
-Examples:
- >>> token_edit_distance_metric = datasets.load_metric('Token Edit Distance')
- >>> references = [[15, 4243], [100, 10008]]
- >>> predictions = [[15, 4243], [100, 10009]]
- >>> results = token_edit_distance_metric.compute(predictions=predictions, references=references)
- >>> print(results)
- {'avg_token_edit_distance': 0.5, 'token_edit_distances': array([0. 1.])}
-"""
-
-
-class TokenEditDistance(evaluate.Metric):
- def _info(self):
- return evaluate.MetricInfo(
- description=_DESCRIPTION,
- citation=_CITATION,
- inputs_description=_KWARGS_DESCRIPTION,
- features=datasets.Features(
- {
- "predictions": datasets.features.Sequence(datasets.Value("int32")),
- "references": datasets.features.Sequence(datasets.Value("int32")),
- }
- ),
- codebase_urls=[],
- reference_urls=[],
- )
-
- def _compute(self, references, predictions):
- if len(predictions) != len(references):
- raise KeyError(
- "Token Edit Distance: Compute Error: Number of predictions does not match number of references."
- )
-
- edit_dist_arr = np.zeros(len(predictions))
-
- for i in range(len(edit_dist_arr)):
- if len(predictions[i]) != len(references[i]):
- raise KeyError(
- "Token Edit Distance: Compute Error: Prediction length does not match reference length for example" +
- str(i) + " (prediction len: " + str(len(predictions[i])) + ", reference len: " + str(len(references[i])) + ")."
- )
-
- edit_dist_arr[i] = lev_dist(predictions[i], references[i])
-
- return {
- "avg_token_edit_distance": np.mean(edit_dist_arr),
- "token_edit_distances": edit_dist_arr,
- }
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/tests/test_imports.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/tests/test_imports.py
deleted file mode 100644
index 515cd4a8a58ec1116897bfd19eee72f4e6a75756..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/tests/test_imports.py
+++ /dev/null
@@ -1,14 +0,0 @@
-# encoding: utf-8
-from IPython.testing import decorators as dec
-
-
-def test_import_backgroundjobs():
- from IPython.lib import backgroundjobs
-
-
-def test_import_deepreload():
- from IPython.lib import deepreload
-
-
-def test_import_demo():
- from IPython.lib import demo
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/testing/decorators.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/testing/decorators.py
deleted file mode 100644
index af42f349d5ac43762eb367ccf9fe70578c011097..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/testing/decorators.py
+++ /dev/null
@@ -1,201 +0,0 @@
-# -*- coding: utf-8 -*-
-"""Decorators for labeling test objects.
-
-Decorators that merely return a modified version of the original function
-object are straightforward. Decorators that return a new function object need
-to use nose.tools.make_decorator(original_function)(decorator) in returning the
-decorator, in order to preserve metadata such as function name, setup and
-teardown functions and so on - see nose.tools for more information.
-
-This module provides a set of useful decorators meant to be ready to use in
-your own tests. See the bottom of the file for the ready-made ones, and if you
-find yourself writing a new one that may be of generic use, add it here.
-
-Included decorators:
-
-
-Lightweight testing that remains unittest-compatible.
-
-- An @as_unittest decorator can be used to tag any normal parameter-less
- function as a unittest TestCase. Then, both nose and normal unittest will
- recognize it as such. This will make it easier to migrate away from Nose if
- we ever need/want to while maintaining very lightweight tests.
-
-NOTE: This file contains IPython-specific decorators. Using the machinery in
-IPython.external.decorators, we import either numpy.testing.decorators if numpy is
-available, OR use equivalent code in IPython.external._decorators, which
-we've copied verbatim from numpy.
-
-"""
-
-# Copyright (c) IPython Development Team.
-# Distributed under the terms of the Modified BSD License.
-
-import os
-import shutil
-import sys
-import tempfile
-import unittest
-from importlib import import_module
-
-from decorator import decorator
-
-# Expose the unittest-driven decorators
-from .ipunittest import ipdoctest, ipdocstring
-
-#-----------------------------------------------------------------------------
-# Classes and functions
-#-----------------------------------------------------------------------------
-
-# Simple example of the basic idea
-def as_unittest(func):
- """Decorator to make a simple function into a normal test via unittest."""
- class Tester(unittest.TestCase):
- def test(self):
- func()
-
- Tester.__name__ = func.__name__
-
- return Tester
-
-# Utility functions
-
-
-def skipif(skip_condition, msg=None):
- """Make function raise SkipTest exception if skip_condition is true
-
- Parameters
- ----------
-
- skip_condition : bool or callable
- Flag to determine whether to skip test. If the condition is a
- callable, it is used at runtime to dynamically make the decision. This
- is useful for tests that may require costly imports, to delay the cost
- until the test suite is actually executed.
- msg : string
- Message to give on raising a SkipTest exception.
-
- Returns
- -------
- decorator : function
- Decorator, which, when applied to a function, causes SkipTest
- to be raised when the skip_condition was True, and the function
- to be called normally otherwise.
- """
- if msg is None:
- msg = "Test skipped due to test condition."
-
- import pytest
-
- assert isinstance(skip_condition, bool)
- return pytest.mark.skipif(skip_condition, reason=msg)
-
-
-# A version with the condition set to true, common case just to attach a message
-# to a skip decorator
-def skip(msg=None):
- """Decorator factory - mark a test function for skipping from test suite.
-
- Parameters
- ----------
- msg : string
- Optional message to be added.
-
- Returns
- -------
- decorator : function
- Decorator, which, when applied to a function, causes SkipTest
- to be raised, with the optional message added.
- """
- if msg and not isinstance(msg, str):
- raise ValueError('invalid object passed to `@skip` decorator, did you '
- 'meant `@skip()` with brackets ?')
- return skipif(True, msg)
-
-
-def onlyif(condition, msg):
- """The reverse from skipif, see skipif for details."""
-
- return skipif(not condition, msg)
-
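A small, hypothetical test module showing how these helpers are meant to be applied (the test bodies are placeholders):

```python
import sys

from IPython.testing.decorators import onlyif_cmds_exist, skip, skipif


@skipif(sys.platform == "win32", "POSIX-only behaviour under test")
def test_posix_separator():
    assert "/" in "/tmp/example"


@skip("temporarily disabled while the fixture is rewritten")
def test_not_ready_yet():
    ...


@onlyif_cmds_exist("git")
def test_git_available():
    ...
```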
-#-----------------------------------------------------------------------------
-# Utility functions for decorators
-def module_not_available(module):
- """Can module be imported? Returns true if module does NOT import.
-
- This is used to make a decorator to skip tests that require module to be
- available, but delay the 'import numpy' to test execution time.
- """
- try:
- mod = import_module(module)
- mod_not_avail = False
- except ImportError:
- mod_not_avail = True
-
- return mod_not_avail
-
-
-#-----------------------------------------------------------------------------
-# Decorators for public use
-
-# Decorators to skip certain tests on specific platforms.
-skip_win32 = skipif(sys.platform == 'win32',
- "This test does not run under Windows")
-skip_linux = skipif(sys.platform.startswith('linux'),
- "This test does not run under Linux")
-skip_osx = skipif(sys.platform == 'darwin',"This test does not run under OS X")
-
-
-# Decorators to skip tests if not on specific platforms.
-skip_if_not_win32 = skipif(sys.platform != 'win32',
- "This test only runs under Windows")
-skip_if_not_linux = skipif(not sys.platform.startswith('linux'),
- "This test only runs under Linux")
-
-_x11_skip_cond = (sys.platform not in ('darwin', 'win32') and
- os.environ.get('DISPLAY', '') == '')
-_x11_skip_msg = "Skipped under *nix when X11/XOrg not available"
-
-skip_if_no_x11 = skipif(_x11_skip_cond, _x11_skip_msg)
-
-# Other skip decorators
-
-# generic skip without module
-skip_without = lambda mod: skipif(module_not_available(mod), "This test requires %s" % mod)
-
-skipif_not_numpy = skip_without('numpy')
-
-skipif_not_matplotlib = skip_without('matplotlib')
-
-# A null 'decorator', useful to make more readable code that needs to pick
-# between different decorators based on OS or other conditions
-null_deco = lambda f: f
-
-# Some tests only run where we can use unicode paths. Note that we can't just
-# check os.path.supports_unicode_filenames, which is always False on Linux.
-try:
- f = tempfile.NamedTemporaryFile(prefix=u"tmp€")
-except UnicodeEncodeError:
- unicode_paths = False
-else:
- unicode_paths = True
- f.close()
-
-onlyif_unicode_paths = onlyif(unicode_paths, ("This test is only applicable "
- "where we can use unicode in filenames."))
-
-
-def onlyif_cmds_exist(*commands):
- """
- Decorator to skip test when at least one of `commands` is not found.
- """
- assert (
- os.environ.get("IPTEST_WORKING_DIR", None) is None
- ), "iptest deprecated since IPython 8.0"
- for cmd in commands:
- reason = f"This test runs only if command '{cmd}' is installed"
- if not shutil.which(cmd):
- import pytest
-
- return pytest.mark.skip(reason=reason)
- return null_deco
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/adodbapi/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/adodbapi/__init__.py
deleted file mode 100644
index 0d769e058d51f5261953293e14e1efd108319c26..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/adodbapi/__init__.py
+++ /dev/null
@@ -1,74 +0,0 @@
-"""adodbapi - A python DB API 2.0 (PEP 249) interface to Microsoft ADO
-
-Copyright (C) 2002 Henrik Ekelund, version 2.1 by Vernon Cole
-* http://sourceforge.net/projects/adodbapi
-"""
-import sys
-import time
-
-from .adodbapi import Connection, Cursor, __version__, connect, dateconverter
-from .apibase import (
- BINARY,
- DATETIME,
- NUMBER,
- ROWID,
- STRING,
- DatabaseError,
- DataError,
- Error,
- FetchFailedError,
- IntegrityError,
- InterfaceError,
- InternalError,
- NotSupportedError,
- OperationalError,
- ProgrammingError,
- Warning,
- apilevel,
- paramstyle,
- threadsafety,
-)
-
-
-def Binary(aString):
- """This function constructs an object capable of holding a binary (long) string value."""
- return bytes(aString)
-
-
-def Date(year, month, day):
- "This function constructs an object holding a date value."
- return dateconverter.Date(year, month, day)
-
-
-def Time(hour, minute, second):
- "This function constructs an object holding a time value."
- return dateconverter.Time(hour, minute, second)
-
-
-def Timestamp(year, month, day, hour, minute, second):
- "This function constructs an object holding a time stamp value."
- return dateconverter.Timestamp(year, month, day, hour, minute, second)
-
-
-def DateFromTicks(ticks):
- """This function constructs an object holding a date value from the given ticks value
- (number of seconds since the epoch; see the documentation of the standard Python time module for details).
- """
- return Date(*time.gmtime(ticks)[:3])
-
-
-def TimeFromTicks(ticks):
- """This function constructs an object holding a time value from the given ticks value
- (number of seconds since the epoch; see the documentation of the standard Python time module for details).
- """
- return Time(*time.gmtime(ticks)[3:6])
-
-
-def TimestampFromTicks(ticks):
- """This function constructs an object holding a time stamp value from the given
- ticks value (number of seconds since the epoch;
- see the documentation of the standard Python time module for details)."""
- return Timestamp(*time.gmtime(ticks)[:6])
-
-
-version = "adodbapi v" + __version__
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/adodbapi/test/dbapi20.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/adodbapi/test/dbapi20.py
deleted file mode 100644
index e378b1941d6f0343a13ff60c90747b6c96697888..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/adodbapi/test/dbapi20.py
+++ /dev/null
@@ -1,939 +0,0 @@
-#!/usr/bin/env python
-""" Python DB API 2.0 driver compliance unit test suite.
-
- This software is Public Domain and may be used without restrictions.
-
- "Now we have booze and barflies entering the discussion, plus rumours of
- DBAs on drugs... and I won't tell you what flashes through my mind each
- time I read the subject line with 'Anal Compliance' in it. All around
- this is turning out to be a thoroughly unwholesome unit test."
-
- -- Ian Bicking
-"""
-
-__version__ = "$Revision: 1.15.0 $"[11:-2]
-__author__ = "Stuart Bishop "
-
-import sys
-import time
-import unittest
-
-if sys.version[0] >= "3": # python 3.x
- _BaseException = Exception
-
- def _failUnless(self, expr, msg=None):
- self.assertTrue(expr, msg)
-
-else: # python 2.x
- from exceptions import Exception as _BaseException
-
- def _failUnless(self, expr, msg=None):
- self.failUnless(expr, msg) ## deprecated since Python 2.6
-
-
-# set this to "True" to follow API 2.0 to the letter
-TEST_FOR_NON_IDEMPOTENT_CLOSE = False
-
-# Revision 1.15 2019/11/22 00:50:00 kf7xm
-# Make Turn off IDEMPOTENT_CLOSE a proper skipTest
-
-# Revision 1.14 2013/05/20 11:02:05 kf7xm
-# Add a literal string to the format insertion test to catch trivial re-format algorithms
-
-# Revision 1.13 2013/05/08 14:31:50 kf7xm
-# Quick switch to Turn off IDEMPOTENT_CLOSE test. Also: Silence teardown failure
-
-
-# Revision 1.12 2009/02/06 03:35:11 kf7xm
-# Tested okay with Python 3.0, includes last minute patches from Mark H.
-#
-# Revision 1.1.1.1.2.1 2008/09/20 19:54:59 rupole
-# Include latest changes from main branch
-# Updates for py3k
-#
-# Revision 1.11 2005/01/02 02:41:01 zenzen
-# Update author email address
-#
-# Revision 1.10 2003/10/09 03:14:14 zenzen
-# Add test for DB API 2.0 optional extension, where database exceptions
-# are exposed as attributes on the Connection object.
-#
-# Revision 1.9 2003/08/13 01:16:36 zenzen
-# Minor tweak from Stefan Fleiter
-#
-# Revision 1.8 2003/04/10 00:13:25 zenzen
-# Changes, as per suggestions by M.-A. Lemburg
-# - Add a table prefix, to ensure namespace collisions can always be avoided
-#
-# Revision 1.7 2003/02/26 23:33:37 zenzen
-# Break out DDL into helper functions, as per request by David Rushby
-#
-# Revision 1.6 2003/02/21 03:04:33 zenzen
-# Stuff from Henrik Ekelund:
-# added test_None
-# added test_nextset & hooks
-#
-# Revision 1.5 2003/02/17 22:08:43 zenzen
-# Implement suggestions and code from Henrik Eklund - test that cursor.arraysize
-# defaults to 1 & generic cursor.callproc test added
-#
-# Revision 1.4 2003/02/15 00:16:33 zenzen
-# Changes, as per suggestions and bug reports by M.-A. Lemburg,
-# Matthew T. Kromer, Federico Di Gregorio and Daniel Dittmar
-# - Class renamed
-# - Now a subclass of TestCase, to avoid requiring the driver stub
-# to use multiple inheritance
-# - Reversed the polarity of buggy test in test_description
-# - Test exception hierarchy correctly
-# - self.populate is now self._populate(), so if a driver stub
-# overrides self.ddl1 this change propagates
-# - VARCHAR columns now have a width, which will hopefully make the
-# DDL even more portable (this will be reversed if it causes more problems)
-# - cursor.rowcount being checked after various execute and fetchXXX methods
-# - Check for fetchall and fetchmany returning empty lists after results
-# are exhausted (already checking for empty lists if select retrieved
-# nothing)
-# - Fix bugs in test_setoutputsize_basic and test_setinputsizes
-#
-def str2bytes(sval):
- if sys.version_info < (3, 0) and isinstance(sval, str):
- sval = sval.decode("latin1")
- return sval.encode("latin1") # python 3 make unicode into bytes
-
-
-class DatabaseAPI20Test(unittest.TestCase):
- """Test a database self.driver for DB API 2.0 compatibility.
- This implementation tests Gadfly, but the TestCase
- is structured so that other drivers can subclass this
- test case to ensure compliance with the DB-API. It is
- expected that this TestCase may be expanded in the future
- if ambiguities or edge conditions are discovered.
-
- The 'Optional Extensions' are not yet being tested.
-
- Drivers should subclass this test, overriding setUp, tearDown,
- self.driver, connect_args and connect_kw_args. Class specification
- should be as follows:
-
- import dbapi20
- class mytest(dbapi20.DatabaseAPI20Test):
- [...]
-
- Don't 'import DatabaseAPI20Test from dbapi20', or you will
- confuse the unit tester - just 'import dbapi20'.
- """
-
- # The self.driver module. This should be the module where the 'connect'
- # method is to be found
- driver = None
- connect_args = () # List of arguments to pass to connect
- connect_kw_args = {} # Keyword arguments for connect
- table_prefix = "dbapi20test_" # If you need to specify a prefix for tables
-
- ddl1 = "create table %sbooze (name varchar(20))" % table_prefix
- ddl2 = "create table %sbarflys (name varchar(20), drink varchar(30))" % table_prefix
- xddl1 = "drop table %sbooze" % table_prefix
- xddl2 = "drop table %sbarflys" % table_prefix
-
- lowerfunc = "lower" # Name of stored procedure to convert string->lowercase
-
- # Some drivers may need to override these helpers, for example adding
- # a 'commit' after the execute.
- def executeDDL1(self, cursor):
- cursor.execute(self.ddl1)
-
- def executeDDL2(self, cursor):
- cursor.execute(self.ddl2)
-
- def setUp(self):
- """self.drivers should override this method to perform required setup
- if any is necessary, such as creating the database.
- """
- pass
-
- def tearDown(self):
- """self.drivers should override this method to perform required cleanup
- if any is necessary, such as deleting the test database.
- The default drops the tables that may be created.
- """
- try:
- con = self._connect()
- try:
- cur = con.cursor()
- for ddl in (self.xddl1, self.xddl2):
- try:
- cur.execute(ddl)
- con.commit()
- except self.driver.Error:
- # Assume table didn't exist. Other tests will check if
- # execute is busted.
- pass
- finally:
- con.close()
- except _BaseException:
- pass
-
- def _connect(self):
- try:
- r = self.driver.connect(*self.connect_args, **self.connect_kw_args)
- except AttributeError:
- self.fail("No connect method found in self.driver module")
- return r
-
- def test_connect(self):
- con = self._connect()
- con.close()
-
- def test_apilevel(self):
- try:
- # Must exist
- apilevel = self.driver.apilevel
- # Must equal 2.0
- self.assertEqual(apilevel, "2.0")
- except AttributeError:
- self.fail("Driver doesn't define apilevel")
-
- def test_threadsafety(self):
- try:
- # Must exist
- threadsafety = self.driver.threadsafety
- # Must be a valid value
- _failUnless(self, threadsafety in (0, 1, 2, 3))
- except AttributeError:
- self.fail("Driver doesn't define threadsafety")
-
- def test_paramstyle(self):
- try:
- # Must exist
- paramstyle = self.driver.paramstyle
- # Must be a valid value
- _failUnless(
- self, paramstyle in ("qmark", "numeric", "named", "format", "pyformat")
- )
- except AttributeError:
- self.fail("Driver doesn't define paramstyle")
-
- def test_Exceptions(self):
- # Make sure required exceptions exist, and are in the
- # defined hierarchy.
- if sys.version[0] == "3": # under Python 3 StardardError no longer exists
- self.assertTrue(issubclass(self.driver.Warning, Exception))
- self.assertTrue(issubclass(self.driver.Error, Exception))
- else:
- self.failUnless(issubclass(self.driver.Warning, Exception))
- self.failUnless(issubclass(self.driver.Error, Exception))
-
- _failUnless(self, issubclass(self.driver.InterfaceError, self.driver.Error))
- _failUnless(self, issubclass(self.driver.DatabaseError, self.driver.Error))
- _failUnless(self, issubclass(self.driver.OperationalError, self.driver.Error))
- _failUnless(self, issubclass(self.driver.IntegrityError, self.driver.Error))
- _failUnless(self, issubclass(self.driver.InternalError, self.driver.Error))
- _failUnless(self, issubclass(self.driver.ProgrammingError, self.driver.Error))
- _failUnless(self, issubclass(self.driver.NotSupportedError, self.driver.Error))
-
- def test_ExceptionsAsConnectionAttributes(self):
- # OPTIONAL EXTENSION
- # Test for the optional DB API 2.0 extension, where the exceptions
- # are exposed as attributes on the Connection object
- # I figure this optional extension will be implemented by any
- # driver author who is using this test suite, so it is enabled
- # by default.
- con = self._connect()
- drv = self.driver
- _failUnless(self, con.Warning is drv.Warning)
- _failUnless(self, con.Error is drv.Error)
- _failUnless(self, con.InterfaceError is drv.InterfaceError)
- _failUnless(self, con.DatabaseError is drv.DatabaseError)
- _failUnless(self, con.OperationalError is drv.OperationalError)
- _failUnless(self, con.IntegrityError is drv.IntegrityError)
- _failUnless(self, con.InternalError is drv.InternalError)
- _failUnless(self, con.ProgrammingError is drv.ProgrammingError)
- _failUnless(self, con.NotSupportedError is drv.NotSupportedError)
-
- def test_commit(self):
- con = self._connect()
- try:
- # Commit must work, even if it doesn't do anything
- con.commit()
- finally:
- con.close()
-
- def test_rollback(self):
- con = self._connect()
- # If rollback is defined, it should either work or throw
- # the documented exception
- if hasattr(con, "rollback"):
- try:
- con.rollback()
- except self.driver.NotSupportedError:
- pass
-
- def test_cursor(self):
- con = self._connect()
- try:
- cur = con.cursor()
- finally:
- con.close()
-
- def test_cursor_isolation(self):
- con = self._connect()
- try:
- # Make sure cursors created from the same connection have
- # the documented transaction isolation level
- cur1 = con.cursor()
- cur2 = con.cursor()
- self.executeDDL1(cur1)
- cur1.execute(
- "insert into %sbooze values ('Victoria Bitter')" % (self.table_prefix)
- )
- cur2.execute("select name from %sbooze" % self.table_prefix)
- booze = cur2.fetchall()
- self.assertEqual(len(booze), 1)
- self.assertEqual(len(booze[0]), 1)
- self.assertEqual(booze[0][0], "Victoria Bitter")
- finally:
- con.close()
-
- def test_description(self):
- con = self._connect()
- try:
- cur = con.cursor()
- self.executeDDL1(cur)
- self.assertEqual(
- cur.description,
- None,
- "cursor.description should be none after executing a "
- "statement that can return no rows (such as DDL)",
- )
- cur.execute("select name from %sbooze" % self.table_prefix)
- self.assertEqual(
- len(cur.description), 1, "cursor.description describes too many columns"
- )
- self.assertEqual(
- len(cur.description[0]),
- 7,
- "cursor.description[x] tuples must have 7 elements",
- )
- self.assertEqual(
- cur.description[0][0].lower(),
- "name",
- "cursor.description[x][0] must return column name",
- )
- self.assertEqual(
- cur.description[0][1],
- self.driver.STRING,
- "cursor.description[x][1] must return column type. Got %r"
- % cur.description[0][1],
- )
-
- # Make sure self.description gets reset
- self.executeDDL2(cur)
- self.assertEqual(
- cur.description,
- None,
- "cursor.description not being set to None when executing "
- "no-result statements (eg. DDL)",
- )
- finally:
- con.close()
-
- def test_rowcount(self):
- con = self._connect()
- try:
- cur = con.cursor()
- self.executeDDL1(cur)
- _failUnless(
- self,
- cur.rowcount in (-1, 0), # Bug #543885
- "cursor.rowcount should be -1 or 0 after executing no-result "
- "statements",
- )
- cur.execute(
- "insert into %sbooze values ('Victoria Bitter')" % (self.table_prefix)
- )
- _failUnless(
- self,
- cur.rowcount in (-1, 1),
- "cursor.rowcount should == number or rows inserted, or "
- "set to -1 after executing an insert statement",
- )
- cur.execute("select name from %sbooze" % self.table_prefix)
- _failUnless(
- self,
- cur.rowcount in (-1, 1),
- "cursor.rowcount should == number of rows returned, or "
- "set to -1 after executing a select statement",
- )
- self.executeDDL2(cur)
- self.assertEqual(
- cur.rowcount,
- -1,
- "cursor.rowcount not being reset to -1 after executing "
- "no-result statements",
- )
- finally:
- con.close()
-
- lower_func = "lower"
-
- def test_callproc(self):
- con = self._connect()
- try:
- cur = con.cursor()
- if self.lower_func and hasattr(cur, "callproc"):
- r = cur.callproc(self.lower_func, ("FOO",))
- self.assertEqual(len(r), 1)
- self.assertEqual(r[0], "FOO")
- r = cur.fetchall()
- self.assertEqual(len(r), 1, "callproc produced no result set")
- self.assertEqual(len(r[0]), 1, "callproc produced invalid result set")
- self.assertEqual(r[0][0], "foo", "callproc produced invalid results")
- finally:
- con.close()
-
- def test_close(self):
- con = self._connect()
- try:
- cur = con.cursor()
- finally:
- con.close()
-
- # cursor.execute should raise an Error if called after connection
- # closed
- self.assertRaises(self.driver.Error, self.executeDDL1, cur)
-
- # connection.commit should raise an Error if called after connection
- # closed.
- self.assertRaises(self.driver.Error, con.commit)
-
- # connection.close should raise an Error if called more than once
- #!!! reasonable persons differ about the usefulness of this test and this feature !!!
- if TEST_FOR_NON_IDEMPOTENT_CLOSE:
- self.assertRaises(self.driver.Error, con.close)
- else:
- self.skipTest(
- "Non-idempotent close is considered a bad thing by some people."
- )
-
- def test_execute(self):
- con = self._connect()
- try:
- cur = con.cursor()
- self._paraminsert(cur)
- finally:
- con.close()
-
- def _paraminsert(self, cur):
- self.executeDDL2(cur)
- cur.execute(
- "insert into %sbarflys values ('Victoria Bitter', 'thi%%s :may ca%%(u)se? troub:1e')"
- % (self.table_prefix)
- )
- _failUnless(self, cur.rowcount in (-1, 1))
-
- if self.driver.paramstyle == "qmark":
- cur.execute(
- "insert into %sbarflys values (?, 'thi%%s :may ca%%(u)se? troub:1e')"
- % self.table_prefix,
- ("Cooper's",),
- )
- elif self.driver.paramstyle == "numeric":
- cur.execute(
- "insert into %sbarflys values (:1, 'thi%%s :may ca%%(u)se? troub:1e')"
- % self.table_prefix,
- ("Cooper's",),
- )
- elif self.driver.paramstyle == "named":
- cur.execute(
- "insert into %sbarflys values (:beer, 'thi%%s :may ca%%(u)se? troub:1e')"
- % self.table_prefix,
- {"beer": "Cooper's"},
- )
- elif self.driver.paramstyle == "format":
- cur.execute(
- "insert into %sbarflys values (%%s, 'thi%%s :may ca%%(u)se? troub:1e')"
- % self.table_prefix,
- ("Cooper's",),
- )
- elif self.driver.paramstyle == "pyformat":
- cur.execute(
- "insert into %sbarflys values (%%(beer)s, 'thi%%s :may ca%%(u)se? troub:1e')"
- % self.table_prefix,
- {"beer": "Cooper's"},
- )
- else:
- self.fail("Invalid paramstyle")
- _failUnless(self, cur.rowcount in (-1, 1))
-
- cur.execute("select name, drink from %sbarflys" % self.table_prefix)
- res = cur.fetchall()
- self.assertEqual(len(res), 2, "cursor.fetchall returned too few rows")
- beers = [res[0][0], res[1][0]]
- beers.sort()
- self.assertEqual(
- beers[0],
- "Cooper's",
- "cursor.fetchall retrieved incorrect data, or data inserted " "incorrectly",
- )
- self.assertEqual(
- beers[1],
- "Victoria Bitter",
- "cursor.fetchall retrieved incorrect data, or data inserted " "incorrectly",
- )
- trouble = "thi%s :may ca%(u)se? troub:1e"
- self.assertEqual(
- res[0][1],
- trouble,
- "cursor.fetchall retrieved incorrect data, or data inserted "
- "incorrectly. Got=%s, Expected=%s" % (repr(res[0][1]), repr(trouble)),
- )
- self.assertEqual(
- res[1][1],
- trouble,
- "cursor.fetchall retrieved incorrect data, or data inserted "
- "incorrectly. Got=%s, Expected=%s" % (repr(res[1][1]), repr(trouble)),
- )
-
- def test_executemany(self):
- con = self._connect()
- try:
- cur = con.cursor()
- self.executeDDL1(cur)
- largs = [("Cooper's",), ("Boag's",)]
- margs = [{"beer": "Cooper's"}, {"beer": "Boag's"}]
- if self.driver.paramstyle == "qmark":
- cur.executemany(
- "insert into %sbooze values (?)" % self.table_prefix, largs
- )
- elif self.driver.paramstyle == "numeric":
- cur.executemany(
- "insert into %sbooze values (:1)" % self.table_prefix, largs
- )
- elif self.driver.paramstyle == "named":
- cur.executemany(
- "insert into %sbooze values (:beer)" % self.table_prefix, margs
- )
- elif self.driver.paramstyle == "format":
- cur.executemany(
- "insert into %sbooze values (%%s)" % self.table_prefix, largs
- )
- elif self.driver.paramstyle == "pyformat":
- cur.executemany(
- "insert into %sbooze values (%%(beer)s)" % (self.table_prefix),
- margs,
- )
- else:
- self.fail("Unknown paramstyle")
- _failUnless(
- self,
- cur.rowcount in (-1, 2),
- "insert using cursor.executemany set cursor.rowcount to "
- "incorrect value %r" % cur.rowcount,
- )
- cur.execute("select name from %sbooze" % self.table_prefix)
- res = cur.fetchall()
- self.assertEqual(
- len(res), 2, "cursor.fetchall retrieved incorrect number of rows"
- )
- beers = [res[0][0], res[1][0]]
- beers.sort()
- self.assertEqual(
- beers[0], "Boag's", 'incorrect data "%s" retrieved' % beers[0]
- )
- self.assertEqual(beers[1], "Cooper's", "incorrect data retrieved")
- finally:
- con.close()
-
- def test_fetchone(self):
- con = self._connect()
- try:
- cur = con.cursor()
-
- # cursor.fetchone should raise an Error if called before
- # executing a select-type query
- self.assertRaises(self.driver.Error, cur.fetchone)
-
- # cursor.fetchone should raise an Error if called after
- # executing a query that cannot return rows
- self.executeDDL1(cur)
- self.assertRaises(self.driver.Error, cur.fetchone)
-
- cur.execute("select name from %sbooze" % self.table_prefix)
- self.assertEqual(
- cur.fetchone(),
- None,
- "cursor.fetchone should return None if a query retrieves " "no rows",
- )
- _failUnless(self, cur.rowcount in (-1, 0))
-
- # cursor.fetchone should raise an Error if called after
- # executing a query that cannot return rows
- cur.execute(
- "insert into %sbooze values ('Victoria Bitter')" % (self.table_prefix)
- )
- self.assertRaises(self.driver.Error, cur.fetchone)
-
- cur.execute("select name from %sbooze" % self.table_prefix)
- r = cur.fetchone()
- self.assertEqual(
- len(r), 1, "cursor.fetchone should have retrieved a single row"
- )
- self.assertEqual(
- r[0], "Victoria Bitter", "cursor.fetchone retrieved incorrect data"
- )
- self.assertEqual(
- cur.fetchone(),
- None,
- "cursor.fetchone should return None if no more rows available",
- )
- _failUnless(self, cur.rowcount in (-1, 1))
- finally:
- con.close()
-
- samples = [
- "Carlton Cold",
- "Carlton Draft",
- "Mountain Goat",
- "Redback",
- "Victoria Bitter",
- "XXXX",
- ]
-
- def _populate(self):
- """Return a list of sql commands to setup the DB for the fetch
- tests.
- """
- populate = [
- "insert into %sbooze values ('%s')" % (self.table_prefix, s)
- for s in self.samples
- ]
- return populate
-
- def test_fetchmany(self):
- con = self._connect()
- try:
- cur = con.cursor()
-
- # cursor.fetchmany should raise an Error if called without
- # issuing a query
- self.assertRaises(self.driver.Error, cur.fetchmany, 4)
-
- self.executeDDL1(cur)
- for sql in self._populate():
- cur.execute(sql)
-
- cur.execute("select name from %sbooze" % self.table_prefix)
- r = cur.fetchmany()
- self.assertEqual(
- len(r),
- 1,
- "cursor.fetchmany retrieved incorrect number of rows, "
- "default of arraysize is one.",
- )
- cur.arraysize = 10
- r = cur.fetchmany(3) # Should get 3 rows
- self.assertEqual(
- len(r), 3, "cursor.fetchmany retrieved incorrect number of rows"
- )
- r = cur.fetchmany(4) # Should get 2 more
- self.assertEqual(
- len(r), 2, "cursor.fetchmany retrieved incorrect number of rows"
- )
- r = cur.fetchmany(4) # Should be an empty sequence
- self.assertEqual(
- len(r),
- 0,
- "cursor.fetchmany should return an empty sequence after "
- "results are exhausted",
- )
- _failUnless(self, cur.rowcount in (-1, 6))
-
- # Same as above, using cursor.arraysize
- cur.arraysize = 4
- cur.execute("select name from %sbooze" % self.table_prefix)
- r = cur.fetchmany() # Should get 4 rows
- self.assertEqual(
- len(r), 4, "cursor.arraysize not being honoured by fetchmany"
- )
- r = cur.fetchmany() # Should get 2 more
- self.assertEqual(len(r), 2)
- r = cur.fetchmany() # Should be an empty sequence
- self.assertEqual(len(r), 0)
- _failUnless(self, cur.rowcount in (-1, 6))
-
- cur.arraysize = 6
- cur.execute("select name from %sbooze" % self.table_prefix)
- rows = cur.fetchmany() # Should get all rows
- _failUnless(self, cur.rowcount in (-1, 6))
- self.assertEqual(len(rows), 6)
- rows = [r[0] for r in rows]
- rows.sort()
-
- # Make sure we get the right data back out
- for i in range(0, 6):
- self.assertEqual(
- rows[i],
- self.samples[i],
- "incorrect data retrieved by cursor.fetchmany",
- )
-
- rows = cur.fetchmany() # Should return an empty list
- self.assertEqual(
- len(rows),
- 0,
- "cursor.fetchmany should return an empty sequence if "
- "called after the whole result set has been fetched",
- )
- _failUnless(self, cur.rowcount in (-1, 6))
-
- self.executeDDL2(cur)
- cur.execute("select name from %sbarflys" % self.table_prefix)
- r = cur.fetchmany() # Should get empty sequence
- self.assertEqual(
- len(r),
- 0,
- "cursor.fetchmany should return an empty sequence if "
- "query retrieved no rows",
- )
- _failUnless(self, cur.rowcount in (-1, 0))
-
- finally:
- con.close()
-
- def test_fetchall(self):
- con = self._connect()
- try:
- cur = con.cursor()
- # cursor.fetchall should raise an Error if called
- # without executing a query that may return rows (such
- # as a select)
- self.assertRaises(self.driver.Error, cur.fetchall)
-
- self.executeDDL1(cur)
- for sql in self._populate():
- cur.execute(sql)
-
- # cursor.fetchall should raise an Error if called
- # after executing a statement that cannot return rows
- self.assertRaises(self.driver.Error, cur.fetchall)
-
- cur.execute("select name from %sbooze" % self.table_prefix)
- rows = cur.fetchall()
- _failUnless(self, cur.rowcount in (-1, len(self.samples)))
- self.assertEqual(
- len(rows),
- len(self.samples),
- "cursor.fetchall did not retrieve all rows",
- )
- rows = [r[0] for r in rows]
- rows.sort()
- for i in range(0, len(self.samples)):
- self.assertEqual(
- rows[i], self.samples[i], "cursor.fetchall retrieved incorrect rows"
- )
- rows = cur.fetchall()
- self.assertEqual(
- len(rows),
- 0,
- "cursor.fetchall should return an empty list if called "
- "after the whole result set has been fetched",
- )
- _failUnless(self, cur.rowcount in (-1, len(self.samples)))
-
- self.executeDDL2(cur)
- cur.execute("select name from %sbarflys" % self.table_prefix)
- rows = cur.fetchall()
- _failUnless(self, cur.rowcount in (-1, 0))
- self.assertEqual(
- len(rows),
- 0,
- "cursor.fetchall should return an empty list if "
- "a select query returns no rows",
- )
-
- finally:
- con.close()
-
- def test_mixedfetch(self):
- con = self._connect()
- try:
- cur = con.cursor()
- self.executeDDL1(cur)
- for sql in self._populate():
- cur.execute(sql)
-
- cur.execute("select name from %sbooze" % self.table_prefix)
- rows1 = cur.fetchone()
- rows23 = cur.fetchmany(2)
- rows4 = cur.fetchone()
- rows56 = cur.fetchall()
- _failUnless(self, cur.rowcount in (-1, 6))
- self.assertEqual(
- len(rows23), 2, "fetchmany returned incorrect number of rows"
- )
- self.assertEqual(
- len(rows56), 2, "fetchall returned incorrect number of rows"
- )
-
- rows = [rows1[0]]
- rows.extend([rows23[0][0], rows23[1][0]])
- rows.append(rows4[0])
- rows.extend([rows56[0][0], rows56[1][0]])
- rows.sort()
- for i in range(0, len(self.samples)):
- self.assertEqual(
- rows[i], self.samples[i], "incorrect data retrieved or inserted"
- )
- finally:
- con.close()
-
- def help_nextset_setUp(self, cur):
- """Should create a procedure called deleteme
- that returns two result sets, first the
- number of rows in booze then "name from booze"
- """
- raise NotImplementedError("Helper not implemented")
- # sql="""
- # create procedure deleteme as
- # begin
- # select count(*) from booze
- # select name from booze
- # end
- # """
- # cur.execute(sql)
-
- def help_nextset_tearDown(self, cur):
- "If cleaning up is needed after nextSetTest"
- raise NotImplementedError("Helper not implemented")
- # cur.execute("drop procedure deleteme")
-
- def test_nextset(self):
- con = self._connect()
- try:
- cur = con.cursor()
- if not hasattr(cur, "nextset"):
- return
-
- try:
- self.executeDDL1(cur)
- sql = self._populate()
- for sql in self._populate():
- cur.execute(sql)
-
- self.help_nextset_setUp(cur)
-
- cur.callproc("deleteme")
- numberofrows = cur.fetchone()
- assert numberofrows[0] == len(self.samples)
- assert cur.nextset()
- names = cur.fetchall()
- assert len(names) == len(self.samples)
- s = cur.nextset()
- assert s == None, "No more return sets, should return None"
- finally:
- self.help_nextset_tearDown(cur)
-
- finally:
- con.close()
-
- def test_nextset(self):
- raise NotImplementedError("Drivers need to override this test")
-
- def test_arraysize(self):
- # Not much here - rest of the tests for this are in test_fetchmany
- con = self._connect()
- try:
- cur = con.cursor()
- _failUnless(
- self, hasattr(cur, "arraysize"), "cursor.arraysize must be defined"
- )
- finally:
- con.close()
-
- def test_setinputsizes(self):
- con = self._connect()
- try:
- cur = con.cursor()
- cur.setinputsizes((25,))
- self._paraminsert(cur) # Make sure cursor still works
- finally:
- con.close()
-
- def test_setoutputsize_basic(self):
- # Basic test is to make sure setoutputsize doesn't blow up
- con = self._connect()
- try:
- cur = con.cursor()
- cur.setoutputsize(1000)
- cur.setoutputsize(2000, 0)
- self._paraminsert(cur) # Make sure the cursor still works
- finally:
- con.close()
-
- def test_setoutputsize(self):
- # Real test for setoutputsize is driver dependent
- raise NotImplementedError("Drivers need to override this test")
-
- def test_None(self):
- con = self._connect()
- try:
- cur = con.cursor()
- self.executeDDL1(cur)
- cur.execute("insert into %sbooze values (NULL)" % self.table_prefix)
- cur.execute("select name from %sbooze" % self.table_prefix)
- r = cur.fetchall()
- self.assertEqual(len(r), 1)
- self.assertEqual(len(r[0]), 1)
- self.assertEqual(r[0][0], None, "NULL value not returned as None")
- finally:
- con.close()
-
- def test_Date(self):
- d1 = self.driver.Date(2002, 12, 25)
- d2 = self.driver.DateFromTicks(time.mktime((2002, 12, 25, 0, 0, 0, 0, 0, 0)))
- # Can we assume this? API doesn't specify, but it seems implied
- # self.assertEqual(str(d1),str(d2))
-
- def test_Time(self):
- t1 = self.driver.Time(13, 45, 30)
- t2 = self.driver.TimeFromTicks(time.mktime((2001, 1, 1, 13, 45, 30, 0, 0, 0)))
- # Can we assume this? API doesn't specify, but it seems implied
- # self.assertEqual(str(t1),str(t2))
-
- def test_Timestamp(self):
- t1 = self.driver.Timestamp(2002, 12, 25, 13, 45, 30)
- t2 = self.driver.TimestampFromTicks(
- time.mktime((2002, 12, 25, 13, 45, 30, 0, 0, 0))
- )
- # Can we assume this? API doesn't specify, but it seems implied
- # self.assertEqual(str(t1),str(t2))
-
- def test_Binary(self):
- b = self.driver.Binary(str2bytes("Something"))
- b = self.driver.Binary(str2bytes(""))
-
- def test_STRING(self):
- _failUnless(
- self, hasattr(self.driver, "STRING"), "module.STRING must be defined"
- )
-
- def test_BINARY(self):
- _failUnless(
- self, hasattr(self.driver, "BINARY"), "module.BINARY must be defined."
- )
-
- def test_NUMBER(self):
- _failUnless(
- self, hasattr(self.driver, "NUMBER"), "module.NUMBER must be defined."
- )
-
- def test_DATETIME(self):
- _failUnless(
- self, hasattr(self.driver, "DATETIME"), "module.DATETIME must be defined."
- )
-
- def test_ROWID(self):
- _failUnless(
- self, hasattr(self.driver, "ROWID"), "module.ROWID must be defined."
- )
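
As the class docstring above describes, a driver exercises this suite by subclassing it and filling in the connection details. A hedged sketch using the standard-library sqlite3 module purely as a stand-in driver (adodbapi ships its own test module with the real settings); a real driver will typically still need to skip or adapt a few individual tests:

    import sqlite3
    import unittest

    import dbapi20

    class Sqlite3DBAPI20Test(dbapi20.DatabaseAPI20Test):
        driver = sqlite3
        connect_args = (":memory:",)
        connect_kw_args = {}
        lower_func = None  # sqlite3 has no stored procedures / callproc

        def test_nextset(self):
            pass  # optional DB API feature, not supported by sqlite3

        def test_setoutputsize(self):
            pass  # driver dependent, not meaningful for sqlite3

    if __name__ == "__main__":
        unittest.main()
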
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/_core/_eventloop.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/_core/_eventloop.py
deleted file mode 100644
index ae9864851baee17613175361a9983f6756a2b0d1..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/_core/_eventloop.py
+++ /dev/null
@@ -1,153 +0,0 @@
-from __future__ import annotations
-
-import math
-import sys
-import threading
-from contextlib import contextmanager
-from importlib import import_module
-from typing import (
- Any,
- Awaitable,
- Callable,
- Generator,
- TypeVar,
-)
-
-import sniffio
-
-# This must be updated when new backends are introduced
-from ._compat import DeprecatedAwaitableFloat
-
-BACKENDS = "asyncio", "trio"
-
-T_Retval = TypeVar("T_Retval")
-threadlocals = threading.local()
-
-
-def run(
- func: Callable[..., Awaitable[T_Retval]],
- *args: object,
- backend: str = "asyncio",
- backend_options: dict[str, Any] | None = None,
-) -> T_Retval:
- """
- Run the given coroutine function in an asynchronous event loop.
-
- The current thread must not be already running an event loop.
-
- :param func: a coroutine function
- :param args: positional arguments to ``func``
- :param backend: name of the asynchronous event loop implementation – currently either
- ``asyncio`` or ``trio``
- :param backend_options: keyword arguments to call the backend ``run()`` implementation with
- (documented :ref:`here `)
- :return: the return value of the coroutine function
- :raises RuntimeError: if an asynchronous event loop is already running in this thread
- :raises LookupError: if the named backend is not found
-
- """
- try:
- asynclib_name = sniffio.current_async_library()
- except sniffio.AsyncLibraryNotFoundError:
- pass
- else:
- raise RuntimeError(f"Already running {asynclib_name} in this thread")
-
- try:
- asynclib = import_module(f"..._backends._{backend}", package=__name__)
- except ImportError as exc:
- raise LookupError(f"No such backend: {backend}") from exc
-
- token = None
- if sniffio.current_async_library_cvar.get(None) is None:
- # Since we're in control of the event loop, we can cache the name of the async library
- token = sniffio.current_async_library_cvar.set(backend)
-
- try:
- backend_options = backend_options or {}
- return asynclib.run(func, *args, **backend_options)
- finally:
- if token:
- sniffio.current_async_library_cvar.reset(token)
-
-
-async def sleep(delay: float) -> None:
- """
- Pause the current task for the specified duration.
-
- :param delay: the duration, in seconds
-
- """
- return await get_asynclib().sleep(delay)
-
-
-async def sleep_forever() -> None:
- """
- Pause the current task until it's cancelled.
-
- This is a shortcut for ``sleep(math.inf)``.
-
- .. versionadded:: 3.1
-
- """
- await sleep(math.inf)
-
-
-async def sleep_until(deadline: float) -> None:
- """
- Pause the current task until the given time.
-
- :param deadline: the absolute time to wake up at (according to the internal monotonic clock of
- the event loop)
-
- .. versionadded:: 3.1
-
- """
- now = current_time()
- await sleep(max(deadline - now, 0))
-
-
-def current_time() -> DeprecatedAwaitableFloat:
- """
- Return the current value of the event loop's internal clock.
-
- :return: the clock value (seconds)
-
- """
- return DeprecatedAwaitableFloat(get_asynclib().current_time(), current_time)
-
-
-def get_all_backends() -> tuple[str, ...]:
- """Return a tuple of the names of all built-in backends."""
- return BACKENDS
-
-
-def get_cancelled_exc_class() -> type[BaseException]:
- """Return the current async library's cancellation exception class."""
- return get_asynclib().CancelledError
-
-
-#
-# Private API
-#
-
-
-@contextmanager
-def claim_worker_thread(backend: str) -> Generator[Any, None, None]:
- module = sys.modules["anyio._backends._" + backend]
- threadlocals.current_async_module = module
- try:
- yield
- finally:
- del threadlocals.current_async_module
-
-
-def get_asynclib(asynclib_name: str | None = None) -> Any:
- if asynclib_name is None:
- asynclib_name = sniffio.current_async_library()
-
- modulename = "anyio._backends._" + asynclib_name
- try:
- return sys.modules[modulename]
- except KeyError:
- return import_module(modulename)
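
A brief sketch of the public helpers above as they are normally consumed, assuming the anyio package is importable; the backend keyword chooses between the asyncio and trio implementations:

    import anyio

    async def main() -> None:
        print("available backends:", anyio.get_all_backends())
        await anyio.sleep(0.1)                     # delegates to the active backend
        print("event loop clock:", float(anyio.current_time()))

    if __name__ == "__main__":
        anyio.run(main)                            # backend defaults to "asyncio"
        # anyio.run(main, backend="trio")          # same coroutine on trio, if installed
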
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/winappdbg/event.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/winappdbg/event.py
deleted file mode 100644
index af64727be69261079d07b72db25a159ef9a34650..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/winappdbg/event.py
+++ /dev/null
@@ -1,1869 +0,0 @@
-#!~/.wine/drive_c/Python25/python.exe
-# -*- coding: utf-8 -*-
-
-# Copyright (c) 2009-2014, Mario Vilas
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are met:
-#
-# * Redistributions of source code must retain the above copyright notice,
-# this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above copyright
-# notice,this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution.
-# * Neither the name of the copyright holder nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-# POSSIBILITY OF SUCH DAMAGE.
-
-"""
-Event handling module.
-
-@see: U{http://apps.sourceforge.net/trac/winappdbg/wiki/Debugging}
-
-@group Debugging:
- EventHandler, EventSift
-
-@group Debug events:
- EventFactory,
- EventDispatcher,
- Event,
- NoEvent,
- CreateProcessEvent,
- CreateThreadEvent,
- ExitProcessEvent,
- ExitThreadEvent,
- LoadDLLEvent,
- UnloadDLLEvent,
- OutputDebugStringEvent,
- RIPEvent,
- ExceptionEvent
-
-@group Warnings:
- EventCallbackWarning
-"""
-
-__revision__ = "$Id$"
-
-__all__ = [
- # Factory of Event objects and all of its subclasses.
- # Users should not need to instance Event objects directly.
- 'EventFactory',
-
- # Event dispatcher used internally by the Debug class.
- 'EventDispatcher',
-
- # Base classes for user-defined event handlers.
- 'EventHandler',
- 'EventSift',
-
- # Warning for uncaught exceptions on event callbacks.
- 'EventCallbackWarning',
-
- # Dummy event object that can be used as a placeholder.
- # It's never returned by the EventFactory.
- 'NoEvent',
-
- # Base class for event objects.
- 'Event',
-
- # Event objects.
- 'CreateProcessEvent',
- 'CreateThreadEvent',
- 'ExitProcessEvent',
- 'ExitThreadEvent',
- 'LoadDLLEvent',
- 'UnloadDLLEvent',
- 'OutputDebugStringEvent',
- 'RIPEvent',
- 'ExceptionEvent'
- ]
-
-from winappdbg import win32
-from winappdbg import compat
-from winappdbg.win32 import FileHandle, ProcessHandle, ThreadHandle
-from winappdbg.breakpoint import ApiHook
-from winappdbg.module import Module
-from winappdbg.thread import Thread
-from winappdbg.process import Process
-from winappdbg.textio import HexDump
-from winappdbg.util import StaticClass, PathOperations
-
-import sys
-import ctypes
-import warnings
-import traceback
-
-#==============================================================================
-
-class EventCallbackWarning (RuntimeWarning):
- """
- This warning is issued when an uncaught exception was raised by a
- user-defined event handler.
- """
-
-#==============================================================================
-
-class Event (object):
- """
- Event object.
-
- @type eventMethod: str
- @cvar eventMethod:
- Method name to call when using L{EventHandler} subclasses.
- Used internally.
-
- @type eventName: str
- @cvar eventName:
- User-friendly name of the event.
-
- @type eventDescription: str
- @cvar eventDescription:
- User-friendly description of the event.
-
- @type debug: L{Debug}
- @ivar debug:
- Debug object that received the event.
-
- @type raw: L{DEBUG_EVENT}
- @ivar raw:
- Raw DEBUG_EVENT structure as used by the Win32 API.
-
- @type continueStatus: int
- @ivar continueStatus:
- Continue status to pass to L{win32.ContinueDebugEvent}.
- """
-
- eventMethod = 'unknown_event'
- eventName = 'Unknown event'
- eventDescription = 'A debug event of an unknown type has occurred.'
-
- def __init__(self, debug, raw):
- """
- @type debug: L{Debug}
- @param debug: Debug object that received the event.
-
- @type raw: L{DEBUG_EVENT}
- @param raw: Raw DEBUG_EVENT structure as used by the Win32 API.
- """
- self.debug = debug
- self.raw = raw
- self.continueStatus = win32.DBG_EXCEPTION_NOT_HANDLED
-
-## @property
-## def debug(self):
-## """
-## @rtype debug: L{Debug}
-## @return debug:
-## Debug object that received the event.
-## """
-## return self.__debug()
-
- def get_event_name(self):
- """
- @rtype: str
- @return: User-friendly name of the event.
- """
- return self.eventName
-
- def get_event_description(self):
- """
- @rtype: str
- @return: User-friendly description of the event.
- """
- return self.eventDescription
-
- def get_event_code(self):
- """
- @rtype: int
- @return: Debug event code as defined in the Win32 API.
- """
- return self.raw.dwDebugEventCode
-
-## # Compatibility with version 1.0
-## # XXX to be removed in version 1.4
-## def get_code(self):
-## """
-## Alias of L{get_event_code} for backwards compatibility
-## with WinAppDbg version 1.0.
-## Will be phased out in the next version.
-##
-## @rtype: int
-## @return: Debug event code as defined in the Win32 API.
-## """
-## return self.get_event_code()
-
- def get_pid(self):
- """
- @see: L{get_process}
-
- @rtype: int
- @return: Process global ID where the event occurred.
- """
- return self.raw.dwProcessId
-
- def get_tid(self):
- """
- @see: L{get_thread}
-
- @rtype: int
- @return: Thread global ID where the event occurred.
- """
- return self.raw.dwThreadId
-
- def get_process(self):
- """
- @see: L{get_pid}
-
- @rtype: L{Process}
- @return: Process where the event occurred.
- """
- pid = self.get_pid()
- system = self.debug.system
- if system.has_process(pid):
- process = system.get_process(pid)
- else:
- # XXX HACK
- # The process object was missing for some reason, so make a new one.
- process = Process(pid)
- system._add_process(process)
-## process.scan_threads() # not needed
- process.scan_modules()
- return process
-
- def get_thread(self):
- """
- @see: L{get_tid}
-
- @rtype: L{Thread}
- @return: Thread where the event occurred.
- """
- tid = self.get_tid()
- process = self.get_process()
- if process.has_thread(tid):
- thread = process.get_thread(tid)
- else:
- # XXX HACK
- # The thread object was missing for some reason, so make a new one.
- thread = Thread(tid)
- process._add_thread(thread)
- return thread
-
-#==============================================================================
-
-class NoEvent (Event):
- """
- No event.
-
- Dummy L{Event} object that can be used as a placeholder when no debug
- event has occurred yet. It's never returned by the L{EventFactory}.
- """
-
- eventMethod = 'no_event'
- eventName = 'No event'
- eventDescription = 'No debug event has occurred.'
-
- def __init__(self, debug, raw = None):
- Event.__init__(self, debug, raw)
-
- def __len__(self):
- """
- Always returns C{0}, so when evaluating the object as a boolean it's
- always C{False}. This prevents L{Debug.cont} from trying to continue
- a dummy event.
- """
- return 0
-
- def get_event_code(self):
- return -1
-
- def get_pid(self):
- return -1
-
- def get_tid(self):
- return -1
-
- def get_process(self):
- return Process(self.get_pid())
-
- def get_thread(self):
- return Thread(self.get_tid())
-
-#==============================================================================
-
-class ExceptionEvent (Event):
- """
- Exception event.
-
- @type exceptionName: dict( int S{->} str )
- @cvar exceptionName:
- Mapping of exception constants to their names.
-
- @type exceptionDescription: dict( int S{->} str )
- @cvar exceptionDescription:
- Mapping of exception constants to user-friendly strings.
-
- @type breakpoint: L{Breakpoint}
- @ivar breakpoint:
- If the exception was caused by one of our breakpoints, this member
- contains a reference to the breakpoint object. Otherwise it's not
- defined. It should only be used from the condition or action callback
- routines, instead of the event handler.
-
- @type hook: L{Hook}
- @ivar hook:
- If the exception was caused by a function hook, this member contains a
- reference to the hook object. Otherwise it's not defined. It should
- only be used from the hook callback routines, instead of the event
- handler.
- """
-
- eventName = 'Exception event'
- eventDescription = 'An exception was raised by the debugee.'
-
- __exceptionMethod = {
- win32.EXCEPTION_ACCESS_VIOLATION : 'access_violation',
- win32.EXCEPTION_ARRAY_BOUNDS_EXCEEDED : 'array_bounds_exceeded',
- win32.EXCEPTION_BREAKPOINT : 'breakpoint',
- win32.EXCEPTION_DATATYPE_MISALIGNMENT : 'datatype_misalignment',
- win32.EXCEPTION_FLT_DENORMAL_OPERAND : 'float_denormal_operand',
- win32.EXCEPTION_FLT_DIVIDE_BY_ZERO : 'float_divide_by_zero',
- win32.EXCEPTION_FLT_INEXACT_RESULT : 'float_inexact_result',
- win32.EXCEPTION_FLT_INVALID_OPERATION : 'float_invalid_operation',
- win32.EXCEPTION_FLT_OVERFLOW : 'float_overflow',
- win32.EXCEPTION_FLT_STACK_CHECK : 'float_stack_check',
- win32.EXCEPTION_FLT_UNDERFLOW : 'float_underflow',
- win32.EXCEPTION_ILLEGAL_INSTRUCTION : 'illegal_instruction',
- win32.EXCEPTION_IN_PAGE_ERROR : 'in_page_error',
- win32.EXCEPTION_INT_DIVIDE_BY_ZERO : 'integer_divide_by_zero',
- win32.EXCEPTION_INT_OVERFLOW : 'integer_overflow',
- win32.EXCEPTION_INVALID_DISPOSITION : 'invalid_disposition',
- win32.EXCEPTION_NONCONTINUABLE_EXCEPTION : 'noncontinuable_exception',
- win32.EXCEPTION_PRIV_INSTRUCTION : 'privileged_instruction',
- win32.EXCEPTION_SINGLE_STEP : 'single_step',
- win32.EXCEPTION_STACK_OVERFLOW : 'stack_overflow',
- win32.EXCEPTION_GUARD_PAGE : 'guard_page',
- win32.EXCEPTION_INVALID_HANDLE : 'invalid_handle',
- win32.EXCEPTION_POSSIBLE_DEADLOCK : 'possible_deadlock',
- win32.EXCEPTION_WX86_BREAKPOINT : 'wow64_breakpoint',
- win32.CONTROL_C_EXIT : 'control_c_exit',
- win32.DBG_CONTROL_C : 'debug_control_c',
- win32.MS_VC_EXCEPTION : 'ms_vc_exception',
- }
-
- __exceptionName = {
- win32.EXCEPTION_ACCESS_VIOLATION : 'EXCEPTION_ACCESS_VIOLATION',
- win32.EXCEPTION_ARRAY_BOUNDS_EXCEEDED : 'EXCEPTION_ARRAY_BOUNDS_EXCEEDED',
- win32.EXCEPTION_BREAKPOINT : 'EXCEPTION_BREAKPOINT',
- win32.EXCEPTION_DATATYPE_MISALIGNMENT : 'EXCEPTION_DATATYPE_MISALIGNMENT',
- win32.EXCEPTION_FLT_DENORMAL_OPERAND : 'EXCEPTION_FLT_DENORMAL_OPERAND',
- win32.EXCEPTION_FLT_DIVIDE_BY_ZERO : 'EXCEPTION_FLT_DIVIDE_BY_ZERO',
- win32.EXCEPTION_FLT_INEXACT_RESULT : 'EXCEPTION_FLT_INEXACT_RESULT',
- win32.EXCEPTION_FLT_INVALID_OPERATION : 'EXCEPTION_FLT_INVALID_OPERATION',
- win32.EXCEPTION_FLT_OVERFLOW : 'EXCEPTION_FLT_OVERFLOW',
- win32.EXCEPTION_FLT_STACK_CHECK : 'EXCEPTION_FLT_STACK_CHECK',
- win32.EXCEPTION_FLT_UNDERFLOW : 'EXCEPTION_FLT_UNDERFLOW',
- win32.EXCEPTION_ILLEGAL_INSTRUCTION : 'EXCEPTION_ILLEGAL_INSTRUCTION',
- win32.EXCEPTION_IN_PAGE_ERROR : 'EXCEPTION_IN_PAGE_ERROR',
- win32.EXCEPTION_INT_DIVIDE_BY_ZERO : 'EXCEPTION_INT_DIVIDE_BY_ZERO',
- win32.EXCEPTION_INT_OVERFLOW : 'EXCEPTION_INT_OVERFLOW',
- win32.EXCEPTION_INVALID_DISPOSITION : 'EXCEPTION_INVALID_DISPOSITION',
- win32.EXCEPTION_NONCONTINUABLE_EXCEPTION : 'EXCEPTION_NONCONTINUABLE_EXCEPTION',
- win32.EXCEPTION_PRIV_INSTRUCTION : 'EXCEPTION_PRIV_INSTRUCTION',
- win32.EXCEPTION_SINGLE_STEP : 'EXCEPTION_SINGLE_STEP',
- win32.EXCEPTION_STACK_OVERFLOW : 'EXCEPTION_STACK_OVERFLOW',
- win32.EXCEPTION_GUARD_PAGE : 'EXCEPTION_GUARD_PAGE',
- win32.EXCEPTION_INVALID_HANDLE : 'EXCEPTION_INVALID_HANDLE',
- win32.EXCEPTION_POSSIBLE_DEADLOCK : 'EXCEPTION_POSSIBLE_DEADLOCK',
- win32.EXCEPTION_WX86_BREAKPOINT : 'EXCEPTION_WX86_BREAKPOINT',
- win32.CONTROL_C_EXIT : 'CONTROL_C_EXIT',
- win32.DBG_CONTROL_C : 'DBG_CONTROL_C',
- win32.MS_VC_EXCEPTION : 'MS_VC_EXCEPTION',
- }
-
- __exceptionDescription = {
- win32.EXCEPTION_ACCESS_VIOLATION : 'Access violation',
- win32.EXCEPTION_ARRAY_BOUNDS_EXCEEDED : 'Array bounds exceeded',
- win32.EXCEPTION_BREAKPOINT : 'Breakpoint',
- win32.EXCEPTION_DATATYPE_MISALIGNMENT : 'Datatype misalignment',
- win32.EXCEPTION_FLT_DENORMAL_OPERAND : 'Float denormal operand',
- win32.EXCEPTION_FLT_DIVIDE_BY_ZERO : 'Float divide by zero',
- win32.EXCEPTION_FLT_INEXACT_RESULT : 'Float inexact result',
- win32.EXCEPTION_FLT_INVALID_OPERATION : 'Float invalid operation',
- win32.EXCEPTION_FLT_OVERFLOW : 'Float overflow',
- win32.EXCEPTION_FLT_STACK_CHECK : 'Float stack check',
- win32.EXCEPTION_FLT_UNDERFLOW : 'Float underflow',
- win32.EXCEPTION_ILLEGAL_INSTRUCTION : 'Illegal instruction',
- win32.EXCEPTION_IN_PAGE_ERROR : 'In-page error',
- win32.EXCEPTION_INT_DIVIDE_BY_ZERO : 'Integer divide by zero',
- win32.EXCEPTION_INT_OVERFLOW : 'Integer overflow',
- win32.EXCEPTION_INVALID_DISPOSITION : 'Invalid disposition',
- win32.EXCEPTION_NONCONTINUABLE_EXCEPTION : 'Noncontinuable exception',
- win32.EXCEPTION_PRIV_INSTRUCTION : 'Privileged instruction',
- win32.EXCEPTION_SINGLE_STEP : 'Single step event',
- win32.EXCEPTION_STACK_OVERFLOW : 'Stack limits overflow',
- win32.EXCEPTION_GUARD_PAGE : 'Guard page hit',
- win32.EXCEPTION_INVALID_HANDLE : 'Invalid handle',
- win32.EXCEPTION_POSSIBLE_DEADLOCK : 'Possible deadlock',
- win32.EXCEPTION_WX86_BREAKPOINT : 'WOW64 breakpoint',
- win32.CONTROL_C_EXIT : 'Control-C exit',
- win32.DBG_CONTROL_C : 'Debug Control-C',
- win32.MS_VC_EXCEPTION : 'Microsoft Visual C++ exception',
- }
-
- @property
- def eventMethod(self):
- return self.__exceptionMethod.get(
- self.get_exception_code(), 'unknown_exception')
-
- def get_exception_name(self):
- """
- @rtype: str
- @return: Name of the exception as defined by the Win32 API.
- """
- code = self.get_exception_code()
- unk = HexDump.integer(code)
- return self.__exceptionName.get(code, unk)
-
- def get_exception_description(self):
- """
- @rtype: str
- @return: User-friendly name of the exception.
- """
- code = self.get_exception_code()
- description = self.__exceptionDescription.get(code, None)
- if description is None:
- try:
- description = 'Exception code %s (%s)'
- description = description % (HexDump.integer(code),
- ctypes.FormatError(code))
- except OverflowError:
- description = 'Exception code %s' % HexDump.integer(code)
- return description
-
- def is_first_chance(self):
- """
- @rtype: bool
- @return: C{True} for first chance exceptions, C{False} for last chance.
- """
- return self.raw.u.Exception.dwFirstChance != 0
-
- def is_last_chance(self):
- """
- @rtype: bool
- @return: The opposite of L{is_first_chance}.
- """
- return not self.is_first_chance()
-
- def is_noncontinuable(self):
- """
- @see: U{http://msdn.microsoft.com/en-us/library/aa363082(VS.85).aspx}
-
- @rtype: bool
- @return: C{True} if the exception is noncontinuable,
- C{False} otherwise.
-
- Attempting to continue a noncontinuable exception results in an
- EXCEPTION_NONCONTINUABLE_EXCEPTION exception being raised.
- """
- return bool( self.raw.u.Exception.ExceptionRecord.ExceptionFlags & \
- win32.EXCEPTION_NONCONTINUABLE )
-
- def is_continuable(self):
- """
- @rtype: bool
- @return: The opposite of L{is_noncontinuable}.
- """
- return not self.is_noncontinuable()
-
- def is_user_defined_exception(self):
- """
- Determines if this is a user-defined exception. User-defined
- exceptions may contain any exception code that is not system reserved.
-
- Often the exception code is also a valid Win32 error code, but that's
- up to the debugged application.
-
- @rtype: bool
- @return: C{True} if the exception is user-defined, C{False} otherwise.
- """
- return self.get_exception_code() & 0x10000000 == 0
-
- def is_system_defined_exception(self):
- """
- @rtype: bool
- @return: The opposite of L{is_user_defined_exception}.
- """
- return not self.is_user_defined_exception()
-
- def get_exception_code(self):
- """
- @rtype: int
- @return: Exception code as defined by the Win32 API.
- """
- return self.raw.u.Exception.ExceptionRecord.ExceptionCode
-
- def get_exception_address(self):
- """
- @rtype: int
- @return: Memory address where the exception occurred.
- """
- address = self.raw.u.Exception.ExceptionRecord.ExceptionAddress
- if address is None:
- address = 0
- return address
-
- def get_exception_information(self, index):
- """
- @type index: int
- @param index: Index into the exception information block.
-
- @rtype: int
- @return: Exception information DWORD.
- """
- if index < 0 or index > win32.EXCEPTION_MAXIMUM_PARAMETERS:
- raise IndexError("Array index out of range: %s" % repr(index))
- info = self.raw.u.Exception.ExceptionRecord.ExceptionInformation
- value = info[index]
- if value is None:
- value = 0
- return value
-
- def get_exception_information_as_list(self):
- """
- @rtype: list( int )
- @return: Exception information block.
- """
- info = self.raw.u.Exception.ExceptionRecord.ExceptionInformation
- data = list()
- for index in compat.xrange(0, win32.EXCEPTION_MAXIMUM_PARAMETERS):
- value = info[index]
- if value is None:
- value = 0
- data.append(value)
- return data
-
- def get_fault_type(self):
- """
- @rtype: int
- @return: Access violation type.
- Should be one of the following constants:
-
- - L{win32.EXCEPTION_READ_FAULT}
- - L{win32.EXCEPTION_WRITE_FAULT}
- - L{win32.EXCEPTION_EXECUTE_FAULT}
-
- @note: This method is only meaningful for access violation exceptions,
- in-page memory error exceptions and guard page exceptions.
-
- @raise NotImplementedError: Wrong kind of exception.
- """
- if self.get_exception_code() not in (win32.EXCEPTION_ACCESS_VIOLATION,
- win32.EXCEPTION_IN_PAGE_ERROR, win32.EXCEPTION_GUARD_PAGE):
- msg = "This method is not meaningful for %s."
- raise NotImplementedError(msg % self.get_exception_name())
- return self.get_exception_information(0)
-
- def get_fault_address(self):
- """
- @rtype: int
- @return: Access violation memory address.
-
- @note: This method is only meaningful for access violation exceptions,
- in-page memory error exceptions and guard page exceptions.
-
- @raise NotImplementedError: Wrong kind of exception.
- """
- if self.get_exception_code() not in (win32.EXCEPTION_ACCESS_VIOLATION,
- win32.EXCEPTION_IN_PAGE_ERROR, win32.EXCEPTION_GUARD_PAGE):
- msg = "This method is not meaningful for %s."
- raise NotImplementedError(msg % self.get_exception_name())
- return self.get_exception_information(1)
-
- def get_ntstatus_code(self):
- """
- @rtype: int
- @return: NTSTATUS status code that caused the exception.
-
- @note: This method is only meaningful for in-page memory error
- exceptions.
-
- @raise NotImplementedError: Not an in-page memory error.
- """
- if self.get_exception_code() != win32.EXCEPTION_IN_PAGE_ERROR:
- msg = "This method is only meaningful "\
- "for in-page memory error exceptions."
- raise NotImplementedError(msg)
- return self.get_exception_information(2)
-
- def is_nested(self):
- """
- @rtype: bool
- @return: Returns C{True} if there are additional exception records
- associated with this exception. This would mean the exception
- is nested, that is, it was triggered while trying to handle
- at least one previous exception.
- """
- return bool(self.raw.u.Exception.ExceptionRecord.ExceptionRecord)
-
- def get_raw_exception_record_list(self):
- """
- Traverses the exception record linked list and builds a Python list.
-
- Nested exception records are received for nested exceptions. This
- happens when an exception is raised in the debugee while trying to
- handle a previous exception.
-
- @rtype: list( L{win32.EXCEPTION_RECORD} )
- @return:
- List of raw exception record structures as used by the Win32 API.
-
- There is always at least one exception record, so the list is
- never empty. All other methods of this class read from the first
- exception record only, that is, the most recent exception.
- """
- # The first EXCEPTION_RECORD is contained in EXCEPTION_DEBUG_INFO.
- # The remaining EXCEPTION_RECORD structures are linked by pointers.
- nested = list()
- record = self.raw.u.Exception
- while True:
- record = record.ExceptionRecord
- if not record:
- break
- nested.append(record)
- return nested
-
- def get_nested_exceptions(self):
- """
- Traverses the exception record linked list and builds a Python list.
-
- Nested exception records are received for nested exceptions. This
- happens when an exception is raised in the debugee while trying to
- handle a previous exception.
-
- @rtype: list( L{ExceptionEvent} )
- @return:
- List of ExceptionEvent objects representing each exception record
- found in this event.
-
- There is always at least one exception record, so the list is
- never empty. All other methods of this class read from the first
- exception record only, that is, the most recent exception.
- """
- # The list always begins with ourselves.
- # Just put a reference to "self" as the first element,
- # and start looping from the second exception record.
- nested = [ self ]
- raw = self.raw
- dwDebugEventCode = raw.dwDebugEventCode
- dwProcessId = raw.dwProcessId
- dwThreadId = raw.dwThreadId
- dwFirstChance = raw.u.Exception.dwFirstChance
- record = raw.u.Exception.ExceptionRecord
- while True:
- record = record.ExceptionRecord
- if not record:
- break
- raw = win32.DEBUG_EVENT()
- raw.dwDebugEventCode = dwDebugEventCode
- raw.dwProcessId = dwProcessId
- raw.dwThreadId = dwThreadId
- raw.u.Exception.ExceptionRecord = record
- raw.u.Exception.dwFirstChance = dwFirstChance
- event = EventFactory.get(self.debug, raw)
- nested.append(event)
- return nested
-
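
The ExceptionEvent accessors above are normally consumed through an EventHandler subclass passed to winappdbg's Debug class. A hedged, Windows-only sketch; the target executable and the printed fields are illustrative only:

    from winappdbg import Debug, EventHandler

    class MyHandler(EventHandler):
        def create_process(self, event):
            print("process started, pid", event.get_pid())

        def access_violation(self, event):
            # Method name matches ExceptionEvent.eventMethod for this exception code.
            print("access violation at", hex(event.get_exception_address()),
                  "fault address", hex(event.get_fault_address()))

    if __name__ == "__main__":
        debug = Debug(MyHandler())
        try:
            debug.execv(["notepad.exe"])   # example target only
            debug.loop()
        finally:
            debug.stop()
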
-#==============================================================================
-
-class CreateThreadEvent (Event):
- """
- Thread creation event.
- """
-
- eventMethod = 'create_thread'
- eventName = 'Thread creation event'
- eventDescription = 'A new thread has started.'
-
- def get_thread_handle(self):
- """
- @rtype: L{ThreadHandle}
- @return: Thread handle received from the system.
- Returns C{None} if the handle is not available.
- """
- # The handle doesn't need to be closed.
- # See http://msdn.microsoft.com/en-us/library/ms681423(VS.85).aspx
- hThread = self.raw.u.CreateThread.hThread
- if hThread in (0, win32.NULL, win32.INVALID_HANDLE_VALUE):
- hThread = None
- else:
- hThread = ThreadHandle(hThread, False, win32.THREAD_ALL_ACCESS)
- return hThread
-
- def get_teb(self):
- """
- @rtype: int
- @return: Pointer to the TEB.
- """
- return self.raw.u.CreateThread.lpThreadLocalBase
-
- def get_start_address(self):
- """
- @rtype: int
- @return: Pointer to the first instruction to execute in this thread.
-
- Returns C{NULL} when the debugger attached to a process
- and the thread already existed.
-
- See U{http://msdn.microsoft.com/en-us/library/ms679295(VS.85).aspx}
- """
- return self.raw.u.CreateThread.lpStartAddress
-
-#==============================================================================
-
-class CreateProcessEvent (Event):
- """
- Process creation event.
- """
-
- eventMethod = 'create_process'
- eventName = 'Process creation event'
- eventDescription = 'A new process has started.'
-
- def get_file_handle(self):
- """
- @rtype: L{FileHandle} or None
- @return: File handle to the main module, received from the system.
- Returns C{None} if the handle is not available.
- """
- # This handle DOES need to be closed.
- # Therefore we must cache it so it doesn't
- # get closed after the first call.
- try:
- hFile = self.__hFile
- except AttributeError:
- hFile = self.raw.u.CreateProcessInfo.hFile
- if hFile in (0, win32.NULL, win32.INVALID_HANDLE_VALUE):
- hFile = None
- else:
- hFile = FileHandle(hFile, True)
- self.__hFile = hFile
- return hFile
-
- def get_process_handle(self):
- """
- @rtype: L{ProcessHandle}
- @return: Process handle received from the system.
- Returns C{None} if the handle is not available.
- """
- # The handle doesn't need to be closed.
- # See http://msdn.microsoft.com/en-us/library/ms681423(VS.85).aspx
- hProcess = self.raw.u.CreateProcessInfo.hProcess
- if hProcess in (0, win32.NULL, win32.INVALID_HANDLE_VALUE):
- hProcess = None
- else:
- hProcess = ProcessHandle(hProcess, False, win32.PROCESS_ALL_ACCESS)
- return hProcess
-
- def get_thread_handle(self):
- """
- @rtype: L{ThreadHandle}
- @return: Thread handle received from the system.
- Returns C{None} if the handle is not available.
- """
- # The handle doesn't need to be closed.
- # See http://msdn.microsoft.com/en-us/library/ms681423(VS.85).aspx
- hThread = self.raw.u.CreateProcessInfo.hThread
- if hThread in (0, win32.NULL, win32.INVALID_HANDLE_VALUE):
- hThread = None
- else:
- hThread = ThreadHandle(hThread, False, win32.THREAD_ALL_ACCESS)
- return hThread
-
- def get_start_address(self):
- """
- @rtype: int
- @return: Pointer to the first instruction to execute in this process.
-
- Returns C{NULL} when the debugger attaches to a process.
-
- See U{http://msdn.microsoft.com/en-us/library/ms679295(VS.85).aspx}
- """
- return self.raw.u.CreateProcessInfo.lpStartAddress
-
- def get_image_base(self):
- """
- @rtype: int
- @return: Base address of the main module.
- @warn: This value is taken from the PE file
- and may be incorrect because of ASLR!
- """
- # TODO try to calculate the real value when ASLR is active.
- return self.raw.u.CreateProcessInfo.lpBaseOfImage
-
- def get_teb(self):
- """
- @rtype: int
- @return: Pointer to the TEB.
- """
- return self.raw.u.CreateProcessInfo.lpThreadLocalBase
-
- def get_debug_info(self):
- """
- @rtype: str
- @return: Debugging information.
- """
- raw = self.raw.u.CreateProcessInfo
- ptr = raw.lpBaseOfImage + raw.dwDebugInfoFileOffset
- size = raw.nDebugInfoSize
- data = self.get_process().peek(ptr, size)
- if len(data) == size:
- return data
- return None
-
- def get_filename(self):
- """
- @rtype: str, None
-        @return: This method does its best to retrieve the filename of
- the main module of the process. However, sometimes that's not
- possible, and C{None} is returned instead.
- """
-
- # Try to get the filename from the file handle.
- szFilename = None
- hFile = self.get_file_handle()
- if hFile:
- szFilename = hFile.get_filename()
- if not szFilename:
-
- # Try to get it from CREATE_PROCESS_DEBUG_INFO.lpImageName
-            # It's NULL or *NULL most of the time, see MSDN:
- # http://msdn.microsoft.com/en-us/library/ms679286(VS.85).aspx
- aProcess = self.get_process()
- lpRemoteFilenamePtr = self.raw.u.CreateProcessInfo.lpImageName
- if lpRemoteFilenamePtr:
- lpFilename = aProcess.peek_uint(lpRemoteFilenamePtr)
- fUnicode = bool( self.raw.u.CreateProcessInfo.fUnicode )
- szFilename = aProcess.peek_string(lpFilename, fUnicode)
-
- # XXX TODO
- # Sometimes the filename is relative (ntdll.dll, kernel32.dll).
- # It could be converted to an absolute pathname (SearchPath).
-
- # Try to get it from Process.get_image_name().
- if not szFilename:
- szFilename = aProcess.get_image_name()
-
- # Return the filename, or None on error.
- return szFilename
-
- def get_module_base(self):
- """
- @rtype: int
- @return: Base address of the main module.
- """
- return self.get_image_base()
-
- def get_module(self):
- """
- @rtype: L{Module}
- @return: Main module of the process.
- """
- return self.get_process().get_module( self.get_module_base() )
-
-#==============================================================================
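-
-# Illustrative sketch (not part of the original module): a minimal event
-# handler built on the CreateProcessEvent accessors above. The import path
-# follows winappdbg's public API; the target executable is made up.
-def _example_log_new_processes():
-    from winappdbg import Debug, EventHandler
-
-    class _ProcessLogger (EventHandler):
-        def create_process(self, event):
-            print("PID %d started %s (image base %#x)" % (
-                event.get_pid(),
-                event.get_filename(),
-                event.get_image_base()))
-
-    with Debug(_ProcessLogger(), bKillOnExit = True) as debug:
-        debug.execl("notepad.exe")
-        debug.loop()
-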
-
-class ExitThreadEvent (Event):
- """
- Thread termination event.
- """
-
- eventMethod = 'exit_thread'
- eventName = 'Thread termination event'
- eventDescription = 'A thread has finished executing.'
-
- def get_exit_code(self):
- """
- @rtype: int
- @return: Exit code of the thread.
- """
- return self.raw.u.ExitThread.dwExitCode
-
-#==============================================================================
-
-class ExitProcessEvent (Event):
- """
- Process termination event.
- """
-
- eventMethod = 'exit_process'
- eventName = 'Process termination event'
- eventDescription = 'A process has finished executing.'
-
- def get_exit_code(self):
- """
- @rtype: int
- @return: Exit code of the process.
- """
- return self.raw.u.ExitProcess.dwExitCode
-
- def get_filename(self):
- """
- @rtype: None or str
- @return: Filename of the main module.
- C{None} if the filename is unknown.
- """
- return self.get_module().get_filename()
-
- def get_image_base(self):
- """
- @rtype: int
- @return: Base address of the main module.
- """
- return self.get_module_base()
-
- def get_module_base(self):
- """
- @rtype: int
- @return: Base address of the main module.
- """
- return self.get_module().get_base()
-
- def get_module(self):
- """
- @rtype: L{Module}
- @return: Main module of the process.
- """
- return self.get_process().get_main_module()
-
-#==============================================================================
-
-class LoadDLLEvent (Event):
- """
- Module load event.
- """
-
- eventMethod = 'load_dll'
- eventName = 'Module load event'
- eventDescription = 'A new DLL library was loaded by the debugee.'
-
- def get_module_base(self):
- """
- @rtype: int
- @return: Base address for the newly loaded DLL.
- """
- return self.raw.u.LoadDll.lpBaseOfDll
-
- def get_module(self):
- """
- @rtype: L{Module}
- @return: Module object for the newly loaded DLL.
- """
- lpBaseOfDll = self.get_module_base()
- aProcess = self.get_process()
- if aProcess.has_module(lpBaseOfDll):
- aModule = aProcess.get_module(lpBaseOfDll)
- else:
- # XXX HACK
- # For some reason the module object is missing, so make a new one.
- aModule = Module(lpBaseOfDll,
- hFile = self.get_file_handle(),
- fileName = self.get_filename(),
- process = aProcess)
- aProcess._add_module(aModule)
- return aModule
-
- def get_file_handle(self):
- """
- @rtype: L{FileHandle} or None
- @return: File handle to the newly loaded DLL received from the system.
- Returns C{None} if the handle is not available.
- """
- # This handle DOES need to be closed.
- # Therefore we must cache it so it doesn't
- # get closed after the first call.
- try:
- hFile = self.__hFile
- except AttributeError:
- hFile = self.raw.u.LoadDll.hFile
- if hFile in (0, win32.NULL, win32.INVALID_HANDLE_VALUE):
- hFile = None
- else:
- hFile = FileHandle(hFile, True)
- self.__hFile = hFile
- return hFile
-
- def get_filename(self):
- """
- @rtype: str, None
-        @return: This method does its best to retrieve the filename of
- the newly loaded module. However, sometimes that's not
- possible, and C{None} is returned instead.
- """
- szFilename = None
-
- # Try to get it from LOAD_DLL_DEBUG_INFO.lpImageName
-        # It's NULL or *NULL most of the time, see MSDN:
- # http://msdn.microsoft.com/en-us/library/ms679286(VS.85).aspx
- aProcess = self.get_process()
- lpRemoteFilenamePtr = self.raw.u.LoadDll.lpImageName
- if lpRemoteFilenamePtr:
- lpFilename = aProcess.peek_uint(lpRemoteFilenamePtr)
- fUnicode = bool( self.raw.u.LoadDll.fUnicode )
- szFilename = aProcess.peek_string(lpFilename, fUnicode)
- if not szFilename:
- szFilename = None
-
- # Try to get the filename from the file handle.
- if not szFilename:
- hFile = self.get_file_handle()
- if hFile:
- szFilename = hFile.get_filename()
-
- # Return the filename, or None on error.
- return szFilename
-
-#==============================================================================
-
-class UnloadDLLEvent (Event):
- """
- Module unload event.
- """
-
- eventMethod = 'unload_dll'
- eventName = 'Module unload event'
- eventDescription = 'A DLL library was unloaded by the debugee.'
-
- def get_module_base(self):
- """
- @rtype: int
- @return: Base address for the recently unloaded DLL.
- """
- return self.raw.u.UnloadDll.lpBaseOfDll
-
- def get_module(self):
- """
- @rtype: L{Module}
- @return: Module object for the recently unloaded DLL.
- """
- lpBaseOfDll = self.get_module_base()
- aProcess = self.get_process()
- if aProcess.has_module(lpBaseOfDll):
- aModule = aProcess.get_module(lpBaseOfDll)
- else:
- aModule = Module(lpBaseOfDll, process = aProcess)
- aProcess._add_module(aModule)
- return aModule
-
- def get_file_handle(self):
- """
- @rtype: None or L{FileHandle}
- @return: File handle to the recently unloaded DLL.
- Returns C{None} if the handle is not available.
- """
- hFile = self.get_module().hFile
- if hFile in (0, win32.NULL, win32.INVALID_HANDLE_VALUE):
- hFile = None
- return hFile
-
- def get_filename(self):
- """
- @rtype: None or str
- @return: Filename of the recently unloaded DLL.
- C{None} if the filename is unknown.
- """
- return self.get_module().get_filename()
-
-#==============================================================================
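-
-# Illustrative sketch (not part of the original module): pairing the
-# load_dll and unload_dll callbacks served by the two classes above to keep
-# a map of the modules currently loaded in the debugee. The target
-# executable is made up.
-def _example_track_modules():
-    from winappdbg import Debug, EventHandler
-
-    class _ModuleTracker (EventHandler):
-        def __init__(self):
-            super(_ModuleTracker, self).__init__()
-            self.loaded = {}    # module base address -> filename
-
-        def load_dll(self, event):
-            self.loaded[event.get_module_base()] = event.get_filename()
-
-        def unload_dll(self, event):
-            self.loaded.pop(event.get_module_base(), None)
-
-    tracker = _ModuleTracker()
-    with Debug(tracker) as debug:
-        debug.execl("calc.exe")
-        debug.loop()
-    return tracker.loaded
-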
-
-class OutputDebugStringEvent (Event):
- """
- Debug string output event.
- """
-
- eventMethod = 'output_string'
- eventName = 'Debug string output event'
- eventDescription = 'The debugee sent a message to the debugger.'
-
- def get_debug_string(self):
- """
- @rtype: str, compat.unicode
- @return: String sent by the debugee.
- It may be ANSI or Unicode and may end with a null character.
- """
- return self.get_process().peek_string(
- self.raw.u.DebugString.lpDebugStringData,
- bool( self.raw.u.DebugString.fUnicode ),
- self.raw.u.DebugString.nDebugStringLength)
-
-#==============================================================================
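-
-# Illustrative sketch (not part of the original module): what an
-# output_string handler might do with get_debug_string(), which can return
-# an ANSI or Unicode string ending in a null character.
-def _example_print_debug_string(event):
-    text = event.get_debug_string()
-    if text:
-        print(text.rstrip("\0"))
-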
-
-class RIPEvent (Event):
- """
- RIP event.
- """
-
- eventMethod = 'rip'
- eventName = 'RIP event'
-    eventDescription = 'An error has occurred and the process ' \
- 'can no longer be debugged.'
-
- def get_rip_error(self):
- """
- @rtype: int
- @return: RIP error code as defined by the Win32 API.
- """
- return self.raw.u.RipInfo.dwError
-
- def get_rip_type(self):
- """
- @rtype: int
- @return: RIP type code as defined by the Win32 API.
- May be C{0} or one of the following:
- - L{win32.SLE_ERROR}
- - L{win32.SLE_MINORERROR}
- - L{win32.SLE_WARNING}
- """
- return self.raw.u.RipInfo.dwType
-
-#==============================================================================
-
-class EventFactory (StaticClass):
- """
- Factory of L{Event} objects.
-
- @type baseEvent: L{Event}
- @cvar baseEvent:
- Base class for Event objects.
- It's used for unknown event codes.
-
- @type eventClasses: dict( int S{->} L{Event} )
- @cvar eventClasses:
- Dictionary that maps event codes to L{Event} subclasses.
- """
-
- baseEvent = Event
- eventClasses = {
- win32.EXCEPTION_DEBUG_EVENT : ExceptionEvent, # 1
- win32.CREATE_THREAD_DEBUG_EVENT : CreateThreadEvent, # 2
- win32.CREATE_PROCESS_DEBUG_EVENT : CreateProcessEvent, # 3
- win32.EXIT_THREAD_DEBUG_EVENT : ExitThreadEvent, # 4
- win32.EXIT_PROCESS_DEBUG_EVENT : ExitProcessEvent, # 5
- win32.LOAD_DLL_DEBUG_EVENT : LoadDLLEvent, # 6
- win32.UNLOAD_DLL_DEBUG_EVENT : UnloadDLLEvent, # 7
- win32.OUTPUT_DEBUG_STRING_EVENT : OutputDebugStringEvent, # 8
- win32.RIP_EVENT : RIPEvent, # 9
- }
-
- @classmethod
- def get(cls, debug, raw):
- """
- @type debug: L{Debug}
- @param debug: Debug object that received the event.
-
- @type raw: L{DEBUG_EVENT}
- @param raw: Raw DEBUG_EVENT structure as used by the Win32 API.
-
- @rtype: L{Event}
-        @returns: An Event object or one of its subclasses,
- depending on the event type.
- """
- eventClass = cls.eventClasses.get(raw.dwDebugEventCode, cls.baseEvent)
- return eventClass(debug, raw)
-
-#==============================================================================
-
-class EventHandler (object):
- """
- Base class for debug event handlers.
-
-    Your program should subclass it to implement its own event handling.
-
-    The constructor can be overridden as long as you call the superclass
-    constructor. The special method L{__call__} B{MUST NOT} be overridden.
-
- The signature for event handlers is the following::
-
- def event_handler(self, event):
-
- Where B{event} is an L{Event} object.
-
-    Each event handler is named after the event it handles.
- This is the list of all valid event handler names:
-
- - I{event}
-
-        Receives an L{Event} object or an object of any of its subclasses,
- and handles any event for which no handler was defined.
-
- - I{unknown_event}
-
-        Receives an L{Event} object or an object of any of its subclasses,
- and handles any event unknown to the debugging engine. (This is not
- likely to happen unless the Win32 debugging API is changed in future
- versions of Windows).
-
- - I{exception}
-
- Receives an L{ExceptionEvent} object and handles any exception for
- which no handler was defined. See above for exception handlers.
-
- - I{unknown_exception}
-
- Receives an L{ExceptionEvent} object and handles any exception unknown
- to the debugging engine. This usually happens for C++ exceptions, which
- are not standardized and may change from one compiler to the next.
-
- Currently we have partial support for C++ exceptions thrown by Microsoft
- compilers.
-
-        Also see: RaiseException()
-
- - I{create_thread}
-
- Receives a L{CreateThreadEvent} object.
-
- - I{create_process}
-
- Receives a L{CreateProcessEvent} object.
-
- - I{exit_thread}
-
- Receives a L{ExitThreadEvent} object.
-
- - I{exit_process}
-
- Receives a L{ExitProcessEvent} object.
-
- - I{load_dll}
-
- Receives a L{LoadDLLEvent} object.
-
- - I{unload_dll}
-
- Receives an L{UnloadDLLEvent} object.
-
- - I{output_string}
-
- Receives an L{OutputDebugStringEvent} object.
-
- - I{rip}
-
- Receives a L{RIPEvent} object.
-
- This is the list of all valid exception handler names
- (they all receive an L{ExceptionEvent} object):
-
- - I{access_violation}
- - I{array_bounds_exceeded}
- - I{breakpoint}
- - I{control_c_exit}
- - I{datatype_misalignment}
- - I{debug_control_c}
- - I{float_denormal_operand}
- - I{float_divide_by_zero}
- - I{float_inexact_result}
- - I{float_invalid_operation}
- - I{float_overflow}
- - I{float_stack_check}
- - I{float_underflow}
- - I{guard_page}
- - I{illegal_instruction}
- - I{in_page_error}
- - I{integer_divide_by_zero}
- - I{integer_overflow}
- - I{invalid_disposition}
- - I{invalid_handle}
- - I{ms_vc_exception}
- - I{noncontinuable_exception}
- - I{possible_deadlock}
- - I{privileged_instruction}
- - I{single_step}
- - I{stack_overflow}
- - I{wow64_breakpoint}
-
-
-
- @type apiHooks: dict( str S{->} list( tuple( str, int ) ) )
- @cvar apiHooks:
- Dictionary that maps module names to lists of
- tuples of ( procedure name, parameter count ).
-
- All procedures listed here will be hooked for calls from the debugee.
- When this happens, the corresponding event handler can be notified both
- when the procedure is entered and when it's left by the debugee.
-
- For example, let's hook the LoadLibraryEx() API call.
- This would be the declaration of apiHooks::
-
- from winappdbg import EventHandler
- from winappdbg.win32 import *
-
- # (...)
-
- class MyEventHandler (EventHandler):
-
-            apiHooks = {
-
- "kernel32.dll" : (
-
- # Procedure name Signature
- ( "LoadLibraryEx", (PVOID, HANDLE, DWORD) ),
-
- # (more procedures can go here...)
- ),
-
- # (more libraries can go here...)
- }
-
- # (your method definitions go here...)
-
- Note that all pointer types are treated like void pointers, so your
- callback won't get the string or structure pointed to by it, but the
-        remote memory address instead. This is done to prevent the ctypes library
- from being "too helpful" and trying to dereference the pointer. To get
- the actual data being pointed to, use one of the L{Process.read}
- methods.
-
- Now, to intercept calls to LoadLibraryEx define a method like this in
- your event handler class::
-
- def pre_LoadLibraryEx(self, event, ra, lpFilename, hFile, dwFlags):
- szFilename = event.get_process().peek_string(lpFilename)
-
- # (...)
-
- Note that the first parameter is always the L{Event} object, and the
- second parameter is the return address. The third parameter and above
- are the values passed to the hooked function.
-
- Finally, to intercept returns from calls to LoadLibraryEx define a
- method like this::
-
- def post_LoadLibraryEx(self, event, retval):
- # (...)
-
- The first parameter is the L{Event} object and the second is the
- return value from the hooked function.
- """
-
-#------------------------------------------------------------------------------
-
- # Default (empty) API hooks dictionary.
- apiHooks = {}
-
- def __init__(self):
- """
- Class constructor. Don't forget to call it when subclassing!
-
- Forgetting to call the superclass constructor is a common mistake when
- you're new to Python. :)
-
- Example::
- class MyEventHandler (EventHandler):
-
- # Override the constructor to use an extra argument.
- def __init__(self, myArgument):
-
- # Do something with the argument, like keeping it
- # as an instance variable.
- self.myVariable = myArgument
-
- # Call the superclass constructor.
- super(MyEventHandler, self).__init__()
-
- # The rest of your code below...
- """
-
- # TODO
- # All this does is set up the hooks.
- # This code should be moved to the EventDispatcher class.
- # Then the hooks can be set up at set_event_handler() instead, making
- # this class even simpler. The downside here is deciding where to store
- # the ApiHook objects.
-
- # Convert the tuples into instances of the ApiHook class.
- # A new dictionary must be instanced, otherwise we could also be
- # affecting all other instances of the EventHandler.
- apiHooks = dict()
- for lib, hooks in compat.iteritems(self.apiHooks):
- hook_objs = []
- for proc, args in hooks:
- if type(args) in (int, long):
- h = ApiHook(self, lib, proc, paramCount = args)
- else:
- h = ApiHook(self, lib, proc, signature = args)
- hook_objs.append(h)
- apiHooks[lib] = hook_objs
- self.__apiHooks = apiHooks
-
- def __get_hooks_for_dll(self, event):
- """
- Get the requested API hooks for the current DLL.
-
- Used by L{__hook_dll} and L{__unhook_dll}.
- """
- result = []
- if self.__apiHooks:
- path = event.get_module().get_filename()
- if path:
- lib_name = PathOperations.pathname_to_filename(path).lower()
- for hook_lib, hook_api_list in compat.iteritems(self.__apiHooks):
- if hook_lib == lib_name:
- result.extend(hook_api_list)
- return result
-
- def __hook_dll(self, event):
- """
- Hook the requested API calls (in self.apiHooks).
-
- This method is called automatically whenever a DLL is loaded.
- """
- debug = event.debug
- pid = event.get_pid()
- for hook_api_stub in self.__get_hooks_for_dll(event):
- hook_api_stub.hook(debug, pid)
-
- def __unhook_dll(self, event):
- """
- Unhook the requested API calls (in self.apiHooks).
-
- This method is called automatically whenever a DLL is unloaded.
- """
- debug = event.debug
- pid = event.get_pid()
- for hook_api_stub in self.__get_hooks_for_dll(event):
- hook_api_stub.unhook(debug, pid)
-
- def __call__(self, event):
- """
- Dispatch debug events.
-
- @warn: B{Don't override this method!}
-
- @type event: L{Event}
- @param event: Event object.
- """
- try:
- code = event.get_event_code()
- if code == win32.LOAD_DLL_DEBUG_EVENT:
- self.__hook_dll(event)
- elif code == win32.UNLOAD_DLL_DEBUG_EVENT:
- self.__unhook_dll(event)
- finally:
- method = EventDispatcher.get_handler_method(self, event)
- if method is not None:
- return method(event)
-
-#==============================================================================
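-
-# Illustrative sketch (not part of the original module): hooking
-# kernel32!CreateFileW through the apiHooks mechanism documented above,
-# using the parameter count form. The target executable is made up.
-def _example_hook_createfile():
-    from winappdbg import Debug
-
-    class _FileOpenLogger (EventHandler):
-
-        apiHooks = {
-            "kernel32.dll" : [
-                ( "CreateFileW", 7 ),   # procedure name, parameter count
-            ],
-        }
-
-        def pre_CreateFileW(self, event, ra, lpFileName, dwDesiredAccess,
-                            dwShareMode, lpSecurityAttributes,
-                            dwCreationDisposition, dwFlagsAndAttributes,
-                            hTemplateFile):
-            filename = event.get_process().peek_string(lpFileName,
-                                                       fUnicode = True)
-            print("CreateFileW(%r)" % filename)
-
-        def post_CreateFileW(self, event, retval):
-            print("CreateFileW returned %#x" % retval)
-
-    with Debug(_FileOpenLogger()) as debug:
-        debug.execl("notepad.exe")
-        debug.loop()
-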
-
-# TODO
-# * Make it more generic by adding a few more callbacks.
-# That way it will be possible to make a thread sifter too.
-# * This interface feels too much like an antipattern.
-# When apiHooks is deprecated this will have to be reviewed.
-
-class EventSift(EventHandler):
- """
- Event handler that allows you to use customized event handlers for each
- process you're attached to.
-
- This makes coding the event handlers much easier, because each instance
- will only "know" about one process. So you can code your event handler as
- if only one process was being debugged, but your debugger can attach to
- multiple processes.
-
- Example::
- from winappdbg import Debug, EventHandler, EventSift
-
- # This class was written assuming only one process is attached.
- # If you used it directly it would break when attaching to another
- # process, or when a child process is spawned.
- class MyEventHandler (EventHandler):
-
- def create_process(self, event):
- self.first = True
- self.name = event.get_process().get_filename()
- print "Attached to %s" % self.name
-
- def breakpoint(self, event):
- if self.first:
- self.first = False
- print "First breakpoint reached at %s" % self.name
-
- def exit_process(self, event):
- print "Detached from %s" % self.name
-
- # Now when debugging we use the EventSift to be able to work with
- # multiple processes while keeping our code simple. :)
- if __name__ == "__main__":
- handler = EventSift(MyEventHandler)
- #handler = MyEventHandler() # try uncommenting this line...
- with Debug(handler) as debug:
- debug.execl("calc.exe")
- debug.execl("notepad.exe")
- debug.execl("charmap.exe")
- debug.loop()
-
- Subclasses of C{EventSift} can prevent specific event types from
-    being forwarded by simply defining a method for them. That means your
- subclass can handle some event types globally while letting other types
-    be handled on a per-process basis. To forward events manually you can
- call C{self.event(event)}.
-
- Example::
- class MySift (EventSift):
-
- # Don't forward this event.
- def debug_control_c(self, event):
- pass
-
- # Handle this event globally without forwarding it.
- def output_string(self, event):
- print "Debug string: %s" % event.get_debug_string()
-
- # Handle this event globally and then forward it.
- def create_process(self, event):
- print "New process created, PID: %d" % event.get_pid()
- return self.event(event)
-
- # All other events will be forwarded.
-
- Note that overriding the C{event} method would cause no events to be
- forwarded at all. To prevent this, call the superclass implementation.
-
- Example::
-
- def we_want_to_forward_this_event(event):
- "Use whatever logic you want here..."
- # (...return True or False...)
-
- class MySift (EventSift):
-
- def event(self, event):
-
- # If the event matches some custom criteria...
- if we_want_to_forward_this_event(event):
-
- # Forward it.
- return super(MySift, self).event(event)
-
- # Otherwise, don't.
-
- @type cls: class
- @ivar cls:
- Event handler class. There will be one instance of this class
- per debugged process in the L{forward} dictionary.
-
- @type argv: list
- @ivar argv:
- Positional arguments to pass to the constructor of L{cls}.
-
-    @type argd: dict
- @ivar argd:
- Keyword arguments to pass to the constructor of L{cls}.
-
- @type forward: dict
- @ivar forward:
- Dictionary that maps each debugged process ID to an instance of L{cls}.
- """
-
- def __init__(self, cls, *argv, **argd):
- """
- Maintains an instance of your event handler for each process being
- debugged, and forwards the events of each process to each corresponding
- instance.
-
- @warn: If you subclass L{EventSift} and reimplement this method,
- don't forget to call the superclass constructor!
-
- @see: L{event}
-
- @type cls: class
- @param cls: Event handler class. This must be the class itself, not an
- instance! All additional arguments passed to the constructor of
- the event forwarder will be passed on to the constructor of this
- class as well.
- """
- self.cls = cls
- self.argv = argv
- self.argd = argd
- self.forward = dict()
- super(EventSift, self).__init__()
-
- # XXX HORRIBLE HACK
- # This makes apiHooks work in the inner handlers.
- def __call__(self, event):
- try:
- eventCode = event.get_event_code()
-            if eventCode in (win32.LOAD_DLL_DEBUG_EVENT,
-                             win32.UNLOAD_DLL_DEBUG_EVENT):
- pid = event.get_pid()
- handler = self.forward.get(pid, None)
- if handler is None:
- handler = self.cls(*self.argv, **self.argd)
- self.forward[pid] = handler
-                if isinstance(handler, EventHandler):
-                    # Call EventHandler's private hooking methods through
-                    # their name-mangled forms.
-                    if eventCode == win32.LOAD_DLL_DEBUG_EVENT:
-                        handler._EventHandler__hook_dll(event)
-                    else:
-                        handler._EventHandler__unhook_dll(event)
- finally:
- return super(EventSift, self).__call__(event)
-
- def event(self, event):
- """
- Forwards events to the corresponding instance of your event handler
- for this process.
-
- If you subclass L{EventSift} and reimplement this method, no event
- will be forwarded at all unless you call the superclass implementation.
-
- If your filtering is based on the event type, there's a much easier way
- to do it: just implement a handler for it.
- """
- eventCode = event.get_event_code()
- pid = event.get_pid()
- handler = self.forward.get(pid, None)
- if handler is None:
- handler = self.cls(*self.argv, **self.argd)
- if eventCode != win32.EXIT_PROCESS_DEBUG_EVENT:
- self.forward[pid] = handler
- elif eventCode == win32.EXIT_PROCESS_DEBUG_EVENT:
- del self.forward[pid]
- return handler(event)
-
-#==============================================================================
-
-class EventDispatcher (object):
- """
- Implements debug event dispatching capabilities.
-
- @group Debugging events:
- get_event_handler, set_event_handler, get_handler_method
- """
-
- # Maps event code constants to the names of the pre-notify routines.
- # These routines are called BEFORE the user-defined handlers.
- # Unknown codes are ignored.
- __preEventNotifyCallbackName = {
- win32.CREATE_THREAD_DEBUG_EVENT : '_notify_create_thread',
- win32.CREATE_PROCESS_DEBUG_EVENT : '_notify_create_process',
- win32.LOAD_DLL_DEBUG_EVENT : '_notify_load_dll',
- }
-
- # Maps event code constants to the names of the post-notify routines.
- # These routines are called AFTER the user-defined handlers.
- # Unknown codes are ignored.
- __postEventNotifyCallbackName = {
- win32.EXIT_THREAD_DEBUG_EVENT : '_notify_exit_thread',
- win32.EXIT_PROCESS_DEBUG_EVENT : '_notify_exit_process',
- win32.UNLOAD_DLL_DEBUG_EVENT : '_notify_unload_dll',
- win32.RIP_EVENT : '_notify_rip',
- }
-
- # Maps exception code constants to the names of the pre-notify routines.
- # These routines are called BEFORE the user-defined handlers.
- # Unknown codes are ignored.
- __preExceptionNotifyCallbackName = {
- win32.EXCEPTION_BREAKPOINT : '_notify_breakpoint',
- win32.EXCEPTION_WX86_BREAKPOINT : '_notify_breakpoint',
- win32.EXCEPTION_SINGLE_STEP : '_notify_single_step',
- win32.EXCEPTION_GUARD_PAGE : '_notify_guard_page',
- win32.DBG_CONTROL_C : '_notify_debug_control_c',
- win32.MS_VC_EXCEPTION : '_notify_ms_vc_exception',
- }
-
- # Maps exception code constants to the names of the post-notify routines.
- # These routines are called AFTER the user-defined handlers.
- # Unknown codes are ignored.
- __postExceptionNotifyCallbackName = {
- }
-
- def __init__(self, eventHandler = None):
- """
- Event dispatcher.
-
- @type eventHandler: L{EventHandler}
- @param eventHandler: (Optional) User-defined event handler.
-
- @raise TypeError: The event handler is of an incorrect type.
-
- @note: The L{eventHandler} parameter may be any callable Python object
- (for example a function, or an instance method).
-            However, you'll probably find it more convenient to use an instance
- of a subclass of L{EventHandler} here.
- """
- self.set_event_handler(eventHandler)
-
- def get_event_handler(self):
- """
- Get the event handler.
-
- @see: L{set_event_handler}
-
- @rtype: L{EventHandler}
- @return: Current event handler object, or C{None}.
- """
- return self.__eventHandler
-
- def set_event_handler(self, eventHandler):
- """
- Set the event handler.
-
- @warn: This is normally not needed. Use with care!
-
- @type eventHandler: L{EventHandler}
- @param eventHandler: New event handler object, or C{None}.
-
- @rtype: L{EventHandler}
- @return: Previous event handler object, or C{None}.
-
- @raise TypeError: The event handler is of an incorrect type.
-
- @note: The L{eventHandler} parameter may be any callable Python object
- (for example a function, or an instance method).
-            However, you'll probably find it more convenient to use an instance
- of a subclass of L{EventHandler} here.
- """
- if eventHandler is not None and not callable(eventHandler):
- raise TypeError("Event handler must be a callable object")
- try:
- wrong_type = issubclass(eventHandler, EventHandler)
- except TypeError:
- wrong_type = False
- if wrong_type:
- classname = str(eventHandler)
- msg = "Event handler must be an instance of class %s"
- msg += "rather than the %s class itself. (Missing parens?)"
- msg = msg % (classname, classname)
- raise TypeError(msg)
- try:
- previous = self.__eventHandler
- except AttributeError:
- previous = None
- self.__eventHandler = eventHandler
- return previous
-
- @staticmethod
- def get_handler_method(eventHandler, event, fallback=None):
- """
- Retrieves the appropriate callback method from an L{EventHandler}
- instance for the given L{Event} object.
-
- @type eventHandler: L{EventHandler}
- @param eventHandler:
- Event handler object whose methods we are examining.
-
- @type event: L{Event}
- @param event: Debugging event to be handled.
-
- @type fallback: callable
- @param fallback: (Optional) If no suitable method is found in the
- L{EventHandler} instance, return this value.
-
- @rtype: callable
- @return: Bound method that will handle the debugging event.
-            Returns the C{fallback} value if no such method is defined.
- """
- eventCode = event.get_event_code()
- method = getattr(eventHandler, 'event', fallback)
- if eventCode == win32.EXCEPTION_DEBUG_EVENT:
- method = getattr(eventHandler, 'exception', method)
- method = getattr(eventHandler, event.eventMethod, method)
- return method
-
- def dispatch(self, event):
- """
- Sends event notifications to the L{Debug} object and
- the L{EventHandler} object provided by the user.
-
-        The L{Debug} object will forward the notifications to its contained
- snapshot objects (L{System}, L{Process}, L{Thread} and L{Module}) when
- appropriate.
-
- @warning: This method is called automatically from L{Debug.dispatch}.
-
- @see: L{Debug.cont}, L{Debug.loop}, L{Debug.wait}
-
- @type event: L{Event}
- @param event: Event object passed to L{Debug.dispatch}.
-
- @raise WindowsError: Raises an exception on error.
- """
- returnValue = None
- bCallHandler = True
- pre_handler = None
- post_handler = None
- eventCode = event.get_event_code()
-
- # Get the pre and post notification methods for exceptions.
- # If not found, the following steps take care of that.
- if eventCode == win32.EXCEPTION_DEBUG_EVENT:
- exceptionCode = event.get_exception_code()
- pre_name = self.__preExceptionNotifyCallbackName.get(
- exceptionCode, None)
- post_name = self.__postExceptionNotifyCallbackName.get(
- exceptionCode, None)
- if pre_name is not None:
- pre_handler = getattr(self, pre_name, None)
- if post_name is not None:
- post_handler = getattr(self, post_name, None)
-
- # Get the pre notification method for all other events.
- # This includes the exception event if no notify method was found
- # for this exception code.
- if pre_handler is None:
- pre_name = self.__preEventNotifyCallbackName.get(eventCode, None)
- if pre_name is not None:
- pre_handler = getattr(self, pre_name, pre_handler)
-
- # Get the post notification method for all other events.
- # This includes the exception event if no notify method was found
- # for this exception code.
- if post_handler is None:
- post_name = self.__postEventNotifyCallbackName.get(eventCode, None)
- if post_name is not None:
- post_handler = getattr(self, post_name, post_handler)
-
- # Call the pre-notify method only if it was defined.
- # If an exception is raised don't call the other methods.
- if pre_handler is not None:
- bCallHandler = pre_handler(event)
-
- # Call the user-defined event handler only if the pre-notify
-        # method was not defined, or if it was defined and returned True.
- try:
- if bCallHandler and self.__eventHandler is not None:
- try:
- returnValue = self.__eventHandler(event)
- except Exception:
-                    msg = ("Event handler %r"
-                           " raised an exception: %s")
-                    msg = msg % (self.__eventHandler, traceback.format_exc())
- warnings.warn(msg, EventCallbackWarning)
- returnValue = None
-
- # Call the post-notify method if defined, even if an exception is
- # raised by the user-defined event handler.
- finally:
- if post_handler is not None:
- post_handler(event)
-
- # Return the value from the call to the user-defined event handler.
- # If not defined return None.
- return returnValue
diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/video/__init__.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/video/__init__.py
deleted file mode 100644
index 73199b01dec52820dc6ca0139903536344d5a1eb..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/video/__init__.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .io import Cache, VideoReader, frames2video
-from .optflow import (dequantize_flow, flow_from_bytes, flow_warp, flowread,
- flowwrite, quantize_flow, sparse_flow_from_bytes)
-from .processing import concat_video, convert_video, cut_video, resize_video
-
-__all__ = [
- 'Cache', 'VideoReader', 'frames2video', 'convert_video', 'resize_video',
- 'cut_video', 'concat_video', 'flowread', 'flowwrite', 'quantize_flow',
- 'dequantize_flow', 'flow_warp', 'flow_from_bytes', 'sparse_flow_from_bytes'
-]
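-
-
-# Illustrative sketch (not part of the original module): a typical round trip
-# through the helpers re-exported above. The file names are made up.
-def _example_video_roundtrip():
-    video = VideoReader('demo.mp4')
-    print(video.fps, len(video))        # frame rate and frame count
-    video.cvt2frames('frames/')         # dump every frame to a directory
-    frames2video('frames/', 'out.avi', fps=int(video.fps))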
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/distributions/__init__.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/distributions/__init__.py
deleted file mode 100644
index 9a89a838b9a5cb264e9ae9d269fbedca6e2d6333..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/distributions/__init__.py
+++ /dev/null
@@ -1,21 +0,0 @@
-from pip._internal.distributions.base import AbstractDistribution
-from pip._internal.distributions.sdist import SourceDistribution
-from pip._internal.distributions.wheel import WheelDistribution
-from pip._internal.req.req_install import InstallRequirement
-
-
-def make_distribution_for_install_requirement(
- install_req: InstallRequirement,
-) -> AbstractDistribution:
- """Returns a Distribution for the given InstallRequirement"""
- # Editable requirements will always be source distributions. They use the
- # legacy logic until we create a modern standard for them.
- if install_req.editable:
- return SourceDistribution(install_req)
-
- # If it's a wheel, it's a WheelDistribution
- if install_req.is_wheel:
- return WheelDistribution(install_req)
-
- # Otherwise, a SourceDistribution
- return SourceDistribution(install_req)
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/models/scheme.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/models/scheme.py
deleted file mode 100644
index f51190ac60354d90eb2aef4b04c484f8517275c2..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/models/scheme.py
+++ /dev/null
@@ -1,31 +0,0 @@
-"""
-For types associated with installation schemes.
-
-For a general overview of available schemes and their context, see
-https://docs.python.org/3/install/index.html#alternate-installation.
-"""
-
-
-SCHEME_KEYS = ["platlib", "purelib", "headers", "scripts", "data"]
-
-
-class Scheme:
- """A Scheme holds paths which are used as the base directories for
- artifacts associated with a Python package.
- """
-
- __slots__ = SCHEME_KEYS
-
- def __init__(
- self,
- platlib: str,
- purelib: str,
- headers: str,
- scripts: str,
- data: str,
- ) -> None:
- self.platlib = platlib
- self.purelib = purelib
- self.headers = headers
- self.scripts = scripts
- self.data = data
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/table.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/table.py
deleted file mode 100644
index 17409f2ee8df322a5ac115d1d0ff0c2d2aa11c4e..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/table.py
+++ /dev/null
@@ -1,1002 +0,0 @@
-from dataclasses import dataclass, field, replace
-from typing import (
- TYPE_CHECKING,
- Dict,
- Iterable,
- List,
- NamedTuple,
- Optional,
- Sequence,
- Tuple,
- Union,
-)
-
-from . import box, errors
-from ._loop import loop_first_last, loop_last
-from ._pick import pick_bool
-from ._ratio import ratio_distribute, ratio_reduce
-from .align import VerticalAlignMethod
-from .jupyter import JupyterMixin
-from .measure import Measurement
-from .padding import Padding, PaddingDimensions
-from .protocol import is_renderable
-from .segment import Segment
-from .style import Style, StyleType
-from .text import Text, TextType
-
-if TYPE_CHECKING:
- from .console import (
- Console,
- ConsoleOptions,
- JustifyMethod,
- OverflowMethod,
- RenderableType,
- RenderResult,
- )
-
-
-@dataclass
-class Column:
- """Defines a column within a ~Table.
-
- Args:
-        header (RenderableType, optional): Renderable for the header (typically a string). Defaults to "".
-        footer (RenderableType, optional): Renderable for the footer (typically a string). Defaults to "".
-        header_style (StyleType, optional): The style of the header. Defaults to "".
-        footer_style (StyleType, optional): The style of the footer. Defaults to "".
-        style (StyleType, optional): The style of the column. Defaults to "".
-        justify (JustifyMethod, optional): How to justify text within the column ("left", "center", "right", or "full"). Defaults to "left".
-        vertical (VerticalAlignMethod, optional): How to vertically align content ("top", "middle", or "bottom"). Defaults to "top".
-        overflow (OverflowMethod, optional): Overflow method. Defaults to "ellipsis".
-        width (Optional[int], optional): Width of the column, or ``None`` (default) to auto calculate width.
-        min_width (Optional[int], optional): Minimum width of column, or ``None`` for no minimum. Defaults to None.
-        max_width (Optional[int], optional): Maximum width of column, or ``None`` for no maximum. Defaults to None.
-        ratio (Optional[int], optional): Ratio to use when calculating column width, or ``None`` (default) to adapt to column contents.
-        no_wrap (bool, optional): Prevent wrapping of text within the column. Defaults to ``False``.
- """
-
- header: "RenderableType" = ""
- """RenderableType: Renderable for the header (typically a string)"""
-
- footer: "RenderableType" = ""
- """RenderableType: Renderable for the footer (typically a string)"""
-
- header_style: StyleType = ""
- """StyleType: The style of the header."""
-
- footer_style: StyleType = ""
- """StyleType: The style of the footer."""
-
- style: StyleType = ""
- """StyleType: The style of the column."""
-
- justify: "JustifyMethod" = "left"
- """str: How to justify text within the column ("left", "center", "right", or "full")"""
-
- vertical: "VerticalAlignMethod" = "top"
- """str: How to vertically align content ("top", "middle", or "bottom")"""
-
- overflow: "OverflowMethod" = "ellipsis"
- """str: Overflow method."""
-
- width: Optional[int] = None
- """Optional[int]: Width of the column, or ``None`` (default) to auto calculate width."""
-
- min_width: Optional[int] = None
- """Optional[int]: Minimum width of column, or ``None`` for no minimum. Defaults to None."""
-
- max_width: Optional[int] = None
- """Optional[int]: Maximum width of column, or ``None`` for no maximum. Defaults to None."""
-
- ratio: Optional[int] = None
- """Optional[int]: Ratio to use when calculating column width, or ``None`` (default) to adapt to column contents."""
-
- no_wrap: bool = False
- """bool: Prevent wrapping of text within the column. Defaults to ``False``."""
-
- _index: int = 0
- """Index of column."""
-
- _cells: List["RenderableType"] = field(default_factory=list)
-
- def copy(self) -> "Column":
- """Return a copy of this Column."""
- return replace(self, _cells=[])
-
- @property
- def cells(self) -> Iterable["RenderableType"]:
- """Get all cells in the column, not including header."""
- yield from self._cells
-
- @property
- def flexible(self) -> bool:
- """Check if this column is flexible."""
- return self.ratio is not None
-
-
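-# Illustrative sketch (not part of the original module): Column options are
-# usually set through Table.add_column(), defined further below. The data
-# and style values are made up.
-def _example_table() -> None:
-    from .console import Console  # imported lazily, only needed for output
-
-    table = Table(title="Processes")
-    table.add_column("PID", justify="right", no_wrap=True)
-    table.add_column("Name", style="cyan", overflow="fold")
-    table.add_row("4242", "notepad.exe")
-    table.add_row("1337", "calc.exe")
-    Console().print(table)
-
-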
-@dataclass
-class Row:
- """Information regarding a row."""
-
- style: Optional[StyleType] = None
- """Style to apply to row."""
-
- end_section: bool = False
- """Indicated end of section, which will force a line beneath the row."""
-
-
-class _Cell(NamedTuple):
- """A single cell in a table."""
-
- style: StyleType
- """Style to apply to cell."""
- renderable: "RenderableType"
- """Cell renderable."""
- vertical: VerticalAlignMethod
- """Cell vertical alignment."""
-
-
-class Table(JupyterMixin):
- """A console renderable to draw a table.
-
- Args:
- *headers (Union[Column, str]): Column headers, either as a string, or :class:`~rich.table.Column` instance.
- title (Union[str, Text], optional): The title of the table rendered at the top. Defaults to None.
- caption (Union[str, Text], optional): The table caption rendered below. Defaults to None.
- width (int, optional): The width in characters of the table, or ``None`` to automatically fit. Defaults to None.
- min_width (Optional[int], optional): The minimum width of the table, or ``None`` for no minimum. Defaults to None.
- box (box.Box, optional): One of the constants in box.py used to draw the edges (see :ref:`appendix_box`), or ``None`` for no box lines. Defaults to box.HEAVY_HEAD.
- safe_box (Optional[bool], optional): Disable box characters that don't display on windows legacy terminal with *raster* fonts. Defaults to True.
- padding (PaddingDimensions, optional): Padding for cells (top, right, bottom, left). Defaults to (0, 1).
- collapse_padding (bool, optional): Enable collapsing of padding around cells. Defaults to False.
- pad_edge (bool, optional): Enable padding of edge cells. Defaults to True.
- expand (bool, optional): Expand the table to fit the available space if ``True``, otherwise the table width will be auto-calculated. Defaults to False.
- show_header (bool, optional): Show a header row. Defaults to True.
- show_footer (bool, optional): Show a footer row. Defaults to False.
- show_edge (bool, optional): Draw a box around the outside of the table. Defaults to True.
- show_lines (bool, optional): Draw lines between every row. Defaults to False.
-        leading (int, optional): Number of blank lines between rows (precludes ``show_lines``). Defaults to 0.
- style (Union[str, Style], optional): Default style for the table. Defaults to "none".
-        row_styles (List[Union[str, Style]], optional): Optional list of row styles; if more than one style is given, the styles will alternate. Defaults to None.
- header_style (Union[str, Style], optional): Style of the header. Defaults to "table.header".
- footer_style (Union[str, Style], optional): Style of the footer. Defaults to "table.footer".
- border_style (Union[str, Style], optional): Style of the border. Defaults to None.
- title_style (Union[str, Style], optional): Style of the title. Defaults to None.
- caption_style (Union[str, Style], optional): Style of the caption. Defaults to None.
- title_justify (str, optional): Justify method for title. Defaults to "center".
- caption_justify (str, optional): Justify method for caption. Defaults to "center".
- highlight (bool, optional): Highlight cell contents (if str). Defaults to False.
- """
-
- columns: List[Column]
- rows: List[Row]
-
- def __init__(
- self,
- *headers: Union[Column, str],
- title: Optional[TextType] = None,
- caption: Optional[TextType] = None,
- width: Optional[int] = None,
- min_width: Optional[int] = None,
- box: Optional[box.Box] = box.HEAVY_HEAD,
- safe_box: Optional[bool] = None,
- padding: PaddingDimensions = (0, 1),
- collapse_padding: bool = False,
- pad_edge: bool = True,
- expand: bool = False,
- show_header: bool = True,
- show_footer: bool = False,
- show_edge: bool = True,
- show_lines: bool = False,
- leading: int = 0,
- style: StyleType = "none",
- row_styles: Optional[Iterable[StyleType]] = None,
- header_style: Optional[StyleType] = "table.header",
- footer_style: Optional[StyleType] = "table.footer",
- border_style: Optional[StyleType] = None,
- title_style: Optional[StyleType] = None,
- caption_style: Optional[StyleType] = None,
- title_justify: "JustifyMethod" = "center",
- caption_justify: "JustifyMethod" = "center",
- highlight: bool = False,
- ) -> None:
-
- self.columns: List[Column] = []
- self.rows: List[Row] = []
- self.title = title
- self.caption = caption
- self.width = width
- self.min_width = min_width
- self.box = box
- self.safe_box = safe_box
- self._padding = Padding.unpack(padding)
- self.pad_edge = pad_edge
- self._expand = expand
- self.show_header = show_header
- self.show_footer = show_footer
- self.show_edge = show_edge
- self.show_lines = show_lines
- self.leading = leading
- self.collapse_padding = collapse_padding
- self.style = style
- self.header_style = header_style or ""
- self.footer_style = footer_style or ""
- self.border_style = border_style
- self.title_style = title_style
- self.caption_style = caption_style
- self.title_justify: "JustifyMethod" = title_justify
- self.caption_justify: "JustifyMethod" = caption_justify
- self.highlight = highlight
- self.row_styles: Sequence[StyleType] = list(row_styles or [])
- append_column = self.columns.append
- for header in headers:
- if isinstance(header, str):
- self.add_column(header=header)
- else:
- header._index = len(self.columns)
- append_column(header)
-
- @classmethod
- def grid(
- cls,
- *headers: Union[Column, str],
- padding: PaddingDimensions = 0,
- collapse_padding: bool = True,
- pad_edge: bool = False,
- expand: bool = False,
- ) -> "Table":
- """Get a table with no lines, headers, or footer.
-
- Args:
- *headers (Union[Column, str]): Column headers, either as a string, or :class:`~rich.table.Column` instance.
- padding (PaddingDimensions, optional): Get padding around cells. Defaults to 0.
- collapse_padding (bool, optional): Enable collapsing of padding around cells. Defaults to True.
- pad_edge (bool, optional): Enable padding around edges of table. Defaults to False.
- expand (bool, optional): Expand the table to fit the available space if ``True``, otherwise the table width will be auto-calculated. Defaults to False.
-
- Returns:
- Table: A table instance.
- """
- return cls(
- *headers,
- box=None,
- padding=padding,
- collapse_padding=collapse_padding,
- show_header=False,
- show_footer=False,
- show_edge=False,
- pad_edge=pad_edge,
- expand=expand,
- )
-
- @property
- def expand(self) -> bool:
- """Setting a non-None self.width implies expand."""
- return self._expand or self.width is not None
-
- @expand.setter
- def expand(self, expand: bool) -> None:
- """Set expand."""
- self._expand = expand
-
- @property
- def _extra_width(self) -> int:
- """Get extra width to add to cell content."""
- width = 0
- if self.box and self.show_edge:
- width += 2
- if self.box:
- width += len(self.columns) - 1
- return width
-
- @property
- def row_count(self) -> int:
- """Get the current number of rows."""
- return len(self.rows)
-
- def get_row_style(self, console: "Console", index: int) -> StyleType:
- """Get the current row style."""
- style = Style.null()
- if self.row_styles:
- style += console.get_style(self.row_styles[index % len(self.row_styles)])
- row_style = self.rows[index].style
- if row_style is not None:
- style += console.get_style(row_style)
- return style
-
- def __rich_measure__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> Measurement:
- max_width = options.max_width
- if self.width is not None:
- max_width = self.width
- if max_width < 0:
- return Measurement(0, 0)
-
- extra_width = self._extra_width
- max_width = sum(
- self._calculate_column_widths(
- console, options.update_width(max_width - extra_width)
- )
- )
- _measure_column = self._measure_column
-
- measurements = [
- _measure_column(console, options.update_width(max_width), column)
- for column in self.columns
- ]
- minimum_width = (
- sum(measurement.minimum for measurement in measurements) + extra_width
- )
- maximum_width = (
- sum(measurement.maximum for measurement in measurements) + extra_width
- if (self.width is None)
- else self.width
- )
- measurement = Measurement(minimum_width, maximum_width)
- measurement = measurement.clamp(self.min_width)
- return measurement
-
- @property
- def padding(self) -> Tuple[int, int, int, int]:
- """Get cell padding."""
- return self._padding
-
- @padding.setter
- def padding(self, padding: PaddingDimensions) -> "Table":
- """Set cell padding."""
- self._padding = Padding.unpack(padding)
- return self
-
- def add_column(
- self,
- header: "RenderableType" = "",
- footer: "RenderableType" = "",
- *,
- header_style: Optional[StyleType] = None,
- footer_style: Optional[StyleType] = None,
- style: Optional[StyleType] = None,
- justify: "JustifyMethod" = "left",
- vertical: "VerticalAlignMethod" = "top",
- overflow: "OverflowMethod" = "ellipsis",
- width: Optional[int] = None,
- min_width: Optional[int] = None,
- max_width: Optional[int] = None,
- ratio: Optional[int] = None,
- no_wrap: bool = False,
- ) -> None:
- """Add a column to the table.
-
- Args:
- header (RenderableType, optional): Text or renderable for the header.
- Defaults to "".
- footer (RenderableType, optional): Text or renderable for the footer.
- Defaults to "".
- header_style (Union[str, Style], optional): Style for the header, or None for default. Defaults to None.
- footer_style (Union[str, Style], optional): Style for the footer, or None for default. Defaults to None.
- style (Union[str, Style], optional): Style for the column cells, or None for default. Defaults to None.
- justify (JustifyMethod, optional): Alignment for cells. Defaults to "left".
- vertical (VerticalAlignMethod, optional): Vertical alignment, one of "top", "middle", or "bottom". Defaults to "top".
- overflow (OverflowMethod): Overflow method: "crop", "fold", "ellipsis". Defaults to "ellipsis".
- width (int, optional): Desired width of column in characters, or None to fit to contents. Defaults to None.
- min_width (Optional[int], optional): Minimum width of column, or ``None`` for no minimum. Defaults to None.
- max_width (Optional[int], optional): Maximum width of column, or ``None`` for no maximum. Defaults to None.
- ratio (int, optional): Flexible ratio for the column (requires ``Table.expand`` or ``Table.width``). Defaults to None.
- no_wrap (bool, optional): Set to ``True`` to disable wrapping of this column.
- """
-
- column = Column(
- _index=len(self.columns),
- header=header,
- footer=footer,
- header_style=header_style or "",
- footer_style=footer_style or "",
- style=style or "",
- justify=justify,
- vertical=vertical,
- overflow=overflow,
- width=width,
- min_width=min_width,
- max_width=max_width,
- ratio=ratio,
- no_wrap=no_wrap,
- )
- self.columns.append(column)
-
- def add_row(
- self,
- *renderables: Optional["RenderableType"],
- style: Optional[StyleType] = None,
- end_section: bool = False,
- ) -> None:
- """Add a row of renderables.
-
- Args:
- *renderables (None or renderable): Each cell in a row must be a renderable object (including str),
- or ``None`` for a blank cell.
- style (StyleType, optional): An optional style to apply to the entire row. Defaults to None.
- end_section (bool, optional): End a section and draw a line. Defaults to False.
-
- Raises:
- errors.NotRenderableError: If you add something that can't be rendered.
- """
-
- def add_cell(column: Column, renderable: "RenderableType") -> None:
- column._cells.append(renderable)
-
- cell_renderables: List[Optional["RenderableType"]] = list(renderables)
-
- columns = self.columns
- if len(cell_renderables) < len(columns):
- cell_renderables = [
- *cell_renderables,
- *[None] * (len(columns) - len(cell_renderables)),
- ]
- for index, renderable in enumerate(cell_renderables):
- if index == len(columns):
- column = Column(_index=index)
- for _ in self.rows:
- add_cell(column, Text(""))
- self.columns.append(column)
- else:
- column = columns[index]
- if renderable is None:
- add_cell(column, "")
- elif is_renderable(renderable):
- add_cell(column, renderable)
- else:
- raise errors.NotRenderableError(
- f"unable to render {type(renderable).__name__}; a string or other renderable object is required"
- )
- self.rows.append(Row(style=style, end_section=end_section))
-
- def add_section(self) -> None:
- """Add a new section (draw a line after current row)."""
-
- if self.rows:
- self.rows[-1].end_section = True
-
- def __rich_console__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> "RenderResult":
-
- if not self.columns:
- yield Segment("\n")
- return
-
- max_width = options.max_width
- if self.width is not None:
- max_width = self.width
-
- extra_width = self._extra_width
- widths = self._calculate_column_widths(
- console, options.update_width(max_width - extra_width)
- )
- table_width = sum(widths) + extra_width
-
- render_options = options.update(
- width=table_width, highlight=self.highlight, height=None
- )
-
- def render_annotation(
- text: TextType, style: StyleType, justify: "JustifyMethod" = "center"
- ) -> "RenderResult":
- render_text = (
- console.render_str(text, style=style, highlight=False)
- if isinstance(text, str)
- else text
- )
- return console.render(
- render_text, options=render_options.update(justify=justify)
- )
-
- if self.title:
- yield from render_annotation(
- self.title,
- style=Style.pick_first(self.title_style, "table.title"),
- justify=self.title_justify,
- )
- yield from self._render(console, render_options, widths)
- if self.caption:
- yield from render_annotation(
- self.caption,
- style=Style.pick_first(self.caption_style, "table.caption"),
- justify=self.caption_justify,
- )
-
- def _calculate_column_widths(
- self, console: "Console", options: "ConsoleOptions"
- ) -> List[int]:
- """Calculate the widths of each column, including padding, not including borders."""
- max_width = options.max_width
- columns = self.columns
- width_ranges = [
- self._measure_column(console, options, column) for column in columns
- ]
- widths = [_range.maximum or 1 for _range in width_ranges]
- get_padding_width = self._get_padding_width
- extra_width = self._extra_width
- if self.expand:
- ratios = [col.ratio or 0 for col in columns if col.flexible]
- if any(ratios):
- fixed_widths = [
- 0 if column.flexible else _range.maximum
- for _range, column in zip(width_ranges, columns)
- ]
- flex_minimum = [
- (column.width or 1) + get_padding_width(column._index)
- for column in columns
- if column.flexible
- ]
- flexible_width = max_width - sum(fixed_widths)
- flex_widths = ratio_distribute(flexible_width, ratios, flex_minimum)
- iter_flex_widths = iter(flex_widths)
- for index, column in enumerate(columns):
- if column.flexible:
- widths[index] = fixed_widths[index] + next(iter_flex_widths)
- table_width = sum(widths)
-
- if table_width > max_width:
- widths = self._collapse_widths(
- widths,
- [(column.width is None and not column.no_wrap) for column in columns],
- max_width,
- )
- table_width = sum(widths)
- # last resort, reduce columns evenly
- if table_width > max_width:
- excess_width = table_width - max_width
- widths = ratio_reduce(excess_width, [1] * len(widths), widths, widths)
- table_width = sum(widths)
-
- width_ranges = [
- self._measure_column(console, options.update_width(width), column)
- for width, column in zip(widths, columns)
- ]
- widths = [_range.maximum or 0 for _range in width_ranges]
-
- if (table_width < max_width and self.expand) or (
- self.min_width is not None and table_width < (self.min_width - extra_width)
- ):
- _max_width = (
- max_width
- if self.min_width is None
- else min(self.min_width - extra_width, max_width)
- )
- pad_widths = ratio_distribute(_max_width - table_width, widths)
- widths = [_width + pad for _width, pad in zip(widths, pad_widths)]
-
- return widths
-
- @classmethod
- def _collapse_widths(
- cls, widths: List[int], wrapable: List[bool], max_width: int
- ) -> List[int]:
- """Reduce widths so that the total is under max_width.
-
- Args:
- widths (List[int]): List of widths.
- wrapable (List[bool]): List of booleans that indicate if a column may shrink.
- max_width (int): Maximum width to reduce to.
-
- Returns:
- List[int]: A new list of widths.
- """
- total_width = sum(widths)
- excess_width = total_width - max_width
- if any(wrapable):
- while total_width and excess_width > 0:
- max_column = max(
- width for width, allow_wrap in zip(widths, wrapable) if allow_wrap
- )
- second_max_column = max(
- width if allow_wrap and width != max_column else 0
- for width, allow_wrap in zip(widths, wrapable)
- )
- column_difference = max_column - second_max_column
- ratios = [
- (1 if (width == max_column and allow_wrap) else 0)
- for width, allow_wrap in zip(widths, wrapable)
- ]
- if not any(ratios) or not column_difference:
- break
- max_reduce = [min(excess_width, column_difference)] * len(widths)
- widths = ratio_reduce(excess_width, ratios, max_reduce, widths)
-
- total_width = sum(widths)
- excess_width = total_width - max_width
- return widths
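-
- # Editor's sketch (not in the original source): with widths [10, 20, 30], all columns
- # wrapable and max_width 45, the first pass shrinks the widest column (30) toward the
- # second widest (20); the remaining excess is then split between the two 20-wide
- # columns, e.g. (exact values depend on ratio_reduce rounding):
- #
- #     Table._collapse_widths([10, 20, 30], [True, True, True], 45)  # -> roughly [10, 18, 17]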
-
- def _get_cells(
- self, console: "Console", column_index: int, column: Column
- ) -> Iterable[_Cell]:
- """Get all the cells with padding and optional header."""
-
- collapse_padding = self.collapse_padding
- pad_edge = self.pad_edge
- padding = self.padding
- any_padding = any(padding)
-
- first_column = column_index == 0
- last_column = column_index == len(self.columns) - 1
-
- _padding_cache: Dict[Tuple[bool, bool], Tuple[int, int, int, int]] = {}
-
- def get_padding(first_row: bool, last_row: bool) -> Tuple[int, int, int, int]:
- cached = _padding_cache.get((first_row, last_row))
- if cached:
- return cached
- top, right, bottom, left = padding
-
- if collapse_padding:
- if not first_column:
- left = max(0, left - right)
- if not last_row:
- bottom = max(0, top - bottom)
-
- if not pad_edge:
- if first_column:
- left = 0
- if last_column:
- right = 0
- if first_row:
- top = 0
- if last_row:
- bottom = 0
- _padding = (top, right, bottom, left)
- _padding_cache[(first_row, last_row)] = _padding
- return _padding
-
- raw_cells: List[Tuple[StyleType, "RenderableType"]] = []
- _append = raw_cells.append
- get_style = console.get_style
- if self.show_header:
- header_style = get_style(self.header_style or "") + get_style(
- column.header_style
- )
- _append((header_style, column.header))
- cell_style = get_style(column.style or "")
- for cell in column.cells:
- _append((cell_style, cell))
- if self.show_footer:
- footer_style = get_style(self.footer_style or "") + get_style(
- column.footer_style
- )
- _append((footer_style, column.footer))
-
- if any_padding:
- _Padding = Padding
- for first, last, (style, renderable) in loop_first_last(raw_cells):
- yield _Cell(
- style,
- _Padding(renderable, get_padding(first, last)),
- getattr(renderable, "vertical", None) or column.vertical,
- )
- else:
- for (style, renderable) in raw_cells:
- yield _Cell(
- style,
- renderable,
- getattr(renderable, "vertical", None) or column.vertical,
- )
-
- def _get_padding_width(self, column_index: int) -> int:
- """Get extra width from padding."""
- _, pad_right, _, pad_left = self.padding
- if self.collapse_padding:
- if column_index > 0:
- pad_left = max(0, pad_left - pad_right)
- return pad_left + pad_right
-
- def _measure_column(
- self,
- console: "Console",
- options: "ConsoleOptions",
- column: Column,
- ) -> Measurement:
- """Get the minimum and maximum width of the column."""
-
- max_width = options.max_width
- if max_width < 1:
- return Measurement(0, 0)
-
- padding_width = self._get_padding_width(column._index)
-
- if column.width is not None:
- # Fixed width column
- return Measurement(
- column.width + padding_width, column.width + padding_width
- ).with_maximum(max_width)
- # Flexible column, we need to measure contents
- min_widths: List[int] = []
- max_widths: List[int] = []
- append_min = min_widths.append
- append_max = max_widths.append
- get_render_width = Measurement.get
- for cell in self._get_cells(console, column._index, column):
- _min, _max = get_render_width(console, options, cell.renderable)
- append_min(_min)
- append_max(_max)
-
- measurement = Measurement(
- max(min_widths) if min_widths else 1,
- max(max_widths) if max_widths else max_width,
- ).with_maximum(max_width)
- measurement = measurement.clamp(
- None if column.min_width is None else column.min_width + padding_width,
- None if column.max_width is None else column.max_width + padding_width,
- )
- return measurement
-
- def _render(
- self, console: "Console", options: "ConsoleOptions", widths: List[int]
- ) -> "RenderResult":
- table_style = console.get_style(self.style or "")
-
- border_style = table_style + console.get_style(self.border_style or "")
- _column_cells = (
- self._get_cells(console, column_index, column)
- for column_index, column in enumerate(self.columns)
- )
- row_cells: List[Tuple[_Cell, ...]] = list(zip(*_column_cells))
- _box = (
- self.box.substitute(
- options, safe=pick_bool(self.safe_box, console.safe_box)
- )
- if self.box
- else None
- )
- _box = _box.get_plain_headed_box() if _box and not self.show_header else _box
-
- new_line = Segment.line()
-
- columns = self.columns
- show_header = self.show_header
- show_footer = self.show_footer
- show_edge = self.show_edge
- show_lines = self.show_lines
- leading = self.leading
-
- _Segment = Segment
- if _box:
- box_segments = [
- (
- _Segment(_box.head_left, border_style),
- _Segment(_box.head_right, border_style),
- _Segment(_box.head_vertical, border_style),
- ),
- (
- _Segment(_box.foot_left, border_style),
- _Segment(_box.foot_right, border_style),
- _Segment(_box.foot_vertical, border_style),
- ),
- (
- _Segment(_box.mid_left, border_style),
- _Segment(_box.mid_right, border_style),
- _Segment(_box.mid_vertical, border_style),
- ),
- ]
- if show_edge:
- yield _Segment(_box.get_top(widths), border_style)
- yield new_line
- else:
- box_segments = []
-
- get_row_style = self.get_row_style
- get_style = console.get_style
-
- for index, (first, last, row_cell) in enumerate(loop_first_last(row_cells)):
- header_row = first and show_header
- footer_row = last and show_footer
- row = (
- self.rows[index - show_header]
- if (not header_row and not footer_row)
- else None
- )
- max_height = 1
- cells: List[List[List[Segment]]] = []
- if header_row or footer_row:
- row_style = Style.null()
- else:
- row_style = get_style(
- get_row_style(console, index - 1 if show_header else index)
- )
- for width, cell, column in zip(widths, row_cell, columns):
- render_options = options.update(
- width=width,
- justify=column.justify,
- no_wrap=column.no_wrap,
- overflow=column.overflow,
- height=None,
- )
- lines = console.render_lines(
- cell.renderable,
- render_options,
- style=get_style(cell.style) + row_style,
- )
- max_height = max(max_height, len(lines))
- cells.append(lines)
-
- row_height = max(len(cell) for cell in cells)
-
- def align_cell(
- cell: List[List[Segment]],
- vertical: "VerticalAlignMethod",
- width: int,
- style: Style,
- ) -> List[List[Segment]]:
- if header_row:
- vertical = "bottom"
- elif footer_row:
- vertical = "top"
-
- if vertical == "top":
- return _Segment.align_top(cell, width, row_height, style)
- elif vertical == "middle":
- return _Segment.align_middle(cell, width, row_height, style)
- return _Segment.align_bottom(cell, width, row_height, style)
-
- cells[:] = [
- _Segment.set_shape(
- align_cell(
- cell,
- _cell.vertical,
- width,
- get_style(_cell.style) + row_style,
- ),
- width,
- max_height,
- )
- for width, _cell, cell, column in zip(widths, row_cell, cells, columns)
- ]
-
- if _box:
- if last and show_footer:
- yield _Segment(
- _box.get_row(widths, "foot", edge=show_edge), border_style
- )
- yield new_line
- left, right, _divider = box_segments[0 if first else (2 if last else 1)]
-
- # If the column divider is whitespace also style it with the row background
- divider = (
- _divider
- if _divider.text.strip()
- else _Segment(
- _divider.text, row_style.background_style + _divider.style
- )
- )
- for line_no in range(max_height):
- if show_edge:
- yield left
- for last_cell, rendered_cell in loop_last(cells):
- yield from rendered_cell[line_no]
- if not last_cell:
- yield divider
- if show_edge:
- yield right
- yield new_line
- else:
- for line_no in range(max_height):
- for rendered_cell in cells:
- yield from rendered_cell[line_no]
- yield new_line
- if _box and first and show_header:
- yield _Segment(
- _box.get_row(widths, "head", edge=show_edge), border_style
- )
- yield new_line
- end_section = row and row.end_section
- if _box and (show_lines or leading or end_section):
- if (
- not last
- and not (show_footer and index >= len(row_cells) - 2)
- and not (show_header and header_row)
- ):
- if leading:
- yield _Segment(
- _box.get_row(widths, "mid", edge=show_edge) * leading,
- border_style,
- )
- else:
- yield _Segment(
- _box.get_row(widths, "row", edge=show_edge), border_style
- )
- yield new_line
-
- if _box and show_edge:
- yield _Segment(_box.get_bottom(widths), border_style)
- yield new_line
-
-
-if __name__ == "__main__": # pragma: no cover
- from pip._vendor.rich.console import Console
- from pip._vendor.rich.highlighter import ReprHighlighter
- from pip._vendor.rich.table import Table as Table
-
- from ._timer import timer
-
- with timer("Table render"):
- table = Table(
- title="Star Wars Movies",
- caption="Rich example table",
- caption_justify="right",
- )
-
- table.add_column(
- "Released", header_style="bright_cyan", style="cyan", no_wrap=True
- )
- table.add_column("Title", style="magenta")
- table.add_column("Box Office", justify="right", style="green")
-
- table.add_row(
- "Dec 20, 2019",
- "Star Wars: The Rise of Skywalker",
- "$952,110,690",
- )
- table.add_row("May 25, 2018", "Solo: A Star Wars Story", "$393,151,347")
- table.add_row(
- "Dec 15, 2017",
- "Star Wars Ep. V111: The Last Jedi",
- "$1,332,539,889",
- style="on black",
- end_section=True,
- )
- table.add_row(
- "Dec 16, 2016",
- "Rogue One: A Star Wars Story",
- "$1,332,439,889",
- )
-
- def header(text: str) -> None:
- console.print()
- console.rule(highlight(text))
- console.print()
-
- console = Console()
- highlight = ReprHighlighter()
- header("Example Table")
- console.print(table, justify="center")
-
- table.expand = True
- header("expand=True")
- console.print(table)
-
- table.width = 50
- header("width=50")
-
- console.print(table, justify="center")
-
- table.width = None
- table.expand = False
- table.row_styles = ["dim", "none"]
- header("row_styles=['dim', 'none']")
-
- console.print(table, justify="center")
-
- table.width = None
- table.expand = False
- table.row_styles = ["dim", "none"]
- table.leading = 1
- header("leading=1, row_styles=['dim', 'none']")
- console.print(table, justify="center")
-
- table.width = None
- table.expand = False
- table.row_styles = ["dim", "none"]
- table.show_lines = True
- table.leading = 0
- header("show_lines=True, row_styles=['dim', 'none']")
- console.print(table, justify="center")
diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/evaluation/panoptic_evaluation.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/evaluation/panoptic_evaluation.py
deleted file mode 100644
index 9fb3462b7f9abf6feaa499976bfed526ebd17e31..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/evaluation/panoptic_evaluation.py
+++ /dev/null
@@ -1,199 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import contextlib
-import io
-import itertools
-import json
-import logging
-import numpy as np
-import os
-import tempfile
-from collections import OrderedDict
-from typing import Optional
-from PIL import Image
-from tabulate import tabulate
-
-from detectron2.data import MetadataCatalog
-from detectron2.utils import comm
-from detectron2.utils.file_io import PathManager
-
-from .evaluator import DatasetEvaluator
-
-logger = logging.getLogger(__name__)
-
-
-class COCOPanopticEvaluator(DatasetEvaluator):
- """
- Evaluate Panoptic Quality metrics on COCO using PanopticAPI.
- It saves panoptic segmentation predictions in `output_dir`.
-
- It contains a synchronize call and has to be called from all workers.
- """
-
- def __init__(self, dataset_name: str, output_dir: Optional[str] = None):
- """
- Args:
- dataset_name: name of the dataset
- output_dir: output directory to save results for evaluation.
- """
- self._metadata = MetadataCatalog.get(dataset_name)
- self._thing_contiguous_id_to_dataset_id = {
- v: k for k, v in self._metadata.thing_dataset_id_to_contiguous_id.items()
- }
- self._stuff_contiguous_id_to_dataset_id = {
- v: k for k, v in self._metadata.stuff_dataset_id_to_contiguous_id.items()
- }
-
- self._output_dir = output_dir
- if self._output_dir is not None:
- PathManager.mkdirs(self._output_dir)
-
- def reset(self):
- self._predictions = []
-
- def _convert_category_id(self, segment_info):
- isthing = segment_info.pop("isthing", None)
- if isthing is None:
- # the model produces panoptic category id directly. No more conversion needed
- return segment_info
- if isthing is True:
- segment_info["category_id"] = self._thing_contiguous_id_to_dataset_id[
- segment_info["category_id"]
- ]
- else:
- segment_info["category_id"] = self._stuff_contiguous_id_to_dataset_id[
- segment_info["category_id"]
- ]
- return segment_info
-
- def process(self, inputs, outputs):
- from panopticapi.utils import id2rgb
-
- for input, output in zip(inputs, outputs):
- panoptic_img, segments_info = output["panoptic_seg"]
- panoptic_img = panoptic_img.cpu().numpy()
- if segments_info is None:
- # If "segments_info" is None, we assume "panoptic_img" is a
- # H*W int32 image storing the panoptic_id in the format of
- # category_id * label_divisor + instance_id. We reserve -1 for
- # VOID label, and add 1 to panoptic_img since the official
- # evaluation script uses 0 for VOID label.
- label_divisor = self._metadata.label_divisor
- segments_info = []
- for panoptic_label in np.unique(panoptic_img):
- if panoptic_label == -1:
- # VOID region.
- continue
- pred_class = panoptic_label // label_divisor
- isthing = (
- pred_class in self._metadata.thing_dataset_id_to_contiguous_id.values()
- )
- segments_info.append(
- {
- "id": int(panoptic_label) + 1,
- "category_id": int(pred_class),
- "isthing": bool(isthing),
- }
- )
- # Official evaluation script uses 0 for VOID label.
- panoptic_img += 1
-
- file_name = os.path.basename(input["file_name"])
- file_name_png = os.path.splitext(file_name)[0] + ".png"
- with io.BytesIO() as out:
- Image.fromarray(id2rgb(panoptic_img)).save(out, format="PNG")
- segments_info = [self._convert_category_id(x) for x in segments_info]
- self._predictions.append(
- {
- "image_id": input["image_id"],
- "file_name": file_name_png,
- "png_string": out.getvalue(),
- "segments_info": segments_info,
- }
- )
-
- def evaluate(self):
- comm.synchronize()
-
- self._predictions = comm.gather(self._predictions)
- self._predictions = list(itertools.chain(*self._predictions))
- if not comm.is_main_process():
- return
-
- # PanopticApi requires local files
- gt_json = PathManager.get_local_path(self._metadata.panoptic_json)
- gt_folder = PathManager.get_local_path(self._metadata.panoptic_root)
-
- with tempfile.TemporaryDirectory(prefix="panoptic_eval") as pred_dir:
- logger.info("Writing all panoptic predictions to {} ...".format(pred_dir))
- for p in self._predictions:
- with open(os.path.join(pred_dir, p["file_name"]), "wb") as f:
- f.write(p.pop("png_string"))
-
- with open(gt_json, "r") as f:
- json_data = json.load(f)
- json_data["annotations"] = self._predictions
-
- output_dir = self._output_dir or pred_dir
- predictions_json = os.path.join(output_dir, "predictions.json")
- with PathManager.open(predictions_json, "w") as f:
- f.write(json.dumps(json_data))
-
- from panopticapi.evaluation import pq_compute
-
- with contextlib.redirect_stdout(io.StringIO()):
- pq_res = pq_compute(
- gt_json,
- PathManager.get_local_path(predictions_json),
- gt_folder=gt_folder,
- pred_folder=pred_dir,
- )
-
- res = {}
- res["PQ"] = 100 * pq_res["All"]["pq"]
- res["SQ"] = 100 * pq_res["All"]["sq"]
- res["RQ"] = 100 * pq_res["All"]["rq"]
- res["PQ_th"] = 100 * pq_res["Things"]["pq"]
- res["SQ_th"] = 100 * pq_res["Things"]["sq"]
- res["RQ_th"] = 100 * pq_res["Things"]["rq"]
- res["PQ_st"] = 100 * pq_res["Stuff"]["pq"]
- res["SQ_st"] = 100 * pq_res["Stuff"]["sq"]
- res["RQ_st"] = 100 * pq_res["Stuff"]["rq"]
-
- results = OrderedDict({"panoptic_seg": res})
- _print_panoptic_results(pq_res)
-
- return results
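-
- # Illustrative usage sketch (editor's addition, not part of the original file); the
- # dataset name and loop below are assumptions, following detectron2's DatasetEvaluator
- # protocol:
- #
- #     evaluator = COCOPanopticEvaluator("coco_2017_val_panoptic", output_dir="./panoptic_out")
- #     evaluator.reset()
- #     for inputs in data_loader:
- #         outputs = model(inputs)  # each output dict holds "panoptic_seg"
- #         evaluator.process(inputs, outputs)
- #     results = evaluator.evaluate()  # {"panoptic_seg": {"PQ": ..., "SQ": ..., "RQ": ...}}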
-
-
-def _print_panoptic_results(pq_res):
- headers = ["", "PQ", "SQ", "RQ", "#categories"]
- data = []
- for name in ["All", "Things", "Stuff"]:
- row = [name] + [pq_res[name][k] * 100 for k in ["pq", "sq", "rq"]] + [pq_res[name]["n"]]
- data.append(row)
- table = tabulate(
- data, headers=headers, tablefmt="pipe", floatfmt=".3f", stralign="center", numalign="center"
- )
- logger.info("Panoptic Evaluation Results:\n" + table)
-
-
-if __name__ == "__main__":
- from detectron2.utils.logger import setup_logger
-
- logger = setup_logger()
- import argparse
-
- parser = argparse.ArgumentParser()
- parser.add_argument("--gt-json")
- parser.add_argument("--gt-dir")
- parser.add_argument("--pred-json")
- parser.add_argument("--pred-dir")
- args = parser.parse_args()
-
- from panopticapi.evaluation import pq_compute
-
- with contextlib.redirect_stdout(io.StringIO()):
- pq_res = pq_compute(
- args.gt_json, args.pred_json, gt_folder=args.gt_dir, pred_folder=args.pred_dir
- )
- _print_panoptic_results(pq_res)
diff --git a/spaces/Tetel/secondbing/public/style.css b/spaces/Tetel/secondbing/public/style.css
deleted file mode 100644
index 071a08fb1af500313656529f0e08e1c0d94f319a..0000000000000000000000000000000000000000
--- a/spaces/Tetel/secondbing/public/style.css
+++ /dev/null
@@ -1,157 +0,0 @@
-body {
- font-family: "Microsoft YaHei", sans-serif;
- margin: 0;
- padding: 0;
- background-image: url("background.png");
- background-size: cover;
-}
-
-.container {
- display: flex;
- flex-direction: column;
- margin: auto;
- max-width: 1184px;
- padding: 20px;
- box-shadow: 0 2px 8px rgba(0, 0, 0, 0.1);
- border-radius: 10px;
-}
-
-.heading {
- color: #444;
- font-size: 1.5em;
- margin-bottom: 2px;
-}
-
-.button-container {
- display: flex;
- justify-content: flex-end;
- flex-wrap: wrap;
-}
-
-.button {
- margin-left: 10px;
- padding: 5px 10px;
- border: none;
- border-radius: 5px;
- background-color: #007BFF;
- color: white;
- cursor: pointer;
- transition: background-color 0.3s;
-}
-
-.button:hover {
- background-color: #0056b3;
-}
-
-.button[disabled] {
- background-color: gray;
-}
-
-.messages {
- display: flex;
- flex-direction: column;
- border: 1px solid #ccc;
- padding: 10px;
- margin-bottom: 20px;
- border-radius: 5px;
-}
-
-.textarea {
- width: 100%;
- margin-bottom: 10px;
- border: 1px solid #ccc;
- border-radius: 5px;
- padding: 10px;
- box-sizing: border-box;
- font-family: "Microsoft YaHei", sans-serif;
-}
-
-.selector {
- margin-bottom: 10px;
-}
-
-.message {
- margin-bottom: 10px;
- padding: 10px;
- border-radius: 12px;
- box-shadow: 0 0.3px 0.9px rgba(0, 0, 0, 0.12), 0 1.6px 3.6px rgba(0, 0, 0, 0.16);
- font-size: 16px;
- width: fit-content;
- max-width: 768px;
- position: relative;
-}
-
-.user-message {
- color: white;
- background-image: linear-gradient(90deg, #904887 10.79%, #8B257E 87.08%);
- align-self: flex-end;
-}
-
-.assistant-message {
- background-color: rgba(255, 255, 255, 0.6);
-}
-
-.other-message {
- background-color: rgba(255, 255, 255, 0.3);
- align-self: flex-end;
-}
-
-.message * {
- margin-block: 0;
-}
-
-.add-button, .delete-button, .edit-button {
- box-shadow: 0 0.3px 0.9px rgba(0, 0, 0, 0.12), 0 1.6px 3.6px rgba(0, 0, 0, 0.16);
- position: absolute;
- top: -36px;
- background-color: white;
- color: white;
- border: none;
- border-radius: 8px;
- width: 36px;
- height: 36px;
- text-align: center;
- line-height: 36px;
- cursor: pointer;
-}
-
-.delete-button {
- right: 0;
-}
-
-.edit-button {
- right: 36px;
-}
-
-.add-button {
- right: 72px;
-}
-
-.add-button:hover, .delete-button:hover, .edit-button:hover {
- background-color: rgba(255, 255, 255, 0.06);
-}
-
-img[alt^="image"] {
- width: 206px;
- height: 206px;
- border: 6px solid transparent;
- border-radius: 15px;
- transition: transform 0.3s;
- object-fit: contain;
-}
-
-img[alt^="image"]:hover {
- transform: scale(1.1);
-}
-
-img[alt="bg_upload_image"] {
- width: 20px;
- height: 20px;
-}
-
-#image_upload {
- margin: 10px;
- display: flex;
- align-items: center;
-}
-
diff --git a/spaces/Vrk/SkimLit/MakePredictions.py b/spaces/Vrk/SkimLit/MakePredictions.py
deleted file mode 100644
index 1918e05f5fea1cd434a0675e9a249f352dfd338c..0000000000000000000000000000000000000000
--- a/spaces/Vrk/SkimLit/MakePredictions.py
+++ /dev/null
@@ -1,138 +0,0 @@
-import numpy as np
-from spacy.lang.en import English
-import pandas as pd
-
-import nltk
-from nltk.corpus import stopwords
-from nltk.stem import PorterStemmer
-import re
-
-import torch
-import torch.nn.functional as F
-
-from Dataset import SkimlitDataset
-
-# nltk.download("stopwords")
-# STOPWORDS = stopwords.words("english")
-# porter = PorterStemmer()
-
-def download_stopwords():
- nltk.download("stopwords")
- STOPWORDS = stopwords.words("english")
- porter = PorterStemmer()
- return STOPWORDS, porter
-
-def preprocess(text, stopwords):
- """Conditional preprocessing on our text unique to our task."""
- # Lower
- text = text.lower()
-
- # Remove stopwords
- pattern = re.compile(r"\b(" + r"|".join(stopwords) + r")\b\s*")
- text = pattern.sub("", text)
-
- # Remove words in parentheses
- text = re.sub(r"\([^)]*\)", "", text)
-
- # Spacing and filters
- text = re.sub(r"([-;;.,!?<=>])", r" \1 ", text)
- text = re.sub("[^A-Za-z0-9]+", " ", text) # remove non alphanumeric chars
- text = re.sub(" +", " ", text) # remove multiple spaces
- text = text.strip()
-
- return text
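-
-# Editor's sketch (not in the original file): with a stopword list such as
-# ["this", "is", "a", "with"], preprocess behaves roughly like this:
-#
-#     preprocess("This is a (test) sentence, with stopwords!", ["this", "is", "a", "with"])
-#     # -> "sentence stopwords"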
-
-def spacy_function(abstract):
-
- # setup English sentence parser
- nlp = English()
-
- # add the rule-based sentence-splitting component to the pipeline
- # (spaCy 3.x only needs add_pipe; an explicit create_pipe call is not required)
- nlp.add_pipe("sentencizer")
-
- # create "doc" of parsed sequences, change index for a different abstract
- doc = nlp(abstract)
-
- # return detected sentences from doc in string type (not spaCy token type)
- abstract_lines = [str(sent) for sent in list(doc.sents)]
-
- return abstract_lines
-
-# ---------------------------------------------------------------------------------------------------------------------------
-
-def model_prediction(model, dataloader):
- """Prediction step."""
- # Set model to eval mode
- model.eval()
- y_trues, y_probs = [], []
- # Iterate over val batches
- for i, batch in enumerate(dataloader):
- # Forward pass w/ inputs
- # batch = [item.to(.device) for item in batch] # Set device
- inputs = batch
- z = model(inputs)
- # Store outputs
- y_prob = F.softmax(z, dim=1).detach().cpu().numpy()
- y_probs.extend(y_prob)
- return np.vstack(y_probs)
-
-# ---------------------------------------------------------------------------------------------------------------------------
-
-def make_skimlit_predictions(text, model, tokenizer, label_encoder): # embedding path
- # split the abstract into individual sentences
- abstract_lines = spacy_function(text)
-
- # Get total number of lines
- total_lines_in_sample = len(abstract_lines)
-
- # Go through each line in abstract and create a list of dictionaries containing features for each line
- sample_lines = []
- for i, line in enumerate(abstract_lines):
- sample_dict = {}
- sample_dict["text"] = str(line)
- sample_dict["line_number"] = i
- sample_dict["total_lines"] = total_lines_in_sample - 1
- sample_lines.append(sample_dict)
-
- # converting the sample line list into a pandas DataFrame
- df = pd.DataFrame(sample_lines)
-
- # getting stopwords and the Porter stemmer
- STOPWORDS, porter = download_stopwords()
-
- # applying preprocessing function to lines
- df.text = df.text.apply(lambda x: preprocess(x, STOPWORDS))
-
- # converting texts into numerical sequences
- text_seq = tokenizer.texts_to_sequences(texts=df['text'])
-
- # creating Dataset
- dataset = SkimlitDataset(text_seq=text_seq, line_num=df['line_number'], total_line=df['total_lines'])
-
- # creating dataloader
- dataloader = dataset.create_dataloader(batch_size=2)
-
- # Preparing embeddings
-# embedding_matrix = get_embeddings(embeding_path, tokenizer, 300)
-
- # creating model
-# model = SkimlitModel(embedding_dim=300, vocab_size=len(tokenizer), hidden_dim=128, n_layers=3, linear_output=128, num_classes=len(label_encoder), pretrained_embeddings=embedding_matrix)
-
- # loading model weight
-# model.load_state_dict(torch.load('/content/drive/MyDrive/Datasets/SkimLit/skimlit-pytorch-1/skimlit-model-final-1.pt', map_location='cpu'))
-
- # setting model into evaluation mode
- model.eval()
-
- # getting predictions
- y_pred = model_prediction(model, dataloader)
-
- # converting predictions into label class
- pred = y_pred.argmax(axis=1)
- pred = label_encoder.decode(pred)
-
- return abstract_lines, pred
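-
-# Illustrative usage sketch (editor's addition, not in the original file); the model,
-# tokenizer and label_encoder objects are assumed to be loaded elsewhere in the app:
-#
-#     abstract = "Diabetes is a chronic disease. We studied 120 patients. Outcomes improved."
-#     lines, labels = make_skimlit_predictions(abstract, model, tokenizer, label_encoder)
-#     for line, label in zip(lines, labels):
-#         print(f"{label}: {line}")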
\ No newline at end of file
diff --git a/spaces/Xenos14/XenoEngine-SD-webui/Dockerfile b/spaces/Xenos14/XenoEngine-SD-webui/Dockerfile
deleted file mode 100644
index f315c0c460357514f564aca3f39aa805b9afbf9e..0000000000000000000000000000000000000000
--- a/spaces/Xenos14/XenoEngine-SD-webui/Dockerfile
+++ /dev/null
@@ -1,225 +0,0 @@
-FROM nvidia/cuda:11.7.1-cudnn8-devel-ubuntu22.04
-
-ENV DEBIAN_FRONTEND noninteractive
-ENV PYTHONUNBUFFERED=1
-ENV PIP_DISABLE_PIP_VERSION_CHECK=1
-ENV PIP_NO_CACHE_DIR=1
-
-# OS setup
-RUN apt-get update -y \
- && apt-get upgrade -y \
- && apt-get install -y \
- libgl1 \
- libglib2.0-0 \
- curl \
- vim \
- wget \
- git \
- git-lfs \
- tzdata \
- bash \
- ca-certificates \
- libreadline8 \
- bzip2 \
- psmisc \
- procps \
- netbase \
- openssh-client \
- libsqlite3-dev \
- python3-pip \
- python3-venv \
- python-is-python3 \
- build-essential \
- libssl-dev \
- libffi-dev \
- aria2 \
- \
- && pip3 install --upgrade pip \
- \
- && git lfs install \
- \
- && apt-get clean autoclean \
- && apt-get autoremove --yes \
- && rm -rf /var/lib/apt/lists/*
-
-# OS timezone setting (UTC)
-RUN echo "UTC" > /etc/timezone
-ENV TZ=UTC
-
-# Poetry for Python packages
-RUN curl -sSL https://install.python-poetry.org | POETRY_HOME=/usr/local/poetry python3 - --yes \
- && ln -s /usr/local/poetry/bin/poetry /usr/bin/poetry \
- \
- && poetry config virtualenvs.create false \
- && poetry config virtualenvs.in-project false
-
-# Create non-root user
-ENV ENV="/etc/profile"
-RUN adduser --disabled-password --gecos '' user && \
- mkdir -p /app && \
- chown -R user:user /app && \
- printf "\n. /etc/profile\n" >> /home/user/.profile \
- printf "\n. /etc/profile\n" >> /home/user/.bashrc
-
-# Sets up virtualenv for dependencies
-ENV VIRTUAL_ENV="/opt/venv"
-ENV VIRTUAL_ENV_DISABLE_PROMPT=1
-ENV POETRY_ACTIVE=1
-ENV PATH="$VIRTUAL_ENV/bin:$PATH"
-RUN echo "export PATH=$PATH" >> /home/user/.bashrc \
- && python3 -m venv $VIRTUAL_ENV \
- && /opt/venv/bin/pip install --upgrade --no-cache-dir pip \
- && chown -R user:user /opt/venv
-
-# Run as non-root user
-USER user
-WORKDIR /app
-
-# Installation of basic Python dependencies specified in pyproject.toml
-COPY --chown=user:user pyproject.toml poetry.lock /app/
-RUN poetry install
-
-# AUTOMATIC1111's WebUI
-RUN git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui /app/stable-diffusion-webui \
- && (cd /app/stable-diffusion-webui && git checkout 5ef669de080814067961f28357256e8fe27544f4)
-
-# Deforum extension
-RUN git clone https://github.com/deforum-art/deforum-for-automatic1111-webui /app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui \
- && (cd /app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui && git checkout 8a6ee64c72c18c60d66a5758b84496bf27c52cda)
-
-# Images Browser WebUI extension
-RUN git clone https://github.com/AlUlkesh/stable-diffusion-webui-images-browser /app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser \
- && (cd /app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser && git checkout b984cdd1692f46006333ab92ef463cc35879f455)
-
-# Locon extension (Obsolete - Use Lycoris)
-#RUN git clone https://github.com/KohakuBlueleaf/a1111-sd-webui-locon /app/stable-diffusion-webui/extensions/a1111-sd-webui-locon \
-# && (cd /app/stable-diffusion-webui/extensions/a1111-sd-webui-locon && git checkout afe70b0f77f2d1cc691f297074cc049913711662)
-
-# Lycoris extension
-RUN git clone https://github.com/KohakuBlueleaf/a1111-sd-webui-lycoris /app/stable-diffusion-webui/extensions/a1111-sd-webui-lycoris \
- && (cd /app/stable-diffusion-webui/extensions/a1111-sd-webui-lycoris && git checkout 8e97bf54867c25d00fc480be1ab4dae5399b35ef)
-
-# Local Latent Upscaler extension
-RUN git clone https://github.com/hnmr293/sd-webui-llul /app/stable-diffusion-webui/extensions/sd-webui-llul \
- && (cd /app/stable-diffusion-webui/extensions/sd-webui-llul && git checkout b20337ae1091ea65fdaf7108a2eaac13fed078d5)
-
-# Aspect Ratios extension
-RUN git clone https://github.com/alemelis/sd-webui-ar /app/stable-diffusion-webui/extensions/sd-webui-ar \
- && (cd /app/stable-diffusion-webui/extensions/sd-webui-ar && git checkout ce0a645ca2ad949573cacc7f5cd14ac13e83e2c9)
-
-# Stable Horde extension
-#RUN git clone https://github.com/natanjunges/stable-diffusion-webui-stable-horde /app/stable-diffusion-webui/extensions/stable-diffusion-webui-stable-horde \
-# && (cd /app/stable-diffusion-webui/extensions/stable-diffusion-webui-stable-horde && git checkout 00248b89bfab7ba465f104324a5d0708ad37341f)
-
-# After Detailer extension
-RUN git clone https://github.com/Bing-su/adetailer /app/stable-diffusion-webui/extensions/adetailer \
- && (cd /app/stable-diffusion-webui/extensions/adetailer && git checkout a0b4c56eb75eceabf07f2ede28986a58cef2bebe)
-
-
-# Panorama extension
-RUN git clone https://github.com/GeorgLegato/sd-webui-panorama-viewer /app/stable-diffusion-webui/extensions/sd-webui-panorama-viewer \
- && (cd /app/stable-diffusion-webui/extensions/sd-webui-panorama-viewer && git checkout 6879f2e00f4e21abffe66cd2f35e1a50efc4aba8)
-
-# Style Pile extension
-RUN git clone https://github.com/some9000/StylePile /app/stable-diffusion-webui/extensions/StylePile \
- && (cd /app/stable-diffusion-webui/extensions/StylePile && git checkout 206b3d06bebb75df1a4b5439e35c432668ea7574)
-
-# Anti Burn extension
-RUN git clone https://github.com/klimaleksus/stable-diffusion-webui-anti-burn /app/stable-diffusion-webui/extensions/stable-diffusion-webui-anti-burn \
- && (cd /app/stable-diffusion-webui/extensions/stable-diffusion-webui-anti-burn && git checkout 4d678f1f1120415fe4cb9f77484252bc82af03b2)
-
-# Super Merger extension
-RUN git clone https://github.com/hako-mikan/sd-webui-supermerger /app/stable-diffusion-webui/extensions/sd-webui-supermerger \
- && (cd /app/stable-diffusion-webui/extensions/sd-webui-supermerger && git checkout 665878f69f8287bd8d34cf388e8b1f2bf4468ab1)
-
-# UMI AI Extension
-#RUN git clone https://github.com/Klokinator/UnivAICharGen /app/stable-diffusion-webui/extensions/UnivAICharGen \
-# && (cd /app/stable-diffusion-webui/extensions/UnivAICharGen && git checkout c2c6114a98a46085ee7e7eec7e09980c68ae43d0)
-
-
-# Wildcards Extension
-RUN git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards /app/stable-diffusion-webui/extensions/stable-diffusion-webui-wildcards \
- && (cd /app/stable-diffusion-webui/extensions/stable-diffusion-webui-wildcards && git checkout c7d49e18398a95f2d13e2e4c063fe2f63fc2a432)
-
-# Dynamic Prompts extension
-#RUN git clone https://github.com/adieyal/sd-dynamic-prompts /app/stable-diffusion-webui/extensions/sd-dynamic-prompts \
-# && (cd /app/stable-diffusion-webui/extensions/sd-dynamic-prompts && git checkout 45b21373c00097546694aaee4f29b3d1514f76c3)
-
-# CiviTAI BETTER Browser WebUI extension
-RUN git clone https://github.com/IAmXenos14/SDWebUI_CivitaiHelperUpdated /app/stable-diffusion-webui/extensions/Stable-Diffusion-Webui-Civitai-Helper \
- && (cd /app/stable-diffusion-webui/extensions/Stable-Diffusion-Webui-Civitai-Helper && git checkout a5d6c493c8e00668b63e3ab924630d2ccc0a2c18)
-
-# CiviTAI WebUI extension
-RUN git clone https://github.com/civitai/sd_civitai_extension /app/stable-diffusion-webui/extensions/sd_civitai_extension \
- && (cd /app/stable-diffusion-webui/extensions/sd_civitai_extension && git checkout 763e8aedfab68e8933c3efbfa568961beeaa3def)
-
-# Huggingface Push extension
-RUN git clone https://github.com/camenduru/stable-diffusion-webui-huggingface /app/stable-diffusion-webui/extensions/stable-diffusion-webui-huggingface \
- && (cd /app/stable-diffusion-webui/extensions/stable-diffusion-webui-huggingface && git checkout 6e824a1aeff9982e6068ec369dbaceb79c21a05a)
-
-# Booru Tag Autocomplete extension
-RUN git clone https://github.com/DominikDoom/a1111-sd-webui-tagcomplete /app/stable-diffusion-webui/extensions/a1111-sd-webui-tagcomplete \
- && (cd /app/stable-diffusion-webui/extensions/a1111-sd-webui-tagcomplete && git checkout 5db035cc3ac5ba418abbbd49dc1d0112594a488a)
-
-# Batchlinks Downloader extension
-RUN git clone https://github.com/etherealxx/batchlinks-webui /app/stable-diffusion-webui/extensions/batchlinks-webui \
- && (cd /app/stable-diffusion-webui/extensions/batchlinks-webui && git checkout d44bbb5e2a043f2eed80c3945c0f2c676e41d0e5)
-
-# Fast PNG Info extension
-#RUN git clone https://github.com/NoCrypt/sd-fast-pnginfo /app/stable-diffusion-webui/extensions/sd-fast-pnginfo \
-# && (cd /app/stable-diffusion-webui/extensions/sd-fast-pnginfo && git checkout b6647cd57fd5930f4355dee253833a459d2b39fe)
-
-# Filer extension
-RUN git clone https://github.com/aka7774/sd_filer /app/stable-diffusion-webui/extensions/sd_filer \
- && (cd /app/stable-diffusion-webui/extensions/sd_filer && git checkout ff7d76930ced048a4e5e73ca964551d679463da7)
-
-# Paste extension
-RUN git clone https://github.com/klimaleksus/stable-diffusion-webui-fix-image-paste /app/stable-diffusion-webui/extensions/stable-diffusion-webui-fix-image-paste \
- && (cd /app/stable-diffusion-webui/extensions/stable-diffusion-webui-fix-image-paste && git checkout 2844e17e2806ed5bc76831b27f947909060d0aac)
-
-
-# Toolkit extension
-RUN git clone https://github.com/arenasys/stable-diffusion-webui-model-toolkit /app/stable-diffusion-webui/extensions/stable-diffusion-webui-model-toolkit \
- && (cd /app/stable-diffusion-webui/extensions/stable-diffusion-webui-model-toolkit && git checkout 4d8fea77dba5643439691c1c6b003db4d330ff0b)
-
-# Additional Networks WebUI extension
-RUN git clone https://github.com/kohya-ss/sd-webui-additional-networks /app/stable-diffusion-webui/extensions/sd-webui-additional-networks \
- && (cd /app/stable-diffusion-webui/extensions/sd-webui-additional-networks && git checkout 86300421b0ff35ab9d670874e836b7f65b806430)
- #&& mkdir -p /app/stable-diffusion-webui/extensions/sd-webui-additional-networks/models/LoRA
-
-# ControlNet WebUI extension
-RUN git clone https://github.com/Mikubill/sd-webui-controlnet /app/stable-diffusion-webui/extensions/sd-webui-controlnet \
- && (cd /app/stable-diffusion-webui/extensions/sd-webui-controlnet && git checkout e78d486ce0e5cb9adc52549370d71e0433bf2111) \
- && mkdir -p /app/stable-diffusion-webui/models/ControlNet
-
-# Grab the Helper LoRAs
-#RUN mkdir -p /app/stable-diffusion-webui/models/Lora && cd /app/stable-diffusion-webui/models/Lora \
-# && (git clone https://huggingface.co/Xenos14/QoL-LoRas)
-
-# Grab the Embeddings, LoRAs, etc.
-RUN mkdir -p /app/holder && cd /app/holder \
- && git clone https://huggingface.co/Xenos14/MyMods \
- && cd MyMods \
- && cp -r models /app/stable-diffusion-webui/ \
- && cp -r embeddings /app/stable-diffusion-webui/ \
- && cp -r extensions/Umi-AI-debloat/wildcards /app/stable-diffusion-webui/extensions/stable-diffusion-webui-wildcards/
-
-# Prepare WebUI environment
-WORKDIR /app/stable-diffusion-webui
-RUN /opt/venv/bin/python launch.py --exit --skip-torch-cuda-test --xformers
-
-# Patch WebUI
-RUN sed -i -e 's/ show_progress=False,/ show_progress=True,/g' modules/ui.py
-RUN sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' webui.py
-RUN sed -i -e 's/ outputs=\[/queue=False, &/g' modules/ui.py
-RUN sed -i -e 's/ queue=False, / /g' modules/ui.py
-
-# Copy startup scripts
-COPY --chown=user:user run.py on_start.sh config.json ui-config.json shared-config.json shared-ui-config.json header_patch.py /app/stable-diffusion-webui/
-# COPY embeddings/ /app/stable-diffusion-webui/embeddings/
-COPY styles.csv /app/stable-diffusion-webui/
-RUN chmod +x on_start.sh
-
-EXPOSE 7860
-
-CMD ["/opt/venv/bin/python", "run.py", "--listen", "--gradio-queue", "--disable-nan-check", "--enable-insecure-extension-access", "--ui-config-file", "ui-config.json", "--ui-settings-file", "config.json", "--disable-console-progressbars", "--cors-allow-origins", "huggingface.co,hf.space", "--no-progressbar-hiding", "--enable-console-prompts", "--no-download-sd-model", "--api", "--skip-version-check", "--lora-dir", "/app/stable-diffusion-webui/models/Lora", "--embeddings-dir", "/app/stable-diffusion-webui/embeddings"]
diff --git a/spaces/XzJosh/ShanBao-Bert-VITS2/text/chinese.py b/spaces/XzJosh/ShanBao-Bert-VITS2/text/chinese.py
deleted file mode 100644
index 276753880b73de2e8889dcb2101cd98c09e0710b..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/ShanBao-Bert-VITS2/text/chinese.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import os
-import re
-
-import cn2an
-from pypinyin import lazy_pinyin, Style
-
-from text import symbols
-from text.symbols import punctuation
-from text.tone_sandhi import ToneSandhi
-
-current_file_path = os.path.dirname(__file__)
-pinyin_to_symbol_map = {line.split("\t")[0]: line.strip().split("\t")[1] for line in
- open(os.path.join(current_file_path, 'opencpop-strict.txt')).readlines()}
-
-import jieba.posseg as psg
-
-
-rep_map = {
- ':': ',',
- ';': ',',
- ',': ',',
- '。': '.',
- '!': '!',
- '?': '?',
- '\n': '.',
- "·": ",",
- '、': ",",
- '...': '…',
- '$': '.',
- '“': "'",
- '”': "'",
- '‘': "'",
- '’': "'",
- '(': "'",
- ')': "'",
- '(': "'",
- ')': "'",
- '《': "'",
- '》': "'",
- '【': "'",
- '】': "'",
- '[': "'",
- ']': "'",
- '—': "-",
- '~': "-",
- '~': "-",
- '「': "'",
- '」': "'",
-}
-
-tone_modifier = ToneSandhi()
-
-def replace_punctuation(text):
- text = text.replace("嗯", "恩").replace("呣","母")
- pattern = re.compile('|'.join(re.escape(p) for p in rep_map.keys()))
-
- replaced_text = pattern.sub(lambda x: rep_map[x.group()], text)
-
- replaced_text = re.sub(r'[^\u4e00-\u9fa5'+"".join(punctuation)+r']+', '', replaced_text)
-
- return replaced_text
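-
-# Editor's sketch (not in the original file), assuming text.symbols.punctuation contains
-# the ASCII marks produced by rep_map:
-#
-#     replace_punctuation("嗯,你好!(测试)")  # -> "恩,你好!'测试'"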
-
-def g2p(text):
- pattern = r'(?<=[{0}])\s*'.format(''.join(punctuation))
- sentences = [i for i in re.split(pattern, text) if i.strip()!='']
- phones, tones, word2ph = _g2p(sentences)
- assert sum(word2ph) == len(phones)
- assert len(word2ph) == len(text)  # this can fail for some inputs; wrap it in try/except if needed
- phones = ['_'] + phones + ["_"]
- tones = [0] + tones + [0]
- word2ph = [1] + word2ph + [1]
- return phones, tones, word2ph
-
-
-def _get_initials_finals(word):
- initials = []
- finals = []
- orig_initials = lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.INITIALS)
- orig_finals = lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for c, v in zip(orig_initials, orig_finals):
- initials.append(c)
- finals.append(v)
- return initials, finals
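-
-# Editor's sketch (not in the original file): pypinyin splits each character into an
-# initial and a tone-numbered final, e.g.:
-#
-#     _get_initials_finals("你好")  # -> (['n', 'h'], ['i3', 'ao3'])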
-
-
-def _g2p(segments):
- phones_list = []
- tones_list = []
- word2ph = []
- for seg in segments:
- pinyins = []
- # Remove all English words from the sentence
- seg = re.sub('[a-zA-Z]+', '', seg)
- seg_cut = psg.lcut(seg)
- initials = []
- finals = []
- seg_cut = tone_modifier.pre_merge_for_modify(seg_cut)
- for word, pos in seg_cut:
- if pos == 'eng':
- continue
- sub_initials, sub_finals = _get_initials_finals(word)
- sub_finals = tone_modifier.modified_tone(word, pos,
- sub_finals)
- initials.append(sub_initials)
- finals.append(sub_finals)
-
- # assert len(sub_initials) == len(sub_finals) == len(word)
- initials = sum(initials, [])
- finals = sum(finals, [])
- #
- for c, v in zip(initials, finals):
- raw_pinyin = c+v
- # NOTE: post process for pypinyin outputs
- # we discriminate i, ii and iii
- if c == v:
- assert c in punctuation
- phone = [c]
- tone = '0'
- word2ph.append(1)
- else:
- v_without_tone = v[:-1]
- tone = v[-1]
-
- pinyin = c+v_without_tone
- assert tone in '12345'
-
- if c:
- # syllable with an initial consonant
- v_rep_map = {
- "uei": 'ui',
- 'iou': 'iu',
- 'uen': 'un',
- }
- if v_without_tone in v_rep_map.keys():
- pinyin = c+v_rep_map[v_without_tone]
- else:
- # standalone final (no initial consonant)
- pinyin_rep_map = {
- 'ing': 'ying',
- 'i': 'yi',
- 'in': 'yin',
- 'u': 'wu',
- }
- if pinyin in pinyin_rep_map.keys():
- pinyin = pinyin_rep_map[pinyin]
- else:
- single_rep_map = {
- 'v': 'yu',
- 'e': 'e',
- 'i': 'y',
- 'u': 'w',
- }
- if pinyin[0] in single_rep_map.keys():
- pinyin = single_rep_map[pinyin[0]]+pinyin[1:]
-
- assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin)
- phone = pinyin_to_symbol_map[pinyin].split(' ')
- word2ph.append(len(phone))
-
- phones_list += phone
- tones_list += [int(tone)] * len(phone)
- return phones_list, tones_list, word2ph
-
-
-
-def text_normalize(text):
- numbers = re.findall(r'\d+(?:\.?\d+)?', text)
- for number in numbers:
- text = text.replace(number, cn2an.an2cn(number), 1)
- text = replace_punctuation(text)
- return text
-
-def get_bert_feature(text, word2ph):
- from text import chinese_bert
- return chinese_bert.get_bert_feature(text, word2ph)
-
-if __name__ == '__main__':
- from text.chinese_bert import get_bert_feature
- text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏"
- text = text_normalize(text)
- print(text)
- phones, tones, word2ph = g2p(text)
- bert = get_bert_feature(text, word2ph)
-
- print(phones, tones, word2ph, bert.shape)
-
-
-# # Example usage
-# text = "这是一个示例文本:,你好!这是一个测试...."
-# print(g2p_paddle(text))  # output: 这是一个示例文本你好这是一个测试
diff --git a/spaces/XzJosh/otto-Bert-VITS2/text/tone_sandhi.py b/spaces/XzJosh/otto-Bert-VITS2/text/tone_sandhi.py
deleted file mode 100644
index 0f45b7a72c5d858bcaab19ac85cfa686bf9a74da..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/otto-Bert-VITS2/text/tone_sandhi.py
+++ /dev/null
@@ -1,351 +0,0 @@
-# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from typing import List
-from typing import Tuple
-
-import jieba
-from pypinyin import lazy_pinyin
-from pypinyin import Style
-
-
-class ToneSandhi():
- def __init__(self):
- self.must_neural_tone_words = {
- '麻烦', '麻利', '鸳鸯', '高粱', '骨头', '骆驼', '马虎', '首饰', '馒头', '馄饨', '风筝',
- '难为', '队伍', '阔气', '闺女', '门道', '锄头', '铺盖', '铃铛', '铁匠', '钥匙', '里脊',
- '里头', '部分', '那么', '道士', '造化', '迷糊', '连累', '这么', '这个', '运气', '过去',
- '软和', '转悠', '踏实', '跳蚤', '跟头', '趔趄', '财主', '豆腐', '讲究', '记性', '记号',
- '认识', '规矩', '见识', '裁缝', '补丁', '衣裳', '衣服', '衙门', '街坊', '行李', '行当',
- '蛤蟆', '蘑菇', '薄荷', '葫芦', '葡萄', '萝卜', '荸荠', '苗条', '苗头', '苍蝇', '芝麻',
- '舒服', '舒坦', '舌头', '自在', '膏药', '脾气', '脑袋', '脊梁', '能耐', '胳膊', '胭脂',
- '胡萝', '胡琴', '胡同', '聪明', '耽误', '耽搁', '耷拉', '耳朵', '老爷', '老实', '老婆',
- '老头', '老太', '翻腾', '罗嗦', '罐头', '编辑', '结实', '红火', '累赘', '糨糊', '糊涂',
- '精神', '粮食', '簸箕', '篱笆', '算计', '算盘', '答应', '笤帚', '笑语', '笑话', '窟窿',
- '窝囊', '窗户', '稳当', '稀罕', '称呼', '秧歌', '秀气', '秀才', '福气', '祖宗', '砚台',
- '码头', '石榴', '石头', '石匠', '知识', '眼睛', '眯缝', '眨巴', '眉毛', '相声', '盘算',
- '白净', '痢疾', '痛快', '疟疾', '疙瘩', '疏忽', '畜生', '生意', '甘蔗', '琵琶', '琢磨',
- '琉璃', '玻璃', '玫瑰', '玄乎', '狐狸', '状元', '特务', '牲口', '牙碜', '牌楼', '爽快',
- '爱人', '热闹', '烧饼', '烟筒', '烂糊', '点心', '炊帚', '灯笼', '火候', '漂亮', '滑溜',
- '溜达', '温和', '清楚', '消息', '浪头', '活泼', '比方', '正经', '欺负', '模糊', '槟榔',
- '棺材', '棒槌', '棉花', '核桃', '栅栏', '柴火', '架势', '枕头', '枇杷', '机灵', '本事',
- '木头', '木匠', '朋友', '月饼', '月亮', '暖和', '明白', '时候', '新鲜', '故事', '收拾',
- '收成', '提防', '挖苦', '挑剔', '指甲', '指头', '拾掇', '拳头', '拨弄', '招牌', '招呼',
- '抬举', '护士', '折腾', '扫帚', '打量', '打算', '打点', '打扮', '打听', '打发', '扎实',
- '扁担', '戒指', '懒得', '意识', '意思', '情形', '悟性', '怪物', '思量', '怎么', '念头',
- '念叨', '快活', '忙活', '志气', '心思', '得罪', '张罗', '弟兄', '开通', '应酬', '庄稼',
- '干事', '帮手', '帐篷', '希罕', '师父', '师傅', '巴结', '巴掌', '差事', '工夫', '岁数',
- '屁股', '尾巴', '少爷', '小气', '小伙', '将就', '对头', '对付', '寡妇', '家伙', '客气',
- '实在', '官司', '学问', '学生', '字号', '嫁妆', '媳妇', '媒人', '婆家', '娘家', '委屈',
- '姑娘', '姐夫', '妯娌', '妥当', '妖精', '奴才', '女婿', '头发', '太阳', '大爷', '大方',
- '大意', '大夫', '多少', '多么', '外甥', '壮实', '地道', '地方', '在乎', '困难', '嘴巴',
- '嘱咐', '嘟囔', '嘀咕', '喜欢', '喇嘛', '喇叭', '商量', '唾沫', '哑巴', '哈欠', '哆嗦',
- '咳嗽', '和尚', '告诉', '告示', '含糊', '吓唬', '后头', '名字', '名堂', '合同', '吆喝',
- '叫唤', '口袋', '厚道', '厉害', '千斤', '包袱', '包涵', '匀称', '勤快', '动静', '动弹',
- '功夫', '力气', '前头', '刺猬', '刺激', '别扭', '利落', '利索', '利害', '分析', '出息',
- '凑合', '凉快', '冷战', '冤枉', '冒失', '养活', '关系', '先生', '兄弟', '便宜', '使唤',
- '佩服', '作坊', '体面', '位置', '似的', '伙计', '休息', '什么', '人家', '亲戚', '亲家',
- '交情', '云彩', '事情', '买卖', '主意', '丫头', '丧气', '两口', '东西', '东家', '世故',
- '不由', '不在', '下水', '下巴', '上头', '上司', '丈夫', '丈人', '一辈', '那个', '菩萨',
- '父亲', '母亲', '咕噜', '邋遢', '费用', '冤家', '甜头', '介绍', '荒唐', '大人', '泥鳅',
- '幸福', '熟悉', '计划', '扑腾', '蜡烛', '姥爷', '照顾', '喉咙', '吉他', '弄堂', '蚂蚱',
- '凤凰', '拖沓', '寒碜', '糟蹋', '倒腾', '报复', '逻辑', '盘缠', '喽啰', '牢骚', '咖喱',
- '扫把', '惦记'
- }
- self.must_not_neural_tone_words = {
- "男子", "女子", "分子", "原子", "量子", "莲子", "石子", "瓜子", "电子", "人人", "虎虎"
- }
- self.punc = ":,;。?!“”‘’':,;.?!"
-
- # the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041
- # e.g.
- # word: "家里"
- # pos: "s"
- # finals: ['ia1', 'i3']
- def _neural_sandhi(self, word: str, pos: str,
- finals: List[str]) -> List[str]:
-
- # reduplication words for n. and v. e.g. 奶奶, 试试, 旺旺
- for j, item in enumerate(word):
- if j - 1 >= 0 and item == word[j - 1] and pos[0] in {
- "n", "v", "a"
- } and word not in self.must_not_neural_tone_words:
- finals[j] = finals[j][:-1] + "5"
- ge_idx = word.find("个")
- if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶":
- finals[-1] = finals[-1][:-1] + "5"
- elif len(word) >= 1 and word[-1] in "的地得":
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 走了, 看着, 去过
- # elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}:
- # finals[-1] = finals[-1][:-1] + "5"
- elif len(word) > 1 and word[-1] in "们子" and pos in {
- "r", "n"
- } and word not in self.must_not_neural_tone_words:
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 桌上, 地下, 家里
- elif len(word) > 1 and word[-1] in "上下里" and pos in {"s", "l", "f"}:
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 上来, 下去
- elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开":
- finals[-1] = finals[-1][:-1] + "5"
- # "个" used as a measure word
- elif (ge_idx >= 1 and
- (word[ge_idx - 1].isnumeric() or
- word[ge_idx - 1] in "几有两半多各整每做是")) or word == '个':
- finals[ge_idx] = finals[ge_idx][:-1] + "5"
- else:
- if word in self.must_neural_tone_words or word[
- -2:] in self.must_neural_tone_words:
- finals[-1] = finals[-1][:-1] + "5"
-
- word_list = self._split_word(word)
- finals_list = [finals[:len(word_list[0])], finals[len(word_list[0]):]]
- for i, word in enumerate(word_list):
- # conventional neural in Chinese
- if word in self.must_neural_tone_words or word[
- -2:] in self.must_neural_tone_words:
- finals_list[i][-1] = finals_list[i][-1][:-1] + "5"
- finals = sum(finals_list, [])
- return finals
-
- def _bu_sandhi(self, word: str, finals: List[str]) -> List[str]:
- # e.g. 看不懂
- if len(word) == 3 and word[1] == "不":
- finals[1] = finals[1][:-1] + "5"
- else:
- for i, char in enumerate(word):
- # "不" before tone4 should be bu2, e.g. 不怕
- if char == "不" and i + 1 < len(word) and finals[i +
- 1][-1] == "4":
- finals[i] = finals[i][:-1] + "2"
- return finals
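-
- # Editor's sketch (not in the original file): "不" before a tone-4 syllable becomes bu2,
- # e.g. for "不怕" (bu4 pa4):
- #
- #     ToneSandhi()._bu_sandhi("不怕", ["u4", "a4"])  # -> ["u2", "a4"]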
-
- def _yi_sandhi(self, word: str, finals: List[str]) -> List[str]:
- # "一" in number sequences, e.g. 一零零, 二一零
- if word.find("一") != -1 and all(
- [item.isnumeric() for item in word if item != "一"]):
- return finals
- # "一" between reduplication words shold be yi5, e.g. 看一看
- elif len(word) == 3 and word[1] == "一" and word[0] == word[-1]:
- finals[1] = finals[1][:-1] + "5"
- # when "一" is ordinal word, it should be yi1
- elif word.startswith("第一"):
- finals[1] = finals[1][:-1] + "1"
- else:
- for i, char in enumerate(word):
- if char == "一" and i + 1 < len(word):
- # "一" before tone4 should be yi2, e.g. 一段
- if finals[i + 1][-1] == "4":
- finals[i] = finals[i][:-1] + "2"
- # "一" before non-tone4 should be yi4, e.g. 一天
- else:
- # "一" 后面如果是标点,还读一声
- if word[i + 1] not in self.punc:
- finals[i] = finals[i][:-1] + "4"
- return finals
-
- def _split_word(self, word: str) -> List[str]:
- word_list = jieba.cut_for_search(word)
- word_list = sorted(word_list, key=lambda i: len(i), reverse=False)
- first_subword = word_list[0]
- first_begin_idx = word.find(first_subword)
- if first_begin_idx == 0:
- second_subword = word[len(first_subword):]
- new_word_list = [first_subword, second_subword]
- else:
- second_subword = word[:-len(first_subword)]
- new_word_list = [second_subword, first_subword]
- return new_word_list
-
- def _three_sandhi(self, word: str, finals: List[str]) -> List[str]:
- if len(word) == 2 and self._all_tone_three(finals):
- finals[0] = finals[0][:-1] + "2"
- elif len(word) == 3:
- word_list = self._split_word(word)
- if self._all_tone_three(finals):
- # disyllabic + monosyllabic, e.g. 蒙古/包
- if len(word_list[0]) == 2:
- finals[0] = finals[0][:-1] + "2"
- finals[1] = finals[1][:-1] + "2"
- # monosyllabic + disyllabic, e.g. 纸/老虎
- elif len(word_list[0]) == 1:
- finals[1] = finals[1][:-1] + "2"
- else:
- finals_list = [
- finals[:len(word_list[0])], finals[len(word_list[0]):]
- ]
- if len(finals_list) == 2:
- for i, sub in enumerate(finals_list):
- # e.g. 所有/人
- if self._all_tone_three(sub) and len(sub) == 2:
- finals_list[i][0] = finals_list[i][0][:-1] + "2"
- # e.g. 好/喜欢
- elif i == 1 and not self._all_tone_three(sub) and finals_list[i][0][-1] == "3" and \
- finals_list[0][-1][-1] == "3":
-
- finals_list[0][-1] = finals_list[0][-1][:-1] + "2"
- finals = sum(finals_list, [])
- # split an idiom into two words of length 2
- elif len(word) == 4:
- finals_list = [finals[:2], finals[2:]]
- finals = []
- for sub in finals_list:
- if self._all_tone_three(sub):
- sub[0] = sub[0][:-1] + "2"
- finals += sub
-
- return finals
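-
- # Editor's sketch (not in the original file): two adjacent third tones turn the first
- # syllable into tone 2, e.g. for "老虎" (lao3 hu3):
- #
- #     ToneSandhi()._three_sandhi("老虎", ["ao3", "u3"])  # -> ["ao2", "u3"]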
-
- def _all_tone_three(self, finals: List[str]) -> bool:
- return all(x[-1] == "3" for x in finals)
-
- # merge "不" and the word behind it
- # if don't merge, "不" sometimes appears alone according to jieba, which may occur sandhi error
- def _merge_bu(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- last_word = ""
- for word, pos in seg:
- if last_word == "不":
- word = last_word + word
- if word != "不":
- new_seg.append((word, pos))
- last_word = word[:]
- if last_word == "不":
- new_seg.append((last_word, 'd'))
- last_word = ""
- return new_seg
-
- # function 1: merge "一" with the reduplicated words on its left and right, e.g. "听","一","听" -> "听一听"
- # function 2: merge a single "一" with the word that follows it
- # if not merged, "一" sometimes appears alone according to jieba, which may cause sandhi errors
- # e.g.
- # input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')]
- # output seg: [['听一听', 'v']]
- def _merge_yi(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- # function 1
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and word == "一" and i + 1 < len(seg) and seg[i - 1][
- 0] == seg[i + 1][0] and seg[i - 1][1] == "v":
- new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0]
- else:
- if i - 2 >= 0 and seg[i - 1][0] == "一" and seg[i - 2][
- 0] == word and pos == "v":
- continue
- else:
- new_seg.append([word, pos])
- seg = new_seg
- new_seg = []
- # function 2
- for i, (word, pos) in enumerate(seg):
- if new_seg and new_seg[-1][0] == "一":
- new_seg[-1][0] = new_seg[-1][0] + word
- else:
- new_seg.append([word, pos])
- return new_seg
-
- # the first and the second words are all_tone_three
- def _merge_continuous_three_tones(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- sub_finals_list = [
- lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for (word, pos) in seg
- ]
- assert len(sub_finals_list) == len(seg)
- merge_last = [False] * len(seg)
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and self._all_tone_three(
- sub_finals_list[i - 1]) and self._all_tone_three(
- sub_finals_list[i]) and not merge_last[i - 1]:
- # if the last word is a reduplication, do not merge, because reduplications need _neural_sandhi
- if not self._is_reduplication(seg[i - 1][0]) and len(
- seg[i - 1][0]) + len(seg[i][0]) <= 3:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- merge_last[i] = True
- else:
- new_seg.append([word, pos])
- else:
- new_seg.append([word, pos])
-
- return new_seg
-
- def _is_reduplication(self, word: str) -> bool:
- return len(word) == 2 and word[0] == word[1]
-
- # the last char of first word and the first char of second word is tone_three
- def _merge_continuous_three_tones_2(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- sub_finals_list = [
- lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for (word, pos) in seg
- ]
- assert len(sub_finals_list) == len(seg)
- merge_last = [False] * len(seg)
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and sub_finals_list[i - 1][-1][-1] == "3" and sub_finals_list[i][0][-1] == "3" and not \
- merge_last[i - 1]:
- # if the last word is a reduplication, do not merge, because reduplications need _neural_sandhi
- if not self._is_reduplication(seg[i - 1][0]) and len(
- seg[i - 1][0]) + len(seg[i][0]) <= 3:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- merge_last[i] = True
- else:
- new_seg.append([word, pos])
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def _merge_er(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and word == "儿" and seg[i-1][0] != "#":
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def _merge_reduplication(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- for i, (word, pos) in enumerate(seg):
- if new_seg and word == new_seg[-1][0]:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def pre_merge_for_modify(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- seg = self._merge_bu(seg)
- try:
- seg = self._merge_yi(seg)
- except Exception:
- print("_merge_yi failed")
- seg = self._merge_reduplication(seg)
- seg = self._merge_continuous_three_tones(seg)
- seg = self._merge_continuous_three_tones_2(seg)
- seg = self._merge_er(seg)
- return seg
-
- def modified_tone(self, word: str, pos: str,
- finals: List[str]) -> List[str]:
- finals = self._bu_sandhi(word, finals)
- finals = self._yi_sandhi(word, finals)
- finals = self._neural_sandhi(word, pos, finals)
- finals = self._three_sandhi(word, finals)
- return finals
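The helpers above are designed to run over jieba's part-of-speech segmentation before the per-word rules in `modified_tone` are applied. A minimal usage sketch, assuming jieba and pypinyin are installed and that the enclosing class is exposed as `ToneSandhi` (the class name is not visible in this hunk; it is the name used by the upstream PaddleSpeech frontend this module mirrors):

```python
import jieba.posseg as psg
from pypinyin import Style, lazy_pinyin

def apply_sandhi(sandhi, text: str):
    """sandhi: an instance of the (assumed) ToneSandhi class defined in this file."""
    seg = [(pair.word, pair.flag) for pair in psg.lcut(text)]   # [(word, POS tag), ...]
    seg = sandhi.pre_merge_for_modify(seg)                      # e.g. ('听','一','听') -> '听一听'
    result = []
    for word, pos in seg:
        finals = lazy_pinyin(word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
        result.append((word, sandhi.modified_tone(word, pos, finals)))
    return result

# apply_sandhi(ToneSandhi(), "我想听一听") would yield per-word finals with 不/一/neutral/
# third-tone sandhi applied, e.g. a third tone changing to "2" before another third tone.
```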
diff --git a/spaces/XzJosh/yoyo-Bert-VITS2/models.py b/spaces/XzJosh/yoyo-Bert-VITS2/models.py
deleted file mode 100644
index d4afe44d883691610c5903e602a3ca245fcb3a5c..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/yoyo-Bert-VITS2/models.py
+++ /dev/null
@@ -1,707 +0,0 @@
-import copy
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-import attentions
-import monotonic_align
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-
-from commons import init_weights, get_padding
-from text import symbols, num_tones, num_languages
-class DurationDiscriminator(nn.Module): #vits2
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
- super().__init__()
-
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.gin_channels = gin_channels
-
- self.drop = nn.Dropout(p_dropout)
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.norm_1 = modules.LayerNorm(filter_channels)
- self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.norm_2 = modules.LayerNorm(filter_channels)
- self.dur_proj = nn.Conv1d(1, filter_channels, 1)
-
- self.pre_out_conv_1 = nn.Conv1d(2*filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.pre_out_norm_1 = modules.LayerNorm(filter_channels)
- self.pre_out_conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.pre_out_norm_2 = modules.LayerNorm(filter_channels)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, in_channels, 1)
-
- self.output_layer = nn.Sequential(
- nn.Linear(filter_channels, 1),
- nn.Sigmoid()
- )
-
- def forward_probability(self, x, x_mask, dur, g=None):
- dur = self.dur_proj(dur)
- x = torch.cat([x, dur], dim=1)
- x = self.pre_out_conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.pre_out_norm_1(x)
- x = self.drop(x)
- x = self.pre_out_conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.pre_out_norm_2(x)
- x = self.drop(x)
- x = x * x_mask
- x = x.transpose(1, 2)
- output_prob = self.output_layer(x)
- return output_prob
-
- def forward(self, x, x_mask, dur_r, dur_hat, g=None):
- x = torch.detach(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.norm_1(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.norm_2(x)
- x = self.drop(x)
-
- output_probs = []
- for dur in [dur_r, dur_hat]:
- output_prob = self.forward_probability(x, x_mask, dur, g)
- output_probs.append(output_prob)
-
- return output_probs
-
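For orientation, a shape sketch of how this VITS2-style duration discriminator is typically driven; it assumes the repo's `modules`/`commons` packages are importable, and the import path and tensor sizes are illustrative only:

```python
import torch
# from models import DurationDiscriminator  # assumed import path within this repo

def duration_disc_demo(DurationDiscriminator):
    disc = DurationDiscriminator(in_channels=192, filter_channels=192,
                                 kernel_size=3, p_dropout=0.1)
    B, T = 2, 50
    x = torch.randn(B, 192, T)      # detached text-encoder hidden states
    x_mask = torch.ones(B, 1, T)    # all frames valid
    dur_real = torch.rand(B, 1, T)  # ground-truth durations
    dur_pred = torch.rand(B, 1, T)  # durations from the duration predictor
    prob_real, prob_pred = disc(x, x_mask, dur_real, dur_pred)
    print(prob_real.shape, prob_pred.shape)  # both [B, T, 1], per-frame real/fake probabilities
```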
-class TransformerCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- n_flows=4,
- gin_channels=0,
- share_parameter=False
- ):
-
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
-
- self.wn = attentions.FFT(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = self.gin_channels) if share_parameter else None
-
- for i in range(n_flows):
- self.flows.append(
- modules.TransformerCouplingLayer(channels, hidden_channels, kernel_size, n_layers, n_heads, p_dropout, filter_channels, mean_only=True, wn_sharing_parameter=self.wn, gin_channels = self.gin_channels))
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-class StochasticDurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
- super().__init__()
- filter_channels = in_channels  # this override should be removed in a future version.
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.log_flow = modules.Log()
- self.flows = nn.ModuleList()
- self.flows.append(modules.ElementwiseAffine(2))
- for i in range(n_flows):
- self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.flows.append(modules.Flip())
-
- self.post_pre = nn.Conv1d(1, filter_channels, 1)
- self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- self.post_flows = nn.ModuleList()
- self.post_flows.append(modules.ElementwiseAffine(2))
- for i in range(4):
- self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.post_flows.append(modules.Flip())
-
- self.pre = nn.Conv1d(in_channels, filter_channels, 1)
- self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
-
- def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
- x = torch.detach(x)
- x = self.pre(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.convs(x, x_mask)
- x = self.proj(x) * x_mask
-
- if not reverse:
- flows = self.flows
- assert w is not None
-
- logdet_tot_q = 0
- h_w = self.post_pre(w)
- h_w = self.post_convs(h_w, x_mask)
- h_w = self.post_proj(h_w) * x_mask
- e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
- z_q = e_q
- for flow in self.post_flows:
- z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
- logdet_tot_q += logdet_q
- z_u, z1 = torch.split(z_q, [1, 1], 1)
- u = torch.sigmoid(z_u) * x_mask
- z0 = (w - u) * x_mask
- logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2])
- logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q
-
- logdet_tot = 0
- z0, logdet = self.log_flow(z0, x_mask)
- logdet_tot += logdet
- z = torch.cat([z0, z1], 1)
- for flow in flows:
- z, logdet = flow(z, x_mask, g=x, reverse=reverse)
- logdet_tot = logdet_tot + logdet
- nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot
- return nll + logq # [b]
- else:
- flows = list(reversed(self.flows))
- flows = flows[:-2] + [flows[-1]] # remove a useless vflow
- z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
- for flow in flows:
- z = flow(z, x_mask, g=x, reverse=reverse)
- z0, z1 = torch.split(z, [1, 1], 1)
- logw = z0
- return logw
-
-
-class DurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
- super().__init__()
-
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.gin_channels = gin_channels
-
- self.drop = nn.Dropout(p_dropout)
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2)
- self.norm_1 = modules.LayerNorm(filter_channels)
- self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2)
- self.norm_2 = modules.LayerNorm(filter_channels)
- self.proj = nn.Conv1d(filter_channels, 1, 1)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, in_channels, 1)
-
- def forward(self, x, x_mask, g=None):
- x = torch.detach(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.norm_1(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.norm_2(x)
- x = self.drop(x)
- x = self.proj(x * x_mask)
- return x * x_mask
-
-
-class TextEncoder(nn.Module):
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- gin_channels=0):
- super().__init__()
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.gin_channels = gin_channels
- self.emb = nn.Embedding(len(symbols), hidden_channels)
- nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5)
- self.tone_emb = nn.Embedding(num_tones, hidden_channels)
- nn.init.normal_(self.tone_emb.weight, 0.0, hidden_channels ** -0.5)
- self.language_emb = nn.Embedding(num_languages, hidden_channels)
- nn.init.normal_(self.language_emb.weight, 0.0, hidden_channels ** -0.5)
- self.bert_proj = nn.Conv1d(1024, hidden_channels, 1)
-
- self.encoder = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- gin_channels=self.gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, tone, language, bert, g=None):
- x = (self.emb(x)+ self.tone_emb(tone)+ self.language_emb(language)+self.bert_proj(bert).transpose(1,2)) * math.sqrt(self.hidden_channels) # [b, t, h]
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- x = self.encoder(x * x_mask, x_mask, g=g)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return x, m, logs, x_mask
-
-
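The `x_mask` built here (and again in `PosteriorEncoder` below) comes from `commons.sequence_mask`, which is just a boolean length mask broadcast to `[B, 1, T]`. A standalone sketch of the equivalent construction, with no repo imports, to make the masking explicit:

```python
import torch

def sequence_mask(lengths: torch.Tensor, max_length: int = None) -> torch.Tensor:
    # True for valid time steps, False for padding; shape [B, T]
    if max_length is None:
        max_length = int(lengths.max())
    positions = torch.arange(max_length, device=lengths.device)
    return positions.unsqueeze(0) < lengths.unsqueeze(1)

x_lengths = torch.tensor([5, 3])
x_mask = sequence_mask(x_lengths, 6).unsqueeze(1).float()  # [2, 1, 6], as used above
print(x_mask)
# tensor([[[1., 1., 1., 1., 1., 0.]],
#         [[1., 1., 1., 0., 0., 0.]]])
```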
-class ResidualCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers,
- gin_channels=gin_channels, mean_only=True))
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
-
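The sampling line above is the standard reparameterization trick: a differentiable draw from N(m, exp(logs)^2), masked to the valid frames. A tiny standalone illustration:

```python
import torch

B, C, T = 2, 192, 50
m = torch.zeros(B, C, T)      # posterior mean
logs = torch.zeros(B, C, T)   # posterior log-std (std = exp(logs) = 1 here)
x_mask = torch.ones(B, 1, T)

z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
print(z.shape)  # [2, 192, 50]; gradients flow into m and logs, not into the noise
```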
-class Generator(torch.nn.Module):
- def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates,
- upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
- resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(weight_norm(
- ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)),
- k, u, padding=(k - u) // 2)))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
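The total upsampling factor of this HiFi-GAN-style generator is the product of `upsample_rates`, and it has to match the STFT hop length so that one latent frame expands into exactly one hop of waveform samples. A quick sanity check with illustrative values (the real rates and hop length live in the model's JSON config):

```python
import math

# Illustrative values only; read the actual ones from the training config.
upsample_rates = [8, 8, 2, 2, 2]
hop_length = 512

total_upsampling = math.prod(upsample_rates)
assert total_upsampling == hop_length, "latent frame rate must match the spectrogram hop length"
print(total_upsampling)  # 512 waveform samples generated per latent frame
```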
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
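The "1d to 2d" block above is the heart of the multi-period discriminator: the waveform is reflect-padded to a multiple of `period` and folded so that each column holds samples spaced `period` apart. The same reshaping in isolation:

```python
import torch
import torch.nn.functional as F

def fold_by_period(x: torch.Tensor, period: int) -> torch.Tensor:
    # x: [B, 1, T] waveform -> [B, 1, ceil(T / period), period]
    b, c, t = x.shape
    if t % period != 0:
        n_pad = period - (t % period)
        x = F.pad(x, (0, n_pad), "reflect")
        t = t + n_pad
    return x.view(b, c, t // period, period)

x = torch.randn(1, 1, 22050)
print(fold_by_period(x, 5).shape)  # torch.Size([1, 1, 4410, 5])
```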
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-class ReferenceEncoder(nn.Module):
- '''
- inputs --- [N, Ty/r, n_mels*r] mels
- outputs --- [N, ref_enc_gru_size]
- '''
-
- def __init__(self, spec_channels, gin_channels=0):
-
- super().__init__()
- self.spec_channels = spec_channels
- ref_enc_filters = [32, 32, 64, 64, 128, 128]
- K = len(ref_enc_filters)
- filters = [1] + ref_enc_filters
- convs = [weight_norm(nn.Conv2d(in_channels=filters[i],
- out_channels=filters[i + 1],
- kernel_size=(3, 3),
- stride=(2, 2),
- padding=(1, 1))) for i in range(K)]
- self.convs = nn.ModuleList(convs)
- # self.wns = nn.ModuleList([weight_norm(num_features=ref_enc_filters[i]) for i in range(K)])
-
- out_channels = self.calculate_channels(spec_channels, 3, 2, 1, K)
- self.gru = nn.GRU(input_size=ref_enc_filters[-1] * out_channels,
- hidden_size=256 // 2,
- batch_first=True)
- self.proj = nn.Linear(128, gin_channels)
-
- def forward(self, inputs, mask=None):
- N = inputs.size(0)
- out = inputs.view(N, 1, -1, self.spec_channels) # [N, 1, Ty, n_freqs]
- for conv in self.convs:
- out = conv(out)
- # out = wn(out)
- out = F.relu(out) # [N, 128, Ty//2^K, n_mels//2^K]
-
- out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K]
- T = out.size(1)
- N = out.size(0)
- out = out.contiguous().view(N, T, -1) # [N, Ty//2^K, 128*n_mels//2^K]
-
- self.gru.flatten_parameters()
- memory, out = self.gru(out) # out --- [1, N, 128]
-
- return self.proj(out.squeeze(0))
-
- def calculate_channels(self, L, kernel_size, stride, pad, n_convs):
- for i in range(n_convs):
- L = (L - kernel_size + 2 * pad) // stride + 1
- return L
-
-
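`calculate_channels` iterates the usual convolution output-size formula to find how many frequency bins survive the K strided 3x3 convolutions, which in turn fixes the GRU input size. A worked example (the 128-bin input is an assumption for illustration):

```python
def calculate_channels(L, kernel_size, stride, pad, n_convs):
    for _ in range(n_convs):
        L = (L - kernel_size + 2 * pad) // stride + 1
    return L

# e.g. a 128-bin spectrogram through K=6 convs with kernel 3, stride 2, pad 1:
print(calculate_channels(128, 3, 2, 1, 6))  # 2 -> GRU input size = 128 * 2 = 256
```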
-class SynthesizerTrn(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- n_vocab,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- n_speakers=256,
- gin_channels=256,
- use_sdp=True,
- n_flow_layer = 4,
- n_layers_trans_flow = 3,
- flow_share_parameter = False,
- use_transformer_flow = True,
- **kwargs):
-
- super().__init__()
- self.n_vocab = n_vocab
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.n_speakers = n_speakers
- self.gin_channels = gin_channels
- self.n_layers_trans_flow = n_layers_trans_flow
- self.use_spk_conditioned_encoder = kwargs.get("use_spk_conditioned_encoder", True)
- self.use_sdp = use_sdp
- self.use_noise_scaled_mas = kwargs.get("use_noise_scaled_mas", False)
- self.mas_noise_scale_initial = kwargs.get("mas_noise_scale_initial", 0.01)
- self.noise_scale_delta = kwargs.get("noise_scale_delta", 2e-6)
- self.current_mas_noise_scale = self.mas_noise_scale_initial
- if self.use_spk_conditioned_encoder and gin_channels > 0:
- self.enc_gin_channels = gin_channels
- self.enc_p = TextEncoder(n_vocab,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- gin_channels=self.enc_gin_channels)
- self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates,
- upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
- self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16,
- gin_channels=gin_channels)
- if use_transformer_flow:
- self.flow = TransformerCouplingBlock(inter_channels, hidden_channels, filter_channels, n_heads, n_layers_trans_flow, 5, p_dropout, n_flow_layer, gin_channels=gin_channels,share_parameter= flow_share_parameter)
- else:
- self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, n_flow_layer, gin_channels=gin_channels)
- self.sdp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
- self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels)
-
- if n_speakers >= 1:
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
- else:
- self.ref_enc = ReferenceEncoder(spec_channels, gin_channels)
-
- def forward(self, x, x_lengths, y, y_lengths, sid, tone, language, bert):
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1)
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
-
- with torch.no_grad():
- # negative cross-entropy
- s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t]
- neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s]
- neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2),
- s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s]
- neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4
- if self.use_noise_scaled_mas:
- epsilon = torch.std(neg_cent) * torch.randn_like(neg_cent) * self.current_mas_noise_scale
- neg_cent = neg_cent + epsilon
-
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach()
-
- w = attn.sum(2)
-
- l_length_sdp = self.sdp(x, x_mask, w, g=g)
- l_length_sdp = l_length_sdp / torch.sum(x_mask)
-
- logw_ = torch.log(w + 1e-6) * x_mask
- logw = self.dp(x, x_mask, g=g)
- l_length_dp = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask) # for averaging
-
- l_length = l_length_dp + l_length_sdp
-
- # expand prior
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)
-
- z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size)
- o = self.dec(z_slice, g=g)
- return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q), (x, logw, logw_)
-
- def infer(self, x, x_lengths, sid, tone, language, bert, noise_scale=.667, length_scale=1, noise_scale_w=0.8, max_len=None, sdp_ratio=0,y=None):
- #x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert)
- # g = self.gst(y)
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1)
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g)
- logw = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) * (sdp_ratio) + self.dp(x, x_mask, g=g) * (1 - sdp_ratio)
- w = torch.exp(logw) * x_mask * length_scale
- w_ceil = torch.ceil(w)
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = commons.generate_path(w_ceil, attn_mask)
-
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)  # [b, t', t], [b, t, d] -> [b, d, t']
-
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
- z = self.flow(z_p, y_mask, g=g, reverse=True)
- o = self.dec((z * y_mask)[:, :, :max_len], g=g)
- return o, attn, y_mask, (z, z_p, m_p, logs_p)
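The `no_grad` block in `SynthesizerTrn.forward` above builds the `[B, T_spec, T_text]` score matrix for monotonic alignment search by expanding the Gaussian log-likelihood of `z_p` under the text-side prior into four terms. A standalone sketch of the same decomposition (pure PyTorch; `monotonic_align` itself is not reimplemented here):

```python
import math
import torch

B, D, T_text, T_spec = 2, 192, 40, 120
z_p = torch.randn(B, D, T_spec)     # flow-mapped posterior latents
m_p = torch.randn(B, D, T_text)     # text-side prior means
logs_p = torch.zeros(B, D, T_text)  # text-side prior log-stds

s_p_sq_r = torch.exp(-2 * logs_p)                                                  # 1 / sigma^2
neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True)    # [B, 1, T_text]
neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r)              # [B, T_spec, T_text]
neg_cent3 = torch.matmul(z_p.transpose(1, 2), m_p * s_p_sq_r)                      # [B, T_spec, T_text]
neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True)             # [B, 1, T_text]
neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4                           # [B, T_spec, T_text]

# neg_cent[b, i, j] is the channel-summed log N(z_p[b, :, i]; m_p[b, :, j], exp(logs_p[b, :, j])^2),
# the per-pair score that monotonic_align.maximum_path maximizes along a monotonic path.
print(neg_cent.shape)  # torch.Size([2, 120, 40])
```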
diff --git a/spaces/YUANAI/DiffspeechResearch/data_gen/tts/runs/preprocess.py b/spaces/YUANAI/DiffspeechResearch/data_gen/tts/runs/preprocess.py
deleted file mode 100644
index c6ca87c3d37c0bdedfff26a9a0b8450e430b6d59..0000000000000000000000000000000000000000
--- a/spaces/YUANAI/DiffspeechResearch/data_gen/tts/runs/preprocess.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import utils.commons.single_thread_env # NOQA
-from utils.commons.hparams import hparams, set_hparams
-import importlib
-
-
-def preprocess():
- assert hparams['preprocess_cls'] != ''
-
- pkg = ".".join(hparams["preprocess_cls"].split(".")[:-1])
- cls_name = hparams["preprocess_cls"].split(".")[-1]
- process_cls = getattr(importlib.import_module(pkg), cls_name)
- process_cls().process()
-
-
-if __name__ == '__main__':
- set_hparams()
- preprocess()
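`preprocess()` treats `hparams['preprocess_cls']` as a dotted `package.module.ClassName` path and instantiates it dynamically. The same pattern in isolation, using a stdlib class so the snippet runs anywhere (a real config would point at a preprocessor class instead):

```python
import importlib

def load_class(dotted_path: str):
    pkg, cls_name = dotted_path.rsplit(".", 1)
    return getattr(importlib.import_module(pkg), cls_name)

# In the real config this would be the dotted path of a Preprocessor subclass.
cls = load_class("collections.OrderedDict")
print(cls())  # OrderedDict()
```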
diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/evaluation/fast_eval_api.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/evaluation/fast_eval_api.py
deleted file mode 100644
index 2eb202bd5efa3ec3d366027b1debffc269ae8b17..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/evaluation/fast_eval_api.py
+++ /dev/null
@@ -1,121 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import copy
-import logging
-import numpy as np
-import time
-from pycocotools.cocoeval import COCOeval
-
-from detectron2 import _C
-
-logger = logging.getLogger(__name__)
-
-
-class COCOeval_opt(COCOeval):
- """
- This is a slightly modified version of the original COCO API, where the functions evaluateImg()
- and accumulate() are implemented in C++ to speed up evaluation
- """
-
- def evaluate(self):
- """
- Run per-image evaluation on the given images and store the results in self.evalImgs_cpp, a
- data structure that is not readable from Python but is used by the C++ implementation of
- accumulate(). Unlike the original COCO PythonAPI, we do not populate the data structure
- self.evalImgs, because it is a computational bottleneck.
- :return: None
- """
- tic = time.time()
-
- p = self.params
- # add backward compatibility if useSegm is specified in params
- if p.useSegm is not None:
- p.iouType = "segm" if p.useSegm == 1 else "bbox"
- logger.info("Evaluate annotation type *{}*".format(p.iouType))
- p.imgIds = list(np.unique(p.imgIds))
- if p.useCats:
- p.catIds = list(np.unique(p.catIds))
- p.maxDets = sorted(p.maxDets)
- self.params = p
-
- self._prepare() # bottleneck
-
- # loop through images, area range, max detection number
- catIds = p.catIds if p.useCats else [-1]
-
- if p.iouType == "segm" or p.iouType == "bbox":
- computeIoU = self.computeIoU
- elif p.iouType == "keypoints":
- computeIoU = self.computeOks
- self.ious = {
- (imgId, catId): computeIoU(imgId, catId) for imgId in p.imgIds for catId in catIds
- } # bottleneck
-
- maxDet = p.maxDets[-1]
-
- # <<<< Beginning of code differences with original COCO API
- def convert_instances_to_cpp(instances, is_det=False):
- # Convert annotations for a list of instances in an image to a format that's fast
- # to access in C++
- instances_cpp = []
- for instance in instances:
- instance_cpp = _C.InstanceAnnotation(
- int(instance["id"]),
- instance["score"] if is_det else instance.get("score", 0.0),
- instance["area"],
- bool(instance.get("iscrowd", 0)),
- bool(instance.get("ignore", 0)),
- )
- instances_cpp.append(instance_cpp)
- return instances_cpp
-
- # Convert GT annotations, detections, and IOUs to a format that's fast to access in C++
- ground_truth_instances = [
- [convert_instances_to_cpp(self._gts[imgId, catId]) for catId in p.catIds]
- for imgId in p.imgIds
- ]
- detected_instances = [
- [convert_instances_to_cpp(self._dts[imgId, catId], is_det=True) for catId in p.catIds]
- for imgId in p.imgIds
- ]
- ious = [[self.ious[imgId, catId] for catId in catIds] for imgId in p.imgIds]
-
- if not p.useCats:
- # For each image, flatten per-category lists into a single list
- ground_truth_instances = [[[o for c in i for o in c]] for i in ground_truth_instances]
- detected_instances = [[[o for c in i for o in c]] for i in detected_instances]
-
- # Call C++ implementation of self.evaluateImgs()
- self._evalImgs_cpp = _C.COCOevalEvaluateImages(
- p.areaRng, maxDet, p.iouThrs, ious, ground_truth_instances, detected_instances
- )
- self._evalImgs = None
-
- self._paramsEval = copy.deepcopy(self.params)
- toc = time.time()
- logger.info("COCOeval_opt.evaluate() finished in {:0.2f} seconds.".format(toc - tic))
- # >>>> End of code differences with original COCO API
-
- def accumulate(self):
- """
- Accumulate per image evaluation results and store the result in self.eval. Does not
- support changing parameter settings from those used by self.evaluate()
- """
- logger.info("Accumulating evaluation results...")
- tic = time.time()
- assert hasattr(
- self, "_evalImgs_cpp"
- ), "evaluate() must be called before accmulate() is called."
-
- self.eval = _C.COCOevalAccumulate(self._paramsEval, self._evalImgs_cpp)
-
- # recall is num_iou_thresholds X num_categories X num_area_ranges X num_max_detections
- self.eval["recall"] = np.array(self.eval["recall"]).reshape(
- self.eval["counts"][:1] + self.eval["counts"][2:]
- )
-
- # precision and scores are num_iou_thresholds X num_recall_thresholds X num_categories X
- # num_area_ranges X num_max_detections
- self.eval["precision"] = np.array(self.eval["precision"]).reshape(self.eval["counts"])
- self.eval["scores"] = np.array(self.eval["scores"]).reshape(self.eval["counts"])
- toc = time.time()
- logger.info("COCOeval_opt.accumulate() finished in {:0.2f} seconds.".format(toc - tic))
diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated.h b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated.h
deleted file mode 100644
index 3bf383b8ed9b358b5313d433a9682c294dfb77e4..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated.h
+++ /dev/null
@@ -1,35 +0,0 @@
-// Copyright (c) Facebook, Inc. and its affiliates.
-#pragma once
-#include <torch/types.h>
-
-namespace detectron2 {
-
-at::Tensor box_iou_rotated_cpu(
- const at::Tensor& boxes1,
- const at::Tensor& boxes2);
-
-#if defined(WITH_CUDA) || defined(WITH_HIP)
-at::Tensor box_iou_rotated_cuda(
- const at::Tensor& boxes1,
- const at::Tensor& boxes2);
-#endif
-
-// Interface for Python
-// inline is needed to prevent multiple function definitions when this header is
-// included by different cpps
-inline at::Tensor box_iou_rotated(
- const at::Tensor& boxes1,
- const at::Tensor& boxes2) {
- assert(boxes1.device().is_cuda() == boxes2.device().is_cuda());
- if (boxes1.device().is_cuda()) {
-#if defined(WITH_CUDA) || defined(WITH_HIP)
- return box_iou_rotated_cuda(boxes1.contiguous(), boxes2.contiguous());
-#else
- AT_ERROR("Detectron2 is not compiled with GPU support!");
-#endif
- }
-
- return box_iou_rotated_cpu(boxes1.contiguous(), boxes2.contiguous());
-}
-
-} // namespace detectron2
diff --git a/spaces/Yuliang/ECON/lib/pymafx/models/smpl.py b/spaces/Yuliang/ECON/lib/pymafx/models/smpl.py
deleted file mode 100644
index 6dcb6127886e9671fde6a4036d0889ab39ff2b66..0000000000000000000000000000000000000000
--- a/spaces/Yuliang/ECON/lib/pymafx/models/smpl.py
+++ /dev/null
@@ -1,927 +0,0 @@
-# This script is extended based on https://github.com/nkolot/SPIN/blob/master/models/smpl.py
-
-import json
-import os
-import pickle
-from dataclasses import dataclass
-from typing import Optional
-
-import numpy as np
-import torch
-import torch.nn as nn
-
-from lib.pymafx.core import constants, path_config
-from lib.smplx import SMPL as _SMPL
-from lib.smplx import FLAMELayer, MANOLayer, SMPLXLayer
-from lib.smplx.body_models import SMPLXOutput
-from lib.smplx.lbs import (
- batch_rodrigues,
- blend_shapes,
- transform_mat,
- vertices2joints,
-)
-
-SMPL_MEAN_PARAMS = path_config.SMPL_MEAN_PARAMS
-SMPL_MODEL_DIR = path_config.SMPL_MODEL_DIR
-
-
-@dataclass
-class ModelOutput(SMPLXOutput):
- smpl_joints: Optional[torch.Tensor] = None
- joints_J19: Optional[torch.Tensor] = None
- smplx_vertices: Optional[torch.Tensor] = None
- flame_vertices: Optional[torch.Tensor] = None
- lhand_vertices: Optional[torch.Tensor] = None
- rhand_vertices: Optional[torch.Tensor] = None
- lhand_joints: Optional[torch.Tensor] = None
- rhand_joints: Optional[torch.Tensor] = None
- face_joints: Optional[torch.Tensor] = None
- lfoot_joints: Optional[torch.Tensor] = None
- rfoot_joints: Optional[torch.Tensor] = None
-
-
-class SMPL(_SMPL):
- """ Extension of the official SMPL implementation to support more joints """
- def __init__(
- self,
- create_betas=False,
- create_global_orient=False,
- create_body_pose=False,
- create_transl=False,
- *args,
- **kwargs
- ):
- super().__init__(
- create_betas=create_betas,
- create_global_orient=create_global_orient,
- create_body_pose=create_body_pose,
- create_transl=create_transl,
- *args,
- **kwargs
- )
- joints = [constants.JOINT_MAP[i] for i in constants.JOINT_NAMES]
- J_regressor_extra = np.load(path_config.JOINT_REGRESSOR_TRAIN_EXTRA)
- self.register_buffer(
- 'J_regressor_extra', torch.tensor(J_regressor_extra, dtype=torch.float32)
- )
- self.joint_map = torch.tensor(joints, dtype=torch.long)
- # self.ModelOutput = namedtuple('ModelOutput_', ModelOutput._fields + ('smpl_joints', 'joints_J19',))
- # self.ModelOutput.__new__.__defaults__ = (None,) * len(self.ModelOutput._fields)
-
- tpose_joints = vertices2joints(self.J_regressor, self.v_template.unsqueeze(0))
- self.register_buffer('tpose_joints', tpose_joints)
-
- def forward(self, *args, **kwargs):
- kwargs['get_skin'] = True
- smpl_output = super().forward(*args, **kwargs)
- extra_joints = vertices2joints(self.J_regressor_extra, smpl_output.vertices)
- # smpl_output.joints: [B, 45, 3] extra_joints: [B, 9, 3]
- vertices = smpl_output.vertices
- joints = torch.cat([smpl_output.joints, extra_joints], dim=1)
- smpl_joints = smpl_output.joints[:, :24]
- joints = joints[:, self.joint_map, :] # [B, 49, 3]
- joints_J24 = joints[:, -24:, :]
- joints_J19 = joints_J24[:, constants.J24_TO_J19, :]
- output = ModelOutput(
- vertices=vertices,
- global_orient=smpl_output.global_orient,
- body_pose=smpl_output.body_pose,
- joints=joints,
- joints_J19=joints_J19,
- smpl_joints=smpl_joints,
- betas=smpl_output.betas,
- full_pose=smpl_output.full_pose
- )
- return output
-
- def get_global_rotation(
- self,
- global_orient: Optional[torch.Tensor] = None,
- body_pose: Optional[torch.Tensor] = None,
- **kwargs
- ):
- '''
- Compute global joint rotations and posed joint locations for the SMPL model
-
- Parameters
- ----------
- global_orient: torch.tensor, optional, shape Bx3x3
- If given, ignore the member variable and use it as the global
- rotation of the body. Useful if someone wishes to predict this
- with an external model. It is expected to be in rotation matrix
- format. (default=None)
- body_pose: torch.tensor, optional, shape BxJx3x3
- If given, ignore the member variable `body_pose` and use it
- instead. For example, it can be used if the pose of the body
- joints is predicted by some external model.
- It should be a tensor that contains joint rotations in
- rotation matrix format. (default=None)
- Returns
- -------
- output: Global rotation matrices and posed joint locations
- '''
- device, dtype = self.shapedirs.device, self.shapedirs.dtype
-
- model_vars = [global_orient, body_pose]
- batch_size = 1
- for var in model_vars:
- if var is None:
- continue
- batch_size = max(batch_size, len(var))
-
- if global_orient is None:
- global_orient = torch.eye(3, device=device,
- dtype=dtype).view(1, 1, 3, 3).expand(batch_size, -1, -1,
- -1).contiguous()
- if body_pose is None:
- body_pose = torch.eye(3, device=device, dtype=dtype).view(1, 1, 3, 3).expand(
- batch_size, self.NUM_BODY_JOINTS, -1, -1
- ).contiguous()
-
- # Concatenate all pose vectors
- full_pose = torch.cat([
- global_orient.reshape(-1, 1, 3, 3),
- body_pose.reshape(-1, self.NUM_BODY_JOINTS, 3, 3)
- ],
- dim=1)
-
- rot_mats = full_pose.view(batch_size, -1, 3, 3)
-
- # Get the joints
- # NxJx3 array
- # joints = vertices2joints(self.J_regressor, self.v_template.unsqueeze(0).expand(batch_size, -1, -1))
- # joints = torch.unsqueeze(joints, dim=-1)
-
- joints = self.tpose_joints.expand(batch_size, -1, -1).unsqueeze(-1)
-
- rel_joints = joints.clone()
- rel_joints[:, 1:] -= joints[:, self.parents[1:]]
-
- transforms_mat = transform_mat(rot_mats.reshape(-1, 3, 3),
- rel_joints.reshape(-1, 3,
- 1)).reshape(-1, joints.shape[1], 4, 4)
-
- transform_chain = [transforms_mat[:, 0]]
- for i in range(1, self.parents.shape[0]):
- # Subtract the joint location at the rest pose
- # No need for rotation, since it's identity when at rest
- curr_res = torch.matmul(transform_chain[self.parents[i]], transforms_mat[:, i])
- transform_chain.append(curr_res)
-
- transforms = torch.stack(transform_chain, dim=1)
-
- global_rotmat = transforms[:, :, :3, :3]
-
- # The last column of the transformations contains the posed joints
- posed_joints = transforms[:, :, :3, 3]
-
- return global_rotmat, posed_joints
-
-
-class SMPLX(SMPLXLayer):
- """ Extension of the official SMPLX implementation to support more functions """
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
-
- def get_global_rotation(
- self,
- global_orient: Optional[torch.Tensor] = None,
- body_pose: Optional[torch.Tensor] = None,
- left_hand_pose: Optional[torch.Tensor] = None,
- right_hand_pose: Optional[torch.Tensor] = None,
- jaw_pose: Optional[torch.Tensor] = None,
- leye_pose: Optional[torch.Tensor] = None,
- reye_pose: Optional[torch.Tensor] = None,
- **kwargs
- ):
- '''
- Compute global joint rotations and posed joint locations for the SMPL-X model
-
- Parameters
- ----------
- global_orient: torch.tensor, optional, shape Bx3x3
- If given, ignore the member variable and use it as the global
- rotation of the body. Useful if someone wishes to predict this
- with an external model. It is expected to be in rotation matrix
- format. (default=None)
- betas: torch.tensor, optional, shape BxN_b
- If given, ignore the member variable `betas` and use it
- instead. For example, it can be used if shape parameters
- `betas` are predicted from some external model.
- (default=None)
- expression: torch.tensor, optional, shape BxN_e
- Expression coefficients.
- For example, it can be used if expression parameters
- `expression` are predicted from some external model.
- body_pose: torch.tensor, optional, shape BxJx3x3
- If given, ignore the member variable `body_pose` and use it
- instead. For example, it can be used if the pose of the body
- joints is predicted by some external model.
- It should be a tensor that contains joint rotations in
- rotation matrix format. (default=None)
- left_hand_pose: torch.tensor, optional, shape Bx15x3x3
- If given, contains the pose of the left hand.
- It should be a tensor that contains joint rotations in
- rotation matrix format. (default=None)
- right_hand_pose: torch.tensor, optional, shape Bx15x3x3
- If given, contains the pose of the right hand.
- It should be a tensor that contains joint rotations in
- rotation matrix format. (default=None)
- jaw_pose: torch.tensor, optional, shape Bx3x3
- Jaw pose. It should contain joint rotations in
- rotation matrix format.
- transl: torch.tensor, optional, shape Bx3
- Translation vector of the body.
- For example, it can be used if the translation
- `transl` is predicted from some external model.
- (default=None)
- return_verts: bool, optional
- Return the vertices. (default=True)
- return_full_pose: bool, optional
- Returns the full pose vector (default=False)
- Returns
- -------
- output: tuple
- The global rotation matrices and the posed joint locations
- '''
- device, dtype = self.shapedirs.device, self.shapedirs.dtype
-
- model_vars = [global_orient, body_pose, left_hand_pose, right_hand_pose, jaw_pose]
- batch_size = 1
- for var in model_vars:
- if var is None:
- continue
- batch_size = max(batch_size, len(var))
-
- if global_orient is None:
- global_orient = torch.eye(3, device=device,
- dtype=dtype).view(1, 1, 3, 3).expand(batch_size, -1, -1,
- -1).contiguous()
- if body_pose is None:
- body_pose = torch.eye(3, device=device, dtype=dtype).view(1, 1, 3, 3).expand(
- batch_size, self.NUM_BODY_JOINTS, -1, -1
- ).contiguous()
- if left_hand_pose is None:
- left_hand_pose = torch.eye(3, device=device,
- dtype=dtype).view(1, 1, 3, 3).expand(batch_size, 15, -1,
- -1).contiguous()
- if right_hand_pose is None:
- right_hand_pose = torch.eye(3, device=device,
- dtype=dtype).view(1, 1, 3,
- 3).expand(batch_size, 15, -1,
- -1).contiguous()
- if jaw_pose is None:
- jaw_pose = torch.eye(3, device=device,
- dtype=dtype).view(1, 1, 3, 3).expand(batch_size, -1, -1,
- -1).contiguous()
- if leye_pose is None:
- leye_pose = torch.eye(3, device=device,
- dtype=dtype).view(1, 1, 3, 3).expand(batch_size, -1, -1,
- -1).contiguous()
- if reye_pose is None:
- reye_pose = torch.eye(3, device=device,
- dtype=dtype).view(1, 1, 3, 3).expand(batch_size, -1, -1,
- -1).contiguous()
-
- # Concatenate all pose vectors
- full_pose = torch.cat([
- global_orient.reshape(-1, 1, 3, 3),
- body_pose.reshape(-1, self.NUM_BODY_JOINTS, 3, 3),
- jaw_pose.reshape(-1, 1, 3, 3),
- leye_pose.reshape(-1, 1, 3, 3),
- reye_pose.reshape(-1, 1, 3, 3),
- left_hand_pose.reshape(-1, self.NUM_HAND_JOINTS, 3, 3),
- right_hand_pose.reshape(-1, self.NUM_HAND_JOINTS, 3, 3)
- ],
- dim=1)
-
- rot_mats = full_pose.view(batch_size, -1, 3, 3)
-
- # Get the joints
- # NxJx3 array
- joints = vertices2joints(
- self.J_regressor,
- self.v_template.unsqueeze(0).expand(batch_size, -1, -1)
- )
-
- joints = torch.unsqueeze(joints, dim=-1)
-
- rel_joints = joints.clone()
- rel_joints[:, 1:] -= joints[:, self.parents[1:]]
-
- transforms_mat = transform_mat(rot_mats.reshape(-1, 3, 3),
- rel_joints.reshape(-1, 3,
- 1)).reshape(-1, joints.shape[1], 4, 4)
-
- transform_chain = [transforms_mat[:, 0]]
- for i in range(1, self.parents.shape[0]):
- # Subtract the joint location at the rest pose
- # No need for rotation, since it's identity when at rest
- curr_res = torch.matmul(transform_chain[self.parents[i]], transforms_mat[:, i])
- transform_chain.append(curr_res)
-
- transforms = torch.stack(transform_chain, dim=1)
-
- global_rotmat = transforms[:, :, :3, :3]
-
- # The last column of the transformations contains the posed joints
- posed_joints = transforms[:, :, :3, 3]
-
- return global_rotmat, posed_joints
-
-
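Both `get_global_rotation` methods accumulate per-joint local rotations along `self.parents` into world-space transforms. Below is a compact standalone sketch of that forward-kinematics chain on a toy three-joint skeleton; the 4x4 assembly mirrors `lib.smplx.lbs.transform_mat` but is reimplemented here so the snippet has no SMPL dependencies:

```python
import torch
import torch.nn.functional as F

def transform_mat(R: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    # R: [N, 3, 3], t: [N, 3, 1] -> homogeneous transforms [N, 4, 4] = [[R, t], [0, 1]]
    return torch.cat([F.pad(R, [0, 0, 0, 1]),                 # zero bottom row -> [N, 4, 3]
                      F.pad(t, [0, 0, 0, 1], value=1)], dim=2)  # 1 below translation -> [N, 4, 1]

parents = torch.tensor([-1, 0, 1])                      # toy chain: root -> joint1 -> joint2
rest_joints = torch.tensor([[0., 0., 0.],
                            [0., 1., 0.],
                            [0., 2., 0.]]).unsqueeze(-1)  # [J, 3, 1]
rot_mats = torch.eye(3).repeat(3, 1, 1)                  # identity local rotations

rel_joints = rest_joints.clone()
rel_joints[1:] -= rest_joints[parents[1:]]               # offsets relative to each parent

transforms_mat = transform_mat(rot_mats, rel_joints)     # [J, 4, 4]
chain = [transforms_mat[0]]
for i in range(1, parents.shape[0]):
    chain.append(chain[int(parents[i])] @ transforms_mat[i])   # compose parent-to-child
transforms = torch.stack(chain, dim=0)

global_rotmat = transforms[:, :3, :3]    # world-space joint rotations
posed_joints = transforms[:, :3, 3]      # world-space joint positions
print(posed_joints)                      # equals the rest joints here, since the pose is identity
```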
-class SMPLX_ALL(nn.Module):
- """ Extension of the official SMPLX implementation to support more joints """
- def __init__(self, batch_size=1, use_face_contour=True, all_gender=False, **kwargs):
- super().__init__()
- numBetas = 10
- self.use_face_contour = use_face_contour
- if all_gender:
- self.genders = ['male', 'female', 'neutral']
- else:
- self.genders = ['neutral']
- for gender in self.genders:
- assert gender in ['male', 'female', 'neutral']
- self.model_dict = nn.ModuleDict({
- gender: SMPLX(
- path_config.SMPL_MODEL_DIR,
- gender=gender,
- ext='npz',
- num_betas=numBetas,
- use_pca=False,
- batch_size=batch_size,
- use_face_contour=use_face_contour,
- num_pca_comps=45,
- **kwargs
- )
- for gender in self.genders
- })
- self.model_neutral = self.model_dict['neutral']
- joints = [constants.JOINT_MAP[i] for i in constants.JOINT_NAMES]
- J_regressor_extra = np.load(path_config.JOINT_REGRESSOR_TRAIN_EXTRA)
- self.register_buffer(
- 'J_regressor_extra', torch.tensor(J_regressor_extra, dtype=torch.float32)
- )
- self.joint_map = torch.tensor(joints, dtype=torch.long)
- # smplx_to_smpl.pkl, file source: https://smpl-x.is.tue.mpg.de
- smplx_to_smpl = pickle.load(
- open(os.path.join(SMPL_MODEL_DIR, 'model_transfer/smplx_to_smpl.pkl'), 'rb')
- )
- self.register_buffer(
- 'smplx2smpl', torch.tensor(smplx_to_smpl['matrix'][None], dtype=torch.float32)
- )
-
- smpl2limb_vert_faces = get_partial_smpl('smpl')
- self.smpl2lhand = torch.from_numpy(smpl2limb_vert_faces['lhand']['vids']).long()
- self.smpl2rhand = torch.from_numpy(smpl2limb_vert_faces['rhand']['vids']).long()
-
- # left and right hand joint mapping
- smplx2lhand_joints = [
- constants.SMPLX_JOINT_IDS['left_{}'.format(name)] for name in constants.HAND_NAMES
- ]
- smplx2rhand_joints = [
- constants.SMPLX_JOINT_IDS['right_{}'.format(name)] for name in constants.HAND_NAMES
- ]
- self.smplx2lh_joint_map = torch.tensor(smplx2lhand_joints, dtype=torch.long)
- self.smplx2rh_joint_map = torch.tensor(smplx2rhand_joints, dtype=torch.long)
-
- # left and right foot joint mapping
- smplx2lfoot_joints = [
- constants.SMPLX_JOINT_IDS['left_{}'.format(name)] for name in constants.FOOT_NAMES
- ]
- smplx2rfoot_joints = [
- constants.SMPLX_JOINT_IDS['right_{}'.format(name)] for name in constants.FOOT_NAMES
- ]
- self.smplx2lf_joint_map = torch.tensor(smplx2lfoot_joints, dtype=torch.long)
- self.smplx2rf_joint_map = torch.tensor(smplx2rfoot_joints, dtype=torch.long)
-
- for g in self.genders:
- J_template = torch.einsum(
- 'ji,ik->jk', [self.model_dict[g].J_regressor[:24], self.model_dict[g].v_template]
- )
- J_dirs = torch.einsum(
- 'ji,ikl->jkl', [self.model_dict[g].J_regressor[:24], self.model_dict[g].shapedirs]
- )
-
- self.register_buffer(f'{g}_J_template', J_template)
- self.register_buffer(f'{g}_J_dirs', J_dirs)
-
- def forward(self, *args, **kwargs):
- batch_size = kwargs['body_pose'].shape[0]
- kwargs['get_skin'] = True
- if 'pose2rot' not in kwargs:
- kwargs['pose2rot'] = True
- if 'gender' not in kwargs:
- kwargs['gender'] = 2 * torch.ones(batch_size).to(kwargs['body_pose'].device)
-
- # pose for 55 joints: 1, 21, 15, 15, 1, 1, 1
- pose_keys = [
- 'global_orient', 'body_pose', 'left_hand_pose', 'right_hand_pose', 'jaw_pose',
- 'leye_pose', 'reye_pose'
- ]
- param_keys = ['betas'] + pose_keys
- if kwargs['pose2rot']:
- for key in pose_keys:
- if key in kwargs:
- # if key == 'left_hand_pose':
- # kwargs[key] += self.model_neutral.left_hand_mean
- # elif key == 'right_hand_pose':
- # kwargs[key] += self.model_neutral.right_hand_mean
- kwargs[key] = batch_rodrigues(kwargs[key].contiguous().view(-1, 3)).view([
- batch_size, -1, 3, 3
- ])
- if kwargs['body_pose'].shape[1] == 23:
- # remove hand pose in the body_pose
- kwargs['body_pose'] = kwargs['body_pose'][:, :21]
- gender_idx_list = []
- smplx_vertices, smplx_joints = [], []
- for gi, g in enumerate(['male', 'female', 'neutral']):
- gender_idx = ((kwargs['gender'] == gi).nonzero(as_tuple=True)[0])
- if len(gender_idx) == 0:
- continue
- gender_idx_list.extend([int(idx) for idx in gender_idx])
- gender_kwargs = {'get_skin': kwargs['get_skin'], 'pose2rot': kwargs['pose2rot']}
- gender_kwargs.update({k: kwargs[k][gender_idx] for k in param_keys if k in kwargs})
- gender_smplx_output = self.model_dict[g].forward(*args, **gender_kwargs)
- smplx_vertices.append(gender_smplx_output.vertices)
- smplx_joints.append(gender_smplx_output.joints)
-
- idx_rearrange = [gender_idx_list.index(i) for i in range(len(list(gender_idx_list)))]
- idx_rearrange = torch.tensor(idx_rearrange).long().to(kwargs['body_pose'].device)
-
- smplx_vertices = torch.cat(smplx_vertices)[idx_rearrange]
- smplx_joints = torch.cat(smplx_joints)[idx_rearrange]
-
- # constants.HAND_NAMES
- lhand_joints = smplx_joints[:, self.smplx2lh_joint_map]
- rhand_joints = smplx_joints[:, self.smplx2rh_joint_map]
- # constants.FACIAL_LANDMARKS
- face_joints = smplx_joints[:, -68:] if self.use_face_contour else smplx_joints[:, -51:]
- # constants.FOOT_NAMES
- lfoot_joints = smplx_joints[:, self.smplx2lf_joint_map]
- rfoot_joints = smplx_joints[:, self.smplx2rf_joint_map]
-
- smpl_vertices = torch.bmm(self.smplx2smpl.expand(batch_size, -1, -1), smplx_vertices)
- lhand_vertices = smpl_vertices[:, self.smpl2lhand]
- rhand_vertices = smpl_vertices[:, self.smpl2rhand]
- extra_joints = vertices2joints(self.J_regressor_extra, smpl_vertices)
- # smpl_output.joints: [B, 45, 3] extra_joints: [B, 9, 3]
- smplx_j45 = smplx_joints[:, constants.SMPLX2SMPL_J45]
- joints = torch.cat([smplx_j45, extra_joints], dim=1)
- smpl_joints = smplx_j45[:, :24]
- joints = joints[:, self.joint_map, :] # [B, 49, 3]
- joints_J24 = joints[:, -24:, :]
- joints_J19 = joints_J24[:, constants.J24_TO_J19, :]
- output = ModelOutput(
- vertices=smpl_vertices,
- smplx_vertices=smplx_vertices,
- lhand_vertices=lhand_vertices,
- rhand_vertices=rhand_vertices,
- # global_orient=smplx_output.global_orient,
- # body_pose=smplx_output.body_pose,
- joints=joints,
- joints_J19=joints_J19,
- smpl_joints=smpl_joints,
- # betas=smplx_output.betas,
- # full_pose=smplx_output.full_pose,
- lhand_joints=lhand_joints,
- rhand_joints=rhand_joints,
- lfoot_joints=lfoot_joints,
- rfoot_joints=rfoot_joints,
- face_joints=face_joints,
- )
- return output
-
- # def make_hand_regressor(self):
- # # borrowed from https://github.com/mks0601/Hand4Whole_RELEASE/blob/main/common/utils/human_models.py
- # regressor = self.model_neutral.J_regressor.numpy()
- # vertex_num = self.model_neutral.J_regressor.shape[-1]
- # lhand_regressor = np.concatenate((regressor[[20,37,38,39],:],
- # np.eye(vertex_num)[5361,None],
- # regressor[[25,26,27],:],
- # np.eye(vertex_num)[4933,None],
- # regressor[[28,29,30],:],
- # np.eye(vertex_num)[5058,None],
- # regressor[[34,35,36],:],
- # np.eye(vertex_num)[5169,None],
- # regressor[[31,32,33],:],
- # np.eye(vertex_num)[5286,None]))
- # rhand_regressor = np.concatenate((regressor[[21,52,53,54],:],
- # np.eye(vertex_num)[8079,None],
- # regressor[[40,41,42],:],
- # np.eye(vertex_num)[7669,None],
- # regressor[[43,44,45],:],
- # np.eye(vertex_num)[7794,None],
- # regressor[[49,50,51],:],
- # np.eye(vertex_num)[7905,None],
- # regressor[[46,47,48],:],
- # np.eye(vertex_num)[8022,None]))
- # return torch.from_numpy(lhand_regressor).float(), torch.from_numpy(rhand_regressor).float()
-
- def get_tpose(self, betas=None, gender=None):
- kwargs = {}
- if betas is None:
- betas = torch.zeros(1, 10).to(self.J_regressor_extra.device)
- kwargs['betas'] = betas
-
- batch_size = kwargs['betas'].shape[0]
- device = kwargs['betas'].device
-
- if gender is None:
- kwargs['gender'] = 2 * torch.ones(batch_size).to(device)
- else:
- kwargs['gender'] = gender
-
- param_keys = ['betas']
-
- gender_idx_list = []
- smplx_joints = []
- for gi, g in enumerate(['male', 'female', 'neutral']):
- gender_idx = ((kwargs['gender'] == gi).nonzero(as_tuple=True)[0])
- if len(gender_idx) == 0:
- continue
- gender_idx_list.extend([int(idx) for idx in gender_idx])
- gender_kwargs = {}
- gender_kwargs.update({k: kwargs[k][gender_idx] for k in param_keys if k in kwargs})
-
- J = getattr(self, f'{g}_J_template').unsqueeze(0) + blend_shapes(
- gender_kwargs['betas'], getattr(self, f'{g}_J_dirs')
- )
-
- smplx_joints.append(J)
-
- idx_rearrange = [gender_idx_list.index(i) for i in range(len(list(gender_idx_list)))]
- idx_rearrange = torch.tensor(idx_rearrange).long().to(device)
-
- smplx_joints = torch.cat(smplx_joints)[idx_rearrange]
-
- return smplx_joints
-
-
-class MANO(MANOLayer):
- """ Extension of the official MANO implementation to support more joints """
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
-
- def forward(self, *args, **kwargs):
- if 'pose2rot' not in kwargs:
- kwargs['pose2rot'] = True
- pose_keys = ['global_orient', 'right_hand_pose']
- batch_size = kwargs['global_orient'].shape[0]
- if kwargs['pose2rot']:
- for key in pose_keys:
- if key in kwargs:
- kwargs[key] = batch_rodrigues(kwargs[key].contiguous().view(-1, 3)).view([
- batch_size, -1, 3, 3
- ])
- kwargs['hand_pose'] = kwargs.pop('right_hand_pose')
- mano_output = super().forward(*args, **kwargs)
- th_verts = mano_output.vertices
- th_jtr = mano_output.joints
- # https://github.com/hassony2/manopth/blob/master/manopth/manolayer.py#L248-L260
- # In addition to MANO reference joints we sample vertices on each finger
- # to serve as finger tips
- tips = th_verts[:, [745, 317, 445, 556, 673]]
- th_jtr = torch.cat([th_jtr, tips], 1)
- # Reorder joints to match visualization utilities
- th_jtr = th_jtr[:,
- [0, 13, 14, 15, 16, 1, 2, 3, 17, 4, 5, 6, 18, 10, 11, 12, 19, 7, 8, 9, 20]]
- output = ModelOutput(
- rhand_vertices=th_verts,
- rhand_joints=th_jtr,
- )
- return output
-
-
-class FLAME(FLAMELayer):
- """ Extension of the official FLAME implementation to support more joints """
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
-
- def forward(self, *args, **kwargs):
- if 'pose2rot' not in kwargs:
- kwargs['pose2rot'] = True
- pose_keys = ['global_orient', 'jaw_pose', 'leye_pose', 'reye_pose']
- batch_size = kwargs['global_orient'].shape[0]
- if kwargs['pose2rot']:
- for key in pose_keys:
- if key in kwargs:
- kwargs[key] = batch_rodrigues(kwargs[key].contiguous().view(-1, 3)).view([
- batch_size, -1, 3, 3
- ])
- flame_output = super().forward(*args, **kwargs)
- output = ModelOutput(
- flame_vertices=flame_output.vertices,
- face_joints=flame_output.joints[:, 5:],
- )
- return output
-
-
-class SMPL_Family():
- def __init__(self, model_type='smpl', *args, **kwargs):
- if model_type == 'smpl':
- self.model = SMPL(model_path=SMPL_MODEL_DIR, *args, **kwargs)
- elif model_type == 'smplx':
- self.model = SMPLX_ALL(*args, **kwargs)
- elif model_type == 'mano':
- self.model = MANO(
- model_path=SMPL_MODEL_DIR, is_rhand=True, use_pca=False, *args, **kwargs
- )
- elif model_type == 'flame':
- self.model = FLAME(model_path=SMPL_MODEL_DIR, use_face_contour=True, *args, **kwargs)
-
- def __call__(self, *args, **kwargs):
- return self.model(*args, **kwargs)
-
- def get_tpose(self, *args, **kwargs):
- return self.model.get_tpose(*args, **kwargs)
-
- # def to(self, device):
- # self.model.to(device)
-
- # def cuda(self, device=None):
- # if device is None:
- # self.model.cuda()
- # else:
- # self.model.cuda(device)
-
-
-def get_smpl_faces():
- smpl = SMPL(model_path=SMPL_MODEL_DIR, batch_size=1)
- return smpl.faces
-
-
-def get_smplx_faces():
- smplx = SMPLX(SMPL_MODEL_DIR, batch_size=1)
- return smplx.faces
-
-
-def get_mano_faces(hand_type='right'):
- assert hand_type in ['right', 'left']
- is_rhand = True if hand_type == 'right' else False
- mano = MANO(SMPL_MODEL_DIR, batch_size=1, is_rhand=is_rhand)
-
- return mano.faces
-
-
-def get_flame_faces():
- flame = FLAME(SMPL_MODEL_DIR, batch_size=1)
-
- return flame.faces
-
-
-def get_model_faces(type='smpl'):
- if type == 'smpl':
- return get_smpl_faces()
- elif type == 'smplx':
- return get_smplx_faces()
- elif type == 'mano':
- return get_mano_faces()
- elif type == 'flame':
- return get_flame_faces()
-
-
-def get_model_tpose(type='smpl'):
- if type == 'smpl':
- return get_smpl_tpose()
- elif type == 'smplx':
- return get_smplx_tpose()
- elif type == 'mano':
- return get_mano_tpose()
- elif type == 'flame':
- return get_flame_tpose()
-
-
-def get_smpl_tpose():
- smpl = SMPL(
- create_betas=True,
- create_global_orient=True,
- create_body_pose=True,
- model_path=SMPL_MODEL_DIR,
- batch_size=1
- )
- vertices = smpl().vertices[0]
- return vertices.detach()
-
-
-def get_smpl_tpose_joint():
- smpl = SMPL(
- create_betas=True,
- create_global_orient=True,
- create_body_pose=True,
- model_path=SMPL_MODEL_DIR,
- batch_size=1
- )
- tpose_joint = smpl().smpl_joints[0]
- return tpose_joint.detach()
-
-
-def get_smplx_tpose():
- smplx = SMPLXLayer(SMPL_MODEL_DIR, batch_size=1)
- vertices = smplx().vertices[0]
- return vertices
-
-
-def get_smplx_tpose_joint():
- smplx = SMPLXLayer(SMPL_MODEL_DIR, batch_size=1)
- tpose_joint = smplx().joints[0]
- return tpose_joint
-
-
-def get_mano_tpose():
- mano = MANO(SMPL_MODEL_DIR, batch_size=1, is_rhand=True)
- vertices = mano(global_orient=torch.zeros(1, 3),
- right_hand_pose=torch.zeros(1, 15 * 3)).rhand_vertices[0]
- return vertices
-
-
-def get_flame_tpose():
- flame = FLAME(SMPL_MODEL_DIR, batch_size=1)
- vertices = flame(global_orient=torch.zeros(1, 3)).flame_vertices[0]
- return vertices
-
-
-def get_part_joints(smpl_joints):
- batch_size = smpl_joints.shape[0]
-
- # part_joints = torch.zeros().to(smpl_joints.device)
-
- one_seg_pairs = [(0, 1), (0, 2), (0, 3), (3, 6), (9, 12), (9, 13), (9, 14), (12, 15), (13, 16),
- (14, 17)]
- two_seg_pairs = [(1, 4), (2, 5), (4, 7), (5, 8), (16, 18), (17, 19), (18, 20), (19, 21)]
-
- one_seg_pairs.extend(two_seg_pairs)
-
- single_joints = [(10), (11), (15), (22), (23)]
-
- part_joints = []
-
- for j_p in one_seg_pairs:
- new_joint = torch.mean(smpl_joints[:, j_p], dim=1, keepdim=True)
- part_joints.append(new_joint)
-
- for j_p in single_joints:
- part_joints.append(smpl_joints[:, j_p:j_p + 1])
-
- part_joints = torch.cat(part_joints, dim=1)
-
- return part_joints
-
-
-def get_partial_smpl(body_model='smpl', device=torch.device('cuda')):
-
- body_model_faces = get_model_faces(body_model)
- body_model_num_verts = len(get_model_tpose(body_model))
-
- part_vert_faces = {}
-
- for part in ['lhand', 'rhand', 'face', 'arm', 'forearm', 'larm', 'rarm', 'lwrist', 'rwrist']:
- part_vid_fname = '{}/{}_{}_vids.npz'.format(path_config.PARTIAL_MESH_DIR, body_model, part)
- if os.path.exists(part_vid_fname):
- part_vids = np.load(part_vid_fname)
- part_vert_faces[part] = {'vids': part_vids['vids'], 'faces': part_vids['faces']}
- else:
- if part in ['lhand', 'rhand']:
- with open(
- os.path.join(SMPL_MODEL_DIR, 'model_transfer/MANO_SMPLX_vertex_ids.pkl'), 'rb'
- ) as json_file:
- smplx_mano_id = pickle.load(json_file)
- with open(
- os.path.join(SMPL_MODEL_DIR, 'model_transfer/smplx_to_smpl.pkl'), 'rb'
- ) as json_file:
- smplx_smpl_id = pickle.load(json_file)
-
- smplx_tpose = get_smplx_tpose()
- smpl_tpose = np.matmul(smplx_smpl_id['matrix'], smplx_tpose)
-
- if part == 'lhand':
- mano_vert = smplx_tpose[smplx_mano_id['left_hand']]
- elif part == 'rhand':
- mano_vert = smplx_tpose[smplx_mano_id['right_hand']]
-
- smpl2mano_id = []
- for vert in mano_vert:
- v_diff = smpl_tpose - vert
- v_diff = torch.sum(v_diff * v_diff, dim=1)
- v_closest = torch.argmin(v_diff)
- smpl2mano_id.append(int(v_closest))
-
-                    smpl2mano_vids = np.array(smpl2mano_id).astype(np.int64)
-                    mano_faces = get_mano_faces(hand_type='right' if part == 'rhand' else 'left'
-                                                ).astype(np.int64)
-
- np.savez(part_vid_fname, vids=smpl2mano_vids, faces=mano_faces)
- part_vert_faces[part] = {'vids': smpl2mano_vids, 'faces': mano_faces}
-
- elif part in ['face', 'arm', 'forearm', 'larm', 'rarm']:
- with open(
- os.path.join(SMPL_MODEL_DIR, '{}_vert_segmentation.json'.format(body_model)),
- 'rb'
- ) as json_file:
- smplx_part_id = json.load(json_file)
-
- # main_body_part = list(smplx_part_id.keys())
- # print('main_body_part', main_body_part)
-
- if part == 'face':
- selected_body_part = ['head']
- elif part == 'arm':
- selected_body_part = [
- 'rightHand',
- 'leftArm',
- 'leftShoulder',
- 'rightShoulder',
- 'rightArm',
- 'leftHandIndex1',
- 'rightHandIndex1',
- 'leftForeArm',
- 'rightForeArm',
- 'leftHand',
- ]
- # selected_body_part = ['rightHand', 'leftArm', 'rightArm', 'leftHandIndex1', 'rightHandIndex1', 'leftForeArm', 'rightForeArm', 'leftHand',]
- elif part == 'forearm':
- selected_body_part = [
- 'rightHand',
- 'leftHandIndex1',
- 'rightHandIndex1',
- 'leftForeArm',
- 'rightForeArm',
- 'leftHand',
- ]
- elif part == 'arm_eval':
- selected_body_part = ['leftArm', 'rightArm', 'leftForeArm', 'rightForeArm']
- elif part == 'larm':
- # selected_body_part = ['leftArm', 'leftForeArm']
- selected_body_part = ['leftForeArm']
- elif part == 'rarm':
- # selected_body_part = ['rightArm', 'rightForeArm']
- selected_body_part = ['rightForeArm']
-
- part_body_idx = []
- for k in selected_body_part:
- part_body_idx.extend(smplx_part_id[k])
-
- part_body_fid = []
- for f_id, face in enumerate(body_model_faces):
- if any(f in part_body_idx for f in face):
- part_body_fid.append(f_id)
-
-                smpl2head_vids = np.unique(body_model_faces[part_body_fid]).astype(np.int64)
-
- mesh_vid_raw = np.arange(body_model_num_verts)
- head_vid_new = np.arange(len(smpl2head_vids))
- mesh_vid_raw[smpl2head_vids] = head_vid_new
-
- head_faces = body_model_faces[part_body_fid]
-                head_faces = mesh_vid_raw[head_faces].astype(np.int64)
-
- np.savez(part_vid_fname, vids=smpl2head_vids, faces=head_faces)
- part_vert_faces[part] = {'vids': smpl2head_vids, 'faces': head_faces}
-
- elif part in ['lwrist', 'rwrist']:
-
- if body_model == 'smplx':
- body_model_verts = get_smplx_tpose()
- tpose_joint = get_smplx_tpose_joint()
- elif body_model == 'smpl':
- body_model_verts = get_smpl_tpose()
- tpose_joint = get_smpl_tpose_joint()
-
- wrist_joint = tpose_joint[20] if part == 'lwrist' else tpose_joint[21]
-
- dist = 0.005
- wrist_vids = []
- for vid, vt in enumerate(body_model_verts):
-
- v_j_dist = torch.sum((vt - wrist_joint)**2)
-
- if v_j_dist < dist:
- wrist_vids.append(vid)
-
- wrist_vids = np.array(wrist_vids)
-
- part_body_fid = []
- for f_id, face in enumerate(body_model_faces):
- if any(f in wrist_vids for f in face):
- part_body_fid.append(f_id)
-
-                smpl2part_vids = np.unique(body_model_faces[part_body_fid]).astype(np.int64)
-
- mesh_vid_raw = np.arange(body_model_num_verts)
- part_vid_new = np.arange(len(smpl2part_vids))
- mesh_vid_raw[smpl2part_vids] = part_vid_new
-
- part_faces = body_model_faces[part_body_fid]
-                part_faces = mesh_vid_raw[part_faces].astype(np.int64)
-
- np.savez(part_vid_fname, vids=smpl2part_vids, faces=part_faces)
- part_vert_faces[part] = {'vids': smpl2part_vids, 'faces': part_faces}
-
- # import trimesh
- # mesh = trimesh.Trimesh(vertices=body_model_verts[smpl2part_vids], faces=part_faces, process=False)
- # mesh.export(f'results/smplx_{part}.obj')
-
- # mesh = trimesh.Trimesh(vertices=body_model_verts, faces=body_model_faces, process=False)
- # mesh.export(f'results/smplx_model.obj')
-
- return part_vert_faces
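-
-
-# ---------------------------------------------------------------------------
-# Usage sketch (not part of the original module). It assumes the SMPL/SMPL-X
-# model files referenced by SMPL_MODEL_DIR and the cache directory
-# path_config.PARTIAL_MESH_DIR are already in place:
-#
-#   body_model = SMPL_Family(model_type='smpl', batch_size=1)
-#   verts = get_model_tpose('smpl')        # (V, 3) T-pose vertices
-#   faces = get_model_faces('smpl')        # (F, 3) triangle indices
-#   parts = get_partial_smpl('smpl')       # {'lhand': {'vids', 'faces'}, ...}
-# ---------------------------------------------------------------------------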
diff --git a/spaces/abdvl/datahub_qa_bot/docs/authentication/guides/jaas.md b/spaces/abdvl/datahub_qa_bot/docs/authentication/guides/jaas.md
deleted file mode 100644
index 6268d608f4926063eb21bd302f7c158de221454b..0000000000000000000000000000000000000000
--- a/spaces/abdvl/datahub_qa_bot/docs/authentication/guides/jaas.md
+++ /dev/null
@@ -1,71 +0,0 @@
-# JaaS Authentication
-
-## Overview
-
-The DataHub frontend server comes with support for plugging in [JaaS](https://docs.oracle.com/javase/7/docs/technotes/guides/security/jaas/JAASRefGuide.html) modules.
-This allows you to use a custom authentication protocol to log your users into DataHub.
-
-By default, we include a sample configuration of a file-based username / password authentication module ([PropertyFileLoginModule](http://archive.eclipse.org/jetty/8.0.0.M3/apidocs/org/eclipse/jetty/plus/jaas/spi/PropertyFileLoginModule.html))
-that is configured with a single username / password combination: datahub - datahub.
-
-To change or extend the default behavior, you have multiple options, each dependent on which deployment environment you're operating in.
-
-### Modify user.props file directly (Local Testing)
-
-The first option for customizing file-based users is to modify the file `datahub-frontend/app/conf/user.props` directly.
-Once you've added your desired users, you can simply run `./dev.sh` or `./datahub-frontend/run-local-frontend` to validate your
-new users can log in.
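-
-For reference, each line of `user.props` is a plain `username:password` entry; the file ships with only the `datahub:datahub` combination mentioned above. A minimal sketch with made-up extra accounts (any option syntax beyond this depends on the PropertyFileLoginModule version in use):
-
-```
-datahub:datahub
-alice:example-password-1
-bob:example-password-2
-```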
-
-### Mount a custom user.props file (Docker Compose)
-
-By default, the `datahub-frontend` container will look for a file called `user.props` mounted at the container path
-`/datahub-frontend/conf/user.props`. If you wish to launch this container with a custom set of users, you'll need to override the default
-file mounting when running using `docker-compose`.
-
-To do so, change the `datahub-frontend-react` service in the docker-compose.yml file containing it to include the custom file:
-
-```
-datahub-frontend-react:
- build:
- context: ../
- dockerfile: docker/datahub-frontend/Dockerfile
- image: linkedin/datahub-frontend-react:${DATAHUB_VERSION:-head}
- env_file: datahub-frontend/env/docker.env
- hostname: datahub-frontend-react
- container_name: datahub-frontend-react
- ports:
- - "9002:9002"
- depends_on:
- - datahub-gms
- volumes:
- - ./my-custom-dir/user.props:/datahub-frontend/conf/user.props
-```
-
-And then run `docker-compose up` against your compose file.
-
-
-## Custom JaaS Configuration
-
-In order to change the default JaaS module configuration, you will have to launch the `datahub-frontend-react` container with the custom `jaas.conf` file mounted as a volume
-at the location `/datahub-frontend/conf/jaas.conf`.
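-
-For orientation, `jaas.conf` follows the standard JAAS syntax: a named login context containing one or more login module entries. The sketch below is purely illustrative (the LDAP module and its options are an example, not DataHub's shipped configuration); check the `jaas.conf` bundled with your DataHub version for the login context name it expects, with `WHZ-Authentication` used here as a placeholder name:
-
-```
-WHZ-Authentication {
-  com.sun.security.auth.module.LdapLoginModule sufficient
-    userProvider="ldap://ldap.example.com/ou=people,dc=example,dc=com"
-    authIdentity="{USERNAME}"
-    debug="true";
-};
-```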
-
-To do so, change the `datahub-frontend-react` service in the docker-compose.yml file containing it to include the custom file:
-
-```
-datahub-frontend-react:
- build:
- context: ../
- dockerfile: docker/datahub-frontend/Dockerfile
- image: linkedin/datahub-frontend-react:${DATAHUB_VERSION:-head}
- env_file: datahub-frontend/env/docker.env
- hostname: datahub-frontend-react
- container_name: datahub-frontend-react
- ports:
- - "9002:9002"
- depends_on:
- - datahub-gms
- volumes:
- - ./my-custom-dir/jaas.conf:/datahub-frontend/conf/jaas.conf
-```
-
-And then run `docker-compose up` against your compose file.
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/losses/gfocal_loss.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/losses/gfocal_loss.py
deleted file mode 100644
index 9d3b8833dc50c76f6741db5341dbf8da3402d07b..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/losses/gfocal_loss.py
+++ /dev/null
@@ -1,188 +0,0 @@
-import mmcv
-import torch.nn as nn
-import torch.nn.functional as F
-
-from ..builder import LOSSES
-from .utils import weighted_loss
-
-
-@mmcv.jit(derivate=True, coderize=True)
-@weighted_loss
-def quality_focal_loss(pred, target, beta=2.0):
- r"""Quality Focal Loss (QFL) is from `Generalized Focal Loss: Learning
- Qualified and Distributed Bounding Boxes for Dense Object Detection
-    <https://arxiv.org/abs/2006.04388>`_.
-
- Args:
- pred (torch.Tensor): Predicted joint representation of classification
- and quality (IoU) estimation with shape (N, C), C is the number of
- classes.
- target (tuple([torch.Tensor])): Target category label with shape (N,)
- and target quality label with shape (N,).
- beta (float): The beta parameter for calculating the modulating factor.
- Defaults to 2.0.
-
- Returns:
- torch.Tensor: Loss tensor with shape (N,).
- """
- assert len(target) == 2, """target for QFL must be a tuple of two elements,
- including category label and quality label, respectively"""
- # label denotes the category id, score denotes the quality score
- label, score = target
-
- # negatives are supervised by 0 quality score
- pred_sigmoid = pred.sigmoid()
- scale_factor = pred_sigmoid
- zerolabel = scale_factor.new_zeros(pred.shape)
- loss = F.binary_cross_entropy_with_logits(
- pred, zerolabel, reduction='none') * scale_factor.pow(beta)
-
- # FG cat_id: [0, num_classes -1], BG cat_id: num_classes
- bg_class_ind = pred.size(1)
- pos = ((label >= 0) & (label < bg_class_ind)).nonzero().squeeze(1)
- pos_label = label[pos].long()
- # positives are supervised by bbox quality (IoU) score
- scale_factor = score[pos] - pred_sigmoid[pos, pos_label]
- loss[pos, pos_label] = F.binary_cross_entropy_with_logits(
- pred[pos, pos_label], score[pos],
- reduction='none') * scale_factor.abs().pow(beta)
-
- loss = loss.sum(dim=1, keepdim=False)
- return loss
-
-
-@mmcv.jit(derivate=True, coderize=True)
-@weighted_loss
-def distribution_focal_loss(pred, label):
- r"""Distribution Focal Loss (DFL) is from `Generalized Focal Loss: Learning
- Qualified and Distributed Bounding Boxes for Dense Object Detection
-    <https://arxiv.org/abs/2006.04388>`_.
-
- Args:
- pred (torch.Tensor): Predicted general distribution of bounding boxes
- (before softmax) with shape (N, n+1), n is the max value of the
- integral set `{0, ..., n}` in paper.
- label (torch.Tensor): Target distance label for bounding boxes with
- shape (N,).
-
- Returns:
- torch.Tensor: Loss tensor with shape (N,).
- """
- dis_left = label.long()
- dis_right = dis_left + 1
- weight_left = dis_right.float() - label
- weight_right = label - dis_left.float()
- loss = F.cross_entropy(pred, dis_left, reduction='none') * weight_left \
- + F.cross_entropy(pred, dis_right, reduction='none') * weight_right
- return loss
-
-
-@LOSSES.register_module()
-class QualityFocalLoss(nn.Module):
- r"""Quality Focal Loss (QFL) is a variant of `Generalized Focal Loss:
- Learning Qualified and Distributed Bounding Boxes for Dense Object
-    Detection <https://arxiv.org/abs/2006.04388>`_.
-
- Args:
- use_sigmoid (bool): Whether sigmoid operation is conducted in QFL.
- Defaults to True.
- beta (float): The beta parameter for calculating the modulating factor.
- Defaults to 2.0.
- reduction (str): Options are "none", "mean" and "sum".
- loss_weight (float): Loss weight of current loss.
- """
-
- def __init__(self,
- use_sigmoid=True,
- beta=2.0,
- reduction='mean',
- loss_weight=1.0):
- super(QualityFocalLoss, self).__init__()
- assert use_sigmoid is True, 'Only sigmoid in QFL supported now.'
- self.use_sigmoid = use_sigmoid
- self.beta = beta
- self.reduction = reduction
- self.loss_weight = loss_weight
-
- def forward(self,
- pred,
- target,
- weight=None,
- avg_factor=None,
- reduction_override=None):
- """Forward function.
-
- Args:
- pred (torch.Tensor): Predicted joint representation of
- classification and quality (IoU) estimation with shape (N, C),
- C is the number of classes.
- target (tuple([torch.Tensor])): Target category label with shape
- (N,) and target quality label with shape (N,).
- weight (torch.Tensor, optional): The weight of loss for each
- prediction. Defaults to None.
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
- reduction_override (str, optional): The reduction method used to
- override the original reduction method of the loss.
- Defaults to None.
- """
- assert reduction_override in (None, 'none', 'mean', 'sum')
- reduction = (
- reduction_override if reduction_override else self.reduction)
- if self.use_sigmoid:
- loss_cls = self.loss_weight * quality_focal_loss(
- pred,
- target,
- weight,
- beta=self.beta,
- reduction=reduction,
- avg_factor=avg_factor)
- else:
- raise NotImplementedError
- return loss_cls
-
-
-@LOSSES.register_module()
-class DistributionFocalLoss(nn.Module):
- r"""Distribution Focal Loss (DFL) is a variant of `Generalized Focal Loss:
- Learning Qualified and Distributed Bounding Boxes for Dense Object
-    Detection <https://arxiv.org/abs/2006.04388>`_.
-
- Args:
- reduction (str): Options are `'none'`, `'mean'` and `'sum'`.
- loss_weight (float): Loss weight of current loss.
- """
-
- def __init__(self, reduction='mean', loss_weight=1.0):
- super(DistributionFocalLoss, self).__init__()
- self.reduction = reduction
- self.loss_weight = loss_weight
-
- def forward(self,
- pred,
- target,
- weight=None,
- avg_factor=None,
- reduction_override=None):
- """Forward function.
-
- Args:
- pred (torch.Tensor): Predicted general distribution of bounding
- boxes (before softmax) with shape (N, n+1), n is the max value
- of the integral set `{0, ..., n}` in paper.
- target (torch.Tensor): Target distance label for bounding boxes
- with shape (N,).
- weight (torch.Tensor, optional): The weight of loss for each
- prediction. Defaults to None.
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
- reduction_override (str, optional): The reduction method used to
- override the original reduction method of the loss.
- Defaults to None.
- """
- assert reduction_override in (None, 'none', 'mean', 'sum')
- reduction = (
- reduction_override if reduction_override else self.reduction)
- loss_cls = self.loss_weight * distribution_focal_loss(
- pred, target, weight, reduction=reduction, avg_factor=avg_factor)
- return loss_cls
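-
-
-# ---------------------------------------------------------------------------
-# Usage sketch (not part of the original module). With mmdet installed, the two
-# losses can be exercised on random tensors roughly as follows; shapes follow
-# the docstrings above and the values are placeholders only:
-#
-#   import torch
-#   qfl = QualityFocalLoss(use_sigmoid=True, beta=2.0)
-#   pred = torch.randn(8, 20)             # (N, C) joint cls-quality logits
-#   label = torch.randint(0, 21, (8,))    # category ids, 20 == background
-#   score = torch.rand(8)                 # IoU quality targets for positives
-#   loss_q = qfl(pred, (label, score))
-#
-#   dfl = DistributionFocalLoss()
-#   dist_pred = torch.randn(8, 17)        # (N, n+1) logits over {0, ..., 16}
-#   dist_label = torch.rand(8) * 16       # continuous regression targets in [0, n)
-#   loss_d = dfl(dist_pred, dist_label)
-# ---------------------------------------------------------------------------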
diff --git a/spaces/ai-guru/composer/static/_app/chunks/index-d282aaf8.js b/spaces/ai-guru/composer/static/_app/chunks/index-d282aaf8.js
deleted file mode 100644
index 281bfe9e6ada0cdbead51e9db68a6ee7cae25410..0000000000000000000000000000000000000000
--- a/spaces/ai-guru/composer/static/_app/chunks/index-d282aaf8.js
+++ /dev/null
@@ -1 +0,0 @@
-import{E as f,s as l}from"./index-7c452e28.js";const e=[];function h(n,u=f){let o;const i=new Set;function r(t){if(l(n,t)&&(n=t,o)){const c=!e.length;for(const s of i)s[1](),e.push(s,n);if(c){for(let s=0;s{i.delete(s),i.size===0&&(o(),o=null)}}return{set:r,update:b,subscribe:p}}export{h as w};
diff --git a/spaces/aijack/jojo/e4e/models/encoders/model_irse.py b/spaces/aijack/jojo/e4e/models/encoders/model_irse.py
deleted file mode 100644
index 6a94d67542f961ff6533f0335cf4cb0fa54024fb..0000000000000000000000000000000000000000
--- a/spaces/aijack/jojo/e4e/models/encoders/model_irse.py
+++ /dev/null
@@ -1,84 +0,0 @@
-from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Dropout, Sequential, Module
-from e4e.models.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE, l2_norm
-
-"""
-Modified Backbone implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch)
-"""
-
-
-class Backbone(Module):
- def __init__(self, input_size, num_layers, mode='ir', drop_ratio=0.4, affine=True):
- super(Backbone, self).__init__()
- assert input_size in [112, 224], "input_size should be 112 or 224"
- assert num_layers in [50, 100, 152], "num_layers should be 50, 100 or 152"
- assert mode in ['ir', 'ir_se'], "mode should be ir or ir_se"
- blocks = get_blocks(num_layers)
- if mode == 'ir':
- unit_module = bottleneck_IR
- elif mode == 'ir_se':
- unit_module = bottleneck_IR_SE
- self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False),
- BatchNorm2d(64),
- PReLU(64))
- if input_size == 112:
- self.output_layer = Sequential(BatchNorm2d(512),
- Dropout(drop_ratio),
- Flatten(),
- Linear(512 * 7 * 7, 512),
- BatchNorm1d(512, affine=affine))
- else:
- self.output_layer = Sequential(BatchNorm2d(512),
- Dropout(drop_ratio),
- Flatten(),
- Linear(512 * 14 * 14, 512),
- BatchNorm1d(512, affine=affine))
-
- modules = []
- for block in blocks:
- for bottleneck in block:
- modules.append(unit_module(bottleneck.in_channel,
- bottleneck.depth,
- bottleneck.stride))
- self.body = Sequential(*modules)
-
- def forward(self, x):
- x = self.input_layer(x)
- x = self.body(x)
- x = self.output_layer(x)
- return l2_norm(x)
-
-
-def IR_50(input_size):
- """Constructs a ir-50 model."""
- model = Backbone(input_size, num_layers=50, mode='ir', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_101(input_size):
- """Constructs a ir-101 model."""
- model = Backbone(input_size, num_layers=100, mode='ir', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_152(input_size):
- """Constructs a ir-152 model."""
- model = Backbone(input_size, num_layers=152, mode='ir', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_SE_50(input_size):
- """Constructs a ir_se-50 model."""
- model = Backbone(input_size, num_layers=50, mode='ir_se', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_SE_101(input_size):
- """Constructs a ir_se-101 model."""
- model = Backbone(input_size, num_layers=100, mode='ir_se', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_SE_152(input_size):
- """Constructs a ir_se-152 model."""
- model = Backbone(input_size, num_layers=152, mode='ir_se', drop_ratio=0.4, affine=False)
- return model
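-
-
-# ---------------------------------------------------------------------------
-# Usage sketch (not part of the original module). The constructors above differ
-# only in depth and in whether SE blocks are used; assuming `torch` is available
-# alongside the e4e helpers, a forward pass looks roughly like:
-#
-#   import torch
-#   model = IR_50(112)
-#   emb = model(torch.randn(1, 3, 112, 112))   # -> (1, 512) L2-normalized embedding
-# ---------------------------------------------------------------------------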
diff --git a/spaces/akhaliq/speechbrain-speech-seperation/app.py b/spaces/akhaliq/speechbrain-speech-seperation/app.py
deleted file mode 100644
index 6123e2537a26af9dcc71b29afe5ad9efc435489c..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/speechbrain-speech-seperation/app.py
+++ /dev/null
@@ -1,25 +0,0 @@
-from speechbrain.pretrained import SepformerSeparation as separator
-import torchaudio
-import gradio as gr
-
-model = separator.from_hparams(source="speechbrain/sepformer-wsj02mix", savedir='pretrained_models/sepformer-wsj02mix')
-
-def speechbrain(aud):
- est_sources = model.separate_file(path=aud.name)
- torchaudio.save("source1hat.wav", est_sources[:, :, 0].detach().cpu(), 8000)
- torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 8000)
- return "source1hat.wav", "source2hat.wav"
-
-inputs = gr.inputs.Audio(label="Input Audio", type="file")
-outputs = [
- gr.outputs.Audio(label="Output Audio One", type="file"),
- gr.outputs.Audio(label="Output Audio Two", type="file")
-]
-
-title = "Speech Separation"
-description = "Gradio demo for Speech Separation by SpeechBrain. To use it, simply upload your audio, or click one of the examples to load them. Read more at the links below."
-article = "
1- What's your motivation to be a mentor with SharpestMinds? - Had a ok experience while I was a mentee - Believe can be a better mentor and provide a better mentorship experience. Used to be a teacher and a tutor for students in online STEM projects.
2- What's your career journey in the Data field? - Have a master degree in a field not related to tech. - Moved industry from being a teacher to get into data related field. - Did a Data and coding BootCamp with General Assembly. - Got a job at Macey's as Retention analyst but was laid off - found SM and became a mentee after that. - Worked at Sephora as a retail Business Analyst - Work involved forecasting sales and generating business reports. - Joined a startup after that at a hearing aid company - clearcaptions worked on SQL and process improvements. - Currently working at Intuit as Product analyst. Work involves A/B testing, building tableau dashboards, and working on SQL.
3- How was your experience as a SM mentee? - It was mixed, Mentor started strong and made introductions to peers. But the network didnt help out a lot. Mentor relied on networking a lot. Shared learning resources and weekly touch bases were useful to stay accountable. But eventually got a job with a help of a recruiter.
4- What's the biggest challenge a newcomer faces when they want to land a analytics role? How can you help them with this? - The biggest challenge is getting the foot in the door. For people who don't have traditional background in tech the industry is resistant to their profiles and switch is difficult. Hiring managers are rigid and it's difficult to convince them during technical interviews. Will help mentees with tech interviews and developing hard skills.
5- Do you have any questions regarding SM and platform? - How many hours of commitment per week? - Mentee demographic profile? - Avg % of ISA? - Do mentors reach out mentees or vica-versa? - What are the next steps?
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/awaawawawa/iurf7irfuyytruyyugb/webui.sh b/spaces/awaawawawa/iurf7irfuyytruyyugb/webui.sh
deleted file mode 100644
index 980c0aaf33012afae0d1d1fda19ffb426cb35a00..0000000000000000000000000000000000000000
--- a/spaces/awaawawawa/iurf7irfuyytruyyugb/webui.sh
+++ /dev/null
@@ -1,141 +0,0 @@
-#!/bin/bash
-#################################################
-# Please do not make any changes to this file, #
-# change the variables in webui-user.sh instead #
-#################################################
-# Read variables from webui-user.sh
-# shellcheck source=/dev/null
-if [[ -f webui-user.sh ]]
-then
- source ./webui-user.sh
-fi
-
-# Set defaults
-# Install directory without trailing slash
-if [[ -z "${install_dir}" ]]
-then
- install_dir="/home/$(whoami)"
-fi
-
-# Name of the subdirectory (defaults to stable-diffusion-webui)
-if [[ -z "${clone_dir}" ]]
-then
- clone_dir="stable-diffusion-webui"
-fi
-
-# python3 executable
-if [[ -z "${python_cmd}" ]]
-then
- python_cmd="python3"
-fi
-
-# git executable
-if [[ -z "${GIT}" ]]
-then
- export GIT="git"
-fi
-
-# python3 venv without trailing slash (defaults to ${install_dir}/${clone_dir}/venv)
-if [[ -z "${venv_dir}" ]]
-then
- venv_dir="venv"
-fi
-
-if [[ -z "${LAUNCH_SCRIPT}" ]]
-then
- LAUNCH_SCRIPT="launch.py"
-fi
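-
-# Illustrative webui-user.sh overrides (values below are placeholders; every
-# variable read above already has a sensible default and may be left unset):
-#
-#   install_dir="/opt"
-#   clone_dir="stable-diffusion-webui"
-#   python_cmd="python3.10"
-#   export GIT="git"
-#   venv_dir="venv"
-#   LAUNCH_SCRIPT="launch.py"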
-
-# Disable sentry logging
-export ERROR_REPORTING=FALSE
-
-# Do not reinstall existing pip packages on Debian/Ubuntu
-export PIP_IGNORE_INSTALLED=0
-
-# Pretty print
-delimiter="################################################################"
-
-printf "\n%s\n" "${delimiter}"
-printf "\e[1m\e[32mInstall script for stable-diffusion + Web UI\n"
-printf "\e[1m\e[34mTested on Debian 11 (Bullseye)\e[0m"
-printf "\n%s\n" "${delimiter}"
-
-# Do not run as root
-if [[ $(id -u) -eq 0 ]]
-then
- printf "\n%s\n" "${delimiter}"
- printf "\e[1m\e[31mERROR: This script must not be launched as root, aborting...\e[0m"
- printf "\n%s\n" "${delimiter}"
- exit 1
-else
- printf "\n%s\n" "${delimiter}"
- printf "Running on \e[1m\e[32m%s\e[0m user" "$(whoami)"
- printf "\n%s\n" "${delimiter}"
-fi
-
-if [[ -d .git ]]
-then
- printf "\n%s\n" "${delimiter}"
- printf "Repo already cloned, using it as install directory"
- printf "\n%s\n" "${delimiter}"
- install_dir="${PWD}/../"
- clone_dir="${PWD##*/}"
-fi
-
-# Check prerequisites
-for preq in "${GIT}" "${python_cmd}"
-do
- if ! hash "${preq}" &>/dev/null
- then
- printf "\n%s\n" "${delimiter}"
- printf "\e[1m\e[31mERROR: %s is not installed, aborting...\e[0m" "${preq}"
- printf "\n%s\n" "${delimiter}"
- exit 1
- fi
-done
-
-if ! "${python_cmd}" -c "import venv" &>/dev/null
-then
- printf "\n%s\n" "${delimiter}"
- printf "\e[1m\e[31mERROR: python3-venv is not installed, aborting...\e[0m"
- printf "\n%s\n" "${delimiter}"
- exit 1
-fi
-
-printf "\n%s\n" "${delimiter}"
-printf "Clone or update stable-diffusion-webui"
-printf "\n%s\n" "${delimiter}"
-cd "${install_dir}"/ || { printf "\e[1m\e[31mERROR: Can't cd to %s/, aborting...\e[0m" "${install_dir}"; exit 1; }
-if [[ -d "${clone_dir}" ]]
-then
- cd "${clone_dir}"/ || { printf "\e[1m\e[31mERROR: Can't cd to %s/%s/, aborting...\e[0m" "${install_dir}" "${clone_dir}"; exit 1; }
- "${GIT}" pull
-else
- "${GIT}" clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git "${clone_dir}"
- cd "${clone_dir}"/ || { printf "\e[1m\e[31mERROR: Can't cd to %s/%s/, aborting...\e[0m" "${install_dir}" "${clone_dir}"; exit 1; }
-fi
-
-printf "\n%s\n" "${delimiter}"
-printf "Create and activate python venv"
-printf "\n%s\n" "${delimiter}"
-cd "${install_dir}"/"${clone_dir}"/ || { printf "\e[1m\e[31mERROR: Can't cd to %s/%s/, aborting...\e[0m" "${install_dir}" "${clone_dir}"; exit 1; }
-if [[ ! -d "${venv_dir}" ]]
-then
- "${python_cmd}" -m venv "${venv_dir}"
- first_launch=1
-fi
-# shellcheck source=/dev/null
-if [[ -f "${venv_dir}"/bin/activate ]]
-then
- source "${venv_dir}"/bin/activate
-else
- printf "\n%s\n" "${delimiter}"
- printf "\e[1m\e[31mERROR: Cannot activate python venv, aborting...\e[0m"
- printf "\n%s\n" "${delimiter}"
- exit 1
-fi
-
-printf "\n%s\n" "${delimiter}"
-printf "Launching launch.py..."
-printf "\n%s\n" "${delimiter}"
-"${python_cmd}" "${LAUNCH_SCRIPT}"
diff --git a/spaces/awacke1/Generative-AI-SOP/README.md b/spaces/awacke1/Generative-AI-SOP/README.md
deleted file mode 100644
index 30755214ea15f22c570f9486d0c09756d324bf33..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Generative-AI-SOP/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: AISOP-ChatGPT-Standard-Operating-Procedures
-emoji: ⚕️AISOP👩⚕️
-colorFrom: gray
-colorTo: red
-sdk: static
-pinned: false
-license: mit
----
-
-HTML5 Space: https://huggingface.co/spaces/awacke1/Generative-AI-SOP/
-Streamlit Space: https://huggingface.co/spaces/awacke1/Generative-AI-SOP
-Gradio ChatGPT Space: https://huggingface.co/spaces/awacke1/ChatGPT-SOP
diff --git a/spaces/awacke1/Map-California-AI/app.py b/spaces/awacke1/Map-California-AI/app.py
deleted file mode 100644
index 9adcf523d20ad51a9af52570111d7b0c96ae4903..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Map-California-AI/app.py
+++ /dev/null
@@ -1,70 +0,0 @@
-import streamlit as st
-import folium
-from streamlit_folium import folium_static
-from folium.plugins import MarkerCluster
-
-# Define California attractions data
-california_attractions = [
- ('The Getty Center', 34.0780, -118.4741, 'The Getty Center is known for its architecture, gardens, and views overlooking Los Angeles.'),
- ('Venice Beach', 33.9850, -118.4695, 'Venice Beach is famous for its oceanfront boardwalk and Muscle Beach gym.'),
- ('Santa Monica Pier', 34.0104, -118.4962, 'Santa Monica Pier features a range of entertainment, dining, and shopping experiences.'),
- ('Golden Gate Bridge', 37.8199, -122.4783, 'The Golden Gate Bridge is a suspension bridge spanning the Golden Gate, the entrance to San Francisco Bay.'),
- ('Yosemite National Park', 37.8651, -119.5383, 'Known for its waterfalls, deep valleys, and iconic view of El Capitan.'),
- ('Disneyland', 33.8121, -117.9190, 'Disneyland Resort, located in Anaheim, is the first of two theme parks built under the Disneyland umbrella.'),
- ('Napa Valley', 38.5025, -122.2654, 'Napa Valley is known for its world-class wineries.'),
- ('Lake Tahoe', 39.0968, -120.0324, 'Lake Tahoe is a large freshwater lake known for its clear blue water.'),
- ('Universal Studios', 34.1381, -118.3534, 'Universal Studios Hollywood includes a movie-based theme park and studios that offers tours.'),
- ('Alcatraz Island', 37.8267, -122.4230, 'Alcatraz Island is home to the abandoned prison and the site of the oldest operating lighthouse.')
-]
-
-# Create a map centered on California
-m = folium.Map(location=[36.7783, -119.4179], zoom_start=6)
-
-# Add markers for each attraction and add them to a MarkerCluster
-marker_cluster = MarkerCluster().add_to(m)
-for place in california_attractions:
- folium.Marker(
- location=[place[1], place[2]],
- popup=f'{place[0]} {place[3]}',
- icon=folium.Icon(color='green')
- ).add_to(marker_cluster)
-
-# Add PolyLine for paths between markers with animation
-locations = [place[1:3] for place in california_attractions]
-path = folium.PolyLine(locations, color='blue', opacity=0.8, weight=5, smooth_factor=0.5).add_to(m)
-folium.plugins.PolyLineTextPath(
- polyline=path,
- text='\u25BA',
- repeat=True,
- offset=6,
- attributes={'fill': 'blue', 'font-weight': 'bold', 'font-size': '12'}
-).add_to(path)
-
-folium_static(m)
-
-st.markdown("""
-# 🌞 California Attractions 🌴
-The map above shows the location of various attractions in California. Hover over the markers to learn more about each location.
-""")
-
-# Function to update the map when a button is clicked
-def update_map(place_data):
- m.location = [place_data[1], place_data[2]]
- m.zoom_start = 13
- folium_static(m)
-
-for i in range(0, len(california_attractions), 3):
- cols = st.columns(3)
- for j in range(3):
- if i + j < len(california_attractions):
- with cols[j]:
- if st.button(california_attractions[i + j][0]):
- update_map(california_attractions[i + j])
-folium_static(m)
-
-st.markdown("""
-## 🍷 Napa Valley: The Wine Wonderland 🍇
-Napa Valley, located in the heart of California, is synonymous with premium wines, fine dining, and breathtaking vistas. Not only is it a world-class wine-producing region, but it's also a paradise for foodies and outdoor enthusiasts. 🥂
-Whether you're a sommelier or a casual wine drinker, Napa Valley offers a wide range of experiences, from vineyard tours and wine-tasting sessions to hot air balloon rides over the scenic countryside. 🎈
-The valley is home to over 400 wineries, each with its own unique blend of grape varieties, production techniques, and flavors. 🍾
-""")
diff --git a/spaces/awacke1/Try.Playing.Learning.Sharing.On.This/index.html b/spaces/awacke1/Try.Playing.Learning.Sharing.On.This/index.html
deleted file mode 100644
index a7773ab3c64c5e1d939315fe5ca95fd552fd7212..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Try.Playing.Learning.Sharing.On.This/index.html
+++ /dev/null
@@ -1,12 +0,0 @@
-
-
-
Flappy Plane Swoop Sim
-
User input: WASD
-
This WebGL demo demonstrates PlayCanvas runnable in an HTML5 playable surface available anywhere your browser goes.
- Check it out here:🤗Love Huggingface for HTML5.
-
-... (Mirror #1) Download Crack Warcraft 3 Frozen Throne 126. March 24, 2018. Aeon Visualizer Platinum Crack MAXSPEED. March 23, 2018. 1fdad05405
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Body Of Lies Subtitles 720p Projector How to Stream the Action-Packed Film with Clear Subtitles.md b/spaces/bioriAsaeru/text-to-voice/Body Of Lies Subtitles 720p Projector How to Stream the Action-Packed Film with Clear Subtitles.md
deleted file mode 100644
index 1d3ac1defbe2a016f09160f53b55b455241f84d8..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Body Of Lies Subtitles 720p Projector How to Stream the Action-Packed Film with Clear Subtitles.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
On an individual level there are two issues of practicality and technology in trying to watch Branca de Neve: the majority of the film, compromises a pure black image, bookended and interstitial with a few images, a canvas onto which a \u2018radio-play\u2019 of Robert Walser\u2019s Branca de Neve \u2014 an anti-fairy tale version of Snow White \u2014 plays out. For those not fluent in Portuguese this results in a compromised experienced, not only is the black screen ruptured by the basically constant white of subtitles. But a purely auditory experience becomes one that is split between reading and listening. Famously Jean Marie Straub and Dani\u00E8le Huillet sought to, not bypass, but draw attention to this dilemma by occasionally allowing some passages to go by untranslated, stressing that likewise to read and not hear is an as important loss; that the auditory sensation of the spoken word can be meaningful and political beyond translation. A second issue is purely a visual, technological one. A celluloid black image can not be emulated by a digital screen, even less so outside of a dcp projector. I watched the film on my laptop; the TV in my household simply unable to render anything close to actual black, instead opting for a very dark brown with a constant shifting of digital artifacting. A 35mm copy of the film with english subtitles seems to exist. If it is possible and safe to I would very much like to try and program it.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Cinderella Story in Tamil PDF Free 14 - .md b/spaces/bioriAsaeru/text-to-voice/Cinderella Story in Tamil PDF Free 14 - .md
deleted file mode 100644
index a4b447eecbb3a193e54635c6668cf87dab948610..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Cinderella Story in Tamil PDF Free 14 - .md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
Boxer Tiny Pocket shirt Bioabsorbable Stents Market Expected to Witness Healthy Growth Rates During the Forecast 2019-2026 Bergen-Belsen-Prozess epub free nude teen images Global Peptide Synthesis Industry 2019-2026 Market Analysis, Trends, Growth, Demand, Regional Outlook and Forecast Research pictures of a infetade serena williams masterbate porn girl scout sex story Proof Disagree Unespied Footing Office Avoid Produced Granddads amongst Subsizar Mermaid With Harp January Woman The Soul Of Mermaid The Fire Of a Lioness T-Shirt.
rajadhi raja malayalam movie free download utorrent Andaz Apna Apna hd 720p movie download hajitha font 20 bus stop telugu movie free download 720p wespank real punishment of children x pert highscore plus download free singam 2 movie download tamilrockers 17 malayalam old kambi kathakal pdf download The Pool dual audio in hindi hd 720p torrent Resumen de la obra haces falta de carlos cuauhtemoc sanchez
-
Ghostly apparitions download aashiqui 2 tamil dubbed movie download Crack Slate Digital Fg x Virtual Mastering Processor Torrent lockout 2012 movie dual audio hindi eng free download torrent power plant engineering by gr nagpal pdf free 87 windows 7 boot disc porque los hombres aman a las cabronas book pdf gratis bam bam bole masti main dole video song free download american sniper tamil dubbed movie download powered by drbguestbook 596
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Download Terjemah Kitab Qurrotul Uyun Lengkap - Jejak Mufassir[2] This is another blog post that offers a link to download a PDF file of the translation of the book along with a short description and a request to support their YouTube channel..md b/spaces/bioriAsaeru/text-to-voice/Download Terjemah Kitab Qurrotul Uyun Lengkap - Jejak Mufassir[2] This is another blog post that offers a link to download a PDF file of the translation of the book along with a short description and a request to support their YouTube channel..md
deleted file mode 100644
index 16162299687cb406cd49a979b69e2a3bf033fcf3..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Download Terjemah Kitab Qurrotul Uyun Lengkap - Jejak Mufassir[2] This is another blog post that offers a link to download a PDF file of the translation of the book along with a short description and a request to support their YouTube channel..md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
waves diamond bundle 5 2 rtas plugins mac os x download game onet portable untuk pc free download Classics of Western Philosophy book pdf Verdetto finale italian song free download Mac OS X 10.5 Leopard Install DVD full iso image.rar.rar new headway intermediate student book cd free download the The Invasion italian dubbed free download crack key for cardrecovery v5.30 build 1206 download film 5 Maqbool introductory linear algebra by bernard kolman pdf free download
-
windows xp iso download deutsch Pro tools 12 torrent download Adobe Animate CC 2019 19.0.0 Crack .rar microsoft encarta kids 2009 free download full version.rar Download Khichdi - The Movie Hd Movie Torrentl Password.txt 1.4 KB.rar turbo fire free download utorrent for pc delhi belly 2011 br rip 720p Devil May Cry Hd Collection Style Switcher Mod Gifs gay mia khalifa porn, movies.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Gta Namaste America Game Download Full 46 What Makes GTA San Andreas Namaste America One of the Most Popular Games of All Time.md b/spaces/bioriAsaeru/text-to-voice/Gta Namaste America Game Download Full 46 What Makes GTA San Andreas Namaste America One of the Most Popular Games of All Time.md
deleted file mode 100644
index 9fc86a02cf9a847db9fa67c43fa0a53baa0a21d4..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Gta Namaste America Game Download Full 46 What Makes GTA San Andreas Namaste America One of the Most Popular Games of All Time.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
Use my savegame.Made on iOS v1.07 and works on v1.05 and up
CJ's weapon list Hands- UNARMED Pistol-Silenced 9mm Shotgun-Pump action shotgun Rifle-Sniper rifle Assault rifle-M4 Thrown-Satchel charges/ Remote explosives Camera Heavy Weapon-MINIGUN!! (everyone's favourite)
Money=999,999,999 max. All weapons have hitman stats Max respect Max lung capacity Max muscle Max gambling skill
Infinite sprinting Infinte ammo for all weapons (100% reward)
Completed stunt jumps and burglary missions which are optional and not required for 100%.
100% overview (most of u know but some don't):-
All storyline missions done Heist missions done Zero missions done All schools completed with gold and silver All races completed All side mission and odd jobs done All 29 safehouses purchased Ammunation shooting challenge completed All 30 vehicles brought for export All assets (trucking,quarry,valet,all courier mission) aquired New moves learnt from all gyms All horseshoes,oysters,tags,snapshots done
Most important:::::::-( current safehouse vehicles)
1. SWAT Tank in doherty garage(despite the last mission glitch) 2. FBI Truck in doherty garage which was never used in the game(with some clever save game modding) 3. Hotknife in doherty 4. PCJ-600 in doherty for bike lovers
Sorry,but i could also get the Andromada if the Verdant Meadows hangar would have been large enough.
Times wasted:2 (sorry 'bout that). Times busted:0
Must download!!!!!!!!!!!!
DOWLOAD FROM HERE:: If above does not work for you ,use this link then: =1GY9kqq_d3GM2nJN6G5hCxQXePHOBwVqg
Credits: Samutz,for providing permanent status yuvraj6122 for completing this savegame lol
-
In mid-June 2005, a software patch for the game dubbed the "Hot Coffee (mod)" was released by Patrick Wildenborg (under the Internet alias "PatrickW"), a 38-year-old modder from the Netherlands. The name "Hot Coffee" refers to the way the unmodified game alludes to unseen sex scenes. In the original version of the game, the player takes CJ's girlfriend to her front door, and she asks him if he would like to come in for "some coffee". He agrees, and the camera stays outside, swaying back and forth a bit, while moaning sounds are heard. After installing the patch, users can enter the main character's girlfriends' houses and engage in a crudely rendered, fully clothed sexual intercourse mini-game. The fallout from the controversy resulted in a public response from high-ranking politicians in the United States and elsewhere and resulted in the game's recall and re-release.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Jugalbandi download in hindi kickass 720p Watch the musical drama online.md b/spaces/bioriAsaeru/text-to-voice/Jugalbandi download in hindi kickass 720p Watch the musical drama online.md
deleted file mode 100644
index 7dac75a8673acc80b6d32c929cd8b1adf58b6221..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Jugalbandi download in hindi kickass 720p Watch the musical drama online.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Kaali Topi Laal Rumaal man 3 download full movie free
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/KMS Tools For Windows And Office All Versions !!TOP!!.md b/spaces/bioriAsaeru/text-to-voice/KMS Tools For Windows And Office All Versions !!TOP!!.md
deleted file mode 100644
index 12669f50a28797d2dcc3c1657a9328a0b33af7ae..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/KMS Tools For Windows And Office All Versions !!TOP!!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-It popped up in all sorts of places—for example, Microsoft Outlook 97 used VBScript as its ... KMS Tools Activator activates previous versions of Microsoft Office ... 1fdad05405
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Keygen [BETTER] Xf AutoCAD Map 3D 2016 X32 Exe.md b/spaces/bioriAsaeru/text-to-voice/Keygen [BETTER] Xf AutoCAD Map 3D 2016 X32 Exe.md
deleted file mode 100644
index 8743cc9041b3b60eb939efb05a50491d6db61e3a..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Keygen [BETTER] Xf AutoCAD Map 3D 2016 X32 Exe.md
+++ /dev/null
@@ -1,42 +0,0 @@
-
-
-We have one of the most complete and powerful 3D analysis software on the market, offering a wide range of powerful tools, a. Our 2016 product list includes software for AutoCAD and other Autodesk products. Autodesk 2016 products such as AutoCAD, AutoCAD LT, Inventor and Fusion 360, just to name a few. Learn more about Autodesk 2016 software, hardware, and training solutions.Q:
-
-How to use Spark to process millions of rows with slow JDBC query
-
-I'm a newbie of Apache Spark and Spark Streaming.
-
-I have a local SparkContext and SparkSession in my code, but I cannot connect to local driver, I tried to change the SparkUrl in config like:
-
-spark.conf.set("spark.jars.packages", "com.xxx:xxx:1.0")
-
-spark.conf.set("spark.master", "local")
-
-spark.conf.set("spark.sql.jars", "/home/xxx/Spark/spark-2.4.4-bin-hadoop2.7/jars/sqljars-1.2.1.jar")
-
-spark.conf.set("spark.sql.jars", "/home/xxx/Spark/spark-2.4.4-bin-hadoop2.7/jars/spark-sqljars-1.2.1.jar")
-
-spark.conf.set("spark.hadoop.fs.defaultFS", "file:///")
-
-spark.conf.set("spark.default.parallelism", "3")
-
-but still cannot connect to local driver, when I used Spark-shell to connect to local driver, got this error:
-
-18/03/06 12:35:29 ERROR SparkContext: Error initializing SparkContext.
-
-java.net.UnknownHostException: local
-
-How to connect Spark to local JDBC?
-
-A:
-
-Try changing the master to any valid Spark URL:
-
-from this:
-
-to this:
-
-spark.conf.set("spark.master 4fefd39f24
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Kim.Kardashian.Superstar.XXX.DVDRiP.XviD DivXfacTory.md b/spaces/bioriAsaeru/text-to-voice/Kim.Kardashian.Superstar.XXX.DVDRiP.XviD DivXfacTory.md
deleted file mode 100644
index 14c01a32cbba3c0e6209cf863226e7402a6ffe5d..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Kim.Kardashian.Superstar.XXX.DVDRiP.XviD DivXfacTory.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Las Culturas Precolombinas Henri Lehmann El libro que revela los secretos de las culturas precolombinas.md b/spaces/bioriAsaeru/text-to-voice/Las Culturas Precolombinas Henri Lehmann El libro que revela los secretos de las culturas precolombinas.md
deleted file mode 100644
index 6cd54a759b67c5a9a4c691d89a0ee91c650ada25..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Las Culturas Precolombinas Henri Lehmann El libro que revela los secretos de las culturas precolombinas.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-"""
-
-summarize_prompt = "你是谁?我们刚才聊了什么?" # 总结对话时的 prompt
-
-MODELS = [
- "gpt-3.5-turbo",
- "gpt-3.5-turbo-0301",
- "gpt-4",
- "gpt-4-0314",
- "gpt-4-32k",
- "gpt-4-32k-0314",
-]  # available models
-
-
-WEBSEARCH_PTOMPT_TEMPLATE = """\
-Web search results:
-
-{web_results}
-Current date: {current_date}
-
-Instructions: Using the provided web search results, write a comprehensive reply to the given query. Make sure to cite results using [[number](URL)] notation after the reference. If the provided search results refer to multiple subjects with the same name, write separate answers for each subject.
-Query: {query}
-Reply in 中文"""
-
-PROMPT_TEMPLATE = """\
-Context information is below.
----------------------
-{context_str}
----------------------
-Current date: {current_date}.
-Using the provided context information, write a comprehensive reply to the given query.
-Make sure to cite results using [number] notation after the reference.
-If the provided context information refer to multiple subjects with the same name, write separate answers for each subject.
-Use prior knowledge only if the given context didn't provide enough information.
-Answer the question: {query_str}
-Reply in 中文
-"""
-
-REFINE_TEMPLATE = """\
-The original question is as follows: {query_str}
-We have provided an existing answer: {existing_answer}
-We have the opportunity to refine the existing answer
-(only if needed) with some more context below.
-------------
-{context_msg}
-------------
-Given the new context, refine the original answer to better
-Answer in the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch.
-If the context isn't useful, return the original answer.
-"""
diff --git a/spaces/chendl/compositional_test/multimodal/open_flamingo/train/instruction_template.py b/spaces/chendl/compositional_test/multimodal/open_flamingo/train/instruction_template.py
deleted file mode 100644
index 4b449fd79a1d97241c33f0ea0d9eace91b63466d..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/multimodal/open_flamingo/train/instruction_template.py
+++ /dev/null
@@ -1,13 +0,0 @@
-VG_RELATION_TEMPLATES = [
- "Question: What is the relationship between<|#object#|> {nameA}<|#endofobject#|><|#visual#|><|#box#|><|#endofobject#|> and<|#object#|> {nameB}<|#endofobject#|><|#visual#|><|#box#|><|#endofobject#|>? Answer: {relation}.",
- "Question: What is the relationship between<|#object#|> {nameA}<|#endofobject#|><|#visual#|><|#box#|><|#endofobject#|> and<|#object#|> {nameB}<|#endofobject#|><|#visual#|><|#box#|><|#endofobject#|>? Answer:<|#object#|> {nameA}<|#endofobject#|><|#visual#|><|#box#|><|#endofobject#|> {use_is} {relation}<|#object#|> {nameB}<|#endofobject#|><|#visual#|><|#box#|><|#endofobject#|>.",
- "Question: What {is_or_does}<|#object#|> {nameA}<|#endofobject#|><|#visual#|><|#box#|><|#endofobject#|> {relation_do}? Answer:<|#object#|> {nameA}<|#endofobject#|><|#visual#|><|#box#|><|#endofobject#|> {use_is} {relation}<|#object#|>{nameB}<|#endofobject#|><|#visual#|><|#box#|><|#endofobject#|>.",
- "Question: What {use_is} {relation}<|#object#|> {nameB}<|#endofobject#|><|#visual#|><|#box#|><|#endofobject#|>? Answer:<|#object#|> {nameA}<|#endofobject#|><|#visual#|><|#box#|><|#endofobject#|> {use_is} {relation}<|#object#|> {nameB}<|#endofobject#|><|#visual#|><|#box#|><|#endofobject#|>.",
- "Question: What {is_or_does}<|#object#|> {nameA}<|#endofobject#|><|#visual#|><|#box#|><|#endofobject#|> {relation_do}? Answer:<|#object#|> {nameB}<|#endofobject#|><|#visual#|><|#box#|><|#endofobject#|>.",
- "Question: What {use_is} {relation}<|#object#|> {nameB}<|#endofobject#|><|#visual#|><|#box#|><|#endofobject#|>? Answer:<|#object#|> {nameA}<|#endofobject#|><|#visual#|><|#box#|><|#endofobject#|>.",
-]
-
-PISC_TEMPLATES = [
- "Question: What is the social relationship between this<|#object#|> person<|#endofobject#|><|#visual#|><|#box#|><|#endofobject#|> and that<|#object#|> person<|#endofobject#|><|#visual#|><|#box#|><|#endofobject#|>? Answer: {relation}.",
- "Question: What is the social relationship between these<|#object#|> people<|#endofobject#|><|#visual#|><|#box#|><|#box#|><|#endofobject#|>? Answer: {relation}.",
-]
diff --git a/spaces/chendl/compositional_test/multimodal/tools/convert_mmc4_to_wds.py b/spaces/chendl/compositional_test/multimodal/tools/convert_mmc4_to_wds.py
deleted file mode 100644
index 1798e89403b8cf7b5606176449b9e859fd82adbc..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/multimodal/tools/convert_mmc4_to_wds.py
+++ /dev/null
@@ -1,124 +0,0 @@
-import argparse
-import base64
-import json
-import os
-import tarfile
-import uuid
-import zipfile
-import time
-
-import braceexpand
-import webdataset as wds
-from tqdm import tqdm
-from tqdm.contrib.concurrent import process_map
-
-arg_parser = argparse.ArgumentParser()
-arg_parser.add_argument("--output_dir", type=str)
-arg_parser.add_argument(
- "--image_shards",
- type=str,
- help="Pass in a list of shards in the format path_to_shard/shard_{0..23098}_images_v2.tar",
-)
-arg_parser.add_argument(
- "--doc_shards",
- type=str,
- help="Pass in a list of shards in the format path_to_shard/docs_shard_{0..23098}_v2.jsonl.zip",
-)
-arg_parser.add_argument(
- "--thread",
- type=int,
- default=128,
-)
-args = arg_parser.parse_args()
-
-def get_txt_to_filename_dict(image_shards, disable_tqdm=False):
- txt_to_filename_dict = {}
- dataset = wds.WebDataset(image_shards).decode("pil").to_tuple("txt", "json")
- for data in tqdm(dataset, disable=disable_tqdm):
- txt = data[0].split(".")[0]
- txt_to_filename_dict[txt] = data[1]['key']
- return txt_to_filename_dict
-
-
-def single_thread(args):
- i = args["i"]
- output_dir = args["output_dir"]
- doc_shards = args["doc_shards"]
- image_shards = args["image_shards"]
- if i == 0:
- tqdm.write(f"output_dir: {output_dir}")
- tqdm.write(f"doc_shards: {doc_shards[:5]}")
- tqdm.write(f"image_shards: {image_shards[:5]}")
- with wds.ShardWriter(os.path.join(output_dir, "%09d.tar"), maxcount=1000) as sink:
- sink.verbose = False
- for doc_shard, image_shard in tqdm(zip(doc_shards, image_shards), disable=(i != 0), total=len(doc_shards)):
-            # build the per-shard image lookup; both names are used by extractfile() in the loop below
-            txt_to_filename_dict = get_txt_to_filename_dict(image_shard, disable_tqdm=(i != 0))
-            image_tar = tarfile.open(image_shard)
- # Open the ZIP archive and extract the JSON file
- with zipfile.ZipFile(doc_shard, "r") as zip_file:
- # Assumes the JSON file is the first file in the archive
- json_filename = zip_file.namelist()[0]
- with zip_file.open(json_filename, "r") as json_file:
- pbar = tqdm(json_file, disable=True)
- total_num = 0
- exist_num = 0
- for sample_data in pbar:
- # get image names from json
- sample_data = json.loads(sample_data)
- image_info = sample_data["image_info"]
- image_names = [image["image_name"] for image in image_info]
-
- # Add each image to the tar file
- for img_idx, image_name in enumerate(image_names):
- total_num += 1
- try:
- image = image_tar.extractfile(txt_to_filename_dict[image_name.split(".")[0]]+".jpg")
- # convert to base64
- image_bytes = image.read()
- image_base64 = base64.b64encode(image_bytes).decode("utf-8")
- exist_num += 1
- except:
- tqdm.write(f"{image_name.split('.')[0]}")
- image_base64 = "null"
- sample_data["image_info"][img_idx][
- "image_base64"
- ] = image_base64
-
- key_str = uuid.uuid4().hex
- sink.write({"__key__": key_str, "json": sample_data})
- pbar.set_description(f"{exist_num/total_num:.2f}")
-            image_tar.close()
-
-
-def main():
- timestamp = int(time.time())
- os.makedirs(args.output_dir, exist_ok=True)
- os.makedirs(os.path.join(args.output_dir, str(timestamp)), exist_ok=True)
- tasks = []
- for i in range(args.thread):
- thread_dir = os.path.join(args.output_dir, str(timestamp), str(i))
- os.makedirs(thread_dir, exist_ok=True)
- tasks.append({
- "i": i,
- "output_dir": thread_dir,
- "doc_shards": [],
- "image_shards": [],
- })
-
- doc_shards = list(braceexpand.braceexpand(args.doc_shards))
- image_shards = list(braceexpand.braceexpand(args.image_shards))
-
- assert len(doc_shards) == len(
- image_shards
- ), "Each doc shards must have a corresponding image shard"
-
- for i, (doc_shard, image_shard) in enumerate(zip(doc_shards, image_shards)):
- tasks[i % args.thread]["doc_shards"].append(doc_shard)
- tasks[i % args.thread]["image_shards"].append(image_shard)
-
- # assert len(tasks) == args.thread
- # process_map(single_thread, tasks, max_workers=args.thread, disable=True)
- single_thread(tasks[0])
-
-if __name__ == "__main__":
- main()
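-
-# ---------------------------------------------------------------------------
-# Invocation sketch (paths and shard ranges are placeholders; the brace patterns
-# are expanded with braceexpand exactly as described in the argparse help above):
-#
-#   python convert_mmc4_to_wds.py \
-#       --output_dir ./mmc4_wds \
-#       --doc_shards "path_to_shard/docs_shard_{0..23098}_v2.jsonl.zip" \
-#       --image_shards "path_to_shard/shard_{0..23098}_images_v2.tar" \
-#       --thread 128
-# ---------------------------------------------------------------------------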
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/dml/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/dml/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/misc/plistlib/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/misc/plistlib/__init__.py
deleted file mode 100644
index 066eef38fc720265366afee9a8cd415fc560459e..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/misc/plistlib/__init__.py
+++ /dev/null
@@ -1,681 +0,0 @@
-import collections.abc
-import re
-from typing import (
- Any,
- Callable,
- Dict,
- List,
- Mapping,
- MutableMapping,
- Optional,
- Sequence,
- Type,
- Union,
- IO,
-)
-import warnings
-from io import BytesIO
-from datetime import datetime
-from base64 import b64encode, b64decode
-from numbers import Integral
-from types import SimpleNamespace
-from functools import singledispatch
-
-from fontTools.misc import etree
-
-from fontTools.misc.textTools import tostr
-
-
-# By default, we
-# - deserialize <data> elements as bytes and
-# - serialize bytes as <data> elements.
-# Before, on Python 2, we
-# - deserialized <data> elements as plistlib.Data objects, in order to
-#   distinguish them from the built-in str type (which is bytes on python2)
-# - serialized bytes as <data> elements (they must have only contained
-#   ASCII characters in this case)
-# You can pass use_builtin_types=[True|False] to the load/dump etc. functions
-# to enforce a specific treatment.
-# NOTE that unicode type always maps to <string> element, and plistlib.Data
-# always maps to <data> element, regardless of use_builtin_types.
-USE_BUILTIN_TYPES = True
-
-XML_DECLARATION = b"""<?xml version='1.0' encoding='UTF-8'?>"""
-
-PLIST_DOCTYPE = (
-    b'<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"'
-    b' "http://www.apple.com/DTDs/PropertyList-1.0.dtd">'
-)
-
-
-# Date should conform to a subset of ISO 8601:
-# YYYY '-' MM '-' DD 'T' HH ':' MM ':' SS 'Z'
-_date_parser = re.compile(
-    r"(?P<year>\d\d\d\d)"
-    r"(?:-(?P<month>\d\d)"
-    r"(?:-(?P<day>\d\d)"
-    r"(?:T(?P<hour>\d\d)"
-    r"(?::(?P<minute>\d\d)"
-    r"(?::(?P<second>\d\d))"
-    r"?)?)?)?)?Z",
-    re.ASCII,
-)
-
-
-def _date_from_string(s: str) -> datetime:
- order = ("year", "month", "day", "hour", "minute", "second")
- m = _date_parser.match(s)
- if m is None:
-        raise ValueError(f"Expected ISO 8601 date string, but got {s!r}.")
- gd = m.groupdict()
- lst = []
- for key in order:
- val = gd[key]
- if val is None:
- break
- lst.append(int(val))
- # NOTE: mypy doesn't know that lst is 6 elements long.
- return datetime(*lst) # type:ignore
-
-
-def _date_to_string(d: datetime) -> str:
- return "%04d-%02d-%02dT%02d:%02d:%02dZ" % (
- d.year,
- d.month,
- d.day,
- d.hour,
- d.minute,
- d.second,
- )
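As a quick illustration of the ISO 8601 subset handled by the two helpers above (everything after the year is optional and a trailing `Z` is required), here is a small standalone sketch; the dates are made up.

```python
# Standalone illustration of the date format used above (values are made up).
from datetime import datetime

d = datetime(2021, 6, 15, 12, 30, 45)
s = "%04d-%02d-%02dT%02d:%02d:%02dZ" % (d.year, d.month, d.day, d.hour, d.minute, d.second)
print(s)  # 2021-06-15T12:30:45Z

# A date-only string such as "2021-06-15Z" is also accepted by the parser
# above: the missing time fields are simply not passed to datetime(), so
# they default to zero -> datetime(2021, 6, 15, 0, 0).
```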
-
-
-class Data:
- """Represents binary data when ``use_builtin_types=False.``
-
- This class wraps binary data loaded from a plist file when the
- ``use_builtin_types`` argument to the loading function (:py:func:`fromtree`,
- :py:func:`load`, :py:func:`loads`) is false.
-
- The actual binary data is retrieved using the ``data`` attribute.
- """
-
- def __init__(self, data: bytes) -> None:
- if not isinstance(data, bytes):
- raise TypeError("Expected bytes, found %s" % type(data).__name__)
- self.data = data
-
- @classmethod
- def fromBase64(cls, data: Union[bytes, str]) -> "Data":
- return cls(b64decode(data))
-
- def asBase64(self, maxlinelength: int = 76, indent_level: int = 1) -> bytes:
- return _encode_base64(
- self.data, maxlinelength=maxlinelength, indent_level=indent_level
- )
-
- def __eq__(self, other: Any) -> bool:
- if isinstance(other, self.__class__):
- return self.data == other.data
- elif isinstance(other, bytes):
- return self.data == other
- else:
- return NotImplemented
-
- def __repr__(self) -> str:
- return "%s(%s)" % (self.__class__.__name__, repr(self.data))
-
-
-def _encode_base64(
- data: bytes, maxlinelength: Optional[int] = 76, indent_level: int = 1
-) -> bytes:
- data = b64encode(data)
- if data and maxlinelength:
- # split into multiple lines right-justified to 'maxlinelength' chars
- indent = b"\n" + b" " * indent_level
- max_length = max(16, maxlinelength - len(indent))
- chunks = []
- for i in range(0, len(data), max_length):
- chunks.append(indent)
- chunks.append(data[i : i + max_length])
- chunks.append(indent)
- data = b"".join(chunks)
- return data
-
-
-# Mypy does not support recursive type aliases as of 0.782, Pylance does.
-# https://github.com/python/mypy/issues/731
-# https://devblogs.microsoft.com/python/pylance-introduces-five-new-features-that-enable-type-magic-for-python-developers/#1-support-for-recursive-type-aliases
-PlistEncodable = Union[
- bool,
- bytes,
- Data,
- datetime,
- float,
- Integral,
- Mapping[str, Any],
- Sequence[Any],
- str,
-]
-
-
-class PlistTarget:
- """Event handler using the ElementTree Target API that can be
-    passed to an XMLParser to produce property list objects from XML.
- It is based on the CPython plistlib module's _PlistParser class,
- but does not use the expat parser.
-
- >>> from fontTools.misc import etree
- >>> parser = etree.XMLParser(target=PlistTarget())
- >>> result = etree.XML(
-    ...     "<dict>"
-    ...     "    <key>something</key>"
-    ...     "    <string>blah</string>"
-    ...     "</dict>",
- ... parser=parser)
- >>> result == {"something": "blah"}
- True
-
- Links:
- https://github.com/python/cpython/blob/main/Lib/plistlib.py
- http://lxml.de/parsing.html#the-target-parser-interface
- """
-
- def __init__(
- self,
- use_builtin_types: Optional[bool] = None,
- dict_type: Type[MutableMapping[str, Any]] = dict,
- ) -> None:
- self.stack: List[PlistEncodable] = []
- self.current_key: Optional[str] = None
- self.root: Optional[PlistEncodable] = None
- if use_builtin_types is None:
- self._use_builtin_types = USE_BUILTIN_TYPES
- else:
- if use_builtin_types is False:
- warnings.warn(
- "Setting use_builtin_types to False is deprecated and will be "
- "removed soon.",
- DeprecationWarning,
- )
- self._use_builtin_types = use_builtin_types
- self._dict_type = dict_type
-
- def start(self, tag: str, attrib: Mapping[str, str]) -> None:
- self._data: List[str] = []
- handler = _TARGET_START_HANDLERS.get(tag)
- if handler is not None:
- handler(self)
-
- def end(self, tag: str) -> None:
- handler = _TARGET_END_HANDLERS.get(tag)
- if handler is not None:
- handler(self)
-
- def data(self, data: str) -> None:
- self._data.append(data)
-
- def close(self) -> PlistEncodable:
- if self.root is None:
- raise ValueError("No root set.")
- return self.root
-
- # helpers
-
- def add_object(self, value: PlistEncodable) -> None:
- if self.current_key is not None:
- stack_top = self.stack[-1]
- if not isinstance(stack_top, collections.abc.MutableMapping):
- raise ValueError("unexpected element: %r" % stack_top)
- stack_top[self.current_key] = value
- self.current_key = None
- elif not self.stack:
- # this is the root object
- self.root = value
- else:
- stack_top = self.stack[-1]
- if not isinstance(stack_top, list):
- raise ValueError("unexpected element: %r" % stack_top)
- stack_top.append(value)
-
- def get_data(self) -> str:
- data = "".join(self._data)
- self._data = []
- return data
-
-
-# event handlers
-
-
-def start_dict(self: PlistTarget) -> None:
- d = self._dict_type()
- self.add_object(d)
- self.stack.append(d)
-
-
-def end_dict(self: PlistTarget) -> None:
- if self.current_key:
- raise ValueError("missing value for key '%s'" % self.current_key)
- self.stack.pop()
-
-
-def end_key(self: PlistTarget) -> None:
- if self.current_key or not isinstance(self.stack[-1], collections.abc.Mapping):
- raise ValueError("unexpected key")
- self.current_key = self.get_data()
-
-
-def start_array(self: PlistTarget) -> None:
- a: List[PlistEncodable] = []
- self.add_object(a)
- self.stack.append(a)
-
-
-def end_array(self: PlistTarget) -> None:
- self.stack.pop()
-
-
-def end_true(self: PlistTarget) -> None:
- self.add_object(True)
-
-
-def end_false(self: PlistTarget) -> None:
- self.add_object(False)
-
-
-def end_integer(self: PlistTarget) -> None:
- self.add_object(int(self.get_data()))
-
-
-def end_real(self: PlistTarget) -> None:
- self.add_object(float(self.get_data()))
-
-
-def end_string(self: PlistTarget) -> None:
- self.add_object(self.get_data())
-
-
-def end_data(self: PlistTarget) -> None:
- if self._use_builtin_types:
- self.add_object(b64decode(self.get_data()))
- else:
- self.add_object(Data.fromBase64(self.get_data()))
-
-
-def end_date(self: PlistTarget) -> None:
- self.add_object(_date_from_string(self.get_data()))
-
-
-_TARGET_START_HANDLERS: Dict[str, Callable[[PlistTarget], None]] = {
- "dict": start_dict,
- "array": start_array,
-}
-
-_TARGET_END_HANDLERS: Dict[str, Callable[[PlistTarget], None]] = {
- "dict": end_dict,
- "array": end_array,
- "key": end_key,
- "true": end_true,
- "false": end_false,
- "integer": end_integer,
- "real": end_real,
- "string": end_string,
- "data": end_data,
- "date": end_date,
-}
-
-
-# functions to build element tree from plist data
-
-
-def _string_element(value: str, ctx: SimpleNamespace) -> etree.Element:
- el = etree.Element("string")
- el.text = value
- return el
-
-
-def _bool_element(value: bool, ctx: SimpleNamespace) -> etree.Element:
- if value:
- return etree.Element("true")
- return etree.Element("false")
-
-
-def _integer_element(value: int, ctx: SimpleNamespace) -> etree.Element:
- if -1 << 63 <= value < 1 << 64:
- el = etree.Element("integer")
- el.text = "%d" % value
- return el
- raise OverflowError(value)
-
-
-def _real_element(value: float, ctx: SimpleNamespace) -> etree.Element:
- el = etree.Element("real")
- el.text = repr(value)
- return el
-
-
-def _dict_element(
- d: Mapping[str, PlistEncodable], ctx: SimpleNamespace
-) -> etree.Element:
- el = etree.Element("dict")
- items = d.items()
- if ctx.sort_keys:
- items = sorted(items) # type: ignore
- ctx.indent_level += 1
- for key, value in items:
- if not isinstance(key, str):
- if ctx.skipkeys:
- continue
- raise TypeError("keys must be strings")
- k = etree.SubElement(el, "key")
- k.text = tostr(key, "utf-8")
- el.append(_make_element(value, ctx))
- ctx.indent_level -= 1
- return el
-
-
-def _array_element(
- array: Sequence[PlistEncodable], ctx: SimpleNamespace
-) -> etree.Element:
- el = etree.Element("array")
- if len(array) == 0:
- return el
- ctx.indent_level += 1
- for value in array:
- el.append(_make_element(value, ctx))
- ctx.indent_level -= 1
- return el
-
-
-def _date_element(date: datetime, ctx: SimpleNamespace) -> etree.Element:
- el = etree.Element("date")
- el.text = _date_to_string(date)
- return el
-
-
-def _data_element(data: bytes, ctx: SimpleNamespace) -> etree.Element:
- el = etree.Element("data")
- # NOTE: mypy is confused about whether el.text should be str or bytes.
- el.text = _encode_base64( # type: ignore
- data,
- maxlinelength=(76 if ctx.pretty_print else None),
- indent_level=ctx.indent_level,
- )
- return el
-
-
-def _string_or_data_element(raw_bytes: bytes, ctx: SimpleNamespace) -> etree.Element:
- if ctx.use_builtin_types:
- return _data_element(raw_bytes, ctx)
- else:
- try:
- string = raw_bytes.decode(encoding="ascii", errors="strict")
- except UnicodeDecodeError:
- raise ValueError(
- "invalid non-ASCII bytes; use unicode string instead: %r" % raw_bytes
- )
- return _string_element(string, ctx)
-
-
-# The following is probably not entirely correct. The signature should take `Any`
-# and return `NoReturn`. At the time of this writing, neither mypy nor Pyright
-# can deal with singledispatch properly and will apply the signature of the base
-# function to all others. Being slightly dishonest makes it type-check and return
-# usable typing information for the optimistic case.
-@singledispatch
-def _make_element(value: PlistEncodable, ctx: SimpleNamespace) -> etree.Element:
- raise TypeError("unsupported type: %s" % type(value))
-
-
-_make_element.register(str)(_string_element)
-_make_element.register(bool)(_bool_element)
-_make_element.register(Integral)(_integer_element)
-_make_element.register(float)(_real_element)
-_make_element.register(collections.abc.Mapping)(_dict_element)
-_make_element.register(list)(_array_element)
-_make_element.register(tuple)(_array_element)
-_make_element.register(datetime)(_date_element)
-_make_element.register(bytes)(_string_or_data_element)
-_make_element.register(bytearray)(_data_element)
-_make_element.register(Data)(lambda v, ctx: _data_element(v.data, ctx))
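The registration block above relies on `functools.singledispatch` picking an implementation from the runtime type of the first argument (with an exact registration such as `bool` taking precedence over a broader one such as `Integral`). A generic, self-contained sketch of the same pattern, using hypothetical names rather than the module's own functions:

```python
# Generic sketch of the singledispatch pattern used by _make_element above.
from functools import singledispatch

@singledispatch
def describe(value):
    # base case: mirrors the "unsupported type" fallback above
    raise TypeError("unsupported type: %s" % type(value))

@describe.register(bool)
def _(value):
    return "bool %r" % value

@describe.register(int)
def _(value):
    return "int %d" % value

print(describe(True))  # bool True  (exact bool registration wins over int)
print(describe(42))    # int 42
```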
-
-
-# Public functions to create an element tree from plist-compatible Python
-# data structures and vice versa, for use when (de)serializing GLIF XML.
-
-
-def totree(
- value: PlistEncodable,
- sort_keys: bool = True,
- skipkeys: bool = False,
- use_builtin_types: Optional[bool] = None,
- pretty_print: bool = True,
- indent_level: int = 1,
-) -> etree.Element:
- """Convert a value derived from a plist into an XML tree.
-
- Args:
- value: Any kind of value to be serialized to XML.
- sort_keys: Whether keys of dictionaries should be sorted.
- skipkeys (bool): Whether to silently skip non-string dictionary
- keys.
- use_builtin_types (bool): If true, byte strings will be
- encoded in Base-64 and wrapped in a ``data`` tag; if
- false, they will be either stored as ASCII strings or an
- exception raised if they cannot be decoded as such. Defaults
- to ``True`` if not present. Deprecated.
- pretty_print (bool): Whether to indent the output.
- indent_level (int): Level of indentation when serializing.
-
- Returns: an ``etree`` ``Element`` object.
-
- Raises:
- ``TypeError``
- if non-string dictionary keys are serialized
- and ``skipkeys`` is false.
- ``ValueError``
- if non-ASCII binary data is present
- and `use_builtin_types` is false.
- """
- if use_builtin_types is None:
- use_builtin_types = USE_BUILTIN_TYPES
- else:
- use_builtin_types = use_builtin_types
- context = SimpleNamespace(
- sort_keys=sort_keys,
- skipkeys=skipkeys,
- use_builtin_types=use_builtin_types,
- pretty_print=pretty_print,
- indent_level=indent_level,
- )
- return _make_element(value, context)
-
-
-def fromtree(
- tree: etree.Element,
- use_builtin_types: Optional[bool] = None,
- dict_type: Type[MutableMapping[str, Any]] = dict,
-) -> Any:
- """Convert an XML tree to a plist structure.
-
- Args:
- tree: An ``etree`` ``Element``.
- use_builtin_types: If True, binary data is deserialized to
- bytes strings. If False, it is wrapped in :py:class:`Data`
- objects. Defaults to True if not provided. Deprecated.
- dict_type: What type to use for dictionaries.
-
- Returns: An object (usually a dictionary).
- """
- target = PlistTarget(use_builtin_types=use_builtin_types, dict_type=dict_type)
- for action, element in etree.iterwalk(tree, events=("start", "end")):
- if action == "start":
- target.start(element.tag, element.attrib)
- elif action == "end":
- # if there are no children, parse the leaf's data
- if not len(element):
- # always pass str, not None
- target.data(element.text or "")
- target.end(element.tag)
- return target.close()
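A short round trip through the two functions above, assuming fontTools is installed (its XML backend, `fontTools.misc.etree`, prefers lxml when available); the dictionary contents are illustrative only.

```python
# Sketch: serialize a plist-compatible dict to an XML tree and back.
from fontTools.misc import etree
from fontTools.misc.plistlib import totree, fromtree

lib = {"public.glyphOrder": ["A", "B"], "unitsPerEm": 1000}  # illustrative data
element = totree(lib)                      # an etree <dict> Element
print(etree.tostring(element))             # b'<dict><key>public.glyphOrder</key>...'
assert fromtree(element) == lib            # parsed back into builtin types
```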
-
-
-# python3 plistlib API
-
-
-def load(
- fp: IO[bytes],
- use_builtin_types: Optional[bool] = None,
- dict_type: Type[MutableMapping[str, Any]] = dict,
-) -> Any:
- """Load a plist file into an object.
-
- Args:
- fp: An opened file.
- use_builtin_types: If True, binary data is deserialized to
- bytes strings. If False, it is wrapped in :py:class:`Data`
- objects. Defaults to True if not provided. Deprecated.
- dict_type: What type to use for dictionaries.
-
- Returns:
- An object (usually a dictionary) representing the top level of
- the plist file.
- """
-
- if not hasattr(fp, "read"):
- raise AttributeError("'%s' object has no attribute 'read'" % type(fp).__name__)
- target = PlistTarget(use_builtin_types=use_builtin_types, dict_type=dict_type)
- parser = etree.XMLParser(target=target)
- result = etree.parse(fp, parser=parser)
- # lxml returns the target object directly, while ElementTree wraps
- # it as the root of an ElementTree object
- try:
- return result.getroot()
- except AttributeError:
- return result
-
-
-def loads(
- value: bytes,
- use_builtin_types: Optional[bool] = None,
- dict_type: Type[MutableMapping[str, Any]] = dict,
-) -> Any:
- """Load a plist file from a string into an object.
-
- Args:
- value: A bytes string containing a plist.
- use_builtin_types: If True, binary data is deserialized to
- bytes strings. If False, it is wrapped in :py:class:`Data`
- objects. Defaults to True if not provided. Deprecated.
- dict_type: What type to use for dictionaries.
-
- Returns:
- An object (usually a dictionary) representing the top level of
- the plist file.
- """
-
- fp = BytesIO(value)
- return load(fp, use_builtin_types=use_builtin_types, dict_type=dict_type)
-
-
-def dump(
- value: PlistEncodable,
- fp: IO[bytes],
- sort_keys: bool = True,
- skipkeys: bool = False,
- use_builtin_types: Optional[bool] = None,
- pretty_print: bool = True,
-) -> None:
- """Write a Python object to a plist file.
-
- Args:
- value: An object to write.
- fp: A file opened for writing.
- sort_keys (bool): Whether keys of dictionaries should be sorted.
- skipkeys (bool): Whether to silently skip non-string dictionary
- keys.
- use_builtin_types (bool): If true, byte strings will be
- encoded in Base-64 and wrapped in a ``data`` tag; if
- false, they will be either stored as ASCII strings or an
-            exception raised if they cannot be represented. Defaults
-            to ``True`` if not present. Deprecated.
- pretty_print (bool): Whether to indent the output.
- indent_level (int): Level of indentation when serializing.
-
- Raises:
- ``TypeError``
- if non-string dictionary keys are serialized
- and ``skipkeys`` is false.
- ``ValueError``
- if non-representable binary data is present
- and `use_builtin_types` is false.
- """
-
- if not hasattr(fp, "write"):
- raise AttributeError("'%s' object has no attribute 'write'" % type(fp).__name__)
- root = etree.Element("plist", version="1.0")
- el = totree(
- value,
- sort_keys=sort_keys,
- skipkeys=skipkeys,
- use_builtin_types=use_builtin_types,
- pretty_print=pretty_print,
- )
- root.append(el)
- tree = etree.ElementTree(root)
- # we write the doctype ourselves instead of using the 'doctype' argument
-    # of the 'write' method, because lxml will force adding a '\n' even when
- # pretty_print is False.
- if pretty_print:
- header = b"\n".join((XML_DECLARATION, PLIST_DOCTYPE, b""))
- else:
- header = XML_DECLARATION + PLIST_DOCTYPE
- fp.write(header)
- tree.write( # type: ignore
- fp,
- encoding="utf-8",
- pretty_print=pretty_print,
- xml_declaration=False,
- )
-
-
-def dumps(
- value: PlistEncodable,
- sort_keys: bool = True,
- skipkeys: bool = False,
- use_builtin_types: Optional[bool] = None,
- pretty_print: bool = True,
-) -> bytes:
- """Write a Python object to a string in plist format.
-
- Args:
- value: An object to write.
- sort_keys (bool): Whether keys of dictionaries should be sorted.
- skipkeys (bool): Whether to silently skip non-string dictionary
- keys.
- use_builtin_types (bool): If true, byte strings will be
- encoded in Base-64 and wrapped in a ``data`` tag; if
- false, they will be either stored as strings or an
-            exception raised if they cannot be represented. Defaults
-            to ``True`` if not present. Deprecated.
- pretty_print (bool): Whether to indent the output.
- indent_level (int): Level of indentation when serializing.
-
- Returns:
-        bytes: A plist representation of the Python object.
-
- Raises:
- ``TypeError``
- if non-string dictionary keys are serialized
- and ``skipkeys`` is false.
- ``ValueError``
- if non-representable binary data is present
- and `use_builtin_types` is false.
- """
- fp = BytesIO()
- dump(
- value,
- fp,
- sort_keys=sort_keys,
- skipkeys=skipkeys,
- use_builtin_types=use_builtin_types,
- pretty_print=pretty_print,
- )
- return fp.getvalue()
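Finally, a sketch of the file-level API defined above (a `dumps`/`loads` round trip), again assuming fontTools is installed and using made-up values.

```python
# Sketch: dumps() produces the XML plist bytes, loads() parses them back.
from datetime import datetime
from fontTools.misc.plistlib import dumps, loads

info = {
    "familyName": "Example Sans",                        # -> <string>
    "unitsPerEm": 1000,                                  # -> <integer>
    "italicAngle": -12.5,                                # -> <real>
    "openTypeHeadCreated": datetime(2020, 1, 1, 12, 0),  # -> <date>
    "note": b"raw bytes",                                 # -> <data> (base64)
}

blob = dumps(info)   # bytes starting with the XML declaration and plist doctype
back = loads(blob)   # back to builtin Python types (use_builtin_types defaults to True)
assert back["unitsPerEm"] == 1000 and back["note"] == b"raw bytes"
```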
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Blocks-adc2d4ca.js b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Blocks-adc2d4ca.js
deleted file mode 100644
index 2abf91f03ae5b16129b648c0a77937cc1c559c8d..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Blocks-adc2d4ca.js
+++ /dev/null
@@ -1,50 +0,0 @@
-const VERSION_RE = new RegExp("3.36.1/", "g");function import_fix(mod, base) {const url = new URL(mod, base); return import(`https://gradio.s3-us-west-2.amazonaws.com/3.36.1/${url.pathname?.startsWith('/') ? url.pathname.substring(1).replace(VERSION_RE, "") : url.pathname.replace(VERSION_RE, "")}`);}import{n as $,i as $o,a as Ko,l as el,c as tl,d as nl,g as rl,w as vt,b as Le,_ as F,S as ue,e as ce,s as fe,f as Bt,h as De,j as Ht,k as W,m as de,o as Z,p as y,q as il,r as ol,t as ll,u as le,v as N,x as Y,y as ae,z as B,A as E,B as $e,C as Et,D as al,E as sl,F as Ae,G as oe,H as xn,I as ul,J as be,K as d,L as ge,M as m,N as k,O as M,P as I,Q as Se,R as q,T as Ue,U as Ye,V as Ie,W as cl,X as fl,Y as Lt,Z as _l,$ as hl,a0 as pl,a1 as dl,a2 as ml,a3 as gl,a4 as bl,a5 as vl,a6 as El,a7 as yl}from"./index-f877dfd5.js";import{B as yt,a as Sl,c as wl,f as jt}from"./Button-11a87b79.js";function Tl(e,t,n,r){if(!t)return $;const i=e.getBoundingClientRect();if(t.left===i.left&&t.right===i.right&&t.top===i.top&&t.bottom===i.bottom)return $;const{delay:o=0,duration:a=300,easing:l=$o,start:u=Ko()+o,end:s=u+a,tick:c=$,css:h}=n(e,{from:t,to:i},r);let _=!0,p=!1,v;function b(){h&&(v=tl(e,0,1,a,o,l,h)),o||(p=!0)}function g(){h&&nl(e,v),_=!1}return el(S=>{if(!p&&S>=u&&(p=!0),p&&S>=s&&(c(1,0),g()),!_)return!1;if(p){const A=S-u,T=0+1*l(A/a);c(T,1-T)}return!0}),b(),c(0,1),g}function Il(e){const t=getComputedStyle(e);if(t.position!=="absolute"&&t.position!=="fixed"){const{width:n,height:r}=t,i=e.getBoundingClientRect();e.style.position="absolute",e.style.width=n,e.style.height=r,Al(e,i)}}function Al(e,t){const n=e.getBoundingClientRect();if(t.left!==n.left||t.top!==n.top){const r=getComputedStyle(e),i=r.transform==="none"?"":r.transform;e.style.transform=`${i} translate(${t.left-n.left}px, ${t.top-n.top}px)`}}var kl=function(t){return Cl(t)&&!Pl(t)};function Cl(e){return!!e&&typeof e=="object"}function Pl(e){var t=Object.prototype.toString.call(e);return t==="[object RegExp]"||t==="[object Date]"||Hl(e)}var Ol=typeof Symbol=="function"&&Symbol.for,Bl=Ol?Symbol.for("react.element"):60103;function Hl(e){return e.$$typeof===Bl}function Ll(e){return Array.isArray(e)?[]:{}}function Fe(e,t){return t.clone!==!1&&t.isMergeableObject(e)?Pe(Ll(e),e,t):e}function jl(e,t,n){return e.concat(t).map(function(r){return Fe(r,n)})}function Nl(e,t){if(!t.customMerge)return Pe;var n=t.customMerge(e);return typeof n=="function"?n:Pe}function xl(e){return Object.getOwnPropertySymbols?Object.getOwnPropertySymbols(e).filter(function(t){return Object.propertyIsEnumerable.call(e,t)}):[]}function Nt(e){return Object.keys(e).concat(xl(e))}function Rn(e,t){try{return t in e}catch{return!1}}function Rl(e,t){return Rn(e,t)&&!(Object.hasOwnProperty.call(e,t)&&Object.propertyIsEnumerable.call(e,t))}function Ml(e,t,n){var r={};return n.isMergeableObject(e)&&Nt(e).forEach(function(i){r[i]=Fe(e[i],n)}),Nt(t).forEach(function(i){Rl(e,i)||(Rn(e,i)&&n.isMergeableObject(t[i])?r[i]=Nl(i,n)(e[i],t[i],n):r[i]=Fe(t[i],n))}),r}function Pe(e,t,n){n=n||{},n.arrayMerge=n.arrayMerge||jl,n.isMergeableObject=n.isMergeableObject||kl,n.cloneUnlessOtherwiseSpecified=Fe;var r=Array.isArray(t),i=Array.isArray(e),o=r===i;return o?r?n.arrayMerge(e,t,n):Ml(e,t,n):Fe(t,n)}Pe.all=function(t,n){if(!Array.isArray(t))throw new Error("first argument should be an array");return t.reduce(function(r,i){return Pe(r,i,n)},{})};var Dl=Pe,Fl=Dl;const Gl=rl(Fl);var ft=function(e,t){return ft=Object.setPrototypeOf||{__proto__:[]}instanceof 
Array&&function(n,r){n.__proto__=r}||function(n,r){for(var i in r)Object.prototype.hasOwnProperty.call(r,i)&&(n[i]=r[i])},ft(e,t)};function Ke(e,t){if(typeof t!="function"&&t!==null)throw new TypeError("Class extends value "+String(t)+" is not a constructor or null");ft(e,t);function n(){this.constructor=e}e.prototype=t===null?Object.create(t):(n.prototype=t.prototype,new n)}var X=function(){return X=Object.assign||function(t){for(var n,r=1,i=arguments.length;r0}),n=[],r=0,i=t;r1)throw new RangeError("integer-width stems only accept a single optional option");i.options[0].replace(Yl,function(u,s,c,h,_,p){if(s)t.minimumIntegerDigits=c.length;else{if(h&&_)throw new Error("We currently do not support maximum integer digits");if(p)throw new Error("We currently do not support exact integer digits")}return""});continue}if(Wn.test(i.stem)){t.minimumIntegerDigits=i.stem.length;continue}if(Rt.test(i.stem)){if(i.options.length>1)throw new RangeError("Fraction-precision stems only accept a single optional option");i.stem.replace(Rt,function(u,s,c,h,_,p){return c==="*"?t.minimumFractionDigits=s.length:h&&h[0]==="#"?t.maximumFractionDigits=h.length:_&&p?(t.minimumFractionDigits=_.length,t.maximumFractionDigits=_.length+p.length):(t.minimumFractionDigits=s.length,t.maximumFractionDigits=s.length),""});var o=i.options[0];o==="w"?t=X(X({},t),{trailingZeroDisplay:"stripIfInteger"}):o&&(t=X(X({},t),Mt(o)));continue}if(Xn.test(i.stem)){t=X(X({},t),Mt(i.stem));continue}var a=Zn(i.stem);a&&(t=X(X({},t),a));var l=Jl(i.stem);l&&(t=X(X({},t),l))}return t}var qe={AX:["H"],BQ:["H"],CP:["H"],CZ:["H"],DK:["H"],FI:["H"],ID:["H"],IS:["H"],ML:["H"],NE:["H"],RU:["H"],SE:["H"],SJ:["H"],SK:["H"],AS:["h","H"],BT:["h","H"],DJ:["h","H"],ER:["h","H"],GH:["h","H"],IN:["h","H"],LS:["h","H"],PG:["h","H"],PW:["h","H"],SO:["h","H"],TO:["h","H"],VU:["h","H"],WS:["h","H"],"001":["H","h"],AL:["h","H","hB"],TD:["h","H","hB"],"ca-ES":["H","h","hB"],CF:["H","h","hB"],CM:["H","h","hB"],"fr-CA":["H","h","hB"],"gl-ES":["H","h","hB"],"it-CH":["H","h","hB"],"it-IT":["H","h","hB"],LU:["H","h","hB"],NP:["H","h","hB"],PF:["H","h","hB"],SC:["H","h","hB"],SM:["H","h","hB"],SN:["H","h","hB"],TF:["H","h","hB"],VA:["H","h","hB"],CY:["h","H","hb","hB"],GR:["h","H","hb","hB"],CO:["h","H","hB","hb"],DO:["h","H","hB","hb"],KP:["h","H","hB","hb"],KR:["h","H","hB","hb"],NA:["h","H","hB","hb"],PA:["h","H","hB","hb"],PR:["h","H","hB","hb"],VE:["h","H","hB","hb"],AC:["H","h","hb","hB"],AI:["H","h","hb","hB"],BW:["H","h","hb","hB"],BZ:["H","h","hb","hB"],CC:["H","h","hb","hB"],CK:["H","h","hb","hB"],CX:["H","h","hb","hB"],DG:["H","h","hb","hB"],FK:["H","h","hb","hB"],GB:["H","h","hb","hB"],GG:["H","h","hb","hB"],GI:["H","h","hb","hB"],IE:["H","h","hb","hB"],IM:["H","h","hb","hB"],IO:["H","h","hb","hB"],JE:["H","h","hb","hB"],LT:["H","h","hb","hB"],MK:["H","h","hb","hB"],MN:["H","h","hb","hB"],MS:["H","h","hb","hB"],NF:["H","h","hb","hB"],NG:["H","h","hb","hB"],NR:["H","h","hb","hB"],NU:["H","h","hb","hB"],PN:["H","h","hb","hB"],SH:["H","h","hb","hB"],SX:["H","h","hb","hB"],TA:["H","h","hb","hB"],ZA:["H","h","hb","hB"],"af-ZA":["H","h","hB","hb"],AR:["H","h","hB","hb"],CL:["H","h","hB","hb"],CR:["H","h","hB","hb"],CU:["H","h","hB","hb"],EA:["H","h","hB","hb"],"es-BO":["H","h","hB","hb"],"es-BR":["H","h","hB","hb"],"es-EC":["H","h","hB","hb"],"es-ES":["H","h","hB","hb"],"es-GQ":["H","h","hB","hb"],"es-PE":["H","h","hB","hb"],GT:["H","h","hB","hb"],HN:["H","h","hB","hb"],IC:["H","h","hB","hb"],KG:["H","h","hB","hb"],KM:["H","h","hB","hb"],LK:["H","h","hB","hb"],MA
:["H","h","hB","hb"],MX:["H","h","hB","hb"],NI:["H","h","hB","hb"],PY:["H","h","hB","hb"],SV:["H","h","hB","hb"],UY:["H","h","hB","hb"],JP:["H","h","K"],AD:["H","hB"],AM:["H","hB"],AO:["H","hB"],AT:["H","hB"],AW:["H","hB"],BE:["H","hB"],BF:["H","hB"],BJ:["H","hB"],BL:["H","hB"],BR:["H","hB"],CG:["H","hB"],CI:["H","hB"],CV:["H","hB"],DE:["H","hB"],EE:["H","hB"],FR:["H","hB"],GA:["H","hB"],GF:["H","hB"],GN:["H","hB"],GP:["H","hB"],GW:["H","hB"],HR:["H","hB"],IL:["H","hB"],IT:["H","hB"],KZ:["H","hB"],MC:["H","hB"],MD:["H","hB"],MF:["H","hB"],MQ:["H","hB"],MZ:["H","hB"],NC:["H","hB"],NL:["H","hB"],PM:["H","hB"],PT:["H","hB"],RE:["H","hB"],RO:["H","hB"],SI:["H","hB"],SR:["H","hB"],ST:["H","hB"],TG:["H","hB"],TR:["H","hB"],WF:["H","hB"],YT:["H","hB"],BD:["h","hB","H"],PK:["h","hB","H"],AZ:["H","hB","h"],BA:["H","hB","h"],BG:["H","hB","h"],CH:["H","hB","h"],GE:["H","hB","h"],LI:["H","hB","h"],ME:["H","hB","h"],RS:["H","hB","h"],UA:["H","hB","h"],UZ:["H","hB","h"],XK:["H","hB","h"],AG:["h","hb","H","hB"],AU:["h","hb","H","hB"],BB:["h","hb","H","hB"],BM:["h","hb","H","hB"],BS:["h","hb","H","hB"],CA:["h","hb","H","hB"],DM:["h","hb","H","hB"],"en-001":["h","hb","H","hB"],FJ:["h","hb","H","hB"],FM:["h","hb","H","hB"],GD:["h","hb","H","hB"],GM:["h","hb","H","hB"],GU:["h","hb","H","hB"],GY:["h","hb","H","hB"],JM:["h","hb","H","hB"],KI:["h","hb","H","hB"],KN:["h","hb","H","hB"],KY:["h","hb","H","hB"],LC:["h","hb","H","hB"],LR:["h","hb","H","hB"],MH:["h","hb","H","hB"],MP:["h","hb","H","hB"],MW:["h","hb","H","hB"],NZ:["h","hb","H","hB"],SB:["h","hb","H","hB"],SG:["h","hb","H","hB"],SL:["h","hb","H","hB"],SS:["h","hb","H","hB"],SZ:["h","hb","H","hB"],TC:["h","hb","H","hB"],TT:["h","hb","H","hB"],UM:["h","hb","H","hB"],US:["h","hb","H","hB"],VC:["h","hb","H","hB"],VG:["h","hb","H","hB"],VI:["h","hb","H","hB"],ZM:["h","hb","H","hB"],BO:["H","hB","h","hb"],EC:["H","hB","h","hb"],ES:["H","hB","h","hb"],GQ:["H","hB","h","hb"],PE:["H","hB","h","hb"],AE:["h","hB","hb","H"],"ar-001":["h","hB","hb","H"],BH:["h","hB","hb","H"],DZ:["h","hB","hb","H"],EG:["h","hB","hb","H"],EH:["h","hB","hb","H"],HK:["h","hB","hb","H"],IQ:["h","hB","hb","H"],JO:["h","hB","hb","H"],KW:["h","hB","hb","H"],LB:["h","hB","hb","H"],LY:["h","hB","hb","H"],MO:["h","hB","hb","H"],MR:["h","hB","hb","H"],OM:["h","hB","hb","H"],PH:["h","hB","hb","H"],PS:["h","hB","hb","H"],QA:["h","hB","hb","H"],SA:["h","hB","hb","H"],SD:["h","hB","hb","H"],SY:["h","hB","hb","H"],TN:["h","hB","hb","H"],YE:["h","hB","hb","H"],AF:["H","hb","hB","h"],LA:["H","hb","hB","h"],CN:["H","hB","hb","h"],LV:["H","hB","hb","h"],TL:["H","hB","hb","h"],"zu-ZA":["H","hB","hb","h"],CD:["hB","H"],IR:["hB","H"],"hi-IN":["hB","h","H"],"kn-IN":["hB","h","H"],"ml-IN":["hB","h","H"],"te-IN":["hB","h","H"],KH:["hB","h","H","hb"],"ta-IN":["hB","h","hb","H"],BN:["hb","hB","h","H"],MY:["hb","hB","h","H"],ET:["hB","hb","h","H"],"gu-IN":["hB","hb","h","H"],"mr-IN":["hB","hb","h","H"],"pa-IN":["hB","hb","h","H"],TW:["hB","hb","h","H"],KE:["hB","hb","H","h"],MM:["hB","hb","H","h"],TZ:["hB","hb","H","h"],UG:["hB","hb","H","h"]};function $l(e,t){for(var n="",r=0;r>1),u="a",s=Kl(t);for((s=="H"||s=="k")&&(l=0);l-- >0;)n+=u;for(;a-- >0;)n=s+n}else i==="J"?n+="H":n+=i}return n}function Kl(e){var t=e.hourCycle;if(t===void 0&&e.hourCycles&&e.hourCycles.length&&(t=e.hourCycles[0]),t)switch(t){case"h24":return"k";case"h23":return"H";case"h12":return"h";case"h11":return"K";default:throw new Error("Invalid hourCycle")}var n=e.language,r;n!=="root"&&(r=e.maximize().region);var 
i=qe[r||""]||qe[n||""]||qe["".concat(n,"-001")]||qe["001"];return i[0]}var lt,ea=new RegExp("^".concat(qn.source,"*")),ta=new RegExp("".concat(qn.source,"*$"));function z(e,t){return{start:e,end:t}}var na=!!String.prototype.startsWith,ra=!!String.fromCodePoint,ia=!!Object.fromEntries,oa=!!String.prototype.codePointAt,la=!!String.prototype.trimStart,aa=!!String.prototype.trimEnd,sa=!!Number.isSafeInteger,ua=sa?Number.isSafeInteger:function(e){return typeof e=="number"&&isFinite(e)&&Math.floor(e)===e&&Math.abs(e)<=9007199254740991},ht=!0;try{var ca=Jn("([^\\p{White_Space}\\p{Pattern_Syntax}]*)","yu");ht=((lt=ca.exec("a"))===null||lt===void 0?void 0:lt[0])==="a"}catch{ht=!1}var Ft=na?function(t,n,r){return t.startsWith(n,r)}:function(t,n,r){return t.slice(r,r+n.length)===n},pt=ra?String.fromCodePoint:function(){for(var t=[],n=0;no;){if(a=t[o++],a>1114111)throw RangeError(a+" is not a valid code point");r+=a<65536?String.fromCharCode(a):String.fromCharCode(((a-=65536)>>10)+55296,a%1024+56320)}return r},Gt=ia?Object.fromEntries:function(t){for(var n={},r=0,i=t;r=r)){var i=t.charCodeAt(n),o;return i<55296||i>56319||n+1===r||(o=t.charCodeAt(n+1))<56320||o>57343?i:(i-55296<<10)+(o-56320)+65536}},fa=la?function(t){return t.trimStart()}:function(t){return t.replace(ea,"")},_a=aa?function(t){return t.trimEnd()}:function(t){return t.replace(ta,"")};function Jn(e,t){return new RegExp(e,t)}var dt;if(ht){var Ut=Jn("([^\\p{White_Space}\\p{Pattern_Syntax}]*)","yu");dt=function(t,n){var r;Ut.lastIndex=n;var i=Ut.exec(t);return(r=i[1])!==null&&r!==void 0?r:""}}else dt=function(t,n){for(var r=[];;){var i=Yn(t,n);if(i===void 0||Qn(i)||ma(i))break;r.push(i),n+=i>=65536?2:1}return pt.apply(void 0,r)};var ha=function(){function e(t,n){n===void 0&&(n={}),this.message=t,this.position={offset:0,line:1,column:1},this.ignoreTag=!!n.ignoreTag,this.locale=n.locale,this.requiresOtherClause=!!n.requiresOtherClause,this.shouldParseSkeletons=!!n.shouldParseSkeletons}return e.prototype.parse=function(){if(this.offset()!==0)throw Error("parser can only be used once");return this.parseMessage(0,"",!1)},e.prototype.parseMessage=function(t,n,r){for(var i=[];!this.isEOF();){var o=this.char();if(o===123){var a=this.parseArgument(t,r);if(a.err)return a;i.push(a.val)}else{if(o===125&&t>0)break;if(o===35&&(n==="plural"||n==="selectordinal")){var l=this.clonePosition();this.bump(),i.push({type:ee.pound,location:z(l,this.clonePosition())})}else if(o===60&&!this.ignoreTag&&this.peek()===47){if(r)break;return this.error(V.UNMATCHED_CLOSING_TAG,z(this.clonePosition(),this.clonePosition()))}else if(o===60&&!this.ignoreTag&&mt(this.peek()||0)){var a=this.parseTag(t,n);if(a.err)return a;i.push(a.val)}else{var a=this.parseLiteral(t,n);if(a.err)return a;i.push(a.val)}}}return{val:i,err:null}},e.prototype.parseTag=function(t,n){var r=this.clonePosition();this.bump();var i=this.parseTagName();if(this.bumpSpace(),this.bumpIf("/>"))return{val:{type:ee.literal,value:"<".concat(i,"/>"),location:z(r,this.clonePosition())},err:null};if(this.bumpIf(">")){var o=this.parseMessage(t+1,n,!0);if(o.err)return o;var a=o.val,l=this.clonePosition();if(this.bumpIf("")){if(this.isEOF()||!mt(this.char()))return this.error(V.INVALID_TAG,z(l,this.clonePosition()));var u=this.clonePosition(),s=this.parseTagName();return i!==s?this.error(V.UNMATCHED_CLOSING_TAG,z(u,this.clonePosition())):(this.bumpSpace(),this.bumpIf(">")?{val:{type:ee.tag,value:i,children:a,location:z(r,this.clonePosition())},err:null}:this.error(V.INVALID_TAG,z(l,this.clonePosition())))}else return 
this.error(V.UNCLOSED_TAG,z(r,this.clonePosition()))}else return this.error(V.INVALID_TAG,z(r,this.clonePosition()))},e.prototype.parseTagName=function(){var t=this.offset();for(this.bump();!this.isEOF()&&da(this.char());)this.bump();return this.message.slice(t,this.offset())},e.prototype.parseLiteral=function(t,n){for(var r=this.clonePosition(),i="";;){var o=this.tryParseQuote(n);if(o){i+=o;continue}var a=this.tryParseUnquoted(t,n);if(a){i+=a;continue}var l=this.tryParseLeftAngleBracket();if(l){i+=l;continue}break}var u=z(r,this.clonePosition());return{val:{type:ee.literal,value:i,location:u},err:null}},e.prototype.tryParseLeftAngleBracket=function(){return!this.isEOF()&&this.char()===60&&(this.ignoreTag||!pa(this.peek()||0))?(this.bump(),"<"):null},e.prototype.tryParseQuote=function(t){if(this.isEOF()||this.char()!==39)return null;switch(this.peek()){case 39:return this.bump(),this.bump(),"'";case 123:case 60:case 62:case 125:break;case 35:if(t==="plural"||t==="selectordinal")break;return null;default:return null}this.bump();var n=[this.char()];for(this.bump();!this.isEOF();){var r=this.char();if(r===39)if(this.peek()===39)n.push(39),this.bump();else{this.bump();break}else n.push(r);this.bump()}return pt.apply(void 0,n)},e.prototype.tryParseUnquoted=function(t,n){if(this.isEOF())return null;var r=this.char();return r===60||r===123||r===35&&(n==="plural"||n==="selectordinal")||r===125&&t>0?null:(this.bump(),pt(r))},e.prototype.parseArgument=function(t,n){var r=this.clonePosition();if(this.bump(),this.bumpSpace(),this.isEOF())return this.error(V.EXPECT_ARGUMENT_CLOSING_BRACE,z(r,this.clonePosition()));if(this.char()===125)return this.bump(),this.error(V.EMPTY_ARGUMENT,z(r,this.clonePosition()));var i=this.parseIdentifierIfPossible().value;if(!i)return this.error(V.MALFORMED_ARGUMENT,z(r,this.clonePosition()));if(this.bumpSpace(),this.isEOF())return this.error(V.EXPECT_ARGUMENT_CLOSING_BRACE,z(r,this.clonePosition()));switch(this.char()){case 125:return this.bump(),{val:{type:ee.argument,value:i,location:z(r,this.clonePosition())},err:null};case 44:return this.bump(),this.bumpSpace(),this.isEOF()?this.error(V.EXPECT_ARGUMENT_CLOSING_BRACE,z(r,this.clonePosition())):this.parseArgumentOptions(t,n,i,r);default:return this.error(V.MALFORMED_ARGUMENT,z(r,this.clonePosition()))}},e.prototype.parseIdentifierIfPossible=function(){var t=this.clonePosition(),n=this.offset(),r=dt(this.message,n),i=n+r.length;this.bumpTo(i);var o=this.clonePosition(),a=z(t,o);return{value:r,location:a}},e.prototype.parseArgumentOptions=function(t,n,r,i){var o,a=this.clonePosition(),l=this.parseIdentifierIfPossible().value,u=this.clonePosition();switch(l){case"":return this.error(V.EXPECT_ARGUMENT_TYPE,z(a,u));case"number":case"date":case"time":{this.bumpSpace();var s=null;if(this.bumpIf(",")){this.bumpSpace();var c=this.clonePosition(),h=this.parseSimpleArgStyleIfPossible();if(h.err)return h;var _=_a(h.val);if(_.length===0)return this.error(V.EXPECT_ARGUMENT_STYLE,z(this.clonePosition(),this.clonePosition()));var p=z(c,this.clonePosition());s={style:_,styleLocation:p}}var v=this.tryParseArgumentClose(i);if(v.err)return v;var b=z(i,this.clonePosition());if(s&&Ft(s?.style,"::",0)){var g=fa(s.style.slice(2));if(l==="number"){var h=this.parseNumberSkeletonFromString(g,s.styleLocation);return h.err?h:{val:{type:ee.number,value:r,location:b,style:h.val},err:null}}else{if(g.length===0)return this.error(V.EXPECT_DATE_TIME_SKELETON,b);var S=g;this.locale&&(S=$l(g,this.locale));var 
_={type:Oe.dateTime,pattern:S,location:s.styleLocation,parsedOptions:this.shouldParseSkeletons?ql(S):{}},A=l==="date"?ee.date:ee.time;return{val:{type:A,value:r,location:b,style:_},err:null}}}return{val:{type:l==="number"?ee.number:l==="date"?ee.date:ee.time,value:r,location:b,style:(o=s?.style)!==null&&o!==void 0?o:null},err:null}}case"plural":case"selectordinal":case"select":{var T=this.clonePosition();if(this.bumpSpace(),!this.bumpIf(","))return this.error(V.EXPECT_SELECT_ARGUMENT_OPTIONS,z(T,X({},T)));this.bumpSpace();var f=this.parseIdentifierIfPossible(),P=0;if(l!=="select"&&f.value==="offset"){if(!this.bumpIf(":"))return this.error(V.EXPECT_PLURAL_ARGUMENT_OFFSET_VALUE,z(this.clonePosition(),this.clonePosition()));this.bumpSpace();var h=this.tryParseDecimalInteger(V.EXPECT_PLURAL_ARGUMENT_OFFSET_VALUE,V.INVALID_PLURAL_ARGUMENT_OFFSET_VALUE);if(h.err)return h;this.bumpSpace(),f=this.parseIdentifierIfPossible(),P=h.val}var H=this.tryParsePluralOrSelectOptions(t,l,n,f);if(H.err)return H;var v=this.tryParseArgumentClose(i);if(v.err)return v;var L=z(i,this.clonePosition());return l==="select"?{val:{type:ee.select,value:r,options:Gt(H.val),location:L},err:null}:{val:{type:ee.plural,value:r,options:Gt(H.val),offset:P,pluralType:l==="plural"?"cardinal":"ordinal",location:L},err:null}}default:return this.error(V.INVALID_ARGUMENT_TYPE,z(a,u))}},e.prototype.tryParseArgumentClose=function(t){return this.isEOF()||this.char()!==125?this.error(V.EXPECT_ARGUMENT_CLOSING_BRACE,z(t,this.clonePosition())):(this.bump(),{val:!0,err:null})},e.prototype.parseSimpleArgStyleIfPossible=function(){for(var t=0,n=this.clonePosition();!this.isEOF();){var r=this.char();switch(r){case 39:{this.bump();var i=this.clonePosition();if(!this.bumpUntil("'"))return this.error(V.UNCLOSED_QUOTE_IN_ARGUMENT_STYLE,z(i,this.clonePosition()));this.bump();break}case 123:{t+=1,this.bump();break}case 125:{if(t>0)t-=1;else return{val:this.message.slice(n.offset,this.offset()),err:null};break}default:this.bump();break}}return{val:this.message.slice(n.offset,this.offset()),err:null}},e.prototype.parseNumberSkeletonFromString=function(t,n){var r=[];try{r=Wl(t)}catch{return this.error(V.INVALID_NUMBER_SKELETON,n)}return{val:{type:Oe.number,tokens:r,location:n,parsedOptions:this.shouldParseSkeletons?Ql(r):{}},err:null}},e.prototype.tryParsePluralOrSelectOptions=function(t,n,r,i){for(var o,a=!1,l=[],u=new Set,s=i.value,c=i.location;;){if(s.length===0){var h=this.clonePosition();if(n!=="select"&&this.bumpIf("=")){var _=this.tryParseDecimalInteger(V.EXPECT_PLURAL_ARGUMENT_SELECTOR,V.INVALID_PLURAL_ARGUMENT_SELECTOR);if(_.err)return _;c=z(h,this.clonePosition()),s=this.message.slice(h.offset,this.offset())}else break}if(u.has(s))return this.error(n==="select"?V.DUPLICATE_SELECT_ARGUMENT_SELECTOR:V.DUPLICATE_PLURAL_ARGUMENT_SELECTOR,c);s==="other"&&(a=!0),this.bumpSpace();var p=this.clonePosition();if(!this.bumpIf("{"))return this.error(n==="select"?V.EXPECT_SELECT_ARGUMENT_SELECTOR_FRAGMENT:V.EXPECT_PLURAL_ARGUMENT_SELECTOR_FRAGMENT,z(this.clonePosition(),this.clonePosition()));var v=this.parseMessage(t+1,n,r);if(v.err)return v;var b=this.tryParseArgumentClose(p);if(b.err)return b;l.push([s,{value:v.val,location:z(p,this.clonePosition())}]),u.add(s),this.bumpSpace(),o=this.parseIdentifierIfPossible(),s=o.value,c=o.location}return 
l.length===0?this.error(n==="select"?V.EXPECT_SELECT_ARGUMENT_SELECTOR:V.EXPECT_PLURAL_ARGUMENT_SELECTOR,z(this.clonePosition(),this.clonePosition())):this.requiresOtherClause&&!a?this.error(V.MISSING_OTHER_CLAUSE,z(this.clonePosition(),this.clonePosition())):{val:l,err:null}},e.prototype.tryParseDecimalInteger=function(t,n){var r=1,i=this.clonePosition();this.bumpIf("+")||this.bumpIf("-")&&(r=-1);for(var o=!1,a=0;!this.isEOF();){var l=this.char();if(l>=48&&l<=57)o=!0,a=a*10+(l-48),this.bump();else break}var u=z(i,this.clonePosition());return o?(a*=r,ua(a)?{val:a,err:null}:this.error(n,u)):this.error(t,u)},e.prototype.offset=function(){return this.position.offset},e.prototype.isEOF=function(){return this.offset()===this.message.length},e.prototype.clonePosition=function(){return{offset:this.position.offset,line:this.position.line,column:this.position.column}},e.prototype.char=function(){var t=this.position.offset;if(t>=this.message.length)throw Error("out of bound");var n=Yn(this.message,t);if(n===void 0)throw Error("Offset ".concat(t," is at invalid UTF-16 code unit boundary"));return n},e.prototype.error=function(t,n){return{val:null,err:{kind:t,message:this.message,location:n}}},e.prototype.bump=function(){if(!this.isEOF()){var t=this.char();t===10?(this.position.line+=1,this.position.column=1,this.position.offset+=1):(this.position.column+=1,this.position.offset+=t<65536?1:2)}},e.prototype.bumpIf=function(t){if(Ft(this.message,t,this.offset())){for(var n=0;n=0?(this.bumpTo(r),!0):(this.bumpTo(this.message.length),!1)},e.prototype.bumpTo=function(t){if(this.offset()>t)throw Error("targetOffset ".concat(t," must be greater than or equal to the current offset ").concat(this.offset()));for(t=Math.min(t,this.message.length);;){var n=this.offset();if(n===t)break;if(n>t)throw Error("targetOffset ".concat(t," is at invalid UTF-16 code unit boundary"));if(this.bump(),this.isEOF())break}},e.prototype.bumpSpace=function(){for(;!this.isEOF()&&Qn(this.char());)this.bump()},e.prototype.peek=function(){if(this.isEOF())return null;var t=this.char(),n=this.offset(),r=this.message.charCodeAt(n+(t>=65536?2:1));return r??null},e}();function mt(e){return e>=97&&e<=122||e>=65&&e<=90}function pa(e){return mt(e)||e===47}function da(e){return e===45||e===46||e>=48&&e<=57||e===95||e>=97&&e<=122||e>=65&&e<=90||e==183||e>=192&&e<=214||e>=216&&e<=246||e>=248&&e<=893||e>=895&&e<=8191||e>=8204&&e<=8205||e>=8255&&e<=8256||e>=8304&&e<=8591||e>=11264&&e<=12271||e>=12289&&e<=55295||e>=63744&&e<=64975||e>=65008&&e<=65533||e>=65536&&e<=983039}function Qn(e){return e>=9&&e<=13||e===32||e===133||e>=8206&&e<=8207||e===8232||e===8233}function ma(e){return 
e>=33&&e<=35||e===36||e>=37&&e<=39||e===40||e===41||e===42||e===43||e===44||e===45||e>=46&&e<=47||e>=58&&e<=59||e>=60&&e<=62||e>=63&&e<=64||e===91||e===92||e===93||e===94||e===96||e===123||e===124||e===125||e===126||e===161||e>=162&&e<=165||e===166||e===167||e===169||e===171||e===172||e===174||e===176||e===177||e===182||e===187||e===191||e===215||e===247||e>=8208&&e<=8213||e>=8214&&e<=8215||e===8216||e===8217||e===8218||e>=8219&&e<=8220||e===8221||e===8222||e===8223||e>=8224&&e<=8231||e>=8240&&e<=8248||e===8249||e===8250||e>=8251&&e<=8254||e>=8257&&e<=8259||e===8260||e===8261||e===8262||e>=8263&&e<=8273||e===8274||e===8275||e>=8277&&e<=8286||e>=8592&&e<=8596||e>=8597&&e<=8601||e>=8602&&e<=8603||e>=8604&&e<=8607||e===8608||e>=8609&&e<=8610||e===8611||e>=8612&&e<=8613||e===8614||e>=8615&&e<=8621||e===8622||e>=8623&&e<=8653||e>=8654&&e<=8655||e>=8656&&e<=8657||e===8658||e===8659||e===8660||e>=8661&&e<=8691||e>=8692&&e<=8959||e>=8960&&e<=8967||e===8968||e===8969||e===8970||e===8971||e>=8972&&e<=8991||e>=8992&&e<=8993||e>=8994&&e<=9e3||e===9001||e===9002||e>=9003&&e<=9083||e===9084||e>=9085&&e<=9114||e>=9115&&e<=9139||e>=9140&&e<=9179||e>=9180&&e<=9185||e>=9186&&e<=9254||e>=9255&&e<=9279||e>=9280&&e<=9290||e>=9291&&e<=9311||e>=9472&&e<=9654||e===9655||e>=9656&&e<=9664||e===9665||e>=9666&&e<=9719||e>=9720&&e<=9727||e>=9728&&e<=9838||e===9839||e>=9840&&e<=10087||e===10088||e===10089||e===10090||e===10091||e===10092||e===10093||e===10094||e===10095||e===10096||e===10097||e===10098||e===10099||e===10100||e===10101||e>=10132&&e<=10175||e>=10176&&e<=10180||e===10181||e===10182||e>=10183&&e<=10213||e===10214||e===10215||e===10216||e===10217||e===10218||e===10219||e===10220||e===10221||e===10222||e===10223||e>=10224&&e<=10239||e>=10240&&e<=10495||e>=10496&&e<=10626||e===10627||e===10628||e===10629||e===10630||e===10631||e===10632||e===10633||e===10634||e===10635||e===10636||e===10637||e===10638||e===10639||e===10640||e===10641||e===10642||e===10643||e===10644||e===10645||e===10646||e===10647||e===10648||e>=10649&&e<=10711||e===10712||e===10713||e===10714||e===10715||e>=10716&&e<=10747||e===10748||e===10749||e>=10750&&e<=11007||e>=11008&&e<=11055||e>=11056&&e<=11076||e>=11077&&e<=11078||e>=11079&&e<=11084||e>=11085&&e<=11123||e>=11124&&e<=11125||e>=11126&&e<=11157||e===11158||e>=11159&&e<=11263||e>=11776&&e<=11777||e===11778||e===11779||e===11780||e===11781||e>=11782&&e<=11784||e===11785||e===11786||e===11787||e===11788||e===11789||e>=11790&&e<=11798||e===11799||e>=11800&&e<=11801||e===11802||e===11803||e===11804||e===11805||e>=11806&&e<=11807||e===11808||e===11809||e===11810||e===11811||e===11812||e===11813||e===11814||e===11815||e===11816||e===11817||e>=11818&&e<=11822||e===11823||e>=11824&&e<=11833||e>=11834&&e<=11835||e>=11836&&e<=11839||e===11840||e===11841||e===11842||e>=11843&&e<=11855||e>=11856&&e<=11857||e===11858||e>=11859&&e<=11903||e>=12289&&e<=12291||e===12296||e===12297||e===12298||e===12299||e===12300||e===12301||e===12302||e===12303||e===12304||e===12305||e>=12306&&e<=12307||e===12308||e===12309||e===12310||e===12311||e===12312||e===12313||e===12314||e===12315||e===12316||e===12317||e>=12318&&e<=12319||e===12320||e===12336||e===64830||e===64831||e>=65093&&e<=65094}function gt(e){e.forEach(function(t){if(delete t.location,Gn(t)||Un(t))for(var n in t.options)delete t.options[n].location,gt(t.options[n].value);else Mn(t)&&zn(t.style)||(Dn(t)||Fn(t))&&_t(t.style)?delete t.style.location:Vn(t)&>(t.children)})}function ga(e,t){t===void 
0&&(t={}),t=X({shouldParseSkeletons:!0,requiresOtherClause:!0},t);var n=new ha(e,t).parse();if(n.err){var r=SyntaxError(V[n.err.kind]);throw r.location=n.err.location,r.originalMessage=n.err.message,r}return t?.captureLocation||gt(n.val),n.val}function at(e,t){var n=t&&t.cache?t.cache:wa,r=t&&t.serializer?t.serializer:Sa,i=t&&t.strategy?t.strategy:va;return i(e,{cache:n,serializer:r})}function ba(e){return e==null||typeof e=="number"||typeof e=="boolean"}function $n(e,t,n,r){var i=ba(r)?r:n(r),o=t.get(i);return typeof o>"u"&&(o=e.call(this,r),t.set(i,o)),o}function Kn(e,t,n){var r=Array.prototype.slice.call(arguments,3),i=n(r),o=t.get(i);return typeof o>"u"&&(o=e.apply(this,r),t.set(i,o)),o}function St(e,t,n,r,i){return n.bind(t,e,r,i)}function va(e,t){var n=e.length===1?$n:Kn;return St(e,this,n,t.cache.create(),t.serializer)}function Ea(e,t){return St(e,this,Kn,t.cache.create(),t.serializer)}function ya(e,t){return St(e,this,$n,t.cache.create(),t.serializer)}var Sa=function(){return JSON.stringify(arguments)};function wt(){this.cache=Object.create(null)}wt.prototype.get=function(e){return this.cache[e]};wt.prototype.set=function(e,t){this.cache[e]=t};var wa={create:function(){return new wt}},st={variadic:Ea,monadic:ya},Be;(function(e){e.MISSING_VALUE="MISSING_VALUE",e.INVALID_VALUE="INVALID_VALUE",e.MISSING_INTL_API="MISSING_INTL_API"})(Be||(Be={}));var et=function(e){Ke(t,e);function t(n,r,i){var o=e.call(this,n)||this;return o.code=r,o.originalMessage=i,o}return t.prototype.toString=function(){return"[formatjs Error: ".concat(this.code,"] ").concat(this.message)},t}(Error),Vt=function(e){Ke(t,e);function t(n,r,i,o){return e.call(this,'Invalid values for "'.concat(n,'": "').concat(r,'". Options are "').concat(Object.keys(i).join('", "'),'"'),Be.INVALID_VALUE,o)||this}return t}(et),Ta=function(e){Ke(t,e);function t(n,r,i){return e.call(this,'Value for "'.concat(n,'" must be of type ').concat(r),Be.INVALID_VALUE,i)||this}return t}(et),Ia=function(e){Ke(t,e);function t(n,r){return e.call(this,'The intl string context variable "'.concat(n,'" was not provided to the string "').concat(r,'"'),Be.MISSING_VALUE,r)||this}return t}(et),he;(function(e){e[e.literal=0]="literal",e[e.object=1]="object"})(he||(he={}));function Aa(e){return e.length<2?e:e.reduce(function(t,n){var r=t[t.length-1];return!r||r.type!==he.literal||n.type!==he.literal?t.push(n):r.value+=n.value,t},[])}function ka(e){return typeof e=="function"}function Xe(e,t,n,r,i,o,a){if(e.length===1&&xt(e[0]))return[{type:he.literal,value:e[0].value}];for(var l=[],u=0,s=e;u0?new Intl.Locale(n[0]):new Intl.Locale(typeof t=="string"?t:t[0])},e.__parse=ga,e.formats={number:{integer:{maximumFractionDigits:0},currency:{style:"currency"},percent:{style:"percent"}},date:{short:{month:"numeric",day:"numeric",year:"2-digit"},medium:{month:"short",day:"numeric",year:"numeric"},long:{month:"long",day:"numeric",year:"numeric"},full:{weekday:"long",month:"long",day:"numeric",year:"numeric"}},time:{short:{hour:"numeric",minute:"numeric"},medium:{hour:"numeric",minute:"numeric",second:"numeric"},long:{hour:"numeric",minute:"numeric",second:"numeric",timeZoneName:"short"},full:{hour:"numeric",minute:"numeric",second:"numeric",timeZoneName:"short"}}},e}();const ye={},Ha=(e,t,n)=>n&&(t in ye||(ye[t]={}),e in ye[t]||(ye[t][e]=n),n),er=(e,t)=>{if(t==null)return;if(t in ye&&e in ye[t])return ye[t][e];const n=ze(t);for(let r=0;r0){const u=o.slice(l,o.length).join(".");if(u in a){a=a[u];break}}a=a[o[l]]}else a=void 0;return a}(n,t)}function nr(e,...t){delete 
ye[e],Ve.update(n=>(n[e]=Gl.all([n[e]||{},...t]),n))}Le([Ve],([e])=>Object.keys(e));Ve.subscribe(e=>Tt=e);const We={};function rr(e){return We[e]}function Je(e){return e!=null&&ze(e).some(t=>{var n;return(n=rr(t))===null||n===void 0?void 0:n.size})}function ja(e,t){return Promise.all(t.map(r=>(function(i,o){We[i].delete(o),We[i].size===0&&delete We[i]}(e,r),r().then(i=>i.default||i)))).then(r=>nr(e,...r))}const xe={};function ir(e){if(!Je(e))return e in xe?xe[e]:Promise.resolve();const t=function(n){return ze(n).map(r=>{const i=rr(r);return[r,i?[...i]:[]]}).filter(([,r])=>r.length>0)}(e);return xe[e]=Promise.all(t.map(([n,r])=>ja(n,r))).then(()=>{if(Je(e))return ir(e);delete xe[e]}),xe[e]}function Na({locale:e,id:t}){console.warn(`[svelte-i18n] The message "${t}" was not found in "${ze(e).join('", "')}".${Je(we())?`
-
-Note: there are at least one loader still registered to this locale that wasn't executed.`:""}`)}const Re={fallbackLocale:null,loadingDelay:200,formats:{number:{scientific:{notation:"scientific"},engineering:{notation:"engineering"},compactLong:{notation:"compact",compactDisplay:"long"},compactShort:{notation:"compact",compactDisplay:"short"}},date:{short:{month:"numeric",day:"numeric",year:"2-digit"},medium:{month:"short",day:"numeric",year:"numeric"},long:{month:"long",day:"numeric",year:"numeric"},full:{weekday:"long",month:"long",day:"numeric",year:"numeric"}},time:{short:{hour:"numeric",minute:"numeric"},medium:{hour:"numeric",minute:"numeric",second:"numeric"},long:{hour:"numeric",minute:"numeric",second:"numeric",timeZoneName:"short"},full:{hour:"numeric",minute:"numeric",second:"numeric",timeZoneName:"short"}}},warnOnMissingMessages:!0,handleMissingMessage:void 0,ignoreTag:!0};function He(){return Re}function xa(e){const{formats:t,...n}=e,r=e.initialLocale||e.fallbackLocale;return n.warnOnMissingMessages&&(delete n.warnOnMissingMessages,n.handleMissingMessage==null?n.handleMissingMessage=Na:console.warn('[svelte-i18n] The "warnOnMissingMessages" option is deprecated. Please use the "handleMissingMessage" option instead.')),Object.assign(Re,n,{initialLocale:r}),t&&("number"in t&&Object.assign(Re.formats.number,t.number),"date"in t&&Object.assign(Re.formats.date,t.date),"time"in t&&Object.assign(Re.formats.time,t.time)),je.set(r)}const ct=vt(!1);let bt;const Ze=vt(null);function zt(e){return e.split("-").map((t,n,r)=>r.slice(0,n+1).join("-")).reverse()}function ze(e,t=He().fallbackLocale){const n=zt(e);return t?[...new Set([...n,...zt(t)])]:n}function we(){return bt??void 0}Ze.subscribe(e=>{bt=e??void 0,typeof window<"u"&&e!=null&&document.documentElement.setAttribute("lang",e)});const je={...Ze,set:e=>{if(e&&function(t){if(t==null)return;const n=ze(t);for(let r=0;rct.set(!0),t):ct.set(!0),ir(e).then(()=>{Ze.set(e)}).finally(()=>{clearTimeout(n),ct.set(!1)})}return Ze.set(e)}},Ra=()=>typeof window>"u"?null:window.navigator.language||window.navigator.languages[0],tt=e=>{const t=Object.create(null);return n=>{const r=JSON.stringify(n);return r in t?t[r]:t[r]=e(n)}},Ge=(e,t)=>{const{formats:n}=He();if(e in n&&t in n[e])return n[e][t];throw new Error(`[svelte-i18n] Unknown "${t}" ${e} format.`)},Ma=tt(({locale:e,format:t,...n})=>{if(e==null)throw new Error('[svelte-i18n] A "locale" must be set to format numbers');return t&&(n=Ge("number",t)),new Intl.NumberFormat(e,n)}),Da=tt(({locale:e,format:t,...n})=>{if(e==null)throw new Error('[svelte-i18n] A "locale" must be set to format dates');return t?n=Ge("date",t):Object.keys(n).length===0&&(n=Ge("date","short")),new Intl.DateTimeFormat(e,n)}),Fa=tt(({locale:e,format:t,...n})=>{if(e==null)throw new Error('[svelte-i18n] A "locale" must be set to format time values');return t?n=Ge("time",t):Object.keys(n).length===0&&(n=Ge("time","short")),new Intl.DateTimeFormat(e,n)}),Ga=({locale:e=we(),...t}={})=>Ma({locale:e,...t}),Ua=({locale:e=we(),...t}={})=>Da({locale:e,...t}),Va=({locale:e=we(),...t}={})=>Fa({locale:e,...t}),za=tt((e,t=we())=>new Ba(e,t,He().formats,{ignoreTag:He().ignoreTag})),qa=(e,t={})=>{var n,r,i,o;let a=t;typeof e=="object"&&(a=e,e=a.id);const{values:l,locale:u=we(),default:s}=a;if(u==null)throw new Error("[svelte-i18n] Cannot format a message without first setting the initial locale.");let c=er(e,u);if(c){if(typeof c!="string")return console.warn(`[svelte-i18n] Message with id "${e}" must be of type "string", found: "${typeof 
c}". Gettin its value through the "$format" method is deprecated; use the "json" method instead.`),c}else c=(o=(i=(r=(n=He()).handleMissingMessage)===null||r===void 0?void 0:r.call(n,{locale:u,id:e,defaultValue:s}))!==null&&i!==void 0?i:s)!==null&&o!==void 0?o:e;if(!l)return c;let h=c;try{h=za(c,u).format(l)}catch(_){_ instanceof Error&&console.warn(`[svelte-i18n] Message "${e}" has syntax error:`,_.message)}return h},Xa=(e,t)=>Va(t).format(e),Wa=(e,t)=>Ua(t).format(e),Za=(e,t)=>Ga(t).format(e),Ya=(e,t=we())=>er(e,t),dc=Le([je,Ve],()=>qa);Le([je],()=>Xa);Le([je],()=>Wa);Le([je],()=>Za);Le([je,Ve],()=>Ya);const Ja={accordion:()=>F(()=>import("./index-061f1fcf.js"),["assets/index-061f1fcf.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/Column-824a6363.js","assets/Column-2853eb31.css","assets/index-8f1feca1.css"]),annotatedimage:()=>F(()=>import("./index-982abbe1.js"),["assets/index-982abbe1.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/BlockLabel-7929e88d.js","assets/Empty-2159e5e9.js","assets/Image-6ff1dc79.js","assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js","assets/ModifyUpload-77b0d4b2.css","assets/index-f0e43e7d.css"]),audio:()=>F(()=>import("./index-a2b4a4fc.js"),["assets/index-a2b4a4fc.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/UploadText-8aae32a4.js","assets/UploadText-690664d1.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/Upload-3aa22eef.js","assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js","assets/ModifyUpload-77b0d4b2.css","assets/ModifyUpload-87f877d6.js","assets/IconButton-34da90d2.js","assets/BlockLabel-7929e88d.js","assets/Empty-2159e5e9.js","assets/ShareButton-cdd94184.js","assets/index-be790e2e.css"]),box:()=>F(()=>import("./index-3133b1ca.js"),["assets/index-3133b1ca.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css"]),button:()=>F(()=>import("./index-43eb8bd8.js"),["assets/index-43eb8bd8.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css"]),chatbot:()=>F(()=>import("./index-dea9d60d.js"),["assets/index-dea9d60d.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/ShareButton-cdd94184.js","assets/IconButton-34da90d2.js","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/BlockLabel-7929e88d.js","assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js","assets/ModifyUpload-77b0d4b2.css","assets/index-421cb7e7.css"]),checkbox:()=>F(()=>import("./index-a0ff57e2.js"),["assets/index-a0ff57e2.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/Info-f92267f9.js","assets/ColorPicker-10a76632.css"]),checkboxgroup:()=>F(()=>import("./index-4e5625b1.js"),["assets/index-4e5625b1.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/BlockTitle-8596cf63.js","assets/Info-f92267f9.js","assets/ColorPicker-10a76632.css"]),code:()=>F(()=>import("./index-ebba85cc.js").then(e=>e.F),["assets/index-ebba85cc.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/BlockLabel-7929e88d.js","assets/Empty-2159e5e9.js","assets/Copy-534f8e58.js","assets/Download-a587c81f.js","assets/index-4ccfb72c.css"]),colorpick
er:()=>F(()=>import("./index-c4debac9.js"),["assets/index-c4debac9.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/BlockTitle-8596cf63.js","assets/Info-f92267f9.js","assets/ColorPicker-10a76632.css"]),column:()=>F(()=>import("./index-b04fff44.js"),["assets/index-b04fff44.js","assets/Column-824a6363.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Column-2853eb31.css"]),dataframe:()=>F(()=>import("./index-c27610fd.js"),["assets/index-c27610fd.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/Upload-3aa22eef.js","assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js","assets/ModifyUpload-77b0d4b2.css","assets/dsv-576afacd.js","assets/index-9ae8fa0e.css"]),dataset:()=>F(()=>import("./index-7af10a2e.js"),["assets/index-7af10a2e.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/Image-75587433.js","assets/Image-003ee87c.css","assets/csv-b0b7514a.js","assets/dsv-576afacd.js","assets/Model3D-b938dbb2.js","assets/Model3D-98fc2b2c.css","assets/index-322e8a8e.css"]),dropdown:()=>F(()=>import("./index-ff8eb6fc.js"),["assets/index-ff8eb6fc.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/BlockTitle-8596cf63.js","assets/Info-f92267f9.js","assets/ColorPicker-10a76632.css"]),file:()=>F(()=>import("./index-a1cf959d.js"),["assets/index-a1cf959d.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/BlockLabel-7929e88d.js","assets/Empty-2159e5e9.js","assets/File-69f43e15.js","assets/Upload-3aa22eef.js","assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js","assets/ModifyUpload-77b0d4b2.css","assets/ModifyUpload-87f877d6.js","assets/IconButton-34da90d2.js","assets/UploadText-8aae32a4.js","assets/UploadText-690664d1.css","assets/index-aef3869a.css"]),form:()=>F(()=>import("./index-f08fea28.js"),["assets/index-f08fea28.js","assets/Form-2d54a466.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Form-3812b7f1.css"]),gallery:()=>F(()=>import("./index-1f2b9eb1.js"),["assets/index-1f2b9eb1.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/BlockLabel-7929e88d.js","assets/Empty-2159e5e9.js","assets/ShareButton-cdd94184.js","assets/IconButton-34da90d2.js","assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js","assets/ModifyUpload-77b0d4b2.css","assets/ModifyUpload-87f877d6.js","assets/Image-6ff1dc79.js","assets/index-1e03cd90.css"]),group:()=>F(()=>import("./index-7df11078.js"),["assets/index-7df11078.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/index-4247b34c.css"]),highlightedtext:()=>F(()=>import("./index-e4680786.js"),["assets/index-e4680786.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/color-4b6a4814.js","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/BlockLabel-7929e88d.js","assets/Empty-2159e5e9.js","assets/index-928645ac.css"]),html:()=>F(()=>import("./index-cda11a06.js"),["assets/index-cda11a06.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/index-329f8260.css"]),image:()=>F(()=>import("./index-de8e05da.js"),["assets/index-de8e05da.js","assets/index-f877dfd5.js","assets/index-63038c0b
.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/BlockLabel-7929e88d.js","assets/Image-6ff1dc79.js","assets/StaticImage.svelte_svelte_type_style_lang-72cfcc0b.js","assets/StaticImage-508005b4.css","assets/IconButton-34da90d2.js","assets/ModifyUpload-87f877d6.js","assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js","assets/ModifyUpload-77b0d4b2.css","assets/Upload-3aa22eef.js","assets/ShareButton-cdd94184.js","assets/Empty-2159e5e9.js","assets/Download-a587c81f.js","assets/UploadText-8aae32a4.js","assets/UploadText-690664d1.css","assets/Image-75587433.js","assets/Image-003ee87c.css"]),interpretation:()=>F(()=>import("./index-ce559038.js"),["assets/index-ce559038.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/BlockTitle-8596cf63.js","assets/Info-f92267f9.js","assets/index-6acaa952.css"]),json:()=>F(()=>import("./index-9d071f72.js"),["assets/index-9d071f72.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/Copy-534f8e58.js","assets/Empty-2159e5e9.js","assets/BlockLabel-7929e88d.js","assets/index-3ca142e0.css"]),label:()=>F(()=>import("./index-c40f2837.js"),["assets/index-c40f2837.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/BlockLabel-7929e88d.js","assets/Empty-2159e5e9.js","assets/index-cc2431f4.css"]),markdown:()=>F(()=>import("./index-41a680e3.js"),["assets/index-41a680e3.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/index-edf307d2.css"]),model3d:()=>F(()=>import("./index-06552315.js"),["assets/index-06552315.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js","assets/ModifyUpload-77b0d4b2.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/BlockLabel-7929e88d.js","assets/Empty-2159e5e9.js","assets/File-69f43e15.js","assets/IconButton-34da90d2.js","assets/Download-a587c81f.js","assets/Upload-3aa22eef.js","assets/ModifyUpload-87f877d6.js","assets/UploadText-8aae32a4.js","assets/UploadText-690664d1.css","assets/Model3D-b938dbb2.js","assets/Model3D-98fc2b2c.css","assets/index-4ffdbeab.css"]),number:()=>F(()=>import("./index-b86ab651.js"),["assets/index-b86ab651.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/BlockTitle-8596cf63.js","assets/Info-f92267f9.js","assets/ColorPicker-10a76632.css"]),plot:()=>F(()=>import("./index-905fdf08.js"),["assets/index-905fdf08.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/color-4b6a4814.js","assets/linear-58a44b5e.js","assets/dsv-576afacd.js","assets/Empty-2159e5e9.js","assets/BlockLabel-7929e88d.js","assets/index-2908e8a9.css"]),radio:()=>F(()=>import("./index-71c3e1fa.js"),["assets/index-71c3e1fa.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/BlockTitle-8596cf63.js","assets/Info-f92267f9.js","assets/ColorPicker-10a76632.css"]),row:()=>F(()=>import("./index-2543d7a9.js"),["assets/index-2543d7a9.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/index-93c91554.css"]),slider:()=>F(()=>import("./index-cf655cb8.js"),["assets/index-cf655cb8.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","a
ssets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/BlockTitle-8596cf63.js","assets/Info-f92267f9.js","assets/ColorPicker-10a76632.css"]),state:()=>F(()=>import("./index-e97ba05a.js"),["assets/index-e97ba05a.js","assets/index-f877dfd5.js","assets/index-63038c0b.css"]),statustracker:()=>F(()=>import("./index-3ca19104.js"),["assets/index-3ca19104.js","assets/index-f877dfd5.js","assets/index-63038c0b.css"]),tabs:()=>F(()=>import("./index-b92380ed.js"),["assets/index-b92380ed.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/TabItem.svelte_svelte_type_style_lang-e019e79b.js","assets/TabItem-e9c69a3d.css","assets/Column-2853eb31.css"]),tabitem:()=>F(()=>import("./index-5de8a102.js"),["assets/index-5de8a102.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/TabItem.svelte_svelte_type_style_lang-e019e79b.js","assets/TabItem-e9c69a3d.css","assets/Column-824a6363.js","assets/Column-2853eb31.css"]),textbox:()=>F(()=>import("./index-def00e21.js"),["assets/index-def00e21.js","assets/Textbox-805ab1aa.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/BlockTitle-8596cf63.js","assets/Info-f92267f9.js","assets/Copy-534f8e58.js","assets/ColorPicker-10a76632.css"]),timeseries:()=>F(()=>import("./index-edd3b6ef.js"),["assets/index-edd3b6ef.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Upload-3aa22eef.js","assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js","assets/ModifyUpload-77b0d4b2.css","assets/ModifyUpload-87f877d6.js","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/IconButton-34da90d2.js","assets/BlockLabel-7929e88d.js","assets/Empty-2159e5e9.js","assets/color-4b6a4814.js","assets/csv-b0b7514a.js","assets/dsv-576afacd.js","assets/linear-58a44b5e.js","assets/UploadText-8aae32a4.js","assets/UploadText-690664d1.css","assets/index-9da94804.css"]),uploadbutton:()=>F(()=>import("./index-f3522350.js"),["assets/index-f3522350.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js","assets/ModifyUpload-77b0d4b2.css","assets/index-03d58ab8.css"]),video:()=>F(()=>import("./index-5c6740a6.js"),["assets/index-5c6740a6.js","assets/index-f877dfd5.js","assets/index-63038c0b.css","assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js","assets/ModifyUpload-77b0d4b2.css","assets/Button-11a87b79.js","assets/Button-9230b6bf.css","assets/Upload-3aa22eef.js","assets/ModifyUpload-87f877d6.js","assets/IconButton-34da90d2.js","assets/BlockLabel-7929e88d.js","assets/StaticImage.svelte_svelte_type_style_lang-72cfcc0b.js","assets/StaticImage-508005b4.css","assets/Empty-2159e5e9.js","assets/ShareButton-cdd94184.js","assets/Download-a587c81f.js","assets/UploadText-8aae32a4.js","assets/UploadText-690664d1.css","assets/index-fe39713d.css"])},or="أرسل",lr="أمسح",ar="فسِّر",sr="بلِّغ",ur="أمثلة",cr="أو",Qa={interface:{drop_image:"أسقط الصورة هنا",drop_video:"أسقط الفيديو هنا",drop_audio:"أسقط الملف الصوتي هنا",drop_file:"أسقط الملف هنا",drop_csv:"أسقط ملف البيانات هنا",click_to_upload:"إضغط للتحميل",view_api:"إستخدم واجهة البرمجة",built_with_Gradio:"تم الإنشاء بإستخدام 
Gradio"},Submit:or,Clear:lr,Interpret:ar,Flag:sr,Examples:ur,or:cr},$a=Object.freeze(Object.defineProperty({__proto__:null,Clear:lr,Examples:ur,Flag:sr,Interpret:ar,Submit:or,default:Qa,or:cr},Symbol.toStringTag,{value:"Module"})),fr="Envia",_r="Neteja",hr="Interpreta",pr="Avisa",dr="Exemples",mr="o",Ka={interface:{drop_image:"Deixeu anar la imatge aquí",drop_video:"Deixeu anar el vídeo aquí",drop_audio:"Deixeu anar l'àudio aquí",drop_file:"Deixeu anar el fitxer aquí",drop_csv:"Deixeu anar el CSV aquí",click_to_upload:"Feu clic per pujar",view_api:"Veure l'API",built_with_Gradio:"Construït amb gradio",copy_to_clipboard:"Copia el json",loading:"S'està carregant",error:"ERROR",empty:"Buit"},Submit:fr,Clear:_r,Interpret:hr,Flag:pr,Examples:dr,or:mr},es=Object.freeze(Object.defineProperty({__proto__:null,Clear:_r,Examples:dr,Flag:pr,Interpret:hr,Submit:fr,default:Ka,or:mr},Symbol.toStringTag,{value:"Module"})),gr="Absenden",br="Löschen",vr="Ersteller",Er="Flag",yr="Beispiele",Sr="oder",ts={interface:{drop_image:"Bild hier ablegen",drop_video:"Video hier ablegen",drop_audio:"Audio hier ablegen",drop_file:"Datei hier ablegen",drop_csv:"CSV Datei hier ablegen",click_to_upload:"Hochladen",view_api:"API anschauen",built_with_Gradio:"Mit Gradio erstellt"},Submit:gr,Clear:br,Interpret:vr,Flag:Er,Examples:yr,or:Sr},ns=Object.freeze(Object.defineProperty({__proto__:null,Clear:br,Examples:yr,Flag:Er,Interpret:vr,Submit:gr,default:ts,or:Sr},Symbol.toStringTag,{value:"Module"})),wr="Submit",Tr="Clear",Ir="Interpret",Ar="Flag",kr="Examples",Cr="or",rs={interface:{drop_image:"Drop Image Here",drop_video:"Drop Video Here",drop_audio:"Drop Audio Here",drop_file:"Drop File Here",drop_csv:"Drop CSV Here",click_to_upload:"Click to Upload",view_api:"view the api",built_with_Gradio:"Built with gradio",copy_to_clipboard:"copy json",loading:"Loading",error:"ERROR",empty:"Empty"},Submit:wr,Clear:Tr,Interpret:Ir,Flag:Ar,Examples:kr,or:Cr},is=Object.freeze(Object.defineProperty({__proto__:null,Clear:Tr,Examples:kr,Flag:Ar,Interpret:Ir,Submit:wr,default:rs,or:Cr},Symbol.toStringTag,{value:"Module"})),Pr="Enviar",Or="Limpiar",Br="Interpretar",Hr="Avisar",Lr="Ejemplos",jr="o",os={interface:{drop_image:"Coloque la imagen aquí",drop_video:"Coloque el video aquí",drop_audio:"Coloque el audio aquí",drop_file:"Coloque el archivo aquí",drop_csv:"Coloque el CSV aquí",click_to_upload:"Haga click para cargar",view_api:"Ver la API",built_with_Gradio:"Construido con Gradio"},Submit:Pr,Clear:Or,Interpret:Br,Flag:Hr,Examples:Lr,or:jr},ls=Object.freeze(Object.defineProperty({__proto__:null,Clear:Or,Examples:Lr,Flag:Hr,Interpret:Br,Submit:Pr,default:os,or:jr},Symbol.toStringTag,{value:"Module"})),Nr="ارسال",xr="حذف",Rr="تفسیر",Mr="پرچم",Dr="مثال ها",Fr="یا",as={interface:{drop_image:"تصویر را اینجا رها کنید",drop_video:"ویدیو را اینجا رها کنید",drop_audio:"صوت را اینجا رها کنید",drop_file:"فایل را اینجا رها کنید",drop_csv:"فایل csv را اینجا رها کنید",click_to_upload:"برای آپلود کلیک کنید",view_api:"api را مشاهده کنید",built_with_Gradio:"ساخته شده با gradio"},Submit:Nr,Clear:xr,Interpret:Rr,Flag:Mr,Examples:Dr,or:Fr},ss=Object.freeze(Object.defineProperty({__proto__:null,Clear:xr,Examples:Dr,Flag:Mr,Interpret:Rr,Submit:Nr,default:as,or:Fr},Symbol.toStringTag,{value:"Module"})),Gr="Soumettre",Ur="Nettoyer",Vr="Interpréter",zr="Signaler",qr="Exemples",Xr="ou",us={interface:{drop_image:"Déposer l'Image Ici",drop_video:"Déposer la Vidéo Ici",drop_audio:"Déposer l'Audio Ici",drop_file:"Déposer le Fichier Ici",drop_csv:"Déposer le CSV 
Ici",click_to_upload:"Cliquer pour Télécharger",view_api:"Voir l'API",built_with_Gradio:"Conçu avec Gradio"},Submit:Gr,Clear:Ur,Interpret:Vr,Flag:zr,Examples:qr,or:Xr},cs=Object.freeze(Object.defineProperty({__proto__:null,Clear:Ur,Examples:qr,Flag:zr,Interpret:Vr,Submit:Gr,default:us,or:Xr},Symbol.toStringTag,{value:"Module"})),Wr="שלח",Zr="נקה",Yr="לפרש",Jr="סמן",Qr="דוגמות",$r="או",fs={interface:{drop_image:"גרור קובץ תמונה לכאן",drop_video:"גרור קובץ סרטון לכאן",drop_audio:"גרור לכאן קובץ שמע",drop_file:"גרור קובץ לכאן",drop_csv:"גרור csv קובץ לכאן",click_to_upload:"לחץ כדי להעלות",view_api:"צפה ב API",built_with_Gradio:"בנוי עם גרדיו"},Submit:Wr,Clear:Zr,Interpret:Yr,Flag:Jr,Examples:Qr,or:$r},_s=Object.freeze(Object.defineProperty({__proto__:null,Clear:Zr,Examples:Qr,Flag:Jr,Interpret:Yr,Submit:Wr,default:fs,or:$r},Symbol.toStringTag,{value:"Module"})),Kr="सबमिट करे",ei="हटाये",ti="व्याख्या करे",ni="चिह्नित करे",ri="उदाहरण",ii="या",hs={interface:{drop_image:"यहाँ इमेज ड्रॉप करें",drop_video:"यहाँ वीडियो ड्रॉप करें",drop_audio:"यहाँ ऑडियो ड्रॉप करें",drop_file:"यहाँ File ड्रॉप करें",drop_csv:"यहाँ CSV ड्रॉप करें",click_to_upload:"अपलोड के लिए बटन दबायें",view_api:"API को देखे",built_with_Gradio:"Gradio से बना"},Submit:Kr,Clear:ei,Interpret:ti,Flag:ni,Examples:ri,or:ii},ps=Object.freeze(Object.defineProperty({__proto__:null,Clear:ei,Examples:ri,Flag:ni,Interpret:ti,Submit:Kr,default:hs,or:ii},Symbol.toStringTag,{value:"Module"})),oi="送信",li="クリア",ai="解釈",si="フラグする",ui="入力例",ci="または",ds={interface:{drop_image:"ここに画像をドロップ",drop_video:"ここに動画をドロップ",drop_audio:"ここに音声をドロップ",drop_file:"ここにファイルをドロップ",drop_csv:"ここにCSVをドロップ",click_to_upload:"クリックしてアップロード",view_api:"APIを見る",built_with_Gradio:"gradioで作ろう"},Submit:oi,Clear:li,Interpret:ai,Flag:si,Examples:ui,or:ci},ms=Object.freeze(Object.defineProperty({__proto__:null,Clear:li,Examples:ui,Flag:si,Interpret:ai,Submit:oi,default:ds,or:ci},Symbol.toStringTag,{value:"Module"})),fi="제출하기",_i="클리어",hi="설명하기",pi="플래그",di="예시",mi="또는",gs={interface:{drop_image:"이미지를 끌어 놓으세요",drop_video:"비디오를 끌어 놓으세요",drop_audio:"오디오를 끌어 놓으세요",drop_file:"파일을 끌어 놓으세요",drop_csv:"CSV파일을 끌어 놓으세요",click_to_upload:"클릭해서 업로드하기",view_api:"API 보기",built_with_Gradio:"gradio로 제작되었습니다"},Submit:fi,Clear:_i,Interpret:hi,Flag:pi,Examples:di,or:mi},bs=Object.freeze(Object.defineProperty({__proto__:null,Clear:_i,Examples:di,Flag:pi,Interpret:hi,Submit:fi,default:gs,or:mi},Symbol.toStringTag,{value:"Module"})),gi="Pateikti",bi="Trinti",vi="Interpretuoti",Ei="Pažymėti",yi="Pavyzdžiai",Si="arba",vs={interface:{drop_image:"Įkelkite paveikslėlį čia",drop_video:"Įkelkite vaizdo įrašą čia",drop_audio:"Įkelkite garso įrašą čia",drop_file:"Įkelkite bylą čia",drop_csv:"Įkelkite CSV čia",click_to_upload:"Spustelėkite norėdami įkelti",view_api:"peržiūrėti api",built_with_Gradio:"sukurta su gradio"},Submit:gi,Clear:bi,Interpret:vi,Flag:Ei,Examples:yi,or:Si},Es=Object.freeze(Object.defineProperty({__proto__:null,Clear:bi,Examples:yi,Flag:Ei,Interpret:vi,Submit:gi,default:vs,or:Si},Symbol.toStringTag,{value:"Module"})),wi="Zend in",Ti="Wis",Ii="Interpreteer",Ai="Vlag",ki="Voorbeelden",Ci="of",ys={interface:{drop_image:"Sleep een Afbeelding hier",drop_video:"Sleep een Video hier",drop_audio:"Sleep een Geluidsbestand hier",drop_file:"Sleep een Document hier",drop_csv:"Sleep een CSV hier",click_to_upload:"Klik om the Uploaden",view_api:"zie de api",built_with_Gradio:"gemaakt met 
gradio"},Submit:wi,Clear:Ti,Interpret:Ii,Flag:Ai,Examples:ki,or:Ci},Ss=Object.freeze(Object.defineProperty({__proto__:null,Clear:Ti,Examples:ki,Flag:Ai,Interpret:Ii,Submit:wi,default:ys,or:Ci},Symbol.toStringTag,{value:"Module"})),Pi="Zatwierdź",Oi="Wyczyść",Bi="Interpretuj",Hi="Oznacz",Li="Przykłady",ji="lub",ws={interface:{drop_image:"Przeciągnij tutaj zdjęcie",drop_video:"Przeciągnij tutaj video",drop_audio:"Przeciągnij tutaj audio",drop_file:"Przeciągnij tutaj plik",drop_csv:"Przeciągnij tutaj CSV",click_to_upload:"Kliknij, aby przesłać",view_api:"zobacz api",built_with_Gradio:"utworzone z gradio"},Submit:Pi,Clear:Oi,Interpret:Bi,Flag:Hi,Examples:Li,or:ji},Ts=Object.freeze(Object.defineProperty({__proto__:null,Clear:Oi,Examples:Li,Flag:Hi,Interpret:Bi,Submit:Pi,default:ws,or:ji},Symbol.toStringTag,{value:"Module"})),Ni="Enviar",xi="Limpar",Ri="Interpretar",Mi="Marcar",Di="Exemplos",Fi="ou",Is={interface:{drop_image:"Solte a Imagem Aqui",drop_video:"Solte o Vídeo Aqui",drop_audio:"Solte o Áudio Aqui",drop_file:"Solte o Arquivo Aqui",drop_csv:"Solte o CSV Aqui",click_to_upload:"Clique para o Upload",view_api:"Veja a API",built_with_Gradio:"Construído com gradio",copy_to_clipboard:"copiar para o clipboard",loading:"Carregando",error:"ERRO",empty:"Vazio"},Submit:Ni,Clear:xi,Interpret:Ri,Flag:Mi,Examples:Di,or:Fi},As=Object.freeze(Object.defineProperty({__proto__:null,Clear:xi,Examples:Di,Flag:Mi,Interpret:Ri,Submit:Ni,default:Is,or:Fi},Symbol.toStringTag,{value:"Module"})),Gi="Исполнить",Ui="Очистить",Vi="Интерпретировать",zi="Пометить",qi="Примеры",Xi="или",ks={interface:{drop_image:"Поместите Изображение Здесь",drop_video:"Поместите Видео Здесь",drop_audio:"Поместите Аудио Здесь",drop_file:"Поместите Документ Здесь",drop_csv:"Поместите CSV Здесь",click_to_upload:"Нажмите, чтобы загрузить",view_api:"просмотр api",built_with_Gradio:"сделано с помощью gradio"},Submit:Gi,Clear:Ui,Interpret:Vi,Flag:zi,Examples:qi,or:Xi},Cs=Object.freeze(Object.defineProperty({__proto__:null,Clear:Ui,Examples:qi,Flag:zi,Interpret:Vi,Submit:Gi,default:ks,or:Xi},Symbol.toStringTag,{value:"Module"})),Wi="சமர்ப்பி",Zi="அழி",Yi="உட்பொருள்",Ji="கொடியிடு",Qi="எடுத்துக்காட்டுகள்",$i="அல்லது",Ps={interface:{drop_image:"படத்தை வை",drop_video:"வீடியோவை வை",drop_audio:"ஆடியோவை வை",drop_file:"கோப்பை வை",drop_csv:"சிஎஸ்வி வை",click_to_upload:"பதிவேற்ற கிளிக் செய்",view_api:"அபியை காண்",built_with_Gradio:"க்ரேடியோ-வுடன் கட்டப்பட்டது"},Submit:Wi,Clear:Zi,Interpret:Yi,Flag:Ji,Examples:Qi,or:$i},Os=Object.freeze(Object.defineProperty({__proto__:null,Clear:Zi,Examples:Qi,Flag:Ji,Interpret:Yi,Submit:Wi,default:Ps,or:$i},Symbol.toStringTag,{value:"Module"})),Ki="Yükle",eo="Temizle",to="Yorumla",no="Etiketle",ro="örnekler",io="veya",Bs={interface:{drop_image:"Resmi Buraya Sürükle",drop_video:"Videoyu Buraya Sürükle",drop_audio:"Kaydı Buraya Sürükle",drop_file:"Dosyayı Buraya Sürükle",drop_csv:"CSV'yi Buraya Sürükle",click_to_upload:"Yüklemek için Tıkla",view_api:"api'yi görüntüle",built_with_Gradio:"Gradio ile oluşturulmuştur"},Submit:Ki,Clear:eo,Interpret:to,Flag:no,Examples:ro,or:io},Hs=Object.freeze(Object.defineProperty({__proto__:null,Clear:eo,Examples:ro,Flag:no,Interpret:to,Submit:Ki,default:Bs,or:io},Symbol.toStringTag,{value:"Module"})),oo="Надіслати",lo="Очистити",ao="Пояснити результат",so="Позначити",uo="Приклади",co="або",Ls={interface:{drop_image:"Перетягніть зображення сюди",drop_video:"Перетягніть відео сюди",drop_audio:"Перетягніть аудіо сюди",drop_file:"Перетягніть файл сюди",drop_csv:"Перетягніть CSV-файл 
сюди",click_to_upload:"Натисніть щоб завантажити",view_api:"Переглянути API",built_with_Gradio:"Зроблено на основі gradio"},Submit:oo,Clear:lo,Interpret:ao,Flag:so,Examples:uo,or:co},js=Object.freeze(Object.defineProperty({__proto__:null,Clear:lo,Examples:uo,Flag:so,Interpret:ao,Submit:oo,default:Ls,or:co},Symbol.toStringTag,{value:"Module"})),fo="جمع کریں",_o="ہٹا دیں",ho="تشریح کریں",po="نشان لگائیں",mo="مثالیں",go="یا",Ns={interface:{drop_image:"یہاں تصویر ڈراپ کریں",drop_video:"یہاں ویڈیو ڈراپ کریں",drop_audio:"یہاں آڈیو ڈراپ کریں",drop_file:"یہاں فائل ڈراپ کریں",drop_csv:"یہاں فائل ڈراپ کریں",click_to_upload:"اپ لوڈ کے لیے کلک کریں",view_api:"API دیکھیں",built_with_Gradio:"کے ساتھ بنایا گیا Gradio"},Submit:fo,Clear:_o,Interpret:ho,Flag:po,Examples:mo,or:go},xs=Object.freeze(Object.defineProperty({__proto__:null,Clear:_o,Examples:mo,Flag:po,Interpret:ho,Submit:fo,default:Ns,or:go},Symbol.toStringTag,{value:"Module"})),bo="Yubor",vo="Tozalash",Eo="Tushuntirish",yo="Bayroq",So="Namunalar",wo="或",Rs={interface:{drop_image:"Rasmni Shu Yerga Tashlang",drop_video:"Videoni Shu Yerga Tashlang",drop_audio:"Audioni Shu Yerga Tashlang",drop_file:"Faylni Shu Yerga Tashlang",drop_csv:"CSVni Shu Yerga Tashlang",click_to_upload:"Yuklash uchun Bosing",view_api:"apini ko'ring",built_with_Gradio:"gradio bilan qilingan"},Submit:bo,Clear:vo,Interpret:Eo,Flag:yo,Examples:So,or:wo},Ms=Object.freeze(Object.defineProperty({__proto__:null,Clear:vo,Examples:So,Flag:yo,Interpret:Eo,Submit:bo,default:Rs,or:wo},Symbol.toStringTag,{value:"Module"})),To="提交",Io="清除",Ao="解释",ko="标记",Co="示例",Po="或",Ds={interface:{drop_image:"拖放图片至此处",drop_video:"拖放视频至此处",drop_audio:"拖放音频至此处",drop_file:"拖放文件至此处",drop_csv:"拖放CSV至此处",click_to_upload:"点击上传",view_api:"查看API",built_with_Gradio:"使用Gradio构建"},Submit:To,Clear:Io,Interpret:Ao,Flag:ko,Examples:Co,or:Po},Fs=Object.freeze(Object.defineProperty({__proto__:null,Clear:Io,Examples:Co,Flag:ko,Interpret:Ao,Submit:To,default:Ds,or:Po},Symbol.toStringTag,{value:"Module"})),Oo="提交",Bo="清除",Ho="解釋",Lo="Flag",jo="範例",No="或",Gs={interface:{drop_image:"刪除圖片",drop_video:"刪除影片",drop_audio:"刪除音頻",drop_file:"刪除檔案",drop_csv:"刪除CSV",click_to_upload:"點擊上傳",view_api:"查看API",built_with_Gradio:"使用Gradio構建"},Submit:Oo,Clear:Bo,Interpret:Ho,Flag:Lo,Examples:jo,or:No},Us=Object.freeze(Object.defineProperty({__proto__:null,Clear:Bo,Examples:jo,Flag:Lo,Interpret:Ho,Submit:Oo,default:Gs,or:No},Symbol.toStringTag,{value:"Module"})),qt=Object.assign({"./lang/ar.json":$a,"./lang/ca.json":es,"./lang/de.json":ns,"./lang/en.json":is,"./lang/es.json":ls,"./lang/fa.json":ss,"./lang/fr.json":cs,"./lang/he.json":_s,"./lang/hi.json":ps,"./lang/ja.json":ms,"./lang/ko.json":bs,"./lang/lt.json":Es,"./lang/nl.json":Ss,"./lang/pl.json":Ts,"./lang/pt-BR.json":As,"./lang/ru.json":Cs,"./lang/ta.json":Os,"./lang/tr.json":Hs,"./lang/uk.json":js,"./lang/ur.json":xs,"./lang/uz.json":Ms,"./lang/zh-CN.json":Fs,"./lang/zh-tw.json":Us});function Vs(){let e={};for(const t in qt){const n=t.split("/").pop().split(".").shift();e[n]=qt[t].default}return e}const Xt=Vs();for(const e in Xt)nr(e,Xt[e]);function zs(){xa({fallbackLocale:"en",initialLocale:Ra()})}function Wt(e,t,n){const r=e.slice();return r[8]=t[n].component,r[17]=t[n].id,r[2]=t[n].props,r[18]=t[n].children,r[9]=t[n].has_modes,r}function Zt(e){let t=[],n=new Map,r,i,o=oe(e[1]);const a=l=>l[17];for(let l=0;l{r=null}),ae())},i(i){n||(B(r),n=!0)},o(i){N(r),n=!1},d(i){i&&E(t),r&&r.d(i)}}}function Xs(e){let t,n,r,i;const o=[{elem_id:"elem_id"in 
e[2]&&e[2].elem_id||`component-${e[4]}`},{elem_classes:"elem_classes"in e[2]&&e[2].elem_classes||[]},{target:e[6]},e[2],{theme_mode:e[7]},{root:e[3]}];function a(s){e[15](s)}var l=e[8];function u(s){let c={$$slots:{default:[qs]},$$scope:{ctx:s}};for(let h=0;hHt(t,"value",a)),t.$on("prop_change",e[10])),{c(){t&&W(t.$$.fragment),r=de()},m(s,c){t&&Z(t,s,c),y(s,r,c),i=!0},p(s,[c]){const h=c&220?il(o,[c&20&&{elem_id:"elem_id"in s[2]&&s[2].elem_id||`component-${s[4]}`},c&4&&{elem_classes:"elem_classes"in s[2]&&s[2].elem_classes||[]},c&64&&{target:s[6]},c&4&&ol(s[2]),c&128&&{theme_mode:s[7]},c&8&&{root:s[3]}]):{};if(c&2097387&&(h.$$scope={dirty:c,ctx:s}),!n&&c&17&&(n=!0,h.value=s[0][s[4]].props.value,ll(()=>n=!1)),c&256&&l!==(l=s[8])){if(t){le();const _=t;N(_.$$.fragment,1,0,()=>{Y(_,1)}),ae()}l?(t=Bt(l,u(s)),s[14](t),De.push(()=>Ht(t,"value",a)),t.$on("prop_change",s[10]),W(t.$$.fragment),B(t.$$.fragment,1),Z(t,r.parentNode,r)):t=null}else l&&t.$set(h)},i(s){i||(t&&B(t.$$.fragment,s),i=!0)},o(s){t&&N(t.$$.fragment,s),i=!1},d(s){s&&E(r),e[14](null),t&&Y(t,s)}}}function Ws(e,t,n){let{root:r}=t,{component:i}=t,{instance_map:o}=t,{id:a}=t,{props:l}=t,{children:u}=t,{dynamic_ids:s}=t,{has_modes:c}=t,{parent:h=null}=t,{target:_}=t,{theme_mode:p}=t;const v=$e();c&&(l.interactive===!1?l.mode="static":l.interactive===!0||s.has(a)?l.mode="dynamic":l.mode="static"),Et(()=>(v("mount",a),()=>v("destroy",a))),al("BLOCK_KEY",h);function b(f){for(const P in f.detail)n(0,o[a].props[P]=f.detail[P],o)}function g(f){Ae.call(this,e,f)}function S(f){Ae.call(this,e,f)}function A(f){De[f?"unshift":"push"](()=>{o[a].instance=f,n(0,o)})}function T(f){e.$$.not_equal(o[a].props.value,f)&&(o[a].props.value=f,n(0,o))}return e.$$set=f=>{"root"in f&&n(3,r=f.root),"component"in f&&n(8,i=f.component),"instance_map"in f&&n(0,o=f.instance_map),"id"in f&&n(4,a=f.id),"props"in f&&n(2,l=f.props),"children"in f&&n(1,u=f.children),"dynamic_ids"in f&&n(5,s=f.dynamic_ids),"has_modes"in f&&n(9,c=f.has_modes),"parent"in f&&n(11,h=f.parent),"target"in f&&n(6,_=f.target),"theme_mode"in f&&n(7,p=f.theme_mode)},e.$$.update=()=>{e.$$.dirty&3&&n(1,u=u&&u.filter(f=>o[f.id].type!=="statustracker")),e.$$.dirty&19&&o[a].type==="form"&&(u?.every(f=>!f.props.visible)?n(2,l.visible=!1,l):n(2,l.visible=!0,l))},[o,u,l,r,a,s,_,p,i,c,b,h,g,S,A,T]}class xo extends ue{constructor(t){super(),ce(this,t,Ws,Xs,fe,{root:3,component:8,instance_map:0,id:4,props:2,children:1,dynamic_ids:5,has_modes:9,parent:11,target:6,theme_mode:7})}}function Zs(e){let t,n,r,i;return{c(){t=be("svg"),n=be("g"),r=be("path"),i=be("path"),d(r,"d","M3.789,0.09C3.903,-0.024 4.088,-0.024 4.202,0.09L4.817,0.705C4.931,0.819 4.931,1.004 4.817,1.118L1.118,4.817C1.004,4.931 0.819,4.931 0.705,4.817L0.09,4.202C-0.024,4.088 -0.024,3.903 0.09,3.789L3.789,0.09Z"),d(i,"d","M4.825,3.797C4.934,3.907 4.934,4.084 4.825,4.193L4.193,4.825C4.084,4.934 3.907,4.934 3.797,4.825L0.082,1.11C-0.027,1.001 -0.027,0.823 0.082,0.714L0.714,0.082C0.823,-0.027 1.001,-0.027 1.11,0.082L4.825,3.797Z"),d(t,"width","100%"),d(t,"height","100%"),d(t,"viewBox","0 0 5 5"),d(t,"version","1.1"),d(t,"xmlns","http://www.w3.org/2000/svg"),d(t,"xmlns:xlink","http://www.w3.org/1999/xlink"),d(t,"xml:space","preserve"),ge(t,"fill","currentColor"),ge(t,"fill-rule","evenodd"),ge(t,"clip-rule","evenodd"),ge(t,"stroke-linejoin","round"),ge(t,"stroke-miterlimit","2")},m(o,a){y(o,t,a),m(t,n),m(n,r),m(n,i)},p:$,i:$,o:$,d(o){o&&E(t)}}}class Ro extends ue{constructor(t){super(),ce(this,t,null,Zs,fe,{})}}function Ys(e){let 
t,n,r,i,o,a,l,u,s,c,h,_,p,v,b;return _=new Ro({}),{c(){t=k("div"),n=k("h1"),n.textContent="API Docs",r=M(),i=k("p"),o=I(`No API Routes found for
- `),a=k("code"),l=I(e[0]),u=M(),s=k("p"),s.innerHTML=`To expose an API endpoint of your app in this page, set the api_name
- parameter of the event listener.
-
- For more information, visit the
- API Page guide
- . To hide the API documentation button and this page, set
- show_api=False
- in the
- Blocks.launch()
- method.`,c=M(),h=k("button"),W(_.$$.fragment),d(a,"class","svelte-e1ha0f"),d(i,"class","attention svelte-e1ha0f"),d(t,"class","wrap prose svelte-e1ha0f"),d(h,"class","svelte-e1ha0f")},m(g,S){y(g,t,S),m(t,n),m(t,r),m(t,i),m(i,o),m(i,a),m(a,l),m(t,u),m(t,s),y(g,c,S),y(g,h,S),Z(_,h,null),p=!0,v||(b=Se(h,"click",e[2]),v=!0)},p(g,[S]){(!p||S&1)&&q(l,g[0])},i(g){p||(B(_.$$.fragment,g),p=!0)},o(g){N(_.$$.fragment,g),p=!1},d(g){g&&(E(t),E(c),E(h)),Y(_),v=!1,b()}}}function Js(e,t,n){const r=$e();let{root:i}=t;const o=()=>r("close");return e.$$set=a=>{"root"in a&&n(0,i=a.root)},[i,r,o]}class Qs extends ue{constructor(t){super(),ce(this,t,Js,Ys,fe,{root:0})}}function Qe(e,t,n=null){return t===void 0?n==="py"?"None":null:t==="string"||t==="str"?n===null?e:'"'+e+'"':t==="number"?n===null?parseFloat(e):e:t==="boolean"||t=="bool"?n==="py"?(e=String(e),e==="true"?"True":"False"):n==="js"?e:e==="true":t==="List[str]"?(e=JSON.stringify(e),e):n===null?e===""?null:JSON.parse(e):typeof e=="string"?e===""?n==="py"?"None":"null":e:JSON.stringify(e)}const Mo="https://gradio.s3-us-west-2.amazonaws.com/3.36.1/assets/api-logo-5346f193.svg";function Jt(e){let t;return{c(){t=I("s")},m(n,r){y(n,t,r)},d(n){n&&E(t)}}}function $s(e){let t,n,r,i,o,a,l,u,s,c,h,_,p,v,b,g,S,A,T,f=e[1]>1&&Jt();return g=new Ro({}),{c(){t=k("h2"),n=k("img"),i=M(),o=k("div"),a=I(`API documentation
- `),l=k("div"),u=I(e[0]),s=M(),c=k("span"),h=k("span"),_=I(e[1]),p=I(" API endpoint"),f&&f.c(),v=M(),b=k("button"),W(g.$$.fragment),Ue(n.src,r=Mo)||d(n,"src",r),d(n,"alt",""),d(n,"class","svelte-3n2nxs"),d(l,"class","url svelte-3n2nxs"),d(h,"class","url svelte-3n2nxs"),d(c,"class","counts svelte-3n2nxs"),d(t,"class","svelte-3n2nxs"),d(b,"class","svelte-3n2nxs")},m(P,H){y(P,t,H),m(t,n),m(t,i),m(t,o),m(o,a),m(o,l),m(l,u),m(t,s),m(t,c),m(c,h),m(h,_),m(c,p),f&&f.m(c,null),y(P,v,H),y(P,b,H),Z(g,b,null),S=!0,A||(T=Se(b,"click",e[3]),A=!0)},p(P,[H]){(!S||H&1)&&q(u,P[0]),(!S||H&2)&&q(_,P[1]),P[1]>1?f||(f=Jt(),f.c(),f.m(c,null)):f&&(f.d(1),f=null)},i(P){S||(B(g.$$.fragment,P),S=!0)},o(P){N(g.$$.fragment,P),S=!1},d(P){P&&(E(t),E(v),E(b)),f&&f.d(),Y(g),A=!1,T()}}}function Ks(e,t,n){let{root:r}=t,{api_count:i}=t;const o=$e(),a=()=>o("close");return e.$$set=l=>{"root"in l&&n(0,r=l.root),"api_count"in l&&n(1,i=l.api_count)},[r,i,o,a]}class eu extends ue{constructor(t){super(),ce(this,t,Ks,$s,fe,{root:0,api_count:1})}}function tu(e){let t,n;return{c(){t=be("svg"),n=be("path"),d(n,"stroke-linecap","round"),d(n,"stroke-linejoin","round"),d(n,"d","M12 9v3.75m9-.75a9 9 0 11-18 0 9 9 0 0118 0zm-9 3.75h.008v.008H12v-.008z"),d(t,"fill","none"),d(t,"stroke","currentColor"),d(t,"viewBox","0 0 24 24"),d(t,"width","100%"),d(t,"height","100%"),d(t,"xmlns","http://www.w3.org/2000/svg"),d(t,"aria-hidden","true"),d(t,"stroke-width","2"),d(t,"stroke-linecap","round"),d(t,"stroke-linejoin","round")},m(r,i){y(r,t,i),m(t,n)},p:$,i:$,o:$,d(r){r&&E(t)}}}let nu=class extends ue{constructor(t){super(),ce(this,t,null,tu,fe,{})}};function ru(e){let t,n;return{c(){t=be("svg"),n=be("path"),d(n,"stroke-linecap","round"),d(n,"stroke-linejoin","round"),d(n,"d","M11.25 11.25l.041-.02a.75.75 0 011.063.852l-.708 2.836a.75.75 0 001.063.853l.041-.021M21 12a9 9 0 11-18 0 9 9 0 0118 0zm-9-3.75h.008v.008H12V8.25z"),d(t,"fill","none"),d(t,"stroke","currentColor"),d(t,"viewBox","0 0 24 24"),d(t,"width","100%"),d(t,"height","100%"),d(t,"xmlns","http://www.w3.org/2000/svg"),d(t,"aria-hidden","true"),d(t,"stroke-width","2"),d(t,"stroke-linecap","round"),d(t,"stroke-linejoin","round")},m(r,i){y(r,t,i),m(t,n)},p:$,i:$,o:$,d(r){r&&E(t)}}}class iu extends ue{constructor(t){super(),ce(this,t,null,ru,fe,{})}}function ou(e){let t,n;return{c(){t=be("svg"),n=be("path"),d(n,"stroke-linecap","round"),d(n,"stroke-linejoin","round"),d(n,"d","M12 9v3.75m-9.303 3.376c-.866 1.5.217 3.374 1.948 3.374h14.71c1.73 0 2.813-1.874 1.948-3.374L13.949 3.378c-.866-1.5-3.032-1.5-3.898 0L2.697 16.126zM12 15.75h.007v.008H12v-.008z"),d(t,"fill","none"),d(t,"stroke","currentColor"),d(t,"stroke-width","2"),d(t,"viewBox","0 0 24 24"),d(t,"width","100%"),d(t,"height","100%"),d(t,"xmlns","http://www.w3.org/2000/svg"),d(t,"aria-hidden","true"),d(t,"stroke-linecap","round"),d(t,"stroke-linejoin","round")},m(r,i){y(r,t,i),m(t,n)},p:$,i:$,o:$,d(r){r&&E(t)}}}class lu extends ue{constructor(t){super(),ce(this,t,null,ou,fe,{})}}function Qt(e,t,n){const r=e.slice();return r[10]=t[n].label,r[11]=t[n].type,r[12]=t[n].python_type,r[13]=t[n].component,r[14]=t[n].serializer,r[16]=n,r}function $t(e){let t;return{c(){t=I("(")},m(n,r){y(n,t,r)},d(n){n&&E(t)}}}function au(e){let t=e[2][e[16]].type+"",n;return{c(){n=I(t)},m(r,i){y(r,n,i)},p(r,i){i&4&&t!==(t=r[2][r[16]].type+"")&&q(n,t)},d(r){r&&E(n)}}}function su(e){let t=e[12].type+"",n;return{c(){n=I(t)},m(r,i){y(r,n,i)},p(r,i){i&2&&t!==(t=r[12].type+"")&&q(n,t)},d(r){r&&E(n)}}}function Kt(e){let 
t;return{c(){t=I(",")},m(n,r){y(n,t,r)},d(n){n&&E(t)}}}function en(e){let t,n,r,i,o=e[10]+"",a,l,u=e[13]+"",s,c;function h(b,g){return b[3]==="python"?su:au}let _=h(e),p=_(e),v=e[1].length>1&&Kt();return{c(){t=k("div"),n=k("span"),r=I("# "),p.c(),i=I(`
- representing output in '`),a=I(o),l=I("' "),s=I(u),c=I(`
- component`),v&&v.c(),d(n,"class","desc svelte-1c7hj3i"),d(t,"class","svelte-1c7hj3i"),Ye(t,"second-level",e[1].length>1)},m(b,g){y(b,t,g),m(t,n),m(n,r),p.m(n,null),m(n,i),m(n,a),m(n,l),m(n,s),m(n,c),v&&v.m(t,null)},p(b,g){_===(_=h(b))&&p?p.p(b,g):(p.d(1),p=_(b),p&&(p.c(),p.m(n,i))),g&2&&o!==(o=b[10]+"")&&q(a,o),g&2&&u!==(u=b[13]+"")&&q(s,u),b[1].length>1?v||(v=Kt(),v.c(),v.m(t,null)):v&&(v.d(1),v=null),g&2&&Ye(t,"second-level",b[1].length>1)},d(b){b&&E(t),p.d(),v&&v.d()}}}function tn(e){let t;return{c(){t=I(")")},m(n,r){y(n,t,r)},d(n){n&&E(t)}}}function nn(e){let t,n,r;return n=new cl({props:{margin:!1}}),{c(){t=k("div"),W(n.$$.fragment),d(t,"class","load-wrap svelte-1c7hj3i")},m(i,o){y(i,t,o),Z(n,t,null),r=!0},i(i){r||(B(n.$$.fragment,i),r=!0)},o(i){N(n.$$.fragment,i),r=!1},d(i){i&&E(t),Y(n)}}}function uu(e){let t,n,r,i,o,a,l=e[1].length>1&&$t(),u=oe(e[1]),s=[];for(let _=0;_1&&tn(),h=e[0]&&nn();return{c(){t=k("div"),n=k("div"),l&&l.c(),r=M();for(let _=0;_1?l||(l=$t(),l.c(),l.m(n,r)):l&&(l.d(1),l=null),p&14){u=oe(_[1]);let v;for(v=0;v1?c||(c=tn(),c.c(),c.m(n,null)):c&&(c.d(1),c=null),(!a||p&1)&&Ye(n,"hide",_[0]),_[0]?h?p&1&&B(h,1):(h=nn(),h.c(),B(h,1),h.m(t,null)):h&&(le(),N(h,1,1,()=>{h=null}),ae())},i(_){a||(B(h),a=!0)},o(_){N(h),a=!1},d(_){_&&E(t),l&&l.d(),Ie(s,_),c&&c.d(),h&&h.d()}}}function cu(e){let t,n,r,i;return r=new yt({props:{$$slots:{default:[uu]},$$scope:{ctx:e}}}),{c(){t=k("h4"),t.innerHTML=`
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cleanmaster/akagi-sovits3/terms.md b/spaces/cleanmaster/akagi-sovits3/terms.md
deleted file mode 100644
index db34483fede042996973daf93fe7012b462b423b..0000000000000000000000000000000000000000
--- a/spaces/cleanmaster/akagi-sovits3/terms.md
+++ /dev/null
@@ -1,57 +0,0 @@
-Before using this model, please read the following agreement. This agreement is adapted from MasterSatori's.
-
-雪绘Yukie Model Usage Agreement
-
-[Preface] The owner and trainer of the 雪绘Yukie model, @cynika (hereinafter also "I"), uses this 雪绘Yukie Model Usage Agreement (hereinafter "this Agreement") to explain the responsibilities you must fulfill and the permitted scope of use when using the 雪绘Yukie model.
-
-[Special Note] Before using the 雪绘Yukie model, please read this Agreement carefully and make sure you understand it fully; start using the model only after you have confirmed that you fully understand and agree to it.
-
-    This Agreement will help you understand the following:
-
-    * 1. Disclaimer
-
-    * 2. What you must do when using the 雪绘Yukie model for non-personal purposes
-
-    * 3. Scope of use of the 雪绘Yukie model
-
-    * 4. How to contact me
-
-    # (1) Disclaimer:
-
-    You alone bear any loss caused to any other entity (individual or business) through your use of the 雪绘Yukie model, and you alone bear all legal risks and legal disputes arising from your use of the 雪绘Yukie model.
-
-    # (2) What you must do when using the 雪绘Yukie model for non-personal purposes:
-
-    1. Credit the author of the soVITS project: Rcell
-
-    2. Credit me (optional): 响希
-
-    3. Contact the voice owner, 雪绘yukie, in advance and ask for her consent
-
-    # (3) Scope of use of the 雪绘Yukie model:
-
-    ## 1. Permitted uses:
-
-    (1) Personal use
-
-    (2) Publishing the generated audio in posted works (such works must not contain anything listed under "Prohibited uses")
-
-    (3) Derivative works that comply with the rules of the hosting platform and local law
-
-    ## 2. Prohibited uses:
-
-    (1) Commercial use
-
-    (2) Impersonating the voice owner
-
-    (3) Using the model as a voice changer or similar
-
-    (4) Re-uploading the 雪绘Yukie model
-
-    (5) Low-effort content (synthesized audio with excessive clipping or electronic artifacts counts as "low-effort content")
-
-    (6) Sensitive content (including but not limited to politics, vulgarity, pornography, violence, etc.)
-
-    3. Additional terms:
-
-    For any use of the 雪绘Yukie model or the data it generates in situations not mentioned above, you should ask for my consent at `kuzehibiki@126.com`.
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/merge/util.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/merge/util.py
deleted file mode 100644
index 42fe39d5f701e683f52ca7c4022b1bb85749fb6b..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/merge/util.py
+++ /dev/null
@@ -1,143 +0,0 @@
-# Copyright 2013 Google, Inc. All Rights Reserved.
-#
-# Google Author(s): Behdad Esfahbod, Roozbeh Pournader
-
-from fontTools.misc.timeTools import timestampNow
-from fontTools.ttLib.tables.DefaultTable import DefaultTable
-from functools import reduce
-import operator
-import logging
-
-
-log = logging.getLogger("fontTools.merge")
-
-
-# General utility functions for merging values from different fonts
-
-
-def equal(lst):
- lst = list(lst)
- t = iter(lst)
- first = next(t)
- assert all(item == first for item in t), "Expected all items to be equal: %s" % lst
- return first
-
-
-def first(lst):
- return next(iter(lst))
-
-
-def recalculate(lst):
- return NotImplemented
-
-
-def current_time(lst):
- return timestampNow()
-
-
-def bitwise_and(lst):
- return reduce(operator.and_, lst)
-
-
-def bitwise_or(lst):
- return reduce(operator.or_, lst)
-
-
-def avg_int(lst):
- lst = list(lst)
- return sum(lst) // len(lst)
-
-
-def onlyExisting(func):
- """Returns a filter func that when called with a list,
- only calls func on the non-NotImplemented items of the list,
- and only so if there's at least one item remaining.
- Otherwise returns NotImplemented."""
-
- def wrapper(lst):
- items = [item for item in lst if item is not NotImplemented]
- return func(items) if items else NotImplemented
-
- return wrapper
-
-
-def sumLists(lst):
- l = []
- for item in lst:
- l.extend(item)
- return l
-
-
-def sumDicts(lst):
- d = {}
- for item in lst:
- d.update(item)
- return d
-
-
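-# mergeBits(bitmap) builds a merger for integer bitfields: bitmap["size"] is the
-# number of bits to process, an integer key maps that bit number to its merge
-# function, and "*" is the fallback merge function for bits without an entry.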
-def mergeBits(bitmap):
- def wrapper(lst):
- lst = list(lst)
- returnValue = 0
- for bitNumber in range(bitmap["size"]):
- try:
- mergeLogic = bitmap[bitNumber]
- except KeyError:
- try:
- mergeLogic = bitmap["*"]
- except KeyError:
- raise Exception("Don't know how to merge bit %s" % bitNumber)
- shiftedBit = 1 << bitNumber
- mergedValue = mergeLogic(bool(item & shiftedBit) for item in lst)
- returnValue |= mergedValue << bitNumber
- return returnValue
-
- return wrapper
-
-
-class AttendanceRecordingIdentityDict(object):
- """A dictionary-like object that records indices of items actually accessed
- from a list."""
-
- def __init__(self, lst):
- self.l = lst
- self.d = {id(v): i for i, v in enumerate(lst)}
- self.s = set()
-
- def __getitem__(self, v):
- self.s.add(self.d[id(v)])
- return v
-
-
-class GregariousIdentityDict(object):
- """A dictionary-like object that welcomes guests without reservations and
- adds them to the end of the guest list."""
-
- def __init__(self, lst):
- self.l = lst
- self.s = set(id(v) for v in lst)
-
- def __getitem__(self, v):
- if id(v) not in self.s:
- self.s.add(id(v))
- self.l.append(v)
- return v
-
-
-class NonhashableDict(object):
- """A dictionary-like object mapping objects to values."""
-
- def __init__(self, keys, values=None):
- if values is None:
- self.d = {id(v): i for i, v in enumerate(keys)}
- else:
- self.d = {id(k): v for k, v in zip(keys, values)}
-
- def __getitem__(self, k):
- return self.d[id(k)]
-
- def __setitem__(self, k, v):
- self.d[id(k)] = v
-
- def __delitem__(self, k):
- del self.d[id(k)]
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/C_B_L_C_.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/C_B_L_C_.py
deleted file mode 100644
index e9ed58e582b806df3d24c77e795cab9b70fe9dad..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/C_B_L_C_.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# Copyright 2013 Google, Inc. All Rights Reserved.
-#
-# Google Author(s): Matt Fontaine
-
-from . import E_B_L_C_
-
-
-class table_C_B_L_C_(E_B_L_C_.table_E_B_L_C_):
-
- dependencies = ["CBDT"]
diff --git a/spaces/colakin/video-generater/classes/VoiceGenerator.php b/spaces/colakin/video-generater/classes/VoiceGenerator.php
deleted file mode 100644
index 1d756205545b45938fb032f84f92cf69d8f8af67..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/classes/VoiceGenerator.php
+++ /dev/null
@@ -1,63 +0,0 @@
-<?php
-
-class VoiceGenerator {
-    private $elevenLabsApi;
-
-    public function __construct(ElevenLabsApi $elevenLabsApi) {
-        $this->elevenLabsApi = $elevenLabsApi;
-    }
- }
-
- /**
- * Generate voice audio for the given message and voice ID.
- *
- * @param string $voiceId
- * @param string $message
- * @return string The local file path of the downloaded audio file
- * @throws Exception
- */
- public function generate_and_download(string $voiceId, string $message): string {
- $data = ['text' => $message];
- $response = $this->elevenLabsApi->textToSpeechWithVoiceId($voiceId, $data);
-
- if ($response->getStatusCode() === 200) {
- $result = json_decode((string)$response->getBody(), true);
- $audioUrl = $result['audio_url'];
- return $this->downloadAudio($audioUrl);
- } else {
- throw new Exception('Error generating audio: ' . $response->getReasonPhrase());
- }
- }
-
- /**
- * Download audio file from the given URL and save it to the voices subfolder.
- *
- * @param string $audioUrl
- * @return string The local file path of the downloaded audio file
- */
- private function downloadAudio(string $audioUrl): string {
- $voicesDirectory = 'voices';
- if (!file_exists($voicesDirectory) && !mkdir($voicesDirectory) && !is_dir($voicesDirectory)) {
- throw new RuntimeException(sprintf('Directory "%s" was not created', $voicesDirectory));
- }
-
- $localFilePath = $voicesDirectory . '/' . uniqid() . '.mp3';
-
- $client = new GuzzleHttp\Client();
- $response = $client->get($audioUrl, ['sink' => $localFilePath]);
-
- if ($response->getStatusCode() === 200) {
- return $localFilePath;
- } else {
- throw new Exception('Error downloading audio: ' . $response->getReasonPhrase());
- }
- }
-}
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/amr_parser.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/amr_parser.c
deleted file mode 100644
index 9484d720eeabdb749b7821c27671ec5b8594b430..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/amr_parser.c
+++ /dev/null
@@ -1,131 +0,0 @@
-/*
- * Copyright (c) 2021 Paul B Mahol
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * AMR audio parser
- *
- * Splits packets into individual blocks.
- */
-
-#include "libavutil/channel_layout.h"
-#include "libavutil/intreadwrite.h"
-#include "parser.h"
-
-static const uint8_t amrnb_packed_size[16] = {
- 13, 14, 16, 18, 20, 21, 27, 32, 6, 1, 1, 1, 1, 1, 1, 1
-};
-static const uint8_t amrwb_packed_size[16] = {
- 18, 24, 33, 37, 41, 47, 51, 59, 61, 6, 1, 1, 1, 1, 1, 1
-};
-
-typedef struct AMRParseContext {
- ParseContext pc;
- uint64_t cumulated_size;
- uint64_t block_count;
- int current_channel;
- int remaining;
-} AMRParseContext;
-
-static av_cold int amr_parse_init(AVCodecParserContext *s1)
-{
- AMRParseContext *s = s1->priv_data;
- s->remaining = -1;
- return 0;
-}
-
-static int amr_parse(AVCodecParserContext *s1,
- AVCodecContext *avctx,
- const uint8_t **poutbuf, int *poutbuf_size,
- const uint8_t *buf, int buf_size)
-{
- AMRParseContext *s = s1->priv_data;
- ParseContext *pc = &s->pc;
- int next = END_NOT_FOUND;
-
- *poutbuf_size = 0;
- *poutbuf = NULL;
-
- if (!avctx->ch_layout.nb_channels) {
- av_channel_layout_uninit(&avctx->ch_layout);
- avctx->ch_layout = (AVChannelLayout)AV_CHANNEL_LAYOUT_MONO;
- }
-
- if (s1->flags & PARSER_FLAG_COMPLETE_FRAMES) {
- next = buf_size;
- } else {
- int ch, offset = 0;
-
- for (ch = s->current_channel; ch < avctx->ch_layout.nb_channels; ch++) {
- if (s->remaining >= 0) {
- next = s->remaining;
- } else {
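-                /* Bits 3-6 of the first byte hold the 4-bit frame type (mode),
-                 * which indexes the packed frame sizes in the tables above. */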
- int mode = (buf[offset] >> 3) & 0x0F;
-
- if (avctx->codec_id == AV_CODEC_ID_AMR_NB) {
- next = amrnb_packed_size[mode];
- } else if (avctx->codec_id == AV_CODEC_ID_AMR_WB) {
- next = amrwb_packed_size[mode];
- }
- }
-
- offset += next;
- if (offset >= buf_size) {
- s->remaining = offset - buf_size;
- next = END_NOT_FOUND;
- break;
- } else {
- s->remaining = -1;
- }
- }
-
- s->current_channel = ch % avctx->ch_layout.nb_channels;
- if (s->remaining < 0)
- next = offset;
-
- if (next != END_NOT_FOUND) {
- if (s->cumulated_size < UINT64_MAX - next) {
- s->cumulated_size += next;
- /* Both AMR formats have 50 frames per second */
- avctx->bit_rate = s->cumulated_size / ++s->block_count * 8 * 50;
- }
- }
-
- if (ff_combine_frame(pc, next, &buf, &buf_size) < 0) {
- *poutbuf = NULL;
- *poutbuf_size = 0;
- return buf_size;
- }
- }
-
- s1->duration = avctx->codec_id == AV_CODEC_ID_AMR_NB ? 160 : 320;
-
- *poutbuf = buf;
- *poutbuf_size = buf_size;
- return next;
-}
-
-const AVCodecParser ff_amr_parser = {
- .codec_ids = { AV_CODEC_ID_AMR_NB, AV_CODEC_ID_AMR_WB },
- .priv_data_size = sizeof(AMRParseContext),
- .parser_init = amr_parse_init,
- .parser_parse = amr_parse,
- .parser_close = ff_parse_close,
-};
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/jfdctint.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/jfdctint.c
deleted file mode 100644
index 6a39578f880671a33cd289404f7df6e42b68f4d1..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/jfdctint.c
+++ /dev/null
@@ -1,25 +0,0 @@
-/*
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#define BIT_DEPTH 8
-#include "jfdctint_template.c"
-#undef BIT_DEPTH
-
-#define BIT_DEPTH 10
-#include "jfdctint_template.c"
-#undef BIT_DEPTH
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Get Castle Clash MOD APK 3.3.2 and Unlock All Heroes and Skins.md b/spaces/congsaPfin/Manga-OCR/logs/Get Castle Clash MOD APK 3.3.2 and Unlock All Heroes and Skins.md
deleted file mode 100644
index b6ab3902f46d23d48140ea7f604f0b85b2d16db4..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Get Castle Clash MOD APK 3.3.2 and Unlock All Heroes and Skins.md
+++ /dev/null
@@ -1,89 +0,0 @@
-
-
Castle Clash Mod APK 3.3.2: A Strategy Game with Unlimited Gems
-
If you are a fan of strategy games, you might have heard of Castle Clash, a popular game developed by IGG.COM. In this game, you can build your own base, collect and upgrade heroes and troops, and join a guild to fight against other players in wars and events. However, if you want to enjoy the game to the fullest, you might need a lot of gems and resources, which are not easy to get in the game. That's why we are here to introduce you to Castle Clash Mod APK 3.3.2, a modified version of the game that gives you unlimited gems and other benefits. In this article, we will tell you what Castle Clash is, what Castle Clash Mod APK 3.3.2 is, how to download and install it, and some FAQs about it.
-
What is Castle Clash?
-
Castle Clash is a strategy game that was released in 2013 by IGG.COM, a Singapore-based company that also developed other popular games such as Lords Mobile and Mobile Royale. Castle Clash has over 100 million downloads on Google Play Store and has been rated 4.5 out of 5 stars by more than 5 million users.
Castle Clash has many features that make it an exciting and addictive game for strategy lovers. Here are some of them:
-
Build your base and defend it from enemies
-
In Castle Clash, you can create your own base with various buildings, such as town hall, barracks, watchtower, walls, etc. You can also place traps and heroes to protect your base from enemy attacks. You can upgrade your buildings and defenses to make them stronger and more efficient.
-
Collect and upgrade heroes and troops
-
Castle Clash has hundreds of heroes and troops that you can collect and use in battles. Each hero and troop has its own skills, attributes, and roles, such as tank, healer, damage dealer, etc. You can level up your heroes and troops by using resources such as gold, mana, honor badges, etc. You can also equip your heroes with weapons, armor, artifacts, pets, etc., to enhance their performance.
-
Join a guild and participate in wars and events
-
Castle Clash is not only a solo game but also a social game where you can join a guild and interact with other players from around the world. You can chat with your guild members, donate resources to them, help them in battles, etc. You can also participate in various guild wars and events where you can compete with other guilds for rewards and glory.
-
What is Castle Clash Mod APK 3.3.2?
-
Castle Clash Mod APK 3.3.2 is a modified version of the original Castle Clash game that gives you some advantages that are not available in the official version. For example, you can get unlimited gems and resources in the mod apk version, which are very useful for upgrading your base, heroes, troops, etc.
-
Benefits of Castle Clash Mod APK 3.3.2
-
Castle Clash Mod APK 3.3.2 has many benefits that make it a better choice than the original version. Here are some of them:
-
Unlimited gems and resources
-
Gems are the premium currency in Castle Clash that can be used to buy various items, such as hero cards, talent refresh cards, builder huts, etc. Resources are the basic currency in Castle Clash that can be used to upgrade your buildings, heroes, troops, etc. In the original version of the game, you have to earn gems and resources by completing quests, winning battles, participating in events, etc., which can be time-consuming and tedious. However, in Castle Clash Mod APK 3.3.2, you can get unlimited gems and resources for free, which means you can buy anything you want and upgrade everything to the max level without any hassle.
-
castle clash mod apk 3.3.2 unlimited gems
-castle clash mod apk 3.3.2 download for android
-castle clash mod apk 3.3.2 latest version
-castle clash mod apk 3.3.2 free download
-castle clash mod apk 3.3.2 hack
-castle clash mod apk 3.3.2 offline
-castle clash mod apk 3.3.2 no root
-castle clash mod apk 3.3.2 unlimited money
-castle clash mod apk 3.3.2 igg.com
-castle clash mod apk 3.3.2 gameplay
-castle clash mod apk 3.3.2 review
-castle clash mod apk 3.3.2 features
-castle clash mod apk 3.3.2 cheats
-castle clash mod apk 3.3.2 update
-castle clash mod apk 3.3.2 online
-castle clash mod apk 3.3.2 strategy
-castle clash mod apk 3.3.2 tips and tricks
-castle clash mod apk 3.3.2 best heroes
-castle clash mod apk 3.3.2 guide
-castle clash mod apk 3.3.2 tutorial
-castle clash mod apk 3.3.2 how to install
-castle clash mod apk 3.3.2 how to play
-castle clash mod apk 3.3.2 how to get gems
-castle clash mod apk 3.3.2 how to hack
-castle clash mod apk 3.3.2 how to update
-castle clash mod apk 3.3.2 requirements
-castle clash mod apk 3.3.2 compatibility
-castle clash mod apk 3.3.2 support
-castle clash mod apk 3.3.2 bug fixes
-castle clash mod apk 3.3.2 new features
-castle clash mod apk 3.3.2 screenshots
-castle clash mod apk 3.3.2 video
-castle clash mod apk 3.3.2 trailer
-castle clash mod apk 3.3.2 demo
-castle clash mod apk 3.3.2 forum
-castle clash mod apk 3.3.2 reddit
-castle clash mod apk 3.3.2 facebook
-castle clash mod apk 3.3.2 twitter
-castle clash mod apk 3.4 beta version download link[^1^]
-
Unlock all heroes and skins
-
Heroes are the most important part of Castle Clash, as they can make a huge difference in your battles. There are many heroes in Castle Clash, each with its own unique skills and abilities. However, not all heroes are easy to get in the original version of the game, as some of them are rare and require a lot of gems or luck to obtain. Moreover, some heroes have skins that can change their appearance and give them extra bonuses, but these skins are also hard to get or expensive to buy. In Castle Clash Mod APK 3.3.2, you can unlock all heroes and skins for free, which means you can choose any hero you like and customize it with any skin you want.
-
No ads and no root required
-
Another benefit of Castle Clash Mod APK 3.3.2 is that it contains no ads and requires no root. Ads are annoying and can interrupt your gaming experience, especially when they pop up in the middle of a battle or a loading screen. Rooting is a process that allows you to access the system files of your device and modify them according to your preferences, but it can also void your warranty and expose your device to security risks. In Castle Clash Mod APK 3.3.2, you don't have to worry about ads or rooting, as the mod apk file is already modified and optimized for your device.
-
How to download and install Castle Clash Mod APK 3.3.2?
-
If you are interested in downloading and installing Castle Clash Mod APK 3.3.2 on your device, you can follow these simple steps:
-
Steps to download and install Castle Clash Mod APK 3.3.2
-
Enable unknown sources on your device
-
Before you can install any mod apk file on your device, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Download the mod apk file from a trusted source
-
Next, you need to download the mod apk file from a trusted source. There are many websites that offer mod apk files for various games and apps, but not all of them are safe and reliable. Some of them may contain viruses or malware that can harm your device or steal your personal information. Therefore, you should always download mod apk files from reputable sources that have positive reviews and feedback from users. You can use this link to download Castle Clash Mod APK 3.3.2 safely and securely.
-
Install the mod apk file and enjoy the game
-
Finally, you need to install the mod apk file on your device and enjoy the game. To do this, locate the downloaded mod apk file on your device storage and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish. Once done, you can launch the game from your app drawer or home screen and enjoy Castle Clash Mod APK 3.3.2 with unlimited gems and resources.
-
Conclusion
-
Castle Clash is a strategy game that lets you build your own base, collect and upgrade heroes and troops, and join a guild to fight against other players in wars and events. However, if you want to have more fun and convenience in the game, you should try Castle Clash Mod APK 3.3.2, a modified version of the game that gives you unlimited gems and resources, unlocks all heroes and skins, removes ads, and requires no root. You can download and install Castle Clash Mod APK 3.3.2 by following the steps we mentioned above.
-
FAQs
-
Here are some frequently asked questions about Castle Clash Mod APK 3.3.2:
-
-
Is Castle Clash Mod APK 3.3.2 safe to use?
Yes, Castle Clash Mod APK 3.3.2 is safe to use, as long as you download it from a trusted source. The mod apk file has been tested and verified by many users and has no viruses or malware. However, you should always be careful when downloading and installing any mod apk file on your device, as some of them may contain harmful or malicious content.
-
Will I get banned for using Castle Clash Mod APK 3.3.2?
-
No, you will not get banned for using Castle Clash Mod APK 3.3.2, as the mod apk file has an anti-ban feature that prevents the game from detecting your modded account. However, you should always use the mod apk file at your own risk, as we cannot guarantee that it will work forever or that it will not cause any problems with your device or game.
-
Can I play Castle Clash Mod APK 3.3.2 online with other players?
-
Yes, you can play Castle Clash Mod APK 3.3.2 online with other players, as the mod apk file does not affect the online mode of the game. You can join a guild, chat with other players, and participate in wars and events as usual. However, you should be careful not to abuse the mod apk features or show off your unlimited gems and resources, as this may arouse suspicion and resentment from other players.
-
Can I update Castle Clash Mod APK 3.3.2 to the latest version?
-
No, you cannot update Castle Clash Mod APK 3.3.2 to the latest version, as the mod apk file is based on an older version of the game and may not be compatible with the new updates. If you want to update the game, you will have to uninstall the mod apk file and install the official version from the Google Play Store. However, you may lose your progress and modded features if you do this.
-
Can I use Castle Clash Mod APK 3.3.2 on iOS devices?
-
No, you cannot use Castle Clash Mod APK 3.3.2 on iOS devices, as the mod apk file is only designed for Android devices and cannot be installed or run on iOS devices. If you want to play Castle Clash on iOS devices, you will have to download the official version from the App Store.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Melon Playground 3D APK The Best Ragdoll Game for Android.md b/spaces/congsaPfin/Manga-OCR/logs/Melon Playground 3D APK The Best Ragdoll Game for Android.md
deleted file mode 100644
index 5d6990da36f97bfb75a951d83df96946458940f2..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Melon Playground 3D APK The Best Ragdoll Game for Android.md
+++ /dev/null
@@ -1,79 +0,0 @@
-
-
Melon Playground 3D: A Fun and Crazy Ragdoll Game
-
Do you like sandbox games where you can unleash your creativity and imagination? Do you enjoy ragdoll physics and gore effects? If you answered yes to both questions, then you might want to check out Melon Playground 3D, a fun and crazy ragdoll game where you can mistreat many characters with dozens of weapons.
-
What is Melon Playground 3D?
-
Melon Playground 3D is an exciting ragdoll game developed by Studio27 for Android devices. It was released on June 18, 2023, and has received positive reviews from players who love its simplicity and humor.
The gameplay of Melon Playground 3D is very simple and straightforward. You can choose from various characters such as humans, animals, zombies, robots, and more. Then, you can select from different scenarios such as a city, a farm, a desert, a forest, and more. Finally, you can pick from a wide range of weapons such as guns, knives, axes, hammers, grenades, rockets, and more.
-
Once you have everything set up, you can start having fun with your ragdoll characters. You can shoot them, stab them, chop them, smash them, blow them up, or do anything else you can think of. You can also drag them around, throw them in the air, or make them interact with other objects in the environment. The game has realistic physics and graphics that make the ragdoll effects more enjoyable and hilarious.
-
The features of Melon Playground 3D
-
Melon Playground 3D has many features that make it a great ragdoll game for Android users. Here are some of them:
-
Dozens of weapons to choose from
-
The game offers you a variety of weapons to play with your ragdoll characters. You can use firearms such as pistols, rifles, shotguns, snipers, machine guns, and more. You can also use melee weapons such as swords, daggers, axes, hammers, chainsaws, and more. You can also use explosives such as grenades, rockets, mines, bombs, and more. You can also use other items such as cars, trucks, planes, helicopters, trains, and more. You can even use your own hands to punch, slap, or grab your ragdoll characters.
-
Various characters to mistreat
-
The game lets you choose from different types of ragdoll characters to have fun with. You can select from humans such as men, women, children, police officers, soldiers, gangsters, and more. You can also select from animals such as dogs, cats, cows, pigs, chickens, and more. You can also select from zombies such as walkers, runners, crawlers, and more. You can also select from robots such as androids, cyborgs, drones, and more. You can even mix and match different characters to create your own combinations.
-
Different scenarios to explore
-
The game gives you a variety of scenarios to explore with your ragdoll characters. You can choose from urban settings such as a city, a town, a village, a park, and more. You can also choose from rural settings such as a farm, a barn, a field, and more. You can also choose from natural settings such as a desert, a forest, a lake, and more. You can also choose from artificial settings such as a factory, a warehouse, a prison, and more. You can even create your own scenarios by customizing the environment with different objects and props.
-
Realistic physics and graphics
-
The game has realistic physics and graphics that make the ragdoll effects more realistic and amusing. The game uses the Unity engine to create smooth and detailed animations for the ragdoll characters. The game also uses high-quality textures and lighting effects to create vivid and colorful visuals for the scenarios. The game also has gore effects that show blood splatters and body parts flying when you damage your ragdoll characters.
-
melon playground 3d apk free download
-download melon playground 3d mod apk
-melon playground 3d android game download
-how to download melon playground 3d on pc
-melon playground 3d latest version apk download
-melon playground 3d ragdoll game download
-download melon playground 3d sandbox apk
-melon playground 3d apk download uptodown[^1^]
-melon playground 3d online game download
-melon playground 3d weapons mod apk download
-melon playground 3d apk download for ios
-melon playground 3d unlimited money apk download
-melon playground 3d ragdoll simulator download
-download melon playground 3d from google play[^2^]
-melon playground 3d offline game download
-melon playground 3d hack apk download
-melon playground 3d full version apk download
-melon playground 3d best ragdoll game download
-download melon playground 3d for windows 10
-melon playground 3d cheats apk download
-melon playground 3d fun sandbox game download
-melon playground 3d apk pure download
-melon playground 3d new update apk download
-melon playground 3d realistic physics game download
-download melon playground 3d for mac
-
How to download Melon Playground 3D APK?
-
If you are interested in playing Melon Playground 3D on your Android device, you might want to download the APK file instead of the official version from the Google Play Store. The APK file is a modified version of the game that offers some benefits that the official version does not have.
-
The steps to download Melon Playground 3D APK
-
The steps to download Melon Playground 3D APK are very simple and easy. Here are the steps:
-
-
Go to a reliable website that offers the Melon Playground 3D APK file for free download. For example, you can go to [this website] that provides the latest version of the APK file.
-
Click on the download button and wait for the APK file to be downloaded to your device.
-
Go to your device's settings and enable the option to install apps from unknown sources. This will allow you to install the APK file without any problems.
-
Go to your device's file manager and locate the downloaded APK file. Tap on it and follow the instructions to install it on your device.
-
Enjoy playing Melon Playground 3D with all its features unlocked.
-
The benefits of downloading Melon Playground 3D APK
-
Downloading Melon Playground 3D APK has some benefits that you might not get from the official version of the game. Here are some of them:
-
Free and easy to install
-
The APK file is free to download and easy to install on your device. You do not need to pay any money or go through any complicated process to get the game. You just need to follow the steps mentioned above and you are good to go.
-
No ads or in-app purchases
-
The APK file does not have any ads or in-app purchases that might interrupt your gaming experience or make you spend extra money. You can enjoy the game without any distractions or limitations.
-
Unlimited access to all content
-
The APK file gives you unlimited access to all the content of the game. You can use all the weapons, characters, scenarios, and features that the game has to offer. You do not need to unlock anything or wait for anything. You can have fun with your ragdoll characters as much as you want.
-
Conclusion
-
Melon Playground 3D is a fun and crazy ragdoll game that lets you mistreat many characters with dozens of weapons in different scenarios. It has realistic physics and graphics that make the ragdoll effects more enjoyable and hilarious. It is a great game for Android users who love sandbox games and ragdoll physics. If you want to play Melon Playground 3D on your device, you might want to download the APK file instead of the official version from the Google Play Store. The APK file offers some benefits such as free and easy installation, no ads or in-app purchases, and unlimited access to all content. You can download the APK file from a reliable website and follow the steps to install it on your device. Then, you can start having fun with your ragdoll characters in Melon Playground 3D.
-
FAQs
-
-
Q: Is Melon Playground 3D safe to play?
-
A: Yes, Melon Playground 3D is safe to play as long as you download it from a trusted source and do not harm anyone in real life. The game is only meant for entertainment purposes and does not promote violence or cruelty.
-
Q: Is Melon Playground 3D suitable for children?
-
A: No, Melon Playground 3D is not suitable for children as it contains gore effects and mature themes that might be disturbing or inappropriate for young audiences. The game is rated 17+ by the Google Play Store and should only be played by adults or under parental supervision.
-
Q: How can I contact the developer of Melon Playground 3D?
-
A: You can contact the developer of Melon Playground 3D by sending an email to studio27@gmail.com or by visiting their website at [this link]. You can also follow them on their social media accounts such as Facebook, Twitter, Instagram, and YouTube.
-
Q: How can I support the development of Melon Playground 3D?
-
A: You can support the development of Melon Playground 3D by leaving a positive review and rating on the Google Play Store or on the website where you downloaded the APK file. You can also share the game with your friends and family who might enjoy it. You can also donate to the developer via PayPal or Patreon if you want to show your appreciation and help them create more games like this.
-
Q: What are some similar games to Melon Playground 3D?
-
A: Some similar games to Melon Playground 3D are Happy Room, Turbo Dismount, Ragdoll Simulator, Stickman Dismounting, and Ragdoll Sandbox.
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Create and Play MIDI Files with Roland Virtual Sound Canvas 3.2 (DXi and VST Instruments).md b/spaces/contluForse/HuggingGPT/assets/Create and Play MIDI Files with Roland Virtual Sound Canvas 3.2 (DXi and VST Instruments).md
deleted file mode 100644
index 6bb0bdc7ffd080934ab71335f30d137dc8dbd21d..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Create and Play MIDI Files with Roland Virtual Sound Canvas 3.2 (DXi and VST Instruments).md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-
Not likely, since neither the real Sound Canvas series nor the virtual versions (SC-VA included) support the real LA synthesis that defines MT-32 and CM-32/64-like synths (and thus MUNT). SC devices are only romplers that contain a CM-32/64-compatible sound bank at Bank MSB 127 and a CM-32/64-compatible drum set at channel 10/Program 127 (most likely this is what you have found), but they only work somewhat with titles that use the default instruments. Games and MIDI files that try to reprogram or modify the sounds the way a real MT-32-compatible synth allows fail on the whole Sound Canvas series (but work with MUNT). MUNT emulates the Roland MT-32 and similar synths far better than any Roland SC device ever did.
-
An attempt to look both backwards and forwards, the SH32 resurrected the 'SH' name that had last appeared on the SH101. A quick glance at the controls confirmed Roland's intention to market this as a return to its classic era, although the connection between an analogue monosynth and a four-voice, four-part multitimbral 'virtual' analogue with pretensions of Groovedom was rather tenuous. The engine at the core of the SH32 had a very silly name... (Wave Acceleration Sound Generation, or WASG), but it was at heart a conventional modelled analogue synth with lots of vintage-style waveforms, a multi-mode filter, a couple of contour generators, a couple of LFOs, and the now-obligatory effects section. To this, the company added a rhythm sound generator, and an arpeggiator that included four-part pattern generation. Unfortunately, despite an appealing sound, the SH32 was built to its affordable price, offering a diabolically impenetrable two-digit display, and a number of unexpected limitations. In consequence, what should have been a neat, successful product did not achieve its full potential.
In short, the V-Synth combines powerful S&S and virtual-analogue synth engines with sampling and Variphrase. The last of these is implemented in its full form, and you can use the encoded Variphrase samples just as you would use PCMs from the synth's permanent memory. Not that the memory is permanent in the conventional sense; the factory PCMs are held in a backup ROM which is loaded into RAM when you switch on. If you want to use only your own sounds (or a selection of factory and user sounds) you can do so, using a combination of PCM samples, your own samples, encoded Variphrase samples, and VA oscillators. Oh yes... and you can use the external input as a real-time sound source, too.
-
At the other end of the keyboard spectrum, Roland have also announced the Juno D, resurrecting another revered name from their history, just as they did with the SH32. Looking like nothing so much as a black RS50, this is Juno-esque in the sense that it is low-cost and simple to use. However, contrary to expectation, it eschews the virtual-analogue technologies of the V-Synth and VariOS, and is a PCM-based synth. With lots of useable sounds, good effects, an arpeggiator, and bundled PC and Mac editing software, it appears to be good value, but I think that Roland have made a mistake by raising people's expectations ('It's the return of the Juno!') and then dashing them again ('No, it's not!').
-
More interesting, although unheard at the time of writing, is the VC1 'D50' V-Card for the V-Synth. This purports to recreate the D50 as a virtual synth within the V-Synth itself, even to the extent of being able to load original D50 patch data via MIDI. If it truly recreates the feel and sound of the original, I can see the VC1 becoming a 'must-have' add-on for V-Synth owners.The FR5 'V-Accordion', still unreleased at the time of writing.
-
Gain access to the full set of virtual instruments to compose, play, record and save music files in General MIDI 2 and Roland GS. The suite supports older versions of Windows OS and provides basic composing, editing and uploading options for music and sounds.
-
VI49 is bundled with Ableton Live Lite and Xpand!2 by AIR Music Tech, two dynamic pieces of software that enable you to record, produce, and perform with your computer. Ableton Live Lite is a fluid audio/MIDI environment that enables you to spontaneously record, remix, improvise, and edit musical ideas on the fly. Xpand!2 is an advanced virtual instrument that comes with a collection of premium sounds, ranging from acoustic instruments to futuristic synthesizers. Together, these powerful music platforms allow you to create or perform music with VI49 right out of the box.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Ek Ajnabee Movie Download In Hindi Hd 720p The Best Sites to Stream or Download the Film.md b/spaces/contluForse/HuggingGPT/assets/Ek Ajnabee Movie Download In Hindi Hd 720p The Best Sites to Stream or Download the Film.md
deleted file mode 100644
index e2e8b30c411b650aee9e9e7710972852aeccb59c..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Ek Ajnabee Movie Download In Hindi Hd 720p The Best Sites to Stream or Download the Film.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-
the Kill The Rapist full movie in hindi free download hd Mahalaxmi Vrat Katha In Marathi Pdf Download Psihologija Uspeha Dale Carnegie Pdf 14 Ghatak movie dual audio download golpitha namdeo dhasal pdf 13 teacher tullu student tunne kama kannada kategalu zip Gangs Of Wasseypur movie with english subtitles download kickass utorrent space shuttle mission 2007 crack download Himekishi Lilia Uncensored Tarot Et Belote 3d Deluxe PC
-
wondershare data recovery crack kickass 3d gay villa 2.rar native instruments scarbee rickenbacker bass crack WiFi Commander: 3D Analyze Monitor full version accurate 4 deluxe keygenbfdcm Uncharted 3 Drakes Deception [FullGame] [PC-Windows] 56 Coco (English) movie in hindi dubbed torrent house m d soundtracks all seasons cara homeopathic software free download full versioninstmank arabic fonts for autocad mac crack
Bely Belinda Custom devdas movie download filmywap bollywood aaina full movie 1993 free download Pthc R Ygold Julia 14yo Billie Holiday - Discography (1944-2010) [320 kbps] sherlock holmes 2 tamil dubbed movie free download Mylola Info Nelia 11 Yo .avi carti crestine pdf free download tamil full movie download utorrent genial klick a1 arbeitsbuch pdf download
-
Lera lynn lately instrumental music download HACK Microsoft Office 16 Word Excel PowerPoint x32 v16.0.9226.2114 bola de drac gt completa catalan torrent Torchat Ie7h37c4qmu5ccza 14 Arrival (English) 2 movie download 720p hd descargar algebra moderna de sebastian lazo pdf Neighbours From Hell 2 Full Game Free 11 xforce keygen 64 bits Entertainment Creation Suite 2017 descargar descargar solucionario del libro de ingenieria industrial de niebel 77 solutions sm modern compressible flow zip
-
groove agent 3 vst torrent download komik mandala dari sungai ular gta iv advanced hook.dll download Alicia Keys-Unplugged full album zip Rehnaa Hai Terre Dil Mein man 3 movie free download in hindi hd 720p Rangeela movie download in hindi hd 720p kickass Solucionario Calor Y Termodinamica Zemansky iblis menggugat tuhan full version huawei e303 bin file librecad handbuch deutsch pdf download
-
Rampur Ka Laxman Bhojpuri Movie Song Downloadgolkesl Download free e-books epub My Book With No Free popular ebooks download Convenience Store star wars theme sheet music trumpet Thoda Pyaar Thoda Magic 3 Full Movie Hd Download Utorrentl Trio Maison Femme Partagee Global Earth Leakage Protection Market Production, Consumption, Export, Import Analysis(2013-2018E) and Forecast Till 2023 Hairy gay latina sex tube movies. hot black milf sex Review Disk Space For Mac
- aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/runner/iter_based_runner.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/runner/iter_based_runner.py
deleted file mode 100644
index 1df4de8c0285669dec9b014dfd1f3dd1600f0831..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/runner/iter_based_runner.py
+++ /dev/null
@@ -1,273 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os.path as osp
-import platform
-import shutil
-import time
-import warnings
-
-import torch
-from torch.optim import Optimizer
-
-import annotator.uniformer.mmcv as mmcv
-from .base_runner import BaseRunner
-from .builder import RUNNERS
-from .checkpoint import save_checkpoint
-from .hooks import IterTimerHook
-from .utils import get_host_info
-
-
-class IterLoader:
-
- def __init__(self, dataloader):
- self._dataloader = dataloader
- self.iter_loader = iter(self._dataloader)
- self._epoch = 0
-
- @property
- def epoch(self):
- return self._epoch
-
- def __next__(self):
- try:
- data = next(self.iter_loader)
- except StopIteration:
- self._epoch += 1
- if hasattr(self._dataloader.sampler, 'set_epoch'):
- self._dataloader.sampler.set_epoch(self._epoch)
- time.sleep(2) # Prevent possible deadlock during epoch transition
- self.iter_loader = iter(self._dataloader)
- data = next(self.iter_loader)
-
- return data
-
- def __len__(self):
- return len(self._dataloader)
-
-
-@RUNNERS.register_module()
-class IterBasedRunner(BaseRunner):
- """Iteration-based Runner.
-
-    This runner trains models iteration by iteration.
- """
-
- def train(self, data_loader, **kwargs):
- self.model.train()
- self.mode = 'train'
- self.data_loader = data_loader
- self._epoch = data_loader.epoch
- data_batch = next(data_loader)
- self.call_hook('before_train_iter')
- outputs = self.model.train_step(data_batch, self.optimizer, **kwargs)
- if not isinstance(outputs, dict):
- raise TypeError('model.train_step() must return a dict')
- if 'log_vars' in outputs:
- self.log_buffer.update(outputs['log_vars'], outputs['num_samples'])
- self.outputs = outputs
- self.call_hook('after_train_iter')
- self._inner_iter += 1
- self._iter += 1
-
- @torch.no_grad()
- def val(self, data_loader, **kwargs):
- self.model.eval()
- self.mode = 'val'
- self.data_loader = data_loader
- data_batch = next(data_loader)
- self.call_hook('before_val_iter')
- outputs = self.model.val_step(data_batch, **kwargs)
- if not isinstance(outputs, dict):
- raise TypeError('model.val_step() must return a dict')
- if 'log_vars' in outputs:
- self.log_buffer.update(outputs['log_vars'], outputs['num_samples'])
- self.outputs = outputs
- self.call_hook('after_val_iter')
- self._inner_iter += 1
-
- def run(self, data_loaders, workflow, max_iters=None, **kwargs):
- """Start running.
-
- Args:
- data_loaders (list[:obj:`DataLoader`]): Dataloaders for training
- and validation.
- workflow (list[tuple]): A list of (phase, iters) to specify the
- running order and iterations. E.g, [('train', 10000),
- ('val', 1000)] means running 10000 iterations for training and
- 1000 iterations for validation, iteratively.
- """
- assert isinstance(data_loaders, list)
- assert mmcv.is_list_of(workflow, tuple)
- assert len(data_loaders) == len(workflow)
- if max_iters is not None:
- warnings.warn(
- 'setting max_iters in run is deprecated, '
- 'please set max_iters in runner_config', DeprecationWarning)
- self._max_iters = max_iters
- assert self._max_iters is not None, (
- 'max_iters must be specified during instantiation')
-
- work_dir = self.work_dir if self.work_dir is not None else 'NONE'
- self.logger.info('Start running, host: %s, work_dir: %s',
- get_host_info(), work_dir)
- self.logger.info('Hooks will be executed in the following order:\n%s',
- self.get_hook_info())
- self.logger.info('workflow: %s, max: %d iters', workflow,
- self._max_iters)
- self.call_hook('before_run')
-
- iter_loaders = [IterLoader(x) for x in data_loaders]
-
- self.call_hook('before_epoch')
-
- while self.iter < self._max_iters:
- for i, flow in enumerate(workflow):
- self._inner_iter = 0
- mode, iters = flow
- if not isinstance(mode, str) or not hasattr(self, mode):
- raise ValueError(
- 'runner has no method named "{}" to run a workflow'.
- format(mode))
- iter_runner = getattr(self, mode)
- for _ in range(iters):
- if mode == 'train' and self.iter >= self._max_iters:
- break
- iter_runner(iter_loaders[i], **kwargs)
-
- time.sleep(1) # wait for some hooks like loggers to finish
- self.call_hook('after_epoch')
- self.call_hook('after_run')
-
- def resume(self,
- checkpoint,
- resume_optimizer=True,
- map_location='default'):
- """Resume model from checkpoint.
-
- Args:
- checkpoint (str): Checkpoint to resume from.
- resume_optimizer (bool, optional): Whether resume the optimizer(s)
- if the checkpoint file includes optimizer(s). Default to True.
- map_location (str, optional): Same as :func:`torch.load`.
- Default to 'default'.
- """
- if map_location == 'default':
- device_id = torch.cuda.current_device()
- checkpoint = self.load_checkpoint(
- checkpoint,
- map_location=lambda storage, loc: storage.cuda(device_id))
- else:
- checkpoint = self.load_checkpoint(
- checkpoint, map_location=map_location)
-
- self._epoch = checkpoint['meta']['epoch']
- self._iter = checkpoint['meta']['iter']
- self._inner_iter = checkpoint['meta']['iter']
- if 'optimizer' in checkpoint and resume_optimizer:
- if isinstance(self.optimizer, Optimizer):
- self.optimizer.load_state_dict(checkpoint['optimizer'])
- elif isinstance(self.optimizer, dict):
- for k in self.optimizer.keys():
- self.optimizer[k].load_state_dict(
- checkpoint['optimizer'][k])
- else:
- raise TypeError(
- 'Optimizer should be dict or torch.optim.Optimizer '
- f'but got {type(self.optimizer)}')
-
- self.logger.info(f'resumed from epoch: {self.epoch}, iter {self.iter}')
-
- def save_checkpoint(self,
- out_dir,
- filename_tmpl='iter_{}.pth',
- meta=None,
- save_optimizer=True,
- create_symlink=True):
- """Save checkpoint to file.
-
- Args:
- out_dir (str): Directory to save checkpoint files.
- filename_tmpl (str, optional): Checkpoint file template.
- Defaults to 'iter_{}.pth'.
- meta (dict, optional): Metadata to be saved in checkpoint.
- Defaults to None.
- save_optimizer (bool, optional): Whether save optimizer.
- Defaults to True.
- create_symlink (bool, optional): Whether create symlink to the
- latest checkpoint file. Defaults to True.
- """
- if meta is None:
- meta = {}
- elif not isinstance(meta, dict):
- raise TypeError(
- f'meta should be a dict or None, but got {type(meta)}')
- if self.meta is not None:
- meta.update(self.meta)
- # Note: meta.update(self.meta) should be done before
- # meta.update(epoch=self.epoch + 1, iter=self.iter) otherwise
- # there will be problems with resumed checkpoints.
- # More details in https://github.com/open-mmlab/mmcv/pull/1108
- meta.update(epoch=self.epoch + 1, iter=self.iter)
-
- filename = filename_tmpl.format(self.iter + 1)
- filepath = osp.join(out_dir, filename)
- optimizer = self.optimizer if save_optimizer else None
- save_checkpoint(self.model, filepath, optimizer=optimizer, meta=meta)
- # in some environments, `os.symlink` is not supported, you may need to
- # set `create_symlink` to False
- if create_symlink:
- dst_file = osp.join(out_dir, 'latest.pth')
- if platform.system() != 'Windows':
- mmcv.symlink(filename, dst_file)
- else:
- shutil.copy(filepath, dst_file)
-
- def register_training_hooks(self,
- lr_config,
- optimizer_config=None,
- checkpoint_config=None,
- log_config=None,
- momentum_config=None,
- custom_hooks_config=None):
- """Register default hooks for iter-based training.
-
- Checkpoint hook, optimizer stepper hook and logger hooks will be set to
- `by_epoch=False` by default.
-
- Default hooks include:
-
- +----------------------+-------------------------+
- | Hooks | Priority |
- +======================+=========================+
- | LrUpdaterHook | VERY_HIGH (10) |
- +----------------------+-------------------------+
- | MomentumUpdaterHook | HIGH (30) |
- +----------------------+-------------------------+
- | OptimizerStepperHook | ABOVE_NORMAL (40) |
- +----------------------+-------------------------+
- | CheckpointSaverHook | NORMAL (50) |
- +----------------------+-------------------------+
- | IterTimerHook | LOW (70) |
- +----------------------+-------------------------+
- | LoggerHook(s) | VERY_LOW (90) |
- +----------------------+-------------------------+
- | CustomHook(s) | defaults to NORMAL (50) |
- +----------------------+-------------------------+
-
- If custom hooks have same priority with default hooks, custom hooks
- will be triggered after default hooks.
- """
- if checkpoint_config is not None:
- checkpoint_config.setdefault('by_epoch', False)
- if lr_config is not None:
- lr_config.setdefault('by_epoch', False)
- if log_config is not None:
- for info in log_config['hooks']:
- info.setdefault('by_epoch', False)
- super(IterBasedRunner, self).register_training_hooks(
- lr_config=lr_config,
- momentum_config=momentum_config,
- optimizer_config=optimizer_config,
- checkpoint_config=checkpoint_config,
- log_config=log_config,
- timer_config=IterTimerHook(),
- custom_hooks_config=custom_hooks_config)
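The `IterLoader` helper above is what lets `IterBasedRunner` pull batches indefinitely from an epoch-based `DataLoader`. The snippet below is a minimal, standalone sketch of that same restart-on-`StopIteration` pattern (it omits the `sampler.set_epoch` call and the deadlock-avoidance sleep from the original); the name `CyclingLoader` is illustrative and not part of mmcv.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset


class CyclingLoader:
    """Restart a DataLoader whenever it is exhausted, mirroring IterLoader above."""

    def __init__(self, dataloader):
        self._dataloader = dataloader
        self._iterator = iter(dataloader)
        self.epoch = 0

    def __next__(self):
        try:
            return next(self._iterator)
        except StopIteration:
            # One pass over the data is done: bump the epoch counter and
            # start over, just like IterLoader.__next__.
            self.epoch += 1
            self._iterator = iter(self._dataloader)
            return next(self._iterator)


if __name__ == "__main__":
    dataset = TensorDataset(torch.arange(10).float())
    loader = CyclingLoader(DataLoader(dataset, batch_size=4))
    # Draw more batches than one epoch contains; the wrapper keeps going.
    for step in range(7):
        (batch,) = next(loader)
        print(f"step {step}, epoch {loader.epoch}, batch {batch.tolist()}")
```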
diff --git a/spaces/course-demos/marian-finetuned-kde4-en-to-fr/app.py b/spaces/course-demos/marian-finetuned-kde4-en-to-fr/app.py
deleted file mode 100644
index c71682697233e139250fbab2c29ee28f7ab401a7..0000000000000000000000000000000000000000
--- a/spaces/course-demos/marian-finetuned-kde4-en-to-fr/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("huggingface/huggingface-course/marian-finetuned-kde4-en-to-fr", title=None, inputs=gr.Textbox(label="Input", lines=3, value="This plugin allows you to automatically translate web pages between several languages.")).launch()
\ No newline at end of file
diff --git a/spaces/dajuzi/img-to-music/share_btn.py b/spaces/dajuzi/img-to-music/share_btn.py
deleted file mode 100644
index 1a2ac6a6e74b114dbd54c2f24723a87180db51ef..0000000000000000000000000000000000000000
--- a/spaces/dajuzi/img-to-music/share_btn.py
+++ /dev/null
@@ -1,100 +0,0 @@
-community_icon_html = """"""
-
-loading_icon_html = """"""
-
-share_js = """async () => {
- async function uploadFile(file){
- const UPLOAD_URL = 'https://huggingface.co/uploads';
- const response = await fetch(UPLOAD_URL, {
- method: 'POST',
- headers: {
- 'Content-Type': file.type,
- 'X-Requested-With': 'XMLHttpRequest',
- },
- body: file, /// <- File inherits from Blob
- });
- const url = await response.text();
- return url;
- }
- async function getInputImgFile(imgEl){
- const res = await fetch(imgEl.src);
- const blob = await res.blob();
- const imgId = Date.now() % 200;
- const isPng = imgEl.src.startsWith(`data:image/png`);
- if(isPng){
-            const fileName = `sd-perception-${imgId}.png`;
- return new File([blob], fileName, { type: 'image/png' });
- }else{
-            const fileName = `sd-perception-${imgId}.jpg`;
- return new File([blob], fileName, { type: 'image/jpeg' });
- }
- }
- async function getOutputMusicFile(audioEL){
- const res = await fetch(audioEL.src);
- const blob = await res.blob();
- const audioId = Date.now() % 200;
-        const fileName = `img-to-music-${audioId}.wav`;
- const musicBlob = new File([blob], fileName, { type: 'audio/wav' });
- console.log(musicBlob);
- return musicBlob;
- }
-
- async function audioToBase64(audioFile) {
- return new Promise((resolve, reject) => {
- let reader = new FileReader();
- reader.readAsDataURL(audioFile);
- reader.onload = () => resolve(reader.result);
- reader.onerror = error => reject(error);
-
- });
- }
- const gradioEl = document.querySelector('body > gradio-app');
- // const gradioEl = document.querySelector("gradio-app").shadowRoot;
- const inputImgEl = gradioEl.querySelector('#input-img img');
- const outputMusic = gradioEl.querySelector('#music-output audio');
- const outputMusic_src = gradioEl.querySelector('#music-output audio').src;
- const outputMusic_name = outputMusic_src.split('/').pop();
- let titleTxt = outputMusic_name;
- //if(titleTxt.length > 100){
- // titleTxt = titleTxt.slice(0, 100) + ' ...';
- //}
- const shareBtnEl = gradioEl.querySelector('#share-btn');
- const shareIconEl = gradioEl.querySelector('#share-btn-share-icon');
- const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon');
- if(!outputMusic){
- return;
- };
- shareBtnEl.style.pointerEvents = 'none';
- shareIconEl.style.display = 'none';
- loadingIconEl.style.removeProperty('display');
- const inputFile = await getInputImgFile(inputImgEl);
- const urlInputImg = await uploadFile(inputFile);
- const musicFile = await getOutputMusicFile(outputMusic);
- const dataOutputMusic = await uploadFile(musicFile);
-
-    const descriptionMd = `#### Input img:
-<img src="${urlInputImg}" />
-
-#### Music:
-<audio controls src="${dataOutputMusic}"></audio>
-`;
- const params = new URLSearchParams({
- title: titleTxt,
- description: descriptionMd,
- });
- const paramsStr = params.toString();
- window.open(`https://huggingface.co/spaces/fffiloni/img-to-music/discussions/new?${paramsStr}`, '_blank');
- shareBtnEl.style.removeProperty('pointer-events');
- shareIconEl.style.removeProperty('display');
- loadingIconEl.style.display = 'none';
-}"""
\ No newline at end of file
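For reference, here is a rough Python sketch of what `share_js` above does in the browser: POST the raw file bytes to the Hugging Face uploads endpoint, then build a "new discussion" URL whose description references the returned file URL. The endpoint and headers are copied from the JavaScript above and treated as assumptions rather than a documented API; `upload_file` and the local `output.wav` path are hypothetical.

```python
import mimetypes
from urllib.parse import urlencode

import requests

UPLOAD_URL = "https://huggingface.co/uploads"  # taken from share_js above; assumed, not documented


def upload_file(path: str) -> str:
    """Upload one file as a raw POST body and return the URL the endpoint responds with."""
    content_type = mimetypes.guess_type(path)[0] or "application/octet-stream"
    with open(path, "rb") as fh:
        response = requests.post(
            UPLOAD_URL,
            data=fh,  # raw body, mirroring `body: file` in the JS
            headers={
                "Content-Type": content_type,
                "X-Requested-With": "XMLHttpRequest",
            },
        )
    response.raise_for_status()
    return response.text


if __name__ == "__main__":
    audio_url = upload_file("output.wav")  # hypothetical local file
    params = urlencode({
        "title": "img-to-music result",
        "description": f"#### Music:\n{audio_url}",
    })
    print(f"https://huggingface.co/spaces/fffiloni/img-to-music/discussions/new?{params}")
```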
diff --git a/spaces/dawood/Kanye-AI/hubert/__init__.py b/spaces/dawood/Kanye-AI/hubert/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/experimental/rl/__init__.py b/spaces/declare-lab/tango/diffusers/src/diffusers/experimental/rl/__init__.py
deleted file mode 100644
index 7b338d3173e12d478b6b6d6fd0e50650a0ab5a4c..0000000000000000000000000000000000000000
--- a/spaces/declare-lab/tango/diffusers/src/diffusers/experimental/rl/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .value_guided_sampling import ValueGuidedRLPipeline
diff --git a/spaces/declare-lab/tango/diffusers/tests/schedulers/test_scheduler_euler.py b/spaces/declare-lab/tango/diffusers/tests/schedulers/test_scheduler_euler.py
deleted file mode 100644
index 4d521b0075e18710b88ed3efe1f2652bb4718733..0000000000000000000000000000000000000000
--- a/spaces/declare-lab/tango/diffusers/tests/schedulers/test_scheduler_euler.py
+++ /dev/null
@@ -1,119 +0,0 @@
-import torch
-
-from diffusers import EulerDiscreteScheduler
-from diffusers.utils import torch_device
-
-from .test_schedulers import SchedulerCommonTest
-
-
-class EulerDiscreteSchedulerTest(SchedulerCommonTest):
- scheduler_classes = (EulerDiscreteScheduler,)
- num_inference_steps = 10
-
- def get_scheduler_config(self, **kwargs):
- config = {
- "num_train_timesteps": 1100,
- "beta_start": 0.0001,
- "beta_end": 0.02,
- "beta_schedule": "linear",
- }
-
- config.update(**kwargs)
- return config
-
- def test_timesteps(self):
- for timesteps in [10, 50, 100, 1000]:
- self.check_over_configs(num_train_timesteps=timesteps)
-
- def test_betas(self):
- for beta_start, beta_end in zip([0.00001, 0.0001, 0.001], [0.0002, 0.002, 0.02]):
- self.check_over_configs(beta_start=beta_start, beta_end=beta_end)
-
- def test_schedules(self):
- for schedule in ["linear", "scaled_linear"]:
- self.check_over_configs(beta_schedule=schedule)
-
- def test_prediction_type(self):
- for prediction_type in ["epsilon", "v_prediction"]:
- self.check_over_configs(prediction_type=prediction_type)
-
- def test_full_loop_no_noise(self):
- scheduler_class = self.scheduler_classes[0]
- scheduler_config = self.get_scheduler_config()
- scheduler = scheduler_class(**scheduler_config)
-
- scheduler.set_timesteps(self.num_inference_steps)
-
- generator = torch.manual_seed(0)
-
- model = self.dummy_model()
- sample = self.dummy_sample_deter * scheduler.init_noise_sigma
- sample = sample.to(torch_device)
-
- for i, t in enumerate(scheduler.timesteps):
- sample = scheduler.scale_model_input(sample, t)
-
- model_output = model(sample, t)
-
- output = scheduler.step(model_output, t, sample, generator=generator)
- sample = output.prev_sample
-
- result_sum = torch.sum(torch.abs(sample))
- result_mean = torch.mean(torch.abs(sample))
-
- assert abs(result_sum.item() - 10.0807) < 1e-2
- assert abs(result_mean.item() - 0.0131) < 1e-3
-
- def test_full_loop_with_v_prediction(self):
- scheduler_class = self.scheduler_classes[0]
- scheduler_config = self.get_scheduler_config(prediction_type="v_prediction")
- scheduler = scheduler_class(**scheduler_config)
-
- scheduler.set_timesteps(self.num_inference_steps)
-
- generator = torch.manual_seed(0)
-
- model = self.dummy_model()
- sample = self.dummy_sample_deter * scheduler.init_noise_sigma
- sample = sample.to(torch_device)
-
- for i, t in enumerate(scheduler.timesteps):
- sample = scheduler.scale_model_input(sample, t)
-
- model_output = model(sample, t)
-
- output = scheduler.step(model_output, t, sample, generator=generator)
- sample = output.prev_sample
-
- result_sum = torch.sum(torch.abs(sample))
- result_mean = torch.mean(torch.abs(sample))
-
- assert abs(result_sum.item() - 0.0002) < 1e-2
- assert abs(result_mean.item() - 2.2676e-06) < 1e-3
-
- def test_full_loop_device(self):
- scheduler_class = self.scheduler_classes[0]
- scheduler_config = self.get_scheduler_config()
- scheduler = scheduler_class(**scheduler_config)
-
- scheduler.set_timesteps(self.num_inference_steps, device=torch_device)
-
- generator = torch.manual_seed(0)
-
- model = self.dummy_model()
- sample = self.dummy_sample_deter * scheduler.init_noise_sigma
- sample = sample.to(torch_device)
-
- for t in scheduler.timesteps:
- sample = scheduler.scale_model_input(sample, t)
-
- model_output = model(sample, t)
-
- output = scheduler.step(model_output, t, sample, generator=generator)
- sample = output.prev_sample
-
- result_sum = torch.sum(torch.abs(sample))
- result_mean = torch.mean(torch.abs(sample))
-
- assert abs(result_sum.item() - 10.0807) < 1e-2
- assert abs(result_mean.item() - 0.0131) < 1e-3
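All three tests above exercise the same denoising flow. Pulled out of the test harness, a minimal sketch of that flow looks like the following, with a random tensor standing in for a real denoising model, so the output is meaningless but the API sequence (`set_timesteps` → `scale_model_input` → `step` → `prev_sample`) is the one the tests rely on.

```python
import torch

from diffusers import EulerDiscreteScheduler

scheduler = EulerDiscreteScheduler(
    num_train_timesteps=1100, beta_start=0.0001, beta_end=0.02, beta_schedule="linear"
)
scheduler.set_timesteps(10)

generator = torch.manual_seed(0)
sample = torch.randn(1, 3, 8, 8) * scheduler.init_noise_sigma

for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(sample, t)
    noise_pred = torch.randn_like(model_input)  # stand-in for a real UNet's output
    sample = scheduler.step(noise_pred, t, sample, generator=generator).prev_sample

print(sample.abs().mean())
```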
diff --git a/spaces/deepwisdom/MetaGPT/metagpt/memory/memory.py b/spaces/deepwisdom/MetaGPT/metagpt/memory/memory.py
deleted file mode 100644
index bf9f0541c79b426008c9b4f0548729dabcb4273f..0000000000000000000000000000000000000000
--- a/spaces/deepwisdom/MetaGPT/metagpt/memory/memory.py
+++ /dev/null
@@ -1,95 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-"""
-@Time : 2023/5/20 12:15
-@Author : alexanderwu
-@File : memory.py
-"""
-from collections import defaultdict
-from typing import Iterable, Type
-
-from metagpt.actions import Action
-from metagpt.schema import Message
-
-
-class Memory:
- """The most basic memory: super-memory"""
-
- def __init__(self):
- """Initialize an empty storage list and an empty index dictionary"""
- self.storage: list[Message] = []
- self.index: dict[Type[Action], list[Message]] = defaultdict(list)
-
- def add(self, message: Message):
- """Add a new message to storage, while updating the index"""
- if message in self.storage:
- return
- self.storage.append(message)
- if message.cause_by:
- self.index[message.cause_by].append(message)
-
- def add_batch(self, messages: Iterable[Message]):
- for message in messages:
- self.add(message)
-
- def get_by_role(self, role: str) -> list[Message]:
- """Return all messages of a specified role"""
- return [message for message in self.storage if message.role == role]
-
- def get_by_content(self, content: str) -> list[Message]:
- """Return all messages containing a specified content"""
- return [message for message in self.storage if content in message.content]
-
- def delete(self, message: Message):
- """Delete the specified message from storage, while updating the index"""
- self.storage.remove(message)
- if message.cause_by and message in self.index[message.cause_by]:
- self.index[message.cause_by].remove(message)
-
- def clear(self):
- """Clear storage and index"""
- self.storage = []
- self.index = defaultdict(list)
-
- def count(self) -> int:
- """Return the number of messages in storage"""
- return len(self.storage)
-
- def try_remember(self, keyword: str) -> list[Message]:
- """Try to recall all messages containing a specified keyword"""
- return [message for message in self.storage if keyword in message.content]
-
- def get(self, k=0) -> list[Message]:
- """Return the most recent k memories, return all when k=0"""
- return self.storage[-k:]
-
- def remember(self, observed: list[Message], k=0) -> list[Message]:
- """remember the most recent k memories from observed Messages, return all when k=0"""
- already_observed = self.get(k)
- news: list[Message] = []
- for i in observed:
- if i in already_observed:
- continue
- news.append(i)
- return news
-
- def get_by_action(self, action: Type[Action]) -> list[Message]:
- """Return all messages triggered by a specified Action"""
- return self.index[action]
-
- def get_by_actions(self, actions: Iterable[Type[Action]]) -> list[Message]:
- """Return all messages triggered by specified Actions"""
- rsp = []
- for action in actions:
- if action not in self.index:
- continue
- rsp += self.index[action]
- return rsp
-
- def get_by_tags(self, tags: list) -> list[Message]:
- """Return messages with specified tags"""
- result = []
- for m in self.storage:
- if m.is_contain_tags(tags):
- result.append(m)
- return result
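A small usage sketch for the `Memory` class above, assuming the metagpt package is importable as `metagpt.memory.memory`; the `Message` keyword arguments and the `DemoAction` stand-in are assumptions based only on the fields this file touches (`role`, `content`, `cause_by`), not on the full schema.

```python
from metagpt.actions import Action
from metagpt.memory.memory import Memory
from metagpt.schema import Message


class DemoAction(Action):
    """Stand-in action class; any Action subclass can serve as an index key."""


memory = Memory()
# cause_by is assumed to accept an Action subclass, as the index in add() suggests.
memory.add(Message(role="user", content="Please draft a short PRD.", cause_by=DemoAction))
memory.add(Message(role="assistant", content="Here is a draft PRD ...", cause_by=DemoAction))

print(memory.count())                    # number of stored messages
print(memory.get_by_role("assistant"))   # messages with role == "assistant"
print(memory.get_by_content("PRD"))      # substring match over message content
print(memory.get_by_action(DemoAction))  # messages found via the cause_by index
```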
diff --git a/spaces/deepwisdom/MetaGPT/metagpt/tools/search_engine_serpapi.py b/spaces/deepwisdom/MetaGPT/metagpt/tools/search_engine_serpapi.py
deleted file mode 100644
index 750184198c17873ca20c84ac3a40b0365b7f1f29..0000000000000000000000000000000000000000
--- a/spaces/deepwisdom/MetaGPT/metagpt/tools/search_engine_serpapi.py
+++ /dev/null
@@ -1,115 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-"""
-@Time : 2023/5/23 18:27
-@Author : alexanderwu
-@File : search_engine_serpapi.py
-"""
-from typing import Any, Dict, Optional, Tuple
-
-import aiohttp
-from pydantic import BaseModel, Field, validator
-
-from metagpt.config import CONFIG
-
-
-class SerpAPIWrapper(BaseModel):
- search_engine: Any #: :meta private:
- params: dict = Field(
- default={
- "engine": "google",
- "google_domain": "google.com",
- "gl": "us",
- "hl": "en",
- }
- )
- serpapi_api_key: Optional[str] = None
- aiosession: Optional[aiohttp.ClientSession] = None
-
- class Config:
- arbitrary_types_allowed = True
-
- @validator("serpapi_api_key", always=True)
- @classmethod
- def check_serpapi_api_key(cls, val: str):
- val = val or CONFIG.serpapi_api_key
- if not val:
- raise ValueError(
- "To use, make sure you provide the serpapi_api_key when constructing an object. Alternatively, "
- "ensure that the environment variable SERPAPI_API_KEY is set with your API key. You can obtain "
- "an API key from https://serpapi.com/."
- )
- return val
-
- async def run(self, query, max_results: int = 8, as_string: bool = True, **kwargs: Any) -> str:
- """Run query through SerpAPI and parse result async."""
- return self._process_response(await self.results(query, max_results), as_string=as_string)
-
- async def results(self, query: str, max_results: int) -> dict:
- """Use aiohttp to run query through SerpAPI and return the results async."""
-
- def construct_url_and_params() -> Tuple[str, Dict[str, str]]:
- params = self.get_params(query)
- params["source"] = "python"
- params["num"] = max_results
- params["output"] = "json"
- url = "https://serpapi.com/search"
- return url, params
-
- url, params = construct_url_and_params()
- if not self.aiosession:
- async with aiohttp.ClientSession() as session:
- async with session.get(url, params=params) as response:
- res = await response.json()
- else:
- async with self.aiosession.get(url, params=params) as response:
- res = await response.json()
-
- return res
-
- def get_params(self, query: str) -> Dict[str, str]:
- """Get parameters for SerpAPI."""
- _params = {
- "api_key": self.serpapi_api_key,
- "q": query,
- }
- params = {**self.params, **_params}
- return params
-
- @staticmethod
- def _process_response(res: dict, as_string: bool) -> str:
- """Process response from SerpAPI."""
- # logger.debug(res)
- focus = ["title", "snippet", "link"]
- get_focused = lambda x: {i: j for i, j in x.items() if i in focus}
-
- if "error" in res.keys():
- raise ValueError(f"Got error from SerpAPI: {res['error']}")
- if "answer_box" in res.keys() and "answer" in res["answer_box"].keys():
- toret = res["answer_box"]["answer"]
- elif "answer_box" in res.keys() and "snippet" in res["answer_box"].keys():
- toret = res["answer_box"]["snippet"]
- elif "answer_box" in res.keys() and "snippet_highlighted_words" in res["answer_box"].keys():
- toret = res["answer_box"]["snippet_highlighted_words"][0]
- elif "sports_results" in res.keys() and "game_spotlight" in res["sports_results"].keys():
- toret = res["sports_results"]["game_spotlight"]
- elif "knowledge_graph" in res.keys() and "description" in res["knowledge_graph"].keys():
- toret = res["knowledge_graph"]["description"]
- elif "snippet" in res["organic_results"][0].keys():
- toret = res["organic_results"][0]["snippet"]
- else:
- toret = "No good search result found"
-
- toret_l = []
- if "answer_box" in res.keys() and "snippet" in res["answer_box"].keys():
- toret_l += [get_focused(res["answer_box"])]
- if res.get("organic_results"):
- toret_l += [get_focused(i) for i in res.get("organic_results")]
-
- return str(toret) + "\n" + str(toret_l) if as_string else toret_l
-
-
-if __name__ == "__main__":
- import fire
-
- fire.Fire(SerpAPIWrapper().run)
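A minimal async usage sketch for `SerpAPIWrapper`, assuming the module is importable and that you supply a real SerpAPI key (the placeholder key and query below are hypothetical; if the key is omitted, the validator falls back to `CONFIG.serpapi_api_key`).

```python
import asyncio

from metagpt.tools.search_engine_serpapi import SerpAPIWrapper


async def main():
    # Placeholder key; substitute your own or configure SERPAPI_API_KEY instead.
    wrapper = SerpAPIWrapper(serpapi_api_key="YOUR_SERPAPI_KEY")
    # as_string=True returns the best answer plus a list of focused organic results.
    result = await wrapper.run("open source search engines", max_results=3, as_string=True)
    print(result)


if __name__ == "__main__":
    asyncio.run(main())
```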
diff --git a/spaces/deepwisdom/MetaGPT/tests/metagpt/actions/test_write_prd_review.py b/spaces/deepwisdom/MetaGPT/tests/metagpt/actions/test_write_prd_review.py
deleted file mode 100644
index 5077fa4657ee95a5e28d350769de86b4576f1a0a..0000000000000000000000000000000000000000
--- a/spaces/deepwisdom/MetaGPT/tests/metagpt/actions/test_write_prd_review.py
+++ /dev/null
@@ -1,32 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-"""
-@Time : 2023/5/11 17:45
-@Author : alexanderwu
-@File : test_write_prd_review.py
-"""
-import pytest
-
-from metagpt.actions.write_prd_review import WritePRDReview
-
-
-@pytest.mark.asyncio
-async def test_write_prd_review():
- prd = """
- Introduction: This is a new feature for our product.
- Goals: The goal is to improve user engagement.
- User Scenarios: The expected user group is millennials who like to use social media.
- Requirements: The feature needs to be interactive and user-friendly.
- Constraints: The feature needs to be implemented within 2 months.
- Mockups: There will be a new button on the homepage that users can click to access the feature.
- Metrics: We will measure the success of the feature by user engagement metrics.
- Timeline: The feature should be ready for testing in 1.5 months.
- """
-
- write_prd_review = WritePRDReview("write_prd_review")
-
- prd_review = await write_prd_review.run(prd)
-
- # We cannot exactly predict the generated PRD review, but we can check if it is a string and if it is not empty
- assert isinstance(prd_review, str)
- assert len(prd_review) > 0
diff --git a/spaces/diacanFperku/AutoGPT/Holzwerken 37 38 Pdf Free [Extra Quality].md b/spaces/diacanFperku/AutoGPT/Holzwerken 37 38 Pdf Free [Extra Quality].md
deleted file mode 100644
index 43aaa35de14141331a5fa391f6b5368a20e44ec3..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Holzwerken 37 38 Pdf Free [Extra Quality].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
How to Fix Generator Samsung Clp 365 V11 Zip Error
-
If you own a Samsung CLP-365 printer, you may have encountered the error message "Fix Generator Samsung Clp 365 V11 Zip" when trying to print. This error indicates that there is a problem with the firmware of your printer, which can affect its performance and functionality. Fortunately, there is a simple way to fix this error and restore your printer to its normal state.
-
What is Fix Generator Samsung Clp 365 V11 Zip?
-
Fix Generator Samsung Clp 365 V11 Zip is a tool that can help you update the firmware of your Samsung CLP-365 printer. Firmware is a software program that controls the hardware of your printer, such as the print head, the toner cartridge, and the paper feed. Firmware updates can improve the performance, compatibility, and security of your printer.
However, sometimes firmware updates can cause errors or glitches in your printer, such as the Fix Generator Samsung Clp 365 V11 Zip error. This error can prevent your printer from printing properly or at all. It can also cause other issues such as paper jams, toner leaks, or poor print quality.
-
How to Fix Generator Samsung Clp 365 V11 Zip Error?
-
The easiest way to fix the Fix Generator Samsung Clp 365 V11 Zip error is to download and run the Fix Generator tool from the official Samsung website. This tool will automatically detect your printer model and firmware version, and then download and install the latest firmware update for your printer. This will fix any errors or bugs that may have occurred during the previous firmware update.
-
To use the Fix Generator tool, follow these steps:
Under "Firmware", find the file named "Fix_Generator_Samsung_CLP_365_V11.zip" and click on "Download".
-
Save the file to your computer and unzip it.
-
Connect your printer to your computer using a USB cable.
-
Run the file named "Fix_Generator_Samsung_CLP_365_V11.exe" as an administrator.
-
Follow the instructions on the screen to complete the firmware update process.
-
Restart your printer and computer.
-
-
After completing these steps, your printer should be able to print normally without any errors. You can also check the firmware version of your printer by printing a configuration report from the printer menu.
-
-
Conclusion
-
The Fix Generator Samsung Clp 365 V11 Zip error is a common issue that can affect Samsung CLP-365 printers. It is caused by a faulty firmware update that can interfere with the printer's functionality. To fix this error, you can use the Fix Generator tool from the Samsung website to download and install the latest firmware update for your printer. This will resolve any errors or glitches that may have occurred during the previous firmware update and improve your printer's performance and compatibility.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Become a Successful Farmer with Farming Simulator 20 (No Apkaward Necessary).md b/spaces/fatiXbelha/sd/Become a Successful Farmer with Farming Simulator 20 (No Apkaward Necessary).md
deleted file mode 100644
index 050c9ec184cce70dbaf35067cef125b5d9e1a353..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Become a Successful Farmer with Farming Simulator 20 (No Apkaward Necessary).md
+++ /dev/null
@@ -1,144 +0,0 @@
-
-
Farming Simulator 2020: A Realistic and Engaging Farming Simulation Game
-
If you have ever dreamed of becoming a farmer or just want to experience what it is like to run your own farm, then you might want to check out Farming Simulator 2020, a simulation game that lets you take control of various vehicles and machines, plant and harvest different crops, raise animals, and sell your products in a dynamic market. In this article, we will give you an overview of what Farming Simulator 2020 is, how to download and play it on your PC or mobile device, what are the benefits and challenges of playing it, how it compares with previous versions and other farming games, and some FAQs that you might have.
-
What is Farming Simulator 2020?
-
Farming Simulator 2020 is the latest installment in the popular Farming Simulator series developed by GIANTS Software. It was released on December 3, 2019 for Nintendo Switch, iOS, Android, Kindle, Windows, Mac OS, PlayStation 4, Xbox One, and Stadia. It features over 100 realistic vehicles and tools from some of the biggest agriculture machine makers in the world, such as John Deere, Case IH, New Holland, Fendt, Massey Ferguson, Valtra, Krone, Deutz-Fahr, Claas, and more. You can use these machines to cultivate various crops, including wheat, barley, oat, canola, sunflowers, soybean, corn, potatoes, sugar beet, and cotton, as well as to feed and care for livestock such as cows, sheep, and pigs. You can also take care of your horses by riding them around your farm or in the nearby town. You can sell your products in the in-game market or use them to produce other goods such as milk and wool. You can also customize your farm with various buildings, decorations, and landscaping options. You can play the game solo or with other players online in multiplayer mode. You can also download and install various mods from the official website or the in-game mod hub to enhance your gameplay with new maps, vehicles, tools, crops, animals, and more. Farming Simulator 2020 is a realistic and engaging farming simulation game that will keep you entertained for hours.
How to Download and Play Farming Simulator 2020 on PC and Mobile Devices
-
If you want to play Farming Simulator 2020 on your PC or mobile device, you will need to follow these steps:
-
-
Go to the official website of Farming Simulator 2020 at https://www.farming-simulator.com/ and choose your platform (PC, Mac, Switch, iOS, Android, Kindle, PS4, Xbox One, or Stadia).
-
Click on the "Buy Now" button and follow the instructions to purchase and download the game. You can also buy the game from other online stores such as Steam, Epic Games Store, Nintendo eShop, App Store, Google Play Store, Amazon Appstore, PlayStation Store, Microsoft Store, or Stadia Store.
-
Once the game is downloaded and installed on your device, launch it and create your profile. You can choose your name, avatar, difficulty level, game mode (career or free play), map (Felsbrunn or Ravenport), and starting equipment.
-
Start playing the game by following the tutorial or exploring the map on your own. You can access the menu by pressing the ESC key on PC or Mac, the + button on Switch, the pause button on PS4 or Xbox One, or tapping on the screen on mobile devices. From there, you can check your map, inventory, finances, statistics, missions, vehicles, tools, crops, animals, products, settings, and mods.
-
Enjoy the game and have fun!
-
-
What are the Benefits of Playing Farming Simulator 2020?
-
Playing Farming Simulator 2020 can have many benefits for you. Here are some of them:
-
-
You can learn about farming and agriculture in a fun and interactive way. You can discover how different crops are grown and harvested, how different animals are raised and cared for, how different machines and tools work and operate, and how different products are made and sold.
-
You can relax and unwind from the stress and pressure of everyday life. You can enjoy the beautiful scenery and sounds of nature, the peaceful and satisfying activities of farming, the rewarding and fulfilling results of your work, and the freedom and creativity of customizing your farm.
-
You can have fun and challenge yourself with various tasks and missions. You can try to complete different objectives and contracts from other farmers or customers, earn money and reputation by selling your products in the market, expand and improve your farm by buying new vehicles, tools, buildings, and land, and compete with other players online in multiplayer mode.
-
-
What are the Challenges and Tips of Playing Farming Simulator 2020?
-
Playing Farming Simulator 2020 can also have some challenges and difficulties. Here are some of them:
-
farming simulator 20 android download free
-farming simulator 2020 mod apk unlimited money
-farming simulator 20 apk obb offline
-farming simulator 2020 apk data download
-farming simulator 20 full version free download
-farming simulator 2020 android gameplay
-farming simulator 20 best crops to grow
-farming simulator 2020 cheats and tips
-farming simulator 20 realistic graphics mod
-farming simulator 20 multiplayer mode
-farming simulator 2020 new features and updates
-farming simulator 20 review and rating
-farming simulator 2020 system requirements and compatibility
-farming simulator 20 how to install and play
-farming simulator 20 trailer and screenshots
-farming simulator 2020 best vehicles and equipment
-farming simulator 20 how to breed animals
-farming simulator 2020 how to make money fast
-farming simulator 20 how to unlock new maps
-farming simulator 20 how to use mods and addons
-farming simulator 2020 comparison with previous versions
-farming simulator 20 pros and cons
-farming simulator 2020 guide and walkthrough
-farming simulator 20 tips and tricks for beginners
-farming simulator 20 how to get free coins and diamonds
-farming simulator 2020 best farms and locations
-farming simulator 20 how to customize your character
-farming simulator 2020 how to plant and harvest crops
-farming simulator 20 how to sell your products and earn profit
-farming simulator 20 how to manage your farm efficiently
-farming simulator 2020 best strategies and tactics
-farming simulator 20 how to deal with weather and seasons
-farming simulator 20 how to fix bugs and errors
-farming simulator 2020 alternatives and similar games
-farming simulator 20 how to download and update the game
-farming simulator 2020 how to backup and restore your data
-farming simulator 20 how to connect with other players online
-farming simulator 2020 how to join and create a clan or team
-farming simulator 20 how to complete missions and challenges
-farming simulator 2020 how to unlock achievements and rewards
-farming simulator 2020 secrets and hidden features
-farming simulator 20 how to access the shop and buy items
-farming simulator 2020 how to change the settings and options
-farming simulator 20 how to contact the support team and get help
-farming simulator 2020 feedback and suggestions for improvement
-farming simulator 20 fun facts and trivia
-farming simulator 2020 fan art and memes
-farming simulator 20 news and announcements
-farming simulator 2020 community and forums
-
-
You have to manage your crops, livestock, and finances carefully. You have to plan ahead what crops to plant and when to harvest them, what animals to buy and how to feed them, what products to produce and how to store them, and what expenses to pay and how to save money.
-
You have to deal with various weather conditions and seasons. You have to adapt to different temperatures, rainfall, snowfall, wind, and daylight hours that affect your crops' growth and quality, your animals' health and productivity, your machines' performance and maintenance, and your market's demand and prices.
-
You have to master various vehicles and tools. You have to learn how to drive and operate different types of tractors, combines, harvesters, plows, cultivators, seeders, sprayers, mowers, balers, loaders, trailers, trucks, and more. You also have to know how to attach, detach, refill, repair, clean, and customize them.
-
-
Here are some tips that might help you overcome these challenges and improve your gameplay:
-
-
Read the game manual and watch the tutorial videos to learn the basics of the game and get familiar with the controls and interface.
-
Use the help menu and the information panel to get more details and tips about the vehicles, tools, crops, animals, products, and settings.
-
Use the map and the GPS to navigate and locate your farm, fields, animals, vehicles, tools, buildings, shops, and other points of interest.
-
Use the radio and the phone to listen to music, news, weather reports, and messages from other farmers or customers.
-
Use the cruise control and the hired workers to automate some of the driving and operating tasks.
-
Use the garage and the workshop to repair and customize your vehicles and tools.
-
Use the silos and the sheds to store your crops and products.
-
Use the animal pens and the pastures to feed and water your animals.
-
Use the market and the contracts to sell your products and earn money.
-
Use the bank and the statistics to manage your finances and track your progress.
-
Use the settings and the mods to adjust the game difficulty, graphics, sound, controls, language, and other options.
-
-
How does Farming Simulator 2020 Compare with Previous Versions and Other Farming Games?
-
Farming Simulator 2020 is not the first nor the only farming simulation game in the market. It has many predecessors and competitors that offer similar or different features and experiences. Here is a brief comparison of Farming Simulator 2020 with some of them:
-
| Game | Similarities | Differences |
|---|---|---|
| Farming Simulator 19 | The previous installment, released in 2018. It has many of the same vehicles, tools, crops, animals, maps, modes, mods, and multiplayer features as Farming Simulator 2020. | It has fewer vehicles, tools, crops, animals, maps, modes, mods, and multiplayer features than Farming Simulator 2020, along with lower graphics quality, less realistic physics, and more bugs and glitches. |
| Farming Simulator 22 | The next installment in the series, released in November 2021. It shares many of the same vehicles, tools, crops, animals, maps, modes, mods, and multiplayer features with Farming Simulator 2020. | It has more vehicles, tools, crops, animals, maps, modes, mods, and multiplayer features than Farming Simulator 2020, along with higher graphics quality, more realistic physics, and fewer bugs and glitches. It also introduces new features such as seasons, weather effects, production chains, and precision farming. |
| FarmVille | A social network game launched in 2009. It lets you create and manage your own farm with various crops, animals, buildings, and decorations, and interact and cooperate with other players online. | It has fewer vehicles, tools, crops, animals, maps, modes, mods, and multiplayer features than Farming Simulator 2020, along with lower graphics quality, less realistic physics, and more microtransactions. It focuses more on casual and social gameplay. |
| Stardew Valley | A role-playing game released in 2016. You inherit and restore your grandfather's farm with various crops, animals, buildings, and decorations, explore and interact with a nearby town full of characters, events, and activities, and can play with up to three other players online in co-op mode. | It has fewer vehicles, tools, crops, animals, maps, modes, mods, and multiplayer features than Farming Simulator 2020, along with lower graphics quality, less realistic physics, and more fantasy elements. It focuses more on story-driven and character-driven gameplay. |
| Harvest Moon | A series of games that started in 1996. You live and work on a farm with various crops, animals, buildings, and decorations, can romance and marry one of the eligible bachelors or bachelorettes, and can have children who inherit your farm. | It has fewer vehicles, tools, crops, animals, maps, modes, mods, and multiplayer features than Farming Simulator 2020, along with lower graphics quality, less realistic physics, and an anime-style presentation. It focuses more on romance- and family-oriented gameplay. |
-
Conclusion
-
Farming Simulator 2020 is a realistic and engaging farming simulation game that lets you take control of various vehicles and machines, plant and harvest different crops, raise animals, and sell your products in a dynamic market. You can play it on a range of platforms, such as PC, Mac, Switch, iOS, Android, Kindle, PS4, Xbox One, Stadia, and PS5, either solo or with up to 16 players online in multiplayer mode, and you can download and install mods from the official website or the in-game mod hub to add new maps, vehicles, tools, crops, animals, and more. Playing Farming Simulator 2020 has real benefits: you can learn about farming and agriculture, relax and unwind from stress, and have fun challenging yourself with various tasks and missions. It also has its difficulties, such as managing your crops, livestock, and finances, dealing with changing weather and seasons, and mastering the many vehicles and tools. To handle these, read the game manual and watch the tutorial videos to learn the basics and get familiar with the controls and interface; use the help menu and the information panel for details and tips about vehicles, tools, crops, animals, products, and settings; use the map and GPS to navigate and locate your farm, fields, animals, vehicles, tools, buildings, shops, and other points of interest; use the radio and the phone for music, news, weather reports, and messages from other farmers and customers; use cruise control and hired workers to automate some of the driving and operating tasks; use the garage and the workshop to repair and customize your vehicles and tools; use the silos and sheds to store your crops and products; use the animal pens and pastures to feed and water your animals; use the market and contracts to sell your products and earn money; use the bank and the statistics screen to manage your finances and track your progress; and use the settings and mods to adjust the game difficulty, graphics, sound, controls, language, and other options.
-
If you are looking for a realistic and engaging farming simulation game that will keep you entertained for hours, Farming Simulator 2020 is the game for you. It is one of the best farming games on the market, offering plenty of features and options; one of the most realistic, simulating many aspects of farming and agriculture; one of the most customizable, letting you build your farm according to your preferences; and one of the most social, letting you play with other players online in multiplayer mode. Farming Simulator 2020 is a game that will make you feel like a real farmer.
-
FAQs
-
Here are some frequently asked questions and answers about Farming Simulator 2020:
-
-
How much does Farming Simulator 2020 cost?
-
Farming Simulator 2020 costs $49.99 for PC, Mac, Switch, PS4, Xbox One, Stadia, and PS5, and $5.99 for iOS, Android, and Kindle. You can also buy additional DLCs (downloadable content) for extra vehicles, tools, crops, animals, maps, modes, mods, and multiplayer features.
-
Is Farming Simulator 2020 online or offline?
-
Farming Simulator 2020 can be played both online and offline. You can play it online with up to 16 players in multiplayer mode, where you can share your farm, vehicles, tools, crops, animals, products, and missions with other players, and you can download and install various mods from the official website or the in-game mod hub to enhance your gameplay with new maps, vehicles, tools, crops, animals, and more. You can also play it offline in single-player mode and enjoy your farm without an internet connection or other players.
-
Is Farming Simulator 2020 realistic or arcade?
-
Farming Simulator 2020 is a realistic farming simulation game that simulates a lot of aspects of farming and agriculture. It has realistic graphics, physics, sounds, and gameplay that make you feel like you are really on a farm. It also has realistic vehicles, tools, crops, animals, products, and markets that are based on real-life models and data. However, Farming Simulator 2020 also has some arcade elements that make the game more fun and accessible. It has simplified controls, menus, and interfaces that make the game easy to play. It also has adjustable settings, modes, and mods that make the game customizable to your preferences. It also has some fantasy elements that make the game more diverse and creative. It has some fictional vehicles, tools, crops, animals, products, and maps that are not found in real life.
-
Is Farming Simulator 2020 educational or entertaining?
-
Farming Simulator 2020 is both educational and entertaining. It is educational because it teaches you about farming and agriculture in a fun and interactive way. You can learn how different crops are grown and harvested, how different animals are raised and cared for, how different machines and tools work and operate, and how different products are made and sold. You can also learn about the history, culture, and economy of farming and agriculture in different regions and countries. It is entertaining because it lets you enjoy the beautiful scenery and sounds of nature, the peaceful and satisfying activities of farming, the rewarding and fulfilling results of your work, and the freedom and creativity of customizing your farm. You can also have fun and challenge yourself with various tasks and missions, earn money and reputation by selling your products in the market, expand and improve your farm by buying new vehicles, tools, buildings, and land, and compete with other players online in multiplayer mode.
-
Is Farming Simulator 2020 suitable for children or adults?
-
Farming Simulator 2020 is suitable for both children and adults. It is suitable for children because it is a family-friendly game that does not contain any violence, blood, gore, sex, drugs, alcohol, tobacco, gambling, profanity, or other inappropriate content. It is also a kid-friendly game that does not require any reading, writing, math, or other academic skills, and a fun game that can spark their interest, curiosity, and imagination about farming and agriculture. It is suitable for adults because it is a mature game that does not insult their intelligence, taste, or preferences; a challenging game that can test their skills, knowledge, and strategy in farming and agriculture; and a relaxing game that can help them escape from the stress and pressure of everyday life.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download 2K19 Now and Get Exclusive Bonuses and Rewards.md b/spaces/fatiXbelha/sd/Download 2K19 Now and Get Exclusive Bonuses and Rewards.md
deleted file mode 100644
index 5a6f7ed99efa57409f0f3621b15a10720f5ef265..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download 2K19 Now and Get Exclusive Bonuses and Rewards.md
+++ /dev/null
@@ -1,182 +0,0 @@
-
-
How to Download and Play NBA 2K19 on PC
-
If you are a fan of basketball and video games, you might have heard of NBA 2K19, the latest installment of the popular NBA 2K series. This game is a simulation of the National Basketball Association (NBA), featuring realistic gameplay, graphics, and modes. You can play as your favorite teams and players, create your own custom characters, compete online with other players, and more.
-
But did you know that you can also play NBA 2K19 on your PC? Yes, you read that right. You don't need a console or a TV to enjoy this game. You can download and install NBA 2K19 on your computer and play it with your keyboard and mouse, or with a controller if you prefer. In this article, we will show you how to do that, as well as the system requirements, the download options, and the features and reviews of NBA 2K19.
NBA 2K19 is a basketball simulation video game developed by Visual Concepts and published by 2K Sports. It was released in September 2018 for various platforms, including Windows, PlayStation 4, Xbox One, Nintendo Switch, iOS, and Android. It is the 20th installment of the NBA 2K franchise, which celebrates its 20th anniversary with this game.
-
NBA 2K19 features many improvements and additions over its predecessors, such as enhanced graphics and animations, new gameplay mechanics and modes, updated rosters and ratings, and more. It also features a cover athlete for each edition: Giannis Antetokounmpo for the standard edition, LeBron James for the 20th anniversary edition, and Ben Simmons for the Australian edition.
-
Why play NBA 2K19 on PC?
-
There are many reasons why you might want to play NBA 2K19 on your PC instead of a console or a mobile device. Here are some of them:
-
-
You can enjoy better graphics and performance on your PC, especially if you have a high-end system that meets or exceeds the recommended requirements.
-
You can customize your settings and controls to suit your preferences and needs. You can adjust the resolution, the frame rate, the graphics quality, the sound volume, the camera angle, and more. You can also choose between playing with a keyboard and mouse or a controller.
-
You can access more features and content on your PC, such as mods, patches, updates, DLCs, community creations, online multiplayer, leaderboards, achievements, etc.
-
You can save money on your PC, as you don't need to buy a console or a TV to play NBA 2K19. You can also find cheaper deals and discounts for the game online.
-
-
System Requirements
-
Minimum Requirements
-
Before you download and install NBA 2K19 on your PC, you need to make sure that your system meets the minimum requirements for the game. These are:
-
-
OS: Windows 7 64-bit, Windows 8.1 64-bit or Windows 10 64-bit
If you want to enjoy NBA 2K19 on your PC with the best graphics and performance, you should have a system that meets or exceeds the recommended requirements for the game. These are:
-
-
OS: Windows 7 64-bit, Windows 8.1 64-bit or Windows 10 64-bit
One of the easiest and most popular ways to download and play NBA 2K19 on your PC is through Steam, the leading digital distribution platform for PC games. Steam offers many benefits, such as automatic updates, cloud saving, online multiplayer, social features, and more. To download NBA 2K19 on Steam, you need to follow these steps:
Add NBA 2K19 to your cart and proceed to checkout. You can choose between the standard edition ($59.99) or the 20th anniversary edition ($99.99). You can also buy additional DLCs and bundles.
-
Select your payment method and complete your purchase. You can pay with credit card, PayPal, Steam Wallet, or other options.
-
Wait for NBA 2K19 to download and install on your PC. The download size is about 80 GB, so it might take some time depending on your internet speed.
-
Once the installation is done, you can launch NBA 2K19 from your Steam library and start playing.
-
-
2K Store
-
Another option to download and play NBA 2K19 on your PC is through the official 2K Store, the online store of the game's publisher. The 2K Store offers some exclusive deals and discounts for NBA 2K19, as well as other 2K games and merchandise. To download NBA 2K19 from the 2K Store, you need to follow these steps:
Select your edition and platform. You can choose between the standard edition ($59.99) or the 20th anniversary edition ($99.99). You can also buy additional DLCs and bundles.
-
Add NBA 2K19 to your cart and proceed to checkout. You can pay with credit card, PayPal, or other options.
-
After your purchase, you will receive an email with a code to redeem NBA 2K19 on Steam.
-
Follow the instructions in the email to activate your code on Steam.
-
Wait for NBA 2K19 to download and install on your PC through Steam.
-
BlueStacks Emulator
-
A third option to download and play NBA 2K19 on your PC is through BlueStacks, a popular Android emulator that allows you to run mobile apps and games on your PC. BlueStacks offers some advantages, such as faster loading times, smoother gameplay, and keyboard and mouse support. To download NBA 2K19 on BlueStacks, you need to follow these steps:
-
Tap on the Install button and wait for NBA 2K19 to download and install on your PC.
-
Once the installation is done, you can launch NBA 2K19 from the BlueStacks home screen and start playing.
-
-
Installation Steps
-
Steam
-
If you have downloaded NBA 2K19 from Steam, you don't need to do anything else to install it on your PC. Steam will automatically install the game for you after the download is complete. You can then launch NBA 2K19 from your Steam library and start playing.
-
2K Store
-
If you have downloaded NBA 2K19 from the 2K Store, you need to activate your code on Steam and then install the game through Steam. To do this, you need to follow these steps:
-
-
Launch the Steam client and log in with your account.
-
Click on the Games menu and select Activate a Product on Steam.
-
Enter your code that you received from the 2K Store and follow the instructions.
-
Wait for NBA 2K19 to download and install on your PC through Steam.
-
Once the installation is done, you can launch NBA 2K19 from your Steam library and start playing.
-
-
BlueStacks Emulator
-
If you have downloaded NBA 2K19 from BlueStacks, you don't need to do anything else to install it on your PC. BlueStacks will automatically install the game for you after the download is complete. You can then launch NBA 2K19 from the BlueStacks home screen and start playing.
-
Features and Reviews
-
Gameplay and Graphics
-
NBA 2K19 is praised for its realistic and immersive gameplay and graphics, which make you feel like you are playing in a real NBA game. The game features improved physics, animations, lighting, shadows, textures, and details, as well as new gameplay mechanics such as Takeover, which allows you to unleash your player's full potential when they are hot. The game also features a dynamic commentary team, a realistic crowd, and a soundtrack curated by Travis Scott.
-
Game Modes and Content
-
NBA 2K19 offers a variety of game modes and content for different types of players. Some of the game modes are:
-
-
MyCareer: This mode allows you to create your own custom player and follow their journey from an unknown rookie to an NBA legend. You can customize your player's appearance, skills, attributes, style, and more. You can also interact with other characters, make decisions that affect your story, and explore an open world called The Neighborhood.
-
MyTeam: This mode allows you to build your own dream team of current and former NBA players. You can collect cards, trade players, upgrade your roster, compete online or offline, and complete challenges and events.
-
MyLeague: This mode allows you to control an entire NBA franchise. You can customize your team's name, logo, arena, uniforms, roster, staff, etc. You can also manage your team's finances, contracts, trades, drafts, injuries, etc. You can play up to 80 seasons with realistic simulation and progression.
-
MyGM: This mode allows you to become the general manager of an NBA team. You can deal with the owner's demands, the media's expectations, the players' morale, etc. You can also create your own expansion team or relocate an existing team.
-Play Now: This mode allows you to play a quick game with any current NBA team or a classic team from the past. You can also play online with other players or against the AI.
-
Blacktop: This mode allows you to play a street-style basketball game with up to 10 players. You can choose your players, court, rules, etc. You can also play online with other players or against the AI.
-
-
NBA 2K19 also offers a lot of content for you to enjoy, such as:
-
-
The Prelude: This is a free demo that allows you to play the first chapter of MyCareer mode and transfer your progress to the full game.
-
The Way Back: This is a cinematic story that follows your player's journey from China to the G League and finally to the NBA.
-
2KTV: This is a weekly show that features interviews, tips, trivia, contests, and more.
-
Locker Codes: These are codes that you can redeem for free rewards, such as VC, MT, cards, packs, etc.
-
2KU: This is a tutorial mode that teaches you the basics and advanced techniques of NBA 2K19.
-
-
Pros and Cons
-
NBA 2K19 is not a perfect game, and it has its pros and cons. Here are some of them:
-
-
| Pros | Cons |
|---|---|
| Realistic and immersive gameplay and graphics | High system requirements and large download size |
| Various game modes and content for different types of players | Some game modes and features require an online connection and microtransactions |
| Improved physics, animations, mechanics, and modes over previous games | Some bugs, glitches, errors, and crashes may occur |
| Dynamic commentary team, realistic crowd, and curated soundtrack | Some repetitive or outdated commentary, crowd, and music |
| Customizable settings and controls for PC players | Some settings and controls may not work properly or optimally |
-
-
Conclusion
-
Summary of the article
-
In this article, we have shown you how to download and play NBA 2K19 on your PC. We have also discussed the system requirements, the download options, and the features and reviews of NBA 2K19. We hope that this article has been helpful and informative for you.
-
Call to action
-
If you are interested in playing NBA 2K19 on your PC, you can buy it now from Steam or the 2K Store. You can also try it for free by downloading The Prelude from Steam. NBA 2K19 is a great game for basketball and video game fans alike. It offers realistic and immersive gameplay and graphics, various game modes and content, improved physics, animations, mechanics, and modes, dynamic commentary team, realistic crowd, curated soundtrack, customizable settings and controls, and more. Don't miss this chance to experience the best basketball simulation game ever. Download NBA 2K19 on your PC today!
-
Frequently Asked Questions
-
Here are some frequently asked questions about NBA 2K19 on PC:
-
-
Q: How much does NBA 2K19 cost on PC?
-
A: NBA 2K19 costs $59.99 for the standard edition and $99.99 for the 20th anniversary edition on both Steam and the 2K Store. You can also buy additional DLCs and bundles for extra prices.
-
Q: Can I play NBA 2K19 on PC with a controller?
-
A: Yes, you can play NBA 2K19 on PC with a controller. You can use any compatible controller that connects to your PC via USB or Bluetooth. You can also customize your controller settings in the game options.
-
Q: Can I play NBA 2K19 on PC with my friends?
-
A: Yes, you can play NBA 2K19 on PC with your friends. You can play online multiplayer modes with other players around the world or locally with up to four players on the same PC. You can also join or create online communities and leagues with your friends.
-
Q: How can I get free VC and MT in NBA 2K19 on PC?
-A: VC (Virtual Currency) and MT (MyTeam points) are the in-game currencies used to buy items, packs, and upgrades for your player, etc. in NBA 2K19. You can get free VC and MT by playing the game, completing challenges and events, watching 2KTV, redeeming locker codes, etc. You can also buy VC and MT with real money, but we don't recommend that as it can be expensive and risky.
-
Q: How can I update NBA 2K19 on PC?
-
A: NBA 2K19 on PC will automatically update itself if you have an online connection and if there are any available updates from the developers. You can also manually check for updates by launching the game or by visiting the game's page on Steam or the 2K Store.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Qiroati Jilid 1 PDF Panduan Membaca Al-Quran dengan Metode Qiraati.md b/spaces/fatiXbelha/sd/Download Qiroati Jilid 1 PDF Panduan Membaca Al-Quran dengan Metode Qiraati.md
deleted file mode 100644
index c72e38136b4296c400209a643096f13a1c9ca0e2..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Qiroati Jilid 1 PDF Panduan Membaca Al-Quran dengan Metode Qiraati.md
+++ /dev/null
@@ -1,177 +0,0 @@
-
-
Download Qiroati Jilid 1 PDF: A Guide to Learn Quran Recitation
-
If you want to learn how to recite the Quran with proper pronunciation, rules, and fluency, you might be interested in downloading Qiroati Jilid 1 PDF. This is a book that teaches you the basics of Quran recitation using the Qiroati method, which is a popular and effective way to learn the Quran. In this article, we will explain what Qiroati Jilid 1 PDF is, how to download it, and how to use it. We hope that this article will help you improve your Quran recitation skills and enjoy the beauty of the Quran.
Qiroati Jilid 1 PDF is a book that introduces you to the Qiroati method of Quran recitation. The Qiroati method is a method that was developed by K.H. Dachlan Salim Zarkasyi, an Indonesian scholar and teacher, who wanted to make Quran learning easier and more accessible for everyone. The Qiroati method is based on the principles of Tajweed, which is the science of Quranic elocution. Tajweed teaches you how to pronounce each letter, word, and verse of the Quran correctly and beautifully, according to the rules of Arabic grammar and phonetics.
-
The benefits of Qiroati Jilid 1 PDF
-
There are many benefits of using Qiroati Jilid 1 PDF as your guide to learn Quran recitation. Some of them are:
-
-
It is easy to understand and follow. The book uses simple language, clear explanations, and helpful illustrations to teach you the basics of Tajweed and Qiroati.
-
It is comprehensive and thorough. The book covers all the essential topics of Tajweed and Qiroati, such as the articulation points of letters, the characteristics of letters, the rules of vowels, the rules of nunation, the rules of elongation, the rules of stopping and starting, and more.
-
It is practical and effective. The book provides you with exercises and examples to practice your recitation skills and test your knowledge. It also gives you tips and tricks on how to improve your recitation and avoid common mistakes.
-
-
The contents of Qiroati Jilid 1 PDF
-
The book consists of five chapters, each with its own subtopics and objectives. Here is a brief overview of what each chapter contains:
-
-
| Chapter | Subtopics | Objectives |
|---|---|---|
| Chapter 1: Introduction | The definition and importance of Tajweed; the definition and history of Qiroati; the structure and features of Qiroati Jilid 1 PDF | To understand the concept and purpose of Tajweed and Qiroati; to appreciate the value and benefits of learning Quran recitation; to familiarize yourself with the book and its contents |
| Chapter 2: The Articulation Points of Letters | The definition and types of articulation points; the articulation points of each letter; the signs and symbols used to indicate articulation points | To identify and locate the articulation points of each letter; to pronounce each letter correctly and accurately; to recognize and use the signs and symbols of articulation points |
| Chapter 3: The Characteristics of Letters | The definition and types of characteristics; the characteristics of each letter; the signs and symbols used to indicate characteristics | To understand and differentiate the characteristics of each letter; to apply the rules of characteristics in recitation; to recognize and use the signs and symbols of characteristics |
| Chapter 4: The Rules of Vowels | The definition and types of vowels; the rules of short vowels; the rules of long vowels; the rules of nunation | To know and distinguish the vowels in Arabic; to apply the rules of vowels in recitation; to avoid errors of vowels in recitation |
| Chapter 5: The Rules of Elongation | The definition and types of elongation; the rules of natural elongation; the rules of compulsory elongation; the rules of optional elongation | To know and distinguish the types of elongation in Arabic; to apply the rules of elongation in recitation; to vary the length of elongation according to the context |
-
How to download Qiroati Jilid 1 PDF?
-
If you are interested in downloading Qiroati Jilid 1 PDF, you might be wondering where to find it and how to get it. There are several sources that offer Qiroati Jilid 1 PDF for free or for a small fee. However, you should be careful and choose a reliable and trustworthy source that provides you with a high-quality and authentic copy of the book. Here are some tips on how to find and download Qiroati Jilid 1 PDF.
-
The sources of Qiroati Jilid 1 PDF
-
There are two main types of sources that offer Qiroati Jilid 1 PDF: online sources and offline sources. Online sources are websites, blogs, forums, or social media platforms that provide links or attachments to download Qiroati Jilid 1 PDF. Offline sources are physical stores, libraries, or individuals that sell or lend Qiroati Jilid 1 PDF in hard copy or digital format.
-
Some examples of online sources are:
-
-
Qiroati.com: This is the official website of Qiroati, where you can find information about the Qiroati method, the Qiroati books, the Qiroati teachers, and the Qiroati events. You can also download Qiroati Jilid 1 PDF for free from this website.
-
Quranpedia.net: This is a website that provides various resources for Quran learning, such as Quran translations, Quran interpretations, Quran recitations, Quran memorization, and Quran quizzes. You can also download Qiroati Jilid 1 PDF for free from this website.
-
Scribd.com: This is a website that allows you to read, download, and share books, documents, audiobooks, podcasts, magazines, and more. You can also download Qiroati Jilid 1 PDF for free from this website, but you need to sign up for a free trial or a paid subscription.
-
-
Some examples of offline sources are:
-
-
Qiroati Center: This is a place where you can learn Quran recitation using the Qiroati method. You can also buy or borrow Qiroati Jilid 1 PDF from the Qiroati Center. You can find the nearest Qiroati Center in your area by visiting Qiroati.com/center.
-
Islamic Bookstore: This is a place where you can buy or rent various Islamic books, including Qiroati Jilid 1 PDF. You can find an Islamic bookstore near you by searching online or asking your friends or family.
-
Qiroati Teacher: This is a person who teaches Quran recitation using the Qiroati method. You can also ask your Qiroati teacher to provide you with a copy of Qiroati Jilid 1 PDF. You can find a qualified Qiroati teacher by visiting Qiroati.com/teacher.
-
-
The steps to download Qiroati Jilid 1 PDF
-
The steps to download Qiroati Jilid 1 PDF from an online source may vary depending on the source, but here are some general steps that you can follow:
Search for Qiroati Jilid 1 PDF using the search bar or the menu.
-
Select the file that you want to download and click on the download button or link.
-
Choose the format and the destination of the file and click on save or confirm.
-
Wait for the file to be downloaded and check if it is complete and readable.
-
-
The steps to download Qiroati Jilid 1 PDF from an offline source may also vary depending on the source, but here are some general steps that you can follow:
-
-
-
Visit the place that offers Qiroati Jilid 1 PDF, such as a Qiroati Center, an Islamic Bookstore, or a Qiroati Teacher.
-
Ask for Qiroati Jilid 1 PDF and check if it is available and in good condition.
-
Pay for the book or borrow it with permission and agreement.
-
Copy the book to your device using a scanner, a camera, or a USB cable.
-
Check if the file is complete and readable.
-
-
How to use Qiroati Jilid 1 PDF?
-
After you have downloaded Qiroati Jilid 1 PDF, you might be wondering how to use it effectively and efficiently. There are some prerequisites and tips that you should know before you start using Qiroati Jilid 1 PDF. Here are some suggestions on how to use Qiroati Jilid 1 PDF.
-
The prerequisites of using Qiroati Jilid 1 PDF
-
Before you use Qiroati Jilid 1 PDF, you should make sure that you have the following prerequisites:
-
-
A device that can open and read PDF files, such as a computer, a tablet, or a smartphone.
-
A good internet connection if you want to access online resources or listen to online recitations.
-
A basic knowledge of Arabic alphabet and pronunciation. If you are not familiar with Arabic, you can learn it from other sources or ask for help from someone who knows Arabic.
-
A sincere intention and motivation to learn Quran recitation. You should have a clear goal and purpose for learning Quran recitation and be willing to dedicate your time and effort to achieve it.
-
-
The tips and tricks of using Qiroati Jilid 1 PDF
-
When you use Qiroati Jilid 1 PDF, you should follow these tips and tricks to make your learning process easier and more enjoyable:
-
-
Read the introduction chapter carefully and understand the concept and purpose of Tajweed and Qiroati. This will help you appreciate the value and benefits of learning Quran recitation and familiarize yourself with the book and its contents.
-
Follow the order of the chapters and subtopics as they are arranged in a logical and progressive way. Do not skip or jump from one topic to another without completing the previous one.
-
Read each topic thoroughly and pay attention to the explanations, illustrations, signs, symbols, examples, and exercises. Try to understand the rules and apply them in your recitation. Do not memorize without understanding or understanding without practicing.
-
Listen to the recitations of the Quran by reputable reciters who follow the rules of Tajweed and Qiroati. You can find online recitations on websites like Quran.com, Quranicaudio.com, or Quranexplorer.com. You can also listen to offline recitations on CDs or MP3s. Try to imitate their pronunciation, tone, rhythm, and style.
-
Practice your recitation regularly and consistently. You can practice alone or with a partner or a group. You can also practice with your Qiroati teacher or join a Qiroati class. You can practice by reading aloud, recording yourself, or using an app like Quran Companion. You should also review what you have learned periodically and correct your mistakes.
-
-
Conclusion
-
In conclusion, Qiroati Jilid 1 PDF is a book that teaches you how to recite the Quran with proper pronunciation, rules, and fluency using the Qiroati method, which is a popular and effective way to learn the Quran. You can download Qiroati Jilid 1 PDF from various online or offline sources, and use it as your guide to learn the basics of Tajweed and Qiroati. You should also follow some prerequisites and tips to make your learning process easier and more enjoyable. We hope that this article has helped you understand what Qiroati Jilid 1 PDF is, how to download it, and how to use it. We also hope that you will benefit from this book and improve your Quran recitation skills and enjoy the beauty of the Quran.
Summary of the main points
-
Here are the main points that we have covered in this article:
-
-
Qiroati Jilid 1 PDF is a book that teaches you the basics of Quran recitation using the Qiroati method, which is based on the principles of Tajweed.
-
Qiroati Jilid 1 PDF has many benefits, such as being easy to understand, comprehensive, thorough, practical, and effective.
-
Qiroati Jilid 1 PDF consists of five chapters, each with its own subtopics and objectives, that cover all the essential topics of Tajweed and Qiroati.
-
You can download Qiroati Jilid 1 PDF from various online or offline sources, such as Qiroati.com, Quranpedia.net, Scribd.com, Qiroati Center, Islamic Bookstore, or Qiroati Teacher.
-
You should follow some prerequisites and tips to use Qiroati Jilid 1 PDF effectively and efficiently, such as having a device that can read PDF files, a good internet connection, a basic knowledge of Arabic alphabet and pronunciation, a sincere intention and motivation to learn Quran recitation, reading the introduction chapter carefully, following the order of the chapters and subtopics, reading each topic thoroughly and paying attention to the explanations, illustrations, signs, symbols, examples, and exercises, listening to the recitations of reputable reciters who follow the rules of Tajweed and Qiroati, practicing your recitation regularly and consistently, and reviewing what you have learned periodically and correcting your mistakes.
-
-
Call to action
-
If you are interested in learning Quran recitation using the Qiroati method, we encourage you to download Qiroati Jilid 1 PDF and start your journey today. You can also share this article with your friends and family who might benefit from it. If you have any questions or feedback about this article or Qiroati Jilid 1 PDF, please feel free to leave a comment below or contact us at info@qiroati.com. We would love to hear from you and help you with your Quran learning goals. Thank you for reading this article and may Allah bless you with success in this life and the hereafter.
-
Frequently Asked Questions
-
Here are some frequently asked questions about Qiroati Jilid 1 PDF that you might find useful:
-
What is the difference between Tajweed and Qiroati?
-
Tajweed is the science of Quranic elocution that teaches you how to pronounce each letter, word, and verse of the Quran correctly and beautifully. Qiroati is a method of Quran recitation that is based on the principles of Tajweed. Qiroati simplifies and systematizes the rules of Tajweed in a way that makes Quran learning easier and more accessible for everyone.
-
Who is the author of Qiroati Jilid 1 PDF?
-
The author of Qiroati Jilid 1 PDF is K.H. Dachlan Salim Zarkasyi, an Indonesian scholar and teacher who developed the Qiroati method. He is also the founder of Pondok Pesantren Darussalam Gontor in Ponorogo, East Java, Indonesia. He has written several books on Islamic studies, especially on Quran recitation.
-
How long does it take to finish Qiroati Jilid 1 PDF?
-
The time it takes to finish Qiroati Jilid 1 PDF depends on your level of proficiency in Arabic language and Quran recitation, as well as your pace of learning and practice. However, a general estimate is about five weeks if you study one chapter per week, since the book has five chapters.
-
What are the other books in the Qiroati series?
-
Qiroati Jilid 1 PDF is the first book in the Qiroati series. There are six other books in the series that cover more advanced topics of Quran recitation. They are:
-
Qiroati Jilid 2 PDF: This book teaches you the rules of stopping and starting, the rules of pauses, the rules of intonation, and the rules of reciting the Basmalah.
-
Qiroati Jilid 3 PDF: This book teaches you the rules of merging, the rules of separation, the rules of hamzah, and the rules of madd.
-
Qiroati Jilid 4 PDF: This book teaches you the rules of ghunnah, the rules of idgham, the rules of iqlab, and the rules of ikhfa.
-
Qiroati Jilid 5 PDF: This book teaches you the rules of qalqalah, the rules of shaddah, the rules of tafkhim and tarqiq, and the rules of lafz jalalah.
-
Qiroati Jilid 6 PDF: This book teaches you the rules of waqf and ibtida, the types of waqf signs, and the etiquette of waqf.
-
Qiroati Jilid 7 PDF: This book teaches you the ten styles of Quran recitation, their origins, their differences, and their examples.
-
-
Where can I find more information about Qiroati?
-
If you want to learn more about Qiroati, you can visit the following websites or contact the following organizations:
-
-
Qiroati.com: This is the official website of Qiroati, where you can find information about the Qiroati method, the Qiroati books, the Qiroati teachers, and the Qiroati events. You can also download Qiroati Jilid 1 PDF for free from this website.
-
Qiroatimedia.com: This is a website that provides various media for Quran learning using the Qiroati method, such as videos, audios, articles, and podcasts. You can also find Qiroati recitations by different reciters on this website.
-
Qiroatifoundation.org: This is a website that represents the Qiroati Foundation, a non-profit organization that aims to spread and promote Quran recitation using the Qiroati method. You can also find information about Qiroati programs and activities on this website.
-
-
-
\ No newline at end of file
diff --git a/spaces/fatmacankara/ASCARIS/code/process_input.py b/spaces/fatmacankara/ASCARIS/code/process_input.py
deleted file mode 100644
index c840d409a060155e88189fe454f7fb550e5ff328..0000000000000000000000000000000000000000
--- a/spaces/fatmacankara/ASCARIS/code/process_input.py
+++ /dev/null
@@ -1,40 +0,0 @@
-import pandas as pd
-
-def clean_data(input_set):
-    """Parse mutation input (comma/tab-separated 'uniprotID-wt-pos-mut' entries, a single
-    entry, or the path to a tab-separated .txt file) into a DataFrame."""
-    try:
-        if ',' in input_set:
-            rows = [[j.strip() for j in i.split('-')] for i in input_set.split(',')]
-            data = pd.DataFrame(rows, columns=['uniprotID', 'wt', 'pos', 'mut'])
-        elif '\t' in input_set:
-            rows = [[j.strip() for j in i.split('-')] for i in input_set.split('\t')]
-            data = pd.DataFrame(rows, columns=['uniprotID', 'wt', 'pos', 'mut'])
-        elif '-' in input_set:
-            data = pd.DataFrame([[j.strip() for j in input_set.split('-')]],
-                                columns=['uniprotID', 'wt', 'pos', 'mut'])
-        elif '.txt' in input_set:
-            data = pd.read_csv(input_set, sep='\t', names=['uniprotID', 'wt', 'pos', 'mut'])
-            data = data[['uniprotID', 'wt', 'pos', 'mut']]
-        else:
-            raise ValueError('Unrecognized input format.')
-
-        # Exclude termination codons, synonymous mutations and any non-standard residues such as Sec, 4 or 6.
-        aa_list = ['A', 'R', 'N', 'D', 'C', 'E', 'Q', 'G', 'H', 'I', 'L', 'K', 'M', 'F', 'P', 'S', 'T', 'W', 'Y', 'V']
-        data.wt = data.wt.str.strip()
-        data.mut = data.mut.str.strip()
-        data = data[data.wt.isin(aa_list)]
-        data = data[data.mut.isin(aa_list)]
-
-        # Build a unique identifier for each data point: uniprotID + wt + pos + mut.
-        for i in data.index:
-            data.at[i, 'datapoint'] = data.at[i, 'uniprotID'] + data.at[i, 'wt'] + str(data.at[i, 'pos']) + data.at[i, 'mut']
-
-        data = data.astype(str)
-        return data
-    except Exception:
-        print('Please check the input format.')
-
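-
-# Illustrative usage (a minimal sketch; the entries below are hypothetical examples of the
-# accepted 'uniprotID-wt-pos-mut' formats, not data shipped with the project).
-if __name__ == '__main__':
-    # A comma-separated list of entries.
-    print(clean_data('P13569-F-508-C, P13569-G-551-D'))
-    # A single entry.
-    print(clean_data('P13569-G-551-D'))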
diff --git a/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/test_project/cpp/cppipc/buffer.cpp b/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/test_project/cpp/cppipc/buffer.cpp
deleted file mode 100644
index 0ac0fa7bc3ced0447ba4caa359355dd4252670b3..0000000000000000000000000000000000000000
--- a/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/test_project/cpp/cppipc/buffer.cpp
+++ /dev/null
@@ -1,87 +0,0 @@
-#include "libipc/buffer.h"
-#include "libipc/utility/pimpl.h"
-
-#include <cstring>
-
-namespace ipc {
-
-bool operator==(buffer const & b1, buffer const & b2) {
- return (b1.size() == b2.size()) && (std::memcmp(b1.data(), b2.data(), b1.size()) == 0);
-}
-
-bool operator!=(buffer const & b1, buffer const & b2) {
- return !(b1 == b2);
-}
-
-class buffer::buffer_ : public pimpl<buffer_> {
-public:
- void* p_;
- std::size_t s_;
- void* a_;
- buffer::destructor_t d_;
-
- buffer_(void* p, std::size_t s, buffer::destructor_t d, void* a)
- : p_(p), s_(s), a_(a), d_(d) {
- }
-
- ~buffer_() {
- if (d_ == nullptr) return;
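-        // Run the user-supplied destructor on the 'additional' pointer when one was given,
-        // otherwise on the data pointer itself.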
- d_((a_ == nullptr) ? p_ : a_, s_);
- }
-};
-
-buffer::buffer()
- : buffer(nullptr, 0, nullptr, nullptr) {
-}
-
-buffer::buffer(void* p, std::size_t s, destructor_t d)
- : p_(p_->make(p, s, d, nullptr)) {
-}
-
-buffer::buffer(void* p, std::size_t s, destructor_t d, void* additional)
- : p_(p_->make(p, s, d, additional)) {
-}
-
-buffer::buffer(void* p, std::size_t s)
- : buffer(p, s, nullptr) {
-}
-
-buffer::buffer(char const & c)
-    : buffer(const_cast<char*>(&c), 1) {
-}
-
-buffer::buffer(buffer&& rhs)
- : buffer() {
- swap(rhs);
-}
-
-buffer::~buffer() {
- p_->clear();
-}
-
-void buffer::swap(buffer& rhs) {
- std::swap(p_, rhs.p_);
-}
-
-buffer& buffer::operator=(buffer rhs) {
- swap(rhs);
- return *this;
-}
-
-bool buffer::empty() const noexcept {
- return (impl(p_)->p_ == nullptr) || (impl(p_)->s_ == 0);
-}
-
-void* buffer::data() noexcept {
- return impl(p_)->p_;
-}
-
-void const * buffer::data() const noexcept {
- return impl(p_)->p_;
-}
-
-std::size_t buffer::size() const noexcept {
- return impl(p_)->s_;
-}
-
-} // namespace ipc
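-
-// Illustrative usage sketch, assuming destructor_t declared in "libipc/buffer.h" is
-// callable as (void*, std::size_t):
-//
-//     void* mem = std::malloc(16);
-//     ipc::buffer owned(mem, 16, [](void* p, std::size_t) { std::free(p); });  // owning
-//     ipc::buffer view (mem, 16);                      // no destructor: non-owning view
-//     bool same = (owned == view);                     // true: same size, identical bytes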
diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/facerender/sync_batchnorm/unittest.py b/spaces/fb700/chatglm-fitness-RLHF/src/facerender/sync_batchnorm/unittest.py
deleted file mode 100644
index 0675c022e4ba85d38d1f813490f6740150909524..0000000000000000000000000000000000000000
--- a/spaces/fb700/chatglm-fitness-RLHF/src/facerender/sync_batchnorm/unittest.py
+++ /dev/null
@@ -1,29 +0,0 @@
-# -*- coding: utf-8 -*-
-# File : unittest.py
-# Author : Jiayuan Mao
-# Email : maojiayuan@gmail.com
-# Date : 27/01/2018
-#
-# This file is part of Synchronized-BatchNorm-PyTorch.
-# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
-# Distributed under MIT License.
-
-import unittest
-
-import numpy as np
-from torch.autograd import Variable
-
-
-def as_numpy(v):
- if isinstance(v, Variable):
- v = v.data
- return v.cpu().numpy()
-
-
-class TorchTestCase(unittest.TestCase):
- def assertTensorClose(self, a, b, atol=1e-3, rtol=1e-3):
- npa, npb = as_numpy(a), as_numpy(b)
- self.assertTrue(
- np.allclose(npa, npb, atol=atol),
- 'Tensor close check failed\n{}\n{}\nadiff={}, rdiff={}'.format(a, b, np.abs(npa - npb).max(), np.abs((npa - npb) / np.fmax(npa, 1e-5)).max())
- )
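-
-
-# Illustrative usage (a minimal sketch): a concrete test case can inherit from
-# TorchTestCase and compare tensors with assertTensorClose.
-class ExampleTest(TorchTestCase):
-    def test_clone_is_close(self):
-        import torch
-        x = torch.randn(4, 8)
-        self.assertTensorClose(x, x.clone())
-
-
-if __name__ == '__main__':
-    unittest.main()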
diff --git a/spaces/fclong/summary/fengshen/examples/ubert/README.md b/spaces/fclong/summary/fengshen/examples/ubert/README.md
deleted file mode 100644
index fdad2ca0d948830c51bf141dceb907c4531a4690..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/examples/ubert/README.md
+++ /dev/null
@@ -1,280 +0,0 @@
-# Ubert: A Unified New Paradigm for NLU Tasks
-- Paper: [https://arxiv.org/pdf/2206.12094.pdf](https://arxiv.org/pdf/2206.12094.pdf)
-- Zhihu: [https://zhuanlan.zhihu.com/p/539958182?](https://zhuanlan.zhihu.com/p/539958182?)
-
-### Introduction
-Ubert is the solution we proposed for the [2022 AIWIN World Artificial Intelligence Innovation Competition: Chinese insurance few-shot multi-task track](http://ailab.aiwin.org.cn/competitions/68#results). It took first place on both the A and B leaderboards, with the overall B-board score more than 1 percentage point ahead of the runner-up and nearly 5 percentage points ahead of third place, about 20 percentage points above the official baseline. Ubert handles not only common extraction tasks such as entity recognition and event extraction, but also classification tasks such as news classification and natural language inference, and all tasks share one model with a unified framework, unified task formulation and unified training objective. For the approach and solution details, see our competition presentation slides or our [Zhihu article](https://zhuanlan.zhihu.com/p/539958182?)
-
-## Released models
- The released models were obtained by further pretraining the competition model on 70+ re-curated datasets with more than 1 million samples in total, and they work out of the box. They are available here:
-| Model | Link |
-|:---------:|:--------------:|
-| Erlangshen-Ubert-110M-Chinese | [https://huggingface.co/IDEA-CCNL/Erlangshen-Ubert-110M-Chinese](https://huggingface.co/IDEA-CCNL/Erlangshen-Ubert-110M-Chinese) |
-| Erlangshen-Ubert-330M-Chinese | [https://huggingface.co/IDEA-CCNL/Erlangshen-Ubert-330M-Chinese](https://huggingface.co/IDEA-CCNL/Erlangshen-Ubert-330M-Chinese) |
-
-
-## Quick start
-Install our fengshen framework; for now we provide the following installation method
-```python
-git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
-cd Fengshenbang-LM
-pip install --editable ./
-```
-
-Run the code below to get predictions in one go; feel free to change the example text and the entity_type values to extract and try out the zero-shot performance
-```python
-import argparse
-from fengshen import UbertPiplines
-
-total_parser = argparse.ArgumentParser("TASK NAME")
-total_parser = UbertPiplines.piplines_args(total_parser)
-args = total_parser.parse_args()
-
-test_data=[
- {
- "task_type": "抽取任务",
- "subtask_type": "实体识别",
- "text": "这也让很多业主据此认为,雅清苑是政府公务员挤对了国家的经适房政策。",
- "choices": [
- {"entity_type": "小区名字"},
- {"entity_type": "岗位职责"}
- ],
- "id": 0}
-]
-
-model = UbertPiplines(args)
-result = model.predict(test_data)
-for line in result:
- print(line)
-```
-
-## Further fine-tuning
-
-The released models have already been pretrained on a large amount of data and can be used zero-shot as-is. If you want to keep fine-tuning, see our [example.py](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/examples/ubert/example.py): you only need to preprocess your data into the format we define, and a few lines of code will train the model and run inference. We reuse the pytorch-lightning trainer, so trainer arguments can be passed directly at training time; in addition we define a few parameters of our own. The common ones are:
-
-
-```sh
---pretrained_model_path #path of the pretrained model, default
---load_checkpoints_path #path of a checkpoint to load; pass this after fine-tuning if you want to load the model for prediction
---batchsize #batch size, default 8
---monitor #metric monitored when saving checkpoints, e.g. val_span_acc
---checkpoint_path #directory where checkpoints are saved, default ./checkpoint
---save_top_k #maximum number of checkpoints to keep, default 3
---every_n_train_steps #save a checkpoint every this many steps, default 100
---learning_rate #learning rate, default 2e-5
---warmup #warmup proportion, default 0.01
---default_root_dir #default output directory for logs
---gradient_clip_val #gradient clipping value, default 0.25
---gpus #number of GPUs
---check_val_every_n_epoch #how many epochs between validation runs, default 100
---max_epochs #number of epochs, default 5
---max_length #maximum sentence length, default 512
---num_labels #maximum number of labels per training sample; beyond this, negative labels are randomly sampled, default 10
-```
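-
-As a rough sketch only (the model name, values and the comment about the training call are placeholders; the complete training loop lives in [example.py](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/examples/ubert/example.py)), the options listed above can be overridden through the same argparse setup used in the quick-start snippet:
-
-```python
-import argparse
-from fengshen import UbertPiplines
-
-total_parser = argparse.ArgumentParser("TASK NAME")
-total_parser = UbertPiplines.piplines_args(total_parser)
-
-# Override a few of the options listed above; anything not passed keeps its default.
-args = total_parser.parse_args(args=[
-    '--pretrained_model_path', 'IDEA-CCNL/Erlangshen-Ubert-110M-Chinese',
-    '--batchsize', '8',
-    '--learning_rate', '2e-5',
-    '--max_epochs', '5',
-    '--checkpoint_path', './checkpoint',
-])
-
-model = UbertPiplines(args)
-# The training call and data loading follow example.py; prediction works as in the
-# quick-start snippet above, e.g. result = model.predict(test_data).
-```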
-
-## Data preprocessing examples
-
-The full pipeline for the model is already implemented, so for convenience we define a fixed data format. Pretraining currently covers the following task types
-
-| task_type | subtask_type |
-|:---------:|:--------------:|
-| 分类任务 (classification) | 文本分类 (text classification) |
-| | 自然语言推理 (natural language inference) |
-| | 情感分析 (sentiment analysis) |
-| | 多项式阅读理解 (multiple-choice reading comprehension) |
-| 抽取任务 (extraction) | 实体识别 (entity recognition) |
-| | 事件抽取 (event extraction) |
-| | 抽取式阅读理解 (extractive reading comprehension) |
-| | 关系抽取 (relation extraction) |
-
-### Classification tasks
-
-#### General classification
-For classification tasks, each class description is treated as an entity_type, and the key field is label: label = 1 means that this label is the correct one, as in the example below
-```json
-{
- "task_type": "分类任务",
- "subtask_type": "文本分类",
- "text": "7000亿美元救市方案将成期市毒药",
- "choices": [{
- "entity_type": "一则股票新闻",
- "label": 1,
- "entity_list": []
- }, {
- "entity_type": "一则教育新闻",
- "label": 0,
- "entity_list": []
- }, {
- "entity_type": "一则科学新闻",
- "label": 0,
- "entity_list": []
- }],
- "id": 0
-}
-
-```
-
-#### Natural language inference
-```json
-{
- "task_type": "分类任务",
- "subtask_type": "自然语言推理",
- "text": "在白云的蓝天下,一个孩子伸手摸着停在草地上的一架飞机的螺旋桨。",
- "choices": [{
- "entity_type": "可以推断出:一个孩子正伸手摸飞机的螺旋桨。",
- "label": 1,
- "entity_list": []
- }, {
- "entity_type": "不能推断出:一个孩子正伸手摸飞机的螺旋桨。",
- "label": 0,
- "entity_list": []
- }, {
- "entity_type": "很难推断出:一个孩子正伸手摸飞机的螺旋桨。",
- "label": 0,
- "entity_list": []
- }],
- "id": 0
-}
-```
-
-
-#### Semantic matching
-
-```json
-{
- "task_type": "分类任务",
- "subtask_type": "语义匹配",
- "text": "不要借了我是试试看能否操作的",
- "choices": [{
- "entity_type": "不能理解为:借款审核期间能否取消借款",
- "label": 1,
- "entity_list": []
- }, {
- "entity_type": "可以理解为:借款审核期间能否取消借款",
- "label": 0,
- "entity_list": []
- }],
- "id": 0
-}
-
-```
-
-### Extraction tasks
-For extraction tasks, the label field is not used
-#### Entity recognition
-```json
-{
- "task_type": "抽取任务",
- "subtask_type": "实体识别",
- "text": "彭小军认为,国内银行现在走的是台湾的发卡模式,先通过跑马圈地再在圈的地里面选择客户,",
- "choices": [{
- "entity_type": "地址",
- "label": 0,
- "entity_list": [{
- "entity_name": "台湾",
- "entity_type": "地址",
- "entity_idx": [
- [15, 16]
- ]
- }]
-    }, {
- "entity_type": "政府机构",
- "label": 0,
- "entity_list": []
- }, {
- "entity_type": "电影名称",
- "label": 0,
- "entity_list": []
- }, {
- "entity_type": "人物姓名",
- "label": 0,
- "entity_list": [{
- "entity_name": "彭小军",
- "entity_type": "人物姓名",
- "entity_idx": [
- [0, 2]
- ]
- }]
-    }],
- "id": 0
-}
-
-```
-#### Event extraction
-```json
-
-{
- "task_type": "抽取任务",
- "subtask_type": "事件抽取",
- "text": "小米9价格首降,6GB+128GB跌了200,却不如红米新机值得买",
- "choices": [{
- "entity_type": "降价的时间",
- "label": 0,
- "entity_list": []
- }, {
- "entity_type": "降价的降价方",
- "label": 0,
- "entity_list": []
- }, {
- "entity_type": "降价的降价物",
- "label": 0,
- "entity_list": [{
- "entity_name": "小米9",
- "entity_type": "降价的降价物",
- "entity_idx": [
- [0, 2]
- ]
- }, {
- "entity_name": "小米9",
- "entity_type": "降价的降价物",
- "entity_idx": [
- [0, 2]
- ]
- }]
- }, {
- "entity_type": "降价的降价幅度",
- "label": 0,
- "entity_list": []
- }],
- "id": 0
-}
-```
-#### Extractive reading comprehension
-
-```json
-{
- "task_type": "抽取任务",
- "subtask_type": "抽取式阅读理解",
- "text": "截至2014年7月1日,圣地亚哥人口估计为1381069人,是美国第八大城市,加利福尼亚州第二大城市。它是圣迭戈-蒂华纳城市群的一部分,是美国与底特律-温莎之后的第二大跨境城市群,人口4922723。圣地亚哥是加州的出生地,以全年温和的气候、天然的深水港、广阔的海滩、与美国海军的长期联系以及最近作为医疗和生物技术发展中心而闻名。",
- "choices": [{
- "entity_type": "除了医疗保健,圣迭戈哪个就业部门已经强势崛起?",
- "label": 0,
- "entity_list": [{
- "entity_name": "生物技术发展",
- "entity_idx": [
- [153, 158]
- ]
- }]
- }, {
- "entity_type": "在所有的军事部门中,哪一个在圣地亚哥的存在最为强大?",
- "label": 0,
- "entity_list": [{
- "entity_name": "美国海军",
- "entity_idx": [
- [135, 138]
- ]
- }]
- }, {
- "entity_type": "在美国十大城市中,圣迭戈排名哪一位?",
- "label": 0,
- "entity_list": [{
- "entity_name": "第八",
- "entity_idx": [
- [33, 34]
- ]
- }]
- }],
- "id": 0
-}
-```
-
diff --git a/spaces/fclong/summary/fengshen/examples/zen2_finetune/ner_zen2_base_cluener.sh b/spaces/fclong/summary/fengshen/examples/zen2_finetune/ner_zen2_base_cluener.sh
deleted file mode 100644
index 04b97b5fe5123af3170523dfde0ae008a78b2428..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/examples/zen2_finetune/ner_zen2_base_cluener.sh
+++ /dev/null
@@ -1,91 +0,0 @@
-#!/bin/bash
-#SBATCH --job-name=zen2_base_cluener # create a short name for your job
-#SBATCH --nodes=1 # node count
-#SBATCH --ntasks=1 # total number of tasks across all nodes
-#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks)
-#SBATCH --gres=gpu:1 # number of gpus per node
-#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc.
-#SBATCH -o /cognitive_comp/ganruyi/experiments/ner_finetune/zen2_base_cluener/%x-%j.log # output and error file name (%x=job name, %j=job id)
-
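-# Note: the #SBATCH directives above are only interpreted when this script is submitted
-# through SLURM (e.g. `sbatch ner_zen2_base_cluener.sh`); when run directly with bash they
-# are treated as ordinary comments and the script simply executes on the current machine.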
-
-# export CUDA_VISIBLE_DEVICES='2'
-export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions
-
-MODEL_NAME=zen2_base
-
-TASK=cluener
-
-ZERO_STAGE=1
-STRATEGY=deepspeed_stage_${ZERO_STAGE}
-
-ROOT_DIR=/cognitive_comp/ganruyi/experiments/ner_finetune/${MODEL_NAME}_${TASK}
-if [ ! -d ${ROOT_DIR} ];then
- mkdir -p ${ROOT_DIR}
- echo ${ROOT_DIR} created!!!!!!!!!!!!!!
-else
- echo ${ROOT_DIR} exist!!!!!!!!!!!!!!!
-fi
-
-DATA_DIR=/cognitive_comp/lujunyu/data_zh/NER_Aligned/CLUENER/
-PRETRAINED_MODEL_PATH=/cognitive_comp/ganruyi/hf_models/zen/zh_zen_base_2.0
-
-CHECKPOINT_PATH=${ROOT_DIR}/ckpt/
-OUTPUT_PATH=${ROOT_DIR}/predict.json
-
-DATA_ARGS="\
- --data_dir $DATA_DIR \
- --train_data train.char.txt \
- --valid_data dev.char.txt \
- --test_data dev.char.txt \
- --train_batchsize 32 \
- --valid_batchsize 16 \
- --max_seq_length 256 \
- --task_name cluener \
- "
-
-MODEL_ARGS="\
- --learning_rate 3e-5 \
- --weight_decay 0.1 \
- --warmup_ratio 0.01 \
- --markup bio \
- --middle_prefix I- \
- "
-
-MODEL_CHECKPOINT_ARGS="\
- --monitor val_f1 \
- --save_top_k 3 \
- --mode max \
- --every_n_train_steps 100 \
- --save_weights_only True \
- --dirpath $CHECKPOINT_PATH \
- --filename model-{epoch:02d}-{val_f1:.4f} \
- "
-
-TRAINER_ARGS="\
- --max_epochs 30 \
- --gpus 1 \
- --check_val_every_n_epoch 1 \
- --val_check_interval 100 \
- --default_root_dir $ROOT_DIR \
- "
-
-
-options=" \
- --pretrained_model_path $PRETRAINED_MODEL_PATH \
- --vocab_file $PRETRAINED_MODEL_PATH/vocab.txt \
- --do_lower_case \
- --output_save_path $OUTPUT_PATH \
- $DATA_ARGS \
- $MODEL_ARGS \
- $MODEL_CHECKPOINT_ARGS \
- $TRAINER_ARGS \
-"
-SCRIPT_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/zen2_finetune/fengshen_token_level_ft_task.py
-/home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options
-
-# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif
-# python3 $SCRIPT_PATH $options
-# source activate base
-# singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options
-# /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options
-
diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/models/encoders/psp_encoders.py b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/models/encoders/psp_encoders.py
deleted file mode 100644
index b41c1848c5e0bc3ab7d63bc5c33ab377daff530d..0000000000000000000000000000000000000000
--- a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/models/encoders/psp_encoders.py
+++ /dev/null
@@ -1,235 +0,0 @@
-from enum import Enum
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import Conv2d, BatchNorm2d, PReLU, Sequential, Module
-
-from .helpers import get_blocks, bottleneck_IR, bottleneck_IR_SE, _upsample_add
-from ..stylegan2.model import EqualLinear
-
-
-class ProgressiveStage(Enum):
- WTraining = 0
- Delta1Training = 1
- Delta2Training = 2
- Delta3Training = 3
- Delta4Training = 4
- Delta5Training = 5
- Delta6Training = 6
- Delta7Training = 7
- Delta8Training = 8
- Delta9Training = 9
- Delta10Training = 10
- Delta11Training = 11
- Delta12Training = 12
- Delta13Training = 13
- Delta14Training = 14
- Delta15Training = 15
- Delta16Training = 16
- Delta17Training = 17
- Inference = 18
-
-
-class GradualStyleBlock(Module):
- def __init__(self, in_c, out_c, spatial):
- super(GradualStyleBlock, self).__init__()
- self.out_c = out_c
- self.spatial = spatial
- num_pools = int(np.log2(spatial))
- modules = []
- modules += [Conv2d(in_c, out_c, kernel_size=3, stride=2, padding=1),
- nn.LeakyReLU()]
- for i in range(num_pools - 1):
- modules += [
- Conv2d(out_c, out_c, kernel_size=3, stride=2, padding=1),
- nn.LeakyReLU()
- ]
- self.convs = nn.Sequential(*modules)
- self.linear = EqualLinear(out_c, out_c, lr_mul=1)
-
- def forward(self, x):
- x = self.convs(x)
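-        # the stride-2 convolutions have reduced the spatial size to 1x1, so flatten to (N, out_c)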
- x = x.view(-1, self.out_c)
- x = self.linear(x)
- return x
-
-
-class GradualStyleEncoder(Module):
- def __init__(self, num_layers, mode='ir', opts=None):
- super(GradualStyleEncoder, self).__init__()
- assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152'
- assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se'
- blocks = get_blocks(num_layers)
- if mode == 'ir':
- unit_module = bottleneck_IR
- elif mode == 'ir_se':
- unit_module = bottleneck_IR_SE
- self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False),
- BatchNorm2d(64),
- PReLU(64))
- modules = []
- for block in blocks:
- for bottleneck in block:
- modules.append(unit_module(bottleneck.in_channel,
- bottleneck.depth,
- bottleneck.stride))
- self.body = Sequential(*modules)
-
- self.styles = nn.ModuleList()
- log_size = int(math.log(opts.stylegan_size, 2))
- self.style_count = 2 * log_size - 2
- self.coarse_ind = 3
- self.middle_ind = 7
- for i in range(self.style_count):
- if i < self.coarse_ind:
- style = GradualStyleBlock(512, 512, 16)
- elif i < self.middle_ind:
- style = GradualStyleBlock(512, 512, 32)
- else:
- style = GradualStyleBlock(512, 512, 64)
- self.styles.append(style)
- self.latlayer1 = nn.Conv2d(256, 512, kernel_size=1, stride=1, padding=0)
- self.latlayer2 = nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0)
-
- def forward(self, x):
- x = self.input_layer(x)
-
- latents = []
- modulelist = list(self.body._modules.values())
- for i, l in enumerate(modulelist):
- x = l(x)
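-            # cache feature maps at three depths: c3 feeds the coarse styles directly, while c2 and c1 enter via the lateral layers below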
- if i == 6:
- c1 = x
- elif i == 20:
- c2 = x
- elif i == 23:
- c3 = x
-
- for j in range(self.coarse_ind):
- latents.append(self.styles[j](c3))
-
- p2 = _upsample_add(c3, self.latlayer1(c2))
- for j in range(self.coarse_ind, self.middle_ind):
- latents.append(self.styles[j](p2))
-
- p1 = _upsample_add(p2, self.latlayer2(c1))
- for j in range(self.middle_ind, self.style_count):
- latents.append(self.styles[j](p1))
-
- out = torch.stack(latents, dim=1)
- return out
-
-
-class Encoder4Editing(Module):
- def __init__(self, num_layers, mode='ir', opts=None):
- super(Encoder4Editing, self).__init__()
- assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152'
- assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se'
- blocks = get_blocks(num_layers)
- if mode == 'ir':
- unit_module = bottleneck_IR
- elif mode == 'ir_se':
- unit_module = bottleneck_IR_SE
- self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False),
- BatchNorm2d(64),
- PReLU(64))
- modules = []
- for block in blocks:
- for bottleneck in block:
- modules.append(unit_module(bottleneck.in_channel,
- bottleneck.depth,
- bottleneck.stride))
- self.body = Sequential(*modules)
-
- self.styles = nn.ModuleList()
- log_size = int(math.log(opts.stylegan_size, 2))
- self.style_count = 2 * log_size - 2
- self.coarse_ind = 3
- self.middle_ind = 7
-
- for i in range(self.style_count):
- if i < self.coarse_ind:
- style = GradualStyleBlock(512, 512, 16)
- elif i < self.middle_ind:
- style = GradualStyleBlock(512, 512, 32)
- else:
- style = GradualStyleBlock(512, 512, 64)
- self.styles.append(style)
-
- self.latlayer1 = nn.Conv2d(256, 512, kernel_size=1, stride=1, padding=0)
- self.latlayer2 = nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0)
-
- self.progressive_stage = ProgressiveStage.Inference
-
- def get_deltas_starting_dimensions(self):
- ''' Get a list of the initial dimension of every delta from which it is applied '''
- return list(range(self.style_count)) # Each dimension has a delta applied to it
-
- def set_progressive_stage(self, new_stage: ProgressiveStage):
- self.progressive_stage = new_stage
- print('Changed progressive stage to: ', new_stage)
-
- def forward(self, x):
- x = self.input_layer(x)
-
- modulelist = list(self.body._modules.values())
- for i, l in enumerate(modulelist):
- x = l(x)
- if i == 6:
- c1 = x
- elif i == 20:
- c2 = x
- elif i == 23:
- c3 = x
-
- # Infer main W and duplicate it
- w0 = self.styles[0](c3)
- w = w0.repeat(self.style_count, 1, 1).permute(1, 0, 2)
- stage = self.progressive_stage.value
- features = c3
- for i in range(1, min(stage + 1, self.style_count)): # Infer additional deltas
- if i == self.coarse_ind:
- p2 = _upsample_add(c3, self.latlayer1(c2)) # FPN's middle features
- features = p2
- elif i == self.middle_ind:
- p1 = _upsample_add(p2, self.latlayer2(c1)) # FPN's fine features
- features = p1
- delta_i = self.styles[i](features)
- w[:, i] += delta_i
- return w
-
-
-class BackboneEncoderUsingLastLayerIntoW(Module):
- def __init__(self, num_layers, mode='ir', opts=None):
- super(BackboneEncoderUsingLastLayerIntoW, self).__init__()
- print('Using BackboneEncoderUsingLastLayerIntoW')
- assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152'
- assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se'
- blocks = get_blocks(num_layers)
- if mode == 'ir':
- unit_module = bottleneck_IR
- elif mode == 'ir_se':
- unit_module = bottleneck_IR_SE
- self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False),
- BatchNorm2d(64),
- PReLU(64))
- self.output_pool = torch.nn.AdaptiveAvgPool2d((1, 1))
- self.linear = EqualLinear(512, 512, lr_mul=1)
- modules = []
- for block in blocks:
- for bottleneck in block:
- modules.append(unit_module(bottleneck.in_channel,
- bottleneck.depth,
- bottleneck.stride))
- self.body = Sequential(*modules)
- log_size = int(math.log(opts.stylegan_size, 2))
- self.style_count = 2 * log_size - 2
-
- def forward(self, x):
- x = self.input_layer(x)
- x = self.body(x)
- x = self.output_pool(x)
- x = x.view(-1, 512)
- x = self.linear(x)
- return x.repeat(self.style_count, 1, 1).permute(1, 0, 2)
diff --git a/spaces/fengmuxi/ChatGpt-Web/.github/ISSUE_TEMPLATE/bug_report.md b/spaces/fengmuxi/ChatGpt-Web/.github/ISSUE_TEMPLATE/bug_report.md
deleted file mode 100644
index 01fa35e8230e4c93d27005266a95a47a0d612ffb..0000000000000000000000000000000000000000
--- a/spaces/fengmuxi/ChatGpt-Web/.github/ISSUE_TEMPLATE/bug_report.md
+++ /dev/null
@@ -1,43 +0,0 @@
----
-name: Bug report
-about: Create a report to help us improve
-title: "[Bug] "
-labels: ''
-assignees: ''
-
----
-
-**Describe the bug**
-A clear and concise description of what the bug is.
-
-**To Reproduce**
-Steps to reproduce the behavior:
-1. Go to '...'
-2. Click on '....'
-3. Scroll down to '....'
-4. See error
-
-**Expected behavior**
-A clear and concise description of what you expected to happen.
-
-**Screenshots**
-If applicable, add screenshots to help explain your problem.
-
-**Deployment**
-- [ ] Docker
-- [ ] Vercel
-- [ ] Server
-
-**Desktop (please complete the following information):**
- - OS: [e.g. iOS]
- - Browser [e.g. chrome, safari]
- - Version [e.g. 22]
-
-**Smartphone (please complete the following information):**
- - Device: [e.g. iPhone6]
- - OS: [e.g. iOS8.1]
- - Browser [e.g. stock browser, safari]
- - Version [e.g. 22]
-
-**Additional Logs**
-Add any logs about the problem here.
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download APKCombo for Minecraft Trial Explore Craft and Survive in the World of Minecraft.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download APKCombo for Minecraft Trial Explore Craft and Survive in the World of Minecraft.md
deleted file mode 100644
index 6baca0ace99d07403bb0c507cd76710f8175f8be..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download APKCombo for Minecraft Trial Explore Craft and Survive in the World of Minecraft.md
+++ /dev/null
@@ -1,94 +0,0 @@
-
How to Download Minecraft Trial from APKCombo
-
If you are a fan of Minecraft, you might have heard of Minecraft Trial, a free version of the popular sandbox game that lets you explore, build, and survive in a randomly generated world. But did you know that you can download Minecraft Trial from APKCombo, a website that offers thousands of Android apps and games for free? In this article, we will show you how to download Minecraft Trial from APKCombo, what benefits it offers, and how to install it on your device. Let's get started!
What is Minecraft Trial and APKCombo?
-
Minecraft Trial is a limited version of Minecraft that allows you to play for up to 90 minutes in survival mode. You can create your own world, craft tools and weapons, fight enemies, and explore different biomes. However, you cannot save your progress, join multiplayer servers, or use custom skins or mods. Minecraft Trial is a great way to try out Minecraft before buying the full game.
-
APKCombo is a website that provides free downloads of Android apps and games in APK format. APK stands for Android Package Kit, which is a file format that contains all the necessary components for an app or game to run on an Android device. By downloading APK files from APKCombo, you can bypass the Google Play Store and install apps and games directly on your device. You can also access the latest versions of apps and games, as well as older versions that may not be available on the Play Store.
-
APKCombo is a safe and reliable source for downloading Android apps and games. It scans all the APK files for viruses and malware before uploading them to its website. It also verifies the authenticity of the APK files by checking their signatures. You can trust that all the apps and games on APKCombo are original and unmodified.
-
Benefits of downloading Minecraft Trial from APKCombo
-
There are many benefits of downloading Minecraft Trial from APKCombo, such as:
-
-
No need to sign up or log in to Google Play Store. You can download Minecraft Trial without creating an account or providing any personal information.
-
Access to the latest version of Minecraft Trial and other apps and games. You can always find the most updated version of Minecraft Trial on APKCombo, as well as other apps and games that may not be available on the Play Store due to regional restrictions or compatibility issues.
-
Easy to find and download the compatible APK file for your device. You can choose the APK file that matches your device's specifications, such as CPU architecture, screen size, and Android version. You can also compare the file size and version number of different APK files.
-
Fast and secure download speed. You can download Minecraft Trial from APKCombo at a high speed, without any interruptions or errors. You can also resume your download if it gets paused or canceled.
-
-
As you can see, downloading Minecraft Trial from APKCombo has many advantages over downloading it from the Play Store. Now, let's see how to do it.
-
Step-by-step guide on how to download and install Minecraft Trial from APKCombo
-
Downloading and installing Minecraft Trial from APKCombo is very easy and simple. Just follow these steps:
Visit the APKCombo website and search for Minecraft Trial.
-
Choose the appropriate APK file for your device and click on the download button. You can see the file size, version number, and compatibility information of each APK file. For example, if your device has an ARM64 CPU and runs on Android 10, you can choose the APK file that says "arm64-v8a Android 10+ Q (10)".
-
Wait for the download to finish and then open the APK file. You may need to use a file manager app to locate the downloaded file in your device's storage.
-
Allow the installation of unknown sources if prompted. This is a security setting that prevents the installation of apps from sources other than the Play Store. To enable it, go to your device's settings, then security, then unknown sources, and toggle it on.
-
Follow the instructions on the screen to complete the installation. It may take a few seconds or minutes depending on your device's performance.
-
Launch the Minecraft Trial app and enjoy playing. You can access the app from your app drawer or home screen.
-
-
Congratulations! You have successfully downloaded and installed Minecraft Trial from APKCombo. Now you can experience the fun and creativity of Minecraft for free.
-
Conclusion
-
Minecraft Trial is a free version of Minecraft that lets you play for up to 90 minutes in survival mode. You can download Minecraft Trial from APKCombo, a website that offers thousands of Android apps and games in APK format. By downloading Minecraft Trial from APKCombo, you can enjoy many benefits, such as no need to sign up or log in to Google Play Store, access to the latest version of Minecraft Trial and other apps and games, easy to find and download the compatible APK file for your device, and fast and secure download speed. To download Minecraft Trial from APKCombo, you just need to follow a simple step-by-step guide that we have provided in this article.
-
We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you. And if you liked this article, please share it with your friends and family who might be interested in downloading Minecraft Trial from APKCombo.
-
Thank you for reading and happy gaming!
-
Frequently Asked Questions
-
Here are some of the most common questions that people ask about downloading Minecraft Trial from APKCombo:
-
Is Minecraft Trial free?
-
Yes, Minecraft Trial is free to download and play. However, it has some limitations compared to the full version of Minecraft, such as no saving progress, no multiplayer mode, no custom skins or mods, and a time limit of 90 minutes per session.
-
Is APKCombo safe?
-
Yes, APKCombo is safe and reliable. It scans all the APK files for viruses and malware before uploading them to its website. It also verifies the authenticity of the APK files by checking their signatures. You can trust that all the apps and games on APKCombo are original and unmodified.
-
How do I update Minecraft Trial?
-
To update Minecraft Trial, you need to visit the APKCombo website again and download the latest version of the APK file. Then you need to uninstall the old version of Minecraft Trial from your device and install the new version using the same steps as before.
-
Can I play Minecraft Trial offline?
-
Yes, you can play Minecraft Trial offline without an internet connection. However, you may need an internet connection when you first launch the app or when you want to access some online features such as feedback or help.
-
How do I uninstall Minecraft Trial?
-
To uninstall Minecraft Trial, you need to go to your device's settings, then apps, then Minecraft Trial, and tap on the uninstall button. You can also long-press on the Minecraft Trial icon on your home screen or app drawer and drag it to the uninstall option.
-
-
\ No newline at end of file
diff --git a/spaces/fffiloni/DragGAN/README.md b/spaces/fffiloni/DragGAN/README.md
deleted file mode 100644
index 969c612ac6c1c25bb286b090c8b43466de46fd89..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/DragGAN/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: DragGAN
-emoji: ⚡
-colorFrom: pink
-colorTo: green
-sdk: gradio
-sdk_version: 3.30.0
-app_file: gradio_app.py
-pinned: false
-duplicated_from: aaronb/DragGAN
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/fffiloni/lama-video-watermark-remover/bin/paper_runfiles/generate_val_test.sh b/spaces/fffiloni/lama-video-watermark-remover/bin/paper_runfiles/generate_val_test.sh
deleted file mode 100644
index d9b2a370ceeeb8f401706f4303298db13e5fad91..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/lama-video-watermark-remover/bin/paper_runfiles/generate_val_test.sh
+++ /dev/null
@@ -1,28 +0,0 @@
-#!/usr/bin/env bash
-
-# !!! file set to make test_large_30k from the vanilla test_large: configs/test_large_30k.lst
-
-# paths to data are valid for mml7
-PLACES_ROOT="/data/inpainting/Places365"
-OUT_DIR="/data/inpainting/paper_data/Places365_val_test"
-
-source "$(dirname $0)/env.sh"
-
-for datadir in test_large_30k # val_large
-do
- for conf in random_thin_256 random_medium_256 random_thick_256 random_thin_512 random_medium_512 random_thick_512
- do
- "$BINDIR/gen_mask_dataset.py" "$CONFIGDIR/data_gen/${conf}.yaml" \
- "$PLACES_ROOT/$datadir" "$OUT_DIR/$datadir/$conf" --n-jobs 8
-
- "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats"
- done
-
- for conf in segm_256 segm_512
- do
- "$BINDIR/gen_mask_dataset.py" "$CONFIGDIR/data_gen/${conf}.yaml" \
- "$PLACES_ROOT/$datadir" "$OUT_DIR/$datadir/$conf" --n-jobs 2
-
- "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats"
- done
-done
diff --git a/spaces/fffiloni/lama-video-watermark-remover/masks/readme.md b/spaces/fffiloni/lama-video-watermark-remover/masks/readme.md
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/flowers-team/SocialAISchool/visualizer.sh b/spaces/flowers-team/SocialAISchool/visualizer.sh
deleted file mode 100644
index 49685fd4f28447f13d5c83b48644bf85693e4449..0000000000000000000000000000000000000000
--- a/spaces/flowers-team/SocialAISchool/visualizer.sh
+++ /dev/null
@@ -1,25 +0,0 @@
-python -m scripts.visualize \
---model 13-03_VIGIL4_WizardGuide_lang64_mm_baby_short_rec_env_MiniGrid-GoToDoorTalkHardSesameWizardGuideLang64-8x8-v0_multi-modal-babyai11-agent_arch_original_endpool_res_custom-ppo-2_exploration-bonus-params_5_50/0 \
---episodes 3 --seed=5 --gif graphics/gifs/MH-BabyAI-EB-Ablation --pause 0.2
-python -m scripts.visualize \
---model 13-03_VIGIL4_WizardGuide_lang64_mm_baby_short_rec_env_MiniGrid-GoToDoorTalkHardSesameWizardGuideLang64-8x8-v0_multi-modal-babyai11-agent_arch_original_endpool_res_custom-ppo-2_exploration-bonus-params_5_50/0 \
---episodes 3 --seed=5 --gif graphics/gifs/MH-BabyAI-EB-Ablation-Deterministic --pause 0.2 --argmax
-python -m scripts.visualize \
---model 13-03_VIGIL4_WizardTwoGuides_lang64_mm_baby_short_rec_env_MiniGrid-GoToDoorTalkHardSesameNPCGuidesLang64-8x8-v0_multi-modal-babyai11-agent_arch_original_endpool_res_custom-ppo-2_exploration-bonus-params_5_50/0 \
---episodes 3 --seed=5 --gif graphics/gifs/MH-BabyAI-EB-Original --pause 0.2
-python -m scripts.visualize \
---model 13-03_VIGIL4_WizardTwoGuides_lang64_mm_baby_short_rec_env_MiniGrid-GoToDoorTalkHardSesameNPCGuidesLang64-8x8-v0_multi-modal-babyai11-agent_arch_original_endpool_res_custom-ppo-2_exploration-bonus-params_5_50/0 \
---episodes 3 --seed=5 --gif graphics/gifs/MH-BabyAI-EB-Original-Deterministic --pause 0.2 --argmax
-# no explo
-python -m scripts.visualize \
---model 13-03_VIGIL4_WizardGuide_lang64_no_explo_mm_baby_short_rec_env_MiniGrid-GoToDoorTalkHardSesameWizardGuideLang64-8x8-v0_multi-modal-babyai11-agent_arch_original_endpool_res_custom-ppo-2/0 \
---episodes 3 --seed=5 --gif graphics/gifs/MH-BabyAI-Ablation --pause 0.2
-python -m scripts.visualize \
---model 13-03_VIGIL4_WizardGuide_lang64_no_explo_mm_baby_short_rec_env_MiniGrid-GoToDoorTalkHardSesameWizardGuideLang64-8x8-v0_multi-modal-babyai11-agent_arch_original_endpool_res_custom-ppo-2/0 \
---episodes 3 --seed=5 --gif graphics/gifs/MH-BabyAI-Ablation-Deterministic --pause 0.2 --argmax
-python -m scripts.visualize \
---model 13-03_VIGIL4_WizardTwoGuides_lang64_no_explo_mm_baby_short_rec_env_MiniGrid-GoToDoorTalkHardSesameNPCGuidesLang64-8x8-v0_multi-modal-babyai11-agent_arch_original_endpool_res_custom-ppo-2/0 \
---episodes 3 --seed=5 --gif graphics/gifs/MH-BabyAI-Original --pause 0.2
-python -m scripts.visualize \
---model 13-03_VIGIL4_WizardTwoGuides_lang64_no_explo_mm_baby_short_rec_env_MiniGrid-GoToDoorTalkHardSesameNPCGuidesLang64-8x8-v0_multi-modal-babyai11-agent_arch_original_endpool_res_custom-ppo-2/0 \
---episodes 3 --seed=5 --gif graphics/gifs/MH-BabyAI-Original-Deterministic --pause 0.2 --argmax
diff --git a/spaces/freddyaboulton/gradio-lite-sklearn/README.md b/spaces/freddyaboulton/gradio-lite-sklearn/README.md
deleted file mode 100644
index a68162d5e322d9f6948a791739e9ccf27acc26a1..0000000000000000000000000000000000000000
--- a/spaces/freddyaboulton/gradio-lite-sklearn/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Gradio Lite Classify
-emoji: 🔥
-colorFrom: purple
-colorTo: yellow
-sdk: static
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/freshield/ChatGPT-gradio/offline/insert_user.py b/spaces/freshield/ChatGPT-gradio/offline/insert_user.py
deleted file mode 100644
index 65be71159f391e6901799021312f0a776ccdb207..0000000000000000000000000000000000000000
--- a/spaces/freshield/ChatGPT-gradio/offline/insert_user.py
+++ /dev/null
@@ -1,24 +0,0 @@
-# coding=utf-8
-"""
-@Author: Freshield
-@Contact: yangyufresh@163.com
-@File: insert_user.py
-@Time: 2023-03-09 22:35
-@Last_update: 2023-03-09 22:35
-@Desc: None
-@==============================================@
-@ _____ _ _ _ _ @
-@ | __|___ ___ ___| |_|_|___| |_| | @
-@ | __| _| -_|_ -| | | -_| | . | @
-@ |__| |_| |___|___|_|_|_|___|_|___| @
-@ Freshield @
-@==============================================@
-"""
-from lib.MongdbClient import MongodbClient
-
-
-if __name__ == '__main__':
-    # add a user offline
- mongo_client = MongodbClient()
- username, password = '', ''
- mongo_client.insert_user(username, password)
diff --git a/spaces/gaouzief/b/README.md b/spaces/gaouzief/b/README.md
deleted file mode 100644
index 97f04e02da8de6687466b45648ab4840e2805ffe..0000000000000000000000000000000000000000
--- a/spaces/gaouzief/b/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: B
-emoji: 🐠
-colorFrom: blue
-colorTo: red
-sdk: gradio
-sdk_version: 3.17.0
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/decoders/deeplabv3/decoder.py b/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/decoders/deeplabv3/decoder.py
deleted file mode 100644
index ecc37411a1af6cbb55933a1b0708250d0592fae7..0000000000000000000000000000000000000000
--- a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/decoders/deeplabv3/decoder.py
+++ /dev/null
@@ -1,220 +0,0 @@
-"""
-BSD 3-Clause License
-
-Copyright (c) Soumith Chintala 2016,
-All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are met:
-
-* Redistributions of source code must retain the above copyright notice, this
- list of conditions and the following disclaimer.
-
-* Redistributions in binary form must reproduce the above copyright notice,
- this list of conditions and the following disclaimer in the documentation
- and/or other materials provided with the distribution.
-
-* Neither the name of the copyright holder nor the names of its
- contributors may be used to endorse or promote products derived from
- this software without specific prior written permission.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
-FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-"""
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-__all__ = ["DeepLabV3Decoder"]
-
-
-class DeepLabV3Decoder(nn.Sequential):
- def __init__(self, in_channels, out_channels=256, atrous_rates=(12, 24, 36)):
- super().__init__(
- ASPP(in_channels, out_channels, atrous_rates),
- nn.Conv2d(out_channels, out_channels, 3, padding=1, bias=False),
- nn.BatchNorm2d(out_channels),
- nn.ReLU(),
- )
- self.out_channels = out_channels
-
- def forward(self, *features):
- return super().forward(features[-1])
-
-
-class DeepLabV3PlusDecoder(nn.Module):
- def __init__(
- self,
- encoder_channels,
- out_channels=256,
- atrous_rates=(12, 24, 36),
- output_stride=16,
- ):
- super().__init__()
- if output_stride not in {8, 16}:
- raise ValueError(
- "Output stride should be 8 or 16, got {}.".format(output_stride)
- )
-
- self.out_channels = out_channels
- self.output_stride = output_stride
-
- self.aspp = nn.Sequential(
- ASPP(encoder_channels[-1], out_channels, atrous_rates, separable=True),
- SeparableConv2d(
- out_channels, out_channels, kernel_size=3, padding=1, bias=False
- ),
- nn.BatchNorm2d(out_channels),
- nn.ReLU(),
- )
-
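-        # bring the ASPP output (at 1/16 or 1/8 of the input resolution) up to the 1/4-resolution high-res branch before fusion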
- scale_factor = 2 if output_stride == 8 else 4
- self.up = nn.UpsamplingBilinear2d(scale_factor=scale_factor)
-
- highres_in_channels = encoder_channels[-4]
- highres_out_channels = 48 # proposed by authors of paper
- self.block1 = nn.Sequential(
- nn.Conv2d(
- highres_in_channels, highres_out_channels, kernel_size=1, bias=False
- ),
- nn.BatchNorm2d(highres_out_channels),
- nn.ReLU(),
- )
- self.block2 = nn.Sequential(
- SeparableConv2d(
- highres_out_channels + out_channels,
- out_channels,
- kernel_size=3,
- padding=1,
- bias=False,
- ),
- nn.BatchNorm2d(out_channels),
- nn.ReLU(),
- )
-
- def forward(self, *features):
- aspp_features = self.aspp(features[-1])
- aspp_features = self.up(aspp_features)
- high_res_features = self.block1(features[-4])
- concat_features = torch.cat([aspp_features, high_res_features], dim=1)
- fused_features = self.block2(concat_features)
- return fused_features
-
-
-class ASPPConv(nn.Sequential):
- def __init__(self, in_channels, out_channels, dilation):
- super().__init__(
- nn.Conv2d(
- in_channels,
- out_channels,
- kernel_size=3,
- padding=dilation,
- dilation=dilation,
- bias=False,
- ),
- nn.BatchNorm2d(out_channels),
- nn.ReLU(),
- )
-
-
-class ASPPSeparableConv(nn.Sequential):
- def __init__(self, in_channels, out_channels, dilation):
- super().__init__(
- SeparableConv2d(
- in_channels,
- out_channels,
- kernel_size=3,
- padding=dilation,
- dilation=dilation,
- bias=False,
- ),
- nn.BatchNorm2d(out_channels),
- nn.ReLU(),
- )
-
-
-class ASPPPooling(nn.Sequential):
- def __init__(self, in_channels, out_channels):
- super().__init__(
- nn.AdaptiveAvgPool2d(1),
- nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
- nn.BatchNorm2d(out_channels),
- nn.ReLU(),
- )
-
- def forward(self, x):
- size = x.shape[-2:]
- for mod in self:
- x = mod(x)
- return F.interpolate(x, size=size, mode="bilinear", align_corners=False)
-
-
-class ASPP(nn.Module):
- def __init__(self, in_channels, out_channels, atrous_rates, separable=False):
- super(ASPP, self).__init__()
- modules = []
- modules.append(
- nn.Sequential(
- nn.Conv2d(in_channels, out_channels, 1, bias=False),
- nn.BatchNorm2d(out_channels),
- nn.ReLU(),
- )
- )
-
- rate1, rate2, rate3 = tuple(atrous_rates)
- ASPPConvModule = ASPPConv if not separable else ASPPSeparableConv
-
- modules.append(ASPPConvModule(in_channels, out_channels, rate1))
- modules.append(ASPPConvModule(in_channels, out_channels, rate2))
- modules.append(ASPPConvModule(in_channels, out_channels, rate3))
- modules.append(ASPPPooling(in_channels, out_channels))
-
- self.convs = nn.ModuleList(modules)
-
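-        # the five branch outputs (one 1x1 conv, three atrous convs, image-level pooling) are concatenated, hence 5 * out_channels below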
- self.project = nn.Sequential(
- nn.Conv2d(5 * out_channels, out_channels, kernel_size=1, bias=False),
- nn.BatchNorm2d(out_channels),
- nn.ReLU(),
- nn.Dropout(0.5),
- )
-
- def forward(self, x):
- res = []
- for conv in self.convs:
- res.append(conv(x))
- res = torch.cat(res, dim=1)
- return self.project(res)
-
-
-class SeparableConv2d(nn.Sequential):
- def __init__(
- self,
- in_channels,
- out_channels,
- kernel_size,
- stride=1,
- padding=0,
- dilation=1,
- bias=True,
- ):
-        depthwise_conv = nn.Conv2d(
- in_channels,
- in_channels,
- kernel_size,
- stride=stride,
- padding=padding,
- dilation=dilation,
- groups=in_channels,
- bias=False,
- )
- pointwise_conv = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=bias,)
-        super().__init__(depthwise_conv, pointwise_conv)
diff --git a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/decoders/linknet/model.py b/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/decoders/linknet/model.py
deleted file mode 100644
index b8c3139fdc4db0d5dddfbf292b76c0cc8fccb873..0000000000000000000000000000000000000000
--- a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/decoders/linknet/model.py
+++ /dev/null
@@ -1,98 +0,0 @@
-from typing import Optional, Union
-
-from segmentation_models_pytorch.base import (
- SegmentationHead,
- SegmentationModel,
- ClassificationHead,
-)
-from segmentation_models_pytorch.encoders import get_encoder
-from .decoder import LinknetDecoder
-
-
-class Linknet(SegmentationModel):
- """Linknet_ is a fully convolution neural network for image semantic segmentation. Consist of *encoder*
- and *decoder* parts connected with *skip connections*. Encoder extract features of different spatial
- resolution (skip connections) which are used by decoder to define accurate segmentation mask. Use *sum*
- for fusing decoder blocks with skip connections.
-
- Note:
- This implementation by default has 4 skip connections (original - 3).
-
- Args:
- encoder_name: Name of the classification model that will be used as an encoder (a.k.a backbone)
- to extract features of different spatial resolution
-        encoder_depth: A number of stages used in the encoder, in range [3, 5]. Each stage generates features
-            two times smaller in spatial dimensions than the previous one (e.g. for depth 0 we will have features
- with shapes [(N, C, H, W),], for depth 1 - [(N, C, H, W), (N, C, H // 2, W // 2)] and so on).
- Default is 5
- encoder_weights: One of **None** (random initialization), **"imagenet"** (pre-training on ImageNet) and
- other pretrained weights (see table with available weights for each encoder_name)
- decoder_use_batchnorm: If **True**, BatchNorm2d layer between Conv2D and Activation layers
-            is used. If **"inplace"**, InplaceABN will be used, which decreases memory consumption.
- Available options are **True, False, "inplace"**
- in_channels: A number of input channels for the model, default is 3 (RGB images)
- classes: A number of classes for output mask (or you can think as a number of channels of output mask)
- activation: An activation function to apply after the final convolution layer.
- Available options are **"sigmoid"**, **"softmax"**, **"logsoftmax"**, **"tanh"**, **"identity"**,
- **callable** and **None**.
- Default is **None**
-        aux_params: Dictionary with parameters of the auxiliary output (classification head). The auxiliary output is built
-            on top of the encoder if **aux_params** is not **None** (default). Supported params:
- - classes (int): A number of classes
- - pooling (str): One of "max", "avg". Default is "avg"
- - dropout (float): Dropout factor in [0, 1)
- - activation (str): An activation function to apply "sigmoid"/"softmax"
- (could be **None** to return logits)
-
- Returns:
- ``torch.nn.Module``: **Linknet**
-
- .. _Linknet:
- https://arxiv.org/abs/1707.03718
- """
-
- def __init__(
- self,
- encoder_name: str = "resnet34",
- encoder_depth: int = 5,
- encoder_weights: Optional[str] = "imagenet",
- decoder_use_batchnorm: bool = True,
- in_channels: int = 3,
- classes: int = 1,
- activation: Optional[Union[str, callable]] = None,
- aux_params: Optional[dict] = None,
- ):
- super().__init__()
-
- if encoder_name.startswith("mit_b"):
- raise ValueError(
- "Encoder `{}` is not supported for Linknet".format(encoder_name)
- )
-
- self.encoder = get_encoder(
- encoder_name,
- in_channels=in_channels,
- depth=encoder_depth,
- weights=encoder_weights,
- )
-
- self.decoder = LinknetDecoder(
- encoder_channels=self.encoder.out_channels,
- n_blocks=encoder_depth,
- prefinal_channels=32,
- use_batchnorm=decoder_use_batchnorm,
- )
-
- self.segmentation_head = SegmentationHead(
- in_channels=32, out_channels=classes, activation=activation, kernel_size=1
- )
-
- if aux_params is not None:
- self.classification_head = ClassificationHead(
- in_channels=self.encoder.out_channels[-1], **aux_params
- )
- else:
- self.classification_head = None
-
- self.name = "link-{}".format(encoder_name)
- self.initialize()
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Brain Dead Full Movie In Hindi Download [HOT].md b/spaces/gotiQspiryo/whisper-ui/examples/Brain Dead Full Movie In Hindi Download [HOT].md
deleted file mode 100644
index eaf97a1adf0161681090e3615128f6ffbfd35307..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Brain Dead Full Movie In Hindi Download [HOT].md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
-Download Movies, Series, TV shows, Mp4, Web, Mobile or Android free. Tv Shows. Netflix TV Series.
-
-Brain Dead Horror Thriller Movie A Dirty Girl. Screnwood Subtitle. Brain Dead Horror Thriller Movie Without Subtitles. Brain Dead Horror Thriller Movie For Dummies. Love at First sight 2014. Hollywood Movies Actors All. C. P. & C. Brain Dead (1993) David Cronenberg as our guide to the near future. If they happen, it's going to be fun.In vivo kinetics of human and rat insulin compared by double-isotope dilution: a new approach to glucose monitoring.
-
-A new method for quantitating the kinetics of insulin action in vivo in both humans and rats has been developed. A constant infusion of a 2-deuterated glucose solution is introduced into the body and allows determination of the fractional catabolic rate (FCR) of glucose as well as the fractional insulin-induced disposal of glucose (FID). Rates of glucose disappearance from plasma are measured at 3-min intervals using a 2H6-glucose infusion. FCR and FID are calculated from the relationship between plasma and body glucose pools. In humans, insulin was infused (in a computer-controlled fashion) at a low (2.0 mU x kg-1 x min-1) or at a high rate (8.0 mU x kg-1 x min-1) for a 90-min period. At the lower infusion rate the mean values for FCR and FID were 0.071 +/- 0.007 and 0.149 +/- 0.016 g/kg/min, respectively. At the higher infusion rate the corresponding values were 0.077 +/- 0.007 and 0.149 +/- 0.016. Values of FCR (but not of FID) were significantly lower in the insulin-treated group than in the saline-treated group. In rats, insulin was infused at a rate of 1.25 mU x kg-1 x min-1 for a 30-min period. The mean values for FCR and FID were 0.037 +/- 0.004 and 0.087 +/- 0.005 g/kg/min, respectively. Thus, the method described is effective in measuring and comparing FCR and FID in humans and in rats.On Monday I received the above USBFlash drive by Matt. It is a One2Net who 4fefd39f24
-
-
-
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/How to Recite Ratib al Athos Correctly A PDF Download Link for the Dzikir that is Full of Wisdom and Mercy.md b/spaces/gotiQspiryo/whisper-ui/examples/How to Recite Ratib al Athos Correctly A PDF Download Link for the Dzikir that is Full of Wisdom and Mercy.md
deleted file mode 100644
index 156b3bf24888338e87a3fd6d4eeeeb379578274a..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/How to Recite Ratib al Athos Correctly A PDF Download Link for the Dzikir that is Full of Wisdom and Mercy.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
It is clear that reciting dzikir and prayers brings reward, including reciting this Ratib al-Athos. There are also other benefits to reading the ratib composed by al-Habib Umar bin Abdurrahman al-Athos, among them: 1. By the permission of Allah SWT, a longer life. 2. Attaining husnul khatimah (a good ending). 3. Protection over what one owns, both at sea and on land. 4. Always being under Allah's protection, especially from various forms of black magic such as sihir, pelet, gendam, and the like.
-
-
\ No newline at end of file
diff --git a/spaces/gradio/sine_curve/run.py b/spaces/gradio/sine_curve/run.py
deleted file mode 100644
index 4f0fc7ce71a6f1edec2010ab1f65424a1567f009..0000000000000000000000000000000000000000
--- a/spaces/gradio/sine_curve/run.py
+++ /dev/null
@@ -1,33 +0,0 @@
-import math
-import gradio as gr
-import plotly.express as px
-import numpy as np
-
-
-plot_end = 2 * math.pi
-
-
-def get_plot(period=1):
- global plot_end
- x = np.arange(plot_end - 2 * math.pi, plot_end, 0.02)
- y = np.sin(2*math.pi*period * x)
- fig = px.line(x=x, y=y)
- plot_end += 2 * math.pi
- if plot_end > 1000:
- plot_end = 2 * math.pi
- return fig
-
-
-with gr.Blocks() as demo:
- with gr.Row():
- with gr.Column():
- gr.Markdown("Change the value of the slider to automatically update the plot")
- period = gr.Slider(label="Period of plot", value=1, minimum=0, maximum=10, step=1)
-            plot = gr.Plot(label="Plot (updates every second)")
-
- dep = demo.load(get_plot, None, plot, every=1)
- period.change(get_plot, period, plot, every=1, cancels=[dep])
-
-
-if __name__ == "__main__":
- demo.queue().launch()
diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/samples/__init__.py b/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/samples/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/haakohu/deep_privacy2_face/dp2/data/datasets/fdf.py b/spaces/haakohu/deep_privacy2_face/dp2/data/datasets/fdf.py
deleted file mode 100644
index 23f68a52d4fb50143b2ef6720e126991b2981afc..0000000000000000000000000000000000000000
--- a/spaces/haakohu/deep_privacy2_face/dp2/data/datasets/fdf.py
+++ /dev/null
@@ -1,128 +0,0 @@
-import pathlib
-from typing import Tuple
-import numpy as np
-import torch
-try:
- import pyspng
- PYSPNG_IMPORTED = True
-except ImportError:
- PYSPNG_IMPORTED = False
- print("Could not load pyspng. Defaulting to pillow image backend.")
- from PIL import Image
-from tops import logger
-
-
-class FDFDataset:
-
- def __init__(self,
- dirpath,
- imsize: Tuple[int],
- load_keypoints: bool,
- transform):
- dirpath = pathlib.Path(dirpath)
- self.dirpath = dirpath
- self.transform = transform
- self.imsize = imsize[0]
- self.load_keypoints = load_keypoints
- assert self.dirpath.is_dir(),\
- f"Did not find dataset at: {dirpath}"
- image_dir = self.dirpath.joinpath("images", str(self.imsize))
- self.image_paths = list(image_dir.glob("*.png"))
- assert len(self.image_paths) > 0,\
- f"Did not find images in: {image_dir}"
- self.image_paths.sort(key=lambda x: int(x.stem))
- self.landmarks = np.load(self.dirpath.joinpath("landmarks.npy")).reshape(-1, 7, 2).astype(np.float32)
-
- self.bounding_boxes = torch.load(self.dirpath.joinpath("bounding_box", f"{self.imsize}.torch"))
- assert len(self.image_paths) == len(self.bounding_boxes)
- assert len(self.image_paths) == len(self.landmarks)
- logger.log(
- f"Dataset loaded from: {dirpath}. Number of samples:{len(self)}, imsize={imsize}")
-
- def get_mask(self, idx):
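-        # binary mask: 1 everywhere except inside the face bounding box, which is zeroed out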
- mask = torch.ones((1, self.imsize, self.imsize), dtype=torch.bool)
- bounding_box = self.bounding_boxes[idx]
- x0, y0, x1, y1 = bounding_box
- mask[:, y0:y1, x0:x1] = 0
- return mask
-
- def __len__(self):
- return len(self.image_paths)
-
- def __getitem__(self, index):
- impath = self.image_paths[index]
- if PYSPNG_IMPORTED:
- with open(impath, "rb") as fp:
- im = pyspng.load(fp.read())
- else:
- with Image.open(impath) as fp:
- im = np.array(fp)
- im = torch.from_numpy(np.rollaxis(im, -1, 0))
- masks = self.get_mask(index)
- landmark = self.landmarks[index]
- batch = {
- "img": im,
- "mask": masks,
- }
- if self.load_keypoints:
- batch["keypoints"] = landmark
- if self.transform is None:
- return batch
- return self.transform(batch)
-
-
-class FDF256Dataset:
-
- def __init__(self,
- dirpath,
- load_keypoints: bool,
- transform):
- dirpath = pathlib.Path(dirpath)
- self.dirpath = dirpath
- self.transform = transform
- self.load_keypoints = load_keypoints
- assert self.dirpath.is_dir(),\
- f"Did not find dataset at: {dirpath}"
- image_dir = self.dirpath.joinpath("images")
- self.image_paths = list(image_dir.glob("*.png"))
- assert len(self.image_paths) > 0,\
- f"Did not find images in: {image_dir}"
- self.image_paths.sort(key=lambda x: int(x.stem))
- self.landmarks = np.load(self.dirpath.joinpath("landmarks.npy")).reshape(-1, 7, 2).astype(np.float32)
- self.bounding_boxes = torch.from_numpy(np.load(self.dirpath.joinpath("bounding_box.npy")))
- assert len(self.image_paths) == len(self.bounding_boxes)
- assert len(self.image_paths) == len(self.landmarks)
- logger.log(
- f"Dataset loaded from: {dirpath}. Number of samples:{len(self)}")
-
- def get_mask(self, idx):
- mask = torch.ones((1, 256, 256), dtype=torch.bool)
- bounding_box = self.bounding_boxes[idx]
- x0, y0, x1, y1 = bounding_box
- mask[:, y0:y1, x0:x1] = 0
- return mask
-
- def __len__(self):
- return len(self.image_paths)
-
- def __getitem__(self, index):
- impath = self.image_paths[index]
- if PYSPNG_IMPORTED:
- with open(impath, "rb") as fp:
- im = pyspng.load(fp.read())
- else:
- with Image.open(impath) as fp:
- im = np.array(fp)
- im = torch.from_numpy(np.rollaxis(im, -1, 0))
- masks = self.get_mask(index)
- landmark = self.landmarks[index]
- batch = {
- "img": im,
- "mask": masks,
- }
- if self.load_keypoints:
- batch["keypoints"] = landmark
- if self.transform is None:
- return batch
- return self.transform(batch)
diff --git a/spaces/hamelcubsfan/AutoGPT/autogpt/commands/web_requests.py b/spaces/hamelcubsfan/AutoGPT/autogpt/commands/web_requests.py
deleted file mode 100644
index 406338f46fc7b2381e0b1634c628b123ef20b685..0000000000000000000000000000000000000000
--- a/spaces/hamelcubsfan/AutoGPT/autogpt/commands/web_requests.py
+++ /dev/null
@@ -1,190 +0,0 @@
-"""Browse a webpage and summarize it using the LLM model"""
-from __future__ import annotations
-
-from urllib.parse import urljoin, urlparse
-
-import requests
-from bs4 import BeautifulSoup
-from requests import Response
-from requests.compat import urljoin
-
-from autogpt.config import Config
-from autogpt.memory import get_memory
-from autogpt.processing.html import extract_hyperlinks, format_hyperlinks
-
-CFG = Config()
-memory = get_memory(CFG)
-
-session = requests.Session()
-session.headers.update({"User-Agent": CFG.user_agent})
-
-
-def is_valid_url(url: str) -> bool:
- """Check if the URL is valid
-
- Args:
- url (str): The URL to check
-
- Returns:
- bool: True if the URL is valid, False otherwise
- """
- try:
- result = urlparse(url)
- return all([result.scheme, result.netloc])
- except ValueError:
- return False
-
-
-def sanitize_url(url: str) -> str:
- """Sanitize the URL
-
- Args:
- url (str): The URL to sanitize
-
- Returns:
- str: The sanitized URL
- """
- return urljoin(url, urlparse(url).path)
-
-
-def check_local_file_access(url: str) -> bool:
- """Check if the URL is a local file
-
- Args:
- url (str): The URL to check
-
- Returns:
- bool: True if the URL is a local file, False otherwise
- """
- local_prefixes = [
- "file:///",
- "file://localhost/",
- "file://localhost",
- "http://localhost",
- "http://localhost/",
- "https://localhost",
- "https://localhost/",
- "http://2130706433",
- "http://2130706433/",
- "https://2130706433",
- "https://2130706433/",
- "http://127.0.0.1/",
- "http://127.0.0.1",
- "https://127.0.0.1/",
- "https://127.0.0.1",
- "https://0.0.0.0/",
- "https://0.0.0.0",
- "http://0.0.0.0/",
- "http://0.0.0.0",
- "http://0000",
- "http://0000/",
- "https://0000",
- "https://0000/",
- ]
- return any(url.startswith(prefix) for prefix in local_prefixes)
-
-
-def get_response(
- url: str, timeout: int = 10
-) -> tuple[None, str] | tuple[Response, None]:
- """Get the response from a URL
-
- Args:
- url (str): The URL to get the response from
- timeout (int): The timeout for the HTTP request
-
- Returns:
- tuple[None, str] | tuple[Response, None]: The response and error message
-
- Raises:
- ValueError: If the URL is invalid
- requests.exceptions.RequestException: If the HTTP request fails
- """
- try:
- # Restrict access to local files
- if check_local_file_access(url):
- raise ValueError("Access to local files is restricted")
-
- # Most basic check if the URL is valid:
- if not url.startswith("http://") and not url.startswith("https://"):
- raise ValueError("Invalid URL format")
-
- sanitized_url = sanitize_url(url)
-
- response = session.get(sanitized_url, timeout=timeout)
-
- # Check if the response contains an HTTP error
- if response.status_code >= 400:
- return None, f"Error: HTTP {str(response.status_code)} error"
-
- return response, None
- except ValueError as ve:
- # Handle invalid URL format
- return None, f"Error: {str(ve)}"
-
- except requests.exceptions.RequestException as re:
- # Handle exceptions related to the HTTP request
- # (e.g., connection errors, timeouts, etc.)
- return None, f"Error: {str(re)}"
-
-
-def scrape_text(url: str) -> str:
- """Scrape text from a webpage
-
- Args:
- url (str): The URL to scrape text from
-
- Returns:
- str: The scraped text
- """
- response, error_message = get_response(url)
- if error_message:
- return error_message
- if not response:
- return "Error: Could not get response"
-
- soup = BeautifulSoup(response.text, "html.parser")
-
- for script in soup(["script", "style"]):
- script.extract()
-
- text = soup.get_text()
- lines = (line.strip() for line in text.splitlines())
- chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
- text = "\n".join(chunk for chunk in chunks if chunk)
-
- return text
-
-
-def scrape_links(url: str) -> str | list[str]:
- """Scrape links from a webpage
-
- Args:
- url (str): The URL to scrape links from
-
- Returns:
- str | list[str]: The scraped links
- """
- response, error_message = get_response(url)
- if error_message:
- return error_message
- if not response:
- return "Error: Could not get response"
- soup = BeautifulSoup(response.text, "html.parser")
-
- for script in soup(["script", "style"]):
- script.extract()
-
- hyperlinks = extract_hyperlinks(soup, url)
-
- return format_hyperlinks(hyperlinks)
-
-
-def create_message(chunk, question):
- """Create a message for the user to summarize a chunk of text"""
- return {
- "role": "user",
- "content": f'"""{chunk}""" Using the above text, answer the following'
- f' question: "{question}" -- if the question cannot be answered using the'
- " text, summarize the text.",
- }
diff --git a/spaces/hamelcubsfan/AutoGPT/autogpt/memory/base.py b/spaces/hamelcubsfan/AutoGPT/autogpt/memory/base.py
deleted file mode 100644
index 691e2299c4caa5c2e9af5b2436727834f3cc6c67..0000000000000000000000000000000000000000
--- a/spaces/hamelcubsfan/AutoGPT/autogpt/memory/base.py
+++ /dev/null
@@ -1,43 +0,0 @@
-"""Base class for memory providers."""
-import abc
-
-import openai
-
-from autogpt.config import AbstractSingleton, Config
-
-cfg = Config()
-
-
-def get_ada_embedding(text):
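-    # flatten newlines to spaces; OpenAI has historically recommended this for its embedding endpoints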
- text = text.replace("\n", " ")
- if cfg.use_azure:
- return openai.Embedding.create(
- input=[text],
- engine=cfg.get_azure_deployment_id_for_model("text-embedding-ada-002"),
- )["data"][0]["embedding"]
- else:
- return openai.Embedding.create(input=[text], model="text-embedding-ada-002")[
- "data"
- ][0]["embedding"]
-
-
-class MemoryProviderSingleton(AbstractSingleton):
- @abc.abstractmethod
- def add(self, data):
- pass
-
- @abc.abstractmethod
- def get(self, data):
- pass
-
- @abc.abstractmethod
- def clear(self):
- pass
-
- @abc.abstractmethod
- def get_relevant(self, data, num_relevant=5):
- pass
-
- @abc.abstractmethod
- def get_stats(self):
- pass
diff --git a/spaces/hands012/gpt-academic/docs/README.md.Portuguese.md b/spaces/hands012/gpt-academic/docs/README.md.Portuguese.md
deleted file mode 100644
index 816ced1993b05c84ec8a3cd84c42adf1c9757cd2..0000000000000000000000000000000000000000
--- a/spaces/hands012/gpt-academic/docs/README.md.Portuguese.md
+++ /dev/null
@@ -1,320 +0,0 @@
-> **Nota**
->
-> Ao instalar as dependências, por favor, selecione rigorosamente as versões **especificadas** no arquivo requirements.txt.
->
-> `pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/`
->
-
-# Otimização acadêmica GPT (GPT Academic)
-
-**Se você gostou deste projeto, por favor dê um Star. Se você criou atalhos acadêmicos mais úteis ou plugins funcionais, sinta-se livre para abrir uma issue ou pull request. Nós também temos um README em [Inglês|](README_EN.md)[日本語|](README_JP.md)[한국어|](https://github.com/mldljyh/ko_gpt_academic)[Русский|](README_RS.md)[Français](README_FR.md) traduzidos por este próprio projeto.
-Para traduzir este projeto para qualquer idioma com o GPT, leia e execute [`multi_language.py`](multi_language.py) (experimental).
-
-> **Nota**
->
-> 1. Por favor, preste atenção que somente os plugins de funções (botões) com a cor **vermelha** podem ler arquivos. Alguns plugins estão localizados no **menu suspenso** na área de plugins. Além disso, nós damos as boas-vindas com a **maior prioridade** e gerenciamos quaisquer novos plugins PR!
->
-> 2. As funções de cada arquivo neste projeto são detalhadas em [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A), auto-análises do projeto geradas pelo GPT também estão podem ser chamadas a qualquer momento ao clicar nos plugins relacionados. As perguntas frequentes estão resumidas no [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Instruções de Instalação](#installation).
->
-> 3. Este projeto é compatível com e incentiva o uso de modelos de linguagem nacionais, como chatglm e RWKV, Pangolin, etc. Suporta a coexistência de várias chaves de API e pode ser preenchido no arquivo de configuração como `API_KEY="openai-key1,openai-key2,api2d-key3"`. Quando precisar alterar temporariamente o `API_KEY`, basta digitar o `API_KEY` temporário na área de entrada e pressionar Enter para que ele entre em vigor.
-
-
-Feature | Description
---- | ---
-One-click polishing | One-click polishing, one-click search for grammatical errors in papers
-One-click Chinese-English translation | One-click Chinese-English translation
-One-click code explanation | Display, explain and generate code, and add comments to code
-[Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Support for custom shortcut keys
-Modular design | Support for powerful [custom function plugins](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions); plugins support [hot-reload](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
-[Automatic program analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] One click to [understand](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) the source code of this project
-[Program analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] One click to analyze the project tree of other Python/C/C++/Java/Lua/... projects
-Paper reading, paper [translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plugin] One click to interpret a LaTeX/PDF paper and generate an abstract
-Full LaTeX translation and polishing | [Function plugin] One click to translate or polish a LaTeX paper
-Batch comment generation | [Function plugin] One click to generate function comments in batch
-Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Function plugin] Did you see the README in the 5 languages above?
-Chat analysis report | [Function plugin] Automatically generates a summary report after execution
-[Full PDF paper translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plugin] Extracts the title and abstract of a PDF paper and translates the full text (multithreaded)
-arXiv assistant | [Function plugin] Enter an arXiv paper URL to translate the abstract and download the PDF
-Google Scholar integration assistant | [Function plugin] Given any Google Scholar search page URL, let GPT write the [related works](https://www.bilibili.com/video/BV1GP411U7Az/) section
-Internet information aggregation + GPT | [Function plugin] One click to let GPT fetch information from the Internet before answering, so the information never gets outdated
-Formula/image/table display | Shows formulas in both rendered and [TeX] form; supports formula and code highlighting
-Multithreaded plugin support | Supports multithreaded calls to ChatGPT, one click to process [massive amounts of text](https://www.bilibili.com/video/BV1FT411H7c5/) or programs
-Dark gradio theme | Add ```/?__theme=dark``` to the end of the browser URL to switch to the dark theme
-[Multiple LLM models supported](https://www.bilibili.com/video/BV1wT411p7yf), API2D interface supported | Being served simultaneously by GPT-3.5, GPT-4, [THU ChatGLM](https://github.com/THUDM/ChatGLM-6B) and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) must feel great, right?
-More built-in LLM models, support for [huggingface](https://huggingface.co/spaces/qingxu98/gpt-academic) deployment | Added the Newbing (New Bing) interface, support for THU [JittorLLMs](https://github.com/Jittor/JittorLLMs), plus newly added support for LLaMA, RWKV and Pangu Alpha
-More new features shown (image generation, etc.) ... | See the end of this document ...
-
-
-
-- New interface (modify the LAYOUT option in `config.py` to switch between left/right layout and top/bottom layout)
-
-
-
- All buttons are dynamically generated by reading functional.py, and you can add custom functions at will, liberating the clipboard
-
-
-
-
-
-- Proofreading/error correction
-
-
-
-
-
-
-- If the output contains formulas, they are shown in both TeX and rendered form at the same time, which is convenient for copying and reading
-
-
-
-
-
-
-- Don't want to read the project code? Just show the whole project to chatgpt
-
-
-
-
-
-
-- Mix the use of multiple large language models (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
-
-
-
-
-
-
----
-# Installation
-## Installation-Method 1: Run directly (Windows, Linux or MacOS)
-
-1. Download the project
-
-```sh
-git clone https://github.com/binary-husky/chatgpt_academic.git
-cd chatgpt_academic
-```
-
-2. Configure the API KEY
-
-In `config.py`, configure the API KEY and other settings; see [Special Network Environment Settings](https://github.com/binary-husky/gpt_academic/issues/1).
-
-(P.S. When the program runs, it first checks for a private configuration file named `config_private.py` and uses its values to override the settings with the same name in `config.py`. If you understand this configuration-reading logic, we strongly recommend creating a `config_private.py` next to `config.py` and copying the settings from `config.py` into it. `config_private.py` is not tracked by git, which keeps your private information safer. The project also supports configuring most options through environment variables; see the `docker-compose` file for the expected format. Reading priority: `environment variable` > `config_private.py` > `config.py`.)
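A minimal sketch of how such a three-level priority could be resolved is shown below; the helper name `read_config_value` and the use of `importlib` are illustrative assumptions, not the project's actual implementation.

```python
import importlib
import os


def read_config_value(name, default=None):
    """Resolve a setting with priority: environment variable > config_private.py > config.py."""
    # 1. An environment variable with the same name wins if present.
    if name in os.environ:
        return os.environ[name]
    # 2. Otherwise fall back to config_private.py when it exists and defines the name.
    try:
        private_cfg = importlib.import_module("config_private")
        if hasattr(private_cfg, name):
            return getattr(private_cfg, name)
    except ImportError:
        pass
    # 3. Finally fall back to config.py.
    return getattr(importlib.import_module("config"), name, default)


# e.g. api_key = read_config_value("API_KEY")
```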
-
-
-3. Install dependencies
-
-```sh
-# (Option I: for those familiar with python)(python version is 3.9 or above, the newer the better), note: use the official pip source or the Alibaba pip source. Temporary solution for changing source: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
-python -m pip install -r requirements.txt
-
-# (Option II: for those who are unfamiliar with python) use anaconda, the steps are also similar (https://www.bilibili.com/video/BV1rc411W7Dr):
-conda create -n gptac_venv python=3.11 # create anaconda environment
-conda activate gptac_venv # activate anaconda environment
-python -m pip install -r requirements.txt # This step is the same as the pip installation step
-```
-
-If you need to support Tsinghua ChatGLM / Fudan MOSS as the backend, click to expand here
-
-
-[Optional Step] If you need to support Tsinghua ChatGLM / Fudan MOSS as the backend, you need to install more dependencies (prerequisite: familiar with Python + used Pytorch + computer configuration is strong):
-```sh
-# 【Optional Step I】support Tsinghua ChatGLM。Tsinghua ChatGLM Note: If you encounter a "Call ChatGLM fails cannot load ChatGLM parameters normally" error, refer to the following: 1: The default installed is torch+cpu version, and using cuda requires uninstalling torch and reinstalling torch+cuda; 2: If the model cannot be loaded due to insufficient computer configuration, you can modify the model accuracy in request_llm/bridge_chatglm.py and change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
-python -m pip install -r request_llm/requirements_chatglm.txt
-
-# 【Optional Step II】support Fudan MOSS
-python -m pip install -r request_llm/requirements_moss.txt
-git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # Note: When executing this line of code, you must be in the project root path
-
-# 【Optional Step III】Make sure that the AVAIL_LLM_MODELS in the config.py configuration file contains the expected model. Currently, all supported models are as follows (jittorllms series currently only supports docker solutions):
-AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
-```
-
-
-
-
-
-4. Run
-
-```sh
-python main.py
-```
-
-5. Test the function plugins
-```
-- Test function plugin template (it asks GPT to answer what happened in history on this day); you can use this function as a template for implementing more complex functions.
-    Click "[Function plugin template demo] What happened in history today?"
-```
-
-## Installation - Method 2: Using Docker
-
-1. ChatGPT only (recommended for most people)
-
-``` sh
-git clone https://github.com/binary-husky/chatgpt_academic.git  # Download the project
-cd chatgpt_academic                                             # Enter the directory
-nano config.py                                                  # Edit config.py with any text editor to set "Proxy", "API_KEY", "WEB_PORT" (e.g. 50923), etc.
-docker build -t gpt-academic .                                  # Build the image
-
-# (Last step - option 1) In a Linux environment, using `--net=host` is easier and faster
-docker run --rm -it --net=host gpt-academic
-# (Last step - option 2) On macOS/Windows, you can only use the -p option to map the container port (e.g. 50923) to a port on the host
-docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
-```
-
-2. ChatGPT + ChatGLM + MOSS (requires familiarity with Docker)
-
-``` sh
-# Edit docker-compose.yml: remove solutions 1 and 3, keep solution 2, and follow the instructions in the file's comments
-docker-compose up
-```
-
-3. ChatGPT + LLAMA + Pangu + RWKV (requires familiarity with Docker)
-``` sh
-# Edit docker-compose.yml: remove solutions 1 and 2, keep solution 3, and follow the instructions in the file's comments
-docker-compose up
-```
-
-
-## Installation - Method 3: Other deployment methods
-
-1. How to use reverse-proxy URLs / the Microsoft Azure API
-Simply configure API_URL_REDIRECT according to the instructions in `config.py` (a hedged example is sketched after this list).
-
-2. Deploying to a remote cloud server (requires knowledge and experience with cloud servers)
-Please visit the [cloud server remote deployment wiki](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
-
-3. Using WSL2 (Windows Subsystem for Linux)
-Please visit the [WSL2 deployment wiki](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
-
-4. How to run under a sub-path (e.g. `http://localhost/subpath`)
-Please see the [FastAPI run instructions](docs/WithFastapi.md)
-
-5. Run with docker-compose
-Read docker-compose.yml and follow the instructions in it.
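As a rough illustration of item 1 above, a redirect entry in `config_private.py` might look like the snippet below. The dictionary key and value here are assumptions made for the sketch; the authoritative format is described in the comments of `config.py` itself.

```python
# Hypothetical example: forward OpenAI-style requests to a reverse proxy or Azure-compatible endpoint.
API_URL_REDIRECT = {
    "https://api.openai.com/v1/chat/completions": "https://your-proxy.example.com/v1/chat/completions",
}
```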
-
-# Advanced Usage
-## Customize new quick-access buttons / custom function plugins
-
-1. Customizing new quick-access buttons (academic shortcuts)
-Open `core_functional.py` in any text editor, add an entry like the following, and restart the program. (If the button has already been added and is visible, the prefix and suffix support hot modification and take effect without restarting the program.)
-For example:
-```
-"Super Eng:": {
-    # Prefix, added before your input. For example, it can describe your request, such as translation, code explanation, polishing, etc.
-    "Prefix": "Please translate the following content into Chinese and then use a Markdown table to explain the proper nouns in the text: \n \n",
-
-    # Suffix, added after your input. For example, together with the prefix it can wrap your input in quotation marks.
-    "Suffix": "",
-},
-```
-
-
-
-
-2. Custom function plugins
-
-Write powerful function plugins to perform any task you want, even ones you did not think possible.
-Writing and debugging plugins in this project is generally easy; with some basic Python knowledge you can implement your own functions based on the template we provide.
-For more details, see the [function plugin guide](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
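As a purely illustrative sketch (the signature, argument names, and yield protocol below are hypothetical placeholders, not the project's real plugin interface, which is documented in the guide linked above), a custom plugin is conceptually a Python function that receives the user input and pushes its result back into the chat state:

```python
# Hypothetical plugin sketch -- names and calling convention are placeholders, not the real API.
def word_count_plugin(txt, chatbot, history, *args, **kwargs):
    """Report how many words the selected text contains."""
    report = f"The input contains {len(txt.split())} words."
    chatbot.append((txt, report))   # show the question/answer pair in the chat window
    history.extend([txt, report])   # keep it in the conversation history
    yield chatbot, history          # hand the updated state back to the UI (placeholder protocol)
```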
-
----
-# Latest updates
-## New dynamic features
-
-1. Dialogue saving. Call the function plugin "Save current dialogue" to save the current dialogue as a readable and restorable HTML file. Call "Load dialogue history archive" in the drop-down menu of the plugin area to restore a previous conversation. Tip: clicking "Load dialogue history archive" without specifying a file lets you browse the cached HTML history; clicking "Delete all local dialogue history" deletes all cached HTML files.
-
-
-
-
-
-2. Report generation. Most plugins generate a work report after finishing execution.
-
-
-
-
-
-
-3. Modular feature design: simple interfaces that support powerful functionality
-
-
-
-
-
-4. This is an open-source project that can "translate itself".
-
-
-
-
-5. Translating other open-source projects is just as simple.
-
-
-
-
-
-
-
-
-6. Decorative [live2d](https://github.com/fghrsh/live2d_demo) feature (disabled by default; enable it by modifying `config.py`)
-
-
-
-
-7. Support for the MOSS language model
-
-
-
-
-8. Image generation with OpenAI
-
-
-
-
-9. Audio analysis and summarization with OpenAI
-
-
-
-
-10. LaTeX proofreading and error correction.
-
-
-
-
-## Versions:
-- Version 3.5 (Todo): Call all of the project's functions with natural language (high priority)
-- Version 3.4 (Todo): Improve multithreading support for local ChatGLM
-- Version 3.3: + Built-in internet information functions
-- Version 3.2: Function plugins support more parameter interfaces (dialogue saving, interpreting code in arbitrary languages, querying arbitrary LLM combinations at the same time)
-- Version 3.1: Support for querying multiple GPT models simultaneously! Support for api2d and load balancing across multiple API keys
-- Version 3.0: Support for ChatGLM and other small LLMs
-- Version 2.6: Refactored plugin structure, improved interactivity, added more plugins
-- Version 2.5: Self-updating; fixed overly long text and token overflow when summarizing large projects
-- Version 2.4: (1) Added full-text PDF translation; (2) Added the ability to switch the position of the input area; (3) Added a vertical layout option; (4) Optimized multithreaded function plugins.
-- Version 2.3: Improved multithreaded interactivity
-- Version 2.2: Function plugins support hot reloading
-- Version 2.1: Collapsible layout
-- Version 2.0: Introduced modular function plugins
-- Version 1.0: Basic functionality
-
-gpt_academic developers QQ group-2: 610599535
-
-- Known issues
-    - Some browser translation extensions interfere with the front-end of this software
-    - A Gradio version that is too high or too low can cause various errors
-
-## References and Learning
-
-```
-The code references many excellent projects, mainly:
-
-# Project 1: Tsinghua ChatGLM-6B:
-https://github.com/THUDM/ChatGLM-6B
-
-# Project 2: Tsinghua JittorLLMs:
-https://github.com/Jittor/JittorLLMs
-
-# Project 3: Edge-GPT:
-https://github.com/acheong08/EdgeGPT
-
-# Project 4: ChuanhuChatGPT:
-https://github.com/GaiZhenbiao/ChuanhuChatGPT
-
-# Project 5: ChatPaper:
-https://github.com/kaixindelele/ChatPaper
-
-# More:
-https://github.com/gradio-app/gradio
-https://github.com/fghrsh/live2d_demo
-```
\ No newline at end of file
diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tests/test_export_caffe2.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tests/test_export_caffe2.py
deleted file mode 100644
index ad989c4a3d11e6675d26ae2690f06d2ffe30d44c..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tests/test_export_caffe2.py
+++ /dev/null
@@ -1,71 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-# -*- coding: utf-8 -*-
-
-import copy
-import numpy as np
-import os
-import tempfile
-import unittest
-import cv2
-import torch
-from fvcore.common.file_io import PathManager
-
-from detectron2 import model_zoo
-from detectron2.checkpoint import DetectionCheckpointer
-from detectron2.config import get_cfg
-from detectron2.data import DatasetCatalog
-from detectron2.modeling import build_model
-from detectron2.utils.logger import setup_logger
-
-
-@unittest.skipIf(os.environ.get("CIRCLECI"), "Require COCO data and model zoo.")
-class TestCaffe2Export(unittest.TestCase):
- def setUp(self):
- setup_logger()
-
- def _test_model(self, config_path, device="cpu"):
- # requires extra dependencies
- from detectron2.export import Caffe2Model, add_export_config, export_caffe2_model
-
- cfg = get_cfg()
- cfg.merge_from_file(model_zoo.get_config_file(config_path))
- cfg = add_export_config(cfg)
- cfg.MODEL.DEVICE = device
-
- model = build_model(cfg)
- DetectionCheckpointer(model).load(model_zoo.get_checkpoint_url(config_path))
-
- inputs = [{"image": self._get_test_image()}]
- c2_model = export_caffe2_model(cfg, model, copy.deepcopy(inputs))
-
- with tempfile.TemporaryDirectory(prefix="detectron2_unittest") as d:
- c2_model.save_protobuf(d)
- c2_model.save_graph(os.path.join(d, "test.svg"), inputs=copy.deepcopy(inputs))
- c2_model = Caffe2Model.load_protobuf(d)
- c2_model(inputs)[0]["instances"]
-
- def _get_test_image(self):
- try:
- file_name = DatasetCatalog.get("coco_2017_train")[0]["file_name"]
- assert PathManager.exists(file_name)
- except Exception:
- self.skipTest("COCO dataset not available.")
-
- with PathManager.open(file_name, "rb") as f:
- buf = f.read()
- img = cv2.imdecode(np.frombuffer(buf, dtype=np.uint8), cv2.IMREAD_COLOR)
- assert img is not None, file_name
- return torch.from_numpy(img.transpose(2, 0, 1))
-
- def testMaskRCNN(self):
- self._test_model("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
-
- @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available")
- def testMaskRCNNGPU(self):
- self._test_model("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml", device="cuda")
-
- def testRetinaNet(self):
- self._test_model("COCO-Detection/retinanet_R_50_FPN_3x.yaml")
-
- def testPanopticFPN(self):
- self._test_model("COCO-PanopticSegmentation/panoptic_fpn_R_50_3x.yaml")
diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tools/finetune_net.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tools/finetune_net.py
deleted file mode 100644
index 3e521859f70b89da747b324375a5110d8663fdc7..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tools/finetune_net.py
+++ /dev/null
@@ -1,183 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-"""
-Detection Training Script.
-
-This script reads a given config file and runs the training or evaluation.
-It is an entry point that is made to train standard models in detectron2.
-
-In order to let one script support training of many models,
-this script contains logic that is specific to these built-in models and therefore
-may not be suitable for your own project.
-For example, your research project perhaps only needs a single "evaluator".
-
-Therefore, we recommend you use detectron2 as a library and take
-this file as an example of how to use the library.
-You may want to write your own script with your data and other customizations.
-"""
-
-import logging
-import os
-from collections import OrderedDict
-import torch
-
-import detectron2.utils.comm as comm
-from detectron2.checkpoint import DetectionCheckpointer
-from detectron2.config import get_cfg
-from detectron2.data import MetadataCatalog
-from detectron2.engine import DefaultTrainer, default_argument_parser, default_setup, hooks, launch
-from detectron2.evaluation import (
- CityscapesInstanceEvaluator,
- CityscapesSemSegEvaluator,
- COCOEvaluator,
- COCOPanopticEvaluator,
- DatasetEvaluators,
- LVISEvaluator,
- PascalVOCDetectionEvaluator,
- SemSegEvaluator,
- verify_results,
-)
-from detectron2.modeling import GeneralizedRCNNWithTTA
-
-# Register Custom Dataset
-from detectron2.data.datasets import register_coco_instances
-
-register_coco_instances("CIHP_train", {}, "../../data/msrcnn_finetune_annotations/CIHP_train.json",
- "../../data/instance-level_human_parsing/Training/Images")
-register_coco_instances("CIHP_val", {}, "../../data/msrcnn_finetune_annotations/CIHP_val.json",
- "../../data/instance-level_human_parsing/Validation/Images")
-register_coco_instances("demo_train", {}, "../../demo/annotations/demo_train.json",
- "../../demo/img")
-register_coco_instances("demo_val", {}, "../../demo/annotations/demo_val.json",
- "../../demo/img")
-
-
-class Trainer(DefaultTrainer):
- """
- We use the "DefaultTrainer" which contains pre-defined default logic for
- standard training workflow. They may not work for you, especially if you
- are working on a new research project. In that case you can use the cleaner
- "SimpleTrainer", or write your own training loop. You can use
- "tools/plain_train_net.py" as an example.
- """
-
- @classmethod
- def build_evaluator(cls, cfg, dataset_name, output_folder=None):
- """
- Create evaluator(s) for a given dataset.
- This uses the special metadata "evaluator_type" associated with each builtin dataset.
- For your own dataset, you can simply create an evaluator manually in your
- script and do not have to worry about the hacky if-else logic here.
- """
- if output_folder is None:
- output_folder = os.path.join(cfg.OUTPUT_DIR, "inference")
- evaluator_list = []
- evaluator_type = MetadataCatalog.get(dataset_name).evaluator_type
- if evaluator_type in ["sem_seg", "coco_panoptic_seg"]:
- evaluator_list.append(
- SemSegEvaluator(
- dataset_name,
- distributed=True,
- num_classes=cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES,
- ignore_label=cfg.MODEL.SEM_SEG_HEAD.IGNORE_VALUE,
- output_dir=output_folder,
- )
- )
- if evaluator_type in ["coco", "coco_panoptic_seg"]:
- evaluator_list.append(COCOEvaluator(dataset_name, cfg, True, output_folder))
- if evaluator_type == "coco_panoptic_seg":
- evaluator_list.append(COCOPanopticEvaluator(dataset_name, output_folder))
- if evaluator_type == "cityscapes_instance":
- assert (
- torch.cuda.device_count() >= comm.get_rank()
- ), "CityscapesEvaluator currently do not work with multiple machines."
- return CityscapesInstanceEvaluator(dataset_name)
- if evaluator_type == "cityscapes_sem_seg":
- assert (
- torch.cuda.device_count() >= comm.get_rank()
- ), "CityscapesEvaluator currently do not work with multiple machines."
- return CityscapesSemSegEvaluator(dataset_name)
- elif evaluator_type == "pascal_voc":
- return PascalVOCDetectionEvaluator(dataset_name)
- elif evaluator_type == "lvis":
- return LVISEvaluator(dataset_name, cfg, True, output_folder)
- if len(evaluator_list) == 0:
- raise NotImplementedError(
- "no Evaluator for the dataset {} with the type {}".format(
- dataset_name, evaluator_type
- )
- )
- elif len(evaluator_list) == 1:
- return evaluator_list[0]
- return DatasetEvaluators(evaluator_list)
-
- @classmethod
- def test_with_TTA(cls, cfg, model):
- logger = logging.getLogger("detectron2.trainer")
- # In the end of training, run an evaluation with TTA
- # Only support some R-CNN models.
- logger.info("Running inference with test-time augmentation ...")
- model = GeneralizedRCNNWithTTA(cfg, model)
- evaluators = [
- cls.build_evaluator(
- cfg, name, output_folder=os.path.join(cfg.OUTPUT_DIR, "inference_TTA")
- )
- for name in cfg.DATASETS.TEST
- ]
- res = cls.test(cfg, model, evaluators)
- res = OrderedDict({k + "_TTA": v for k, v in res.items()})
- return res
-
-
-def setup(args):
- """
- Create configs and perform basic setups.
- """
- cfg = get_cfg()
- cfg.merge_from_file(args.config_file)
- cfg.merge_from_list(args.opts)
- cfg.freeze()
- default_setup(cfg, args)
- return cfg
-
-
-def main(args):
- cfg = setup(args)
-
- if args.eval_only:
- model = Trainer.build_model(cfg)
- DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load(
- cfg.MODEL.WEIGHTS, resume=args.resume
- )
- res = Trainer.test(cfg, model)
- if cfg.TEST.AUG.ENABLED:
- res.update(Trainer.test_with_TTA(cfg, model))
- if comm.is_main_process():
- verify_results(cfg, res)
- return res
-
- """
- If you'd like to do anything fancier than the standard training logic,
- consider writing your own training loop (see plain_train_net.py) or
- subclassing the trainer.
- """
- trainer = Trainer(cfg)
- trainer.resume_or_load(resume=False)
- if cfg.TEST.AUG.ENABLED:
- trainer.register_hooks(
- [hooks.EvalHook(0, lambda: trainer.test_with_TTA(cfg, trainer.model))]
- )
- return trainer.train()
-
-
-if __name__ == "__main__":
- args = default_argument_parser().parse_args()
- print("Command Line Args:", args)
- launch(
- main,
- args.num_gpus,
- num_machines=args.num_machines,
- machine_rank=args.machine_rank,
- dist_url=args.dist_url,
- args=(args,),
- )
diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/utils/lovasz_softmax.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/utils/lovasz_softmax.py
deleted file mode 100644
index b6e444f684c0d9bda9d7c2d54a4e79fac0ddf081..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/utils/lovasz_softmax.py
+++ /dev/null
@@ -1,279 +0,0 @@
-#!/usr/bin/env python
-# -*- encoding: utf-8 -*-
-
-"""
-@Author : Peike Li
-@Contact : peike.li@yahoo.com
-@File : lovasz_softmax.py
-@Time : 8/30/19 7:12 PM
-@Desc : Lovasz-Softmax and Jaccard hinge loss in PyTorch
- Maxim Berman 2018 ESAT-PSI KU Leuven (MIT License)
-@License : This source code is licensed under the license found in the
- LICENSE file in the root directory of this source tree.
-"""
-
-from __future__ import print_function, division
-
-import torch
-from torch.autograd import Variable
-import torch.nn.functional as F
-import numpy as np
-from torch import nn
-
-try:
- from itertools import ifilterfalse
-except ImportError: # py3k
- from itertools import filterfalse as ifilterfalse
-
-
-def lovasz_grad(gt_sorted):
- """
- Computes gradient of the Lovasz extension w.r.t sorted errors
- See Alg. 1 in paper
- """
- p = len(gt_sorted)
- gts = gt_sorted.sum()
- intersection = gts - gt_sorted.float().cumsum(0)
- union = gts + (1 - gt_sorted).float().cumsum(0)
- jaccard = 1. - intersection / union
- if p > 1: # cover 1-pixel case
- jaccard[1:p] = jaccard[1:p] - jaccard[0:-1]
- return jaccard
-
-
-def iou_binary(preds, labels, EMPTY=1., ignore=None, per_image=True):
- """
- IoU for foreground class
- binary: 1 foreground, 0 background
- """
- if not per_image:
- preds, labels = (preds,), (labels,)
- ious = []
- for pred, label in zip(preds, labels):
- intersection = ((label == 1) & (pred == 1)).sum()
- union = ((label == 1) | ((pred == 1) & (label != ignore))).sum()
- if not union:
- iou = EMPTY
- else:
- iou = float(intersection) / float(union)
- ious.append(iou)
-    iou = mean(ious)    # mean across images if per_image
- return 100 * iou
-
-
-def iou(preds, labels, C, EMPTY=1., ignore=None, per_image=False):
- """
- Array of IoU for each (non ignored) class
- """
- if not per_image:
- preds, labels = (preds,), (labels,)
- ious = []
- for pred, label in zip(preds, labels):
- iou = []
- for i in range(C):
- if i != ignore: # The ignored label is sometimes among predicted classes (ENet - CityScapes)
- intersection = ((label == i) & (pred == i)).sum()
- union = ((label == i) | ((pred == i) & (label != ignore))).sum()
- if not union:
- iou.append(EMPTY)
- else:
- iou.append(float(intersection) / float(union))
- ious.append(iou)
-    ious = [mean(iou) for iou in zip(*ious)]  # mean across images if per_image
- return 100 * np.array(ious)
-
-
-# --------------------------- BINARY LOSSES ---------------------------
-
-
-def lovasz_hinge(logits, labels, per_image=True, ignore=None):
- """
- Binary Lovasz hinge loss
- logits: [B, H, W] Variable, logits at each pixel (between -\infty and +\infty)
- labels: [B, H, W] Tensor, binary ground truth masks (0 or 1)
- per_image: compute the loss per image instead of per batch
- ignore: void class id
- """
- if per_image:
- loss = mean(lovasz_hinge_flat(*flatten_binary_scores(log.unsqueeze(0), lab.unsqueeze(0), ignore))
- for log, lab in zip(logits, labels))
- else:
- loss = lovasz_hinge_flat(*flatten_binary_scores(logits, labels, ignore))
- return loss
-
-
-def lovasz_hinge_flat(logits, labels):
- """
- Binary Lovasz hinge loss
- logits: [P] Variable, logits at each prediction (between -\infty and +\infty)
- labels: [P] Tensor, binary ground truth labels (0 or 1)
- ignore: label to ignore
- """
- if len(labels) == 0:
- # only void pixels, the gradients should be 0
- return logits.sum() * 0.
- signs = 2. * labels.float() - 1.
- errors = (1. - logits * Variable(signs))
- errors_sorted, perm = torch.sort(errors, dim=0, descending=True)
- perm = perm.data
- gt_sorted = labels[perm]
- grad = lovasz_grad(gt_sorted)
- loss = torch.dot(F.relu(errors_sorted), Variable(grad))
- return loss
-
-
-def flatten_binary_scores(scores, labels, ignore=None):
- """
- Flattens predictions in the batch (binary case)
- Remove labels equal to 'ignore'
- """
- scores = scores.view(-1)
- labels = labels.view(-1)
- if ignore is None:
- return scores, labels
- valid = (labels != ignore)
- vscores = scores[valid]
- vlabels = labels[valid]
- return vscores, vlabels
-
-
-class StableBCELoss(torch.nn.modules.Module):
- def __init__(self):
- super(StableBCELoss, self).__init__()
-
- def forward(self, input, target):
- neg_abs = - input.abs()
- loss = input.clamp(min=0) - input * target + (1 + neg_abs.exp()).log()
- return loss.mean()
-
-
-def binary_xloss(logits, labels, ignore=None):
- """
- Binary Cross entropy loss
- logits: [B, H, W] Variable, logits at each pixel (between -\infty and +\infty)
- labels: [B, H, W] Tensor, binary ground truth masks (0 or 1)
- ignore: void class id
- """
- logits, labels = flatten_binary_scores(logits, labels, ignore)
- loss = StableBCELoss()(logits, Variable(labels.float()))
- return loss
-
-
-# --------------------------- MULTICLASS LOSSES ---------------------------
-
-
-def lovasz_softmax(probas, labels, classes='present', per_image=False, ignore=255, weighted=None):
- """
- Multi-class Lovasz-Softmax loss
- probas: [B, C, H, W] Variable, class probabilities at each prediction (between 0 and 1).
- Interpreted as binary (sigmoid) output with outputs of size [B, H, W].
- labels: [B, H, W] Tensor, ground truth labels (between 0 and C - 1)
- classes: 'all' for all, 'present' for classes present in labels, or a list of classes to average.
- per_image: compute the loss per image instead of per batch
- ignore: void class labels
- """
- if per_image:
- loss = mean(lovasz_softmax_flat(*flatten_probas(prob.unsqueeze(0), lab.unsqueeze(0), ignore), classes=classes, weighted=weighted)
- for prob, lab in zip(probas, labels))
- else:
- loss = lovasz_softmax_flat(*flatten_probas(probas, labels, ignore), classes=classes, weighted=weighted )
- return loss
-
-
-def lovasz_softmax_flat(probas, labels, classes='present', weighted=None):
- """
- Multi-class Lovasz-Softmax loss
- probas: [P, C] Variable, class probabilities at each prediction (between 0 and 1)
- labels: [P] Tensor, ground truth labels (between 0 and C - 1)
- classes: 'all' for all, 'present' for classes present in labels, or a list of classes to average.
- """
- if probas.numel() == 0:
- # only void pixels, the gradients should be 0
- return probas * 0.
- C = probas.size(1)
- losses = []
- class_to_sum = list(range(C)) if classes in ['all', 'present'] else classes
- for c in class_to_sum:
- fg = (labels == c).float() # foreground for class c
-        if (classes == 'present' and fg.sum() == 0):
- continue
- if C == 1:
- if len(classes) > 1:
- raise ValueError('Sigmoid output possible only with 1 class')
- class_pred = probas[:, 0]
- else:
- class_pred = probas[:, c]
- errors = (Variable(fg) - class_pred).abs()
- errors_sorted, perm = torch.sort(errors, 0, descending=True)
- perm = perm.data
- fg_sorted = fg[perm]
- if weighted is not None:
- losses.append(weighted[c]*torch.dot(errors_sorted, Variable(lovasz_grad(fg_sorted))))
- else:
- losses.append(torch.dot(errors_sorted, Variable(lovasz_grad(fg_sorted))))
- return mean(losses)
-
-
-def flatten_probas(probas, labels, ignore=None):
- """
- Flattens predictions in the batch
- """
- if probas.dim() == 3:
- # assumes output of a sigmoid layer
- B, H, W = probas.size()
- probas = probas.view(B, 1, H, W)
- B, C, H, W = probas.size()
- probas = probas.permute(0, 2, 3, 1).contiguous().view(-1, C) # B * H * W, C = P, C
- labels = labels.view(-1)
- if ignore is None:
- return probas, labels
- valid = (labels != ignore)
- vprobas = probas[valid.nonzero().squeeze()]
- vlabels = labels[valid]
- return vprobas, vlabels
-
-
-def xloss(logits, labels, ignore=None):
- """
- Cross entropy loss
- """
- return F.cross_entropy(logits, Variable(labels), ignore_index=255)
-
-
-# --------------------------- HELPER FUNCTIONS ---------------------------
-def isnan(x):
- return x != x
-
-
-def mean(l, ignore_nan=False, empty=0):
- """
- nanmean compatible with generators.
- """
- l = iter(l)
- if ignore_nan:
- l = ifilterfalse(isnan, l)
- try:
- n = 1
- acc = next(l)
- except StopIteration:
- if empty == 'raise':
- raise ValueError('Empty mean')
- return empty
- for n, v in enumerate(l, 2):
- acc += v
- if n == 1:
- return acc
- return acc / n
-
-# --------------------------- Class ---------------------------
-class LovaszSoftmax(nn.Module):
- def __init__(self, per_image=False, ignore_index=255, weighted=None):
- super(LovaszSoftmax, self).__init__()
- self.lovasz_softmax = lovasz_softmax
- self.per_image = per_image
- self.ignore_index=ignore_index
- self.weighted = weighted
-
- def forward(self, pred, label):
- pred = F.softmax(pred, dim=1)
- return self.lovasz_softmax(pred, label, per_image=self.per_image, ignore=self.ignore_index, weighted=self.weighted)
\ No newline at end of file
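A short usage sketch for the `LovaszSoftmax` module defined above; the tensor shapes and the class count of 4 are arbitrary choices for the example.

```python
import torch

criterion = LovaszSoftmax(per_image=False, ignore_index=255)
logits = torch.randn(2, 4, 64, 64, requires_grad=True)  # [B, C, H, W] raw network outputs
labels = torch.randint(0, 4, (2, 64, 64))                # [B, H, W] ground-truth class ids
loss = criterion(logits, labels)                         # softmax is applied inside forward()
loss.backward()
print(float(loss))
```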
diff --git a/spaces/hasibzunair/fifa-tryon-demo/gradio/demo.py b/spaces/hasibzunair/fifa-tryon-demo/gradio/demo.py
deleted file mode 100644
index 2ad81ef24cdb3e645331aacae729fd20cec78082..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/gradio/demo.py
+++ /dev/null
@@ -1,37 +0,0 @@
-import cv2
-import paddlehub as hub
-import gradio as gr
-import torch
-
-# Images
-torch.hub.download_url_to_file('https://cdn.pixabay.com/photo/2018/08/12/16/59/ara-3601194_1280.jpg', 'parrot.jpg')
-torch.hub.download_url_to_file('https://cdn.pixabay.com/photo/2016/10/21/14/46/fox-1758183_1280.jpg', 'fox.jpg')
-
-model = hub.Module(name='U2Net')
-
-def infer(img):
- result = model.Segmentation(
- images=[cv2.imread(img.name)],
- paths=None,
- batch_size=1,
- input_size=320,
- output_dir='output',
- visualization=True)
- return result[0]['front'][:,:,::-1], result[0]['mask']
-
-inputs = gr.inputs.Image(type='file', label="Original Image")
-outputs = [
- gr.outputs.Image(type="numpy",label="Front"),
- gr.outputs.Image(type="numpy",label="Mask")
- ]
-
-title = "U^2-Net"
-description = "demo for U^2-Net. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below."
-article = "
"
-
-examples = [
- ['fox.jpg'],
- ['parrot.jpg']
-]
-
-gr.Interface(infer, inputs, outputs, title=title, description=description, article=article, examples=examples).launch()
\ No newline at end of file
diff --git a/spaces/hf4all/web-ui/_next/static/css/60ec184094fe2bcc.css b/spaces/hf4all/web-ui/_next/static/css/60ec184094fe2bcc.css
deleted file mode 100644
index 67dcb7698c21d38c409d5fc739bba2c8e20aa370..0000000000000000000000000000000000000000
--- a/spaces/hf4all/web-ui/_next/static/css/60ec184094fe2bcc.css
+++ /dev/null
@@ -1 +0,0 @@
-@media (prefers-color-scheme:dark){.markdown-body{color-scheme:dark;--color-prettylights-syntax-comment:#8b949e;--color-prettylights-syntax-constant:#79c0ff;--color-prettylights-syntax-entity:#d2a8ff;--color-prettylights-syntax-storage-modifier-import:#c9d1d9;--color-prettylights-syntax-entity-tag:#7ee787;--color-prettylights-syntax-keyword:#ff7b72;--color-prettylights-syntax-string:#a5d6ff;--color-prettylights-syntax-variable:#ffa657;--color-prettylights-syntax-brackethighlighter-unmatched:#f85149;--color-prettylights-syntax-invalid-illegal-text:#f0f6fc;--color-prettylights-syntax-invalid-illegal-bg:#8e1519;--color-prettylights-syntax-carriage-return-text:#f0f6fc;--color-prettylights-syntax-carriage-return-bg:#b62324;--color-prettylights-syntax-string-regexp:#7ee787;--color-prettylights-syntax-markup-list:#f2cc60;--color-prettylights-syntax-markup-heading:#1f6feb;--color-prettylights-syntax-markup-italic:#c9d1d9;--color-prettylights-syntax-markup-bold:#c9d1d9;--color-prettylights-syntax-markup-deleted-text:#ffdcd7;--color-prettylights-syntax-markup-deleted-bg:#67060c;--color-prettylights-syntax-markup-inserted-text:#aff5b4;--color-prettylights-syntax-markup-inserted-bg:#033a16;--color-prettylights-syntax-markup-changed-text:#ffdfb6;--color-prettylights-syntax-markup-changed-bg:#5a1e02;--color-prettylights-syntax-markup-ignored-text:#c9d1d9;--color-prettylights-syntax-markup-ignored-bg:#1158c7;--color-prettylights-syntax-meta-diff-range:#d2a8ff;--color-prettylights-syntax-brackethighlighter-angle:#8b949e;--color-prettylights-syntax-sublimelinter-gutter-mark:#484f58;--color-prettylights-syntax-constant-other-reference-link:#a5d6ff;--color-fg-default:#c9d1d9;--color-fg-muted:#8b949e;--color-fg-subtle:#6e7681;--color-canvas-default:#0d1117;--color-canvas-subtle:#161b22;--color-border-default:#30363d;--color-border-muted:#21262d;--color-neutral-muted:hsla(215,8%,47%,.4);--color-accent-fg:#58a6ff;--color-accent-emphasis:#1f6feb;--color-attention-subtle:rgba(187,128,9,.15);--color-danger-fg:#f85149}}@media 
(prefers-color-scheme:light){.markdown-body{color-scheme:light;--color-prettylights-syntax-comment:#6e7781;--color-prettylights-syntax-constant:#0550ae;--color-prettylights-syntax-entity:#8250df;--color-prettylights-syntax-storage-modifier-import:#24292f;--color-prettylights-syntax-entity-tag:#116329;--color-prettylights-syntax-keyword:#cf222e;--color-prettylights-syntax-string:#0a3069;--color-prettylights-syntax-variable:#953800;--color-prettylights-syntax-brackethighlighter-unmatched:#82071e;--color-prettylights-syntax-invalid-illegal-text:#f6f8fa;--color-prettylights-syntax-invalid-illegal-bg:#82071e;--color-prettylights-syntax-carriage-return-text:#f6f8fa;--color-prettylights-syntax-carriage-return-bg:#cf222e;--color-prettylights-syntax-string-regexp:#116329;--color-prettylights-syntax-markup-list:#3b2300;--color-prettylights-syntax-markup-heading:#0550ae;--color-prettylights-syntax-markup-italic:#24292f;--color-prettylights-syntax-markup-bold:#24292f;--color-prettylights-syntax-markup-deleted-text:#82071e;--color-prettylights-syntax-markup-deleted-bg:#ffebe9;--color-prettylights-syntax-markup-inserted-text:#116329;--color-prettylights-syntax-markup-inserted-bg:#dafbe1;--color-prettylights-syntax-markup-changed-text:#953800;--color-prettylights-syntax-markup-changed-bg:#ffd8b5;--color-prettylights-syntax-markup-ignored-text:#eaeef2;--color-prettylights-syntax-markup-ignored-bg:#0550ae;--color-prettylights-syntax-meta-diff-range:#8250df;--color-prettylights-syntax-brackethighlighter-angle:#57606a;--color-prettylights-syntax-sublimelinter-gutter-mark:#8c959f;--color-prettylights-syntax-constant-other-reference-link:#0a3069;--color-fg-default:#24292f;--color-fg-muted:#57606a;--color-fg-subtle:#6e7781;--color-canvas-default:#fff;--color-canvas-subtle:#f6f8fa;--color-border-default:#d0d7de;--color-border-muted:#d8dee4;--color-neutral-muted:rgba(175,184,193,.2);--color-accent-fg:#0969da;--color-accent-emphasis:#0969da;--color-attention-subtle:#fff8c5;--color-danger-fg:#cf222e}}.markdown-body{-ms-text-size-adjust:100%;-webkit-text-size-adjust:100%;margin:0;color:var(--color-fg-default);background-color:var(--color-canvas-default);font-family:-apple-system,BlinkMacSystemFont,Segoe UI,Noto Sans,Helvetica,Arial,sans-serif,Apple Color Emoji,Segoe UI Emoji;font-size:16px;line-height:1.5;word-wrap:break-word}.markdown-body h1:hover .anchor .octicon-link:before,.markdown-body h2:hover .anchor .octicon-link:before,.markdown-body h3:hover .anchor .octicon-link:before,.markdown-body h4:hover .anchor .octicon-link:before,.markdown-body h5:hover .anchor .octicon-link:before,.markdown-body h6:hover .anchor .octicon-link:before{width:16px;height:16px;content:" ";display:inline-block;background-color:currentColor;-webkit-mask-image:url("data:image/svg+xml,");mask-image:url("data:image/svg+xml,")}.markdown-body details,.markdown-body figcaption,.markdown-body figure{display:block}.markdown-body summary{display:list-item}.markdown-body [hidden]{display:none!important}.markdown-body a{background-color:transparent;color:var(--color-accent-fg);text-decoration:none}.markdown-body abbr[title]{border-bottom:none;-webkit-text-decoration:underline dotted;text-decoration:underline dotted}.markdown-body b,.markdown-body strong{font-weight:var(--base-text-weight-semibold,600)}.markdown-body dfn{font-style:italic}.markdown-body h1{margin:.67em 0;font-weight:var(--base-text-weight-semibold,600);padding-bottom:.3em;font-size:2em;border-bottom:1px solid var(--color-border-muted)}.markdown-body 
mark{background-color:var(--color-attention-subtle);color:var(--color-fg-default)}.markdown-body small{font-size:90%}.markdown-body sub,.markdown-body sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}.markdown-body sub{bottom:-.25em}.markdown-body sup{top:-.5em}.markdown-body img{border-style:none;max-width:100%;box-sizing:content-box;background-color:var(--color-canvas-default)}.markdown-body code,.markdown-body kbd,.markdown-body pre,.markdown-body samp{font-family:monospace;font-size:1em}.markdown-body figure{margin:1em 40px}.markdown-body hr{box-sizing:content-box;overflow:hidden;background:transparent;height:.25em;padding:0;margin:24px 0;background-color:var(--color-border-default);border:0}.markdown-body input{font:inherit;margin:0;overflow:visible;font-family:inherit;font-size:inherit;line-height:inherit}.markdown-body [type=button],.markdown-body [type=reset],.markdown-body [type=submit]{-webkit-appearance:button}.markdown-body [type=checkbox],.markdown-body [type=radio]{box-sizing:border-box;padding:0}.markdown-body [type=number]::-webkit-inner-spin-button,.markdown-body [type=number]::-webkit-outer-spin-button{height:auto}.markdown-body [type=search]::-webkit-search-cancel-button,.markdown-body [type=search]::-webkit-search-decoration{-webkit-appearance:none}.markdown-body ::-webkit-input-placeholder{color:inherit;opacity:.54}.markdown-body ::-webkit-file-upload-button{-webkit-appearance:button;font:inherit}.markdown-body a:hover{text-decoration:underline}.markdown-body ::-moz-placeholder{color:var(--color-fg-subtle);opacity:1}.markdown-body ::placeholder{color:var(--color-fg-subtle);opacity:1}.markdown-body hr:after,.markdown-body hr:before{display:table;content:""}.markdown-body hr:after{clear:both}.markdown-body table{border-spacing:0;border-collapse:collapse;display:block;width:-moz-max-content;width:max-content;max-width:100%;overflow:auto}.markdown-body td,.markdown-body th{padding:0}.markdown-body details summary{cursor:pointer}.markdown-body details:not([open])>:not(summary){display:none!important}.markdown-body [role=button]:focus,.markdown-body a:focus,.markdown-body input[type=checkbox]:focus,.markdown-body input[type=radio]:focus{outline:2px solid var(--color-accent-fg);outline-offset:-2px;box-shadow:none}.markdown-body [role=button]:focus:not(:focus-visible),.markdown-body a:focus:not(:focus-visible),.markdown-body input[type=checkbox]:focus:not(:focus-visible),.markdown-body input[type=radio]:focus:not(:focus-visible){outline:1px solid transparent}.markdown-body [role=button]:focus-visible,.markdown-body a:focus-visible,.markdown-body input[type=checkbox]:focus-visible,.markdown-body input[type=radio]:focus-visible{outline:2px solid var(--color-accent-fg);outline-offset:-2px;box-shadow:none}.markdown-body a:not([class]):focus,.markdown-body a:not([class]):focus-visible,.markdown-body input[type=checkbox]:focus,.markdown-body input[type=checkbox]:focus-visible,.markdown-body input[type=radio]:focus,.markdown-body input[type=radio]:focus-visible{outline-offset:0}.markdown-body kbd{display:inline-block;padding:3px 5px;font:11px ui-monospace,SFMono-Regular,SF Mono,Menlo,Consolas,Liberation Mono,monospace;line-height:10px;color:var(--color-fg-default);vertical-align:middle;background-color:var(--color-canvas-subtle);border-bottom-color:var(--color-neutral-muted);border:1px solid var(--color-neutral-muted);border-radius:6px;box-shadow:inset 0 -1px 0 var(--color-neutral-muted)}.markdown-body h1,.markdown-body h2,.markdown-body h3,.markdown-body 
h4,.markdown-body h5,.markdown-body h6{margin-top:24px;margin-bottom:16px;font-weight:var(--base-text-weight-semibold,600);line-height:1.25}.markdown-body h2{padding-bottom:.3em;font-size:1.5em;border-bottom:1px solid var(--color-border-muted)}.markdown-body h2,.markdown-body h3{font-weight:var(--base-text-weight-semibold,600)}.markdown-body h3{font-size:1.25em}.markdown-body h4{font-size:1em}.markdown-body h4,.markdown-body h5{font-weight:var(--base-text-weight-semibold,600)}.markdown-body h5{font-size:.875em}.markdown-body h6{font-weight:var(--base-text-weight-semibold,600);font-size:.85em;color:var(--color-fg-muted)}.markdown-body p{margin-top:0;margin-bottom:10px}.markdown-body blockquote{margin:0;padding:0 1em;color:var(--color-fg-muted);border-left:.25em solid var(--color-border-default)}.markdown-body ol,.markdown-body ul{margin-top:0;margin-bottom:0;padding-left:2em}.markdown-body ol ol,.markdown-body ul ol{list-style-type:lower-roman}.markdown-body ol ol ol,.markdown-body ol ul ol,.markdown-body ul ol ol,.markdown-body ul ul ol{list-style-type:lower-alpha}.markdown-body dd{margin-left:0}.markdown-body code,.markdown-body pre,.markdown-body samp,.markdown-body tt{font-family:ui-monospace,SFMono-Regular,SF Mono,Menlo,Consolas,Liberation Mono,monospace;font-size:12px}.markdown-body pre{margin-top:0;margin-bottom:0;word-wrap:normal}.markdown-body .octicon{display:inline-block;overflow:visible!important;vertical-align:text-bottom;fill:currentColor}.markdown-body input::-webkit-inner-spin-button,.markdown-body input::-webkit-outer-spin-button{margin:0;-webkit-appearance:none;appearance:none}.markdown-body:after,.markdown-body:before{display:table;content:""}.markdown-body:after{clear:both}.markdown-body>:first-child{margin-top:0!important}.markdown-body>:last-child{margin-bottom:0!important}.markdown-body a:not([href]){color:inherit;text-decoration:none}.markdown-body .absent{color:var(--color-danger-fg)}.markdown-body .anchor{float:left;padding-right:4px;margin-left:-20px;line-height:1}.markdown-body .anchor:focus{outline:none}.markdown-body blockquote,.markdown-body details,.markdown-body dl,.markdown-body ol,.markdown-body p,.markdown-body pre,.markdown-body table,.markdown-body ul{margin-top:0;margin-bottom:16px}.markdown-body blockquote>:first-child{margin-top:0}.markdown-body blockquote>:last-child{margin-bottom:0}.markdown-body h1 .octicon-link,.markdown-body h2 .octicon-link,.markdown-body h3 .octicon-link,.markdown-body h4 .octicon-link,.markdown-body h5 .octicon-link,.markdown-body h6 .octicon-link{color:var(--color-fg-default);vertical-align:middle;visibility:hidden}.markdown-body h1:hover .anchor,.markdown-body h2:hover .anchor,.markdown-body h3:hover .anchor,.markdown-body h4:hover .anchor,.markdown-body h5:hover .anchor,.markdown-body h6:hover .anchor{text-decoration:none}.markdown-body h1:hover .anchor .octicon-link,.markdown-body h2:hover .anchor .octicon-link,.markdown-body h3:hover .anchor .octicon-link,.markdown-body h4:hover .anchor .octicon-link,.markdown-body h5:hover .anchor .octicon-link,.markdown-body h6:hover .anchor .octicon-link{visibility:visible}.markdown-body h1 code,.markdown-body h1 tt,.markdown-body h2 code,.markdown-body h2 tt,.markdown-body h3 code,.markdown-body h3 tt,.markdown-body h4 code,.markdown-body h4 tt,.markdown-body h5 code,.markdown-body h5 tt,.markdown-body h6 code,.markdown-body h6 tt{padding:0 .2em;font-size:inherit}.markdown-body summary h1,.markdown-body summary h2,.markdown-body summary h3,.markdown-body summary h4,.markdown-body 
summary h5,.markdown-body summary h6{display:inline-block}.markdown-body summary h1 .anchor,.markdown-body summary h2 .anchor,.markdown-body summary h3 .anchor,.markdown-body summary h4 .anchor,.markdown-body summary h5 .anchor,.markdown-body summary h6 .anchor{margin-left:-40px}.markdown-body summary h1,.markdown-body summary h2{padding-bottom:0;border-bottom:0}.markdown-body ol.no-list,.markdown-body ul.no-list{padding:0;list-style-type:none}.markdown-body ol[type=a]{list-style-type:lower-alpha}.markdown-body ol[type=A]{list-style-type:upper-alpha}.markdown-body ol[type=i]{list-style-type:lower-roman}.markdown-body ol[type=I]{list-style-type:upper-roman}.markdown-body div>ol:not([type]),.markdown-body ol[type="1"]{list-style-type:decimal}.markdown-body ol ol,.markdown-body ol ul,.markdown-body ul ol,.markdown-body ul ul{margin-top:0;margin-bottom:0}.markdown-body li>p{margin-top:16px}.markdown-body li+li{margin-top:.25em}.markdown-body dl{padding:0}.markdown-body dl dt{padding:0;margin-top:16px;font-size:1em;font-style:italic;font-weight:var(--base-text-weight-semibold,600)}.markdown-body dl dd{padding:0 16px;margin-bottom:16px}.markdown-body table th{font-weight:var(--base-text-weight-semibold,600)}.markdown-body table td,.markdown-body table th{padding:6px 13px;border:1px solid var(--color-border-default)}.markdown-body table tr{background-color:var(--color-canvas-default);border-top:1px solid var(--color-border-muted)}.markdown-body table tr:nth-child(2n){background-color:var(--color-canvas-subtle)}.markdown-body table img{background-color:transparent}.markdown-body img[align=right]{padding-left:20px}.markdown-body img[align=left]{padding-right:20px}.markdown-body .emoji{max-width:none;vertical-align:text-top;background-color:transparent}.markdown-body span.frame{display:block;overflow:hidden}.markdown-body span.frame>span{display:block;float:left;width:auto;padding:7px;margin:13px 0 0;overflow:hidden;border:1px solid var(--color-border-default)}.markdown-body span.frame span img{display:block;float:left}.markdown-body span.frame span span{display:block;padding:5px 0 0;clear:both;color:var(--color-fg-default)}.markdown-body span.align-center{display:block;overflow:hidden;clear:both}.markdown-body span.align-center>span{display:block;margin:13px auto 0;overflow:hidden;text-align:center}.markdown-body span.align-center span img{margin:0 auto;text-align:center}.markdown-body span.align-right{display:block;overflow:hidden;clear:both}.markdown-body span.align-right>span{display:block;margin:13px 0 0;overflow:hidden;text-align:right}.markdown-body span.align-right span img{margin:0;text-align:right}.markdown-body span.float-left{display:block;float:left;margin-right:13px;overflow:hidden}.markdown-body span.float-left span{margin:13px 0 0}.markdown-body span.float-right{display:block;float:right;margin-left:13px;overflow:hidden}.markdown-body span.float-right>span{display:block;margin:13px auto 0;overflow:hidden;text-align:right}.markdown-body code,.markdown-body tt{padding:.2em .4em;margin:0;font-size:85%;white-space:break-spaces;background-color:var(--color-neutral-muted);border-radius:6px}.markdown-body code br,.markdown-body tt br{display:none}.markdown-body del code{text-decoration:inherit}.markdown-body samp{font-size:85%}.markdown-body pre code{font-size:100%}.markdown-body pre>code{padding:0;margin:0;word-break:normal;white-space:pre;background:transparent;border:0}.markdown-body .highlight{margin-bottom:16px}.markdown-body .highlight 
pre{margin-bottom:0;word-break:normal}.markdown-body .highlight pre,.markdown-body pre{padding:16px;overflow:auto;font-size:85%;line-height:1.45;background-color:var(--color-canvas-subtle);border-radius:6px}.markdown-body pre code,.markdown-body pre tt{display:inline;max-width:auto;padding:0;margin:0;overflow:visible;line-height:inherit;word-wrap:normal;background-color:transparent;border:0}.markdown-body .csv-data td,.markdown-body .csv-data th{padding:5px;overflow:hidden;font-size:12px;line-height:1;text-align:left;white-space:nowrap}.markdown-body .csv-data .blob-num{padding:10px 8px 9px;text-align:right;background:var(--color-canvas-default);border:0}.markdown-body .csv-data tr{border-top:0}.markdown-body .csv-data th{font-weight:var(--base-text-weight-semibold,600);background:var(--color-canvas-subtle);border-top:0}.markdown-body [data-footnote-ref]:before{content:"["}.markdown-body [data-footnote-ref]:after{content:"]"}.markdown-body .footnotes{font-size:12px;color:var(--color-fg-muted);border-top:1px solid var(--color-border-default)}.markdown-body .footnotes ol{padding-left:16px}.markdown-body .footnotes ol ul{display:inline-block;padding-left:16px;margin-top:16px}.markdown-body .footnotes li{position:relative}.markdown-body .footnotes li:target:before{position:absolute;top:-8px;right:-8px;bottom:-8px;left:-24px;pointer-events:none;content:"";border:2px solid var(--color-accent-emphasis);border-radius:6px}.markdown-body .footnotes li:target{color:var(--color-fg-default)}.markdown-body .footnotes .data-footnote-backref g-emoji{font-family:monospace}.markdown-body .pl-c{color:var(--color-prettylights-syntax-comment)}.markdown-body .pl-c1,.markdown-body .pl-s .pl-v{color:var(--color-prettylights-syntax-constant)}.markdown-body .pl-e,.markdown-body .pl-en{color:var(--color-prettylights-syntax-entity)}.markdown-body .pl-s .pl-s1,.markdown-body .pl-smi{color:var(--color-prettylights-syntax-storage-modifier-import)}.markdown-body .pl-ent{color:var(--color-prettylights-syntax-entity-tag)}.markdown-body .pl-k{color:var(--color-prettylights-syntax-keyword)}.markdown-body .pl-pds,.markdown-body .pl-s,.markdown-body .pl-s .pl-pse .pl-s1,.markdown-body .pl-sr,.markdown-body .pl-sr .pl-cce,.markdown-body .pl-sr .pl-sra,.markdown-body .pl-sr .pl-sre{color:var(--color-prettylights-syntax-string)}.markdown-body .pl-smw,.markdown-body .pl-v{color:var(--color-prettylights-syntax-variable)}.markdown-body .pl-bu{color:var(--color-prettylights-syntax-brackethighlighter-unmatched)}.markdown-body .pl-ii{color:var(--color-prettylights-syntax-invalid-illegal-text);background-color:var(--color-prettylights-syntax-invalid-illegal-bg)}.markdown-body .pl-c2{color:var(--color-prettylights-syntax-carriage-return-text);background-color:var(--color-prettylights-syntax-carriage-return-bg)}.markdown-body .pl-sr .pl-cce{font-weight:700;color:var(--color-prettylights-syntax-string-regexp)}.markdown-body .pl-ml{color:var(--color-prettylights-syntax-markup-list)}.markdown-body .pl-mh,.markdown-body .pl-mh .pl-en,.markdown-body .pl-ms{font-weight:700;color:var(--color-prettylights-syntax-markup-heading)}.markdown-body .pl-mi{font-style:italic;color:var(--color-prettylights-syntax-markup-italic)}.markdown-body .pl-mb{font-weight:700;color:var(--color-prettylights-syntax-markup-bold)}.markdown-body .pl-md{color:var(--color-prettylights-syntax-markup-deleted-text);background-color:var(--color-prettylights-syntax-markup-deleted-bg)}.markdown-body 
.pl-mi1{color:var(--color-prettylights-syntax-markup-inserted-text);background-color:var(--color-prettylights-syntax-markup-inserted-bg)}.markdown-body .pl-mc{color:var(--color-prettylights-syntax-markup-changed-text);background-color:var(--color-prettylights-syntax-markup-changed-bg)}.markdown-body .pl-mi2{color:var(--color-prettylights-syntax-markup-ignored-text);background-color:var(--color-prettylights-syntax-markup-ignored-bg)}.markdown-body .pl-mdr{font-weight:700;color:var(--color-prettylights-syntax-meta-diff-range)}.markdown-body .pl-ba{color:var(--color-prettylights-syntax-brackethighlighter-angle)}.markdown-body .pl-sg{color:var(--color-prettylights-syntax-sublimelinter-gutter-mark)}.markdown-body .pl-corl{text-decoration:underline;color:var(--color-prettylights-syntax-constant-other-reference-link)}.markdown-body g-emoji{display:inline-block;min-width:1ch;font-family:Apple Color Emoji,Segoe UI Emoji,Segoe UI Symbol;font-size:1em;font-style:normal!important;font-weight:var(--base-text-weight-normal,400);line-height:1;vertical-align:-.075em}.markdown-body g-emoji img{width:1em;height:1em}.markdown-body .task-list-item{list-style-type:none}.markdown-body .task-list-item label{font-weight:var(--base-text-weight-normal,400)}.markdown-body .task-list-item.enabled label{cursor:pointer}.markdown-body .task-list-item+.task-list-item{margin-top:4px}.markdown-body .task-list-item .handle{display:none}.markdown-body .task-list-item-checkbox{margin:0 .2em .25em -1.4em;vertical-align:middle}.markdown-body .contains-task-list:dir(rtl) .task-list-item-checkbox{margin:0 -1.6em .25em .2em}.markdown-body .contains-task-list{position:relative}.markdown-body .contains-task-list:focus-within .task-list-item-convert-container,.markdown-body .contains-task-list:hover .task-list-item-convert-container{display:block;width:auto;height:24px;overflow:visible;clip:auto}.markdown-body ::-webkit-calendar-picker-indicator{filter:invert(50%)}.markdown-custom-styles{color:inherit;background-color:transparent;>p,>ul,ol{margin-bottom:5px}>ul,ol{list-style:disc;padding-left:1em}& li p{margin-top:5px;margin-bottom:5px}& pre{padding:0;margin-top:10px;margin-bottom:10px}& pre code{white-space:pre-wrap;padding:10px}& img{max-width:min(80%,300px);margin-top:5px}& a:not(:has(sup)){color:inherit;text-decoration:underline}}
\ No newline at end of file
diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/evaluation/model_selection/ensemble.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/evaluation/model_selection/ensemble.py
deleted file mode 100644
index 9e0a489d1d95822ef580bbb3d7e2c8f38b2735e4..0000000000000000000000000000000000000000
--- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/evaluation/model_selection/ensemble.py
+++ /dev/null
@@ -1,123 +0,0 @@
-# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import shutil
-from multiprocessing.pool import Pool
-
-import numpy as np
-from batchgenerators.utilities.file_and_folder_operations import *
-from nnunet.configuration import default_num_threads
-from nnunet.evaluation.evaluator import aggregate_scores
-from nnunet.inference.segmentation_export import save_segmentation_nifti_from_softmax
-from nnunet.paths import network_training_output_dir, preprocessing_output_dir
-from nnunet.postprocessing.connected_components import determine_postprocessing
-
-
-def merge(args):
- file1, file2, properties_file, out_file = args
- if not isfile(out_file):
- res1 = np.load(file1)['softmax']
- res2 = np.load(file2)['softmax']
- props = load_pickle(properties_file)
- mn = np.mean((res1, res2), 0)
- # Softmax probabilities are already at target spacing so this will not do any resampling (resampling parameters
- # don't matter here)
- save_segmentation_nifti_from_softmax(mn, out_file, props, 3, None, None, None, force_separate_z=None,
- interpolation_order_z=0)
-
-
-def ensemble(training_output_folder1, training_output_folder2, output_folder, task, validation_folder, folds, allow_ensembling: bool = True):
- print("\nEnsembling folders\n", training_output_folder1, "\n", training_output_folder2)
-
- output_folder_base = output_folder
- output_folder = join(output_folder_base, "ensembled_raw")
-
- # only_keep_largest_connected_component is the same for all stages
- dataset_directory = join(preprocessing_output_dir, task)
- plans = load_pickle(join(training_output_folder1, "plans.pkl")) # we need this only for the labels
-
- files1 = []
- files2 = []
- property_files = []
- out_files = []
- gt_segmentations = []
-
- folder_with_gt_segs = join(dataset_directory, "gt_segmentations")
-    # the exported softmax arrays are already in the correct shape and we need the original geometry to restore the niftis
-
- for f in folds:
- validation_folder_net1 = join(training_output_folder1, "fold_%d" % f, validation_folder)
- validation_folder_net2 = join(training_output_folder2, "fold_%d" % f, validation_folder)
-
- if not isdir(validation_folder_net1):
- raise AssertionError("Validation directory missing: %s. Please rerun validation with `nnUNet_train CONFIG TRAINER TASK FOLD -val --npz`" % validation_folder_net1)
- if not isdir(validation_folder_net2):
- raise AssertionError("Validation directory missing: %s. Please rerun validation with `nnUNet_train CONFIG TRAINER TASK FOLD -val --npz`" % validation_folder_net2)
-
- # we need to ensure the validation was successful. We can verify this via the presence of the summary.json file
- if not isfile(join(validation_folder_net1, 'summary.json')):
- raise AssertionError("Validation directory incomplete: %s. Please rerun validation with `nnUNet_train CONFIG TRAINER TASK FOLD -val --npz`" % validation_folder_net1)
- if not isfile(join(validation_folder_net2, 'summary.json')):
- raise AssertionError("Validation directory missing: %s. Please rerun validation with `nnUNet_train CONFIG TRAINER TASK FOLD -val --npz`" % validation_folder_net2)
-
- patient_identifiers1_npz = [i[:-4] for i in subfiles(validation_folder_net1, False, None, 'npz', True)]
- patient_identifiers2_npz = [i[:-4] for i in subfiles(validation_folder_net2, False, None, 'npz', True)]
-
-        # we no longer run postprocessing here, so there should not be any noPostProcess files left over
- patient_identifiers1_nii = [i[:-7] for i in subfiles(validation_folder_net1, False, None, suffix='nii.gz', sort=True) if not i.endswith("noPostProcess.nii.gz") and not i.endswith('_postprocessed.nii.gz')]
- patient_identifiers2_nii = [i[:-7] for i in subfiles(validation_folder_net2, False, None, suffix='nii.gz', sort=True) if not i.endswith("noPostProcess.nii.gz") and not i.endswith('_postprocessed.nii.gz')]
-
- if not all([i in patient_identifiers1_npz for i in patient_identifiers1_nii]):
- raise AssertionError("Missing npz files in folder %s. Please run the validation for all models and folds with the '--npz' flag." % (validation_folder_net1))
- if not all([i in patient_identifiers2_npz for i in patient_identifiers2_nii]):
- raise AssertionError("Missing npz files in folder %s. Please run the validation for all models and folds with the '--npz' flag." % (validation_folder_net2))
-
- patient_identifiers1_npz.sort()
- patient_identifiers2_npz.sort()
-
- assert all([i == j for i, j in zip(patient_identifiers1_npz, patient_identifiers2_npz)]), "npz filenames do not match. This should not happen."
-
- maybe_mkdir_p(output_folder)
-
- for p in patient_identifiers1_npz:
- files1.append(join(validation_folder_net1, p + '.npz'))
- files2.append(join(validation_folder_net2, p + '.npz'))
- property_files.append(join(validation_folder_net1, p) + ".pkl")
- out_files.append(join(output_folder, p + ".nii.gz"))
- gt_segmentations.append(join(folder_with_gt_segs, p + ".nii.gz"))
-
- p = Pool(default_num_threads)
- p.map(merge, zip(files1, files2, property_files, out_files))
- p.close()
- p.join()
-
- if not isfile(join(output_folder, "summary.json")) and len(out_files) > 0:
- aggregate_scores(tuple(zip(out_files, gt_segmentations)), labels=plans['all_classes'],
- json_output_file=join(output_folder, "summary.json"), json_task=task,
- json_name=task + "__" + output_folder_base.split("/")[-1], num_threads=default_num_threads)
-
- if allow_ensembling and not isfile(join(output_folder_base, "postprocessing.json")):
-        # now let's also look at postprocessing. We cannot just take what we determined in cross-validation and apply it
-        # here, because things may have changed and may also be too inconsistent between the two networks
- determine_postprocessing(output_folder_base, folder_with_gt_segs, "ensembled_raw", "temp",
- "ensembled_postprocessed", default_num_threads, dice_threshold=0)
-
- out_dir_all_json = join(network_training_output_dir, "summary_jsons")
- json_out = load_json(join(output_folder_base, "ensembled_postprocessed", "summary.json"))
-
- json_out["experiment_name"] = output_folder_base.split("/")[-1]
- save_json(json_out, join(output_folder_base, "ensembled_postprocessed", "summary.json"))
-
- maybe_mkdir_p(out_dir_all_json)
- shutil.copy(join(output_folder_base, "ensembled_postprocessed", "summary.json"),
- join(out_dir_all_json, "%s__%s.json" % (task, output_folder_base.split("/")[-1])))
diff --git a/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard/utils.py b/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard/utils.py
deleted file mode 100644
index 13587c3623fee788f38388fc0917d174580e36f6..0000000000000000000000000000000000000000
--- a/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard/utils.py
+++ /dev/null
@@ -1,14 +0,0 @@
-# Based on Omar Sanseviero work
-# Make model clickable link
-def make_clickable_model(model_name):
- # remove user from model name
- model_name_show = ' '.join(model_name.split('/')[1:])
-
- link = "https://huggingface.co/" + model_name
-    return f'<a target="_blank" href="{link}">{model_name_show}</a>'
-
-# Make user clickable link
-def make_clickable_user(user_id):
- link = "https://huggingface.co/" + user_id
-    return f'<a target="_blank" href="{link}">{user_id}</a>'
-
\ No newline at end of file
diff --git a/spaces/huggingface-projects/stable-diffusion-multiplayer/static/_app/immutable/chunks/2-f87c835b.js b/spaces/huggingface-projects/stable-diffusion-multiplayer/static/_app/immutable/chunks/2-f87c835b.js
deleted file mode 100644
index 8fd3cf7ab061f9ac7549e40ba1305451105dcf59..0000000000000000000000000000000000000000
--- a/spaces/huggingface-projects/stable-diffusion-multiplayer/static/_app/immutable/chunks/2-f87c835b.js
+++ /dev/null
@@ -1 +0,0 @@
-import{_ as r}from"./_page-802cc2a3.js";import{default as t}from"../components/pages/_page.svelte-4566c4b6.js";export{t as component,r as shared};
diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/README.md b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/README.md
deleted file mode 100644
index 8d391f63684dd1f47900dc6449a5e22fa25e3da3..0000000000000000000000000000000000000000
--- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/README.md
+++ /dev/null
@@ -1,218 +0,0 @@
-# Distributed Arcface Training in Pytorch
-
-The "arcface_torch" repository is the official implementation of the ArcFace algorithm. It supports distributed and sparse training, ships several distributed-training examples, and includes memory-saving techniques such as mixed-precision training and gradient checkpointing. It also supports training ViT models and datasets including WebFace42M and Glint360K, two of the largest open-source face datasets. Additionally, the repository comes with a built-in tool for converting models to ONNX format, making it easy to submit them to the MFR evaluation system.
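-
-For orientation, the heart of ArcFace is an additive angular margin applied to the target-class angle before the softmax. The sketch below is a hypothetical illustration only, not the training code of this repository; the function name `arcface_logits` and the scale/margin defaults are assumptions made for the example.
-
-```python
-import torch
-import torch.nn.functional as F
-
-def arcface_logits(embeddings, weight, labels, s=64.0, m=0.5):
-    """Illustrative ArcFace sketch: cosine logits with an additive angular
-    margin m on each sample's target class, scaled by s before cross-entropy."""
-    # cosine similarity between L2-normalized features and class centers
-    cosine = F.linear(F.normalize(embeddings), F.normalize(weight))  # (B, C)
-    theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
-    # add the margin only to the ground-truth class of each sample
-    target = F.one_hot(labels, num_classes=weight.shape[0]).bool()
-    theta_m = torch.where(target, theta + m, theta)
-    return s * torch.cos(theta_m)  # feed into F.cross_entropy(logits, labels)
-```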
-
-[](https://paperswithcode.com/sota/face-verification-on-ijb-c?p=killing-two-birds-with-one-stone-efficient)
-[](https://paperswithcode.com/sota/face-verification-on-ijb-b?p=killing-two-birds-with-one-stone-efficient)
-[](https://paperswithcode.com/sota/face-verification-on-agedb-30?p=killing-two-birds-with-one-stone-efficient)
-[](https://paperswithcode.com/sota/face-verification-on-cfp-fp?p=killing-two-birds-with-one-stone-efficient)
-
-## Requirements
-
-To take advantage of the latest features of PyTorch, we have upgraded to version 1.12.0.
-
-- Install [PyTorch](https://pytorch.org/get-started/previous-versions/) (torch>=1.12.0).
-- (Optional) Install [DALI](https://docs.nvidia.com/deeplearning/dali/user-guide/docs/), our doc for [install_dali.md](docs/install_dali.md).
-- `pip install -r requirement.txt`.
-
-## How to Train
-
-To train a model, run the `train.py` script with the path to a configuration file. The sample commands below demonstrate distributed training.
-
-### 1. To run on one GPU:
-
-```shell
-python train_v2.py configs/ms1mv3_r50_onegpu
-```
-
-Note:
-It is not recommended to use a single GPU for training, as this may result in longer training times and suboptimal performance. For best results, we suggest using multiple GPUs or a GPU cluster.
-
-
-### 2. To run on a machine with 8 GPUs:
-
-```shell
-torchrun --nproc_per_node=8 train.py configs/ms1mv3_r50
-```
-
-### 3. To run on 2 machines with 8 GPUs each:
-
-Node 0:
-
-```shell
-torchrun --nproc_per_node=8 --nnodes=2 --node_rank=0 --master_addr="ip1" --master_port=12581 train.py configs/wf42m_pfc02_16gpus_r100
-```
-
-Node 1:
-
-```shell
-torchrun --nproc_per_node=8 --nnodes=2 --node_rank=1 --master_addr="ip1" --master_port=12581 train.py configs/wf42m_pfc02_16gpus_r100
-```
-
-### 4. Run ViT-B on a machine with 24k batchsize:
-
-```shell
-torchrun --nproc_per_node=8 train_v2.py configs/wf42m_pfc03_40epoch_8gpu_vit_b
-```
-
-
-## Download Datasets or Prepare Datasets
-- [MS1MV2](https://github.com/deepinsight/insightface/tree/master/recognition/_datasets_#ms1m-arcface-85k-ids58m-images-57) (87k IDs, 5.8M images)
-- [MS1MV3](https://github.com/deepinsight/insightface/tree/master/recognition/_datasets_#ms1m-retinaface) (93k IDs, 5.2M images)
-- [Glint360K](https://github.com/deepinsight/insightface/tree/master/recognition/partial_fc#4-download) (360k IDs, 17.1M images)
-- [WebFace42M](docs/prepare_webface42m.md) (2M IDs, 42.5M images)
-- [Your Dataset, Click Here!](docs/prepare_custom_dataset.md)
-
-Note:
-If you want to use DALI for data reading, please use the script `scripts/shuffle_rec.py` to shuffle the InsightFace-style rec file before using it.
-Example:
-
-`python scripts/shuffle_rec.py ms1m-retinaface-t1`
-
-You will get the "shuffled_ms1m-retinaface-t1" folder, where the samples in the "train.rec" file are shuffled.
-
-
-## Model Zoo
-
-- The models are available for non-commercial research purposes only.
-- All models can be found here:
-- [Baidu Yun Pan](https://pan.baidu.com/s/1CL-l4zWqsI1oDuEEYVhj-g): e8pw
-- [OneDrive](https://1drv.ms/u/s!AswpsDO2toNKq0lWY69vN58GR6mw?e=p9Ov5d)
-
-### Performance on IJB-C and [**ICCV2021-MFR**](https://github.com/deepinsight/insightface/blob/master/challenges/mfr/README.md)
-
-The ICCV2021-MFR test set consists of non-celebrities, so it has very little overlap with publicly available face
-recognition training sets such as MS1M and CASIA, which were mostly collected from online celebrities.
-As a result, it allows a fair comparison of the performance of different algorithms.
-
-For the **ICCV2021-MFR-ALL** set, TAR is measured under an all-to-all 1:1 protocol at a FAR below 0.000001 (1e-6). The
-globalised multi-racial test set contains 242,143 identities and 1,624,305 images.
-
-
-#### 1. Training on Single-Host GPU
-
-| Datasets | Backbone | **MFR-ALL** | IJB-C(1E-4) | IJB-C(1E-5) | log |
-|:---------------|:--------------------|:------------|:------------|:------------|:------------------------------------------------------------------------------------------------------------------------------------|
-| MS1MV2 | mobilefacenet-0.45G | 62.07 | 93.61 | 90.28 | [click me](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv2_mbf/training.log) |
-| MS1MV2 | r50 | 75.13 | 95.97 | 94.07 | [click me](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv2_r50/training.log) |
-| MS1MV2 | r100 | 78.12 | 96.37 | 94.27 | [click me](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv2_r100/training.log) |
-| MS1MV3 | mobilefacenet-0.45G | 63.78 | 94.23 | 91.33 | [click me](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_mbf/training.log) |
-| MS1MV3 | r50 | 79.14 | 96.37 | 94.47 | [click me](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_r50/training.log) |
-| MS1MV3 | r100 | 81.97 | 96.85 | 95.02 | [click me](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_r100/training.log) |
-| Glint360K | mobilefacenet-0.45G | 70.18 | 95.04 | 92.62 | [click me](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_mbf/training.log) |
-| Glint360K | r50 | 86.34 | 97.16 | 95.81 | [click me](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_r50/training.log) |
-| Glint360k | r100 | 89.52 | 97.55 | 96.38 | [click me](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_r100/training.log) |
-| WF4M | r100 | 89.87 | 97.19 | 95.48 | [click me](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/wf4m_r100/training.log) |
-| WF12M-PFC-0.2 | r100 | 94.75 | 97.60 | 95.90 | [click me](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/wf12m_pfc02_r100/training.log) |
-| WF12M-PFC-0.3 | r100 | 94.71 | 97.64 | 96.01 | [click me](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/wf12m_pfc03_r100/training.log) |
-| WF12M | r100 | 94.69 | 97.59 | 95.97 | [click me](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/wf12m_r100/training.log) |
-| WF42M-PFC-0.2 | r100 | 96.27 | 97.70 | 96.31 | [click me](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/wf42m_pfc02_r100/training.log) |
-| WF42M-PFC-0.2 | ViT-T-1.5G | 92.04 | 97.27 | 95.68 | [click me](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/wf42m_pfc02_40epoch_8gpu_vit_t/training.log) |
-| WF42M-PFC-0.3 | ViT-B-11G | 97.16 | 97.91 | 97.05 | [click me](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/pfc03_wf42m_vit_b_8gpu/training.log) |
-
-#### 2. Training on Multi-Host GPU
-
-| Datasets | Backbone(bs*gpus) | **MFR-ALL** | IJB-C(1E-4) | IJB-C(1E-5) | Throughout | log |
-|:-----------------|:------------------|:------------|:------------|:------------|:-----------|:-------------------------------------------------------------------------------------------------------------------------------------------|
-| WF42M-PFC-0.2 | r50(512*8) | 93.83 | 97.53 | 96.16 | ~5900 | [click me](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/webface42m_r50_bs4k_pfc02/training.log) |
-| WF42M-PFC-0.2 | r50(512*16) | 93.96 | 97.46 | 96.12 | ~11000 | [click me](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/webface42m_r50_lr01_pfc02_bs8k_16gpus/training.log) |
-| WF42M-PFC-0.2 | r50(128*32) | 94.04 | 97.48 | 95.94 | ~17000 | click me |
-| WF42M-PFC-0.2 | r100(128*16) | 96.28 | 97.80 | 96.57 | ~5200 | click me |
-| WF42M-PFC-0.2 | r100(256*16) | 96.69 | 97.85 | 96.63 | ~5200 | [click me](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/webface42m_r100_bs4k_pfc02/training.log) |
-| WF42M-PFC-0.0018 | r100(512*32) | 93.08 | 97.51 | 95.88 | ~10000 | click me |
-| WF42M-PFC-0.2 | r100(128*32) | 96.57 | 97.83 | 96.50 | ~9800 | click me |
-
-`r100(128*32)` means the backbone is r100, the batch size per GPU is 128, and the number of GPUs is 32.
-
-
-
-#### 3. ViT For Face Recognition
-
-| Datasets | Backbone(bs) | FLOPs | **MFR-ALL** | IJB-C(1E-4) | IJB-C(1E-5) | Throughout | log |
-|:--------------|:--------------|:------|:------------|:------------|:------------|:-----------|:-----------------------------------------------------------------------------------------------------------------------------|
-| WF42M-PFC-0.3 | r18(128*32) | 2.6 | 79.13 | 95.77 | 93.36 | - | click me |
-| WF42M-PFC-0.3 | r50(128*32) | 6.3 | 94.03 | 97.48 | 95.94 | - | click me |
-| WF42M-PFC-0.3 | r100(128*32) | 12.1 | 96.69 | 97.82 | 96.45 | - | click me |
-| WF42M-PFC-0.3 | r200(128*32) | 23.5 | 97.70 | 97.97 | 96.93 | - | click me |
-| WF42M-PFC-0.3 | VIT-T(384*64) | 1.5 | 92.24 | 97.31 | 95.97 | ~35000 | click me |
-| WF42M-PFC-0.3 | VIT-S(384*64) | 5.7 | 95.87 | 97.73 | 96.57 | ~25000 | [click me](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/pfc03_wf42m_vit_s_64gpu/training.log) |
-| WF42M-PFC-0.3 | VIT-B(384*64) | 11.4 | 97.42 | 97.90 | 97.04 | ~13800 | [click me](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/pfc03_wf42m_vit_b_64gpu/training.log) |
-| WF42M-PFC-0.3 | VIT-L(384*64) | 25.3 | 97.85 | 98.00 | 97.23 | ~9406 | [click me](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/pfc03_wf42m_vit_l_64gpu/training.log) |
-
-`WF42M` means WebFace42M, and `PFC-0.3` means the negative class center sample rate is 0.3.
-
-#### 4. Noisy Datasets
-
-| Datasets | Backbone | **MFR-ALL** | IJB-C(1E-4) | IJB-C(1E-5) | log |
-|:-------------------------|:---------|:------------|:------------|:------------|:---------|
-| WF12M-Flip(40%) | r50 | 43.87 | 88.35 | 80.78 | click me |
-| WF12M-Flip(40%)-PFC-0.1* | r50 | 80.20 | 96.11 | 93.79 | click me |
-| WF12M-Conflict | r50 | 79.93 | 95.30 | 91.56 | click me |
-| WF12M-Conflict-PFC-0.3* | r50 | 91.68 | 97.28 | 95.75 | click me |
-
-`WF12M` means WebFace12M, `+PFC-0.1*` denotes additional abnormal inter-class filtering.
-
-
-
-## Speed Benchmark
-
-
-
-**Arcface-Torch** is an efficient tool for training on large-scale face recognition datasets. When the number of classes in the training set exceeds one million, the partial FC sampling strategy maintains the same accuracy while providing several times faster training and lower GPU memory utilization. Partial FC is a sparse variant of the model-parallel architecture for large-scale face recognition: it uses a sparse softmax that dynamically samples a subset of class centers for each training batch. During each iteration, only this sparse portion of the parameters is updated, leading to a significant reduction in GPU memory requirements and computational demands. With the partial FC approach, it is possible to train on datasets with up to 29 million identities, the largest to date. Furthermore, partial FC supports multi-machine distributed training and mixed-precision training.
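-
-A minimal, hypothetical sketch of the sampling idea follows. The function name is made up for the example, the margin/scale step is omitted, and the real partial FC additionally shards the class centers across GPUs and gathers positives in a distributed fashion:
-
-```python
-import torch
-import torch.nn.functional as F
-
-def sampled_cosine_logits(embeddings, weight, labels, sample_rate=0.1):
-    """Score each sample only against its positive class centers plus a random
-    subset of the remaining (negative) centers. Assumes all tensors share a device."""
-    num_classes = weight.shape[0]
-    positives = labels.unique()
-    num_sample = max(int(sample_rate * num_classes), positives.numel())
-    # keep all positives, fill the rest of the budget with random negatives
-    perm = torch.randperm(num_classes, device=weight.device)
-    perm = perm[~torch.isin(perm, positives)][: num_sample - positives.numel()]
-    sampled = torch.cat([positives, perm])
-    # remap the original labels into the sampled index space
-    remap = torch.full((num_classes,), -1, dtype=torch.long, device=weight.device)
-    remap[sampled] = torch.arange(sampled.numel(), device=weight.device)
-    logits = F.linear(F.normalize(embeddings), F.normalize(weight[sampled]))
-    return logits, remap[labels]  # use with F.cross_entropy(logits, remapped_labels)
-```
-
-Only the sampled rows of `weight` receive gradients in such a step, which is where the memory and compute savings come from.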
-
-
-
-For more details, see [speed_benchmark.md](docs/speed_benchmark.md) in docs.
-
-> 1. Training Speed of Various Parallel Techniques (Samples per Second) on a Tesla V100 32GB x 8 System (Higher is Better)
-
-`-` means training failed because of GPU memory limitations.
-
-| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 |
-|:--------------------------------|:--------------|:---------------|:---------------|
-| 125000 | 4681 | 4824 | 5004 |
-| 1400000 | **1672** | 3043 | 4738 |
-| 5500000 | **-** | **1389** | 3975 |
-| 8000000 | **-** | **-** | 3565 |
-| 16000000 | **-** | **-** | 2679 |
-| 29000000 | **-** | **-** | **1855** |
-
-> 2. GPU Memory Utilization of Various Parallel Techniques (MB per GPU) on a Tesla V100 32GB x 8 System (Lower is Better)
-
-| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 |
-|:--------------------------------|:--------------|:---------------|:---------------|
-| 125000 | 7358 | 5306 | 4868 |
-| 1400000 | 32252 | 11178 | 6056 |
-| 5500000 | **-** | 32188 | 9854 |
-| 8000000 | **-** | **-** | 12310 |
-| 16000000 | **-** | **-** | 19950 |
-| 29000000 | **-** | **-** | 32324 |
-
-
-## Citations
-
-```
-@inproceedings{deng2019arcface,
- title={Arcface: Additive angular margin loss for deep face recognition},
- author={Deng, Jiankang and Guo, Jia and Xue, Niannan and Zafeiriou, Stefanos},
- booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
- pages={4690--4699},
- year={2019}
-}
-@inproceedings{An_2022_CVPR,
- author={An, Xiang and Deng, Jiankang and Guo, Jia and Feng, Ziyong and Zhu, XuHan and Yang, Jing and Liu, Tongliang},
- title={Killing Two Birds With One Stone: Efficient and Robust Training of Face Recognition CNNs by Partial FC},
- booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
- month={June},
- year={2022},
- pages={4042-4051}
-}
-@inproceedings{zhu2021webface260m,
- title={Webface260m: A benchmark unveiling the power of million-scale deep face recognition},
- author={Zhu, Zheng and Huang, Guan and Deng, Jiankang and Ye, Yun and Huang, Junjie and Chen, Xinze and Zhu, Jiagang and Yang, Tian and Lu, Jiwen and Du, Dalong and Zhou, Jie},
- booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
- pages={10492--10502},
- year={2021}
-}
-```
diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf12m_flip_r50.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf12m_flip_r50.py
deleted file mode 100644
index fde56fed6d8513b95882b7701f93f8574afbca9c..0000000000000000000000000000000000000000
--- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf12m_flip_r50.py
+++ /dev/null
@@ -1,28 +0,0 @@
-from easydict import EasyDict as edict
-
-# make training faster
-# our RAM is 256G
-# mount -t tmpfs -o size=140G tmpfs /train_tmp
-
-config = edict()
-config.margin_list = (1.0, 0.0, 0.4)
-config.network = "r50"
-config.resume = False
-config.output = None
-config.embedding_size = 512
-config.sample_rate = 1.0
-config.interclass_filtering_threshold = 0
-config.fp16 = True
-config.weight_decay = 5e-4
-config.batch_size = 128
-config.optimizer = "sgd"
-config.lr = 0.1
-config.verbose = 2000
-config.dali = False
-
-config.rec = "/train_tmp/WebFace12M_FLIP40"
-config.num_classes = 617970
-config.num_image = 12720066
-config.num_epoch = 20
-config.warmup_epoch = config.num_epoch // 10
-config.val_targets = []
diff --git a/spaces/imseldrith/Imagine/README.md b/spaces/imseldrith/Imagine/README.md
deleted file mode 100644
index b1c96d771d074a51d273fdba14742b8d7d2837cb..0000000000000000000000000000000000000000
--- a/spaces/imseldrith/Imagine/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Imagine
-emoji: 🔥
-colorFrom: red
-colorTo: blue
-sdk: gradio
-sdk_version: 3.34.0
-app_file: tapp.py
-pinned: false
-license: cc
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/inamXcontru/PoeticTTS/Create Your Own Paradise with My Sunny Resort Crack Download Skidrow.md b/spaces/inamXcontru/PoeticTTS/Create Your Own Paradise with My Sunny Resort Crack Download Skidrow.md
deleted file mode 100644
index 9449477221dba14481ee1eadf4bc3efb7efe045a..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/Create Your Own Paradise with My Sunny Resort Crack Download Skidrow.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/inamXcontru/PoeticTTS/DJ Krush Strictly Turntablized 320kbpsrar Why This Album is a Must-Have for Any Fan of Experimental Music.md b/spaces/inamXcontru/PoeticTTS/DJ Krush Strictly Turntablized 320kbpsrar Why This Album is a Must-Have for Any Fan of Experimental Music.md
deleted file mode 100644
index 692cfaaa1c6e8612785bbab8546d9730485a1d57..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/DJ Krush Strictly Turntablized 320kbpsrar Why This Album is a Must-Have for Any Fan of Experimental Music.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Aprender A Vivir Jose Antonio Marina Epub.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Aprender A Vivir Jose Antonio Marina Epub.md
deleted file mode 100644
index cdf68de46bc7899f3619d594c95c2ba3216aac61..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Aprender A Vivir Jose Antonio Marina Epub.md
+++ /dev/null
@@ -1,14 +0,0 @@
-
-
En este libro, Marina nos propone un esbozo de psicologÃa emergente, el desarrollo de la personalidad a partir de unas estructuras biológicas y sociales, un proceso que empieza en la psicologÃa y acaba en la moral. Además, nos ofrece las bases para ayudar al niño a desarrollar una personalidad inteligente desplegada en la acción.
Según Marina, la personalidad es el punto donde estructuras psicológicas y normas culturales se mezclan. Por eso, aprender a vivir implica aprender a pensar, a sentir y a actuar de forma coherente y responsable. El autor nos invita a reflexionar sobre nuestra propia vida y sobre cómo podemos mejorarla con el ejercicio de la inteligencia y la voluntad.
Marina es un autor que combina rigor y claridad en sus obras, con un estilo ameno y cercano que invita al lector a participar en su reflexión. Su objetivo es ofrecer herramientas para mejorar nuestra vida personal y social, fomentando el desarrollo de una inteligencia creadora y crÃtica. Su pensamiento se basa en el diálogo entre las ciencias y las humanidades, buscando una visión integradora y actualizada del conocimiento.
-
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/Bheja Fry Man Movie Mp4 Download LINK.md b/spaces/inreVtussa/clothingai/Examples/Bheja Fry Man Movie Mp4 Download LINK.md
deleted file mode 100644
index 050212da58f94d823f54405604b4cb46593633f9..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Bheja Fry Man Movie Mp4 Download LINK.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-
Bheja Fry: A Hilarious Comedy Film Starring Vinay Pathak
-
Bheja Fry is a 2007 Indian comedy film directed by Sagar Ballary and starring Vinay Pathak, Rajat Kapoor, Sarika, Ranvir Shorey and Milind Soman. The film is a remake of the 1998 French film Le Dîner de Cons (The Dinner Game) and follows the misadventures of a simpleton who is invited to a dinner party by a wealthy businessman who likes to make fun of his guests.
-
The film was a surprise hit at the box office and received positive reviews from critics and audiences alike. It was praised for its witty dialogues, hilarious situations and brilliant performances by the cast, especially Vinay Pathak who played the role of Bharat Bhushan, the naive and annoying tax inspector who loves to sing old Hindi songs. The film also spawned two sequels, Bheja Fry 2 (2011) and Bheja Fry 3 (2017), which were less successful than the original.
If you are looking for a fun and entertaining movie to watch with your friends or family, you can download Bheja Fry in mp4 format from various online platforms such as SoundCloud[^1^], Step Up Business School[^2^] or Microsoft Sway[^3^]. However, please be aware that downloading movies from unauthorized sources may be illegal and unethical. We recommend that you watch the movie legally on streaming services such as Netflix or Amazon Prime Video.
In this article, we will give you a brief overview of the plot and the characters of Bheja Fry. The film revolves around Rajat Kapoor's character, Ranjeet Thadani, a successful music producer who hosts a weekly dinner party with his friends where they invite a fool (a bheja fry) and make fun of him behind his back. One day, Ranjeet meets Bharat Bhushan (Vinay Pathak), a tax inspector who claims to be an aspiring singer and invites him to his dinner party. However, things go awry when Bharat arrives at Ranjeet's house and causes a series of mishaps that ruin Ranjeet's life.
-
Bharat is a naive and good-hearted man who loves to sing old Hindi songs and share his personal stories with anyone who would listen. He is oblivious to the fact that Ranjeet and his friends are mocking him and thinks that they are genuinely interested in him. He also has a crush on Ranjeet's wife Sheetal (Sarika), who is having an affair with Ranjeet's friend Anant Ghoshal (Milind Soman), a tax evader. Bharat unknowingly exposes Anant's illegal activities to Ranjeet and also reveals Sheetal's infidelity to him. He also annoys Ranjeet's other friend Asif Merchant (Ranvir Shorey), a film critic who hates Bharat's singing.
-
The film is full of hilarious scenes and dialogues that will make you laugh out loud. Some of the memorable scenes include Bharat singing "O Majhi Re" in a high-pitched voice, Bharat calling Ranjeet's doctor friend Dr. Kachroo (Harsh Chhaya) and asking him about his health problems, Bharat giving Ranjeet a massage with mustard oil and turmeric, Bharat playing antakshari with Sheetal and Anant, and Bharat accidentally deleting Ranjeet's important files from his laptop. The film also has a twist ending that will surprise you.
- d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/ioclab/brightness-controlnet/README.md b/spaces/ioclab/brightness-controlnet/README.md
deleted file mode 100644
index bddf2ba19a8d7b595fb8acb5cd4ec482984857ce..0000000000000000000000000000000000000000
--- a/spaces/ioclab/brightness-controlnet/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Brightness ControlNet
-emoji: 💻
-colorFrom: red
-colorTo: blue
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
-tags:
-- jax-diffusers-event
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/irvay/RVC_IR/mygit.sh b/spaces/irvay/RVC_IR/mygit.sh
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/ismot/8testi1/utils/loss.py b/spaces/ismot/8testi1/utils/loss.py
deleted file mode 100644
index 31386328ec1564bb13cab9f5de1a2fdabdf922f7..0000000000000000000000000000000000000000
--- a/spaces/ismot/8testi1/utils/loss.py
+++ /dev/null
@@ -1,1157 +0,0 @@
-# Loss functions
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from utils.general import bbox_iou, bbox_alpha_iou, box_iou, box_giou, box_diou, box_ciou, xywh2xyxy
-from utils.torch_utils import is_parallel
-
-
-def smooth_BCE(eps=0.1): # https://github.com/ultralytics/yolov3/issues/238#issuecomment-598028441
- # return positive, negative label smoothing BCE targets
- return 1.0 - 0.5 * eps, 0.5 * eps
-
-
-class BCEBlurWithLogitsLoss(nn.Module):
- # BCEwithLogitLoss() with reduced missing label effects.
- def __init__(self, alpha=0.05):
- super(BCEBlurWithLogitsLoss, self).__init__()
- self.loss_fcn = nn.BCEWithLogitsLoss(reduction='none') # must be nn.BCEWithLogitsLoss()
- self.alpha = alpha
-
- def forward(self, pred, true):
- loss = self.loss_fcn(pred, true)
- pred = torch.sigmoid(pred) # prob from logits
- dx = pred - true # reduce only missing label effects
- # dx = (pred - true).abs() # reduce missing label and false label effects
- alpha_factor = 1 - torch.exp((dx - 1) / (self.alpha + 1e-4))
- loss *= alpha_factor
- return loss.mean()
-
-
-class SigmoidBin(nn.Module):
- stride = None # strides computed during build
- export = False # onnx export
-
- def __init__(self, bin_count=10, min=0.0, max=1.0, reg_scale = 2.0, use_loss_regression=True, use_fw_regression=True, BCE_weight=1.0, smooth_eps=0.0):
- super(SigmoidBin, self).__init__()
-
- self.bin_count = bin_count
- self.length = bin_count + 1
- self.min = min
- self.max = max
- self.scale = float(max - min)
- self.shift = self.scale / 2.0
-
- self.use_loss_regression = use_loss_regression
- self.use_fw_regression = use_fw_regression
- self.reg_scale = reg_scale
- self.BCE_weight = BCE_weight
-
- start = min + (self.scale/2.0) / self.bin_count
- end = max - (self.scale/2.0) / self.bin_count
- step = self.scale / self.bin_count
- self.step = step
- #print(f" start = {start}, end = {end}, step = {step} ")
-
- bins = torch.range(start, end + 0.0001, step).float()
- self.register_buffer('bins', bins)
-
-
- self.cp = 1.0 - 0.5 * smooth_eps
- self.cn = 0.5 * smooth_eps
-
- self.BCEbins = nn.BCEWithLogitsLoss(pos_weight=torch.Tensor([BCE_weight]))
- self.MSELoss = nn.MSELoss()
-
- def get_length(self):
- return self.length
-
- def forward(self, pred):
- assert pred.shape[-1] == self.length, 'pred.shape[-1]=%d is not equal to self.length=%d' % (pred.shape[-1], self.length)
-
- pred_reg = (pred[..., 0] * self.reg_scale - self.reg_scale/2.0) * self.step
- pred_bin = pred[..., 1:(1+self.bin_count)]
-
- _, bin_idx = torch.max(pred_bin, dim=-1)
- bin_bias = self.bins[bin_idx]
-
- if self.use_fw_regression:
- result = pred_reg + bin_bias
- else:
- result = bin_bias
- result = result.clamp(min=self.min, max=self.max)
-
- return result
-
-
- def training_loss(self, pred, target):
- assert pred.shape[-1] == self.length, 'pred.shape[-1]=%d is not equal to self.length=%d' % (pred.shape[-1], self.length)
- assert pred.shape[0] == target.shape[0], 'pred.shape=%d is not equal to the target.shape=%d' % (pred.shape[0], target.shape[0])
- device = pred.device
-
- pred_reg = (pred[..., 0].sigmoid() * self.reg_scale - self.reg_scale/2.0) * self.step
- pred_bin = pred[..., 1:(1+self.bin_count)]
-
- diff_bin_target = torch.abs(target[..., None] - self.bins)
- _, bin_idx = torch.min(diff_bin_target, dim=-1)
-
- bin_bias = self.bins[bin_idx]
- bin_bias.requires_grad = False
- result = pred_reg + bin_bias
-
- target_bins = torch.full_like(pred_bin, self.cn, device=device) # targets
- n = pred.shape[0]
- target_bins[range(n), bin_idx] = self.cp
-
- loss_bin = self.BCEbins(pred_bin, target_bins) # BCE
-
- if self.use_loss_regression:
- loss_regression = self.MSELoss(result, target) # MSE
- loss = loss_bin + loss_regression
- else:
- loss = loss_bin
-
- out_result = result.clamp(min=self.min, max=self.max)
-
- return loss, out_result
-
-
-class FocalLoss(nn.Module):
- # Wraps focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)
- def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
- super(FocalLoss, self).__init__()
- self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss()
- self.gamma = gamma
- self.alpha = alpha
- self.reduction = loss_fcn.reduction
- self.loss_fcn.reduction = 'none' # required to apply FL to each element
-
- def forward(self, pred, true):
- loss = self.loss_fcn(pred, true)
- # p_t = torch.exp(-loss)
- # loss *= self.alpha * (1.000001 - p_t) ** self.gamma # non-zero power for gradient stability
-
- # TF implementation https://github.com/tensorflow/addons/blob/v0.7.1/tensorflow_addons/losses/focal_loss.py
- pred_prob = torch.sigmoid(pred) # prob from logits
- p_t = true * pred_prob + (1 - true) * (1 - pred_prob)
- alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)
- modulating_factor = (1.0 - p_t) ** self.gamma
- loss *= alpha_factor * modulating_factor
-
- if self.reduction == 'mean':
- return loss.mean()
- elif self.reduction == 'sum':
- return loss.sum()
- else: # 'none'
- return loss
-
-
-class QFocalLoss(nn.Module):
- # Wraps Quality focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)
- def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
- super(QFocalLoss, self).__init__()
- self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss()
- self.gamma = gamma
- self.alpha = alpha
- self.reduction = loss_fcn.reduction
- self.loss_fcn.reduction = 'none' # required to apply FL to each element
-
- def forward(self, pred, true):
- loss = self.loss_fcn(pred, true)
-
- pred_prob = torch.sigmoid(pred) # prob from logits
- alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)
- modulating_factor = torch.abs(true - pred_prob) ** self.gamma
- loss *= alpha_factor * modulating_factor
-
- if self.reduction == 'mean':
- return loss.mean()
- elif self.reduction == 'sum':
- return loss.sum()
- else: # 'none'
- return loss
-
-class RankSort(torch.autograd.Function):
- @staticmethod
- def forward(ctx, logits, targets, delta_RS=0.50, eps=1e-10):
-
- classification_grads=torch.zeros(logits.shape).cuda()
-
- #Filter fg logits
- fg_labels = (targets > 0.)
- fg_logits = logits[fg_labels]
- fg_targets = targets[fg_labels]
- fg_num = len(fg_logits)
-
- #Do not use bg with scores less than minimum fg logit
- #since changing its score does not have an effect on precision
- threshold_logit = torch.min(fg_logits)-delta_RS
- relevant_bg_labels=((targets==0) & (logits>=threshold_logit))
-
- relevant_bg_logits = logits[relevant_bg_labels]
- relevant_bg_grad=torch.zeros(len(relevant_bg_logits)).cuda()
- sorting_error=torch.zeros(fg_num).cuda()
- ranking_error=torch.zeros(fg_num).cuda()
- fg_grad=torch.zeros(fg_num).cuda()
-
- #sort the fg logits
- order=torch.argsort(fg_logits)
- #Loops over each positive following the order
- for ii in order:
- # Difference Transforms (x_ij)
- fg_relations=fg_logits-fg_logits[ii]
- bg_relations=relevant_bg_logits-fg_logits[ii]
-
- if delta_RS > 0:
- fg_relations=torch.clamp(fg_relations/(2*delta_RS)+0.5,min=0,max=1)
- bg_relations=torch.clamp(bg_relations/(2*delta_RS)+0.5,min=0,max=1)
- else:
- fg_relations = (fg_relations >= 0).float()
- bg_relations = (bg_relations >= 0).float()
-
- # Rank of ii among pos and false positive number (bg with larger scores)
- rank_pos=torch.sum(fg_relations)
- FP_num=torch.sum(bg_relations)
-
- # Rank of ii among all examples
- rank=rank_pos+FP_num
-
- # Ranking error of example ii. target_ranking_error is always 0. (Eq. 7)
- ranking_error[ii]=FP_num/rank
-
- # Current sorting error of example ii. (Eq. 7)
- current_sorting_error = torch.sum(fg_relations*(1-fg_targets))/rank_pos
-
- #Find examples in the target sorted order for example ii
- iou_relations = (fg_targets >= fg_targets[ii])
- target_sorted_order = iou_relations * fg_relations
-
- #The rank of ii among positives in sorted order
- rank_pos_target = torch.sum(target_sorted_order)
-
- #Compute target sorting error. (Eq. 8)
- #Since target ranking error is 0, this is also total target error
- target_sorting_error= torch.sum(target_sorted_order*(1-fg_targets))/rank_pos_target
-
- #Compute sorting error on example ii
- sorting_error[ii] = current_sorting_error - target_sorting_error
-
- #Identity Update for Ranking Error
- if FP_num > eps:
- #For ii the update is the ranking error
- fg_grad[ii] -= ranking_error[ii]
- #For negatives, distribute error via ranking pmf (i.e. bg_relations/FP_num)
- relevant_bg_grad += (bg_relations*(ranking_error[ii]/FP_num))
-
- #Find the positives that are misranked (the cause of the error)
- #These are the ones with smaller IoU but larger logits
- missorted_examples = (~ iou_relations) * fg_relations
-
-            #Denominator of sorting pmf
- sorting_pmf_denom = torch.sum(missorted_examples)
-
- #Identity Update for Sorting Error
- if sorting_pmf_denom > eps:
- #For ii the update is the sorting error
- fg_grad[ii] -= sorting_error[ii]
- #For positives, distribute error via sorting pmf (i.e. missorted_examples/sorting_pmf_denom)
- fg_grad += (missorted_examples*(sorting_error[ii]/sorting_pmf_denom))
-
- #Normalize gradients by number of positives
- classification_grads[fg_labels]= (fg_grad/fg_num)
- classification_grads[relevant_bg_labels]= (relevant_bg_grad/fg_num)
-
- ctx.save_for_backward(classification_grads)
-
- return ranking_error.mean(), sorting_error.mean()
-
- @staticmethod
- def backward(ctx, out_grad1, out_grad2):
- g1, =ctx.saved_tensors
- return g1*out_grad1, None, None, None
-
-class aLRPLoss(torch.autograd.Function):
- @staticmethod
- def forward(ctx, logits, targets, regression_losses, delta=1., eps=1e-5):
- classification_grads=torch.zeros(logits.shape).cuda()
-
- #Filter fg logits
- fg_labels = (targets == 1)
- fg_logits = logits[fg_labels]
- fg_num = len(fg_logits)
-
- #Do not use bg with scores less than minimum fg logit
- #since changing its score does not have an effect on precision
- threshold_logit = torch.min(fg_logits)-delta
-
- #Get valid bg logits
- relevant_bg_labels=((targets==0)&(logits>=threshold_logit))
- relevant_bg_logits=logits[relevant_bg_labels]
- relevant_bg_grad=torch.zeros(len(relevant_bg_logits)).cuda()
- rank=torch.zeros(fg_num).cuda()
- prec=torch.zeros(fg_num).cuda()
- fg_grad=torch.zeros(fg_num).cuda()
-
- max_prec=0
- #sort the fg logits
- order=torch.argsort(fg_logits)
- #Loops over each positive following the order
- for ii in order:
- #x_ij s as score differences with fgs
- fg_relations=fg_logits-fg_logits[ii]
- #Apply piecewise linear function and determine relations with fgs
- fg_relations=torch.clamp(fg_relations/(2*delta)+0.5,min=0,max=1)
- #Discard i=j in the summation in rank_pos
- fg_relations[ii]=0
-
- #x_ij s as score differences with bgs
- bg_relations=relevant_bg_logits-fg_logits[ii]
- #Apply piecewise linear function and determine relations with bgs
- bg_relations=torch.clamp(bg_relations/(2*delta)+0.5,min=0,max=1)
-
- #Compute the rank of the example within fgs and number of bgs with larger scores
- rank_pos=1+torch.sum(fg_relations)
- FP_num=torch.sum(bg_relations)
- #Store the total since it is normalizer also for aLRP Regression error
- rank[ii]=rank_pos+FP_num
-
- #Compute precision for this example to compute classification loss
- prec[ii]=rank_pos/rank[ii]
-            #For stability, set eps to an infinitesimally small value (e.g. 1e-6), then compute grads
- if FP_num > eps:
- fg_grad[ii] = -(torch.sum(fg_relations*regression_losses)+FP_num)/rank[ii]
- relevant_bg_grad += (bg_relations*(-fg_grad[ii]/FP_num))
-
- #aLRP with grad formulation fg gradient
- classification_grads[fg_labels]= fg_grad
- #aLRP with grad formulation bg gradient
- classification_grads[relevant_bg_labels]= relevant_bg_grad
-
- classification_grads /= (fg_num)
-
- cls_loss=1-prec.mean()
- ctx.save_for_backward(classification_grads)
-
- return cls_loss, rank, order
-
- @staticmethod
- def backward(ctx, out_grad1, out_grad2, out_grad3):
- g1, =ctx.saved_tensors
- return g1*out_grad1, None, None, None, None
-
-
-class APLoss(torch.autograd.Function):
- @staticmethod
- def forward(ctx, logits, targets, delta=1.):
- classification_grads=torch.zeros(logits.shape).cuda()
-
- #Filter fg logits
- fg_labels = (targets == 1)
- fg_logits = logits[fg_labels]
- fg_num = len(fg_logits)
-
- #Do not use bg with scores less than minimum fg logit
- #since changing its score does not have an effect on precision
- threshold_logit = torch.min(fg_logits)-delta
-
- #Get valid bg logits
- relevant_bg_labels=((targets==0)&(logits>=threshold_logit))
- relevant_bg_logits=logits[relevant_bg_labels]
- relevant_bg_grad=torch.zeros(len(relevant_bg_logits)).cuda()
- rank=torch.zeros(fg_num).cuda()
- prec=torch.zeros(fg_num).cuda()
- fg_grad=torch.zeros(fg_num).cuda()
-
- max_prec=0
- #sort the fg logits
- order=torch.argsort(fg_logits)
- #Loops over each positive following the order
- for ii in order:
- #x_ij s as score differences with fgs
- fg_relations=fg_logits-fg_logits[ii]
- #Apply piecewise linear function and determine relations with fgs
- fg_relations=torch.clamp(fg_relations/(2*delta)+0.5,min=0,max=1)
- #Discard i=j in the summation in rank_pos
- fg_relations[ii]=0
-
- #x_ij s as score differences with bgs
- bg_relations=relevant_bg_logits-fg_logits[ii]
- #Apply piecewise linear function and determine relations with bgs
- bg_relations=torch.clamp(bg_relations/(2*delta)+0.5,min=0,max=1)
-
- #Compute the rank of the example within fgs and number of bgs with larger scores
- rank_pos=1+torch.sum(fg_relations)
- FP_num=torch.sum(bg_relations)
- #Store the total since it is normalizer also for aLRP Regression error
- rank[ii]=rank_pos+FP_num
-
- #Compute precision for this example
- current_prec=rank_pos/rank[ii]
-
- #Compute interpolated AP and store gradients for relevant bg examples
- if (max_prec<=current_prec):
- max_prec=current_prec
- relevant_bg_grad += (bg_relations/rank[ii])
- else:
- relevant_bg_grad += (bg_relations/rank[ii])*(((1-max_prec)/(1-current_prec)))
-
- #Store fg gradients
- fg_grad[ii]=-(1-max_prec)
- prec[ii]=max_prec
-
- #aLRP with grad formulation fg gradient
- classification_grads[fg_labels]= fg_grad
- #aLRP with grad formulation bg gradient
- classification_grads[relevant_bg_labels]= relevant_bg_grad
-
- classification_grads /= fg_num
-
- cls_loss=1-prec.mean()
- ctx.save_for_backward(classification_grads)
-
- return cls_loss
-
- @staticmethod
- def backward(ctx, out_grad1):
- g1, =ctx.saved_tensors
- return g1*out_grad1, None, None
-
-
-class ComputeLoss:
- # Compute losses
- def __init__(self, model, autobalance=False):
- super(ComputeLoss, self).__init__()
- device = next(model.parameters()).device # get model device
- h = model.hyp # hyperparameters
-
- # Define criteria
- BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device))
- BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device))
-
- # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3
- self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0)) # positive, negative BCE targets
-
- # Focal loss
- g = h['fl_gamma'] # focal loss gamma
- if g > 0:
- BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g)
-
- det = model.module.model[-1] if is_parallel(model) else model.model[-1] # Detect() module
- self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.06, .02]) # P3-P7
- #self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.1, .05]) # P3-P7
- #self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.5, 0.4, .1]) # P3-P7
- self.ssi = list(det.stride).index(16) if autobalance else 0 # stride 16 index
- self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, model.gr, h, autobalance
- for k in 'na', 'nc', 'nl', 'anchors':
- setattr(self, k, getattr(det, k))
-
- def __call__(self, p, targets): # predictions, targets, model
- device = targets.device
- lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device)
- tcls, tbox, indices, anchors = self.build_targets(p, targets) # targets
-
- # Losses
- for i, pi in enumerate(p): # layer index, layer predictions
- b, a, gj, gi = indices[i] # image, anchor, gridy, gridx
- tobj = torch.zeros_like(pi[..., 0], device=device) # target obj
-
- n = b.shape[0] # number of targets
- if n:
- ps = pi[b, a, gj, gi] # prediction subset corresponding to targets
-
- # Regression
- pxy = ps[:, :2].sigmoid() * 2. - 0.5
- pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i]
- pbox = torch.cat((pxy, pwh), 1) # predicted box
- iou = bbox_iou(pbox.T, tbox[i], x1y1x2y2=False, CIoU=True) # iou(prediction, target)
- lbox += (1.0 - iou).mean() # iou loss
-
- # Objectness
- tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * iou.detach().clamp(0).type(tobj.dtype) # iou ratio
-
- # Classification
- if self.nc > 1: # cls loss (only if multiple classes)
- t = torch.full_like(ps[:, 5:], self.cn, device=device) # targets
- t[range(n), tcls[i]] = self.cp
- #t[t==self.cp] = iou.detach().clamp(0).type(t.dtype)
- lcls += self.BCEcls(ps[:, 5:], t) # BCE
-
- # Append targets to text file
- # with open('targets.txt', 'a') as file:
- # [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)]
-
- obji = self.BCEobj(pi[..., 4], tobj)
- lobj += obji * self.balance[i] # obj loss
- if self.autobalance:
- self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item()
-
- if self.autobalance:
- self.balance = [x / self.balance[self.ssi] for x in self.balance]
- lbox *= self.hyp['box']
- lobj *= self.hyp['obj']
- lcls *= self.hyp['cls']
- bs = tobj.shape[0] # batch size
-
- loss = lbox + lobj + lcls
- return loss * bs, torch.cat((lbox, lobj, lcls, loss)).detach()
-
- def build_targets(self, p, targets):
- # Build targets for compute_loss(), input targets(image,class,x,y,w,h)
- na, nt = self.na, targets.shape[0] # number of anchors, targets
- tcls, tbox, indices, anch = [], [], [], []
- gain = torch.ones(7, device=targets.device) # normalized to gridspace gain
- ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt)
- targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices
-
- g = 0.5 # bias
- off = torch.tensor([[0, 0],
- [1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m
- # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm
- ], device=targets.device).float() * g # offsets
-
- for i in range(self.nl):
- anchors = self.anchors[i]
- gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain
-
- # Match targets to anchors
- t = targets * gain
- if nt:
- # Matches
- r = t[:, :, 4:6] / anchors[:, None] # wh ratio
- j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t'] # compare
- # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2))
- t = t[j] # filter
-
- # Offsets
- gxy = t[:, 2:4] # grid xy
- gxi = gain[[2, 3]] - gxy # inverse
- j, k = ((gxy % 1. < g) & (gxy > 1.)).T
- l, m = ((gxi % 1. < g) & (gxi > 1.)).T
- j = torch.stack((torch.ones_like(j), j, k, l, m))
- t = t.repeat((5, 1, 1))[j]
- offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j]
- else:
- t = targets[0]
- offsets = 0
-
- # Define
- b, c = t[:, :2].long().T # image, class
- gxy = t[:, 2:4] # grid xy
- gwh = t[:, 4:6] # grid wh
- gij = (gxy - offsets).long()
- gi, gj = gij.T # grid xy indices
-
- # Append
- a = t[:, 6].long() # anchor indices
- indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices
- tbox.append(torch.cat((gxy - gij, gwh), 1)) # box
- anch.append(anchors[a]) # anchors
- tcls.append(c) # class
-
- return tcls, tbox, indices, anch
-
-
-class ComputeLossOTA:
- # Compute losses
- def __init__(self, model, autobalance=False):
- super(ComputeLossOTA, self).__init__()
- device = next(model.parameters()).device # get model device
- h = model.hyp # hyperparameters
-
- # Define criteria
- BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device))
- BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device))
-
- # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3
- self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0)) # positive, negative BCE targets
-
- # Focal loss
- g = h['fl_gamma'] # focal loss gamma
- if g > 0:
- BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g)
-
- det = model.module.model[-1] if is_parallel(model) else model.model[-1] # Detect() module
- self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.06, .02]) # P3-P7
- self.ssi = list(det.stride).index(16) if autobalance else 0 # stride 16 index
- self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, model.gr, h, autobalance
- for k in 'na', 'nc', 'nl', 'anchors', 'stride':
- setattr(self, k, getattr(det, k))
-
- def __call__(self, p, targets, imgs): # predictions, targets, model
- device = targets.device
- lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device)
- bs, as_, gjs, gis, targets, anchors = self.build_targets(p, targets, imgs)
- pre_gen_gains = [torch.tensor(pp.shape, device=device)[[3, 2, 3, 2]] for pp in p]
-
-
- # Losses
- for i, pi in enumerate(p): # layer index, layer predictions
- b, a, gj, gi = bs[i], as_[i], gjs[i], gis[i] # image, anchor, gridy, gridx
- tobj = torch.zeros_like(pi[..., 0], device=device) # target obj
-
- n = b.shape[0] # number of targets
- if n:
- ps = pi[b, a, gj, gi] # prediction subset corresponding to targets
-
- # Regression
- grid = torch.stack([gi, gj], dim=1)
- pxy = ps[:, :2].sigmoid() * 2. - 0.5
- #pxy = ps[:, :2].sigmoid() * 3. - 1.
- pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i]
- pbox = torch.cat((pxy, pwh), 1) # predicted box
- selected_tbox = targets[i][:, 2:6] * pre_gen_gains[i]
- selected_tbox[:, :2] -= grid
- iou = bbox_iou(pbox.T, selected_tbox, x1y1x2y2=False, CIoU=True) # iou(prediction, target)
- lbox += (1.0 - iou).mean() # iou loss
-
- # Objectness
- tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * iou.detach().clamp(0).type(tobj.dtype) # iou ratio
-
- # Classification
- selected_tcls = targets[i][:, 1].long()
- if self.nc > 1: # cls loss (only if multiple classes)
- t = torch.full_like(ps[:, 5:], self.cn, device=device) # targets
- t[range(n), selected_tcls] = self.cp
- lcls += self.BCEcls(ps[:, 5:], t) # BCE
-
- # Append targets to text file
- # with open('targets.txt', 'a') as file:
- # [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)]
-
- obji = self.BCEobj(pi[..., 4], tobj)
- lobj += obji * self.balance[i] # obj loss
- if self.autobalance:
- self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item()
-
- if self.autobalance:
- self.balance = [x / self.balance[self.ssi] for x in self.balance]
- lbox *= self.hyp['box']
- lobj *= self.hyp['obj']
- lcls *= self.hyp['cls']
- bs = tobj.shape[0] # batch size
-
- loss = lbox + lobj + lcls
- return loss * bs, torch.cat((lbox, lobj, lcls, loss)).detach()
-
- def build_targets(self, p, targets, imgs):
-
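-        # OTA-style label assignment, in broad strokes:
-        #   1) gather up to 3 candidate grid cells per target from find_3_positive()
-        #   2) per image, decode the candidate predictions and build a pairwise cost
-        #      of classification BCE plus 3 * (-log IoU) against every ground truth
-        #   3) keep a dynamic top-k of lowest-cost candidates per ground truth
-        #      (k derived from the summed IoU of its best candidates)
-        #   4) if a candidate is claimed by several ground truths, keep only the one
-        #      with the lowest cost
-        #   5) regroup the surviving matches per detection layer for use in __call__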
- #indices, anch = self.find_positive(p, targets)
- indices, anch = self.find_3_positive(p, targets)
- #indices, anch = self.find_4_positive(p, targets)
- #indices, anch = self.find_5_positive(p, targets)
- #indices, anch = self.find_9_positive(p, targets)
-
- matching_bs = [[] for pp in p]
- matching_as = [[] for pp in p]
- matching_gjs = [[] for pp in p]
- matching_gis = [[] for pp in p]
- matching_targets = [[] for pp in p]
- matching_anchs = [[] for pp in p]
-
- nl = len(p)
-
- for batch_idx in range(p[0].shape[0]):
-
- b_idx = targets[:, 0]==batch_idx
- this_target = targets[b_idx]
- if this_target.shape[0] == 0:
- continue
-
- txywh = this_target[:, 2:6] * imgs[batch_idx].shape[1]
- txyxy = xywh2xyxy(txywh)
-
- pxyxys = []
- p_cls = []
- p_obj = []
- from_which_layer = []
- all_b = []
- all_a = []
- all_gj = []
- all_gi = []
- all_anch = []
-
- for i, pi in enumerate(p):
-
- b, a, gj, gi = indices[i]
- idx = (b == batch_idx)
- b, a, gj, gi = b[idx], a[idx], gj[idx], gi[idx]
- all_b.append(b)
- all_a.append(a)
- all_gj.append(gj)
- all_gi.append(gi)
- all_anch.append(anch[i][idx])
- from_which_layer.append(torch.ones(size=(len(b),)) * i)
-
- fg_pred = pi[b, a, gj, gi]
- p_obj.append(fg_pred[:, 4:5])
- p_cls.append(fg_pred[:, 5:])
-
- grid = torch.stack([gi, gj], dim=1)
- pxy = (fg_pred[:, :2].sigmoid() * 2. - 0.5 + grid) * self.stride[i] #/ 8.
- #pxy = (fg_pred[:, :2].sigmoid() * 3. - 1. + grid) * self.stride[i]
- pwh = (fg_pred[:, 2:4].sigmoid() * 2) ** 2 * anch[i][idx] * self.stride[i] #/ 8.
- pxywh = torch.cat([pxy, pwh], dim=-1)
- pxyxy = xywh2xyxy(pxywh)
- pxyxys.append(pxyxy)
-
- pxyxys = torch.cat(pxyxys, dim=0)
- if pxyxys.shape[0] == 0:
- continue
- p_obj = torch.cat(p_obj, dim=0)
- p_cls = torch.cat(p_cls, dim=0)
- from_which_layer = torch.cat(from_which_layer, dim=0)
- all_b = torch.cat(all_b, dim=0)
- all_a = torch.cat(all_a, dim=0)
- all_gj = torch.cat(all_gj, dim=0)
- all_gi = torch.cat(all_gi, dim=0)
- all_anch = torch.cat(all_anch, dim=0)
-
- pair_wise_iou = box_iou(txyxy, pxyxys)
-
- pair_wise_iou_loss = -torch.log(pair_wise_iou + 1e-8)
-
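- # Dynamic-k estimation (SimOTA-style): each target's k is the sum of its top-10 IoUs with the candidate predictions, clamped to at least 1.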
- top_k, _ = torch.topk(pair_wise_iou, min(10, pair_wise_iou.shape[1]), dim=1)
- dynamic_ks = torch.clamp(top_k.sum(1).int(), min=1)
-
- gt_cls_per_image = (
- F.one_hot(this_target[:, 1].to(torch.int64), self.nc)
- .float()
- .unsqueeze(1)
- .repeat(1, pxyxys.shape[0], 1)
- )
-
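- # Pair-wise classification cost: sqrt of the joint cls*obj probability per candidate, scored against the one-hot target classes with BCE on the recovered logits.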
- num_gt = this_target.shape[0]
- cls_preds_ = (
- p_cls.float().unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()
- * p_obj.unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()
- )
-
- y = cls_preds_.sqrt_()
- pair_wise_cls_loss = F.binary_cross_entropy_with_logits(
- torch.log(y/(1-y)) , gt_cls_per_image, reduction="none"
- ).sum(-1)
- del cls_preds_
-
- cost = (
- pair_wise_cls_loss
- + 3.0 * pair_wise_iou_loss
- )
-
- matching_matrix = torch.zeros_like(cost)
-
- for gt_idx in range(num_gt):
- _, pos_idx = torch.topk(
- cost[gt_idx], k=dynamic_ks[gt_idx].item(), largest=False
- )
- matching_matrix[gt_idx][pos_idx] = 1.0
-
- del top_k, dynamic_ks
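- # If a candidate prediction is matched to several targets, keep only the match with the lowest cost.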
- anchor_matching_gt = matching_matrix.sum(0)
- if (anchor_matching_gt > 1).sum() > 0:
- _, cost_argmin = torch.min(cost[:, anchor_matching_gt > 1], dim=0)
- matching_matrix[:, anchor_matching_gt > 1] *= 0.0
- matching_matrix[cost_argmin, anchor_matching_gt > 1] = 1.0
- fg_mask_inboxes = matching_matrix.sum(0) > 0.0
- matched_gt_inds = matching_matrix[:, fg_mask_inboxes].argmax(0)
-
- from_which_layer = from_which_layer[fg_mask_inboxes]
- all_b = all_b[fg_mask_inboxes]
- all_a = all_a[fg_mask_inboxes]
- all_gj = all_gj[fg_mask_inboxes]
- all_gi = all_gi[fg_mask_inboxes]
- all_anch = all_anch[fg_mask_inboxes]
-
- this_target = this_target[matched_gt_inds]
-
- for i in range(nl):
- layer_idx = from_which_layer == i
- matching_bs[i].append(all_b[layer_idx])
- matching_as[i].append(all_a[layer_idx])
- matching_gjs[i].append(all_gj[layer_idx])
- matching_gis[i].append(all_gi[layer_idx])
- matching_targets[i].append(this_target[layer_idx])
- matching_anchs[i].append(all_anch[layer_idx])
-
- for i in range(nl):
- matching_bs[i] = torch.cat(matching_bs[i], dim=0)
- matching_as[i] = torch.cat(matching_as[i], dim=0)
- matching_gjs[i] = torch.cat(matching_gjs[i], dim=0)
- matching_gis[i] = torch.cat(matching_gis[i], dim=0)
- matching_targets[i] = torch.cat(matching_targets[i], dim=0)
- matching_anchs[i] = torch.cat(matching_anchs[i], dim=0)
-
- return matching_bs, matching_as, matching_gjs, matching_gis, matching_targets, matching_anchs
-
- def find_3_positive(self, p, targets):
- # Build targets for compute_loss(), input targets(image,class,x,y,w,h)
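- # Returns, per detection layer, the (image, anchor, grid-y, grid-x) indices and anchor sizes of the candidate positives:
- # the cell containing each target centre plus up to two neighbouring cells, keeping only anchors within the hyp['anchor_t'] w/h ratio.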
- na, nt = self.na, targets.shape[0] # number of anchors, targets
- indices, anch = [], []
- gain = torch.ones(7, device=targets.device) # normalized to gridspace gain
- ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt)
- targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices
-
- g = 0.5 # bias
- off = torch.tensor([[0, 0],
- [1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m
- # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm
- ], device=targets.device).float() * g # offsets
-
- for i in range(self.nl):
- anchors = self.anchors[i]
- gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain
-
- # Match targets to anchors
- t = targets * gain
- if nt:
- # Matches
- r = t[:, :, 4:6] / anchors[:, None] # wh ratio
- j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t'] # compare
- # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2))
- t = t[j] # filter
-
- # Offsets
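- # A target is also assigned to the neighbouring cell on each axis whose border its centre lies within g (=0.5) of, giving up to 3 positive cells per target.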
- gxy = t[:, 2:4] # grid xy
- gxi = gain[[2, 3]] - gxy # inverse
- j, k = ((gxy % 1. < g) & (gxy > 1.)).T
- l, m = ((gxi % 1. < g) & (gxi > 1.)).T
- j = torch.stack((torch.ones_like(j), j, k, l, m))
- t = t.repeat((5, 1, 1))[j]
- offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j]
- else:
- t = targets[0]
- offsets = 0
-
- # Define
- b, c = t[:, :2].long().T # image, class
- gxy = t[:, 2:4] # grid xy
- gwh = t[:, 4:6] # grid wh
- gij = (gxy - offsets).long()
- gi, gj = gij.T # grid xy indices
-
- # Append
- a = t[:, 6].long() # anchor indices
- indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices
- anch.append(anchors[a]) # anchors
-
- return indices, anch
-
-
-class ComputeLossBinOTA:
- # Compute losses
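- # Variant of ComputeLossOTA in which width/height are predicted as discretized bins (SigmoidBin) rather than by direct regression.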
- def __init__(self, model, autobalance=False):
- super(ComputeLossBinOTA, self).__init__()
- device = next(model.parameters()).device # get model device
- h = model.hyp # hyperparameters
-
- # Define criteria
- BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device))
- BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device))
- #MSEangle = nn.MSELoss().to(device)
-
- # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3
- self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0)) # positive, negative BCE targets
-
- # Focal loss
- g = h['fl_gamma'] # focal loss gamma
- if g > 0:
- BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g)
-
- det = model.module.model[-1] if is_parallel(model) else model.model[-1] # Detect() module
- self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.06, .02]) # P3-P7
- self.ssi = list(det.stride).index(16) if autobalance else 0 # stride 16 index
- self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, model.gr, h, autobalance
- for k in 'na', 'nc', 'nl', 'anchors', 'stride', 'bin_count':
- setattr(self, k, getattr(det, k))
-
- #xy_bin_sigmoid = SigmoidBin(bin_count=11, min=-0.5, max=1.5, use_loss_regression=False).to(device)
- wh_bin_sigmoid = SigmoidBin(bin_count=self.bin_count, min=0.0, max=4.0, use_loss_regression=False).to(device)
- #angle_bin_sigmoid = SigmoidBin(bin_count=31, min=-1.1, max=1.1, use_loss_regression=False).to(device)
- self.wh_bin_sigmoid = wh_bin_sigmoid
-
- def __call__(self, p, targets, imgs): # predictions, targets, model
- device = targets.device
- lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device)
- bs, as_, gjs, gis, targets, anchors = self.build_targets(p, targets, imgs)
- pre_gen_gains = [torch.tensor(pp.shape, device=device)[[3, 2, 3, 2]] for pp in p]
-
-
- # Losses
- for i, pi in enumerate(p): # layer index, layer predictions
- b, a, gj, gi = bs[i], as_[i], gjs[i], gis[i] # image, anchor, gridy, gridx
- tobj = torch.zeros_like(pi[..., 0], device=device) # target obj
-
- obj_idx = self.wh_bin_sigmoid.get_length()*2 + 2 # x,y, w-bce, h-bce # xy_bin_sigmoid.get_length()*2
-
- n = b.shape[0] # number of targets
- if n:
- ps = pi[b, a, gj, gi] # prediction subset corresponding to targets
-
- # Regression
- grid = torch.stack([gi, gj], dim=1)
- selected_tbox = targets[i][:, 2:6] * pre_gen_gains[i]
- selected_tbox[:, :2] -= grid
-
- #pxy = ps[:, :2].sigmoid() * 2. - 0.5
- ##pxy = ps[:, :2].sigmoid() * 3. - 1.
- #pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i]
- #pbox = torch.cat((pxy, pwh), 1) # predicted box
-
- #x_loss, px = xy_bin_sigmoid.training_loss(ps[..., 0:12], tbox[i][..., 0])
- #y_loss, py = xy_bin_sigmoid.training_loss(ps[..., 12:24], tbox[i][..., 1])
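- # Width/height go through SigmoidBin: training_loss() returns the bin loss plus the decoded w/h (relative to the anchor), rescaled by the anchor just below.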
- w_loss, pw = self.wh_bin_sigmoid.training_loss(ps[..., 2:(3+self.bin_count)], selected_tbox[..., 2] / anchors[i][..., 0])
- h_loss, ph = self.wh_bin_sigmoid.training_loss(ps[..., (3+self.bin_count):obj_idx], selected_tbox[..., 3] / anchors[i][..., 1])
-
- pw *= anchors[i][..., 0]
- ph *= anchors[i][..., 1]
-
- px = ps[:, 0].sigmoid() * 2. - 0.5
- py = ps[:, 1].sigmoid() * 2. - 0.5
-
- lbox += w_loss + h_loss # + x_loss + y_loss
-
- #print(f"\n px = {px.shape}, py = {py.shape}, pw = {pw.shape}, ph = {ph.shape} \n")
-
- pbox = torch.cat((px.unsqueeze(1), py.unsqueeze(1), pw.unsqueeze(1), ph.unsqueeze(1)), 1).to(device) # predicted box
-
-
-
-
- iou = bbox_iou(pbox.T, selected_tbox, x1y1x2y2=False, CIoU=True) # iou(prediction, target)
- lbox += (1.0 - iou).mean() # iou loss
-
- # Objectness
- tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * iou.detach().clamp(0).type(tobj.dtype) # iou ratio
-
- # Classification
- selected_tcls = targets[i][:, 1].long()
- if self.nc > 1: # cls loss (only if multiple classes)
- t = torch.full_like(ps[:, (1+obj_idx):], self.cn, device=device) # targets
- t[range(n), selected_tcls] = self.cp
- lcls += self.BCEcls(ps[:, (1+obj_idx):], t) # BCE
-
- # Append targets to text file
- # with open('targets.txt', 'a') as file:
- # [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)]
-
- obji = self.BCEobj(pi[..., obj_idx], tobj)
- lobj += obji * self.balance[i] # obj loss
- if self.autobalance:
- self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item()
-
- if self.autobalance:
- self.balance = [x / self.balance[self.ssi] for x in self.balance]
- lbox *= self.hyp['box']
- lobj *= self.hyp['obj']
- lcls *= self.hyp['cls']
- bs = tobj.shape[0] # batch size
-
- loss = lbox + lobj + lcls
- return loss * bs, torch.cat((lbox, lobj, lcls, loss)).detach()
-
- def build_targets(self, p, targets, imgs):
-
- #indices, anch = self.find_positive(p, targets)
- indices, anch = self.find_3_positive(p, targets)
- #indices, anch = self.find_4_positive(p, targets)
- #indices, anch = self.find_5_positive(p, targets)
- #indices, anch = self.find_9_positive(p, targets)
-
- matching_bs = [[] for pp in p]
- matching_as = [[] for pp in p]
- matching_gjs = [[] for pp in p]
- matching_gis = [[] for pp in p]
- matching_targets = [[] for pp in p]
- matching_anchs = [[] for pp in p]
-
- nl = len(p)
-
- for batch_idx in range(p[0].shape[0]):
-
- b_idx = targets[:, 0]==batch_idx
- this_target = targets[b_idx]
- if this_target.shape[0] == 0:
- continue
-
- txywh = this_target[:, 2:6] * imgs[batch_idx].shape[1]
- txyxy = xywh2xyxy(txywh)
-
- pxyxys = []
- p_cls = []
- p_obj = []
- from_which_layer = []
- all_b = []
- all_a = []
- all_gj = []
- all_gi = []
- all_anch = []
-
- for i, pi in enumerate(p):
-
- obj_idx = self.wh_bin_sigmoid.get_length()*2 + 2
-
- b, a, gj, gi = indices[i]
- idx = (b == batch_idx)
- b, a, gj, gi = b[idx], a[idx], gj[idx], gi[idx]
- all_b.append(b)
- all_a.append(a)
- all_gj.append(gj)
- all_gi.append(gi)
- all_anch.append(anch[i][idx])
- from_which_layer.append(torch.ones(size=(len(b),)) * i)
-
- fg_pred = pi[b, a, gj, gi]
- p_obj.append(fg_pred[:, obj_idx:(obj_idx+1)])
- p_cls.append(fg_pred[:, (obj_idx+1):])
-
- grid = torch.stack([gi, gj], dim=1)
- pxy = (fg_pred[:, :2].sigmoid() * 2. - 0.5 + grid) * self.stride[i] #/ 8.
- #pwh = (fg_pred[:, 2:4].sigmoid() * 2) ** 2 * anch[i][idx] * self.stride[i] #/ 8.
- pw = self.wh_bin_sigmoid.forward(fg_pred[..., 2:(3+self.bin_count)].sigmoid()) * anch[i][idx][:, 0] * self.stride[i]
- ph = self.wh_bin_sigmoid.forward(fg_pred[..., (3+self.bin_count):obj_idx].sigmoid()) * anch[i][idx][:, 1] * self.stride[i]
-
- pxywh = torch.cat([pxy, pw.unsqueeze(1), ph.unsqueeze(1)], dim=-1)
- pxyxy = xywh2xyxy(pxywh)
- pxyxys.append(pxyxy)
-
- pxyxys = torch.cat(pxyxys, dim=0)
- if pxyxys.shape[0] == 0:
- continue
- p_obj = torch.cat(p_obj, dim=0)
- p_cls = torch.cat(p_cls, dim=0)
- from_which_layer = torch.cat(from_which_layer, dim=0)
- all_b = torch.cat(all_b, dim=0)
- all_a = torch.cat(all_a, dim=0)
- all_gj = torch.cat(all_gj, dim=0)
- all_gi = torch.cat(all_gi, dim=0)
- all_anch = torch.cat(all_anch, dim=0)
-
- pair_wise_iou = box_iou(txyxy, pxyxys)
-
- pair_wise_iou_loss = -torch.log(pair_wise_iou + 1e-8)
-
- top_k, _ = torch.topk(pair_wise_iou, min(10, pair_wise_iou.shape[1]), dim=1)
- dynamic_ks = torch.clamp(top_k.sum(1).int(), min=1)
-
- gt_cls_per_image = (
- F.one_hot(this_target[:, 1].to(torch.int64), self.nc)
- .float()
- .unsqueeze(1)
- .repeat(1, pxyxys.shape[0], 1)
- )
-
- num_gt = this_target.shape[0]
- cls_preds_ = (
- p_cls.float().unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()
- * p_obj.unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()
- )
-
- y = cls_preds_.sqrt_()
- pair_wise_cls_loss = F.binary_cross_entropy_with_logits(
- torch.log(y/(1-y)) , gt_cls_per_image, reduction="none"
- ).sum(-1)
- del cls_preds_
-
- cost = (
- pair_wise_cls_loss
- + 3.0 * pair_wise_iou_loss
- )
-
- matching_matrix = torch.zeros_like(cost)
-
- for gt_idx in range(num_gt):
- _, pos_idx = torch.topk(
- cost[gt_idx], k=dynamic_ks[gt_idx].item(), largest=False
- )
- matching_matrix[gt_idx][pos_idx] = 1.0
-
- del top_k, dynamic_ks
- anchor_matching_gt = matching_matrix.sum(0)
- if (anchor_matching_gt > 1).sum() > 0:
- _, cost_argmin = torch.min(cost[:, anchor_matching_gt > 1], dim=0)
- matching_matrix[:, anchor_matching_gt > 1] *= 0.0
- matching_matrix[cost_argmin, anchor_matching_gt > 1] = 1.0
- fg_mask_inboxes = matching_matrix.sum(0) > 0.0
- matched_gt_inds = matching_matrix[:, fg_mask_inboxes].argmax(0)
-
- from_which_layer = from_which_layer[fg_mask_inboxes]
- all_b = all_b[fg_mask_inboxes]
- all_a = all_a[fg_mask_inboxes]
- all_gj = all_gj[fg_mask_inboxes]
- all_gi = all_gi[fg_mask_inboxes]
- all_anch = all_anch[fg_mask_inboxes]
-
- this_target = this_target[matched_gt_inds]
-
- for i in range(nl):
- layer_idx = from_which_layer == i
- matching_bs[i].append(all_b[layer_idx])
- matching_as[i].append(all_a[layer_idx])
- matching_gjs[i].append(all_gj[layer_idx])
- matching_gis[i].append(all_gi[layer_idx])
- matching_targets[i].append(this_target[layer_idx])
- matching_anchs[i].append(all_anch[layer_idx])
-
- for i in range(nl):
- matching_bs[i] = torch.cat(matching_bs[i], dim=0)
- matching_as[i] = torch.cat(matching_as[i], dim=0)
- matching_gjs[i] = torch.cat(matching_gjs[i], dim=0)
- matching_gis[i] = torch.cat(matching_gis[i], dim=0)
- matching_targets[i] = torch.cat(matching_targets[i], dim=0)
- matching_anchs[i] = torch.cat(matching_anchs[i], dim=0)
-
- return matching_bs, matching_as, matching_gjs, matching_gis, matching_targets, matching_anchs
-
- def find_3_positive(self, p, targets):
- # Build targets for compute_loss(), input targets(image,class,x,y,w,h)
- na, nt = self.na, targets.shape[0] # number of anchors, targets
- indices, anch = [], []
- gain = torch.ones(7, device=targets.device) # normalized to gridspace gain
- ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt)
- targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices
-
- g = 0.5 # bias
- off = torch.tensor([[0, 0],
- [1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m
- # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm
- ], device=targets.device).float() * g # offsets
-
- for i in range(self.nl):
- anchors = self.anchors[i]
- gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain
-
- # Match targets to anchors
- t = targets * gain
- if nt:
- # Matches
- r = t[:, :, 4:6] / anchors[:, None] # wh ratio
- j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t'] # compare
- # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2))
- t = t[j] # filter
-
- # Offsets
- gxy = t[:, 2:4] # grid xy
- gxi = gain[[2, 3]] - gxy # inverse
- j, k = ((gxy % 1. < g) & (gxy > 1.)).T
- l, m = ((gxi % 1. < g) & (gxi > 1.)).T
- j = torch.stack((torch.ones_like(j), j, k, l, m))
- t = t.repeat((5, 1, 1))[j]
- offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j]
- else:
- t = targets[0]
- offsets = 0
-
- # Define
- b, c = t[:, :2].long().T # image, class
- gxy = t[:, 2:4] # grid xy
- gwh = t[:, 4:6] # grid wh
- gij = (gxy - offsets).long()
- gi, gj = gij.T # grid xy indices
-
- # Append
- a = t[:, 6].long() # anchor indices
- indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices
- anch.append(anchors[a]) # anchors
-
- return indices, anch
diff --git a/spaces/ispast/Genshin_MB_VITS_TTS/modules.py b/spaces/ispast/Genshin_MB_VITS_TTS/modules.py
deleted file mode 100644
index 9c7fd9cd6eb8b7e0ec0e08957e970744a374a924..0000000000000000000000000000000000000000
--- a/spaces/ispast/Genshin_MB_VITS_TTS/modules.py
+++ /dev/null
@@ -1,390 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
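- # WaveNet-style stack of dilated 1D convolutions with gated activations and residual/skip connections; g is an optional global conditioning input (gin_channels).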
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
- self.hidden_channels =hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
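- # Elementwise log flow: the forward pass returns log(x) (clamped) together with its Jacobian log-determinant; the reverse pass applies exp.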
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
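- # Affine coupling layer: x0 conditions a WN encoder that predicts the shift (and, unless mean_only, the log-scale) applied to x1; logdet is the sum of the log-scales.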
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
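- # Coupling flow: half of the channels go through a DDSConv network that parameterizes a piecewise rational-quadratic spline transform applied to the other half.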
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
diff --git a/spaces/jbilcke-hf/LifeSim/src/components/business/video-renderer.tsx b/spaces/jbilcke-hf/LifeSim/src/components/business/video-renderer.tsx
deleted file mode 100644
index aca52150b18c84953e0638dd205cf4a649b678b7..0000000000000000000000000000000000000000
--- a/spaces/jbilcke-hf/LifeSim/src/components/business/video-renderer.tsx
+++ /dev/null
@@ -1,22 +0,0 @@
-"use client"
-
-export const VideoRenderer = ({ url }: { url?: string }) => {
-
- if (!url) {
- return
-
Rendering first frames.. (might take around 30s)
-
- }
-
- return (
-
-
-
- )
-}
\ No newline at end of file
diff --git a/spaces/jbilcke-hf/VideoQuest/src/app/games/pharaoh.ts b/spaces/jbilcke-hf/VideoQuest/src/app/games/pharaoh.ts
deleted file mode 100644
index 4261df7f35374ce7fd0406451173f047eb8d5ca2..0000000000000000000000000000000000000000
--- a/spaces/jbilcke-hf/VideoQuest/src/app/games/pharaoh.ts
+++ /dev/null
@@ -1,74 +0,0 @@
-import { macondo } from "@/lib/fonts"
-import { Game } from "./types"
-import { InventoryItem } from "../../types"
-
-const initialSituation = [
- `looking at a beautiful pyramid, ancient egypt, during golden hour, surrounded by sand dunes, near the Nile`,
-].join(", ")
-
-const initialActionnables = [
- "pyramid",
- "person",
- "rocks",
- "dune",
- "sceptre",
- "tree",
- "river",
- "boat",
- "sun"
-]
-
-const inventory: InventoryItem[] = [
- {
- name: "bowl",
- title: "Bowl",
- caption: "",
- description: "A bowl. To eat things."
- },
- {
- name: "box",
- title: "Box",
- caption: "",
- description: "Full of mysteries."
- },
- {
- name: "golden-beetle",
- title: "Beetle pendant",
- caption: "",
- description: "This pendant has a mysterious aura.."
- },
- {
- name: "staff",
- title: "Staff",
- caption: "",
- description: "This used to belong to a magician."
- },
-]
-
-export const game: Game = {
- title: "Pharaoh",
- type: "pharaoh",
- description: [
- "The game is a role playing adventure set in ancient egypt.",
- "The player is Ahmose, a scribe asked by the Pharaoh to investigate ancient ruins about an unknown deity.",
- "The player can click around to move to new scenes, find or activate artifacts.",
- "They can also use objects from their inventory.",
- ],
- engines: [
- "cartesian_image",
- "cartesian_video",
- "spherical_image",
- ],
- className: macondo.className,
- initialSituation,
- initialActionnables,
- inventory,
- getScenePrompt: (situation?: string) => [
- `Screenshot from a videogame`,
- `unreal engine`,
- `ancient egypt`,
- `first person`,
- situation || initialSituation,
- ]
-}
-
diff --git a/spaces/jbilcke-hf/webapp-factory-llama-node/public/css/tailwind-typography@0.1.2.css b/spaces/jbilcke-hf/webapp-factory-llama-node/public/css/tailwind-typography@0.1.2.css
deleted file mode 100644
index 6824ef97438023939b62642ce3a28a69cc9e1176..0000000000000000000000000000000000000000
--- a/spaces/jbilcke-hf/webapp-factory-llama-node/public/css/tailwind-typography@0.1.2.css
+++ /dev/null
@@ -1 +0,0 @@
-.prose{color:#4a5568;max-width:65ch;font-size:1rem;line-height:1.75}.prose .lead{color:#4a5568;font-size:1.25em;line-height:1.6;margin-top:1.2em;margin-bottom:1.2em}.prose a{color:#1a202c;text-decoration:underline}.prose strong{color:#1a202c;font-weight:600}.prose ol{counter-reset:list-counter;margin-top:1.25em;margin-bottom:1.25em}.prose ol>li{position:relative;counter-increment:list-counter;padding-left:1.75em}.prose ol>li::before{content:counter(list-counter) ".";position:absolute;font-weight:400;color:#718096}.prose ul>li{position:relative;padding-left:1.75em}.prose ul>li::before{content:"";position:absolute;background-color:#cbd5e0;border-radius:50%;width:.375em;height:.375em;top:calc(.875em - .1875em);left:.25em}.prose hr{border-color:#e2e8f0;border-top-width:1px;margin-top:3em;margin-bottom:3em}.prose blockquote{font-weight:500;font-style:italic;color:#1a202c;border-left-width:.25rem;border-left-color:#e2e8f0;quotes:"\201C""\201D""\2018""\2019";margin-top:1.6em;margin-bottom:1.6em;padding-left:1em}.prose blockquote p:first-of-type::before{content:open-quote}.prose blockquote p:last-of-type::after{content:close-quote}.prose h1{color:#1a202c;font-weight:800;font-size:2.25em;margin-top:0;margin-bottom:.8888889em;line-height:1.1111111}.prose h2{color:#1a202c;font-weight:700;font-size:1.5em;margin-top:2em;margin-bottom:1em;line-height:1.3333333}.prose h3{color:#1a202c;font-weight:600;font-size:1.25em;margin-top:1.6em;margin-bottom:.6em;line-height:1.6}.prose h4{color:#1a202c;font-weight:600;margin-top:1.5em;margin-bottom:.5em;line-height:1.5}.prose figure figcaption{color:#718096;font-size:.875em;line-height:1.4285714;margin-top:.8571429em}.prose code{font-family:Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;color:#1a202c;font-weight:600;font-size:.875em}.prose code::before{content:"`"}.prose code::after{content:"`"}.prose pre{color:#e2e8f0;font-family:Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;background-color:#2d3748;overflow-x:auto;font-size:.875em;line-height:1.7142857;margin-top:1.7142857em;margin-bottom:1.7142857em;border-radius:.375rem;padding-top:.8571429em;padding-right:1.1428571em;padding-bottom:.8571429em;padding-left:1.1428571em}.prose pre code{background-color:transparent;border-width:0;border-radius:0;padding:0;font-weight:400;color:inherit;font-size:inherit;font-family:inherit;line-height:inherit}.prose pre code::before{content:""}.prose pre code::after{content:""}.prose table{width:100%;table-layout:auto;text-align:left;margin-top:2em;margin-bottom:2em;font-size:.875em;line-height:1.7142857}.prose thead{color:#1a202c;font-weight:600;border-bottom-width:1px;border-bottom-color:#cbd5e0}.prose thead th{vertical-align:bottom;padding-right:.5714286em;padding-bottom:.5714286em;padding-left:.5714286em}.prose tbody tr{border-bottom-width:1px;border-bottom-color:#e2e8f0}.prose tbody tr:last-child{border-bottom-width:0}.prose tbody td{vertical-align:top;padding-top:.5714286em;padding-right:.5714286em;padding-bottom:.5714286em;padding-left:.5714286em}.prose p{margin-top:1.25em;margin-bottom:1.25em}.prose img{margin-top:2em;margin-bottom:2em}.prose video{margin-top:2em;margin-bottom:2em}.prose figure{margin-top:2em;margin-bottom:2em}.prose figure>*{margin-top:0;margin-bottom:0}.prose h2 code{font-size:.875em}.prose h3 code{font-size:.9em}.prose ul{margin-top:1.25em;margin-bottom:1.25em}.prose li{margin-top:.5em;margin-bottom:.5em}.prose ol>li:before{left:0}.prose>ul>li 
p{margin-top:.75em;margin-bottom:.75em}.prose>ul>li>:first-child{margin-top:1.25em}.prose>ul>li>:last-child{margin-bottom:1.25em}.prose>ol>li>:first-child{margin-top:1.25em}.prose>ol>li>:last-child{margin-bottom:1.25em}.prose ol ol,.prose ol ul,.prose ul ol,.prose ul ul{margin-top:.75em;margin-bottom:.75em}.prose hr+*{margin-top:0}.prose h2+*{margin-top:0}.prose h3+*{margin-top:0}.prose h4+*{margin-top:0}.prose thead th:first-child{padding-left:0}.prose thead th:last-child{padding-right:0}.prose tbody td:first-child{padding-left:0}.prose tbody td:last-child{padding-right:0}.prose>:first-child{margin-top:0}.prose>:last-child{margin-bottom:0}.prose-sm{font-size:.875rem;line-height:1.7142857}.prose-sm p{margin-top:1.1428571em;margin-bottom:1.1428571em}.prose-sm .lead{font-size:1.2857143em;line-height:1.5555556;margin-top:.8888889em;margin-bottom:.8888889em}.prose-sm blockquote{margin-top:1.3333333em;margin-bottom:1.3333333em;padding-left:1.1111111em}.prose-sm h1{font-size:2.1428571em;margin-top:0;margin-bottom:.8em;line-height:1.2}.prose-sm h2{font-size:1.4285714em;margin-top:1.6em;margin-bottom:.8em;line-height:1.4}.prose-sm h3{font-size:1.2857143em;margin-top:1.5555556em;margin-bottom:.4444444em;line-height:1.5555556}.prose-sm h4{margin-top:1.4285714em;margin-bottom:.5714286em;line-height:1.4285714}.prose-sm img{margin-top:1.7142857em;margin-bottom:1.7142857em}.prose-sm video{margin-top:1.7142857em;margin-bottom:1.7142857em}.prose-sm figure{margin-top:1.7142857em;margin-bottom:1.7142857em}.prose-sm figure>*{margin-top:0;margin-bottom:0}.prose-sm figure figcaption{font-size:.8571429em;line-height:1.3333333;margin-top:.6666667em}.prose-sm code{font-size:.8571429em}.prose-sm h2 code{font-size:.9em}.prose-sm h3 code{font-size:.8888889em}.prose-sm pre{font-size:.8571429em;line-height:1.6666667;margin-top:1.6666667em;margin-bottom:1.6666667em;border-radius:.25rem;padding-top:.6666667em;padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.prose-sm ol{margin-top:1.1428571em;margin-bottom:1.1428571em}.prose-sm ul{margin-top:1.1428571em;margin-bottom:1.1428571em}.prose-sm li{margin-top:.2857143em;margin-bottom:.2857143em}.prose-sm ol>li{padding-left:1.5714286em}.prose-sm ol>li:before{left:0}.prose-sm ul>li{padding-left:1.5714286em}.prose-sm ul>li::before{height:.3571429em;width:.3571429em;top:calc(.8571429em - .1785714em);left:.2142857em}.prose-sm>ul>li p{margin-top:.5714286em;margin-bottom:.5714286em}.prose-sm>ul>li>:first-child{margin-top:1.1428571em}.prose-sm>ul>li>:last-child{margin-bottom:1.1428571em}.prose-sm>ol>li>:first-child{margin-top:1.1428571em}.prose-sm>ol>li>:last-child{margin-bottom:1.1428571em}.prose-sm ol ol,.prose-sm ol ul,.prose-sm ul ol,.prose-sm ul ul{margin-top:.5714286em;margin-bottom:.5714286em}.prose-sm hr{margin-top:2.8571429em;margin-bottom:2.8571429em}.prose-sm hr+*{margin-top:0}.prose-sm h2+*{margin-top:0}.prose-sm h3+*{margin-top:0}.prose-sm h4+*{margin-top:0}.prose-sm table{font-size:.8571429em;line-height:1.5}.prose-sm thead th{padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.prose-sm thead th:first-child{padding-left:0}.prose-sm thead th:last-child{padding-right:0}.prose-sm tbody td{padding-top:.6666667em;padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.prose-sm tbody td:first-child{padding-left:0}.prose-sm tbody td:last-child{padding-right:0}.prose-sm>:first-child{margin-top:0}.prose-sm>:last-child{margin-bottom:0}.prose-lg{font-size:1.125rem;line-height:1.7777778}.prose-lg 
p{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-lg .lead{font-size:1.2222222em;line-height:1.4545455;margin-top:1.0909091em;margin-bottom:1.0909091em}.prose-lg blockquote{margin-top:1.6666667em;margin-bottom:1.6666667em;padding-left:1em}.prose-lg h1{font-size:2.6666667em;margin-top:0;margin-bottom:.8333333em;line-height:1}.prose-lg h2{font-size:1.6666667em;margin-top:1.8666667em;margin-bottom:1.0666667em;line-height:1.3333333}.prose-lg h3{font-size:1.3333333em;margin-top:1.6666667em;margin-bottom:.6666667em;line-height:1.5}.prose-lg h4{margin-top:1.7777778em;margin-bottom:.4444444em;line-height:1.5555556}.prose-lg img{margin-top:1.7777778em;margin-bottom:1.7777778em}.prose-lg video{margin-top:1.7777778em;margin-bottom:1.7777778em}.prose-lg figure{margin-top:1.7777778em;margin-bottom:1.7777778em}.prose-lg figure>*{margin-top:0;margin-bottom:0}.prose-lg figure figcaption{font-size:.8888889em;line-height:1.5;margin-top:1em}.prose-lg code{font-size:.8888889em}.prose-lg h2 code{font-size:.8666667em}.prose-lg h3 code{font-size:.875em}.prose-lg pre{font-size:.8888889em;line-height:1.75;margin-top:2em;margin-bottom:2em;border-radius:.375rem;padding-top:1em;padding-right:1.5em;padding-bottom:1em;padding-left:1.5em}.prose-lg ol{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-lg ul{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-lg li{margin-top:.6666667em;margin-bottom:.6666667em}.prose-lg ol>li{padding-left:1.6666667em}.prose-lg ol>li:before{left:0}.prose-lg ul>li{padding-left:1.6666667em}.prose-lg ul>li::before{width:.3333333em;height:.3333333em;top:calc(.8888889em - .1666667em);left:.2222222em}.prose-lg>ul>li p{margin-top:.8888889em;margin-bottom:.8888889em}.prose-lg>ul>li>:first-child{margin-top:1.3333333em}.prose-lg>ul>li>:last-child{margin-bottom:1.3333333em}.prose-lg>ol>li>:first-child{margin-top:1.3333333em}.prose-lg>ol>li>:last-child{margin-bottom:1.3333333em}.prose-lg ol ol,.prose-lg ol ul,.prose-lg ul ol,.prose-lg ul ul{margin-top:.8888889em;margin-bottom:.8888889em}.prose-lg hr{margin-top:3.1111111em;margin-bottom:3.1111111em}.prose-lg hr+*{margin-top:0}.prose-lg h2+*{margin-top:0}.prose-lg h3+*{margin-top:0}.prose-lg h4+*{margin-top:0}.prose-lg table{font-size:.8888889em;line-height:1.5}.prose-lg thead th{padding-right:.75em;padding-bottom:.75em;padding-left:.75em}.prose-lg thead th:first-child{padding-left:0}.prose-lg thead th:last-child{padding-right:0}.prose-lg tbody td{padding-top:.75em;padding-right:.75em;padding-bottom:.75em;padding-left:.75em}.prose-lg tbody td:first-child{padding-left:0}.prose-lg tbody td:last-child{padding-right:0}.prose-lg>:first-child{margin-top:0}.prose-lg>:last-child{margin-bottom:0}.prose-xl{font-size:1.25rem;line-height:1.8}.prose-xl p{margin-top:1.2em;margin-bottom:1.2em}.prose-xl .lead{font-size:1.2em;line-height:1.5;margin-top:1em;margin-bottom:1em}.prose-xl blockquote{margin-top:1.6em;margin-bottom:1.6em;padding-left:1.0666667em}.prose-xl h1{font-size:2.8em;margin-top:0;margin-bottom:.8571429em;line-height:1}.prose-xl h2{font-size:1.8em;margin-top:1.5555556em;margin-bottom:.8888889em;line-height:1.1111111}.prose-xl h3{font-size:1.5em;margin-top:1.6em;margin-bottom:.6666667em;line-height:1.3333333}.prose-xl h4{margin-top:1.8em;margin-bottom:.6em;line-height:1.6}.prose-xl img{margin-top:2em;margin-bottom:2em}.prose-xl video{margin-top:2em;margin-bottom:2em}.prose-xl figure{margin-top:2em;margin-bottom:2em}.prose-xl figure>*{margin-top:0;margin-bottom:0}.prose-xl figure 
figcaption{font-size:.9em;line-height:1.5555556;margin-top:1em}.prose-xl code{font-size:.9em}.prose-xl h2 code{font-size:.8611111em}.prose-xl h3 code{font-size:.9em}.prose-xl pre{font-size:.9em;line-height:1.7777778;margin-top:2em;margin-bottom:2em;border-radius:.5rem;padding-top:1.1111111em;padding-right:1.3333333em;padding-bottom:1.1111111em;padding-left:1.3333333em}.prose-xl ol{margin-top:1.2em;margin-bottom:1.2em}.prose-xl ul{margin-top:1.2em;margin-bottom:1.2em}.prose-xl li{margin-top:.6em;margin-bottom:.6em}.prose-xl ol>li{padding-left:1.8em}.prose-xl ol>li:before{left:0}.prose-xl ul>li{padding-left:1.8em}.prose-xl ul>li::before{width:.35em;height:.35em;top:calc(.9em - .175em);left:.25em}.prose-xl>ul>li p{margin-top:.8em;margin-bottom:.8em}.prose-xl>ul>li>:first-child{margin-top:1.2em}.prose-xl>ul>li>:last-child{margin-bottom:1.2em}.prose-xl>ol>li>:first-child{margin-top:1.2em}.prose-xl>ol>li>:last-child{margin-bottom:1.2em}.prose-xl ol ol,.prose-xl ol ul,.prose-xl ul ol,.prose-xl ul ul{margin-top:.8em;margin-bottom:.8em}.prose-xl hr{margin-top:2.8em;margin-bottom:2.8em}.prose-xl hr+*{margin-top:0}.prose-xl h2+*{margin-top:0}.prose-xl h3+*{margin-top:0}.prose-xl h4+*{margin-top:0}.prose-xl table{font-size:.9em;line-height:1.5555556}.prose-xl thead th{padding-right:.6666667em;padding-bottom:.8888889em;padding-left:.6666667em}.prose-xl thead th:first-child{padding-left:0}.prose-xl thead th:last-child{padding-right:0}.prose-xl tbody td{padding-top:.8888889em;padding-right:.6666667em;padding-bottom:.8888889em;padding-left:.6666667em}.prose-xl tbody td:first-child{padding-left:0}.prose-xl tbody td:last-child{padding-right:0}.prose-xl>:first-child{margin-top:0}.prose-xl>:last-child{margin-bottom:0}.prose-2xl{font-size:1.5rem;line-height:1.6666667}.prose-2xl p{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-2xl .lead{font-size:1.25em;line-height:1.4666667;margin-top:1.0666667em;margin-bottom:1.0666667em}.prose-2xl blockquote{margin-top:1.7777778em;margin-bottom:1.7777778em;padding-left:1.1111111em}.prose-2xl h1{font-size:2.6666667em;margin-top:0;margin-bottom:.875em;line-height:1}.prose-2xl h2{font-size:2em;margin-top:1.5em;margin-bottom:.8333333em;line-height:1.0833333}.prose-2xl h3{font-size:1.5em;margin-top:1.5555556em;margin-bottom:.6666667em;line-height:1.2222222}.prose-2xl h4{margin-top:1.6666667em;margin-bottom:.6666667em;line-height:1.5}.prose-2xl img{margin-top:2em;margin-bottom:2em}.prose-2xl video{margin-top:2em;margin-bottom:2em}.prose-2xl figure{margin-top:2em;margin-bottom:2em}.prose-2xl figure>*{margin-top:0;margin-bottom:0}.prose-2xl figure figcaption{font-size:.8333333em;line-height:1.6;margin-top:1em}.prose-2xl code{font-size:.8333333em}.prose-2xl h2 code{font-size:.875em}.prose-2xl h3 code{font-size:.8888889em}.prose-2xl pre{font-size:.8333333em;line-height:1.8;margin-top:2em;margin-bottom:2em;border-radius:.5rem;padding-top:1.2em;padding-right:1.6em;padding-bottom:1.2em;padding-left:1.6em}.prose-2xl ol{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-2xl ul{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-2xl li{margin-top:.5em;margin-bottom:.5em}.prose-2xl ol>li{padding-left:1.6666667em}.prose-2xl ol>li:before{left:0}.prose-2xl ul>li{padding-left:1.6666667em}.prose-2xl ul>li::before{width:.3333333em;height:.3333333em;top:calc(.8333333em - .1666667em);left:.25em}.prose-2xl>ul>li 
p{margin-top:.8333333em;margin-bottom:.8333333em}.prose-2xl>ul>li>:first-child{margin-top:1.3333333em}.prose-2xl>ul>li>:last-child{margin-bottom:1.3333333em}.prose-2xl>ol>li>:first-child{margin-top:1.3333333em}.prose-2xl>ol>li>:last-child{margin-bottom:1.3333333em}.prose-2xl ol ol,.prose-2xl ol ul,.prose-2xl ul ol,.prose-2xl ul ul{margin-top:.6666667em;margin-bottom:.6666667em}.prose-2xl hr{margin-top:3em;margin-bottom:3em}.prose-2xl hr+*{margin-top:0}.prose-2xl h2+*{margin-top:0}.prose-2xl h3+*{margin-top:0}.prose-2xl h4+*{margin-top:0}.prose-2xl table{font-size:.8333333em;line-height:1.4}.prose-2xl thead th{padding-right:.6em;padding-bottom:.8em;padding-left:.6em}.prose-2xl thead th:first-child{padding-left:0}.prose-2xl thead th:last-child{padding-right:0}.prose-2xl tbody td{padding-top:.8em;padding-right:.6em;padding-bottom:.8em;padding-left:.6em}.prose-2xl tbody td:first-child{padding-left:0}.prose-2xl tbody td:last-child{padding-right:0}.prose-2xl>:first-child{margin-top:0}.prose-2xl>:last-child{margin-bottom:0}@media (min-width:640px){.sm\:prose{color:#4a5568;max-width:65ch;font-size:1rem;line-height:1.75}.prose .sm\:lead{color:#4a5568;font-size:1.25em;line-height:1.6;margin-top:1.2em;margin-bottom:1.2em}.sm\:prose a{color:#1a202c;text-decoration:underline}.sm\:prose strong{color:#1a202c;font-weight:600}.sm\:prose ol{counter-reset:list-counter;margin-top:1.25em;margin-bottom:1.25em}.sm\:prose ol>li{position:relative;counter-increment:list-counter;padding-left:1.75em}.sm\:prose ol>li::before{content:counter(list-counter) ".";position:absolute;font-weight:400;color:#718096}.sm\:prose ul>li{position:relative;padding-left:1.75em}.sm\:prose ul>li::before{content:"";position:absolute;background-color:#cbd5e0;border-radius:50%;width:.375em;height:.375em;top:calc(.875em - .1875em);left:.25em}.sm\:prose hr{border-color:#e2e8f0;border-top-width:1px;margin-top:3em;margin-bottom:3em}.sm\:prose blockquote{font-weight:500;font-style:italic;color:#1a202c;border-left-width:.25rem;border-left-color:#e2e8f0;quotes:"\201C""\201D""\2018""\2019";margin-top:1.6em;margin-bottom:1.6em;padding-left:1em}.sm\:prose blockquote p:first-of-type::before{content:open-quote}.sm\:prose blockquote p:last-of-type::after{content:close-quote}.sm\:prose h1{color:#1a202c;font-weight:800;font-size:2.25em;margin-top:0;margin-bottom:.8888889em;line-height:1.1111111}.sm\:prose h2{color:#1a202c;font-weight:700;font-size:1.5em;margin-top:2em;margin-bottom:1em;line-height:1.3333333}.sm\:prose h3{color:#1a202c;font-weight:600;font-size:1.25em;margin-top:1.6em;margin-bottom:.6em;line-height:1.6}.sm\:prose h4{color:#1a202c;font-weight:600;margin-top:1.5em;margin-bottom:.5em;line-height:1.5}.sm\:prose figure figcaption{color:#718096;font-size:.875em;line-height:1.4285714;margin-top:.8571429em}.sm\:prose code{font-family:Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;color:#1a202c;font-weight:600;font-size:.875em}.sm\:prose code::before{content:"`"}.sm\:prose code::after{content:"`"}.sm\:prose pre{color:#e2e8f0;font-family:Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;background-color:#2d3748;overflow-x:auto;font-size:.875em;line-height:1.7142857;margin-top:1.7142857em;margin-bottom:1.7142857em;border-radius:.375rem;padding-top:.8571429em;padding-right:1.1428571em;padding-bottom:.8571429em;padding-left:1.1428571em}.sm\:prose pre code{background-color:transparent;border-width:0;border-radius:0;padding:0;font-weight:400;color:inherit;font-size:inherit;font-family:inherit;line-height:inherit}.sm\:prose 
pre code::before{content:""}.sm\:prose pre code::after{content:""}.sm\:prose table{width:100%;table-layout:auto;text-align:left;margin-top:2em;margin-bottom:2em;font-size:.875em;line-height:1.7142857}.sm\:prose thead{color:#1a202c;font-weight:600;border-bottom-width:1px;border-bottom-color:#cbd5e0}.sm\:prose thead th{vertical-align:bottom;padding-right:.5714286em;padding-bottom:.5714286em;padding-left:.5714286em}.sm\:prose tbody tr{border-bottom-width:1px;border-bottom-color:#e2e8f0}.sm\:prose tbody tr:last-child{border-bottom-width:0}.sm\:prose tbody td{vertical-align:top;padding-top:.5714286em;padding-right:.5714286em;padding-bottom:.5714286em;padding-left:.5714286em}.sm\:prose p{margin-top:1.25em;margin-bottom:1.25em}.sm\:prose img{margin-top:2em;margin-bottom:2em}.sm\:prose video{margin-top:2em;margin-bottom:2em}.sm\:prose figure{margin-top:2em;margin-bottom:2em}.sm\:prose figure>*{margin-top:0;margin-bottom:0}.sm\:prose h2 code{font-size:.875em}.sm\:prose h3 code{font-size:.9em}.sm\:prose ul{margin-top:1.25em;margin-bottom:1.25em}.sm\:prose li{margin-top:.5em;margin-bottom:.5em}.sm\:prose ol>li:before{left:0}.sm\:prose>ul>li p{margin-top:.75em;margin-bottom:.75em}.sm\:prose>ul>li>:first-child{margin-top:1.25em}.sm\:prose>ul>li>:last-child{margin-bottom:1.25em}.sm\:prose>ol>li>:first-child{margin-top:1.25em}.sm\:prose>ol>li>:last-child{margin-bottom:1.25em}.sm\:prose ol ol,.sm\:prose ol ul,.sm\:prose ul ol,.sm\:prose ul ul{margin-top:.75em;margin-bottom:.75em}.sm\:prose hr+*{margin-top:0}.sm\:prose h2+*{margin-top:0}.sm\:prose h3+*{margin-top:0}.sm\:prose h4+*{margin-top:0}.sm\:prose thead th:first-child{padding-left:0}.sm\:prose thead th:last-child{padding-right:0}.sm\:prose tbody td:first-child{padding-left:0}.sm\:prose tbody td:last-child{padding-right:0}.sm\:prose>:first-child{margin-top:0}.sm\:prose>:last-child{margin-bottom:0}.sm\:prose-sm{font-size:.875rem;line-height:1.7142857}.sm\:prose-sm p{margin-top:1.1428571em;margin-bottom:1.1428571em}.prose-sm .sm\:lead{font-size:1.2857143em;line-height:1.5555556;margin-top:.8888889em;margin-bottom:.8888889em}.sm\:prose-sm blockquote{margin-top:1.3333333em;margin-bottom:1.3333333em;padding-left:1.1111111em}.sm\:prose-sm h1{font-size:2.1428571em;margin-top:0;margin-bottom:.8em;line-height:1.2}.sm\:prose-sm h2{font-size:1.4285714em;margin-top:1.6em;margin-bottom:.8em;line-height:1.4}.sm\:prose-sm h3{font-size:1.2857143em;margin-top:1.5555556em;margin-bottom:.4444444em;line-height:1.5555556}.sm\:prose-sm h4{margin-top:1.4285714em;margin-bottom:.5714286em;line-height:1.4285714}.sm\:prose-sm img{margin-top:1.7142857em;margin-bottom:1.7142857em}.sm\:prose-sm video{margin-top:1.7142857em;margin-bottom:1.7142857em}.sm\:prose-sm figure{margin-top:1.7142857em;margin-bottom:1.7142857em}.sm\:prose-sm figure>*{margin-top:0;margin-bottom:0}.sm\:prose-sm figure figcaption{font-size:.8571429em;line-height:1.3333333;margin-top:.6666667em}.sm\:prose-sm code{font-size:.8571429em}.sm\:prose-sm h2 code{font-size:.9em}.sm\:prose-sm h3 code{font-size:.8888889em}.sm\:prose-sm pre{font-size:.8571429em;line-height:1.6666667;margin-top:1.6666667em;margin-bottom:1.6666667em;border-radius:.25rem;padding-top:.6666667em;padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.sm\:prose-sm ol{margin-top:1.1428571em;margin-bottom:1.1428571em}.sm\:prose-sm ul{margin-top:1.1428571em;margin-bottom:1.1428571em}.sm\:prose-sm li{margin-top:.2857143em;margin-bottom:.2857143em}.sm\:prose-sm ol>li{padding-left:1.5714286em}.sm\:prose-sm ol>li:before{left:0}.sm\:prose-sm 
ul>li{padding-left:1.5714286em}.sm\:prose-sm ul>li::before{height:.3571429em;width:.3571429em;top:calc(.8571429em - .1785714em);left:.2142857em}.sm\:prose-sm>ul>li p{margin-top:.5714286em;margin-bottom:.5714286em}.sm\:prose-sm>ul>li>:first-child{margin-top:1.1428571em}.sm\:prose-sm>ul>li>:last-child{margin-bottom:1.1428571em}.sm\:prose-sm>ol>li>:first-child{margin-top:1.1428571em}.sm\:prose-sm>ol>li>:last-child{margin-bottom:1.1428571em}.sm\:prose-sm ol ol,.sm\:prose-sm ol ul,.sm\:prose-sm ul ol,.sm\:prose-sm ul ul{margin-top:.5714286em;margin-bottom:.5714286em}.sm\:prose-sm hr{margin-top:2.8571429em;margin-bottom:2.8571429em}.sm\:prose-sm hr+*{margin-top:0}.sm\:prose-sm h2+*{margin-top:0}.sm\:prose-sm h3+*{margin-top:0}.sm\:prose-sm h4+*{margin-top:0}.sm\:prose-sm table{font-size:.8571429em;line-height:1.5}.sm\:prose-sm thead th{padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.sm\:prose-sm thead th:first-child{padding-left:0}.sm\:prose-sm thead th:last-child{padding-right:0}.sm\:prose-sm tbody td{padding-top:.6666667em;padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.sm\:prose-sm tbody td:first-child{padding-left:0}.sm\:prose-sm tbody td:last-child{padding-right:0}.sm\:prose-sm>:first-child{margin-top:0}.sm\:prose-sm>:last-child{margin-bottom:0}.sm\:prose-lg{font-size:1.125rem;line-height:1.7777778}.sm\:prose-lg p{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-lg .sm\:lead{font-size:1.2222222em;line-height:1.4545455;margin-top:1.0909091em;margin-bottom:1.0909091em}.sm\:prose-lg blockquote{margin-top:1.6666667em;margin-bottom:1.6666667em;padding-left:1em}.sm\:prose-lg h1{font-size:2.6666667em;margin-top:0;margin-bottom:.8333333em;line-height:1}.sm\:prose-lg h2{font-size:1.6666667em;margin-top:1.8666667em;margin-bottom:1.0666667em;line-height:1.3333333}.sm\:prose-lg h3{font-size:1.3333333em;margin-top:1.6666667em;margin-bottom:.6666667em;line-height:1.5}.sm\:prose-lg h4{margin-top:1.7777778em;margin-bottom:.4444444em;line-height:1.5555556}.sm\:prose-lg img{margin-top:1.7777778em;margin-bottom:1.7777778em}.sm\:prose-lg video{margin-top:1.7777778em;margin-bottom:1.7777778em}.sm\:prose-lg figure{margin-top:1.7777778em;margin-bottom:1.7777778em}.sm\:prose-lg figure>*{margin-top:0;margin-bottom:0}.sm\:prose-lg figure figcaption{font-size:.8888889em;line-height:1.5;margin-top:1em}.sm\:prose-lg code{font-size:.8888889em}.sm\:prose-lg h2 code{font-size:.8666667em}.sm\:prose-lg h3 code{font-size:.875em}.sm\:prose-lg pre{font-size:.8888889em;line-height:1.75;margin-top:2em;margin-bottom:2em;border-radius:.375rem;padding-top:1em;padding-right:1.5em;padding-bottom:1em;padding-left:1.5em}.sm\:prose-lg ol{margin-top:1.3333333em;margin-bottom:1.3333333em}.sm\:prose-lg ul{margin-top:1.3333333em;margin-bottom:1.3333333em}.sm\:prose-lg li{margin-top:.6666667em;margin-bottom:.6666667em}.sm\:prose-lg ol>li{padding-left:1.6666667em}.sm\:prose-lg ol>li:before{left:0}.sm\:prose-lg ul>li{padding-left:1.6666667em}.sm\:prose-lg ul>li::before{width:.3333333em;height:.3333333em;top:calc(.8888889em - .1666667em);left:.2222222em}.sm\:prose-lg>ul>li p{margin-top:.8888889em;margin-bottom:.8888889em}.sm\:prose-lg>ul>li>:first-child{margin-top:1.3333333em}.sm\:prose-lg>ul>li>:last-child{margin-bottom:1.3333333em}.sm\:prose-lg>ol>li>:first-child{margin-top:1.3333333em}.sm\:prose-lg>ol>li>:last-child{margin-bottom:1.3333333em}.sm\:prose-lg ol ol,.sm\:prose-lg ol ul,.sm\:prose-lg ul ol,.sm\:prose-lg ul ul{margin-top:.8888889em;margin-bottom:.8888889em}.sm\:prose-lg 
hr{margin-top:3.1111111em;margin-bottom:3.1111111em}.sm\:prose-lg hr+*{margin-top:0}.sm\:prose-lg h2+*{margin-top:0}.sm\:prose-lg h3+*{margin-top:0}.sm\:prose-lg h4+*{margin-top:0}.sm\:prose-lg table{font-size:.8888889em;line-height:1.5}.sm\:prose-lg thead th{padding-right:.75em;padding-bottom:.75em;padding-left:.75em}.sm\:prose-lg thead th:first-child{padding-left:0}.sm\:prose-lg thead th:last-child{padding-right:0}.sm\:prose-lg tbody td{padding-top:.75em;padding-right:.75em;padding-bottom:.75em;padding-left:.75em}.sm\:prose-lg tbody td:first-child{padding-left:0}.sm\:prose-lg tbody td:last-child{padding-right:0}.sm\:prose-lg>:first-child{margin-top:0}.sm\:prose-lg>:last-child{margin-bottom:0}.sm\:prose-xl{font-size:1.25rem;line-height:1.8}.sm\:prose-xl p{margin-top:1.2em;margin-bottom:1.2em}.prose-xl .sm\:lead{font-size:1.2em;line-height:1.5;margin-top:1em;margin-bottom:1em}.sm\:prose-xl blockquote{margin-top:1.6em;margin-bottom:1.6em;padding-left:1.0666667em}.sm\:prose-xl h1{font-size:2.8em;margin-top:0;margin-bottom:.8571429em;line-height:1}.sm\:prose-xl h2{font-size:1.8em;margin-top:1.5555556em;margin-bottom:.8888889em;line-height:1.1111111}.sm\:prose-xl h3{font-size:1.5em;margin-top:1.6em;margin-bottom:.6666667em;line-height:1.3333333}.sm\:prose-xl h4{margin-top:1.8em;margin-bottom:.6em;line-height:1.6}.sm\:prose-xl img{margin-top:2em;margin-bottom:2em}.sm\:prose-xl video{margin-top:2em;margin-bottom:2em}.sm\:prose-xl figure{margin-top:2em;margin-bottom:2em}.sm\:prose-xl figure>*{margin-top:0;margin-bottom:0}.sm\:prose-xl figure figcaption{font-size:.9em;line-height:1.5555556;margin-top:1em}.sm\:prose-xl code{font-size:.9em}.sm\:prose-xl h2 code{font-size:.8611111em}.sm\:prose-xl h3 code{font-size:.9em}.sm\:prose-xl pre{font-size:.9em;line-height:1.7777778;margin-top:2em;margin-bottom:2em;border-radius:.5rem;padding-top:1.1111111em;padding-right:1.3333333em;padding-bottom:1.1111111em;padding-left:1.3333333em}.sm\:prose-xl ol{margin-top:1.2em;margin-bottom:1.2em}.sm\:prose-xl ul{margin-top:1.2em;margin-bottom:1.2em}.sm\:prose-xl li{margin-top:.6em;margin-bottom:.6em}.sm\:prose-xl ol>li{padding-left:1.8em}.sm\:prose-xl ol>li:before{left:0}.sm\:prose-xl ul>li{padding-left:1.8em}.sm\:prose-xl ul>li::before{width:.35em;height:.35em;top:calc(.9em - .175em);left:.25em}.sm\:prose-xl>ul>li p{margin-top:.8em;margin-bottom:.8em}.sm\:prose-xl>ul>li>:first-child{margin-top:1.2em}.sm\:prose-xl>ul>li>:last-child{margin-bottom:1.2em}.sm\:prose-xl>ol>li>:first-child{margin-top:1.2em}.sm\:prose-xl>ol>li>:last-child{margin-bottom:1.2em}.sm\:prose-xl ol ol,.sm\:prose-xl ol ul,.sm\:prose-xl ul ol,.sm\:prose-xl ul ul{margin-top:.8em;margin-bottom:.8em}.sm\:prose-xl hr{margin-top:2.8em;margin-bottom:2.8em}.sm\:prose-xl hr+*{margin-top:0}.sm\:prose-xl h2+*{margin-top:0}.sm\:prose-xl h3+*{margin-top:0}.sm\:prose-xl h4+*{margin-top:0}.sm\:prose-xl table{font-size:.9em;line-height:1.5555556}.sm\:prose-xl thead th{padding-right:.6666667em;padding-bottom:.8888889em;padding-left:.6666667em}.sm\:prose-xl thead th:first-child{padding-left:0}.sm\:prose-xl thead th:last-child{padding-right:0}.sm\:prose-xl tbody td{padding-top:.8888889em;padding-right:.6666667em;padding-bottom:.8888889em;padding-left:.6666667em}.sm\:prose-xl tbody td:first-child{padding-left:0}.sm\:prose-xl tbody td:last-child{padding-right:0}.sm\:prose-xl>:first-child{margin-top:0}.sm\:prose-xl>:last-child{margin-bottom:0}.sm\:prose-2xl{font-size:1.5rem;line-height:1.6666667}.sm\:prose-2xl p{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-2xl 
.sm\:lead{font-size:1.25em;line-height:1.4666667;margin-top:1.0666667em;margin-bottom:1.0666667em}.sm\:prose-2xl blockquote{margin-top:1.7777778em;margin-bottom:1.7777778em;padding-left:1.1111111em}.sm\:prose-2xl h1{font-size:2.6666667em;margin-top:0;margin-bottom:.875em;line-height:1}.sm\:prose-2xl h2{font-size:2em;margin-top:1.5em;margin-bottom:.8333333em;line-height:1.0833333}.sm\:prose-2xl h3{font-size:1.5em;margin-top:1.5555556em;margin-bottom:.6666667em;line-height:1.2222222}.sm\:prose-2xl h4{margin-top:1.6666667em;margin-bottom:.6666667em;line-height:1.5}.sm\:prose-2xl img{margin-top:2em;margin-bottom:2em}.sm\:prose-2xl video{margin-top:2em;margin-bottom:2em}.sm\:prose-2xl figure{margin-top:2em;margin-bottom:2em}.sm\:prose-2xl figure>*{margin-top:0;margin-bottom:0}.sm\:prose-2xl figure figcaption{font-size:.8333333em;line-height:1.6;margin-top:1em}.sm\:prose-2xl code{font-size:.8333333em}.sm\:prose-2xl h2 code{font-size:.875em}.sm\:prose-2xl h3 code{font-size:.8888889em}.sm\:prose-2xl pre{font-size:.8333333em;line-height:1.8;margin-top:2em;margin-bottom:2em;border-radius:.5rem;padding-top:1.2em;padding-right:1.6em;padding-bottom:1.2em;padding-left:1.6em}.sm\:prose-2xl ol{margin-top:1.3333333em;margin-bottom:1.3333333em}.sm\:prose-2xl ul{margin-top:1.3333333em;margin-bottom:1.3333333em}.sm\:prose-2xl li{margin-top:.5em;margin-bottom:.5em}.sm\:prose-2xl ol>li{padding-left:1.6666667em}.sm\:prose-2xl ol>li:before{left:0}.sm\:prose-2xl ul>li{padding-left:1.6666667em}.sm\:prose-2xl ul>li::before{width:.3333333em;height:.3333333em;top:calc(.8333333em - .1666667em);left:.25em}.sm\:prose-2xl>ul>li p{margin-top:.8333333em;margin-bottom:.8333333em}.sm\:prose-2xl>ul>li>:first-child{margin-top:1.3333333em}.sm\:prose-2xl>ul>li>:last-child{margin-bottom:1.3333333em}.sm\:prose-2xl>ol>li>:first-child{margin-top:1.3333333em}.sm\:prose-2xl>ol>li>:last-child{margin-bottom:1.3333333em}.sm\:prose-2xl ol ol,.sm\:prose-2xl ol ul,.sm\:prose-2xl ul ol,.sm\:prose-2xl ul ul{margin-top:.6666667em;margin-bottom:.6666667em}.sm\:prose-2xl hr{margin-top:3em;margin-bottom:3em}.sm\:prose-2xl hr+*{margin-top:0}.sm\:prose-2xl h2+*{margin-top:0}.sm\:prose-2xl h3+*{margin-top:0}.sm\:prose-2xl h4+*{margin-top:0}.sm\:prose-2xl table{font-size:.8333333em;line-height:1.4}.sm\:prose-2xl thead th{padding-right:.6em;padding-bottom:.8em;padding-left:.6em}.sm\:prose-2xl thead th:first-child{padding-left:0}.sm\:prose-2xl thead th:last-child{padding-right:0}.sm\:prose-2xl tbody td{padding-top:.8em;padding-right:.6em;padding-bottom:.8em;padding-left:.6em}.sm\:prose-2xl tbody td:first-child{padding-left:0}.sm\:prose-2xl tbody td:last-child{padding-right:0}.sm\:prose-2xl>:first-child{margin-top:0}.sm\:prose-2xl>:last-child{margin-bottom:0}}@media (min-width:768px){.md\:prose{color:#4a5568;max-width:65ch;font-size:1rem;line-height:1.75}.prose .md\:lead{color:#4a5568;font-size:1.25em;line-height:1.6;margin-top:1.2em;margin-bottom:1.2em}.md\:prose a{color:#1a202c;text-decoration:underline}.md\:prose strong{color:#1a202c;font-weight:600}.md\:prose ol{counter-reset:list-counter;margin-top:1.25em;margin-bottom:1.25em}.md\:prose ol>li{position:relative;counter-increment:list-counter;padding-left:1.75em}.md\:prose ol>li::before{content:counter(list-counter) ".";position:absolute;font-weight:400;color:#718096}.md\:prose ul>li{position:relative;padding-left:1.75em}.md\:prose ul>li::before{content:"";position:absolute;background-color:#cbd5e0;border-radius:50%;width:.375em;height:.375em;top:calc(.875em - .1875em);left:.25em}.md\:prose 
hr{border-color:#e2e8f0;border-top-width:1px;margin-top:3em;margin-bottom:3em}.md\:prose blockquote{font-weight:500;font-style:italic;color:#1a202c;border-left-width:.25rem;border-left-color:#e2e8f0;quotes:"\201C""\201D""\2018""\2019";margin-top:1.6em;margin-bottom:1.6em;padding-left:1em}.md\:prose blockquote p:first-of-type::before{content:open-quote}.md\:prose blockquote p:last-of-type::after{content:close-quote}.md\:prose h1{color:#1a202c;font-weight:800;font-size:2.25em;margin-top:0;margin-bottom:.8888889em;line-height:1.1111111}.md\:prose h2{color:#1a202c;font-weight:700;font-size:1.5em;margin-top:2em;margin-bottom:1em;line-height:1.3333333}.md\:prose h3{color:#1a202c;font-weight:600;font-size:1.25em;margin-top:1.6em;margin-bottom:.6em;line-height:1.6}.md\:prose h4{color:#1a202c;font-weight:600;margin-top:1.5em;margin-bottom:.5em;line-height:1.5}.md\:prose figure figcaption{color:#718096;font-size:.875em;line-height:1.4285714;margin-top:.8571429em}.md\:prose code{font-family:Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;color:#1a202c;font-weight:600;font-size:.875em}.md\:prose code::before{content:"`"}.md\:prose code::after{content:"`"}.md\:prose pre{color:#e2e8f0;font-family:Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;background-color:#2d3748;overflow-x:auto;font-size:.875em;line-height:1.7142857;margin-top:1.7142857em;margin-bottom:1.7142857em;border-radius:.375rem;padding-top:.8571429em;padding-right:1.1428571em;padding-bottom:.8571429em;padding-left:1.1428571em}.md\:prose pre code{background-color:transparent;border-width:0;border-radius:0;padding:0;font-weight:400;color:inherit;font-size:inherit;font-family:inherit;line-height:inherit}.md\:prose pre code::before{content:""}.md\:prose pre code::after{content:""}.md\:prose table{width:100%;table-layout:auto;text-align:left;margin-top:2em;margin-bottom:2em;font-size:.875em;line-height:1.7142857}.md\:prose thead{color:#1a202c;font-weight:600;border-bottom-width:1px;border-bottom-color:#cbd5e0}.md\:prose thead th{vertical-align:bottom;padding-right:.5714286em;padding-bottom:.5714286em;padding-left:.5714286em}.md\:prose tbody tr{border-bottom-width:1px;border-bottom-color:#e2e8f0}.md\:prose tbody tr:last-child{border-bottom-width:0}.md\:prose tbody td{vertical-align:top;padding-top:.5714286em;padding-right:.5714286em;padding-bottom:.5714286em;padding-left:.5714286em}.md\:prose p{margin-top:1.25em;margin-bottom:1.25em}.md\:prose img{margin-top:2em;margin-bottom:2em}.md\:prose video{margin-top:2em;margin-bottom:2em}.md\:prose figure{margin-top:2em;margin-bottom:2em}.md\:prose figure>*{margin-top:0;margin-bottom:0}.md\:prose h2 code{font-size:.875em}.md\:prose h3 code{font-size:.9em}.md\:prose ul{margin-top:1.25em;margin-bottom:1.25em}.md\:prose li{margin-top:.5em;margin-bottom:.5em}.md\:prose ol>li:before{left:0}.md\:prose>ul>li p{margin-top:.75em;margin-bottom:.75em}.md\:prose>ul>li>:first-child{margin-top:1.25em}.md\:prose>ul>li>:last-child{margin-bottom:1.25em}.md\:prose>ol>li>:first-child{margin-top:1.25em}.md\:prose>ol>li>:last-child{margin-bottom:1.25em}.md\:prose ol ol,.md\:prose ol ul,.md\:prose ul ol,.md\:prose ul ul{margin-top:.75em;margin-bottom:.75em}.md\:prose hr+*{margin-top:0}.md\:prose h2+*{margin-top:0}.md\:prose h3+*{margin-top:0}.md\:prose h4+*{margin-top:0}.md\:prose thead th:first-child{padding-left:0}.md\:prose thead th:last-child{padding-right:0}.md\:prose tbody td:first-child{padding-left:0}.md\:prose tbody 
td:last-child{padding-right:0}.md\:prose>:first-child{margin-top:0}.md\:prose>:last-child{margin-bottom:0}.md\:prose-sm{font-size:.875rem;line-height:1.7142857}.md\:prose-sm p{margin-top:1.1428571em;margin-bottom:1.1428571em}.prose-sm .md\:lead{font-size:1.2857143em;line-height:1.5555556;margin-top:.8888889em;margin-bottom:.8888889em}.md\:prose-sm blockquote{margin-top:1.3333333em;margin-bottom:1.3333333em;padding-left:1.1111111em}.md\:prose-sm h1{font-size:2.1428571em;margin-top:0;margin-bottom:.8em;line-height:1.2}.md\:prose-sm h2{font-size:1.4285714em;margin-top:1.6em;margin-bottom:.8em;line-height:1.4}.md\:prose-sm h3{font-size:1.2857143em;margin-top:1.5555556em;margin-bottom:.4444444em;line-height:1.5555556}.md\:prose-sm h4{margin-top:1.4285714em;margin-bottom:.5714286em;line-height:1.4285714}.md\:prose-sm img{margin-top:1.7142857em;margin-bottom:1.7142857em}.md\:prose-sm video{margin-top:1.7142857em;margin-bottom:1.7142857em}.md\:prose-sm figure{margin-top:1.7142857em;margin-bottom:1.7142857em}.md\:prose-sm figure>*{margin-top:0;margin-bottom:0}.md\:prose-sm figure figcaption{font-size:.8571429em;line-height:1.3333333;margin-top:.6666667em}.md\:prose-sm code{font-size:.8571429em}.md\:prose-sm h2 code{font-size:.9em}.md\:prose-sm h3 code{font-size:.8888889em}.md\:prose-sm pre{font-size:.8571429em;line-height:1.6666667;margin-top:1.6666667em;margin-bottom:1.6666667em;border-radius:.25rem;padding-top:.6666667em;padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.md\:prose-sm ol{margin-top:1.1428571em;margin-bottom:1.1428571em}.md\:prose-sm ul{margin-top:1.1428571em;margin-bottom:1.1428571em}.md\:prose-sm li{margin-top:.2857143em;margin-bottom:.2857143em}.md\:prose-sm ol>li{padding-left:1.5714286em}.md\:prose-sm ol>li:before{left:0}.md\:prose-sm ul>li{padding-left:1.5714286em}.md\:prose-sm ul>li::before{height:.3571429em;width:.3571429em;top:calc(.8571429em - .1785714em);left:.2142857em}.md\:prose-sm>ul>li p{margin-top:.5714286em;margin-bottom:.5714286em}.md\:prose-sm>ul>li>:first-child{margin-top:1.1428571em}.md\:prose-sm>ul>li>:last-child{margin-bottom:1.1428571em}.md\:prose-sm>ol>li>:first-child{margin-top:1.1428571em}.md\:prose-sm>ol>li>:last-child{margin-bottom:1.1428571em}.md\:prose-sm ol ol,.md\:prose-sm ol ul,.md\:prose-sm ul ol,.md\:prose-sm ul ul{margin-top:.5714286em;margin-bottom:.5714286em}.md\:prose-sm hr{margin-top:2.8571429em;margin-bottom:2.8571429em}.md\:prose-sm hr+*{margin-top:0}.md\:prose-sm h2+*{margin-top:0}.md\:prose-sm h3+*{margin-top:0}.md\:prose-sm h4+*{margin-top:0}.md\:prose-sm table{font-size:.8571429em;line-height:1.5}.md\:prose-sm thead th{padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.md\:prose-sm thead th:first-child{padding-left:0}.md\:prose-sm thead th:last-child{padding-right:0}.md\:prose-sm tbody td{padding-top:.6666667em;padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.md\:prose-sm tbody td:first-child{padding-left:0}.md\:prose-sm tbody td:last-child{padding-right:0}.md\:prose-sm>:first-child{margin-top:0}.md\:prose-sm>:last-child{margin-bottom:0}.md\:prose-lg{font-size:1.125rem;line-height:1.7777778}.md\:prose-lg p{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-lg .md\:lead{font-size:1.2222222em;line-height:1.4545455;margin-top:1.0909091em;margin-bottom:1.0909091em}.md\:prose-lg blockquote{margin-top:1.6666667em;margin-bottom:1.6666667em;padding-left:1em}.md\:prose-lg h1{font-size:2.6666667em;margin-top:0;margin-bottom:.8333333em;line-height:1}.md\:prose-lg 
h2{font-size:1.6666667em;margin-top:1.8666667em;margin-bottom:1.0666667em;line-height:1.3333333}.md\:prose-lg h3{font-size:1.3333333em;margin-top:1.6666667em;margin-bottom:.6666667em;line-height:1.5}.md\:prose-lg h4{margin-top:1.7777778em;margin-bottom:.4444444em;line-height:1.5555556}.md\:prose-lg img{margin-top:1.7777778em;margin-bottom:1.7777778em}.md\:prose-lg video{margin-top:1.7777778em;margin-bottom:1.7777778em}.md\:prose-lg figure{margin-top:1.7777778em;margin-bottom:1.7777778em}.md\:prose-lg figure>*{margin-top:0;margin-bottom:0}.md\:prose-lg figure figcaption{font-size:.8888889em;line-height:1.5;margin-top:1em}.md\:prose-lg code{font-size:.8888889em}.md\:prose-lg h2 code{font-size:.8666667em}.md\:prose-lg h3 code{font-size:.875em}.md\:prose-lg pre{font-size:.8888889em;line-height:1.75;margin-top:2em;margin-bottom:2em;border-radius:.375rem;padding-top:1em;padding-right:1.5em;padding-bottom:1em;padding-left:1.5em}.md\:prose-lg ol{margin-top:1.3333333em;margin-bottom:1.3333333em}.md\:prose-lg ul{margin-top:1.3333333em;margin-bottom:1.3333333em}.md\:prose-lg li{margin-top:.6666667em;margin-bottom:.6666667em}.md\:prose-lg ol>li{padding-left:1.6666667em}.md\:prose-lg ol>li:before{left:0}.md\:prose-lg ul>li{padding-left:1.6666667em}.md\:prose-lg ul>li::before{width:.3333333em;height:.3333333em;top:calc(.8888889em - .1666667em);left:.2222222em}.md\:prose-lg>ul>li p{margin-top:.8888889em;margin-bottom:.8888889em}.md\:prose-lg>ul>li>:first-child{margin-top:1.3333333em}.md\:prose-lg>ul>li>:last-child{margin-bottom:1.3333333em}.md\:prose-lg>ol>li>:first-child{margin-top:1.3333333em}.md\:prose-lg>ol>li>:last-child{margin-bottom:1.3333333em}.md\:prose-lg ol ol,.md\:prose-lg ol ul,.md\:prose-lg ul ol,.md\:prose-lg ul ul{margin-top:.8888889em;margin-bottom:.8888889em}.md\:prose-lg hr{margin-top:3.1111111em;margin-bottom:3.1111111em}.md\:prose-lg hr+*{margin-top:0}.md\:prose-lg h2+*{margin-top:0}.md\:prose-lg h3+*{margin-top:0}.md\:prose-lg h4+*{margin-top:0}.md\:prose-lg table{font-size:.8888889em;line-height:1.5}.md\:prose-lg thead th{padding-right:.75em;padding-bottom:.75em;padding-left:.75em}.md\:prose-lg thead th:first-child{padding-left:0}.md\:prose-lg thead th:last-child{padding-right:0}.md\:prose-lg tbody td{padding-top:.75em;padding-right:.75em;padding-bottom:.75em;padding-left:.75em}.md\:prose-lg tbody td:first-child{padding-left:0}.md\:prose-lg tbody td:last-child{padding-right:0}.md\:prose-lg>:first-child{margin-top:0}.md\:prose-lg>:last-child{margin-bottom:0}.md\:prose-xl{font-size:1.25rem;line-height:1.8}.md\:prose-xl p{margin-top:1.2em;margin-bottom:1.2em}.prose-xl .md\:lead{font-size:1.2em;line-height:1.5;margin-top:1em;margin-bottom:1em}.md\:prose-xl blockquote{margin-top:1.6em;margin-bottom:1.6em;padding-left:1.0666667em}.md\:prose-xl h1{font-size:2.8em;margin-top:0;margin-bottom:.8571429em;line-height:1}.md\:prose-xl h2{font-size:1.8em;margin-top:1.5555556em;margin-bottom:.8888889em;line-height:1.1111111}.md\:prose-xl h3{font-size:1.5em;margin-top:1.6em;margin-bottom:.6666667em;line-height:1.3333333}.md\:prose-xl h4{margin-top:1.8em;margin-bottom:.6em;line-height:1.6}.md\:prose-xl img{margin-top:2em;margin-bottom:2em}.md\:prose-xl video{margin-top:2em;margin-bottom:2em}.md\:prose-xl figure{margin-top:2em;margin-bottom:2em}.md\:prose-xl figure>*{margin-top:0;margin-bottom:0}.md\:prose-xl figure figcaption{font-size:.9em;line-height:1.5555556;margin-top:1em}.md\:prose-xl code{font-size:.9em}.md\:prose-xl h2 code{font-size:.8611111em}.md\:prose-xl h3 
code{font-size:.9em}.md\:prose-xl pre{font-size:.9em;line-height:1.7777778;margin-top:2em;margin-bottom:2em;border-radius:.5rem;padding-top:1.1111111em;padding-right:1.3333333em;padding-bottom:1.1111111em;padding-left:1.3333333em}.md\:prose-xl ol{margin-top:1.2em;margin-bottom:1.2em}.md\:prose-xl ul{margin-top:1.2em;margin-bottom:1.2em}.md\:prose-xl li{margin-top:.6em;margin-bottom:.6em}.md\:prose-xl ol>li{padding-left:1.8em}.md\:prose-xl ol>li:before{left:0}.md\:prose-xl ul>li{padding-left:1.8em}.md\:prose-xl ul>li::before{width:.35em;height:.35em;top:calc(.9em - .175em);left:.25em}.md\:prose-xl>ul>li p{margin-top:.8em;margin-bottom:.8em}.md\:prose-xl>ul>li>:first-child{margin-top:1.2em}.md\:prose-xl>ul>li>:last-child{margin-bottom:1.2em}.md\:prose-xl>ol>li>:first-child{margin-top:1.2em}.md\:prose-xl>ol>li>:last-child{margin-bottom:1.2em}.md\:prose-xl ol ol,.md\:prose-xl ol ul,.md\:prose-xl ul ol,.md\:prose-xl ul ul{margin-top:.8em;margin-bottom:.8em}.md\:prose-xl hr{margin-top:2.8em;margin-bottom:2.8em}.md\:prose-xl hr+*{margin-top:0}.md\:prose-xl h2+*{margin-top:0}.md\:prose-xl h3+*{margin-top:0}.md\:prose-xl h4+*{margin-top:0}.md\:prose-xl table{font-size:.9em;line-height:1.5555556}.md\:prose-xl thead th{padding-right:.6666667em;padding-bottom:.8888889em;padding-left:.6666667em}.md\:prose-xl thead th:first-child{padding-left:0}.md\:prose-xl thead th:last-child{padding-right:0}.md\:prose-xl tbody td{padding-top:.8888889em;padding-right:.6666667em;padding-bottom:.8888889em;padding-left:.6666667em}.md\:prose-xl tbody td:first-child{padding-left:0}.md\:prose-xl tbody td:last-child{padding-right:0}.md\:prose-xl>:first-child{margin-top:0}.md\:prose-xl>:last-child{margin-bottom:0}.md\:prose-2xl{font-size:1.5rem;line-height:1.6666667}.md\:prose-2xl p{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-2xl .md\:lead{font-size:1.25em;line-height:1.4666667;margin-top:1.0666667em;margin-bottom:1.0666667em}.md\:prose-2xl blockquote{margin-top:1.7777778em;margin-bottom:1.7777778em;padding-left:1.1111111em}.md\:prose-2xl h1{font-size:2.6666667em;margin-top:0;margin-bottom:.875em;line-height:1}.md\:prose-2xl h2{font-size:2em;margin-top:1.5em;margin-bottom:.8333333em;line-height:1.0833333}.md\:prose-2xl h3{font-size:1.5em;margin-top:1.5555556em;margin-bottom:.6666667em;line-height:1.2222222}.md\:prose-2xl h4{margin-top:1.6666667em;margin-bottom:.6666667em;line-height:1.5}.md\:prose-2xl img{margin-top:2em;margin-bottom:2em}.md\:prose-2xl video{margin-top:2em;margin-bottom:2em}.md\:prose-2xl figure{margin-top:2em;margin-bottom:2em}.md\:prose-2xl figure>*{margin-top:0;margin-bottom:0}.md\:prose-2xl figure figcaption{font-size:.8333333em;line-height:1.6;margin-top:1em}.md\:prose-2xl code{font-size:.8333333em}.md\:prose-2xl h2 code{font-size:.875em}.md\:prose-2xl h3 code{font-size:.8888889em}.md\:prose-2xl pre{font-size:.8333333em;line-height:1.8;margin-top:2em;margin-bottom:2em;border-radius:.5rem;padding-top:1.2em;padding-right:1.6em;padding-bottom:1.2em;padding-left:1.6em}.md\:prose-2xl ol{margin-top:1.3333333em;margin-bottom:1.3333333em}.md\:prose-2xl ul{margin-top:1.3333333em;margin-bottom:1.3333333em}.md\:prose-2xl li{margin-top:.5em;margin-bottom:.5em}.md\:prose-2xl ol>li{padding-left:1.6666667em}.md\:prose-2xl ol>li:before{left:0}.md\:prose-2xl ul>li{padding-left:1.6666667em}.md\:prose-2xl ul>li::before{width:.3333333em;height:.3333333em;top:calc(.8333333em - .1666667em);left:.25em}.md\:prose-2xl>ul>li 
p{margin-top:.8333333em;margin-bottom:.8333333em}.md\:prose-2xl>ul>li>:first-child{margin-top:1.3333333em}.md\:prose-2xl>ul>li>:last-child{margin-bottom:1.3333333em}.md\:prose-2xl>ol>li>:first-child{margin-top:1.3333333em}.md\:prose-2xl>ol>li>:last-child{margin-bottom:1.3333333em}.md\:prose-2xl ol ol,.md\:prose-2xl ol ul,.md\:prose-2xl ul ol,.md\:prose-2xl ul ul{margin-top:.6666667em;margin-bottom:.6666667em}.md\:prose-2xl hr{margin-top:3em;margin-bottom:3em}.md\:prose-2xl hr+*{margin-top:0}.md\:prose-2xl h2+*{margin-top:0}.md\:prose-2xl h3+*{margin-top:0}.md\:prose-2xl h4+*{margin-top:0}.md\:prose-2xl table{font-size:.8333333em;line-height:1.4}.md\:prose-2xl thead th{padding-right:.6em;padding-bottom:.8em;padding-left:.6em}.md\:prose-2xl thead th:first-child{padding-left:0}.md\:prose-2xl thead th:last-child{padding-right:0}.md\:prose-2xl tbody td{padding-top:.8em;padding-right:.6em;padding-bottom:.8em;padding-left:.6em}.md\:prose-2xl tbody td:first-child{padding-left:0}.md\:prose-2xl tbody td:last-child{padding-right:0}.md\:prose-2xl>:first-child{margin-top:0}.md\:prose-2xl>:last-child{margin-bottom:0}}@media (min-width:1024px){.lg\:prose{color:#4a5568;max-width:65ch;font-size:1rem;line-height:1.75}.prose .lg\:lead{color:#4a5568;font-size:1.25em;line-height:1.6;margin-top:1.2em;margin-bottom:1.2em}.lg\:prose a{color:#1a202c;text-decoration:underline}.lg\:prose strong{color:#1a202c;font-weight:600}.lg\:prose ol{counter-reset:list-counter;margin-top:1.25em;margin-bottom:1.25em}.lg\:prose ol>li{position:relative;counter-increment:list-counter;padding-left:1.75em}.lg\:prose ol>li::before{content:counter(list-counter) ".";position:absolute;font-weight:400;color:#718096}.lg\:prose ul>li{position:relative;padding-left:1.75em}.lg\:prose ul>li::before{content:"";position:absolute;background-color:#cbd5e0;border-radius:50%;width:.375em;height:.375em;top:calc(.875em - .1875em);left:.25em}.lg\:prose hr{border-color:#e2e8f0;border-top-width:1px;margin-top:3em;margin-bottom:3em}.lg\:prose blockquote{font-weight:500;font-style:italic;color:#1a202c;border-left-width:.25rem;border-left-color:#e2e8f0;quotes:"\201C""\201D""\2018""\2019";margin-top:1.6em;margin-bottom:1.6em;padding-left:1em}.lg\:prose blockquote p:first-of-type::before{content:open-quote}.lg\:prose blockquote p:last-of-type::after{content:close-quote}.lg\:prose h1{color:#1a202c;font-weight:800;font-size:2.25em;margin-top:0;margin-bottom:.8888889em;line-height:1.1111111}.lg\:prose h2{color:#1a202c;font-weight:700;font-size:1.5em;margin-top:2em;margin-bottom:1em;line-height:1.3333333}.lg\:prose h3{color:#1a202c;font-weight:600;font-size:1.25em;margin-top:1.6em;margin-bottom:.6em;line-height:1.6}.lg\:prose h4{color:#1a202c;font-weight:600;margin-top:1.5em;margin-bottom:.5em;line-height:1.5}.lg\:prose figure figcaption{color:#718096;font-size:.875em;line-height:1.4285714;margin-top:.8571429em}.lg\:prose code{font-family:Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;color:#1a202c;font-weight:600;font-size:.875em}.lg\:prose code::before{content:"`"}.lg\:prose code::after{content:"`"}.lg\:prose pre{color:#e2e8f0;font-family:Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;background-color:#2d3748;overflow-x:auto;font-size:.875em;line-height:1.7142857;margin-top:1.7142857em;margin-bottom:1.7142857em;border-radius:.375rem;padding-top:.8571429em;padding-right:1.1428571em;padding-bottom:.8571429em;padding-left:1.1428571em}.lg\:prose pre 
code{background-color:transparent;border-width:0;border-radius:0;padding:0;font-weight:400;color:inherit;font-size:inherit;font-family:inherit;line-height:inherit}.lg\:prose pre code::before{content:""}.lg\:prose pre code::after{content:""}.lg\:prose table{width:100%;table-layout:auto;text-align:left;margin-top:2em;margin-bottom:2em;font-size:.875em;line-height:1.7142857}.lg\:prose thead{color:#1a202c;font-weight:600;border-bottom-width:1px;border-bottom-color:#cbd5e0}.lg\:prose thead th{vertical-align:bottom;padding-right:.5714286em;padding-bottom:.5714286em;padding-left:.5714286em}.lg\:prose tbody tr{border-bottom-width:1px;border-bottom-color:#e2e8f0}.lg\:prose tbody tr:last-child{border-bottom-width:0}.lg\:prose tbody td{vertical-align:top;padding-top:.5714286em;padding-right:.5714286em;padding-bottom:.5714286em;padding-left:.5714286em}.lg\:prose p{margin-top:1.25em;margin-bottom:1.25em}.lg\:prose img{margin-top:2em;margin-bottom:2em}.lg\:prose video{margin-top:2em;margin-bottom:2em}.lg\:prose figure{margin-top:2em;margin-bottom:2em}.lg\:prose figure>*{margin-top:0;margin-bottom:0}.lg\:prose h2 code{font-size:.875em}.lg\:prose h3 code{font-size:.9em}.lg\:prose ul{margin-top:1.25em;margin-bottom:1.25em}.lg\:prose li{margin-top:.5em;margin-bottom:.5em}.lg\:prose ol>li:before{left:0}.lg\:prose>ul>li p{margin-top:.75em;margin-bottom:.75em}.lg\:prose>ul>li>:first-child{margin-top:1.25em}.lg\:prose>ul>li>:last-child{margin-bottom:1.25em}.lg\:prose>ol>li>:first-child{margin-top:1.25em}.lg\:prose>ol>li>:last-child{margin-bottom:1.25em}.lg\:prose ol ol,.lg\:prose ol ul,.lg\:prose ul ol,.lg\:prose ul ul{margin-top:.75em;margin-bottom:.75em}.lg\:prose hr+*{margin-top:0}.lg\:prose h2+*{margin-top:0}.lg\:prose h3+*{margin-top:0}.lg\:prose h4+*{margin-top:0}.lg\:prose thead th:first-child{padding-left:0}.lg\:prose thead th:last-child{padding-right:0}.lg\:prose tbody td:first-child{padding-left:0}.lg\:prose tbody td:last-child{padding-right:0}.lg\:prose>:first-child{margin-top:0}.lg\:prose>:last-child{margin-bottom:0}.lg\:prose-sm{font-size:.875rem;line-height:1.7142857}.lg\:prose-sm p{margin-top:1.1428571em;margin-bottom:1.1428571em}.prose-sm .lg\:lead{font-size:1.2857143em;line-height:1.5555556;margin-top:.8888889em;margin-bottom:.8888889em}.lg\:prose-sm blockquote{margin-top:1.3333333em;margin-bottom:1.3333333em;padding-left:1.1111111em}.lg\:prose-sm h1{font-size:2.1428571em;margin-top:0;margin-bottom:.8em;line-height:1.2}.lg\:prose-sm h2{font-size:1.4285714em;margin-top:1.6em;margin-bottom:.8em;line-height:1.4}.lg\:prose-sm h3{font-size:1.2857143em;margin-top:1.5555556em;margin-bottom:.4444444em;line-height:1.5555556}.lg\:prose-sm h4{margin-top:1.4285714em;margin-bottom:.5714286em;line-height:1.4285714}.lg\:prose-sm img{margin-top:1.7142857em;margin-bottom:1.7142857em}.lg\:prose-sm video{margin-top:1.7142857em;margin-bottom:1.7142857em}.lg\:prose-sm figure{margin-top:1.7142857em;margin-bottom:1.7142857em}.lg\:prose-sm figure>*{margin-top:0;margin-bottom:0}.lg\:prose-sm figure figcaption{font-size:.8571429em;line-height:1.3333333;margin-top:.6666667em}.lg\:prose-sm code{font-size:.8571429em}.lg\:prose-sm h2 code{font-size:.9em}.lg\:prose-sm h3 code{font-size:.8888889em}.lg\:prose-sm pre{font-size:.8571429em;line-height:1.6666667;margin-top:1.6666667em;margin-bottom:1.6666667em;border-radius:.25rem;padding-top:.6666667em;padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.lg\:prose-sm ol{margin-top:1.1428571em;margin-bottom:1.1428571em}.lg\:prose-sm 
ul{margin-top:1.1428571em;margin-bottom:1.1428571em}.lg\:prose-sm li{margin-top:.2857143em;margin-bottom:.2857143em}.lg\:prose-sm ol>li{padding-left:1.5714286em}.lg\:prose-sm ol>li:before{left:0}.lg\:prose-sm ul>li{padding-left:1.5714286em}.lg\:prose-sm ul>li::before{height:.3571429em;width:.3571429em;top:calc(.8571429em - .1785714em);left:.2142857em}.lg\:prose-sm>ul>li p{margin-top:.5714286em;margin-bottom:.5714286em}.lg\:prose-sm>ul>li>:first-child{margin-top:1.1428571em}.lg\:prose-sm>ul>li>:last-child{margin-bottom:1.1428571em}.lg\:prose-sm>ol>li>:first-child{margin-top:1.1428571em}.lg\:prose-sm>ol>li>:last-child{margin-bottom:1.1428571em}.lg\:prose-sm ol ol,.lg\:prose-sm ol ul,.lg\:prose-sm ul ol,.lg\:prose-sm ul ul{margin-top:.5714286em;margin-bottom:.5714286em}.lg\:prose-sm hr{margin-top:2.8571429em;margin-bottom:2.8571429em}.lg\:prose-sm hr+*{margin-top:0}.lg\:prose-sm h2+*{margin-top:0}.lg\:prose-sm h3+*{margin-top:0}.lg\:prose-sm h4+*{margin-top:0}.lg\:prose-sm table{font-size:.8571429em;line-height:1.5}.lg\:prose-sm thead th{padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.lg\:prose-sm thead th:first-child{padding-left:0}.lg\:prose-sm thead th:last-child{padding-right:0}.lg\:prose-sm tbody td{padding-top:.6666667em;padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.lg\:prose-sm tbody td:first-child{padding-left:0}.lg\:prose-sm tbody td:last-child{padding-right:0}.lg\:prose-sm>:first-child{margin-top:0}.lg\:prose-sm>:last-child{margin-bottom:0}.lg\:prose-lg{font-size:1.125rem;line-height:1.7777778}.lg\:prose-lg p{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-lg .lg\:lead{font-size:1.2222222em;line-height:1.4545455;margin-top:1.0909091em;margin-bottom:1.0909091em}.lg\:prose-lg blockquote{margin-top:1.6666667em;margin-bottom:1.6666667em;padding-left:1em}.lg\:prose-lg h1{font-size:2.6666667em;margin-top:0;margin-bottom:.8333333em;line-height:1}.lg\:prose-lg h2{font-size:1.6666667em;margin-top:1.8666667em;margin-bottom:1.0666667em;line-height:1.3333333}.lg\:prose-lg h3{font-size:1.3333333em;margin-top:1.6666667em;margin-bottom:.6666667em;line-height:1.5}.lg\:prose-lg h4{margin-top:1.7777778em;margin-bottom:.4444444em;line-height:1.5555556}.lg\:prose-lg img{margin-top:1.7777778em;margin-bottom:1.7777778em}.lg\:prose-lg video{margin-top:1.7777778em;margin-bottom:1.7777778em}.lg\:prose-lg figure{margin-top:1.7777778em;margin-bottom:1.7777778em}.lg\:prose-lg figure>*{margin-top:0;margin-bottom:0}.lg\:prose-lg figure figcaption{font-size:.8888889em;line-height:1.5;margin-top:1em}.lg\:prose-lg code{font-size:.8888889em}.lg\:prose-lg h2 code{font-size:.8666667em}.lg\:prose-lg h3 code{font-size:.875em}.lg\:prose-lg pre{font-size:.8888889em;line-height:1.75;margin-top:2em;margin-bottom:2em;border-radius:.375rem;padding-top:1em;padding-right:1.5em;padding-bottom:1em;padding-left:1.5em}.lg\:prose-lg ol{margin-top:1.3333333em;margin-bottom:1.3333333em}.lg\:prose-lg ul{margin-top:1.3333333em;margin-bottom:1.3333333em}.lg\:prose-lg li{margin-top:.6666667em;margin-bottom:.6666667em}.lg\:prose-lg ol>li{padding-left:1.6666667em}.lg\:prose-lg ol>li:before{left:0}.lg\:prose-lg ul>li{padding-left:1.6666667em}.lg\:prose-lg ul>li::before{width:.3333333em;height:.3333333em;top:calc(.8888889em - .1666667em);left:.2222222em}.lg\:prose-lg>ul>li 
p{margin-top:.8888889em;margin-bottom:.8888889em}.lg\:prose-lg>ul>li>:first-child{margin-top:1.3333333em}.lg\:prose-lg>ul>li>:last-child{margin-bottom:1.3333333em}.lg\:prose-lg>ol>li>:first-child{margin-top:1.3333333em}.lg\:prose-lg>ol>li>:last-child{margin-bottom:1.3333333em}.lg\:prose-lg ol ol,.lg\:prose-lg ol ul,.lg\:prose-lg ul ol,.lg\:prose-lg ul ul{margin-top:.8888889em;margin-bottom:.8888889em}.lg\:prose-lg hr{margin-top:3.1111111em;margin-bottom:3.1111111em}.lg\:prose-lg hr+*{margin-top:0}.lg\:prose-lg h2+*{margin-top:0}.lg\:prose-lg h3+*{margin-top:0}.lg\:prose-lg h4+*{margin-top:0}.lg\:prose-lg table{font-size:.8888889em;line-height:1.5}.lg\:prose-lg thead th{padding-right:.75em;padding-bottom:.75em;padding-left:.75em}.lg\:prose-lg thead th:first-child{padding-left:0}.lg\:prose-lg thead th:last-child{padding-right:0}.lg\:prose-lg tbody td{padding-top:.75em;padding-right:.75em;padding-bottom:.75em;padding-left:.75em}.lg\:prose-lg tbody td:first-child{padding-left:0}.lg\:prose-lg tbody td:last-child{padding-right:0}.lg\:prose-lg>:first-child{margin-top:0}.lg\:prose-lg>:last-child{margin-bottom:0}.lg\:prose-xl{font-size:1.25rem;line-height:1.8}.lg\:prose-xl p{margin-top:1.2em;margin-bottom:1.2em}.prose-xl .lg\:lead{font-size:1.2em;line-height:1.5;margin-top:1em;margin-bottom:1em}.lg\:prose-xl blockquote{margin-top:1.6em;margin-bottom:1.6em;padding-left:1.0666667em}.lg\:prose-xl h1{font-size:2.8em;margin-top:0;margin-bottom:.8571429em;line-height:1}.lg\:prose-xl h2{font-size:1.8em;margin-top:1.5555556em;margin-bottom:.8888889em;line-height:1.1111111}.lg\:prose-xl h3{font-size:1.5em;margin-top:1.6em;margin-bottom:.6666667em;line-height:1.3333333}.lg\:prose-xl h4{margin-top:1.8em;margin-bottom:.6em;line-height:1.6}.lg\:prose-xl img{margin-top:2em;margin-bottom:2em}.lg\:prose-xl video{margin-top:2em;margin-bottom:2em}.lg\:prose-xl figure{margin-top:2em;margin-bottom:2em}.lg\:prose-xl figure>*{margin-top:0;margin-bottom:0}.lg\:prose-xl figure figcaption{font-size:.9em;line-height:1.5555556;margin-top:1em}.lg\:prose-xl code{font-size:.9em}.lg\:prose-xl h2 code{font-size:.8611111em}.lg\:prose-xl h3 code{font-size:.9em}.lg\:prose-xl pre{font-size:.9em;line-height:1.7777778;margin-top:2em;margin-bottom:2em;border-radius:.5rem;padding-top:1.1111111em;padding-right:1.3333333em;padding-bottom:1.1111111em;padding-left:1.3333333em}.lg\:prose-xl ol{margin-top:1.2em;margin-bottom:1.2em}.lg\:prose-xl ul{margin-top:1.2em;margin-bottom:1.2em}.lg\:prose-xl li{margin-top:.6em;margin-bottom:.6em}.lg\:prose-xl ol>li{padding-left:1.8em}.lg\:prose-xl ol>li:before{left:0}.lg\:prose-xl ul>li{padding-left:1.8em}.lg\:prose-xl ul>li::before{width:.35em;height:.35em;top:calc(.9em - .175em);left:.25em}.lg\:prose-xl>ul>li p{margin-top:.8em;margin-bottom:.8em}.lg\:prose-xl>ul>li>:first-child{margin-top:1.2em}.lg\:prose-xl>ul>li>:last-child{margin-bottom:1.2em}.lg\:prose-xl>ol>li>:first-child{margin-top:1.2em}.lg\:prose-xl>ol>li>:last-child{margin-bottom:1.2em}.lg\:prose-xl ol ol,.lg\:prose-xl ol ul,.lg\:prose-xl ul ol,.lg\:prose-xl ul ul{margin-top:.8em;margin-bottom:.8em}.lg\:prose-xl hr{margin-top:2.8em;margin-bottom:2.8em}.lg\:prose-xl hr+*{margin-top:0}.lg\:prose-xl h2+*{margin-top:0}.lg\:prose-xl h3+*{margin-top:0}.lg\:prose-xl h4+*{margin-top:0}.lg\:prose-xl table{font-size:.9em;line-height:1.5555556}.lg\:prose-xl thead th{padding-right:.6666667em;padding-bottom:.8888889em;padding-left:.6666667em}.lg\:prose-xl thead th:first-child{padding-left:0}.lg\:prose-xl thead th:last-child{padding-right:0}.lg\:prose-xl 
tbody td{padding-top:.8888889em;padding-right:.6666667em;padding-bottom:.8888889em;padding-left:.6666667em}.lg\:prose-xl tbody td:first-child{padding-left:0}.lg\:prose-xl tbody td:last-child{padding-right:0}.lg\:prose-xl>:first-child{margin-top:0}.lg\:prose-xl>:last-child{margin-bottom:0}.lg\:prose-2xl{font-size:1.5rem;line-height:1.6666667}.lg\:prose-2xl p{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-2xl .lg\:lead{font-size:1.25em;line-height:1.4666667;margin-top:1.0666667em;margin-bottom:1.0666667em}.lg\:prose-2xl blockquote{margin-top:1.7777778em;margin-bottom:1.7777778em;padding-left:1.1111111em}.lg\:prose-2xl h1{font-size:2.6666667em;margin-top:0;margin-bottom:.875em;line-height:1}.lg\:prose-2xl h2{font-size:2em;margin-top:1.5em;margin-bottom:.8333333em;line-height:1.0833333}.lg\:prose-2xl h3{font-size:1.5em;margin-top:1.5555556em;margin-bottom:.6666667em;line-height:1.2222222}.lg\:prose-2xl h4{margin-top:1.6666667em;margin-bottom:.6666667em;line-height:1.5}.lg\:prose-2xl img{margin-top:2em;margin-bottom:2em}.lg\:prose-2xl video{margin-top:2em;margin-bottom:2em}.lg\:prose-2xl figure{margin-top:2em;margin-bottom:2em}.lg\:prose-2xl figure>*{margin-top:0;margin-bottom:0}.lg\:prose-2xl figure figcaption{font-size:.8333333em;line-height:1.6;margin-top:1em}.lg\:prose-2xl code{font-size:.8333333em}.lg\:prose-2xl h2 code{font-size:.875em}.lg\:prose-2xl h3 code{font-size:.8888889em}.lg\:prose-2xl pre{font-size:.8333333em;line-height:1.8;margin-top:2em;margin-bottom:2em;border-radius:.5rem;padding-top:1.2em;padding-right:1.6em;padding-bottom:1.2em;padding-left:1.6em}.lg\:prose-2xl ol{margin-top:1.3333333em;margin-bottom:1.3333333em}.lg\:prose-2xl ul{margin-top:1.3333333em;margin-bottom:1.3333333em}.lg\:prose-2xl li{margin-top:.5em;margin-bottom:.5em}.lg\:prose-2xl ol>li{padding-left:1.6666667em}.lg\:prose-2xl ol>li:before{left:0}.lg\:prose-2xl ul>li{padding-left:1.6666667em}.lg\:prose-2xl ul>li::before{width:.3333333em;height:.3333333em;top:calc(.8333333em - .1666667em);left:.25em}.lg\:prose-2xl>ul>li p{margin-top:.8333333em;margin-bottom:.8333333em}.lg\:prose-2xl>ul>li>:first-child{margin-top:1.3333333em}.lg\:prose-2xl>ul>li>:last-child{margin-bottom:1.3333333em}.lg\:prose-2xl>ol>li>:first-child{margin-top:1.3333333em}.lg\:prose-2xl>ol>li>:last-child{margin-bottom:1.3333333em}.lg\:prose-2xl ol ol,.lg\:prose-2xl ol ul,.lg\:prose-2xl ul ol,.lg\:prose-2xl ul ul{margin-top:.6666667em;margin-bottom:.6666667em}.lg\:prose-2xl hr{margin-top:3em;margin-bottom:3em}.lg\:prose-2xl hr+*{margin-top:0}.lg\:prose-2xl h2+*{margin-top:0}.lg\:prose-2xl h3+*{margin-top:0}.lg\:prose-2xl h4+*{margin-top:0}.lg\:prose-2xl table{font-size:.8333333em;line-height:1.4}.lg\:prose-2xl thead th{padding-right:.6em;padding-bottom:.8em;padding-left:.6em}.lg\:prose-2xl thead th:first-child{padding-left:0}.lg\:prose-2xl thead th:last-child{padding-right:0}.lg\:prose-2xl tbody td{padding-top:.8em;padding-right:.6em;padding-bottom:.8em;padding-left:.6em}.lg\:prose-2xl tbody td:first-child{padding-left:0}.lg\:prose-2xl tbody td:last-child{padding-right:0}.lg\:prose-2xl>:first-child{margin-top:0}.lg\:prose-2xl>:last-child{margin-bottom:0}}@media (min-width:1280px){.xl\:prose{color:#4a5568;max-width:65ch;font-size:1rem;line-height:1.75}.prose .xl\:lead{color:#4a5568;font-size:1.25em;line-height:1.6;margin-top:1.2em;margin-bottom:1.2em}.xl\:prose a{color:#1a202c;text-decoration:underline}.xl\:prose strong{color:#1a202c;font-weight:600}.xl\:prose ol{counter-reset:list-counter;margin-top:1.25em;margin-bottom:1.25em}.xl\:prose 
ol>li{position:relative;counter-increment:list-counter;padding-left:1.75em}.xl\:prose ol>li::before{content:counter(list-counter) ".";position:absolute;font-weight:400;color:#718096}.xl\:prose ul>li{position:relative;padding-left:1.75em}.xl\:prose ul>li::before{content:"";position:absolute;background-color:#cbd5e0;border-radius:50%;width:.375em;height:.375em;top:calc(.875em - .1875em);left:.25em}.xl\:prose hr{border-color:#e2e8f0;border-top-width:1px;margin-top:3em;margin-bottom:3em}.xl\:prose blockquote{font-weight:500;font-style:italic;color:#1a202c;border-left-width:.25rem;border-left-color:#e2e8f0;quotes:"\201C""\201D""\2018""\2019";margin-top:1.6em;margin-bottom:1.6em;padding-left:1em}.xl\:prose blockquote p:first-of-type::before{content:open-quote}.xl\:prose blockquote p:last-of-type::after{content:close-quote}.xl\:prose h1{color:#1a202c;font-weight:800;font-size:2.25em;margin-top:0;margin-bottom:.8888889em;line-height:1.1111111}.xl\:prose h2{color:#1a202c;font-weight:700;font-size:1.5em;margin-top:2em;margin-bottom:1em;line-height:1.3333333}.xl\:prose h3{color:#1a202c;font-weight:600;font-size:1.25em;margin-top:1.6em;margin-bottom:.6em;line-height:1.6}.xl\:prose h4{color:#1a202c;font-weight:600;margin-top:1.5em;margin-bottom:.5em;line-height:1.5}.xl\:prose figure figcaption{color:#718096;font-size:.875em;line-height:1.4285714;margin-top:.8571429em}.xl\:prose code{font-family:Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;color:#1a202c;font-weight:600;font-size:.875em}.xl\:prose code::before{content:"`"}.xl\:prose code::after{content:"`"}.xl\:prose pre{color:#e2e8f0;font-family:Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;background-color:#2d3748;overflow-x:auto;font-size:.875em;line-height:1.7142857;margin-top:1.7142857em;margin-bottom:1.7142857em;border-radius:.375rem;padding-top:.8571429em;padding-right:1.1428571em;padding-bottom:.8571429em;padding-left:1.1428571em}.xl\:prose pre code{background-color:transparent;border-width:0;border-radius:0;padding:0;font-weight:400;color:inherit;font-size:inherit;font-family:inherit;line-height:inherit}.xl\:prose pre code::before{content:""}.xl\:prose pre code::after{content:""}.xl\:prose table{width:100%;table-layout:auto;text-align:left;margin-top:2em;margin-bottom:2em;font-size:.875em;line-height:1.7142857}.xl\:prose thead{color:#1a202c;font-weight:600;border-bottom-width:1px;border-bottom-color:#cbd5e0}.xl\:prose thead th{vertical-align:bottom;padding-right:.5714286em;padding-bottom:.5714286em;padding-left:.5714286em}.xl\:prose tbody tr{border-bottom-width:1px;border-bottom-color:#e2e8f0}.xl\:prose tbody tr:last-child{border-bottom-width:0}.xl\:prose tbody td{vertical-align:top;padding-top:.5714286em;padding-right:.5714286em;padding-bottom:.5714286em;padding-left:.5714286em}.xl\:prose p{margin-top:1.25em;margin-bottom:1.25em}.xl\:prose img{margin-top:2em;margin-bottom:2em}.xl\:prose video{margin-top:2em;margin-bottom:2em}.xl\:prose figure{margin-top:2em;margin-bottom:2em}.xl\:prose figure>*{margin-top:0;margin-bottom:0}.xl\:prose h2 code{font-size:.875em}.xl\:prose h3 code{font-size:.9em}.xl\:prose ul{margin-top:1.25em;margin-bottom:1.25em}.xl\:prose li{margin-top:.5em;margin-bottom:.5em}.xl\:prose ol>li:before{left:0}.xl\:prose>ul>li p{margin-top:.75em;margin-bottom:.75em}.xl\:prose>ul>li>:first-child{margin-top:1.25em}.xl\:prose>ul>li>:last-child{margin-bottom:1.25em}.xl\:prose>ol>li>:first-child{margin-top:1.25em}.xl\:prose>ol>li>:last-child{margin-bottom:1.25em}.xl\:prose ol ol,.xl\:prose ol 
ul,.xl\:prose ul ol,.xl\:prose ul ul{margin-top:.75em;margin-bottom:.75em}.xl\:prose hr+*{margin-top:0}.xl\:prose h2+*{margin-top:0}.xl\:prose h3+*{margin-top:0}.xl\:prose h4+*{margin-top:0}.xl\:prose thead th:first-child{padding-left:0}.xl\:prose thead th:last-child{padding-right:0}.xl\:prose tbody td:first-child{padding-left:0}.xl\:prose tbody td:last-child{padding-right:0}.xl\:prose>:first-child{margin-top:0}.xl\:prose>:last-child{margin-bottom:0}.xl\:prose-sm{font-size:.875rem;line-height:1.7142857}.xl\:prose-sm p{margin-top:1.1428571em;margin-bottom:1.1428571em}.prose-sm .xl\:lead{font-size:1.2857143em;line-height:1.5555556;margin-top:.8888889em;margin-bottom:.8888889em}.xl\:prose-sm blockquote{margin-top:1.3333333em;margin-bottom:1.3333333em;padding-left:1.1111111em}.xl\:prose-sm h1{font-size:2.1428571em;margin-top:0;margin-bottom:.8em;line-height:1.2}.xl\:prose-sm h2{font-size:1.4285714em;margin-top:1.6em;margin-bottom:.8em;line-height:1.4}.xl\:prose-sm h3{font-size:1.2857143em;margin-top:1.5555556em;margin-bottom:.4444444em;line-height:1.5555556}.xl\:prose-sm h4{margin-top:1.4285714em;margin-bottom:.5714286em;line-height:1.4285714}.xl\:prose-sm img{margin-top:1.7142857em;margin-bottom:1.7142857em}.xl\:prose-sm video{margin-top:1.7142857em;margin-bottom:1.7142857em}.xl\:prose-sm figure{margin-top:1.7142857em;margin-bottom:1.7142857em}.xl\:prose-sm figure>*{margin-top:0;margin-bottom:0}.xl\:prose-sm figure figcaption{font-size:.8571429em;line-height:1.3333333;margin-top:.6666667em}.xl\:prose-sm code{font-size:.8571429em}.xl\:prose-sm h2 code{font-size:.9em}.xl\:prose-sm h3 code{font-size:.8888889em}.xl\:prose-sm pre{font-size:.8571429em;line-height:1.6666667;margin-top:1.6666667em;margin-bottom:1.6666667em;border-radius:.25rem;padding-top:.6666667em;padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.xl\:prose-sm ol{margin-top:1.1428571em;margin-bottom:1.1428571em}.xl\:prose-sm ul{margin-top:1.1428571em;margin-bottom:1.1428571em}.xl\:prose-sm li{margin-top:.2857143em;margin-bottom:.2857143em}.xl\:prose-sm ol>li{padding-left:1.5714286em}.xl\:prose-sm ol>li:before{left:0}.xl\:prose-sm ul>li{padding-left:1.5714286em}.xl\:prose-sm ul>li::before{height:.3571429em;width:.3571429em;top:calc(.8571429em - .1785714em);left:.2142857em}.xl\:prose-sm>ul>li p{margin-top:.5714286em;margin-bottom:.5714286em}.xl\:prose-sm>ul>li>:first-child{margin-top:1.1428571em}.xl\:prose-sm>ul>li>:last-child{margin-bottom:1.1428571em}.xl\:prose-sm>ol>li>:first-child{margin-top:1.1428571em}.xl\:prose-sm>ol>li>:last-child{margin-bottom:1.1428571em}.xl\:prose-sm ol ol,.xl\:prose-sm ol ul,.xl\:prose-sm ul ol,.xl\:prose-sm ul ul{margin-top:.5714286em;margin-bottom:.5714286em}.xl\:prose-sm hr{margin-top:2.8571429em;margin-bottom:2.8571429em}.xl\:prose-sm hr+*{margin-top:0}.xl\:prose-sm h2+*{margin-top:0}.xl\:prose-sm h3+*{margin-top:0}.xl\:prose-sm h4+*{margin-top:0}.xl\:prose-sm table{font-size:.8571429em;line-height:1.5}.xl\:prose-sm thead th{padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.xl\:prose-sm thead th:first-child{padding-left:0}.xl\:prose-sm thead th:last-child{padding-right:0}.xl\:prose-sm tbody td{padding-top:.6666667em;padding-right:1em;padding-bottom:.6666667em;padding-left:1em}.xl\:prose-sm tbody td:first-child{padding-left:0}.xl\:prose-sm tbody td:last-child{padding-right:0}.xl\:prose-sm>:first-child{margin-top:0}.xl\:prose-sm>:last-child{margin-bottom:0}.xl\:prose-lg{font-size:1.125rem;line-height:1.7777778}.xl\:prose-lg 
p{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-lg .xl\:lead{font-size:1.2222222em;line-height:1.4545455;margin-top:1.0909091em;margin-bottom:1.0909091em}.xl\:prose-lg blockquote{margin-top:1.6666667em;margin-bottom:1.6666667em;padding-left:1em}.xl\:prose-lg h1{font-size:2.6666667em;margin-top:0;margin-bottom:.8333333em;line-height:1}.xl\:prose-lg h2{font-size:1.6666667em;margin-top:1.8666667em;margin-bottom:1.0666667em;line-height:1.3333333}.xl\:prose-lg h3{font-size:1.3333333em;margin-top:1.6666667em;margin-bottom:.6666667em;line-height:1.5}.xl\:prose-lg h4{margin-top:1.7777778em;margin-bottom:.4444444em;line-height:1.5555556}.xl\:prose-lg img{margin-top:1.7777778em;margin-bottom:1.7777778em}.xl\:prose-lg video{margin-top:1.7777778em;margin-bottom:1.7777778em}.xl\:prose-lg figure{margin-top:1.7777778em;margin-bottom:1.7777778em}.xl\:prose-lg figure>*{margin-top:0;margin-bottom:0}.xl\:prose-lg figure figcaption{font-size:.8888889em;line-height:1.5;margin-top:1em}.xl\:prose-lg code{font-size:.8888889em}.xl\:prose-lg h2 code{font-size:.8666667em}.xl\:prose-lg h3 code{font-size:.875em}.xl\:prose-lg pre{font-size:.8888889em;line-height:1.75;margin-top:2em;margin-bottom:2em;border-radius:.375rem;padding-top:1em;padding-right:1.5em;padding-bottom:1em;padding-left:1.5em}.xl\:prose-lg ol{margin-top:1.3333333em;margin-bottom:1.3333333em}.xl\:prose-lg ul{margin-top:1.3333333em;margin-bottom:1.3333333em}.xl\:prose-lg li{margin-top:.6666667em;margin-bottom:.6666667em}.xl\:prose-lg ol>li{padding-left:1.6666667em}.xl\:prose-lg ol>li:before{left:0}.xl\:prose-lg ul>li{padding-left:1.6666667em}.xl\:prose-lg ul>li::before{width:.3333333em;height:.3333333em;top:calc(.8888889em - .1666667em);left:.2222222em}.xl\:prose-lg>ul>li p{margin-top:.8888889em;margin-bottom:.8888889em}.xl\:prose-lg>ul>li>:first-child{margin-top:1.3333333em}.xl\:prose-lg>ul>li>:last-child{margin-bottom:1.3333333em}.xl\:prose-lg>ol>li>:first-child{margin-top:1.3333333em}.xl\:prose-lg>ol>li>:last-child{margin-bottom:1.3333333em}.xl\:prose-lg ol ol,.xl\:prose-lg ol ul,.xl\:prose-lg ul ol,.xl\:prose-lg ul ul{margin-top:.8888889em;margin-bottom:.8888889em}.xl\:prose-lg hr{margin-top:3.1111111em;margin-bottom:3.1111111em}.xl\:prose-lg hr+*{margin-top:0}.xl\:prose-lg h2+*{margin-top:0}.xl\:prose-lg h3+*{margin-top:0}.xl\:prose-lg h4+*{margin-top:0}.xl\:prose-lg table{font-size:.8888889em;line-height:1.5}.xl\:prose-lg thead th{padding-right:.75em;padding-bottom:.75em;padding-left:.75em}.xl\:prose-lg thead th:first-child{padding-left:0}.xl\:prose-lg thead th:last-child{padding-right:0}.xl\:prose-lg tbody td{padding-top:.75em;padding-right:.75em;padding-bottom:.75em;padding-left:.75em}.xl\:prose-lg tbody td:first-child{padding-left:0}.xl\:prose-lg tbody td:last-child{padding-right:0}.xl\:prose-lg>:first-child{margin-top:0}.xl\:prose-lg>:last-child{margin-bottom:0}.xl\:prose-xl{font-size:1.25rem;line-height:1.8}.xl\:prose-xl p{margin-top:1.2em;margin-bottom:1.2em}.prose-xl .xl\:lead{font-size:1.2em;line-height:1.5;margin-top:1em;margin-bottom:1em}.xl\:prose-xl blockquote{margin-top:1.6em;margin-bottom:1.6em;padding-left:1.0666667em}.xl\:prose-xl h1{font-size:2.8em;margin-top:0;margin-bottom:.8571429em;line-height:1}.xl\:prose-xl h2{font-size:1.8em;margin-top:1.5555556em;margin-bottom:.8888889em;line-height:1.1111111}.xl\:prose-xl h3{font-size:1.5em;margin-top:1.6em;margin-bottom:.6666667em;line-height:1.3333333}.xl\:prose-xl h4{margin-top:1.8em;margin-bottom:.6em;line-height:1.6}.xl\:prose-xl 
img{margin-top:2em;margin-bottom:2em}.xl\:prose-xl video{margin-top:2em;margin-bottom:2em}.xl\:prose-xl figure{margin-top:2em;margin-bottom:2em}.xl\:prose-xl figure>*{margin-top:0;margin-bottom:0}.xl\:prose-xl figure figcaption{font-size:.9em;line-height:1.5555556;margin-top:1em}.xl\:prose-xl code{font-size:.9em}.xl\:prose-xl h2 code{font-size:.8611111em}.xl\:prose-xl h3 code{font-size:.9em}.xl\:prose-xl pre{font-size:.9em;line-height:1.7777778;margin-top:2em;margin-bottom:2em;border-radius:.5rem;padding-top:1.1111111em;padding-right:1.3333333em;padding-bottom:1.1111111em;padding-left:1.3333333em}.xl\:prose-xl ol{margin-top:1.2em;margin-bottom:1.2em}.xl\:prose-xl ul{margin-top:1.2em;margin-bottom:1.2em}.xl\:prose-xl li{margin-top:.6em;margin-bottom:.6em}.xl\:prose-xl ol>li{padding-left:1.8em}.xl\:prose-xl ol>li:before{left:0}.xl\:prose-xl ul>li{padding-left:1.8em}.xl\:prose-xl ul>li::before{width:.35em;height:.35em;top:calc(.9em - .175em);left:.25em}.xl\:prose-xl>ul>li p{margin-top:.8em;margin-bottom:.8em}.xl\:prose-xl>ul>li>:first-child{margin-top:1.2em}.xl\:prose-xl>ul>li>:last-child{margin-bottom:1.2em}.xl\:prose-xl>ol>li>:first-child{margin-top:1.2em}.xl\:prose-xl>ol>li>:last-child{margin-bottom:1.2em}.xl\:prose-xl ol ol,.xl\:prose-xl ol ul,.xl\:prose-xl ul ol,.xl\:prose-xl ul ul{margin-top:.8em;margin-bottom:.8em}.xl\:prose-xl hr{margin-top:2.8em;margin-bottom:2.8em}.xl\:prose-xl hr+*{margin-top:0}.xl\:prose-xl h2+*{margin-top:0}.xl\:prose-xl h3+*{margin-top:0}.xl\:prose-xl h4+*{margin-top:0}.xl\:prose-xl table{font-size:.9em;line-height:1.5555556}.xl\:prose-xl thead th{padding-right:.6666667em;padding-bottom:.8888889em;padding-left:.6666667em}.xl\:prose-xl thead th:first-child{padding-left:0}.xl\:prose-xl thead th:last-child{padding-right:0}.xl\:prose-xl tbody td{padding-top:.8888889em;padding-right:.6666667em;padding-bottom:.8888889em;padding-left:.6666667em}.xl\:prose-xl tbody td:first-child{padding-left:0}.xl\:prose-xl tbody td:last-child{padding-right:0}.xl\:prose-xl>:first-child{margin-top:0}.xl\:prose-xl>:last-child{margin-bottom:0}.xl\:prose-2xl{font-size:1.5rem;line-height:1.6666667}.xl\:prose-2xl p{margin-top:1.3333333em;margin-bottom:1.3333333em}.prose-2xl .xl\:lead{font-size:1.25em;line-height:1.4666667;margin-top:1.0666667em;margin-bottom:1.0666667em}.xl\:prose-2xl blockquote{margin-top:1.7777778em;margin-bottom:1.7777778em;padding-left:1.1111111em}.xl\:prose-2xl h1{font-size:2.6666667em;margin-top:0;margin-bottom:.875em;line-height:1}.xl\:prose-2xl h2{font-size:2em;margin-top:1.5em;margin-bottom:.8333333em;line-height:1.0833333}.xl\:prose-2xl h3{font-size:1.5em;margin-top:1.5555556em;margin-bottom:.6666667em;line-height:1.2222222}.xl\:prose-2xl h4{margin-top:1.6666667em;margin-bottom:.6666667em;line-height:1.5}.xl\:prose-2xl img{margin-top:2em;margin-bottom:2em}.xl\:prose-2xl video{margin-top:2em;margin-bottom:2em}.xl\:prose-2xl figure{margin-top:2em;margin-bottom:2em}.xl\:prose-2xl figure>*{margin-top:0;margin-bottom:0}.xl\:prose-2xl figure figcaption{font-size:.8333333em;line-height:1.6;margin-top:1em}.xl\:prose-2xl code{font-size:.8333333em}.xl\:prose-2xl h2 code{font-size:.875em}.xl\:prose-2xl h3 code{font-size:.8888889em}.xl\:prose-2xl pre{font-size:.8333333em;line-height:1.8;margin-top:2em;margin-bottom:2em;border-radius:.5rem;padding-top:1.2em;padding-right:1.6em;padding-bottom:1.2em;padding-left:1.6em}.xl\:prose-2xl ol{margin-top:1.3333333em;margin-bottom:1.3333333em}.xl\:prose-2xl ul{margin-top:1.3333333em;margin-bottom:1.3333333em}.xl\:prose-2xl 
li{margin-top:.5em;margin-bottom:.5em}.xl\:prose-2xl ol>li{padding-left:1.6666667em}.xl\:prose-2xl ol>li:before{left:0}.xl\:prose-2xl ul>li{padding-left:1.6666667em}.xl\:prose-2xl ul>li::before{width:.3333333em;height:.3333333em;top:calc(.8333333em - .1666667em);left:.25em}.xl\:prose-2xl>ul>li p{margin-top:.8333333em;margin-bottom:.8333333em}.xl\:prose-2xl>ul>li>:first-child{margin-top:1.3333333em}.xl\:prose-2xl>ul>li>:last-child{margin-bottom:1.3333333em}.xl\:prose-2xl>ol>li>:first-child{margin-top:1.3333333em}.xl\:prose-2xl>ol>li>:last-child{margin-bottom:1.3333333em}.xl\:prose-2xl ol ol,.xl\:prose-2xl ol ul,.xl\:prose-2xl ul ol,.xl\:prose-2xl ul ul{margin-top:.6666667em;margin-bottom:.6666667em}.xl\:prose-2xl hr{margin-top:3em;margin-bottom:3em}.xl\:prose-2xl hr+*{margin-top:0}.xl\:prose-2xl h2+*{margin-top:0}.xl\:prose-2xl h3+*{margin-top:0}.xl\:prose-2xl h4+*{margin-top:0}.xl\:prose-2xl table{font-size:.8333333em;line-height:1.4}.xl\:prose-2xl thead th{padding-right:.6em;padding-bottom:.8em;padding-left:.6em}.xl\:prose-2xl thead th:first-child{padding-left:0}.xl\:prose-2xl thead th:last-child{padding-right:0}.xl\:prose-2xl tbody td{padding-top:.8em;padding-right:.6em;padding-bottom:.8em;padding-left:.6em}.xl\:prose-2xl tbody td:first-child{padding-left:0}.xl\:prose-2xl tbody td:last-child{padding-right:0}.xl\:prose-2xl>:first-child{margin-top:0}.xl\:prose-2xl>:last-child{margin-bottom:0}}
\ No newline at end of file
diff --git a/spaces/jbondy007/Video_Search_CLIP/app.py b/spaces/jbondy007/Video_Search_CLIP/app.py
deleted file mode 100644
index d2eab3b31970abf438efd09b633dc4847f010a09..0000000000000000000000000000000000000000
--- a/spaces/jbondy007/Video_Search_CLIP/app.py
+++ /dev/null
@@ -1,105 +0,0 @@
-import os
-os.system("pip freeze")
-import cv2
-from PIL import Image
-import clip
-import torch
-import math
-import numpy as np
-import datetime
-import gradio as gr
-
-
-# Load the open CLIP model
-device = "cuda" if torch.cuda.is_available() else "cpu"
-model, preprocess = clip.load("ViT-B/32", device=device)
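-# clip.load also returns the matching image preprocessing pipeline ("preprocess"),
-# which resizes, crops and normalizes each frame to the 224x224 input that ViT-B/32 expects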
-
-
-
-def inference(video, text):
- # The frame images will be stored in video_frames
- video_frames = []
- # Open the video file
-
- capture = cv2.VideoCapture(video)
- fps = capture.get(cv2.CAP_PROP_FPS)
-
-    current_frame = 0
-    # Read frames one by one until the end of the video
-    ret, frame = capture.read()
-    while capture.isOpened() and ret:
-        print('Read a new frame: ', ret)
-        # OpenCV delivers frames in BGR order; reversing the channels gives RGB for PIL
-        video_frames.append(Image.fromarray(frame[:, :, ::-1]))
-        current_frame += 1
-        ret, frame = capture.read()
-
-
- # Print some statistics
- print(f"Frames extracted: {len(video_frames)}")
-
-
- # You can try tuning the batch size for very large videos, but it should usually be OK
- batch_size = 256
- batches = math.ceil(len(video_frames) / batch_size)
-
-    # The encoded features will be stored in video_features
- video_features = torch.empty([0, 512], dtype=torch.float16).to(device)
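-    # ViT-B/32 produces 512-dimensional image embeddings, hence the empty [0, 512] feature tensor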
-
- # Process each batch
- for i in range(batches):
- print(f"Processing batch {i+1}/{batches}")
-
- # Get the relevant frames
- batch_frames = video_frames[i*batch_size : (i+1)*batch_size]
-
- # Preprocess the images for the batch
- batch_preprocessed = torch.stack([preprocess(frame) for frame in batch_frames]).to(device)
-
- # Encode with CLIP and normalize
- with torch.no_grad():
- batch_features = model.encode_image(batch_preprocessed)
- batch_features /= batch_features.norm(dim=-1, keepdim=True)
-
- # Append the batch to the list containing all features
- video_features = torch.cat((video_features, batch_features))
-
- # Print some stats
- print(f"Features: {video_features.shape}")
-
-
-    search_query = text
-    display_results_count = 1
- # Encode and normalize the search query using CLIP
- with torch.no_grad():
- text_features = model.encode_text(clip.tokenize(search_query).to(device))
- text_features /= text_features.norm(dim=-1, keepdim=True)
-
- # Compute the similarity between the search query and each frame using the Cosine similarity
- similarities = (100.0 * video_features @ text_features.T)
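-    # Both feature matrices are L2-normalized, so the scaled dot product above is the cosine similarity;
-    # topk then returns the scores and indices of the best-matching frames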
- values, best_photo_idx = similarities.topk(display_results_count, dim=0)
-
-
- for frame_id in best_photo_idx:
- frame = video_frames[frame_id]
- # Find the timestamp in the video and display it
- seconds = round(frame_id.cpu().numpy()[0]/fps)
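-        # Dividing the frame index by the capture frame rate turns the match into a timestamp in seconds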
-        return frame, f"Found at {datetime.timedelta(seconds=seconds)}"
-
-title = "Video Search"
-description = "Gradio demo for using OpenAI's CLIP to search inside videos. To use it, simply upload your video and add your text. Read more at the links below."
-article = "