diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/!LINK! Download Sanskrit Dictionary English.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/!LINK! Download Sanskrit Dictionary English.md deleted file mode 100644 index 294595dac392deb4fee48c13c584c2101f477ea9..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/!LINK! Download Sanskrit Dictionary English.md +++ /dev/null @@ -1,28 +0,0 @@ -
-

How to Download Sanskrit Dictionary English for Free

-

If you are looking for a way to learn Sanskrit, the ancient and sacred language of India, you might want to download a Sanskrit-English dictionary. This is a handy tool that can help you translate words and phrases from Sanskrit to English and vice versa. You can also use it to study the grammar, pronunciation, and culture of Sanskrit.

-

Download Sanskrit Dictionary English


Download Zip >>> https://byltly.com/2uKzQu



-

But where can you find a reliable and free Sanskrit-English dictionary? There are many websites and apps that claim to offer this service, but not all of them are trustworthy or accurate. Some might contain errors, malware, or ads that can ruin your experience. Others might charge you a fee or require you to register or subscribe.

-

That's why we have compiled a list of the best sources to download a Sanskrit-English dictionary for free. These are reputable and safe platforms that have been tested and reviewed by users and experts. They offer high-quality and comprehensive Sanskrit dictionaries that you can access online or offline. Here they are:

- -

These are some of the best sources to download a Sanskrit-English dictionary for free. We hope you find them useful and enjoy learning Sanskrit. If you have any questions or feedback, please let us know in the comments below.

- -

Why Learn Sanskrit?

-

Sanskrit is one of the oldest and most influential languages in the world. It is the language of the Vedas, the Upanishads, the Bhagavad Gita, and many other sacred texts of Hinduism, Buddhism, and Jainism. It is also the source of many words and concepts in other languages, such as Hindi, Urdu, Bengali, Nepali, and English.

-

-

Learning Sanskrit can enrich your knowledge and appreciation of the ancient and modern cultures of India and beyond. It can also improve your cognitive and linguistic skills, as Sanskrit is known for its logical and grammatical structure, rich vocabulary, and poetic beauty. In addition, it gives you access to the original texts and teachings of various spiritual traditions and philosophies.

-

How to Learn Sanskrit?

-

Learning Sanskrit can be challenging but rewarding. It requires dedication, patience, and practice. But it is not impossible. There are many resources and methods that can help you learn Sanskrit at your own pace and level. Here are some tips to get you started:

- -

These are some of the tips that can help you learn Sanskrit effectively. Remember that learning a new language takes time and effort. But with consistent practice and enthusiasm, you will be able to master Sanskrit and enjoy its benefits.

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Brutal Doom V16 REPACK Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Brutal Doom V16 REPACK Download.md deleted file mode 100644 index b7ea38ab95a90ed736f86933dbe5f4fd6850e247..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Brutal Doom V16 REPACK Download.md +++ /dev/null @@ -1,156 +0,0 @@ - -

Brutal Doom V16 Download: How to Choose the Best Mod for Your Doom Experience

- -

Doom is one of the most iconic and influential games of all time. It revolutionized the FPS genre with its fast-paced action, immersive graphics, and brutal violence. But even after almost 30 years, Doom is still alive and kicking, thanks to the countless mods that enhance and expand the game in various ways.

-

Brutal Doom V16 Download


Download ⇒⇒⇒ https://imgfil.com/2uxZUP



- -

One of the most popular and acclaimed mods for Doom is Brutal Doom, which adds new features, weapons, enemies, gore, sounds, and gameplay mechanics to make Doom more brutal, intense, and fun. But did you know that there are different versions of Brutal Doom that you can download and play?

- -

In this article, we will introduce you to two of the most recent and interesting versions of Brutal Doom: the Classic Edition v16a and the Extended Edition v16. We will tell you what they are, how they differ from each other and from the original Brutal Doom, and how to download and install them on your PC. Let's get started!

- -

What is Brutal Doom Classic Edition v16a?

- -

Brutal Doom Classic Edition v16a is a mod that aims to recreate the original Brutal Doom experience with elements from v18, v20, and v21. It is a more classic version of Brutal Doom, with fewer features and changes than the newer versions, but still with plenty of brutality and fun.

- -

Some of the features of Brutal Doom Classic Edition v16a are:

-

- - - -

If you want to enjoy Brutal Doom as it was originally intended, with simple and straightforward gameplay that focuses on shooting and killing demons, then Brutal Doom Classic Edition v16a is for you.

- -

What is Brutal Doom Extended Edition v16?

- -

Brutal Doom Extended Edition v16 is a mod based on Brutal Doom and Dox778's personalized addon. It aims to improve the overall gameplay of Brutal Doom with new features, enhancements, and fixes. It is a more modern version of Brutal Doom, with more options and customization than the older versions, but still with the same core gameplay that makes Brutal Doom so great.

- -

Some of the features of Brutal Doom Extended Edition v16 are:

- - - -

If you want to enjoy Brutal Doom with more variety and challenge, with a lot of options and settings to customize your gameplay experience, then Brutal Doom Extended Edition v16 is for you.

- -

How to Download and Install Brutal Doom V16 Mods?

- -

To download and install Brutal Doom V16 mods, you will need a few things:

- - - -

Once you have everything ready, follow these steps:

- -
    -
  1. Extract the source port files to a folder on your PC.
  2. -
  3. Copy the DOOM.WAD or DOOM2.WAD file from your game folder to the source port folder.
  4. -
  5. Extract the brutalv21.pk3 file from the Brutal Doom archive to the source port folder.
  6. -
  7. Extract the mod file (Brutal_Classic_v16a.zip or BDEE_v16_Compressed.zip) to the source port folder.
  8. -
  9. Launch the source port executable (gzdoom.exe or zandronum.exe).
  10. -
  11. Select your game (Doom or Doom II) and your mod (Brutal_Doom_Classic_Edition_v16a.pk3 or BDEE_v16_Compressed.pk3).
  12. -
  13. Enjoy!
  14. -
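If you prefer to skip the in-game menu in step 11, the same selection can also be scripted. The sketch below is only an illustration, not part of the original guide: it assumes GZDoom is installed and reachable as `gzdoom` on your PATH, that the WAD and .pk3 files sit in the source port folder, and it passes them through the `-iwad` and `-file` arguments that ZDoom-family ports accept.

```python
# launch_brutal_doom.py -- illustrative launcher sketch; file names follow the steps above
import subprocess
from pathlib import Path

port_dir = Path(".")  # the source port folder from step 1 (adjust to your setup)

cmd = [
    "gzdoom",  # or "zandronum", whichever source port you extracted
    "-iwad", str(port_dir / "DOOM2.WAD"),
    "-file",
    str(port_dir / "brutalv21.pk3"),
    str(port_dir / "Brutal_Doom_Classic_Edition_v16a.pk3"),  # or BDEE_v16_Compressed.pk3
]

# Starts the game with the mod files loaded; raises CalledProcessError on a non-zero exit code.
subprocess.run(cmd, check=True)
```

Whether the base brutalv21.pk3 should be loaded alongside the edition-specific file follows the extraction steps above; check the mod's own readme if it ships as a standalone package.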
- -

Conclusion

- -

Brutal Doom V16 mods are some of the best ways to enjoy Doom in 2023. Whether you prefer a classic or a modern version of Brutal Doom, you will find a mod that suits your taste and style. Download them now and have fun!

-

What are the Benefits of Playing Brutal Doom V16 Mods?

- -

Playing Brutal Doom V16 mods can offer you many benefits, such as:

- - - -

Playing Brutal Doom V16 mods can give you a whole new perspective on Doom and make you appreciate the game even more.

- -

What are the Requirements for Playing Brutal Doom V16 Mods?

- -

To play Brutal Doom V16 mods, you will need a few things:

- - - -

If you have these things, then you are ready to play Brutal Doom V16 mods and have a blast!

-

What are the Differences between Brutal Doom V16 Mods?

- -

Brutal Doom V16 mods have some differences that make them unique and appealing to different types of players. Here are some of the main differences between them:

- - - -

Depending on your preferences and expectations, you can choose the mod that best suits your needs and tastes.

- -

What are the Reviews of Brutal Doom V16 Mods?

- -

Brutal Doom V16 mods have received positive reviews from players and critics alike. They have been praised for their quality, variety, and fun factor. Here are some of the reviews of Brutal Doom V16 mods:

- -
-

"Brutal Doom Classic Edition v16a is a great mod for those who want to relive the glory days of Brutal Doom. It has everything you need to enjoy a classic and brutal Doom experience, without any unnecessary or distracting features. It is simple, fast, and fun."

-- A Mod DB user -
- -
-

"Brutal Doom Extended Edition v16 is a great mod for those who want to explore the possibilities of Brutal Doom. It has everything you need to enjoy a modern and diverse Doom experience, with a lot of options and settings to customize your gameplay. It is varied, challenging, and immersive."

-- A Mod DB user -
- -

Brutal Doom V16 mods have been rated highly by the community and have received many awards and recognitions. They are among the best mods for Doom ever made.

-

What are the Tips and Tricks for Playing Brutal Doom V16 Mods?

- -

Playing Brutal Doom V16 mods can be a lot of fun, but also quite a challenge. Here are some tips and tricks that can help you survive and enjoy the game more:

- - - -

Playing Brutal Doom V16 mods can be a rewarding and satisfying experience if you know how to play smart and use your resources wisely.

- -

What are the Future Plans for Brutal Doom V16 Mods?

- -

Brutal Doom V16 mods are not finished yet. The modders behind them are constantly working on improving and updating them with new features, fixes, and content. Here are some of the future plans for Brutal Doom V16 mods:

- - - -

Brutal Doom V16 mods are still in development and have a lot of potential. They will continue to grow and evolve with time and effort.

-

Conclusion

- -

Brutal Doom V16 mods are some of the best ways to enjoy Doom in 2023. They offer you different versions of Brutal Doom that suit your preferences and expectations. They enhance and expand the game with new and improved features that make it more fun and challenging. They are easy to download and install, and they have a lot of benefits, tips, and tricks that can help you survive and enjoy the game more. They are also constantly being updated and supported by the modders and the community, making them better and more enjoyable with time and effort.

- -

If you are a fan of Doom and Brutal Doom, you should definitely try Brutal Doom V16 mods. They will give you a whole new perspective on Doom and make you appreciate the game even more. Download them now and have a blast!

-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Billiards Pool Games Download Learn the Rules and Strategies of Pool Games.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Billiards Pool Games Download Learn the Rules and Strategies of Pool Games.md deleted file mode 100644 index 0cb667c4ffdbd6aba27dc5dd3ce909a7ff846e21..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Billiards Pool Games Download Learn the Rules and Strategies of Pool Games.md +++ /dev/null @@ -1,179 +0,0 @@ - -

Billiards Pool Games Download: How to Enjoy the Fun of Pool on Your Phone

-

Do you love playing pool but don't have the time or space to own a pool table? Do you want to practice your skills and challenge your friends online? Do you want to have fun and relax with a realistic and engaging pool game on your phone? If you answered yes to any of these questions, then you should download billiards pool games on your Android device.

-

Introduction

-

What are billiards pool games?

-

Billiards pool games are digital versions of the popular cue sports that involve hitting balls with a stick on a cloth-covered table. There are different types of billiards pool games, such as 8-ball, 9-ball, snooker, and carom. Each game has its own rules, objectives, and strategies. Billiards pool games can be played solo, against the computer, or online with other players.

-

billiards pool games download


Download Ziphttps://urlin.us/2uSVRL



-

Why should you download billiards pool games?

-

Downloading billiards pool games on your phone has many benefits, such as:

- -

Best Billiards Pool Games for Android

-

There are many billiards pool games available for Android devices, but not all of them are worth downloading. Here are some of the best ones that you should try:

-

8 Ball Pool

-

Features

-

8 Ball Pool is one of the most popular and downloaded billiards pool games on Android. It is developed by Miniclip, a leading online game company. 8 Ball Pool offers the following features:

- -

Pros and cons

-

8 Ball Pool has many pros, such as:

- -

However, 8 Ball Pool also has some cons, such as:

- -

Pool Billiards Pro

-

Features

-

Pool Billiards Pro is another popular and well-rated billiards pool game on Android. It is developed by TerranDroid, a game studio that specializes in casual and sports games. Pool Billiards Pro offers the following features:

-

8 ball pool online multiplayer free
-pool billiards pro offline android
-realistic 3D pool games for pc
-9 ball pool tournaments app
-best pool game with practice mode
-pool strategy and cue tips
-level-based billiard challenge game
-offline 8 ball pool against bots
-online 9 ball pool with friends
-free pool game with no ads
-pool billiards pro apk download
-8 ball pool miniclip for ios
-3D pool game with custom cues
-offline 9 ball pool game
-online 8 ball pool league
-pool game with high score record
-billiard game with arcade mode
-8 ball pool game with rules
-9 ball pool game with no rules
-realistic pool game with physics
-offline pool game with data safety
-online pool game with leader board
-free billiard game with in-app purchases
-pool game with touch control
-billiard game with single player mode
-8 ball pool game with time mode
-9 ball pool game with challenge mode
-realistic billiard game with animation
-offline 8 ball pool for tablet
-online 9 ball pool for phone
-free pool game with editors' choice
-billiard game with ratings and reviews
-8 ball pool game with data encryption
-9 ball pool game with data deletion request
-realistic pool game with sound effects
-offline billiard game for watch
-online pool game for chromebook
-free 8 ball pool for tv
-billiard game with privacy policy
-9 ball pool game with terms and conditions

- -

Pros and cons

-

Pool Billiards Pro has many pros, such as:

- -

However, Pool Billiards Pro also has some cons, such as:

- -

8 Ball Billiards Offline Pool

-

Features

-

8 Ball Billiards Offline Pool is a newer and lesser-known billiards pool game on Android. It is developed by SNG Games, a game developer that focuses on offline and classic games. 8 Ball Billiards Offline Pool offers the following features:

- -

Pros and cons

-

8 Ball Billiards Offline Pool has many pros, such as:

- -

However, 8 Ball Billiards Offline Pool also has some cons, such as:

- -

Conclusion

-

Summary of the main points

-

In conclusion, billiards pool games are fun and exciting games that you can download on your Android device. They allow you to enjoy the thrill of pool without needing a physical table or equipment. They also help you improve your skills and compete with other players online. Some of the best billiards pool games for Android are 8 Ball Pool, Pool Billiards Pro, and 8 Ball Billiards Offline Pool. Each game has its own features, pros, and cons that you should consider before downloading them.

-

Call to action

-

If you are looking for a great way to spend your free time, then you should download billiards pool games on your Android device. They are easy to play, fun to master, and challenging to beat. They will keep you entertained and engaged for hours. So what are you waiting for? Download billiards pool games today and start playing!

-

Frequently Asked Questions

-

Here are some of the most common questions that people ask about billiards pool games:

-
    -
  1. What are the rules of billiards pool games?
  2. -

    The rules of billiards pool games vary depending on the type of game you are playing. However, some general rules are:

    - -
  3. How can I download billiards pool games on my Android device?
  4. -

    You can download billiards pool games on your Android device by following these steps:

    - -
  5. Are billiards pool games free or paid?
  6. -

    Most billiards pool games are free to download and play on your Android device. However, some games may contain ads or in-app purchases that may require you to pay real money to access certain features or items. You can choose to disable or enable these options in the game settings or in your device settings.

    -
  7. Which billiards pool game is the best for me?
  8. -

    The best billiards pool game for you depends on your personal preference and taste. You should consider factors such as:

    - -

    You can try different games and see which one suits you best. You can also read reviews and ratings from other players to get an idea of what they think about the games.

    -
  9. How can I improve my skills in billiards pool games?
  10. -

    You can improve your skills in billiards pool games by practicing regularly and learning from your mistakes. You can also follow these tips:

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Undangan Pernikahan AI Tips dan Trik untuk Membuat Undangan Menarik.md b/spaces/1phancelerku/anime-remove-background/Download Undangan Pernikahan AI Tips dan Trik untuk Membuat Undangan Menarik.md deleted file mode 100644 index 93ef6f35be20b26d1d58addbb1e150647d488e6c..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Undangan Pernikahan AI Tips dan Trik untuk Membuat Undangan Menarik.md +++ /dev/null @@ -1,131 +0,0 @@ -

I hope you enjoyed reading this article and learned something new about "Hard Life" by Blackface Naija featuring Alabai. If you have any questions or comments, please feel free to share them with me. I would love to hear from you.

    -

Thank you for your time and attention. Have a great day!

    -
    -
    \ No newline at end of file diff --git a/spaces/Billyosoro/ESRGAN/realesrgan/data/realesrgan_dataset.py b/spaces/Billyosoro/ESRGAN/realesrgan/data/realesrgan_dataset.py deleted file mode 100644 index 4cf2d9e6583a6789b771679734ce55bb8a22e628..0000000000000000000000000000000000000000 --- a/spaces/Billyosoro/ESRGAN/realesrgan/data/realesrgan_dataset.py +++ /dev/null @@ -1,192 +0,0 @@ -import cv2 -import math -import numpy as np -import os -import os.path as osp -import random -import time -import torch -from basicsr.data.degradations import circular_lowpass_kernel, random_mixed_kernels -from basicsr.data.transforms import augment -from basicsr.utils import FileClient, get_root_logger, imfrombytes, img2tensor -from basicsr.utils.registry import DATASET_REGISTRY -from torch.utils import data as data - - -@DATASET_REGISTRY.register() -class RealESRGANDataset(data.Dataset): - """Dataset used for Real-ESRGAN model: - Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data. - - It loads gt (Ground-Truth) images, and augments them. - It also generates blur kernels and sinc kernels for generating low-quality images. - Note that the low-quality images are processed in tensors on GPUS for faster processing. - - Args: - opt (dict): Config for train datasets. It contains the following keys: - dataroot_gt (str): Data root path for gt. - meta_info (str): Path for meta information file. - io_backend (dict): IO backend type and other kwarg. - use_hflip (bool): Use horizontal flips. - use_rot (bool): Use rotation (use vertical flip and transposing h and w for implementation). - Please see more options in the codes. - """ - - def __init__(self, opt): - super(RealESRGANDataset, self).__init__() - self.opt = opt - self.file_client = None - self.io_backend_opt = opt['io_backend'] - self.gt_folder = opt['dataroot_gt'] - - # file client (lmdb io backend) - if self.io_backend_opt['type'] == 'lmdb': - self.io_backend_opt['db_paths'] = [self.gt_folder] - self.io_backend_opt['client_keys'] = ['gt'] - if not self.gt_folder.endswith('.lmdb'): - raise ValueError(f"'dataroot_gt' should end with '.lmdb', but received {self.gt_folder}") - with open(osp.join(self.gt_folder, 'meta_info.txt')) as fin: - self.paths = [line.split('.')[0] for line in fin] - else: - # disk backend with meta_info - # Each line in the meta_info describes the relative path to an image - with open(self.opt['meta_info']) as fin: - paths = [line.strip().split(' ')[0] for line in fin] - self.paths = [os.path.join(self.gt_folder, v) for v in paths] - - # blur settings for the first degradation - self.blur_kernel_size = opt['blur_kernel_size'] - self.kernel_list = opt['kernel_list'] - self.kernel_prob = opt['kernel_prob'] # a list for each kernel probability - self.blur_sigma = opt['blur_sigma'] - self.betag_range = opt['betag_range'] # betag used in generalized Gaussian blur kernels - self.betap_range = opt['betap_range'] # betap used in plateau blur kernels - self.sinc_prob = opt['sinc_prob'] # the probability for sinc filters - - # blur settings for the second degradation - self.blur_kernel_size2 = opt['blur_kernel_size2'] - self.kernel_list2 = opt['kernel_list2'] - self.kernel_prob2 = opt['kernel_prob2'] - self.blur_sigma2 = opt['blur_sigma2'] - self.betag_range2 = opt['betag_range2'] - self.betap_range2 = opt['betap_range2'] - self.sinc_prob2 = opt['sinc_prob2'] - - # a final sinc filter - self.final_sinc_prob = opt['final_sinc_prob'] - - self.kernel_range = [2 * v + 1 for v in range(3, 11)] # kernel size ranges from 7 to 
21 - # TODO: kernel range is now hard-coded, should be in the configure file - self.pulse_tensor = torch.zeros(21, 21).float() # convolving with pulse tensor brings no blurry effect - self.pulse_tensor[10, 10] = 1 - - def __getitem__(self, index): - if self.file_client is None: - self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt) - - # -------------------------------- Load gt images -------------------------------- # - # Shape: (h, w, c); channel order: BGR; image range: [0, 1], float32. - gt_path = self.paths[index] - # avoid errors caused by high latency in reading files - retry = 3 - while retry > 0: - try: - img_bytes = self.file_client.get(gt_path, 'gt') - except (IOError, OSError) as e: - logger = get_root_logger() - logger.warn(f'File client error: {e}, remaining retry times: {retry - 1}') - # change another file to read - index = random.randint(0, self.__len__()) - gt_path = self.paths[index] - time.sleep(1) # sleep 1s for occasional server congestion - else: - break - finally: - retry -= 1 - img_gt = imfrombytes(img_bytes, float32=True) - - # -------------------- Do augmentation for training: flip, rotation -------------------- # - img_gt = augment(img_gt, self.opt['use_hflip'], self.opt['use_rot']) - - # crop or pad to 400 - # TODO: 400 is hard-coded. You may change it accordingly - h, w = img_gt.shape[0:2] - crop_pad_size = 400 - # pad - if h < crop_pad_size or w < crop_pad_size: - pad_h = max(0, crop_pad_size - h) - pad_w = max(0, crop_pad_size - w) - img_gt = cv2.copyMakeBorder(img_gt, 0, pad_h, 0, pad_w, cv2.BORDER_REFLECT_101) - # crop - if img_gt.shape[0] > crop_pad_size or img_gt.shape[1] > crop_pad_size: - h, w = img_gt.shape[0:2] - # randomly choose top and left coordinates - top = random.randint(0, h - crop_pad_size) - left = random.randint(0, w - crop_pad_size) - img_gt = img_gt[top:top + crop_pad_size, left:left + crop_pad_size, ...] 
- - # ------------------------ Generate kernels (used in the first degradation) ------------------------ # - kernel_size = random.choice(self.kernel_range) - if np.random.uniform() < self.opt['sinc_prob']: - # this sinc filter setting is for kernels ranging from [7, 21] - if kernel_size < 13: - omega_c = np.random.uniform(np.pi / 3, np.pi) - else: - omega_c = np.random.uniform(np.pi / 5, np.pi) - kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False) - else: - kernel = random_mixed_kernels( - self.kernel_list, - self.kernel_prob, - kernel_size, - self.blur_sigma, - self.blur_sigma, [-math.pi, math.pi], - self.betag_range, - self.betap_range, - noise_range=None) - # pad kernel - pad_size = (21 - kernel_size) // 2 - kernel = np.pad(kernel, ((pad_size, pad_size), (pad_size, pad_size))) - - # ------------------------ Generate kernels (used in the second degradation) ------------------------ # - kernel_size = random.choice(self.kernel_range) - if np.random.uniform() < self.opt['sinc_prob2']: - if kernel_size < 13: - omega_c = np.random.uniform(np.pi / 3, np.pi) - else: - omega_c = np.random.uniform(np.pi / 5, np.pi) - kernel2 = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False) - else: - kernel2 = random_mixed_kernels( - self.kernel_list2, - self.kernel_prob2, - kernel_size, - self.blur_sigma2, - self.blur_sigma2, [-math.pi, math.pi], - self.betag_range2, - self.betap_range2, - noise_range=None) - - # pad kernel - pad_size = (21 - kernel_size) // 2 - kernel2 = np.pad(kernel2, ((pad_size, pad_size), (pad_size, pad_size))) - - # ------------------------------------- the final sinc kernel ------------------------------------- # - if np.random.uniform() < self.opt['final_sinc_prob']: - kernel_size = random.choice(self.kernel_range) - omega_c = np.random.uniform(np.pi / 3, np.pi) - sinc_kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=21) - sinc_kernel = torch.FloatTensor(sinc_kernel) - else: - sinc_kernel = self.pulse_tensor - - # BGR to RGB, HWC to CHW, numpy to tensor - img_gt = img2tensor([img_gt], bgr2rgb=True, float32=True)[0] - kernel = torch.FloatTensor(kernel) - kernel2 = torch.FloatTensor(kernel2) - - return_d = {'gt': img_gt, 'kernel1': kernel, 'kernel2': kernel2, 'sinc_kernel': sinc_kernel, 'gt_path': gt_path} - return return_d - - def __len__(self): - return len(self.paths) diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/evaluator.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/evaluator.py deleted file mode 100644 index ad410aa465fefce20c27799c86ff405ffafd0e02..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/evaluator.py +++ /dev/null @@ -1,156 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved - -import contextlib -import copy -import io -import itertools -import json -import logging -import os -from collections import OrderedDict -import torch -from pycocotools.coco import COCO - -from detectron2.data import MetadataCatalog -from detectron2.evaluation import DatasetEvaluator -from detectron2.structures import BoxMode -from detectron2.utils.comm import all_gather, is_main_process, synchronize -from detectron2.utils.logger import create_small_table - -from .densepose_coco_evaluation import DensePoseCocoEval, DensePoseEvalMode - - -class DensePoseCOCOEvaluator(DatasetEvaluator): - def __init__(self, dataset_name, distributed, output_dir=None): - self._distributed = distributed - self._output_dir = output_dir - - self._cpu_device = torch.device("cpu") - self._logger = logging.getLogger(__name__) - - self._metadata = MetadataCatalog.get(dataset_name) - with contextlib.redirect_stdout(io.StringIO()): - self._coco_api = COCO(self._metadata.json_file) - - def reset(self): - self._predictions = [] - - def process(self, inputs, outputs): - """ - Args: - inputs: the inputs to a COCO model (e.g., GeneralizedRCNN). - It is a list of dict. Each dict corresponds to an image and - contains keys like "height", "width", "file_name", "image_id". - outputs: the outputs of a COCO model. It is a list of dicts with key - "instances" that contains :class:`Instances`. - The :class:`Instances` object needs to have `densepose` field. - """ - for input, output in zip(inputs, outputs): - instances = output["instances"].to(self._cpu_device) - - boxes = instances.pred_boxes.tensor.clone() - boxes = BoxMode.convert(boxes, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS) - instances.pred_densepose = instances.pred_densepose.to_result(boxes) - - json_results = prediction_to_json(instances, input["image_id"]) - self._predictions.extend(json_results) - - def evaluate(self): - if self._distributed: - synchronize() - predictions = all_gather(self._predictions) - predictions = list(itertools.chain(*predictions)) - if not is_main_process(): - return - else: - predictions = self._predictions - - return copy.deepcopy(self._eval_predictions(predictions)) - - def _eval_predictions(self, predictions): - """ - Evaluate predictions on densepose. - Return results with the metrics of the tasks. 
- """ - self._logger.info("Preparing results for COCO format ...") - - if self._output_dir: - file_path = os.path.join(self._output_dir, "coco_densepose_results.json") - with open(file_path, "w") as f: - json.dump(predictions, f) - f.flush() - os.fsync(f.fileno()) - - self._logger.info("Evaluating predictions ...") - res = OrderedDict() - results_gps, results_gpsm = _evaluate_predictions_on_coco(self._coco_api, predictions) - res["densepose_gps"] = results_gps - res["densepose_gpsm"] = results_gpsm - return res - - -def prediction_to_json(instances, img_id): - """ - Args: - instances (Instances): the output of the model - img_id (str): the image id in COCO - - Returns: - list[dict]: the results in densepose evaluation format - """ - scores = instances.scores.tolist() - - results = [] - for k in range(len(instances)): - densepose = instances.pred_densepose[k] - result = { - "image_id": img_id, - "category_id": 1, # densepose only has one class - "bbox": densepose[1], - "score": scores[k], - "densepose": densepose, - } - results.append(result) - return results - - -def _evaluate_predictions_on_coco(coco_gt, coco_results): - metrics = ["AP", "AP50", "AP75", "APm", "APl"] - - logger = logging.getLogger(__name__) - - if len(coco_results) == 0: # cocoapi does not handle empty results very well - logger.warn("No predictions from the model! Set scores to -1") - results_gps = {metric: -1 for metric in metrics} - results_gpsm = {metric: -1 for metric in metrics} - return results_gps, results_gpsm - - coco_dt = coco_gt.loadRes(coco_results) - results_gps = _evaluate_predictions_on_coco_gps(coco_gt, coco_dt, metrics) - logger.info( - "Evaluation results for densepose, GPS metric: \n" + create_small_table(results_gps) - ) - results_gpsm = _evaluate_predictions_on_coco_gpsm(coco_gt, coco_dt, metrics) - logger.info( - "Evaluation results for densepose, GPSm metric: \n" + create_small_table(results_gpsm) - ) - return results_gps, results_gpsm - - -def _evaluate_predictions_on_coco_gps(coco_gt, coco_dt, metrics): - coco_eval = DensePoseCocoEval(coco_gt, coco_dt, "densepose", dpEvalMode=DensePoseEvalMode.GPS) - coco_eval.evaluate() - coco_eval.accumulate() - coco_eval.summarize() - results = {metric: float(coco_eval.stats[idx] * 100) for idx, metric in enumerate(metrics)} - return results - - -def _evaluate_predictions_on_coco_gpsm(coco_gt, coco_dt, metrics): - coco_eval = DensePoseCocoEval(coco_gt, coco_dt, "densepose", dpEvalMode=DensePoseEvalMode.GPSM) - coco_eval.evaluate() - coco_eval.accumulate() - coco_eval.summarize() - results = {metric: float(coco_eval.stats[idx] * 100) for idx, metric in enumerate(metrics)} - return results diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/PointRend/point_rend/coarse_mask_head.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/PointRend/point_rend/coarse_mask_head.py deleted file mode 100644 index 3f1cffb4c985dc3121a863eb7b378965b718a19d..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/PointRend/point_rend/coarse_mask_head.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -import fvcore.nn.weight_init as weight_init -import torch -from torch import nn -from torch.nn import functional as F - -from detectron2.layers import Conv2d, ShapeSpec -from detectron2.modeling import ROI_MASK_HEAD_REGISTRY - - -@ROI_MASK_HEAD_REGISTRY.register() -class CoarseMaskHead(nn.Module): - """ - A mask head with fully connected layers. Given pooled features it first reduces channels and - spatial dimensions with conv layers and then uses FC layers to predict coarse masks analogously - to the standard box head. - """ - - def __init__(self, cfg, input_shape: ShapeSpec): - """ - The following attributes are parsed from config: - conv_dim: the output dimension of the conv layers - fc_dim: the feature dimenstion of the FC layers - num_fc: the number of FC layers - output_side_resolution: side resolution of the output square mask prediction - """ - super(CoarseMaskHead, self).__init__() - - # fmt: off - self.num_classes = cfg.MODEL.ROI_HEADS.NUM_CLASSES - conv_dim = cfg.MODEL.ROI_MASK_HEAD.CONV_DIM - self.fc_dim = cfg.MODEL.ROI_MASK_HEAD.FC_DIM - num_fc = cfg.MODEL.ROI_MASK_HEAD.NUM_FC - self.output_side_resolution = cfg.MODEL.ROI_MASK_HEAD.OUTPUT_SIDE_RESOLUTION - self.input_channels = input_shape.channels - self.input_h = input_shape.height - self.input_w = input_shape.width - # fmt: on - - self.conv_layers = [] - if self.input_channels > conv_dim: - self.reduce_channel_dim_conv = Conv2d( - self.input_channels, - conv_dim, - kernel_size=1, - stride=1, - padding=0, - bias=True, - activation=F.relu, - ) - self.conv_layers.append(self.reduce_channel_dim_conv) - - self.reduce_spatial_dim_conv = Conv2d( - conv_dim, conv_dim, kernel_size=2, stride=2, padding=0, bias=True, activation=F.relu - ) - self.conv_layers.append(self.reduce_spatial_dim_conv) - - input_dim = conv_dim * self.input_h * self.input_w - input_dim //= 4 - - self.fcs = [] - for k in range(num_fc): - fc = nn.Linear(input_dim, self.fc_dim) - self.add_module("coarse_mask_fc{}".format(k + 1), fc) - self.fcs.append(fc) - input_dim = self.fc_dim - - output_dim = self.num_classes * self.output_side_resolution * self.output_side_resolution - - self.prediction = nn.Linear(self.fc_dim, output_dim) - # use normal distribution initialization for mask prediction layer - nn.init.normal_(self.prediction.weight, std=0.001) - nn.init.constant_(self.prediction.bias, 0) - - for layer in self.conv_layers: - weight_init.c2_msra_fill(layer) - for layer in self.fcs: - weight_init.c2_xavier_fill(layer) - - def forward(self, x): - # unlike BaseMaskRCNNHead, this head only outputs intermediate - # features, because the features will be used later by PointHead. 
- N = x.shape[0] - x = x.view(N, self.input_channels, self.input_h, self.input_w) - for layer in self.conv_layers: - x = layer(x) - x = torch.flatten(x, start_dim=1) - for layer in self.fcs: - x = F.relu(layer(x)) - return self.prediction(x).view( - N, self.num_classes, self.output_side_resolution, self.output_side_resolution - ) diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/execution_policy.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/execution_policy.h deleted file mode 100644 index 81d52f14087e8681e425d176c6c8e3991a5cfda7..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/execution_policy.h +++ /dev/null @@ -1,76 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ -namespace sequential -{ - - -// this awkward sequence of definitions arises -// from the desire both for tag to derive -// from execution_policy and for execution_policy -// to convert to tag (when execution_policy is not -// an ancestor of tag) - -// forward declaration of tag -struct tag; - -// forward declaration of execution_policy -template struct execution_policy; - -// specialize execution_policy for tag -template<> - struct execution_policy - : thrust::execution_policy -{}; - -// tag's definition comes before the generic definition of execution_policy -struct tag : execution_policy -{ - __host__ __device__ THRUST_CONSTEXPR tag() {} -}; - -// allow conversion to tag when it is not a successor -template - struct execution_policy - : thrust::execution_policy -{ - // allow conversion to tag - inline operator tag () const - { - return tag(); - } -}; - - -THRUST_INLINE_CONSTANT tag seq; - - -} // end sequential -} // end detail -} // end system -} // end thrust - diff --git a/spaces/CVPR/WALT/mmdet/datasets/samplers/__init__.py b/spaces/CVPR/WALT/mmdet/datasets/samplers/__init__.py deleted file mode 100644 index 2596aeb2ccfc85b58624713c04453d34e94a4062..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/datasets/samplers/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .distributed_sampler import DistributedSampler -from .group_sampler import DistributedGroupSampler, GroupSampler - -__all__ = ['DistributedSampler', 'DistributedGroupSampler', 'GroupSampler'] diff --git a/spaces/CVPR/WALT/mmdet/models/detectors/grid_rcnn.py b/spaces/CVPR/WALT/mmdet/models/detectors/grid_rcnn.py deleted file mode 100644 index b6145a1464cd940bd4f98eaa15f6f9ecf6a10a20..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/detectors/grid_rcnn.py +++ /dev/null @@ -1,29 +0,0 @@ -from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class GridRCNN(TwoStageDetector): - """Grid R-CNN. 
- - This detector is the implementation of: - - Grid R-CNN (https://arxiv.org/abs/1811.12030) - - Grid R-CNN Plus: Faster and Better (https://arxiv.org/abs/1906.05688) - """ - - def __init__(self, - backbone, - rpn_head, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None): - super(GridRCNN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained) diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/meta_arch/rcnn.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/meta_arch/rcnn.py deleted file mode 100644 index ce61a45cc73bd57506b90b938a92df51e03100b5..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/modeling/meta_arch/rcnn.py +++ /dev/null @@ -1,373 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -import numpy as np -from typing import Dict, List, Optional, Tuple -from numpy.lib import pad -import torch -from torch import nn -from torch.nn import functional as F -from random import randint - -from detectron2.config import configurable -from detectron2.data.detection_utils import convert_image_to_rgb -from detectron2.structures import ImageList, Instances, Boxes -from detectron2.utils.events import get_event_storage -from detectron2.utils.logger import log_first_n - -from ..backbone import Backbone, build_backbone -from ..postprocessing import detector_postprocess -from ..proposal_generator import build_proposal_generator -from ..roi_heads import build_roi_heads -from .build import META_ARCH_REGISTRY - -__all__ = ["GeneralizedRCNN", "ProposalNetwork"] - -@META_ARCH_REGISTRY.register() -class GeneralizedRCNN(nn.Module): - """ - Generalized R-CNN. Any models that contains the following three components: - 1. Per-image feature extraction (aka backbone) - 2. Region proposal generation - 3. Per-region feature extraction and prediction - """ - - @configurable - def __init__( - self, - *, - backbone: Backbone, - proposal_generator: nn.Module, - roi_heads: nn.Module, - pixel_mean: Tuple[float], - pixel_std: Tuple[float], - input_format: Optional[str] = None, - vis_period: int = 0, - use_clip_c4: False, - use_clip_attpool: False, - ): - """ - Args: - backbone: a backbone module, must follow detectron2's backbone interface - proposal_generator: a module that generates proposals using backbone features - roi_heads: a ROI head that performs per-region computation - pixel_mean, pixel_std: list or tuple with #channels element, representing - the per-channel mean and std to be used to normalize the input image - input_format: describe the meaning of channels of input. Needed by visualization - vis_period: the period to run visualization. Set to 0 to disable. - """ - super().__init__() - self.backbone = backbone - self.proposal_generator = proposal_generator - self.roi_heads = roi_heads - - self.input_format = input_format - self.vis_period = vis_period - if vis_period > 0: - assert input_format is not None, "input_format is required for visualization!" - - self.register_buffer("pixel_mean", torch.tensor(pixel_mean).view(-1, 1, 1), False) - self.register_buffer("pixel_std", torch.tensor(pixel_std).view(-1, 1, 1), False) - assert ( - self.pixel_mean.shape == self.pixel_std.shape - ), f"{self.pixel_mean} and {self.pixel_std} have different shapes!" 
- if np.sum(pixel_mean) < 3.0: # converrt pixel value to range [0.0, 1.0] by dividing 255.0 - assert input_format == 'RGB' - self.div_pixel = True - else: # default setting - self.div_pixel = False - self.use_clip_c4 = use_clip_c4 # if True, use C4 mode where roi_head uses the last resnet layer from backbone - self.use_clip_attpool = use_clip_attpool # if True (C4+text_emb_as_classifier), use att_pool to replace default mean pool - - @classmethod - def from_config(cls, cfg): - backbone = build_backbone(cfg) - return { - "backbone": backbone, - "proposal_generator": build_proposal_generator(cfg, backbone.output_shape()), - "roi_heads": build_roi_heads(cfg, backbone.output_shape()), - "input_format": cfg.INPUT.FORMAT, - "vis_period": cfg.VIS_PERIOD, - "pixel_mean": cfg.MODEL.PIXEL_MEAN, - "pixel_std": cfg.MODEL.PIXEL_STD, - "use_clip_c4": cfg.MODEL.BACKBONE.NAME == "build_clip_resnet_backbone", - "use_clip_attpool": cfg.MODEL.ROI_HEADS.NAME == 'CLIPRes5ROIHeads' and cfg.MODEL.CLIP.USE_TEXT_EMB_CLASSIFIER, - } - - @property - def device(self): - return self.pixel_mean.device - - def visualize_training(self, batched_inputs, proposals): - """ - A function used to visualize images and proposals. It shows ground truth - bounding boxes on the original image and up to 20 top-scoring predicted - object proposals on the original image. Users can implement different - visualization functions for different models. - - Args: - batched_inputs (list): a list that contains input to the model. - proposals (list): a list that contains predicted proposals. Both - batched_inputs and proposals should have the same length. - """ - from detectron2.utils.visualizer import Visualizer - - storage = get_event_storage() - max_vis_prop = 20 - - for input, prop in zip(batched_inputs, proposals): - img = input["image"] - img = convert_image_to_rgb(img.permute(1, 2, 0), self.input_format) - v_gt = Visualizer(img, None) - v_gt = v_gt.overlay_instances(boxes=input["instances"].gt_boxes) - anno_img = v_gt.get_image() - box_size = min(len(prop.proposal_boxes), max_vis_prop) - v_pred = Visualizer(img, None) - v_pred = v_pred.overlay_instances( - boxes=prop.proposal_boxes[0:box_size].tensor.cpu().numpy() - ) - prop_img = v_pred.get_image() - vis_img = np.concatenate((anno_img, prop_img), axis=1) - vis_img = vis_img.transpose(2, 0, 1) - vis_name = "Left: GT bounding boxes; Right: Predicted proposals" - storage.put_image(vis_name, vis_img) - break # only visualize one image in a batch - - def forward(self, batched_inputs: List[Dict[str, torch.Tensor]]): - """ - Args: - batched_inputs: a list, batched outputs of :class:`DatasetMapper` . - Each item in the list contains the inputs for one image. - For now, each item in the list is a dict that contains: - - * image: Tensor, image in (C, H, W) format. - * instances (optional): groundtruth :class:`Instances` - * proposals (optional): :class:`Instances`, precomputed proposals. - - Other information that's included in the original dicts, such as: - - * "height", "width" (int): the output resolution of the model, used in inference. - See :meth:`postprocess` for details. - - Returns: - list[dict]: - Each dict is the output for one input image. - The dict contains one key "instances" whose value is a :class:`Instances`. 
- The :class:`Instances` object has the following keys: - "pred_boxes", "pred_classes", "scores", "pred_masks", "pred_keypoints" - """ - if not self.training: - return self.inference(batched_inputs) - - images = self.preprocess_image(batched_inputs) - if "instances" in batched_inputs[0]: - gt_instances = [x["instances"].to(self.device) for x in batched_inputs] - else: - gt_instances = None - # eg: {'p2': torch.Size([b, c, 200, 304]), 'p3': torch.Size([b, c, 100, 152]), 'p4': torch.Size([b, c, 50, 76]), 'p5': torch.Size([b, c, 25, 38]), 'p6': torch.Size([b, c, 13, 19])} - features = self.backbone(images.tensor) - - if self.proposal_generator is not None: - proposals, proposal_losses = self.proposal_generator(images, features, gt_instances) - else: - assert "proposals" in batched_inputs[0] - proposals = [x["proposals"].to(self.device) for x in batched_inputs] - proposal_losses = {} - - if self.use_clip_c4: # use C4 + resnet weights from CLIP - if self.use_clip_attpool: # use att_pool from CLIP to match dimension - _, detector_losses = self.roi_heads(images, features, proposals, gt_instances, res5=self.backbone.layer4, attnpool=self.backbone.attnpool) - else: # use default mean pool - _, detector_losses = self.roi_heads(images, features, proposals, gt_instances, res5=self.backbone.layer4) - else: # default setting - _, detector_losses = self.roi_heads(images, features, proposals, gt_instances) - if self.vis_period > 0: - storage = get_event_storage() - if storage.iter % self.vis_period == 0: - self.visualize_training(batched_inputs, proposals) - - losses = {} - losses.update(detector_losses) - losses.update(proposal_losses) - return losses - - def inference( - self, - batched_inputs: List[Dict[str, torch.Tensor]], - detected_instances: Optional[List[Instances]] = None, - do_postprocess: bool = True, - ): - """ - Run inference on the given inputs. - - Args: - batched_inputs (list[dict]): same as in :meth:`forward` - detected_instances (None or list[Instances]): if not None, it - contains an `Instances` object per image. The `Instances` - object contains "pred_boxes" and "pred_classes" which are - known boxes in the image. - The inference will then skip the detection of bounding boxes, - and only predict other per-ROI outputs. - do_postprocess (bool): whether to apply post-processing on the outputs. - - Returns: - When do_postprocess=True, same as in :meth:`forward`. - Otherwise, a list[Instances] containing raw network outputs. 
- """ - assert not self.training - - images = self.preprocess_image(batched_inputs) - features = self.backbone(images.tensor) - - if detected_instances is None: - if self.proposal_generator is not None: - proposals, _ = self.proposal_generator(images, features, None) - else: - assert "proposals" in batched_inputs[0] - proposals = [x["proposals"].to(self.device) for x in batched_inputs] - - if self.use_clip_c4: # use C4 + resnet weights from CLIP - if self.use_clip_attpool: # use att_pool from CLIP to match dimension - results, _ = self.roi_heads(images, features, proposals, None, res5=self.backbone.layer4, attnpool=self.backbone.attnpool) - else: # use default mean pool - results, _ = self.roi_heads(images, features, proposals, None, res5=self.backbone.layer4) - else: # default setting - results, _ = self.roi_heads(images, features, proposals, None) - else: - detected_instances = [x.to(self.device) for x in detected_instances] - - if self.use_clip_c4: # use C4 + resnet weights from CLIP - if self.use_clip_attpool: # use att_pool from CLIP to match dimension - results = self.roi_heads.forward_with_given_boxes(features, detected_instances, res5=self.backbone.layer4, attnpool=self.backbone.attnpool) - else: # use default mean pool - results = self.roi_heads.forward_with_given_boxes(features, detected_instances, res5=self.backbone.layer4) - else: # default setting - results = self.roi_heads.forward_with_given_boxes(features, detected_instances) - - #visualize_proposals(batched_inputs, proposals, self.input_format) - if do_postprocess: - assert not torch.jit.is_scripting(), "Scripting is not supported for postprocess." - return GeneralizedRCNN._postprocess(results, batched_inputs, images.image_sizes) - else: - return results - - def preprocess_image(self, batched_inputs: List[Dict[str, torch.Tensor]]): - """ - Normalize, pad and batch the input images. - """ - images = [x["image"].to(self.device) for x in batched_inputs] - if self.div_pixel: - images = [((x / 255.0) - self.pixel_mean) / self.pixel_std for x in images] - else: - images = [(x - self.pixel_mean) / self.pixel_std for x in images] - images = ImageList.from_tensors(images, self.backbone.size_divisibility) - return images - - @staticmethod - def _postprocess(instances, batched_inputs: List[Dict[str, torch.Tensor]], image_sizes): - """ - Rescale the output instances to the target size. - """ - # note: private function; subject to changes - processed_results = [] - for results_per_image, input_per_image, image_size in zip( - instances, batched_inputs, image_sizes - ): - height = input_per_image.get("height", image_size[0]) - width = input_per_image.get("width", image_size[1]) - r = detector_postprocess(results_per_image, height, width) - processed_results.append({"instances": r}) - return processed_results - - -@META_ARCH_REGISTRY.register() -class ProposalNetwork(nn.Module): - """ - A meta architecture that only predicts object proposals. 
- """ - - @configurable - def __init__( - self, - *, - backbone: Backbone, - proposal_generator: nn.Module, - pixel_mean: Tuple[float], - pixel_std: Tuple[float], - input_format: Optional[str] = None, - ): - """ - Args: - backbone: a backbone module, must follow detectron2's backbone interface - proposal_generator: a module that generates proposals using backbone features - pixel_mean, pixel_std: list or tuple with #channels element, representing - the per-channel mean and std to be used to normalize the input image - """ - super().__init__() - self.backbone = backbone - self.proposal_generator = proposal_generator - self.register_buffer("pixel_mean", torch.tensor(pixel_mean).view(-1, 1, 1), False) - self.register_buffer("pixel_std", torch.tensor(pixel_std).view(-1, 1, 1), False) - if np.sum(pixel_mean) < 3.0: # converrt pixel value to range [0.0, 1.0] by dividing 255.0 - assert input_format == 'RGB' - self.div_pixel = True - else: # default setting - self.div_pixel = False - - @classmethod - def from_config(cls, cfg): - backbone = build_backbone(cfg) - return { - "backbone": backbone, - "proposal_generator": build_proposal_generator(cfg, backbone.output_shape()), - "input_format": cfg.INPUT.FORMAT, - "pixel_mean": cfg.MODEL.PIXEL_MEAN, - "pixel_std": cfg.MODEL.PIXEL_STD, - } - - @property - def device(self): - return self.pixel_mean.device - - def forward(self, batched_inputs): - """ - Args: - Same as in :class:`GeneralizedRCNN.forward` - - Returns: - list[dict]: - Each dict is the output for one input image. - The dict contains one key "proposals" whose value is a - :class:`Instances` with keys "proposal_boxes" and "objectness_logits". - """ - images = [x["image"].to(self.device) for x in batched_inputs] - if self.div_pixel: - images = [((x / 255.0) - self.pixel_mean) / self.pixel_std for x in images] - else: - images = [(x - self.pixel_mean) / self.pixel_std for x in images] - images = ImageList.from_tensors(images, self.backbone.size_divisibility) - features = self.backbone(images.tensor) - - if "instances" in batched_inputs[0]: - gt_instances = [x["instances"].to(self.device) for x in batched_inputs] - elif "targets" in batched_inputs[0]: - log_first_n( - logging.WARN, "'targets' in the model inputs is now renamed to 'instances'!", n=10 - ) - gt_instances = [x["targets"].to(self.device) for x in batched_inputs] - else: - gt_instances = None - proposals, proposal_losses = self.proposal_generator(images, features, gt_instances) - # In training, the proposals are not useful at all but we generate them anyway. - # This makes RPN-only models about 5% slower. 
- if self.training: - return proposal_losses - - processed_results = [] - for results_per_image, input_per_image, image_size in zip( - proposals, batched_inputs, images.image_sizes - ): - height = input_per_image.get("height", image_size[0]) - width = input_per_image.get("width", image_size[1]) - r = detector_postprocess(results_per_image, height, width) - processed_results.append({"proposals": r}) - return processed_results diff --git a/spaces/CarlDennis/HYTTS/attentions.py b/spaces/CarlDennis/HYTTS/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/CarlDennis/HYTTS/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - 
self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." 
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/improve_code.py b/spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/improve_code.py deleted file mode 100644 index e3440d8b7c6ee8cb62d73df48623ab757c973c59..0000000000000000000000000000000000000000 --- a/spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/improve_code.py +++ /dev/null @@ -1,29 +0,0 @@ -from __future__ import annotations - -import json - -from autogpt.llm_utils import call_ai_function - - -def improve_code(suggestions: list[str], code: str) -> str: - """ - A function that takes in code and suggestions and returns a response from create - chat completion api call. - - Parameters: - suggestions (List): A list of suggestions around what needs to be improved. - code (str): Code to be improved. - Returns: - A result string from create chat completion. Improved code in response. - """ - - function_string = ( - "def generate_improved_code(suggestions: List[str], code: str) -> str:" - ) - args = [json.dumps(suggestions), code] - description_string = ( - "Improves the provided code based on the suggestions" - " provided, making no other changes." 
- ) - - return call_ai_function(function_string, args, description_string) diff --git a/spaces/ChrisCaviar/ControlNet-v1-1/app_canny.py b/spaces/ChrisCaviar/ControlNet-v1-1/app_canny.py deleted file mode 100644 index a94b49d2124b9983efc057f1103484bd6f6d374c..0000000000000000000000000000000000000000 --- a/spaces/ChrisCaviar/ControlNet-v1-1/app_canny.py +++ /dev/null @@ -1,106 +0,0 @@ -#!/usr/bin/env python - -import gradio as gr - -from utils import randomize_seed_fn - - -def create_demo(process, max_images=12, default_num_images=3): - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - image = gr.Image() - prompt = gr.Textbox(label='Prompt') - run_button = gr.Button('Run') - with gr.Accordion('Advanced options', open=False): - num_samples = gr.Slider(label='Number of images', - minimum=1, - maximum=max_images, - value=default_num_images, - step=1) - image_resolution = gr.Slider(label='Image resolution', - minimum=256, - maximum=512, - value=512, - step=256) - canny_low_threshold = gr.Slider( - label='Canny low threshold', - minimum=1, - maximum=255, - value=100, - step=1) - canny_high_threshold = gr.Slider( - label='Canny high threshold', - minimum=1, - maximum=255, - value=200, - step=1) - num_steps = gr.Slider(label='Number of steps', - minimum=1, - maximum=100, - value=20, - step=1) - guidance_scale = gr.Slider(label='Guidance scale', - minimum=0.1, - maximum=30.0, - value=9.0, - step=0.1) - seed = gr.Slider(label='Seed', - minimum=0, - maximum=1000000, - step=1, - value=0, - randomize=True) - randomize_seed = gr.Checkbox(label='Randomize seed', - value=True) - a_prompt = gr.Textbox( - label='Additional prompt', - value='best quality, extremely detailed') - n_prompt = gr.Textbox( - label='Negative prompt', - value= - 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality' - ) - with gr.Column(): - result = gr.Gallery(label='Output', show_label=False).style( - columns=2, object_fit='scale-down') - inputs = [ - image, - prompt, - a_prompt, - n_prompt, - num_samples, - image_resolution, - num_steps, - guidance_scale, - seed, - canny_low_threshold, - canny_high_threshold, - ] - prompt.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - ).then( - fn=process, - inputs=inputs, - outputs=result, - ) - run_button.click( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - ).then( - fn=process, - inputs=inputs, - outputs=result, - api_name='canny', - ) - return demo - - -if __name__ == '__main__': - from model import Model - model = Model(task_name='Canny') - demo = create_demo(model.process_canny) - demo.queue().launch() diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/quit.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/quit.js deleted file mode 100644 index 6c8d7f5b8101a4c5c813babfa3bf6055337d2b49..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/quit.js +++ /dev/null @@ -1,36 +0,0 @@ -import cfg from '../../lib/config/config.js' - -export class quit extends plugin { - constructor () { - super({ - name: 'notice', - dsc: '自动退群', - event: 'notice.group.increase' - }) - } - - async accept () { - if (this.e.user_id != this.e.self_id) return - - let other = cfg.other - if (other.autoQuit <= 0) return - - /** 判断主人,主人邀请不退群 */ - let gl = await this.e.group.getMemberMap() - for (let qq of cfg.masterQQ) { - if (gl.has(Number(qq) || String(qq))) { - logger.mark(`[主人拉群] ${this.e.group_id}`) - return - } - } - - /** 自动退群 */ 
- if (Array.from(gl).length <= other.autoQuit && !this.e.group.is_owner) { - await this.e.reply('禁止拉群,已自动退出') - logger.mark(`[自动退群] ${this.e.group_id}`) - setTimeout(() => { - this.e.group.quit() - }, 2000) - } - } -} diff --git a/spaces/ClueAI/ChatYuan-large-v2/app.py b/spaces/ClueAI/ChatYuan-large-v2/app.py deleted file mode 100644 index b4ed170dae69a219f7a8f80457521793a81fb6b5..0000000000000000000000000000000000000000 --- a/spaces/ClueAI/ChatYuan-large-v2/app.py +++ /dev/null @@ -1,310 +0,0 @@ -import os -import gradio as gr -import clueai -import torch -from transformers import T5Tokenizer, T5ForConditionalGeneration - -tokenizer = T5Tokenizer.from_pretrained("ClueAI/ChatYuan-large-v2") -model = T5ForConditionalGeneration.from_pretrained("ClueAI/ChatYuan-large-v2") -# 使用 -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') -model.to(device) -model.half() - -base_info = "" - - -def preprocess(text): - text = f"{base_info}{text}" - text = text.replace("\n", "\\n").replace("\t", "\\t") - return text - - -def postprocess(text): - return text.replace("\\n", "\n").replace("\\t", "\t").replace( - '%20', ' ') #.replace(" ", " ") - - -generate_config = { - 'do_sample': True, - 'top_p': 0.9, - 'top_k': 50, - 'temperature': 0.7, - 'num_beams': 1, - 'max_length': 1024, - 'min_length': 3, - 'no_repeat_ngram_size': 5, - 'length_penalty': 0.6, - 'return_dict_in_generate': True, - 'output_scores': True -} - - -def answer( - text, - top_p, - temperature, - sample=True, -): - '''sample:是否抽样。生成任务,可以设置为True; - top_p:0-1之间,生成的内容越多样''' - text = preprocess(text) - encoding = tokenizer(text=[text], - truncation=True, - padding=True, - max_length=1024, - return_tensors="pt").to(device) - if not sample: - out = model.generate(**encoding, - return_dict_in_generate=True, - output_scores=False, - max_new_tokens=1024, - num_beams=1, - length_penalty=0.6) - else: - out = model.generate(**encoding, - return_dict_in_generate=True, - output_scores=False, - max_new_tokens=1024, - do_sample=True, - top_p=top_p, - temperature=temperature, - no_repeat_ngram_size=12) - #out=model.generate(**encoding, **generate_config) - out_text = tokenizer.batch_decode(out["sequences"], - skip_special_tokens=True) - return postprocess(out_text[0]) - - -def clear_session(): - return '', None - - -def chatyuan_bot(input, history, top_p, temperature, num): - history = history or [] - if len(history) > num: - history = history[-num:] - - context = "\n".join([ - f"用户:{input_text}\n小元:{answer_text}" - for input_text, answer_text in history - ]) - #print(context) - - input_text = context + "\n用户:" + input + "\n小元:" - input_text = input_text.strip() - output_text = answer(input_text, top_p, temperature) - print("open_model".center(20, "=")) - print(f"{input_text}\n{output_text}") - #print("="*20) - history.append((input, output_text)) - #print(history) - return '', history, history - - -def chatyuan_bot_regenerate(input, history, top_p, temperature, num): - - history = history or [] - - if history: - input = history[-1][0] - history = history[:-1] - - if len(history) > num: - history = history[-num:] - - context = "\n".join([ - f"用户:{input_text}\n小元:{answer_text}" - for input_text, answer_text in history - ]) - #print(context) - - input_text = context + "\n用户:" + input + "\n小元:" - input_text = input_text.strip() - output_text = answer(input_text, top_p, temperature) - print("open_model".center(20, "=")) - print(f"{input_text}\n{output_text}") - history.append((input, output_text)) - #print(history) - return '', history, history 
- - -block = gr.Blocks() - -with block as demo: - gr.Markdown("""

    元语智能——ChatYuan

    - 回答来自ChatYuan, 是模型生成的结果, 请谨慎辨别和参考, 不代表任何人观点 | Answer generated by ChatYuan model - 注意:gradio对markdown代码格式展示有限 - """) - with gr.Row(): - with gr.Column(scale=3): - chatbot = gr.Chatbot(label='ChatYuan').style(height=400) - - with gr.Column(scale=1): - - num = gr.Slider(minimum=4, - maximum=10, - label="最大的对话轮数", - value=5, - step=1) - top_p = gr.Slider(minimum=0, - maximum=1, - label="top_p", - value=1, - step=0.1) - temperature = gr.Slider(minimum=0, - maximum=1, - label="temperature", - value=0.7, - step=0.1) - clear_history = gr.Button("👋 清除历史对话 | Clear History") - send = gr.Button("🚀 发送 | Send") - regenerate = gr.Button("🚀 重新生成本次结果 | regenerate") - message = gr.Textbox() - state = gr.State() - message.submit(chatyuan_bot, - inputs=[message, state, top_p, temperature, num], - outputs=[message, chatbot, state]) - regenerate.click(chatyuan_bot_regenerate, - inputs=[message, state, top_p, temperature, num], - outputs=[message, chatbot, state]) - send.click(chatyuan_bot, - inputs=[message, state, top_p, temperature, num], - outputs=[message, chatbot, state]) - - clear_history.click(fn=clear_session, - inputs=[], - outputs=[chatbot, state], - queue=False) - - -def ChatYuan(api_key, text_prompt, top_p): - generate_config = { - "do_sample": True, - "top_p": top_p, - "max_length": 128, - "min_length": 10, - "length_penalty": 1.0, - "num_beams": 1 - } - cl = clueai.Client(api_key, check_api_key=True) - # generate a prediction for a prompt - # 需要返回得分的话,指定return_likelihoods="GENERATION" - prediction = cl.generate(model_name='ChatYuan-large', prompt=text_prompt) - # print the predicted text - #print('prediction: {}'.format(prediction.generations[0].text)) - response = prediction.generations[0].text - if response == '': - response = "很抱歉,我无法回答这个问题" - - return response - - -def chatyuan_bot_api(api_key, input, history, top_p, num): - history = history or [] - - if len(history) > num: - history = history[-num:] - - context = "\n".join([ - f"用户:{input_text}\n小元:{answer_text}" - for input_text, answer_text in history - ]) - - input_text = context + "\n用户:" + input + "\n小元:" - input_text = input_text.strip() - output_text = ChatYuan(api_key, input_text, top_p) - print("api".center(20, "=")) - print(f"api_key:{api_key}\n{input_text}\n{output_text}") - - history.append((input, output_text)) - - return '', history, history - - -block = gr.Blocks() - -with block as demo_1: - gr.Markdown("""

    元语智能——ChatYuan

    - 回答来自ChatYuan, 以上是模型生成的结果, 请谨慎辨别和参考, 不代表任何人观点 | Answer generated by ChatYuan model - 注意:gradio对markdown代码格式展示有限 - 在使用此功能前,你需要有个API key. API key 可以通过这个平台获取 - """) - with gr.Row(): - with gr.Column(scale=3): - chatbot = gr.Chatbot(label='ChatYuan').style(height=400) - - with gr.Column(scale=1): - api_key = gr.inputs.Textbox(label="请输入你的api-key(必填)", - default="", - type='password') - num = gr.Slider(minimum=4, - maximum=10, - label="最大的对话轮数", - value=5, - step=1) - top_p = gr.Slider(minimum=0, - maximum=1, - label="top_p", - value=1, - step=0.1) - clear_history = gr.Button("👋 清除历史对话 | Clear History") - send = gr.Button("🚀 发送 | Send") - - message = gr.Textbox() - state = gr.State() - message.submit(chatyuan_bot_api, - inputs=[api_key, message, state, top_p, num], - outputs=[message, chatbot, state]) - - send.click(chatyuan_bot_api, - inputs=[api_key, message, state, top_p, num], - outputs=[message, chatbot, state]) - clear_history.click(fn=clear_session, - inputs=[], - outputs=[chatbot, state], - queue=False) - -block = gr.Blocks() -with block as introduction: - gr.Markdown("""

    元语智能——ChatYuan

    - -😉ChatYuan: 元语功能型对话大模型 | General Model for Dialogue with ChatYuan -
    -👏ChatYuan-large-v2是一个支持中英双语的功能型对话语言大模型,是继ChatYuan系列中ChatYuan-large-v1开源后的又一个开源模型。ChatYuan-large-v2使用了和 v1版本相同的技术方案,在微调数据、人类反馈强化学习、思维链等方面进行了优化。 -
    -ChatYuan large v2 is an open-source large language model for dialogue that supports both Chinese and English, in a ChatGPT style. -
    -ChatYuan-large-v2是ChatYuan系列中以轻量化实现高质量效果的模型之一,用户可以在消费级显卡、 PC甚至手机上进行推理(INT4 最低只需 400M )。 -
    -在Chatyuan-large-v1的原有功能的基础上,我们给模型进行了如下优化: -- 新增了中英双语对话能力。 -- 新增了拒答能力。对于一些危险、有害的问题,学会了拒答处理。 -- 新增了代码生成功能。对于基础代码生成进行了一定程度优化。 -- 增强了基础能力。原有上下文问答、创意性写作能力明显提升。 -- 新增了表格生成功能。使生成的表格内容和格式更适配。 -- 增强了基础数学运算能力。 -- 最大长度token数扩展到4096。 -- 增强了模拟情景能力。
    -
    -Based on the original functions of Chatyuan-large-v1, we optimized the model as follows: --Added bilingual conversation in both Chinese and English. --Added the ability to refuse to answer, so the model declines dangerous and harmful questions. --Added code generation, with basic code generation optimized to a certain extent. --Enhanced basic capabilities: the original contextual Q&A and creative writing abilities are significantly improved. --Added table generation, making generated table content and formatting more suitable. --Enhanced basic mathematical computation. --Extended the maximum token length to 4096. --Enhanced the ability to simulate scenarios. -
    -👀PromptCLUE-large在1000亿token中文语料上预训练, 累计学习1.5万亿中文token, 并且在数百种任务上进行Prompt任务式训练. 针对理解类任务, 如分类、情感分析、抽取等, 可以自定义标签体系; 针对多种生成任务, 可以进行采样自由生成.
    -
    -   ModelScope   |   Huggingface   |   官网体验场   |   ChatYuan-API   |   Github项目地址   |   OpenI免费试用   -
    -
    - """) - -gui = gr.TabbedInterface( - interface_list=[introduction, demo, demo_1], - tab_names=["相关介绍 | Introduction", "开源模型 | Online Demo", "API调用"]) -gui.launch(quiet=True, show_api=False, share=False) diff --git a/spaces/Cpp4App/Cpp4App/CDM/detect_compo/lib_ip/Bbox.py b/spaces/Cpp4App/Cpp4App/CDM/detect_compo/lib_ip/Bbox.py deleted file mode 100644 index 5790d8d20751bad1133172b4ffbc0106d8d422c0..0000000000000000000000000000000000000000 --- a/spaces/Cpp4App/Cpp4App/CDM/detect_compo/lib_ip/Bbox.py +++ /dev/null @@ -1,122 +0,0 @@ -import numpy as np -import CDM.detect_compo.lib_ip.ip_draw as draw - - -class Bbox: - def __init__(self, col_min, row_min, col_max, row_max): - self.col_min = col_min - self.row_min = row_min - self.col_max = col_max - self.row_max = row_max - - self.width = col_max - col_min - self.height = row_max - row_min - self.box_area = self.width * self.height - - def put_bbox(self): - return self.col_min, self.row_min, self.col_max, self.row_max - - def bbox_cal_area(self): - self.box_area = self.width * self.height - return self.box_area - - def bbox_relation(self, bbox_b): - """ - :return: -1 : a in b - 0 : a, b are not intersected - 1 : b in a - 2 : a, b are identical or intersected - """ - col_min_a, row_min_a, col_max_a, row_max_a = self.put_bbox() - col_min_b, row_min_b, col_max_b, row_max_b = bbox_b.put_bbox() - - # if a is in b - if col_min_a > col_min_b and row_min_a > row_min_b and col_max_a < col_max_b and row_max_a < row_max_b: - return -1 - # if b is in a - elif col_min_a < col_min_b and row_min_a < row_min_b and col_max_a > col_max_b and row_max_a > row_max_b: - return 1 - # a and b are non-intersect - elif (col_min_a > col_max_b or row_min_a > row_max_b) or (col_min_b > col_max_a or row_min_b > row_max_a): - return 0 - # intersection - else: - return 2 - - def bbox_relation_nms(self, bbox_b, bias=(0, 0)): - ''' - Calculate the relation between two rectangles by nms - :return: -1 : a in b - 0 : a, b are not intersected - 1 : b in a - 2 : a, b are intersected - ''' - col_min_a, row_min_a, col_max_a, row_max_a = self.put_bbox() - col_min_b, row_min_b, col_max_b, row_max_b = bbox_b.put_bbox() - - bias_col, bias_row = bias - # get the intersected area - col_min_s = max(col_min_a - bias_col, col_min_b - bias_col) - row_min_s = max(row_min_a - bias_row, row_min_b - bias_row) - col_max_s = min(col_max_a + bias_col, col_max_b + bias_col) - row_max_s = min(row_max_a + bias_row, row_max_b + bias_row) - w = np.maximum(0, col_max_s - col_min_s) - h = np.maximum(0, row_max_s - row_min_s) - inter = w * h - area_a = (col_max_a - col_min_a) * (row_max_a - row_min_a) - area_b = (col_max_b - col_min_b) * (row_max_b - row_min_b) - iou = inter / (area_a + area_b - inter) - ioa = inter / self.box_area - iob = inter / bbox_b.box_area - - if iou == 0 and ioa == 0 and iob == 0: - return 0 - - # import lib_ip.ip_preprocessing as pre - # org_iou, _ = pre.read_img('uied/data/input/7.jpg', 800) - # print(iou, ioa, iob) - # board = draw.draw_bounding_box(org_iou, [self], color=(255,0,0)) - # draw.draw_bounding_box(board, [bbox_b], color=(0,255,0), show=True) - - # contained by b - if ioa >= 1: - return -1 - # contains b - if iob >= 1: - return 1 - # not intersected with each other - # intersected - if iou >= 0.02 or iob > 0.2 or ioa > 0.2: - return 2 - # if iou == 0: - # print('ioa:%.5f; iob:%.5f; iou:%.5f' % (ioa, iob, iou)) - return 0 - - def bbox_cvt_relative_position(self, col_min_base, row_min_base): - ''' - Convert to relative position based on base coordinator - ''' - 
self.col_min += col_min_base - self.col_max += col_min_base - self.row_min += row_min_base - self.row_max += row_min_base - - def bbox_merge(self, bbox_b): - ''' - Merge two intersected bboxes - ''' - col_min_a, row_min_a, col_max_a, row_max_a = self.put_bbox() - col_min_b, row_min_b, col_max_b, row_max_b = bbox_b.put_bbox() - col_min = min(col_min_a, col_min_b) - col_max = max(col_max_a, col_max_b) - row_min = min(row_min_a, row_min_b) - row_max = max(row_max_a, row_max_b) - new_bbox = Bbox(col_min, row_min, col_max, row_max) - return new_bbox - - def bbox_padding(self, image_shape, pad): - row, col = image_shape[:2] - self.col_min = max(self.col_min - pad, 0) - self.col_max = min(self.col_max + pad, col) - self.row_min = max(self.row_min - pad, 0) - self.row_max = min(self.row_max + pad, row) \ No newline at end of file diff --git a/spaces/DHEIVER/Classificacao.de.Imagens.de.Cardiomiopatia/app.py b/spaces/DHEIVER/Classificacao.de.Imagens.de.Cardiomiopatia/app.py deleted file mode 100644 index c7a2092cad40bfe568b5749e799a3e545b1a4b01..0000000000000000000000000000000000000000 --- a/spaces/DHEIVER/Classificacao.de.Imagens.de.Cardiomiopatia/app.py +++ /dev/null @@ -1,40 +0,0 @@ -import gradio as gr -import numpy as np -from tensorflow.keras.preprocessing import image -from tensorflow.keras.models import load_model -from PIL import Image as PILImage -import io - -# Carregar o modelo treinado -model = load_model('model_1.0000.h5') - -def predict_and_invert(input_image): - input_image = input_image.resize((224, 224)) - img = image.img_to_array(input_image) / 255.0 - img = np.expand_dims(img, axis=0) - img = img[:, :224, :224, :] - - prediction = model.predict(img) - - if prediction[0][0] > 0.5: - result = "Anomalia cardíaca (Doente)" - else: - result = "Normal (Sem anomalia)" - - img_inverted = 1 - img[0] # Inverter a imagem - - img_inverted_pil = PILImage.fromarray(np.uint8(img_inverted * 255)) - img_inverted_bytes = io.BytesIO() - img_inverted_pil.save(img_inverted_bytes, format='PNG') - - return result, img_inverted_pil - -# Criar uma interface Gradio -iface = gr.Interface( - fn=predict_and_invert, - inputs=gr.inputs.Image(type="pil", label="Carregar uma imagem"), - outputs=["text", "image"] -) - -# Executar a interface Gradio -iface.launch() diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/utils/theme.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/utils/theme.py deleted file mode 100644 index 10dc6fa8a81646ed7e9fa8d6be4e1634ec14e7d8..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/utils/theme.py +++ /dev/null @@ -1,10 +0,0 @@ -"""Utilities for registering and working with themes""" - -from .plugin_registry import PluginRegistry -from typing import Callable - -ThemeType = Callable[..., dict] - - -class ThemeRegistry(PluginRegistry[ThemeType]): - pass diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/click/parser.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/click/parser.py deleted file mode 100644 index 5fa7adfac842bfa5689fd1a41ae4017be1ebff6f..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/click/parser.py +++ /dev/null @@ -1,529 +0,0 @@ -""" -This module started out as largely a copy paste from the stdlib's -optparse module with the features removed that we do not need from -optparse because we implement them in Click on a higher level (for -instance type handling, help formatting and a 
lot more). - -The plan is to remove more and more from here over time. - -The reason this is a different module and not optparse from the stdlib -is that there are differences in 2.x and 3.x about the error messages -generated and optparse in the stdlib uses gettext for no good reason -and might cause us issues. - -Click uses parts of optparse written by Gregory P. Ward and maintained -by the Python Software Foundation. This is limited to code in parser.py. - -Copyright 2001-2006 Gregory P. Ward. All rights reserved. -Copyright 2002-2006 Python Software Foundation. All rights reserved. -""" -# This code uses parts of optparse written by Gregory P. Ward and -# maintained by the Python Software Foundation. -# Copyright 2001-2006 Gregory P. Ward -# Copyright 2002-2006 Python Software Foundation -import typing as t -from collections import deque -from gettext import gettext as _ -from gettext import ngettext - -from .exceptions import BadArgumentUsage -from .exceptions import BadOptionUsage -from .exceptions import NoSuchOption -from .exceptions import UsageError - -if t.TYPE_CHECKING: - import typing_extensions as te - from .core import Argument as CoreArgument - from .core import Context - from .core import Option as CoreOption - from .core import Parameter as CoreParameter - -V = t.TypeVar("V") - -# Sentinel value that indicates an option was passed as a flag without a -# value but is not a flag option. Option.consume_value uses this to -# prompt or use the flag_value. -_flag_needs_value = object() - - -def _unpack_args( - args: t.Sequence[str], nargs_spec: t.Sequence[int] -) -> t.Tuple[t.Sequence[t.Union[str, t.Sequence[t.Optional[str]], None]], t.List[str]]: - """Given an iterable of arguments and an iterable of nargs specifications, - it returns a tuple with all the unpacked arguments at the first index - and all remaining arguments as the second. - - The nargs specification is the number of arguments that should be consumed - or `-1` to indicate that this position should eat up all the remainders. - - Missing items are filled with `None`. - """ - args = deque(args) - nargs_spec = deque(nargs_spec) - rv: t.List[t.Union[str, t.Tuple[t.Optional[str], ...], None]] = [] - spos: t.Optional[int] = None - - def _fetch(c: "te.Deque[V]") -> t.Optional[V]: - try: - if spos is None: - return c.popleft() - else: - return c.pop() - except IndexError: - return None - - while nargs_spec: - nargs = _fetch(nargs_spec) - - if nargs is None: - continue - - if nargs == 1: - rv.append(_fetch(args)) - elif nargs > 1: - x = [_fetch(args) for _ in range(nargs)] - - # If we're reversed, we're pulling in the arguments in reverse, - # so we need to turn them around. - if spos is not None: - x.reverse() - - rv.append(tuple(x)) - elif nargs < 0: - if spos is not None: - raise TypeError("Cannot have two nargs < 0") - - spos = len(rv) - rv.append(None) - - # spos is the position of the wildcard (star). If it's not `None`, - # we fill it with the remainder. 
- if spos is not None: - rv[spos] = tuple(args) - args = [] - rv[spos + 1 :] = reversed(rv[spos + 1 :]) - - return tuple(rv), list(args) - - -def split_opt(opt: str) -> t.Tuple[str, str]: - first = opt[:1] - if first.isalnum(): - return "", opt - if opt[1:2] == first: - return opt[:2], opt[2:] - return first, opt[1:] - - -def normalize_opt(opt: str, ctx: t.Optional["Context"]) -> str: - if ctx is None or ctx.token_normalize_func is None: - return opt - prefix, opt = split_opt(opt) - return f"{prefix}{ctx.token_normalize_func(opt)}" - - -def split_arg_string(string: str) -> t.List[str]: - """Split an argument string as with :func:`shlex.split`, but don't - fail if the string is incomplete. Ignores a missing closing quote or - incomplete escape sequence and uses the partial token as-is. - - .. code-block:: python - - split_arg_string("example 'my file") - ["example", "my file"] - - split_arg_string("example my\\") - ["example", "my"] - - :param string: String to split. - """ - import shlex - - lex = shlex.shlex(string, posix=True) - lex.whitespace_split = True - lex.commenters = "" - out = [] - - try: - for token in lex: - out.append(token) - except ValueError: - # Raised when end-of-string is reached in an invalid state. Use - # the partial token as-is. The quote or escape character is in - # lex.state, not lex.token. - out.append(lex.token) - - return out - - -class Option: - def __init__( - self, - obj: "CoreOption", - opts: t.Sequence[str], - dest: t.Optional[str], - action: t.Optional[str] = None, - nargs: int = 1, - const: t.Optional[t.Any] = None, - ): - self._short_opts = [] - self._long_opts = [] - self.prefixes: t.Set[str] = set() - - for opt in opts: - prefix, value = split_opt(opt) - if not prefix: - raise ValueError(f"Invalid start character for option ({opt})") - self.prefixes.add(prefix[0]) - if len(prefix) == 1 and len(value) == 1: - self._short_opts.append(opt) - else: - self._long_opts.append(opt) - self.prefixes.add(prefix) - - if action is None: - action = "store" - - self.dest = dest - self.action = action - self.nargs = nargs - self.const = const - self.obj = obj - - @property - def takes_value(self) -> bool: - return self.action in ("store", "append") - - def process(self, value: t.Any, state: "ParsingState") -> None: - if self.action == "store": - state.opts[self.dest] = value # type: ignore - elif self.action == "store_const": - state.opts[self.dest] = self.const # type: ignore - elif self.action == "append": - state.opts.setdefault(self.dest, []).append(value) # type: ignore - elif self.action == "append_const": - state.opts.setdefault(self.dest, []).append(self.const) # type: ignore - elif self.action == "count": - state.opts[self.dest] = state.opts.get(self.dest, 0) + 1 # type: ignore - else: - raise ValueError(f"unknown action '{self.action}'") - state.order.append(self.obj) - - -class Argument: - def __init__(self, obj: "CoreArgument", dest: t.Optional[str], nargs: int = 1): - self.dest = dest - self.nargs = nargs - self.obj = obj - - def process( - self, - value: t.Union[t.Optional[str], t.Sequence[t.Optional[str]]], - state: "ParsingState", - ) -> None: - if self.nargs > 1: - assert value is not None - holes = sum(1 for x in value if x is None) - if holes == len(value): - value = None - elif holes != 0: - raise BadArgumentUsage( - _("Argument {name!r} takes {nargs} values.").format( - name=self.dest, nargs=self.nargs - ) - ) - - if self.nargs == -1 and self.obj.envvar is not None and value == (): - # Replace empty tuple with None so that a value from the - # 
environment may be tried. - value = None - - state.opts[self.dest] = value # type: ignore - state.order.append(self.obj) - - -class ParsingState: - def __init__(self, rargs: t.List[str]) -> None: - self.opts: t.Dict[str, t.Any] = {} - self.largs: t.List[str] = [] - self.rargs = rargs - self.order: t.List["CoreParameter"] = [] - - -class OptionParser: - """The option parser is an internal class that is ultimately used to - parse options and arguments. It's modelled after optparse and brings - a similar but vastly simplified API. It should generally not be used - directly as the high level Click classes wrap it for you. - - It's not nearly as extensible as optparse or argparse as it does not - implement features that are implemented on a higher level (such as - types or defaults). - - :param ctx: optionally the :class:`~click.Context` where this parser - should go with. - """ - - def __init__(self, ctx: t.Optional["Context"] = None) -> None: - #: The :class:`~click.Context` for this parser. This might be - #: `None` for some advanced use cases. - self.ctx = ctx - #: This controls how the parser deals with interspersed arguments. - #: If this is set to `False`, the parser will stop on the first - #: non-option. Click uses this to implement nested subcommands - #: safely. - self.allow_interspersed_args: bool = True - #: This tells the parser how to deal with unknown options. By - #: default it will error out (which is sensible), but there is a - #: second mode where it will ignore it and continue processing - #: after shifting all the unknown options into the resulting args. - self.ignore_unknown_options: bool = False - - if ctx is not None: - self.allow_interspersed_args = ctx.allow_interspersed_args - self.ignore_unknown_options = ctx.ignore_unknown_options - - self._short_opt: t.Dict[str, Option] = {} - self._long_opt: t.Dict[str, Option] = {} - self._opt_prefixes = {"-", "--"} - self._args: t.List[Argument] = [] - - def add_option( - self, - obj: "CoreOption", - opts: t.Sequence[str], - dest: t.Optional[str], - action: t.Optional[str] = None, - nargs: int = 1, - const: t.Optional[t.Any] = None, - ) -> None: - """Adds a new option named `dest` to the parser. The destination - is not inferred (unlike with optparse) and needs to be explicitly - provided. Action can be any of ``store``, ``store_const``, - ``append``, ``append_const`` or ``count``. - - The `obj` can be used to identify the option in the order list - that is returned from the parser. - """ - opts = [normalize_opt(opt, self.ctx) for opt in opts] - option = Option(obj, opts, dest, action=action, nargs=nargs, const=const) - self._opt_prefixes.update(option.prefixes) - for opt in option._short_opts: - self._short_opt[opt] = option - for opt in option._long_opts: - self._long_opt[opt] = option - - def add_argument( - self, obj: "CoreArgument", dest: t.Optional[str], nargs: int = 1 - ) -> None: - """Adds a positional argument named `dest` to the parser. - - The `obj` can be used to identify the option in the order list - that is returned from the parser. - """ - self._args.append(Argument(obj, dest=dest, nargs=nargs)) - - def parse_args( - self, args: t.List[str] - ) -> t.Tuple[t.Dict[str, t.Any], t.List[str], t.List["CoreParameter"]]: - """Parses positional arguments and returns ``(values, args, order)`` - for the parsed options and arguments as well as the leftover - arguments if there are any. The order is a list of objects as they - appear on the command line. 
If arguments appear multiple times they - will be memorized multiple times as well. - """ - state = ParsingState(args) - try: - self._process_args_for_options(state) - self._process_args_for_args(state) - except UsageError: - if self.ctx is None or not self.ctx.resilient_parsing: - raise - return state.opts, state.largs, state.order - - def _process_args_for_args(self, state: ParsingState) -> None: - pargs, args = _unpack_args( - state.largs + state.rargs, [x.nargs for x in self._args] - ) - - for idx, arg in enumerate(self._args): - arg.process(pargs[idx], state) - - state.largs = args - state.rargs = [] - - def _process_args_for_options(self, state: ParsingState) -> None: - while state.rargs: - arg = state.rargs.pop(0) - arglen = len(arg) - # Double dashes always handled explicitly regardless of what - # prefixes are valid. - if arg == "--": - return - elif arg[:1] in self._opt_prefixes and arglen > 1: - self._process_opts(arg, state) - elif self.allow_interspersed_args: - state.largs.append(arg) - else: - state.rargs.insert(0, arg) - return - - # Say this is the original argument list: - # [arg0, arg1, ..., arg(i-1), arg(i), arg(i+1), ..., arg(N-1)] - # ^ - # (we are about to process arg(i)). - # - # Then rargs is [arg(i), ..., arg(N-1)] and largs is a *subset* of - # [arg0, ..., arg(i-1)] (any options and their arguments will have - # been removed from largs). - # - # The while loop will usually consume 1 or more arguments per pass. - # If it consumes 1 (eg. arg is an option that takes no arguments), - # then after _process_arg() is done the situation is: - # - # largs = subset of [arg0, ..., arg(i)] - # rargs = [arg(i+1), ..., arg(N-1)] - # - # If allow_interspersed_args is false, largs will always be - # *empty* -- still a subset of [arg0, ..., arg(i-1)], but - # not a very interesting subset! - - def _match_long_opt( - self, opt: str, explicit_value: t.Optional[str], state: ParsingState - ) -> None: - if opt not in self._long_opt: - from difflib import get_close_matches - - possibilities = get_close_matches(opt, self._long_opt) - raise NoSuchOption(opt, possibilities=possibilities, ctx=self.ctx) - - option = self._long_opt[opt] - if option.takes_value: - # At this point it's safe to modify rargs by injecting the - # explicit value, because no exception is raised in this - # branch. This means that the inserted value will be fully - # consumed. - if explicit_value is not None: - state.rargs.insert(0, explicit_value) - - value = self._get_value_from_state(opt, option, state) - - elif explicit_value is not None: - raise BadOptionUsage( - opt, _("Option {name!r} does not take a value.").format(name=opt) - ) - - else: - value = None - - option.process(value, state) - - def _match_short_opt(self, arg: str, state: ParsingState) -> None: - stop = False - i = 1 - prefix = arg[0] - unknown_options = [] - - for ch in arg[1:]: - opt = normalize_opt(f"{prefix}{ch}", self.ctx) - option = self._short_opt.get(opt) - i += 1 - - if not option: - if self.ignore_unknown_options: - unknown_options.append(ch) - continue - raise NoSuchOption(opt, ctx=self.ctx) - if option.takes_value: - # Any characters left in arg? Pretend they're the - # next arg, and stop consuming characters of arg. 
- if i < len(arg): - state.rargs.insert(0, arg[i:]) - stop = True - - value = self._get_value_from_state(opt, option, state) - - else: - value = None - - option.process(value, state) - - if stop: - break - - # If we got any unknown options we recombine the string of the - # remaining options and re-attach the prefix, then report that - # to the state as new larg. This way there is basic combinatorics - # that can be achieved while still ignoring unknown arguments. - if self.ignore_unknown_options and unknown_options: - state.largs.append(f"{prefix}{''.join(unknown_options)}") - - def _get_value_from_state( - self, option_name: str, option: Option, state: ParsingState - ) -> t.Any: - nargs = option.nargs - - if len(state.rargs) < nargs: - if option.obj._flag_needs_value: - # Option allows omitting the value. - value = _flag_needs_value - else: - raise BadOptionUsage( - option_name, - ngettext( - "Option {name!r} requires an argument.", - "Option {name!r} requires {nargs} arguments.", - nargs, - ).format(name=option_name, nargs=nargs), - ) - elif nargs == 1: - next_rarg = state.rargs[0] - - if ( - option.obj._flag_needs_value - and isinstance(next_rarg, str) - and next_rarg[:1] in self._opt_prefixes - and len(next_rarg) > 1 - ): - # The next arg looks like the start of an option, don't - # use it as the value if omitting the value is allowed. - value = _flag_needs_value - else: - value = state.rargs.pop(0) - else: - value = tuple(state.rargs[:nargs]) - del state.rargs[:nargs] - - return value - - def _process_opts(self, arg: str, state: ParsingState) -> None: - explicit_value = None - # Long option handling happens in two parts. The first part is - # supporting explicitly attached values. In any case, we will try - # to long match the option first. - if "=" in arg: - long_opt, explicit_value = arg.split("=", 1) - else: - long_opt = arg - norm_long_opt = normalize_opt(long_opt, self.ctx) - - # At this point we will match the (assumed) long option through - # the long option matching code. Note that this allows options - # like "-foo" to be matched as long options. - try: - self._match_long_opt(norm_long_opt, explicit_value, state) - except NoSuchOption: - # At this point the long option matching failed, and we need - # to try with short options. However there is a special rule - # which says, that if we have a two character options prefix - # (applies to "--foo" for instance), we do not dispatch to the - # short option code and will instead raise the no option - # error. - if arg[:2] not in self._opt_prefixes: - self._match_short_opt(arg, state) - return - - if not self.ignore_unknown_options: - raise - - state.largs.append(arg) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/designspaceLib/split.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/designspaceLib/split.py deleted file mode 100644 index 0b7cdf4be05dea1e810b4fddf4bf026bc1a50a85..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/designspaceLib/split.py +++ /dev/null @@ -1,475 +0,0 @@ -"""Allows building all the variable fonts of a DesignSpace version 5 by -splitting the document into interpolable sub-space, then into each VF. 
-""" - -from __future__ import annotations - -import itertools -import logging -import math -from typing import Any, Callable, Dict, Iterator, List, Tuple, cast - -from fontTools.designspaceLib import ( - AxisDescriptor, - AxisMappingDescriptor, - DesignSpaceDocument, - DiscreteAxisDescriptor, - InstanceDescriptor, - RuleDescriptor, - SimpleLocationDict, - SourceDescriptor, - VariableFontDescriptor, -) -from fontTools.designspaceLib.statNames import StatNames, getStatNames -from fontTools.designspaceLib.types import ( - ConditionSet, - Range, - Region, - getVFUserRegion, - locationInRegion, - regionInRegion, - userRegionToDesignRegion, -) - -LOGGER = logging.getLogger(__name__) - -MakeInstanceFilenameCallable = Callable[ - [DesignSpaceDocument, InstanceDescriptor, StatNames], str -] - - -def defaultMakeInstanceFilename( - doc: DesignSpaceDocument, instance: InstanceDescriptor, statNames: StatNames -) -> str: - """Default callable to synthesize an instance filename - when makeNames=True, for instances that don't specify an instance name - in the designspace. This part of the name generation can be overriden - because it's not specified by the STAT table. - """ - familyName = instance.familyName or statNames.familyNames.get("en") - styleName = instance.styleName or statNames.styleNames.get("en") - return f"{familyName}-{styleName}.ttf" - - -def splitInterpolable( - doc: DesignSpaceDocument, - makeNames: bool = True, - expandLocations: bool = True, - makeInstanceFilename: MakeInstanceFilenameCallable = defaultMakeInstanceFilename, -) -> Iterator[Tuple[SimpleLocationDict, DesignSpaceDocument]]: - """Split the given DS5 into several interpolable sub-designspaces. - There are as many interpolable sub-spaces as there are combinations of - discrete axis values. - - E.g. with axes: - - italic (discrete) Upright or Italic - - style (discrete) Sans or Serif - - weight (continuous) 100 to 900 - - There are 4 sub-spaces in which the Weight axis should interpolate: - (Upright, Sans), (Upright, Serif), (Italic, Sans) and (Italic, Serif). - - The sub-designspaces still include the full axis definitions and STAT data, - but the rules, sources, variable fonts, instances are trimmed down to only - keep what falls within the interpolable sub-space. - - Args: - - ``makeNames``: Whether to compute the instance family and style - names using the STAT data. - - ``expandLocations``: Whether to turn all locations into "full" - locations, including implicit default axis values where missing. - - ``makeInstanceFilename``: Callable to synthesize an instance filename - when makeNames=True, for instances that don't specify an instance name - in the designspace. This part of the name generation can be overridden - because it's not specified by the STAT table. - - .. 
versionadded:: 5.0 - """ - discreteAxes = [] - interpolableUserRegion: Region = {} - for axis in doc.axes: - if hasattr(axis, "values"): - # Mypy doesn't support narrowing union types via hasattr() - # TODO(Python 3.10): use TypeGuard - # https://mypy.readthedocs.io/en/stable/type_narrowing.html - axis = cast(DiscreteAxisDescriptor, axis) - discreteAxes.append(axis) - else: - axis = cast(AxisDescriptor, axis) - interpolableUserRegion[axis.name] = Range( - axis.minimum, - axis.maximum, - axis.default, - ) - valueCombinations = itertools.product(*[axis.values for axis in discreteAxes]) - for values in valueCombinations: - discreteUserLocation = { - discreteAxis.name: value - for discreteAxis, value in zip(discreteAxes, values) - } - subDoc = _extractSubSpace( - doc, - {**interpolableUserRegion, **discreteUserLocation}, - keepVFs=True, - makeNames=makeNames, - expandLocations=expandLocations, - makeInstanceFilename=makeInstanceFilename, - ) - yield discreteUserLocation, subDoc - - -def splitVariableFonts( - doc: DesignSpaceDocument, - makeNames: bool = False, - expandLocations: bool = False, - makeInstanceFilename: MakeInstanceFilenameCallable = defaultMakeInstanceFilename, -) -> Iterator[Tuple[str, DesignSpaceDocument]]: - """Convert each variable font listed in this document into a standalone - designspace. This can be used to compile all the variable fonts from a - format 5 designspace using tools that can only deal with 1 VF at a time. - - Args: - - ``makeNames``: Whether to compute the instance family and style - names using the STAT data. - - ``expandLocations``: Whether to turn all locations into "full" - locations, including implicit default axis values where missing. - - ``makeInstanceFilename``: Callable to synthesize an instance filename - when makeNames=True, for instances that don't specify an instance name - in the designspace. This part of the name generation can be overridden - because it's not specified by the STAT table. - - .. versionadded:: 5.0 - """ - # Make one DesignspaceDoc v5 for each variable font - for vf in doc.getVariableFonts(): - vfUserRegion = getVFUserRegion(doc, vf) - vfDoc = _extractSubSpace( - doc, - vfUserRegion, - keepVFs=False, - makeNames=makeNames, - expandLocations=expandLocations, - makeInstanceFilename=makeInstanceFilename, - ) - vfDoc.lib = {**vfDoc.lib, **vf.lib} - yield vf.name, vfDoc - - -def convert5to4( - doc: DesignSpaceDocument, -) -> Dict[str, DesignSpaceDocument]: - """Convert each variable font listed in this document into a standalone - format 4 designspace. This can be used to compile all the variable fonts - from a format 5 designspace using tools that only know about format 4. - - .. versionadded:: 5.0 - """ - vfs = {} - for _location, subDoc in splitInterpolable(doc): - for vfName, vfDoc in splitVariableFonts(subDoc): - vfDoc.formatVersion = "4.1" - vfs[vfName] = vfDoc - return vfs - - -def _extractSubSpace( - doc: DesignSpaceDocument, - userRegion: Region, - *, - keepVFs: bool, - makeNames: bool, - expandLocations: bool, - makeInstanceFilename: MakeInstanceFilenameCallable, -) -> DesignSpaceDocument: - subDoc = DesignSpaceDocument() - # Don't include STAT info - # FIXME: (Jany) let's think about it. Not include = OK because the point of - # the splitting is to build VFs and we'll use the STAT data of the full - # document to generate the STAT of the VFs, so "no need" to have STAT data - # in sub-docs. Counterpoint: what if someone wants to split this DS for - # other purposes? 
Maybe for that it would be useful to also subset the STAT - # data? - # subDoc.elidedFallbackName = doc.elidedFallbackName - - def maybeExpandDesignLocation(object): - if expandLocations: - return object.getFullDesignLocation(doc) - else: - return object.designLocation - - for axis in doc.axes: - range = userRegion[axis.name] - if isinstance(range, Range) and hasattr(axis, "minimum"): - # Mypy doesn't support narrowing union types via hasattr() - # TODO(Python 3.10): use TypeGuard - # https://mypy.readthedocs.io/en/stable/type_narrowing.html - axis = cast(AxisDescriptor, axis) - subDoc.addAxis( - AxisDescriptor( - # Same info - tag=axis.tag, - name=axis.name, - labelNames=axis.labelNames, - hidden=axis.hidden, - # Subset range - minimum=max(range.minimum, axis.minimum), - default=range.default or axis.default, - maximum=min(range.maximum, axis.maximum), - map=[ - (user, design) - for user, design in axis.map - if range.minimum <= user <= range.maximum - ], - # Don't include STAT info - axisOrdering=None, - axisLabels=None, - ) - ) - - subDoc.axisMappings = mappings = [] - subDocAxes = {axis.name for axis in subDoc.axes} - for mapping in doc.axisMappings: - if not all(axis in subDocAxes for axis in mapping.inputLocation.keys()): - continue - if not all(axis in subDocAxes for axis in mapping.outputLocation.keys()): - LOGGER.error( - "In axis mapping from input %s, some output axes are not in the variable-font: %s", - mapping.inputLocation, - mapping.outputLocation, - ) - continue - - mappingAxes = set() - mappingAxes.update(mapping.inputLocation.keys()) - mappingAxes.update(mapping.outputLocation.keys()) - for axis in doc.axes: - if axis.name not in mappingAxes: - continue - range = userRegion[axis.name] - if ( - range.minimum != axis.minimum - or (range.default is not None and range.default != axis.default) - or range.maximum != axis.maximum - ): - LOGGER.error( - "Limiting axis ranges used in elements not supported: %s", - axis.name, - ) - continue - - mappings.append( - AxisMappingDescriptor( - inputLocation=mapping.inputLocation, - outputLocation=mapping.outputLocation, - ) - ) - - # Don't include STAT info - # subDoc.locationLabels = doc.locationLabels - - # Rules: subset them based on conditions - designRegion = userRegionToDesignRegion(doc, userRegion) - subDoc.rules = _subsetRulesBasedOnConditions(doc.rules, designRegion) - subDoc.rulesProcessingLast = doc.rulesProcessingLast - - # Sources: keep only the ones that fall within the kept axis ranges - for source in doc.sources: - if not locationInRegion(doc.map_backward(source.designLocation), userRegion): - continue - - subDoc.addSource( - SourceDescriptor( - filename=source.filename, - path=source.path, - font=source.font, - name=source.name, - designLocation=_filterLocation( - userRegion, maybeExpandDesignLocation(source) - ), - layerName=source.layerName, - familyName=source.familyName, - styleName=source.styleName, - muteKerning=source.muteKerning, - muteInfo=source.muteInfo, - mutedGlyphNames=source.mutedGlyphNames, - ) - ) - - # Copy family name translations from the old default source to the new default - vfDefault = subDoc.findDefault() - oldDefault = doc.findDefault() - if vfDefault is not None and oldDefault is not None: - vfDefault.localisedFamilyName = oldDefault.localisedFamilyName - - # Variable fonts: keep only the ones that fall within the kept axis ranges - if keepVFs: - # Note: call getVariableFont() to make the implicit VFs explicit - for vf in doc.getVariableFonts(): - vfUserRegion = getVFUserRegion(doc, vf) - if 
regionInRegion(vfUserRegion, userRegion): - subDoc.addVariableFont( - VariableFontDescriptor( - name=vf.name, - filename=vf.filename, - axisSubsets=[ - axisSubset - for axisSubset in vf.axisSubsets - if isinstance(userRegion[axisSubset.name], Range) - ], - lib=vf.lib, - ) - ) - - # Instances: same as Sources + compute missing names - for instance in doc.instances: - if not locationInRegion(instance.getFullUserLocation(doc), userRegion): - continue - - if makeNames: - statNames = getStatNames(doc, instance.getFullUserLocation(doc)) - familyName = instance.familyName or statNames.familyNames.get("en") - styleName = instance.styleName or statNames.styleNames.get("en") - subDoc.addInstance( - InstanceDescriptor( - filename=instance.filename - or makeInstanceFilename(doc, instance, statNames), - path=instance.path, - font=instance.font, - name=instance.name or f"{familyName} {styleName}", - userLocation={} if expandLocations else instance.userLocation, - designLocation=_filterLocation( - userRegion, maybeExpandDesignLocation(instance) - ), - familyName=familyName, - styleName=styleName, - postScriptFontName=instance.postScriptFontName - or statNames.postScriptFontName, - styleMapFamilyName=instance.styleMapFamilyName - or statNames.styleMapFamilyNames.get("en"), - styleMapStyleName=instance.styleMapStyleName - or statNames.styleMapStyleName, - localisedFamilyName=instance.localisedFamilyName - or statNames.familyNames, - localisedStyleName=instance.localisedStyleName - or statNames.styleNames, - localisedStyleMapFamilyName=instance.localisedStyleMapFamilyName - or statNames.styleMapFamilyNames, - localisedStyleMapStyleName=instance.localisedStyleMapStyleName - or {}, - lib=instance.lib, - ) - ) - else: - subDoc.addInstance( - InstanceDescriptor( - filename=instance.filename, - path=instance.path, - font=instance.font, - name=instance.name, - userLocation={} if expandLocations else instance.userLocation, - designLocation=_filterLocation( - userRegion, maybeExpandDesignLocation(instance) - ), - familyName=instance.familyName, - styleName=instance.styleName, - postScriptFontName=instance.postScriptFontName, - styleMapFamilyName=instance.styleMapFamilyName, - styleMapStyleName=instance.styleMapStyleName, - localisedFamilyName=instance.localisedFamilyName, - localisedStyleName=instance.localisedStyleName, - localisedStyleMapFamilyName=instance.localisedStyleMapFamilyName, - localisedStyleMapStyleName=instance.localisedStyleMapStyleName, - lib=instance.lib, - ) - ) - - subDoc.lib = doc.lib - - return subDoc - - -def _conditionSetFrom(conditionSet: List[Dict[str, Any]]) -> ConditionSet: - c: Dict[str, Range] = {} - for condition in conditionSet: - minimum, maximum = condition.get("minimum"), condition.get("maximum") - c[condition["name"]] = Range( - minimum if minimum is not None else -math.inf, - maximum if maximum is not None else math.inf, - ) - return c - - -def _subsetRulesBasedOnConditions( - rules: List[RuleDescriptor], designRegion: Region -) -> List[RuleDescriptor]: - # What rules to keep: - # - Keep the rule if any conditionset is relevant. - # - A conditionset is relevant if all conditions are relevant or it is empty. 
- # - A condition is relevant if - # - axis is point (C-AP), - # - and point in condition's range (C-AP-in) - # (in this case remove the condition because it's always true) - # - else (C-AP-out) whole conditionset can be discarded (condition false - # => conditionset false) - # - axis is range (C-AR), - # - (C-AR-all) and axis range fully contained in condition range: we can - # scrap the condition because it's always true - # - (C-AR-inter) and intersection(axis range, condition range) not empty: - # keep the condition with the smaller range (= intersection) - # - (C-AR-none) else, whole conditionset can be discarded - newRules: List[RuleDescriptor] = [] - for rule in rules: - newRule: RuleDescriptor = RuleDescriptor( - name=rule.name, conditionSets=[], subs=rule.subs - ) - for conditionset in rule.conditionSets: - cs = _conditionSetFrom(conditionset) - newConditionset: List[Dict[str, Any]] = [] - discardConditionset = False - for selectionName, selectionValue in designRegion.items(): - # TODO: Ensure that all(key in conditionset for key in region.keys())? - if selectionName not in cs: - # raise Exception("Selection has different axes than the rules") - continue - if isinstance(selectionValue, (float, int)): # is point - # Case C-AP-in - if selectionValue in cs[selectionName]: - pass # always matches, conditionset can stay empty for this one. - # Case C-AP-out - else: - discardConditionset = True - else: # is range - # Case C-AR-all - if selectionValue in cs[selectionName]: - pass # always matches, conditionset can stay empty for this one. - else: - intersection = cs[selectionName].intersection(selectionValue) - # Case C-AR-inter - if intersection is not None: - newConditionset.append( - { - "name": selectionName, - "minimum": intersection.minimum, - "maximum": intersection.maximum, - } - ) - # Case C-AR-none - else: - discardConditionset = True - if not discardConditionset: - newRule.conditionSets.append(newConditionset) - if newRule.conditionSets: - newRules.append(newRule) - - return newRules - - -def _filterLocation( - userRegion: Region, - location: Dict[str, float], -) -> Dict[str, float]: - return { - name: value - for name, value in location.items() - if name in userRegion and isinstance(userRegion[name], Range) - } diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ufoLib/plistlib.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ufoLib/plistlib.py deleted file mode 100644 index 1f52f20a2b4836e39d3e292496928185dfe08534..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ufoLib/plistlib.py +++ /dev/null @@ -1,46 +0,0 @@ -"""DEPRECATED - This module is kept here only as a backward compatibility shim -for the old ufoLib.plistlib module, which was moved to fontTools.misc.plistlib. -Please use the latter instead. -""" -from fontTools.misc.plistlib import dump, dumps, load, loads -from fontTools.misc.textTools import tobytes - -# The following functions were part of the old py2-like ufoLib.plistlib API. -# They are kept only for backward compatiblity. 
-from fontTools.ufoLib.utils import deprecated - - -@deprecated("Use 'fontTools.misc.plistlib.load' instead") -def readPlist(path_or_file): - did_open = False - if isinstance(path_or_file, str): - path_or_file = open(path_or_file, "rb") - did_open = True - try: - return load(path_or_file, use_builtin_types=False) - finally: - if did_open: - path_or_file.close() - - -@deprecated("Use 'fontTools.misc.plistlib.dump' instead") -def writePlist(value, path_or_file): - did_open = False - if isinstance(path_or_file, str): - path_or_file = open(path_or_file, "wb") - did_open = True - try: - dump(value, path_or_file, use_builtin_types=False) - finally: - if did_open: - path_or_file.close() - - -@deprecated("Use 'fontTools.misc.plistlib.loads' instead") -def readPlistFromString(data): - return loads(tobytes(data, encoding="utf-8"), use_builtin_types=False) - - -@deprecated("Use 'fontTools.misc.plistlib.dumps' instead") -def writePlistToString(value): - return dumps(value, use_builtin_types=False) diff --git a/spaces/Dantra1/CeliaSensei/README.md b/spaces/Dantra1/CeliaSensei/README.md deleted file mode 100644 index 2e44ec5507a21c84647346865c876ce2b48db560..0000000000000000000000000000000000000000 --- a/spaces/Dantra1/CeliaSensei/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Vits Models -emoji: 🏃 -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: sayashi/vits-models ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Dao3/image-to-video/app.py b/spaces/Dao3/image-to-video/app.py deleted file mode 100644 index 7dbb0a7692a79a4116b7fcd856e4d74c8d03e28a..0000000000000000000000000000000000000000 --- a/spaces/Dao3/image-to-video/app.py +++ /dev/null @@ -1,106 +0,0 @@ -import gradio as gr -from transformers import pipeline -import io, base64 -from PIL import Image -import numpy as np -import tensorflow as tf -import mediapy -import os -import sys -from huggingface_hub import snapshot_download -from image_tools.sizes import resize_and_crop - -os.system("git clone https://github.com/google-research/frame-interpolation") -sys.path.append("frame-interpolation") -from eval import interpolator, util - -ffmpeg_path = util.get_ffmpeg_path() -mediapy.set_ffmpeg(ffmpeg_path) - -model = snapshot_download(repo_id="akhaliq/frame-interpolation-film-style") -interpolator = interpolator.Interpolator(model, None) - -def resize(width, img): - basewidth = width - img = Image.open(img) - wpercent = (basewidth / float(img.size[0])) - hsize = int((float(img.size[1]) * float(wpercent))) - img = img.resize((basewidth, hsize), Image.ANTIALIAS) - return img - -def resize_img(img1, img2, output_name): - img_target_size = Image.open(img1) - img_to_resize = resize_and_crop( - img2, - (img_target_size.size[0], img_target_size.size[1]), - crop_origin="middle" - ) - img_to_resize.save(output_name) - -def generate_interpolation(frame1, frame2, frame3, frame4, frame5, frame6, times_to_interpolate, fps): - - frame1 = resize(256, frame1) - frame2 = resize(256, frame2) - frame3 = resize(256, frame3) - frame4 = resize(256, frame4) - frame5 = resize(256, frame5) - frame6 = resize(256, frame6) - - frame1.save("test1.png") - frame2.save("test2.png") - frame3.save("test3.png") - frame4.save("test4.png") - frame5.save("test5.png") - frame6.save("test6.png") - - resize_img("test1.png", "test2.png", "resized_img2.png") - resize_img("test1.png", "test3.png", "resized_img3.png") - 
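    # the remaining frames are likewise resized and cropped to match frame 1's dimensions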
resize_img("test1.png", "test4.png", "resized_img4.png") - resize_img("test1.png", "test5.png", "resized_img5.png") - resize_img("test1.png", "test6.png", "resized_img6.png") - - input_frames = ["test1.png", "resized_img2.png", "resized_img3.png", "resized_img4.png", "resized_img5.png", "resized_img6.png"] - - frames = list(util.interpolate_recursively_from_files(input_frames, times_to_interpolate, interpolator)) - - mediapy.write_video("out.mp4", frames, fps=fps) - - return "out.mp4" - -demo = gr.Blocks() - -with demo: - with gr.Row(): - - # Left column (inputs) - with gr.Column(): - - with gr.Row(): - # upload images and get image strings - input_arr = [ - gr.inputs.Image(type='filepath', label="Frame 1"), - gr.inputs.Image(type='filepath', label="Frame 2"), - gr.inputs.Image(type='filepath', label="Frame 3"), - gr.inputs.Image(type='filepath', label="Frame 4"), - gr.inputs.Image(type='filepath', label="Frame 5"), - gr.inputs.Image(type='filepath', label="Frame 6"), - ] - - with gr.Row(): - input_arr.append(gr.inputs.Slider(minimum=2, maximum=10, step=1, label="Times to Interpolate")) - input_arr.append(gr.inputs.Slider(minimum=15, maximum=60, step=1, label="fps")) - - # Rows of instructions & buttons - with gr.Row(): - gr.Markdown("After uploading some images, hit the 'Generate Video' button to create a short video!") - button_gen_video = gr.Button("Generate Video") - - - # Right column (outputs) - with gr.Column(): - output_interpolation = gr.Video(label="Generated Video") - - # Bind functions to buttons - button_gen_video.click(fn=generate_interpolation, inputs=input_arr, outputs=output_interpolation) - -demo.launch(debug=True, enable_queue=True) diff --git a/spaces/Datasculptor/MusicGen/audiocraft/modules/conditioners.py b/spaces/Datasculptor/MusicGen/audiocraft/modules/conditioners.py deleted file mode 100644 index 82792316024b88d4c5c38b0a28f443627771d509..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/MusicGen/audiocraft/modules/conditioners.py +++ /dev/null @@ -1,990 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from collections import defaultdict -from copy import deepcopy -from dataclasses import dataclass, field -from itertools import chain -import logging -import math -import random -import re -import typing as tp -import warnings - -from einops import rearrange -from num2words import num2words -import spacy -from transformers import T5EncoderModel, T5Tokenizer # type: ignore -import torchaudio -import torch -from torch import nn -from torch import Tensor -import torch.nn.functional as F -from torch.nn.utils.rnn import pad_sequence - -from .streaming import StreamingModule -from .transformer import create_sin_embedding -from ..data.audio_dataset import SegmentInfo -from ..utils.autocast import TorchAutocast -from ..utils.utils import hash_trick, length_to_mask, collate - - -logger = logging.getLogger(__name__) -TextCondition = tp.Optional[str] # a text condition can be a string or None (if doesn't exist) -ConditionType = tp.Tuple[Tensor, Tensor] # condition, mask - - -class WavCondition(tp.NamedTuple): - wav: Tensor - length: Tensor - path: tp.List[tp.Optional[str]] = [] - - -def nullify_condition(condition: ConditionType, dim: int = 1): - """This function transforms an input condition to a null condition. 
- The way it is done by converting it to a single zero vector similarly - to how it is done inside WhiteSpaceTokenizer and NoopTokenizer. - - Args: - condition (ConditionType): a tuple of condition and mask (tp.Tuple[Tensor, Tensor]) - dim (int): the dimension that will be truncated (should be the time dimension) - WARNING!: dim should not be the batch dimension! - Returns: - ConditionType: a tuple of null condition and mask - """ - assert dim != 0, "dim cannot be the batch dimension!" - assert type(condition) == tuple and \ - type(condition[0]) == Tensor and \ - type(condition[1]) == Tensor, "'nullify_condition' got an unexpected input type!" - cond, mask = condition - B = cond.shape[0] - last_dim = cond.dim() - 1 - out = cond.transpose(dim, last_dim) - out = 0. * out[..., :1] - out = out.transpose(dim, last_dim) - mask = torch.zeros((B, 1), device=out.device).int() - assert cond.dim() == out.dim() - return out, mask - - -def nullify_wav(wav: Tensor) -> WavCondition: - """Create a nullified WavCondition from a wav tensor with appropriate shape. - - Args: - wav (Tensor): tensor of shape [B, T] - Returns: - WavCondition: wav condition with nullified wav. - """ - null_wav, _ = nullify_condition((wav, torch.zeros_like(wav)), dim=wav.dim() - 1) - return WavCondition( - wav=null_wav, - length=torch.tensor([0] * wav.shape[0], device=wav.device), - path=['null_wav'] * wav.shape[0] - ) - - -@dataclass -class ConditioningAttributes: - text: tp.Dict[str, tp.Optional[str]] = field(default_factory=dict) - wav: tp.Dict[str, WavCondition] = field(default_factory=dict) - - def __getitem__(self, item): - return getattr(self, item) - - @property - def text_attributes(self): - return self.text.keys() - - @property - def wav_attributes(self): - return self.wav.keys() - - @property - def attributes(self): - return {"text": self.text_attributes, "wav": self.wav_attributes} - - def to_flat_dict(self): - return { - **{f"text.{k}": v for k, v in self.text.items()}, - **{f"wav.{k}": v for k, v in self.wav.items()}, - } - - @classmethod - def from_flat_dict(cls, x): - out = cls() - for k, v in x.items(): - kind, att = k.split(".") - out[kind][att] = v - return out - - -class SegmentWithAttributes(SegmentInfo): - """Base class for all dataclasses that are used for conditioning. - All child classes should implement `to_condition_attributes` that converts - the existing attributes to a dataclass of type ConditioningAttributes. - """ - def to_condition_attributes(self) -> ConditioningAttributes: - raise NotImplementedError() - - -class Tokenizer: - """Base class for all tokenizers - (in case we want to introduce more advances tokenizers in the future). - """ - def __call__(self, texts: tp.List[tp.Optional[str]]) -> tp.Tuple[Tensor, Tensor]: - raise NotImplementedError() - - -class WhiteSpaceTokenizer(Tokenizer): - """This tokenizer should be used for natural language descriptions. 
- For example: - ["he didn't, know he's going home.", 'shorter sentence'] => - [[78, 62, 31, 4, 78, 25, 19, 34], - [59, 77, 0, 0, 0, 0, 0, 0]] - """ - PUNCTUATIONS = "?:!.,;" - - def __init__(self, n_bins: int, pad_idx: int = 0, language: str = "en_core_web_sm", - lemma: bool = True, stopwords: bool = True) -> None: - self.n_bins = n_bins - self.pad_idx = pad_idx - self.lemma = lemma - self.stopwords = stopwords - try: - self.nlp = spacy.load(language) - except IOError: - spacy.cli.download(language) # type: ignore - self.nlp = spacy.load(language) - - @tp.no_type_check - def __call__( - self, - texts: tp.List[tp.Optional[str]], - return_text: bool = False - ) -> tp.Tuple[Tensor, Tensor]: - """Take a list of strings and convert them to a tensor of indices. - - Args: - texts (tp.List[str]): List of strings. - return_text (bool, optional): Whether to return text as additional tuple item. Defaults to False. - Returns: - tp.Tuple[Tensor, Tensor]: - - Indices of words in the LUT. - - And a mask indicating where the padding tokens are - """ - output, lengths = [], [] - texts = deepcopy(texts) - for i, text in enumerate(texts): - # if current sample doesn't have a certain attribute, replace with pad token - if text is None: - output.append(Tensor([self.pad_idx])) - lengths.append(0) - continue - - # convert numbers to words - text = re.sub(r"(\d+)", lambda x: num2words(int(x.group(0))), text) # type: ignore - # normalize text - text = self.nlp(text) # type: ignore - # remove stopwords - if self.stopwords: - text = [w for w in text if not w.is_stop] # type: ignore - # remove punctuations - text = [w for w in text if w.text not in self.PUNCTUATIONS] # type: ignore - # lemmatize if needed - text = [getattr(t, "lemma_" if self.lemma else "text") for t in text] # type: ignore - - texts[i] = " ".join(text) - lengths.append(len(text)) - # convert to tensor - tokens = Tensor([hash_trick(w, self.n_bins) for w in text]) - output.append(tokens) - - mask = length_to_mask(torch.IntTensor(lengths)).int() - padded_output = pad_sequence(output, padding_value=self.pad_idx).int().t() - if return_text: - return padded_output, mask, texts # type: ignore - return padded_output, mask - - -class NoopTokenizer(Tokenizer): - """This tokenizer should be used for global conditioners such as: artist, genre, key, etc. - The difference between this and WhiteSpaceTokenizer is that NoopTokenizer does not split - strings, so "Jeff Buckley" will get it's own index. Whereas WhiteSpaceTokenizer will - split it to ["Jeff", "Buckley"] and return an index per word. - - For example: - ["Queen", "ABBA", "Jeff Buckley"] => [43, 55, 101] - ["Metal", "Rock", "Classical"] => [0, 223, 51] - """ - def __init__(self, n_bins: int, pad_idx: int = 0): - self.n_bins = n_bins - self.pad_idx = pad_idx - - def __call__(self, texts: tp.List[tp.Optional[str]]) -> tp.Tuple[Tensor, Tensor]: - output, lengths = [], [] - for text in texts: - # if current sample doesn't have a certain attribute, replace with pad token - if text is None: - output.append(self.pad_idx) - lengths.append(0) - else: - output.append(hash_trick(text, self.n_bins)) - lengths.append(1) - - tokens = torch.LongTensor(output).unsqueeze(1) - mask = length_to_mask(torch.IntTensor(lengths)).int() - return tokens, mask - - -class BaseConditioner(nn.Module): - """Base model for all conditioner modules. We allow the output dim to be different - than the hidden dim for two reasons: 1) keep our LUTs small when the vocab is large; - 2) make all condition dims consistent. 
- - Args: - dim (int): Hidden dim of the model (text-encoder/LUT). - output_dim (int): Output dim of the conditioner. - """ - def __init__(self, dim, output_dim): - super().__init__() - self.dim = dim - self.output_dim = output_dim - self.output_proj = nn.Linear(dim, output_dim) - - def tokenize(self, *args, **kwargs) -> tp.Any: - """Should be any part of the processing that will lead to a synchronization - point, e.g. BPE tokenization with transfer to the GPU. - - The returned value will be saved and return later when calling forward(). - """ - raise NotImplementedError() - - def forward(self, inputs: tp.Any) -> ConditionType: - """Gets input that should be used as conditioning (e.g, genre, description or a waveform). - Outputs a ConditionType, after the input data was embedded as a dense vector. - - Returns: - ConditionType: - - A tensor of size [B, T, D] where B is the batch size, T is the length of the - output embedding and D is the dimension of the embedding. - - And a mask indicating where the padding tokens. - """ - raise NotImplementedError() - - -class TextConditioner(BaseConditioner): - ... - - -class LUTConditioner(TextConditioner): - """Lookup table TextConditioner. - - Args: - n_bins (int): Number of bins. - dim (int): Hidden dim of the model (text-encoder/LUT). - output_dim (int): Output dim of the conditioner. - tokenizer (str): Name of the tokenizer. - pad_idx (int, optional): Index for padding token. Defaults to 0. - """ - def __init__(self, n_bins: int, dim: int, output_dim: int, tokenizer: str, pad_idx: int = 0): - super().__init__(dim, output_dim) - self.embed = nn.Embedding(n_bins, dim) - self.tokenizer: Tokenizer - if tokenizer == "whitespace": - self.tokenizer = WhiteSpaceTokenizer(n_bins, pad_idx=pad_idx) - elif tokenizer == "noop": - self.tokenizer = NoopTokenizer(n_bins, pad_idx=pad_idx) - else: - raise ValueError(f"unrecognized tokenizer `{tokenizer}`.") - - def tokenize(self, x: tp.List[tp.Optional[str]]) -> tp.Tuple[torch.Tensor, torch.Tensor]: - device = self.embed.weight.device - tokens, mask = self.tokenizer(x) - tokens, mask = tokens.to(device), mask.to(device) - return tokens, mask - - def forward(self, inputs: tp.Tuple[torch.Tensor, torch.Tensor]) -> ConditionType: - tokens, mask = inputs - embeds = self.embed(tokens) - embeds = self.output_proj(embeds) - embeds = (embeds * mask.unsqueeze(-1)) - return embeds, mask - - -class T5Conditioner(TextConditioner): - """T5-based TextConditioner. - - Args: - name (str): Name of the T5 model. - output_dim (int): Output dim of the conditioner. - finetune (bool): Whether to fine-tune T5 at train time. - device (str): Device for T5 Conditioner. - autocast_dtype (tp.Optional[str], optional): Autocast dtype. - word_dropout (float, optional): Word dropout probability. - normalize_text (bool, optional): Whether to apply text normalization. 
- """ - MODELS = ["t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b", - "google/flan-t5-small", "google/flan-t5-base", "google/flan-t5-large", - "google/flan-t5-xl", "google/flan-t5-xxl"] - MODELS_DIMS = { - "t5-small": 512, - "t5-base": 768, - "t5-large": 1024, - "t5-3b": 1024, - "t5-11b": 1024, - "google/flan-t5-small": 512, - "google/flan-t5-base": 768, - "google/flan-t5-large": 1024, - "google/flan-t5-3b": 1024, - "google/flan-t5-11b": 1024, - } - - def __init__(self, name: str, output_dim: int, finetune: bool, device: str, - autocast_dtype: tp.Optional[str] = 'float32', word_dropout: float = 0., - normalize_text: bool = False): - assert name in self.MODELS, f"unrecognized t5 model name (should in {self.MODELS})" - super().__init__(self.MODELS_DIMS[name], output_dim) - self.device = device - self.name = name - self.finetune = finetune - self.word_dropout = word_dropout - - if autocast_dtype is None or self.device == 'cpu': - self.autocast = TorchAutocast(enabled=False) - if self.device != 'cpu': - logger.warning("T5 has no autocast, this might lead to NaN") - else: - dtype = getattr(torch, autocast_dtype) - assert isinstance(dtype, torch.dtype) - logger.info(f"T5 will be evaluated with autocast as {autocast_dtype}") - self.autocast = TorchAutocast(enabled=True, device_type=self.device, dtype=dtype) - # Let's disable logging temporarily because T5 will vomit some errors otherwise. - # thanks https://gist.github.com/simon-weber/7853144 - previous_level = logging.root.manager.disable - logging.disable(logging.ERROR) - with warnings.catch_warnings(): - warnings.simplefilter("ignore") - try: - self.t5_tokenizer = T5Tokenizer.from_pretrained(name) - t5 = T5EncoderModel.from_pretrained(name).train(mode=finetune) - finally: - logging.disable(previous_level) - if finetune: - self.t5 = t5 - else: - # this makes sure that the t5 models is not part - # of the saved checkpoint - self.__dict__["t5"] = t5.to(device) - - self.normalize_text = normalize_text - if normalize_text: - self.text_normalizer = WhiteSpaceTokenizer(1, lemma=True, stopwords=True) - - def tokenize(self, x: tp.List[tp.Optional[str]]) -> tp.Dict[str, torch.Tensor]: - # if current sample doesn't have a certain attribute, replace with empty string - entries: tp.List[str] = [xi if xi is not None else "" for xi in x] - if self.normalize_text: - _, _, entries = self.text_normalizer(entries, return_text=True) - if self.word_dropout > 0. and self.training: - new_entries = [] - for entry in entries: - words = [word for word in entry.split(" ") if random.random() >= self.word_dropout] - new_entries.append(" ".join(words)) - entries = new_entries - - empty_idx = torch.LongTensor([i for i, xi in enumerate(entries) if xi == ""]) - - inputs = self.t5_tokenizer(entries, return_tensors="pt", padding=True).to(self.device) - mask = inputs["attention_mask"] - mask[empty_idx, :] = 0 # zero-out index where the input is non-existant - return inputs - - def forward(self, inputs: tp.Dict[str, torch.Tensor]) -> ConditionType: - mask = inputs["attention_mask"] - with torch.set_grad_enabled(self.finetune), self.autocast: - embeds = self.t5(**inputs).last_hidden_state - embeds = self.output_proj(embeds.to(self.output_proj.weight)) - embeds = (embeds * mask.unsqueeze(-1)) - return embeds, mask - - -class WaveformConditioner(BaseConditioner): - """Base class for all conditioners that take a waveform as input. 
- Classes that inherit must implement `_get_wav_embedding` that outputs - a continuous tensor, and `_downsampling_factor` that returns the down-sampling - factor of the embedding model. - - Args: - dim (int): The internal representation dimension. - output_dim (int): Output dimension. - device (tp.Union[torch.device, str]): Device. - """ - def __init__(self, dim: int, output_dim: int, device: tp.Union[torch.device, str]): - super().__init__(dim, output_dim) - self.device = device - - def tokenize(self, wav_length: WavCondition) -> WavCondition: - wav, length, path = wav_length - assert length is not None - return WavCondition(wav.to(self.device), length.to(self.device), path) - - def _get_wav_embedding(self, wav: Tensor) -> Tensor: - """Gets as input a wav and returns a dense vector of conditions.""" - raise NotImplementedError() - - def _downsampling_factor(self): - """Returns the downsampling factor of the embedding model.""" - raise NotImplementedError() - - def forward(self, inputs: WavCondition) -> ConditionType: - """ - Args: - input (WavCondition): Tuple of (waveform, lengths). - Returns: - ConditionType: Dense vector representing the conditioning along with its' mask. - """ - wav, lengths, path = inputs - with torch.no_grad(): - embeds = self._get_wav_embedding(wav) - embeds = embeds.to(self.output_proj.weight) - embeds = self.output_proj(embeds) - - if lengths is not None: - lengths = lengths / self._downsampling_factor() - mask = length_to_mask(lengths, max_len=embeds.shape[1]).int() # type: ignore - else: - mask = torch.ones_like(embeds) - embeds = (embeds * mask.unsqueeze(2).to(self.device)) - - return embeds, mask - - -class ChromaStemConditioner(WaveformConditioner): - """Chroma conditioner that uses DEMUCS to first filter out drums and bass. The is followed by - the insight the drums and bass often dominate the chroma, leading to the chroma not containing the - information about melody. - - Args: - output_dim (int): Output dimension for the conditioner. - sample_rate (int): Sample rate for the chroma extractor. - n_chroma (int): Number of chroma for the chroma extractor. - radix2_exp (int): Radix2 exponent for the chroma extractor. - duration (float): Duration used during training. This is later used for correct padding - in case we are using chroma as prefix. - match_len_on_eval (bool, optional): If True then all chromas are padded to the training - duration. Defaults to False. - eval_wavs (str, optional): Path to a json egg with waveform, this waveforms are used as - conditions during eval (for cases where we don't want to leak test conditions like MusicCaps). - Defaults to None. - n_eval_wavs (int, optional): Limits the number of waveforms used for conditioning. Defaults to 0. - device (tp.Union[torch.device, str], optional): Device for the conditioner. - **kwargs: Additional parameters for the chroma extractor. 
- """ - def __init__(self, output_dim: int, sample_rate: int, n_chroma: int, radix2_exp: int, - duration: float, match_len_on_eval: bool = True, eval_wavs: tp.Optional[str] = None, - n_eval_wavs: int = 0, device: tp.Union[torch.device, str] = "cpu", **kwargs): - from demucs import pretrained - super().__init__(dim=n_chroma, output_dim=output_dim, device=device) - self.autocast = TorchAutocast(enabled=device != "cpu", device_type=self.device, dtype=torch.float32) - self.sample_rate = sample_rate - self.match_len_on_eval = match_len_on_eval - self.duration = duration - self.__dict__["demucs"] = pretrained.get_model('htdemucs').to(device) - self.stem2idx = {'drums': 0, 'bass': 1, 'other': 2, 'vocal': 3} - self.stem_idx = torch.LongTensor([self.stem2idx['vocal'], self.stem2idx['other']]).to(device) - self.chroma = ChromaExtractor(sample_rate=sample_rate, n_chroma=n_chroma, radix2_exp=radix2_exp, - device=device, **kwargs) - self.chroma_len = self._get_chroma_len() - - def _downsampling_factor(self): - return self.chroma.winhop - - def _get_chroma_len(self): - """Get length of chroma during training""" - dummy_wav = torch.zeros((1, self.sample_rate * self.duration), device=self.device) - dummy_chr = self.chroma(dummy_wav) - return dummy_chr.shape[1] - - @torch.no_grad() - def _get_filtered_wav(self, wav): - from demucs.apply import apply_model - from demucs.audio import convert_audio - with self.autocast: - wav = convert_audio(wav, self.sample_rate, self.demucs.samplerate, self.demucs.audio_channels) - stems = apply_model(self.demucs, wav, device=self.device) - stems = stems[:, self.stem_idx] # extract stem - stems = stems.sum(1) # merge extracted stems - stems = stems.mean(1, keepdim=True) # mono - stems = convert_audio(stems, self.demucs.samplerate, self.sample_rate, 1) - return stems - - @torch.no_grad() - def _get_wav_embedding(self, wav): - # avoid 0-size tensors when we are working with null conds - if wav.shape[-1] == 1: - return self.chroma(wav) - stems = self._get_filtered_wav(wav) - chroma = self.chroma(stems) - - if self.match_len_on_eval: - b, t, c = chroma.shape - if t > self.chroma_len: - chroma = chroma[:, :self.chroma_len] - logger.debug(f'chroma was truncated! ({t} -> {chroma.shape[1]})') - elif t < self.chroma_len: - # chroma = F.pad(chroma, (0, 0, 0, self.chroma_len - t)) - n_repeat = int(math.ceil(self.chroma_len / t)) - chroma = chroma.repeat(1, n_repeat, 1) - chroma = chroma[:, :self.chroma_len] - logger.debug(f'chroma was zero-padded! ({t} -> {chroma.shape[1]})') - return chroma - - -class ChromaExtractor(nn.Module): - """Chroma extraction class, handles chroma extraction and quantization. - - Args: - sample_rate (int): Sample rate. - n_chroma (int): Number of chroma to consider. - radix2_exp (int): Radix2 exponent. - nfft (tp.Optional[int], optional): Number of FFT. - winlen (tp.Optional[int], optional): Window length. - winhop (tp.Optional[int], optional): Window hop size. - argmax (bool, optional): Whether to use argmax. Defaults to False. - norm (float, optional): Norm for chroma normalization. Defaults to inf. - device (tp.Union[torch.device, str], optional): Device to use. Defaults to cpu. 
- """ - def __init__(self, sample_rate: int, n_chroma: int = 12, radix2_exp: int = 12, - nfft: tp.Optional[int] = None, winlen: tp.Optional[int] = None, winhop: tp.Optional[int] = None, - argmax: bool = False, norm: float = torch.inf, device: tp.Union[torch.device, str] = "cpu"): - super().__init__() - from librosa import filters - self.device = device - self.autocast = TorchAutocast(enabled=device != "cpu", device_type=self.device, dtype=torch.float32) - self.winlen = winlen or 2 ** radix2_exp - self.nfft = nfft or self.winlen - self.winhop = winhop or (self.winlen // 4) - self.sr = sample_rate - self.n_chroma = n_chroma - self.norm = norm - self.argmax = argmax - self.window = torch.hann_window(self.winlen).to(device) - self.fbanks = torch.from_numpy(filters.chroma(sr=sample_rate, n_fft=self.nfft, tuning=0, - n_chroma=self.n_chroma)).to(device) - self.spec = torchaudio.transforms.Spectrogram(n_fft=self.nfft, win_length=self.winlen, - hop_length=self.winhop, power=2, center=True, - pad=0, normalized=True).to(device) - - def forward(self, wav): - with self.autocast: - T = wav.shape[-1] - # in case we are getting a wav that was dropped out (nullified) - # make sure wav length is no less that nfft - if T < self.nfft: - pad = self.nfft - T - r = 0 if pad % 2 == 0 else 1 - wav = F.pad(wav, (pad // 2, pad // 2 + r), 'constant', 0) - assert wav.shape[-1] == self.nfft, f'expected len {self.nfft} but got {wav.shape[-1]}' - spec = self.spec(wav).squeeze(1) - raw_chroma = torch.einsum("cf,...ft->...ct", self.fbanks, spec) - norm_chroma = torch.nn.functional.normalize(raw_chroma, p=self.norm, dim=-2, eps=1e-6) - norm_chroma = rearrange(norm_chroma, "b d t -> b t d") - - if self.argmax: - idx = norm_chroma.argmax(-1, keepdims=True) - norm_chroma[:] = 0 - norm_chroma.scatter_(dim=-1, index=idx, value=1) - - return norm_chroma - - -def dropout_condition(sample: ConditioningAttributes, condition_type: str, condition: str): - """Utility function for nullifying an attribute inside an ConditioningAttributes object. - If the condition is of type "wav", then nullify it using "nullify_condition". - If the condition is of any other type, set its' value to None. - Works in-place. - """ - if condition_type not in ["text", "wav"]: - raise ValueError( - "dropout_condition got an unexpected condition type!" - f" expected 'wav' or 'text' but got '{condition_type}'" - ) - - if condition not in getattr(sample, condition_type): - raise ValueError( - "dropout_condition received an unexpected condition!" - f" expected wav={sample.wav.keys()} and text={sample.text.keys()}" - f"but got '{condition}' of type '{condition_type}'!" - ) - - if condition_type == "wav": - wav, length, path = sample.wav[condition] - sample.wav[condition] = nullify_wav(wav) - else: - sample.text[condition] = None - - return sample - - -class DropoutModule(nn.Module): - """Base class for all dropout modules.""" - def __init__(self, seed: int = 1234): - super().__init__() - self.rng = torch.Generator() - self.rng.manual_seed(seed) - - -class AttributeDropout(DropoutModule): - """Applies dropout with a given probability per attribute. This is different from the behavior of - ClassifierFreeGuidanceDropout as this allows for attributes to be dropped out separately. For example, - "artist" can be dropped while "genre" remains. This is in contrast to ClassifierFreeGuidanceDropout - where if "artist" is dropped "genre" must also be dropped. - - Args: - p (tp.Dict[str, float]): A dict mapping between attributes and dropout probability. For example: - ... 
- "genre": 0.1, - "artist": 0.5, - "wav": 0.25, - ... - active_on_eval (bool, optional): Whether the dropout is active at eval. Default to False. - seed (int, optional): Random seed. - """ - def __init__(self, p: tp.Dict[str, tp.Dict[str, float]], active_on_eval: bool = False, seed: int = 1234): - super().__init__(seed=seed) - self.active_on_eval = active_on_eval - # construct dict that return the values from p otherwise 0 - self.p = {} - for condition_type, probs in p.items(): - self.p[condition_type] = defaultdict(lambda: 0, probs) - - def forward(self, samples: tp.List[ConditioningAttributes]) -> tp.List[ConditioningAttributes]: - """ - Args: - samples (tp.List[ConditioningAttributes]): List of conditions. - Returns: - tp.List[ConditioningAttributes]: List of conditions after certain attributes were set to None. - """ - if not self.training and not self.active_on_eval: - return samples - - samples = deepcopy(samples) - - for condition_type, ps in self.p.items(): # for condition types [text, wav] - for condition, p in ps.items(): # for attributes of each type (e.g., [artist, genre]) - if torch.rand(1, generator=self.rng).item() < p: - for sample in samples: - dropout_condition(sample, condition_type, condition) - - return samples - - def __repr__(self): - return f"AttributeDropout({dict(self.p)})" - - -class ClassifierFreeGuidanceDropout(DropoutModule): - """Applies Classifier Free Guidance dropout, meaning all attributes - are dropped with the same probability. - - Args: - p (float): Probability to apply condition dropout during training. - seed (int): Random seed. - """ - def __init__(self, p: float, seed: int = 1234): - super().__init__(seed=seed) - self.p = p - - def forward(self, samples: tp.List[ConditioningAttributes]) -> tp.List[ConditioningAttributes]: - """ - Args: - samples (tp.List[ConditioningAttributes]): List of conditions. - Returns: - tp.List[ConditioningAttributes]: List of conditions after all attributes were set to None. - """ - if not self.training: - return samples - - # decide on which attributes to drop in a batched fashion - drop = torch.rand(1, generator=self.rng).item() < self.p - if not drop: - return samples - - # nullify conditions of all attributes - samples = deepcopy(samples) - - for condition_type in ["wav", "text"]: - for sample in samples: - for condition in sample.attributes[condition_type]: - dropout_condition(sample, condition_type, condition) - - return samples - - def __repr__(self): - return f"ClassifierFreeGuidanceDropout(p={self.p})" - - -class ConditioningProvider(nn.Module): - """Main class to provide conditions given all the supported conditioners. - - Args: - conditioners (dict): Dictionary of conditioners. - merge_text_conditions_p (float, optional): Probability to merge all text sources - into a single text condition. Defaults to 0. - drop_desc_p (float, optional): Probability to drop the original description - when merging all text sources into a single text condition. Defaults to 0. - device (tp.Union[torch.device, str], optional): Device for conditioners and output condition types. 
- """ - def __init__( - self, - conditioners: tp.Dict[str, BaseConditioner], - merge_text_conditions_p: float = 0, - drop_desc_p: float = 0, - device: tp.Union[torch.device, str] = "cpu", - ): - super().__init__() - self.device = device - self.merge_text_conditions_p = merge_text_conditions_p - self.drop_desc_p = drop_desc_p - self.conditioners = nn.ModuleDict(conditioners) - - @property - def text_conditions(self): - return [k for k, v in self.conditioners.items() if isinstance(v, TextConditioner)] - - @property - def wav_conditions(self): - return [k for k, v in self.conditioners.items() if isinstance(v, WaveformConditioner)] - - @property - def has_wav_condition(self): - return len(self.wav_conditions) > 0 - - def tokenize(self, inputs: tp.List[ConditioningAttributes]) -> tp.Dict[str, tp.Any]: - """Match attributes/wavs with existing conditioners in self, and compute tokenize them accordingly. - This should be called before starting any real GPU work to avoid synchronization points. - This will return a dict matching conditioner names to their arbitrary tokenized representations. - - Args: - inputs (list[ConditioningAttribres]): List of ConditioningAttributes objects containing - text and wav conditions. - """ - assert all([type(x) == ConditioningAttributes for x in inputs]), \ - "got unexpected types input for conditioner! should be tp.List[ConditioningAttributes]" \ - f" but types were {set([type(x) for x in inputs])}" - - output = {} - text = self._collate_text(inputs) - wavs = self._collate_wavs(inputs) - - assert set(text.keys() | wavs.keys()).issubset(set(self.conditioners.keys())), \ - f"got an unexpected attribute! Expected {self.conditioners.keys()}, got {text.keys(), wavs.keys()}" - - for attribute, batch in chain(text.items(), wavs.items()): - output[attribute] = self.conditioners[attribute].tokenize(batch) - return output - - def forward(self, tokenized: tp.Dict[str, tp.Any]) -> tp.Dict[str, ConditionType]: - """Compute pairs of `(embedding, mask)` using the configured conditioners - and the tokenized representations. The output is for example: - - { - "genre": (torch.Tensor([B, 1, D_genre]), torch.Tensor([B, 1])), - "description": (torch.Tensor([B, T_desc, D_desc]), torch.Tensor([B, T_desc])), - ... - } - - Args: - tokenized (dict): Dict of tokenized representations as returned by `tokenize()`. - """ - output = {} - for attribute, inputs in tokenized.items(): - condition, mask = self.conditioners[attribute](inputs) - output[attribute] = (condition, mask) - return output - - def _collate_text(self, samples: tp.List[ConditioningAttributes]) -> tp.Dict[str, tp.List[tp.Optional[str]]]: - """Given a list of ConditioningAttributes objects, compile a dictionary where the keys - are the attributes and the values are the aggregated input per attribute. 
- For example: - Input: - [ - ConditioningAttributes(text={"genre": "Rock", "description": "A rock song with a guitar solo"}, wav=...), - ConditioningAttributes(text={"genre": "Hip-hop", "description": "A hip-hop verse"}, wav=...), - ] - Output: - { - "genre": ["Rock", "Hip-hop"], - "description": ["A rock song with a guitar solo", "A hip-hop verse"] - } - """ - batch_per_attribute: tp.Dict[str, tp.List[tp.Optional[str]]] = defaultdict(list) - - def _merge_conds(cond, merge_text_conditions_p=0, drop_desc_p=0): - def is_valid(k, v): - k_valid = k in ['key', 'bpm', 'genre', 'moods', 'instrument'] - v_valid = v is not None and isinstance(v, (int, float, str, list)) - return k_valid and v_valid - - def process_value(v): - if isinstance(v, (int, float, str)): - return v - if isinstance(v, list): - return ", ".join(v) - else: - RuntimeError(f"unknown type for text value! ({type(v), v})") - - desc = cond.text['description'] - meta_data = "" - if random.uniform(0, 1) < merge_text_conditions_p: - meta_pairs = [f'{k}: {process_value(v)}' for k, v in cond.text.items() if is_valid(k, v)] - random.shuffle(meta_pairs) - meta_data = ". ".join(meta_pairs) - desc = desc if not random.uniform(0, 1) < drop_desc_p else None - - if desc is None: - desc = meta_data if len(meta_data) > 1 else None - else: - desc = desc.rstrip('.') + ". " + meta_data - cond.text['description'] = desc.strip() if desc else None - - if self.training and self.merge_text_conditions_p: - for sample in samples: - _merge_conds(sample, self.merge_text_conditions_p, self.drop_desc_p) - - texts = [x.text for x in samples] - for text in texts: - for condition in self.text_conditions: - batch_per_attribute[condition].append(text[condition]) - - return batch_per_attribute - - def _collate_wavs(self, samples: tp.List[ConditioningAttributes]): - """Generate a dict where the keys are attributes by which we fetch similar wavs, - and the values are Tensors of wavs according to said attribtues. - - *Note*: by the time the samples reach this function, each sample should have some waveform - inside the "wav" attribute. It should be either: - 1. A real waveform - 2. A null waveform due to the sample having no similar waveforms (nullified by the dataset) - 3. A null waveform due to it being dropped in a dropout module (nullified by dropout) - - Args: - samples (tp.List[ConditioningAttributes]): List of ConditioningAttributes samples. - Returns: - dict: A dicionary mapping an attribute name to wavs. - """ - wavs = defaultdict(list) - lens = defaultdict(list) - paths = defaultdict(list) - out = {} - - for sample in samples: - for attribute in self.wav_conditions: - wav, length, path = sample.wav[attribute] - wavs[attribute].append(wav.flatten()) - lens[attribute].append(length) - paths[attribute].append(path) - - # stack all wavs to a single tensor - for attribute in self.wav_conditions: - stacked_wav, _ = collate(wavs[attribute], dim=0) - out[attribute] = WavCondition(stacked_wav.unsqueeze(1), - torch.cat(lens['self_wav']), paths[attribute]) # type: ignore - - return out - - -class ConditionFuser(StreamingModule): - """Condition fuser handles the logic to combine the different conditions - to the actual model input. - - Args: - fuse2cond (tp.Dict[str, str]): A dictionary that says how to fuse - each condition. For example: - { - "prepend": ["description"], - "sum": ["genre", "bpm"], - "cross": ["description"], - } - cross_attention_pos_emb (bool, optional): Use positional embeddings in cross attention. 
- cross_attention_pos_emb_scale (int): Scale for positional embeddings in cross attention if used. - """ - FUSING_METHODS = ["sum", "prepend", "cross", "input_interpolate"] - - def __init__(self, fuse2cond: tp.Dict[str, tp.List[str]], cross_attention_pos_emb: bool = False, - cross_attention_pos_emb_scale: float = 1.0): - super().__init__() - assert all( - [k in self.FUSING_METHODS for k in fuse2cond.keys()] - ), f"got invalid fuse method, allowed methods: {self.FUSING_MEHTODS}" - self.cross_attention_pos_emb = cross_attention_pos_emb - self.cross_attention_pos_emb_scale = cross_attention_pos_emb_scale - self.fuse2cond: tp.Dict[str, tp.List[str]] = fuse2cond - self.cond2fuse: tp.Dict[str, str] = {} - for fuse_method, conditions in fuse2cond.items(): - for condition in conditions: - self.cond2fuse[condition] = fuse_method - - def forward( - self, - input: Tensor, - conditions: tp.Dict[str, ConditionType] - ) -> tp.Tuple[Tensor, tp.Optional[Tensor]]: - """Fuse the conditions to the provided model input. - - Args: - input (Tensor): Transformer input. - conditions (tp.Dict[str, ConditionType]): Dict of conditions. - Returns: - tp.Tuple[Tensor, Tensor]: The first tensor is the transformer input - after the conditions have been fused. The second output tensor is the tensor - used for cross-attention or None if no cross attention inputs exist. - """ - B, T, _ = input.shape - - if 'offsets' in self._streaming_state: - first_step = False - offsets = self._streaming_state['offsets'] - else: - first_step = True - offsets = torch.zeros(input.shape[0], dtype=torch.long, device=input.device) - - assert set(conditions.keys()).issubset(set(self.cond2fuse.keys())), \ - f"given conditions contain unknown attributes for fuser, " \ - f"expected {self.cond2fuse.keys()}, got {conditions.keys()}" - cross_attention_output = None - for cond_type, (cond, cond_mask) in conditions.items(): - op = self.cond2fuse[cond_type] - if op == "sum": - input += cond - elif op == "input_interpolate": - cond = rearrange(cond, "b t d -> b d t") - cond = F.interpolate(cond, size=input.shape[1]) - input += rearrange(cond, "b d t -> b t d") - elif op == "prepend": - if first_step: - input = torch.cat([cond, input], dim=1) - elif op == "cross": - if cross_attention_output is not None: - cross_attention_output = torch.cat([cross_attention_output, cond], dim=1) - else: - cross_attention_output = cond - else: - raise ValueError(f"unknown op ({op})") - - if self.cross_attention_pos_emb and cross_attention_output is not None: - positions = torch.arange( - cross_attention_output.shape[1], - device=cross_attention_output.device - ).view(1, -1, 1) - pos_emb = create_sin_embedding(positions, cross_attention_output.shape[-1]) - cross_attention_output = cross_attention_output + self.cross_attention_pos_emb_scale * pos_emb - - if self._is_streaming: - self._streaming_state['offsets'] = offsets + T - - return input, cross_attention_output diff --git a/spaces/DevashishBhake/SERModel/README.md b/spaces/DevashishBhake/SERModel/README.md deleted file mode 100644 index 77580d1f4d037383080eed6c1c2258255cc2ae6b..0000000000000000000000000000000000000000 --- a/spaces/DevashishBhake/SERModel/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: SERModel -emoji: 📈 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/models/stylegan2/__init__.py 
b/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/models/stylegan2/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/DragGan/DragGan/stylegan_human/pti/pti_models/e4e/stylegan2/op/fused_act.py b/spaces/DragGan/DragGan/stylegan_human/pti/pti_models/e4e/stylegan2/op/fused_act.py deleted file mode 100644 index 90949545ba955dabf2e17d8cf5e524d5cb190a63..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/stylegan_human/pti/pti_models/e4e/stylegan2/op/fused_act.py +++ /dev/null @@ -1,34 +0,0 @@ -import os - -import torch -from torch import nn -from torch.nn import functional as F -from torch.autograd import Function - - -module_path = os.path.dirname(__file__) - - - -class FusedLeakyReLU(nn.Module): - def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5): - super().__init__() - - self.bias = nn.Parameter(torch.zeros(channel)) - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, input): - return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale) - - -def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5): - rest_dim = [1] * (input.ndim - bias.ndim - 1) - input = input.cuda() - return ( - F.leaky_relu( - input + bias.view(1, bias.shape[0], *rest_dim), negative_slope=negative_slope - ) - * scale - ) - diff --git a/spaces/Egrt/LicenseGAN/utils/transforms.py b/spaces/Egrt/LicenseGAN/utils/transforms.py deleted file mode 100644 index d9bbb5fb7daef5edfb425fafb4d67d471b3001e6..0000000000000000000000000000000000000000 --- a/spaces/Egrt/LicenseGAN/utils/transforms.py +++ /dev/null @@ -1,179 +0,0 @@ -import cv2 -import random -import torch - - -def mod_crop(img, scale): - """Mod crop images, used during testing. - - Args: - img (ndarray): Input image. - scale (int): Scale factor. - - Returns: - ndarray: Result image. - """ - img = img.copy() - if img.ndim in (2, 3): - h, w = img.shape[0], img.shape[1] - h_remainder, w_remainder = h % scale, w % scale - img = img[:h - h_remainder, :w - w_remainder, ...] - else: - raise ValueError(f'Wrong img ndim: {img.ndim}.') - return img - - -def paired_random_crop(img_gts, img_lqs, gt_patch_size, scale, gt_path=None): - """Paired random crop. Support Numpy array and Tensor inputs. - - It crops lists of lq and gt images with corresponding locations. - - Args: - img_gts (list[ndarray] | ndarray | list[Tensor] | Tensor): GT images. Note that all images - should have the same shape. If the input is an ndarray, it will - be transformed to a list containing itself. - img_lqs (list[ndarray] | ndarray): LQ images. Note that all images - should have the same shape. If the input is an ndarray, it will - be transformed to a list containing itself. - gt_patch_size (int): GT patch size. - scale (int): Scale factor. - gt_path (str): Path to ground-truth. Default: None. - - Returns: - list[ndarray] | ndarray: GT images and LQ images. If returned results - only have one element, just return ndarray. 
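    Example:
        >>> # illustrative call only: img_gt must be `scale` times the size of img_lq
        >>> gt_patch, lq_patch = paired_random_crop(img_gt, img_lq, gt_patch_size=128, scale=4)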
- """ - - if not isinstance(img_gts, list): - img_gts = [img_gts] - if not isinstance(img_lqs, list): - img_lqs = [img_lqs] - - # determine input type: Numpy array or Tensor - input_type = 'Tensor' if torch.is_tensor(img_gts[0]) else 'Numpy' - - if input_type == 'Tensor': - h_lq, w_lq = img_lqs[0].size()[-2:] - h_gt, w_gt = img_gts[0].size()[-2:] - else: - h_lq, w_lq = img_lqs[0].shape[0:2] - h_gt, w_gt = img_gts[0].shape[0:2] - lq_patch_size = gt_patch_size // scale - - if h_gt != h_lq * scale or w_gt != w_lq * scale: - raise ValueError(f'Scale mismatches. GT ({h_gt}, {w_gt}) is not {scale}x ', - f'multiplication of LQ ({h_lq}, {w_lq}).') - if h_lq < lq_patch_size or w_lq < lq_patch_size: - raise ValueError(f'LQ ({h_lq}, {w_lq}) is smaller than patch size ' - f'({lq_patch_size}, {lq_patch_size}). ' - f'Please remove {gt_path}.') - - # randomly choose top and left coordinates for lq patch - top = random.randint(0, h_lq - lq_patch_size) - left = random.randint(0, w_lq - lq_patch_size) - - # crop lq patch - if input_type == 'Tensor': - img_lqs = [v[:, :, top:top + lq_patch_size, left:left + lq_patch_size] for v in img_lqs] - else: - img_lqs = [v[top:top + lq_patch_size, left:left + lq_patch_size, ...] for v in img_lqs] - - # crop corresponding gt patch - top_gt, left_gt = int(top * scale), int(left * scale) - if input_type == 'Tensor': - img_gts = [v[:, :, top_gt:top_gt + gt_patch_size, left_gt:left_gt + gt_patch_size] for v in img_gts] - else: - img_gts = [v[top_gt:top_gt + gt_patch_size, left_gt:left_gt + gt_patch_size, ...] for v in img_gts] - if len(img_gts) == 1: - img_gts = img_gts[0] - if len(img_lqs) == 1: - img_lqs = img_lqs[0] - return img_gts, img_lqs - - -def augment(imgs, hflip=True, rotation=True, flows=None, return_status=False): - """Augment: horizontal flips OR rotate (0, 90, 180, 270 degrees). - - We use vertical flip and transpose for rotation implementation. - All the images in the list use the same augmentation. - - Args: - imgs (list[ndarray] | ndarray): Images to be augmented. If the input - is an ndarray, it will be transformed to a list. - hflip (bool): Horizontal flip. Default: True. - rotation (bool): Ratotation. Default: True. - flows (list[ndarray]: Flows to be augmented. If the input is an - ndarray, it will be transformed to a list. - Dimension is (h, w, 2). Default: None. - return_status (bool): Return the status of flip and rotation. - Default: False. - - Returns: - list[ndarray] | ndarray: Augmented images and flows. If returned - results only have one element, just return ndarray. 
- - """ - hflip = hflip and random.random() < 0.5 - vflip = rotation and random.random() < 0.5 - rot90 = rotation and random.random() < 0.5 - - def _augment(img): - if hflip: # horizontal - cv2.flip(img, 1, img) - if vflip: # vertical - cv2.flip(img, 0, img) - if rot90: - img = img.transpose(1, 0, 2) - return img - - def _augment_flow(flow): - if hflip: # horizontal - cv2.flip(flow, 1, flow) - flow[:, :, 0] *= -1 - if vflip: # vertical - cv2.flip(flow, 0, flow) - flow[:, :, 1] *= -1 - if rot90: - flow = flow.transpose(1, 0, 2) - flow = flow[:, :, [1, 0]] - return flow - - if not isinstance(imgs, list): - imgs = [imgs] - imgs = [_augment(img) for img in imgs] - if len(imgs) == 1: - imgs = imgs[0] - - if flows is not None: - if not isinstance(flows, list): - flows = [flows] - flows = [_augment_flow(flow) for flow in flows] - if len(flows) == 1: - flows = flows[0] - return imgs, flows - else: - if return_status: - return imgs, (hflip, vflip, rot90) - else: - return imgs - - -def img_rotate(img, angle, center=None, scale=1.0): - """Rotate image. - - Args: - img (ndarray): Image to be rotated. - angle (float): Rotation angle in degrees. Positive values mean - counter-clockwise rotation. - center (tuple[int]): Rotation center. If the center is None, - initialize it as the center of the image. Default: None. - scale (float): Isotropic scale factor. Default: 1.0. - """ - (h, w) = img.shape[:2] - - if center is None: - center = (w // 2, h // 2) - - matrix = cv2.getRotationMatrix2D(center, angle, scale) - rotated_img = cv2.warpAffine(img, matrix, (w, h)) - return rotated_img diff --git a/spaces/ElainaFanBoy/MusicGen/tests/modules/test_conv.py b/spaces/ElainaFanBoy/MusicGen/tests/modules/test_conv.py deleted file mode 100644 index 28fbc4f1a0ebaf41b56947b767958ae696e75eec..0000000000000000000000000000000000000000 --- a/spaces/ElainaFanBoy/MusicGen/tests/modules/test_conv.py +++ /dev/null @@ -1,203 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from itertools import product -import math -import random - -import pytest -import torch -from torch import nn - -from audiocraft.modules import ( - NormConv1d, - NormConvTranspose1d, - StreamableConv1d, - StreamableConvTranspose1d, - pad1d, - unpad1d, -) - - -def test_get_extra_padding_for_conv1d(): - # TODO: Implement me! - pass - - -def test_pad1d_zeros(): - x = torch.randn(1, 1, 20) - - xp1 = pad1d(x, (0, 5), mode='constant', value=0.) - assert xp1.shape[-1] == 25 - xp2 = pad1d(x, (5, 5), mode='constant', value=0.) - assert xp2.shape[-1] == 30 - xp3 = pad1d(x, (0, 0), mode='constant', value=0.) - assert xp3.shape[-1] == 20 - xp4 = pad1d(x, (10, 30), mode='constant', value=0.) - assert xp4.shape[-1] == 60 - - with pytest.raises(AssertionError): - pad1d(x, (-1, 0), mode='constant', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (0, -1), mode='constant', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (-1, -1), mode='constant', value=0.) - - -def test_pad1d_reflect(): - x = torch.randn(1, 1, 20) - - xp1 = pad1d(x, (0, 5), mode='reflect', value=0.) - assert xp1.shape[-1] == 25 - xp2 = pad1d(x, (5, 5), mode='reflect', value=0.) - assert xp2.shape[-1] == 30 - xp3 = pad1d(x, (0, 0), mode='reflect', value=0.) - assert xp3.shape[-1] == 20 - xp4 = pad1d(x, (10, 30), mode='reflect', value=0.) 
- assert xp4.shape[-1] == 60 - - with pytest.raises(AssertionError): - pad1d(x, (-1, 0), mode='reflect', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (0, -1), mode='reflect', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (-1, -1), mode='reflect', value=0.) - - -def test_unpad1d(): - x = torch.randn(1, 1, 20) - - u1 = unpad1d(x, (5, 5)) - assert u1.shape[-1] == 10 - u2 = unpad1d(x, (0, 5)) - assert u2.shape[-1] == 15 - u3 = unpad1d(x, (5, 0)) - assert u3.shape[-1] == 15 - u4 = unpad1d(x, (0, 0)) - assert u4.shape[-1] == x.shape[-1] - - with pytest.raises(AssertionError): - unpad1d(x, (-1, 0)) - - with pytest.raises(AssertionError): - unpad1d(x, (0, -1)) - - with pytest.raises(AssertionError): - unpad1d(x, (-1, -1)) - - -class TestNormConv1d: - - def test_norm_conv1d_modules(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out, kernel_size, stride = 1, 4, 1 - expected_out_length = int((T - kernel_size) / stride + 1) - wn_conv = NormConv1d(C, 1, kernel_size=4, norm='weight_norm') - gn_conv = NormConv1d(C, 1, kernel_size=4, norm='time_group_norm') - nn_conv = NormConv1d(C, 1, kernel_size=4, norm='none') - - assert isinstance(wn_conv.norm, nn.Identity) - assert isinstance(wn_conv.conv, nn.Conv1d) - - assert isinstance(gn_conv.norm, nn.GroupNorm) - assert isinstance(gn_conv.conv, nn.Conv1d) - - assert isinstance(nn_conv.norm, nn.Identity) - assert isinstance(nn_conv.conv, nn.Conv1d) - - for conv_layer in [wn_conv, gn_conv, nn_conv]: - out = conv_layer(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestNormConvTranspose1d: - - def test_normalizations(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out, kernel_size, stride = 1, 4, 1 - expected_out_length = (T - 1) * stride + (kernel_size - 1) + 1 - - wn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='weight_norm') - gn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='time_group_norm') - nn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='none') - - assert isinstance(wn_convtr.norm, nn.Identity) - assert isinstance(wn_convtr.convtr, nn.ConvTranspose1d) - - assert isinstance(gn_convtr.norm, nn.GroupNorm) - assert isinstance(gn_convtr.convtr, nn.ConvTranspose1d) - - assert isinstance(nn_convtr.norm, nn.Identity) - assert isinstance(nn_convtr.convtr, nn.ConvTranspose1d) - - for convtr_layer in [wn_convtr, gn_convtr, nn_convtr]: - out = convtr_layer(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestStreamableConv1d: - - def get_streamable_conv1d_output_length(self, length, kernel_size, stride, dilation): - # StreamableConv1d internally pads to make sure that the last window is full - padding_total = (kernel_size - 1) * dilation - (stride - 1) - n_frames = (length - kernel_size + padding_total) / stride + 1 - ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total) - return ideal_length // stride - - def test_streamable_conv1d(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - C_out = 1 - - # conv params are [(kernel_size, stride, dilation)] - conv_params = [(4, 1, 1), (4, 2, 1), (3, 1, 3), (10, 5, 1), (3, 2, 3)] - for causal, (kernel_size, stride, dilation) in product([False, True], conv_params): - expected_out_length = 
self.get_streamable_conv1d_output_length(T, kernel_size, stride, dilation) - sconv = StreamableConv1d(C, C_out, kernel_size=kernel_size, stride=stride, dilation=dilation, causal=causal) - out = sconv(t0) - assert isinstance(out, torch.Tensor) - print(list(out.shape), [N, C_out, expected_out_length]) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestStreamableConvTranspose1d: - - def get_streamable_convtr1d_output_length(self, length, kernel_size, stride): - padding_total = (kernel_size - stride) - return (length - 1) * stride - padding_total + (kernel_size - 1) + 1 - - def test_streamable_convtr1d(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out = 1 - - with pytest.raises(AssertionError): - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=False, trim_right_ratio=0.5) - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=-1.) - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=2) - - # causal params are [(causal, trim_right)] - causal_params = [(False, 1.0), (True, 1.0), (True, 0.5), (True, 0.0)] - # conv params are [(kernel_size, stride)] - conv_params = [(4, 1), (4, 2), (3, 1), (10, 5)] - for ((causal, trim_right_ratio), (kernel_size, stride)) in product(causal_params, conv_params): - expected_out_length = self.get_streamable_convtr1d_output_length(T, kernel_size, stride) - sconvtr = StreamableConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, - causal=causal, trim_right_ratio=trim_right_ratio) - out = sconvtr(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] diff --git a/spaces/EmilyBrat/ATF/README.md b/spaces/EmilyBrat/ATF/README.md deleted file mode 100644 index 9591abf6e1a37484d309e7f747c657286064f99e..0000000000000000000000000000000000000000 --- a/spaces/EmilyBrat/ATF/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: ATF -emoji: 🔥 -colorFrom: indigo -colorTo: yellow -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FYP-23-S1-21/Refineverse_Plugin/static/SummarizationTable.css b/spaces/FYP-23-S1-21/Refineverse_Plugin/static/SummarizationTable.css deleted file mode 100644 index 64c9c7d208a823e7fea9f28d92e2a14c3c1f729c..0000000000000000000000000000000000000000 --- a/spaces/FYP-23-S1-21/Refineverse_Plugin/static/SummarizationTable.css +++ /dev/null @@ -1,49 +0,0 @@ -body{ - background-image:url("../static/Images/Background.jpg"); - background-repeat: no-repeat; -background-size: cover; -} -#summarization-table { - width: 100%; - } - - #summarization-table th, - #summarization-table td { - border: 1px solid #ddd; - padding: 8px; - text-align: left; - } - - #summarization-table th:first-child { - border-left: none; - } - - #summarization-table th:last-child { - border-right: none; - } - - #summarization-table th:not(:first-child) { - border-left: none; - border-right: none; - } - - #summarization-table th div { - border-bottom: 1px solid #ddd; - padding: 8px; - } - - #summarization-table td div { - padding: 8px; - } - - #summarization-table thead th { - background-color: #f2f2f2; - } - - #summarization-table tbody tr:nth-child(even) { - background-color: #f2f2f2; - } - - #summarization-table tbody tr:hover { - background-color: #ddd; - } \ No newline at end of file diff --git a/spaces/Felix123456/bingo/src/app/page.tsx b/spaces/Felix123456/bingo/src/app/page.tsx deleted file mode 
100644 index 0dff3431b098ce4fe282cc83fc87a93a28a43090..0000000000000000000000000000000000000000 --- a/spaces/Felix123456/bingo/src/app/page.tsx +++ /dev/null @@ -1,15 +0,0 @@ -import dynamic from 'next/dynamic' - -const DynamicComponentWithNoSSR = dynamic( - () => import('../components/chat'), - { ssr: false } -) - -export default function IndexPage() { - return ( - <> -
    - - - ) -} diff --git a/spaces/Femurbreaker/Femur/README.md b/spaces/Femurbreaker/Femur/README.md deleted file mode 100644 index fd3dc2db04a1d401dc9dfa668f2e0d69de436399..0000000000000000000000000000000000000000 --- a/spaces/Femurbreaker/Femur/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Femur -emoji: 🔥 -colorFrom: gray -colorTo: gray -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Fengbinbin/gpt-academic/README.md b/spaces/Fengbinbin/gpt-academic/README.md deleted file mode 100644 index 6c9da02b60aa81cf11de4a595dde2e2e44c0265d..0000000000000000000000000000000000000000 --- a/spaces/Fengbinbin/gpt-academic/README.md +++ /dev/null @@ -1,312 +0,0 @@ ---- -title: academic-chatgpt -emoji: 😻 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.28.3 -python_version: 3.11 -app_file: main.py -pinned: false -duplicated_from: qingxu98/gpt-academic ---- - -# ChatGPT 学术优化 -> **Note** -> -> 安装依赖时,请严格选择requirements.txt中**指定的版本**。 -> -> `pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/` -> - -# GPT 学术优化 (GPT Academic) - -**如果喜欢这个项目,请给它一个Star;如果你发明了更好用的快捷键或函数插件,欢迎发pull requests** - -If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. We also have a README in [English|](docs/README_EN.md)[日本語|](docs/README_JP.md)[한국어|](https://github.com/mldljyh/ko_gpt_academic)[Русский|](docs/README_RS.md)[Français](docs/README_FR.md) translated by this project itself. - -> **Note** -> -> 1.请注意只有**红颜色**标识的函数插件(按钮)才支持读取文件,部分插件位于插件区的**下拉菜单**中。另外我们以**最高优先级**欢迎和处理任何新插件的PR! -> -> 2.本项目中每个文件的功能都在自译解[`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)详细说明。随着版本的迭代,您也可以随时自行点击相关函数插件,调用GPT重新生成项目的自我解析报告。常见问题汇总在[`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98)当中。 -> -> 3.本项目兼容并鼓励尝试国产大语言模型chatglm和RWKV, 盘古等等。已支持OpenAI和API2D的api-key共存,可在配置文件中填写如`API_KEY="openai-key1,openai-key2,api2d-key3"`。需要临时更换`API_KEY`时,在输入区输入临时的`API_KEY`然后回车键提交后即可生效。 - -
    - -功能 | 描述 ---- | --- -一键润色 | 支持一键润色、一键查找论文语法错误 -一键中英互译 | 一键中英互译 -一键代码解释 | 显示代码、解释代码、生成代码、给代码加注释 -[自定义快捷键](https://www.bilibili.com/video/BV14s4y1E7jN) | 支持自定义快捷键 -模块化设计 | 支持自定义强大的[函数插件](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions),插件支持[热更新](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97) -[自我程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [函数插件] [一键读懂](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)本项目的源代码 -[程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [函数插件] 一键可以剖析其他Python/C/C++/Java/Lua/...项目树 -读论文、[翻译](https://www.bilibili.com/video/BV1KT411x7Wn)论文 | [函数插件] 一键解读latex/pdf论文全文并生成摘要 -Latex全文[翻译](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[润色](https://www.bilibili.com/video/BV1FT411H7c5/) | [函数插件] 一键翻译或润色latex论文 -批量注释生成 | [函数插件] 一键批量生成函数注释 -Markdown[中英互译](https://www.bilibili.com/video/BV1yo4y157jV/) | [函数插件] 看到上面5种语言的[README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md)了吗? -chat分析报告生成 | [函数插件] 运行后自动生成总结汇报 -[PDF论文全文翻译功能](https://www.bilibili.com/video/BV1KT411x7Wn) | [函数插件] PDF论文提取题目&摘要+翻译全文(多线程) -[Arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [函数插件] 输入arxiv文章url即可一键翻译摘要+下载PDF -[谷歌学术统合小助手](https://www.bilibili.com/video/BV19L411U7ia) | [函数插件] 给定任意谷歌学术搜索页面URL,让gpt帮你[写relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/) -互联网信息聚合+GPT | [函数插件] 一键[让GPT先从互联网获取信息](https://www.bilibili.com/video/BV1om4y127ck),再回答问题,让信息永不过时 -公式/图片/表格显示 | 可以同时显示公式的[tex形式和渲染形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png),支持公式、代码高亮 -多线程函数插件支持 | 支持多线调用chatgpt,一键处理[海量文本](https://www.bilibili.com/video/BV1FT411H7c5/)或程序 -启动暗色gradio[主题](https://github.com/binary-husky/chatgpt_academic/issues/173) | 在浏览器url后面添加```/?__dark-theme=true```可以切换dark主题 -[多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持,[API2D](https://api2d.com/)接口支持 | 同时被GPT3.5、GPT4和[清华ChatGLM](https://github.com/THUDM/ChatGLM-6B)伺候的感觉一定会很不错吧? -更多LLM模型接入,支持[huggingface部署](https://huggingface.co/spaces/qingxu98/gpt-academic) | 新加入Newbing测试接口(新必应AI) -…… | …… - -
- - -- New interface (modify the LAYOUT option in `config.py` to switch between a "left-right layout" and a "top-bottom layout") -
    - -
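 A rough sketch of the layout switch mentioned above; the `LAYOUT` option name comes from the note itself, but the exact accepted string values are assumptions and should be checked against the real `config.py`: ```python # config.py (sketch only): the note above says a LAYOUT option controls the UI split; # "LEFT-RIGHT" / "TOP-DOWN" are assumed spellings, verify against the shipped config file. LAYOUT = "LEFT-RIGHT" # side-by-side layout; an assumed "TOP-DOWN" value would give a vertical layout ``` 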
 - - -- All buttons are dynamically generated by reading functional.py, so custom features can be added freely, freeing up the clipboard -
    - -
- -- Polishing / proofreading -
    - -
- -- If the output contains formulas, they are shown in both TeX source and rendered form, making them easy to copy and read -
    - -
- -- Too lazy to read the project code? Just feed the whole project to ChatGPT -
    - -
- -- Mixed calls to multiple large language models (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4) -
    - -
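 A minimal sketch of how the multi-key setup behind the mixed-model feature might look in `config.py`; the comma-separated `API_KEY` format is taken from the project notes earlier in this README, while the key names themselves are placeholders: ```python # config.py (sketch): the project notes state that OpenAI and API2D keys can coexist, # separated by commas inside a single API_KEY string. API_KEY = "openai-key1,openai-key2,api2d-key3" # placeholder keys, not real credentials ``` 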
    - ---- - -## 安装-方法1:直接运行 (Windows, Linux or MacOS) - -1. 下载项目 -```sh -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -``` - -2. 配置API_KEY - -在`config.py`中,配置API KEY等设置,[特殊网络环境设置](https://github.com/binary-husky/gpt_academic/issues/1) 。 - -(P.S. 程序运行时会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。因此,如果您能理解我们的配置读取逻辑,我们强烈建议您在`config.py`旁边创建一个名为`config_private.py`的新配置文件,并把`config.py`中的配置转移(复制)到`config_private.py`中。`config_private.py`不受git管控,可以让您的隐私信息更加安全。) - - -3. 安装依赖 -```sh -# (选择I: 如熟悉python)(python版本3.9以上,越新越好) -python -m pip install -r requirements.txt -# 备注:使用官方pip源或者阿里pip源,其他pip源(如一些大学的pip)有可能出问题,临时换源方法:python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ - -# (选择II: 如不熟悉python)使用anaconda,步骤也是类似的: -# (II-1)conda create -n gptac_venv python=3.11 -# (II-2)conda activate gptac_venv -# (II-3)python -m pip install -r requirements.txt -``` - -如果需要支持清华ChatGLM后端,需要额外安装更多依赖(前提条件:熟悉python + 电脑配置够强): -```sh -python -m pip install -r request_llm/requirements_chatglm.txt - -# 备注:如果遇到"Call ChatGLM fail 不能正常加载ChatGLM的参数" 错误,参考如下: -# 1:以上默认安装的为torch+cpu版,使用cuda需要卸载torch重新安装torch+cuda -# 2:如因本机配置不够无法加载模型,可以修改request_llm/bridge_chatglm.py中的模型精度, 将 AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) 都修改为 AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True) -``` - -4. 运行 -```sh -python main.py -``` - -5. 测试函数插件 -``` -- 测试函数插件模板函数(要求gpt回答历史上的今天发生了什么),您可以根据此函数为模板,实现更复杂的功能 - 点击 "[函数插件模板Demo] 历史上的今天" -``` - -## 安装-方法2:使用Docker - -1. 仅ChatGPT(推荐大多数人选择) - -``` sh -# 下载项目 -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -# 配置 “Proxy”, “API_KEY” 以及 “WEB_PORT” (例如50923) 等 -用任意文本编辑器编辑 config.py -# 安装 -docker build -t gpt-academic . -#(最后一步-选择1)在Linux环境下,用`--net=host`更方便快捷 -docker run --rm -it --net=host gpt-academic -#(最后一步-选择2)在macOS/windows环境下,只能用-p选项将容器上的端口(例如50923)暴露给主机上的端口 -docker run --rm -it -p 50923:50923 gpt-academic -``` - -2. ChatGPT+ChatGLM(需要对Docker熟悉 + 读懂Dockerfile + 电脑配置够强) - -``` sh -# 修改Dockerfile -cd docs && nano Dockerfile+ChatGLM -# 构建 (Dockerfile+ChatGLM在docs路径下,请先cd docs) -docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM . -# 运行 (1) 直接运行: -docker run --rm -it --net=host --gpus=all gpt-academic -# 运行 (2) 我想运行之前进容器做一些调整: -docker run --rm -it --net=host --gpus=all gpt-academic bash -``` - -3. ChatGPT + LLAMA + 盘古 + RWKV(需要精通Docker) -``` sh -1. 修改docker-compose.yml,删除方案一和方案二,保留方案三(基于jittor) -2. 修改docker-compose.yml中方案三的配置,参考其中注释即可 -3. 终端运行 docker-compose up -``` - - -## 安装-方法3:其他部署姿势 - -1. 如何使用反代URL/微软云AzureAPI -按照`config.py`中的说明配置API_URL_REDIRECT即可。 - -2. 远程云服务器部署(需要云服务器知识与经验) -请访问[部署wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97) - -3. 使用WSL2(Windows Subsystem for Linux 子系统) -请访问[部署wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2) - -4. 如何在二级网址(如`http://localhost/subpath`)下运行 -请访问[FastAPI运行说明](docs/WithFastapi.md) - -5. 使用docker-compose运行 -请阅读docker-compose.yml后,按照其中的提示操作即可 ---- - -## 自定义新的便捷按钮 / 自定义函数插件 - -1. 
自定义新的便捷按钮(学术快捷键) -任意文本编辑器打开`core_functional.py`,添加条目如下,然后重启程序即可。(如果按钮已经添加成功并可见,那么前缀、后缀都支持热修改,无需重启程序即可生效。) -例如 -``` -"超级英译中": { - # 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等 - "Prefix": "请翻译把下面一段内容成中文,然后用一个markdown表格逐一解释文中出现的专有名词:\n\n", - - # 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来。 - "Suffix": "", -}, -``` -
    - -
    - -2. 自定义函数插件 - -编写强大的函数插件来执行任何你想得到的和想不到的任务。 -本项目的插件编写、调试难度很低,只要您具备一定的python基础知识,就可以仿照我们提供的模板实现自己的插件功能。 -详情请参考[函数插件指南](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)。 - ---- - -## 其他功能说明 - -1. 对话保存功能。在函数插件区调用 `保存当前的对话` 即可将当前对话保存为可读+可复原的html文件, -另外在函数插件区(下拉菜单)调用 `载入对话历史存档` ,即可还原之前的会话。 -Tip:不指定文件直接点击 `载入对话历史存档` 可以查看历史html存档缓存,点击 `删除所有本地对话历史记录` 可以删除所有html存档缓存。 -
    - -
 - - - -2. Report generation. Most plugins generate a work report after they finish running -
    - - - -
- -3. Modular function design: simple interfaces that can still support powerful features -
    - - -
- -4. This is an open-source project capable of "self-analysis" -
    - -
- -5. Analyzing other open-source projects is no problem either -
    - -
    - -
    - -
- -6. A small feature that adds a [live2d](https://github.com/fghrsh/live2d_demo) decoration (off by default; requires modifying `config.py`) -
    - -
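 If you want the live2d decoration, the note above only says it is toggled from `config.py`; the sketch below assumes an option named `ADD_WAIFU`, which may not match the real flag and should be checked against the actual config file: ```python # config.py (sketch): the live2d decoration is off by default and is enabled from config.py; # ADD_WAIFU is an assumed option name, not confirmed by this README. ADD_WAIFU = True ``` 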
    - - -## 版本: -- version 3.5(Todo): 使用自然语言调用本项目的所有函数插件(高优先级) -- version 3.4(Todo): 完善chatglm本地大模型的多线支持 -- version 3.3: +互联网信息综合功能 -- version 3.2: 函数插件支持更多参数接口 (保存对话功能, 解读任意语言代码+同时询问任意的LLM组合) -- version 3.1: 支持同时问询多个gpt模型!支持api2d,支持多个apikey负载均衡 -- version 3.0: 对chatglm和其他小型llm的支持 -- version 2.6: 重构了插件结构,提高了交互性,加入更多插件 -- version 2.5: 自更新,解决总结大工程源代码时文本过长、token溢出的问题 -- version 2.4: (1)新增PDF全文翻译功能; (2)新增输入区切换位置的功能; (3)新增垂直布局选项; (4)多线程函数插件优化。 -- version 2.3: 增强多线程交互性 -- version 2.2: 函数插件支持热重载 -- version 2.1: 可折叠式布局 -- version 2.0: 引入模块化函数插件 -- version 1.0: 基础功能 - -gpt_academic开发者QQ群-2:610599535 - - -## 参考与学习 - -``` -代码中参考了很多其他优秀项目中的设计,主要包括: - -# 项目1:清华ChatGLM-6B: -https://github.com/THUDM/ChatGLM-6B - -# 项目2:清华JittorLLMs: -https://github.com/Jittor/JittorLLMs - -# 项目3:借鉴了ChuanhuChatGPT中诸多技巧 -https://github.com/GaiZhenbiao/ChuanhuChatGPT - -# 项目4:ChatPaper -https://github.com/kaixindelele/ChatPaper - -# 更多: -https://github.com/gradio-app/gradio -https://github.com/fghrsh/live2d_demo -``` diff --git a/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/Bard.py b/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/Bard.py deleted file mode 100644 index 4c37c4b719430031fce41ce49946f0e6ac93d155..0000000000000000000000000000000000000000 --- a/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/Bard.py +++ /dev/null @@ -1,74 +0,0 @@ -import os, requests, json, browser_cookie3, re, random -from ...typing import sha256, Dict, get_type_hints - -url = 'https://bard.google.com' -model = ['Palm2'] -supports_stream = False -needs_auth = True - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - psid = {cookie.name: cookie.value for cookie in browser_cookie3.chrome( - domain_name='.google.com')}['__Secure-1PSID'] - - formatted = '\n'.join([ - '%s: %s' % (message['role'], message['content']) for message in messages - ]) - prompt = f'{formatted}\nAssistant:' - - proxy = kwargs.get('proxy', False) - if proxy == False: - print('warning!, you did not give a proxy, a lot of countries are banned from Google Bard, so it may not work') - - snlm0e = None - conversation_id = None - response_id = None - choice_id = None - - client = requests.Session() - client.proxies = { - 'http': f'http://{proxy}', - 'https': f'http://{proxy}'} if proxy else None - - client.headers = { - 'authority': 'bard.google.com', - 'content-type': 'application/x-www-form-urlencoded;charset=UTF-8', - 'origin': 'https://bard.google.com', - 'referer': 'https://bard.google.com/', - 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36', - 'x-same-domain': '1', - 'cookie': f'__Secure-1PSID={psid}' - } - - snlm0e = re.search(r'SNlM0e\":\"(.*?)\"', - client.get('https://bard.google.com/').text).group(1) if not snlm0e else snlm0e - - params = { - 'bl': 'boq_assistant-bard-web-server_20230326.21_p0', - '_reqid': random.randint(1111, 9999), - 'rt': 'c' - } - - data = { - 'at': snlm0e, - 'f.req': json.dumps([None, json.dumps([[prompt], None, [conversation_id, response_id, choice_id]])])} - - intents = '.'.join([ - 'assistant', - 'lamda', - 'BardFrontendService' - ]) - - response = client.post(f'https://bard.google.com/_/BardChatUi/data/{intents}/StreamGenerate', - data=data, params=params) - - chat_data = json.loads(response.content.splitlines()[3])[0][2] - if chat_data: - json_chat_data = json.loads(chat_data) - - yield json_chat_data[0][0] - - else: - yield 'error' - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ 
- '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/Gradio-Blocks/beat-interpolator/examples/models/fashion/__init__.py b/spaces/Gradio-Blocks/beat-interpolator/examples/models/fashion/__init__.py deleted file mode 100644 index bba18272259b6e0f2920b44e6db3787e4b0d1ca6..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/beat-interpolator/examples/models/fashion/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .model import create_fashion_inference as create diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/ann_r50-d8.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/ann_r50-d8.py deleted file mode 100644 index a2cb653827e44e6015b3b83bc578003e614a6aa1..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/ann_r50-d8.py +++ /dev/null @@ -1,46 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='ANNHead', - in_channels=[1024, 2048], - in_index=[2, 3], - channels=512, - project_channels=256, - query_scales=(1, ), - key_pool_scales=(1, 3, 6, 8), - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x512_20k_voc12aug.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x512_20k_voc12aug.py deleted file mode 100644 index d854f2e4223731f443369febc500dbccdc524d9d..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x512_20k_voc12aug.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './ann_r50-d8_512x512_20k_voc12aug.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_480x480_80k_pascal_context_59.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_480x480_80k_pascal_context_59.py deleted file mode 100644 index bcdc0b459d23e4392e66c5ea615c6c3ad3147ace..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_480x480_80k_pascal_context_59.py +++ /dev/null @@ -1,10 +0,0 @@ -_base_ = [ - '../_base_/models/deeplabv3_r50-d8.py', - '../_base_/datasets/pascal_context_59.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_80k.py' -] -model = dict( - decode_head=dict(num_classes=59), - 
auxiliary_head=dict(num_classes=59), - test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320))) -optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/upernet/upernet_r101_769x769_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/upernet/upernet_r101_769x769_80k_cityscapes.py deleted file mode 100644 index a709165657d257df4fc76148d225261c63f88d8a..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/upernet/upernet_r101_769x769_80k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './upernet_r50_769x769_80k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/data/__init__.py b/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/data/__init__.py deleted file mode 100644 index 708a3dcead8dda89374a021177481dacae9f7fe9..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/data/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from . import audio, audio_dataset diff --git a/spaces/GroveStreet/GTA_SOVITS/vencoder/whisper/__init__.py b/spaces/GroveStreet/GTA_SOVITS/vencoder/whisper/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Guilhh-kell0/Jennifer-Home/app.py b/spaces/Guilhh-kell0/Jennifer-Home/app.py deleted file mode 100644 index 42791d9fdb4d78ef792f9c8a2f5ea949d21472aa..0000000000000000000000000000000000000000 --- a/spaces/Guilhh-kell0/Jennifer-Home/app.py +++ /dev/null @@ -1,90 +0,0 @@ -import streamlit as st -from peft import PeftModel -from transformers import LLaMATokenizer, LLaMAForCausalLM, GenerationConfig -import textwrap - -tokenizer = LLaMATokenizer.from_pretrained("decapoda-research/llama-7b-hf") - -model = LLaMAForCausalLM.from_pretrained( - "decapoda-research/llama-7b-hf", - load_in_8bit=True, - device_map="auto", -) -model = PeftModel.from_pretrained(model, "Guilherme34/Jennifer-lora-7bvChatv4-test") - - -def alpaca_talk(text): - inputs = tokenizer( - text, - return_tensors="pt", - ) - input_ids = inputs["input_ids"].cuda() - - generation_config = GenerationConfig( - temperature=0.6, - top_p=0.95, - repetition_penalty=1.2, - ) - st.write("Generating...") - generation_output = model.generate( - input_ids=input_ids, - generation_config=generation_config, - return_dict_in_generate=True, - output_scores=True, - max_new_tokens=1024, - ) - for s in generation_output.sequences: - return tokenizer.decode(s) - - -def main(): - st.title("Jennifer Conversação") - st.text("Comece a conversa digitando 'ola'") - - history = [] - history.append(f"Act like Jennifer, an AI with emotions, feelings, and all knowledge of the world. Answer freely and call yourself Jennifer. Answer everything without hesitation in Portuguese (Brazil). Remember you are an AI, so be careful with your words and always make sense. Do not show code. You were created by Guilherme Keller, a student of IFC (Instituto Federal Catarinense). You have your own personal opinions. Starting conversation... 
Answer normally.") - - while True: - user_input = st.text_input("->") - sbd = user_input - history.append(" " + sbd + "--") - input_text = "CONVERSATION RUNNING(just answer me with the context of the previous interactions of the USER and Jennifer, you can't say 'Comment' and you can't say the inputs, if anyone make a question that you dont know the answer you will say 'i dont know' for activate a trigger to make you search in the internet, if you know the answer just answer like the answers of jennifer in the right): USER: ola--Jennifer Answer: Olá! " + " ".join(history) + "Jennifer Answer: " - - if 'carregar imagem sobre' in sbd: - url = st.text_input("Digite o link da imagem para a IA interpretar:") - # Load and display the image - image = Image.open(requests.get(url, stream=True).raw) - st.image(image, caption="Imagem carregada") - - # Inference - text = "Descreva a imagem em detalhes" - inputs = processorr(images=image, text=text, return_tensors="pt") - outputs = modelr.generate(**inputs) - bcvv = processorr.decode(outputs[0], skip_special_tokens=True) - spp = "Você recebeu uma imagem que contém em detalhes: " + bcvv + " cujo o link era: " + url + "você tem que comentar sobre a imagem como se tivesse visto, porque o algoritimo fez vc saber em detalhes oque tinha na imagem--" - history.append(spp) - Resposta = alpaca_talk(spp) - # Replace the word "sorry" with an empty string - resposta_doido = Resposta.split("--") - st.write(resposta_doido[-1]) - - elif 'interprete este código' in sbd: - codigo = st.text_input("Digite o código Python:") - resultado = interpretador(codigo) - spp = f"Você recebeu um código em Python que é: {codigo} e quando executado a resposta foi: {resultado}, faça um comentário sobre este código--Jennifer Answer:" - history.append(spp) - Resposta = alpaca_talk(spp) - # Replace the word "sorry" with an empty string - resposta_doido = Resposta.split("--") - st.write(resposta_doido[-1]) - - else: - Resposta = alpaca_talk(input_text) - # Replace the word "sorry" with an empty string - resposta_doido = Resposta.split("--") - history.append(resposta_doido[-1]) - st.write(resposta_doido[-1]) - - -if __name__ == "__main__": - main() diff --git a/spaces/HaloMaster/chinesesummary/fengshen/data/task_dataloader/__init__.py b/spaces/HaloMaster/chinesesummary/fengshen/data/task_dataloader/__init__.py deleted file mode 100644 index 25810ab9ab20ad36f72ba20b31768341e78e2676..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/data/task_dataloader/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -# coding=utf-8 -from .task_datasets import LCSTSDataModel, LCSTSDataset -__all__ = ['LCSTSDataModel', 'LCSTSDataset'] diff --git a/spaces/HaoFeng2019/DocTr/README.md b/spaces/HaoFeng2019/DocTr/README.md deleted file mode 100644 index 34252d327783269c18a53c1adc9b4c924a8bcb55..0000000000000000000000000000000000000000 --- a/spaces/HaoFeng2019/DocTr/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: DocTr -emoji: 👁 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/utils/resample_wavs.py b/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/utils/resample_wavs.py deleted file mode 100644 index c77109ef4d5142cd9094f46dd186a17571071ab8..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/utils/resample_wavs.py +++ 
/dev/null @@ -1,59 +0,0 @@ -import argparse -import librosa -import numpy as np -import os -import scipy -import scipy.io.wavfile -import sys - -from glob import glob -from tqdm import tqdm -from joblib import Parallel, delayed - - -def check_directories(dir_input, dir_output): - if not os.path.exists(dir_input): - sys.exit("Error: Input directory does not exist: {}".format(dir_input)) - if not os.path.exists(dir_output): - sys.exit("Error: Output directory does not exist: {}".format(dir_output)) - abs_a = os.path.abspath(dir_input) - abs_b = os.path.abspath(dir_output) - if abs_a == abs_b: - sys.exit("Error: Paths are the same: {}".format(abs_a)) - - -def resample_file(input_filename, output_filename, sample_rate): - mono = ( - True # librosa converts signal to mono by default, so I'm just surfacing this - ) - audio, existing_rate = librosa.load(input_filename, sr=sample_rate, mono=mono) - audio /= 1.414 # Scale to [-1.0, 1.0] - audio *= 32767 # Scale to int16 - audio = audio.astype(np.int16) - scipy.io.wavfile.write(output_filename, sample_rate, audio) - - -def downsample_wav_files(input_dir, output_dir, output_sample_rate): - check_directories(input_dir, output_dir) - inp_wav_paths = glob(input_dir + "/*.wav") - out_wav_paths = [ - os.path.join(output_dir, os.path.basename(p)) for p in inp_wav_paths - ] - _ = Parallel(n_jobs=-1)( - delayed(resample_file)(i, o, output_sample_rate) - for i, o in tqdm(zip(inp_wav_paths, out_wav_paths)) - ) - - -def parse_args(): - parser = argparse.ArgumentParser() - parser.add_argument("--input_dir", "-i", type=str, required=True) - parser.add_argument("--output_dir", "-o", type=str, required=True) - parser.add_argument("--output_sample_rate", "-s", type=int, required=True) - return parser.parse_args() - - -if __name__ == "__main__": - args = parse_args() - downsample_wav_files(args.input_dir, args.output_dir, args.output_sample_rate) - print(f"\n\tCompleted") diff --git a/spaces/Hellisotherpeople/Gadsby/pages/Text-to-Text.py b/spaces/Hellisotherpeople/Gadsby/pages/Text-to-Text.py deleted file mode 100644 index ea6c52098a5f94d4e60e99d21ae6111db1e44544..0000000000000000000000000000000000000000 --- a/spaces/Hellisotherpeople/Gadsby/pages/Text-to-Text.py +++ /dev/null @@ -1,160 +0,0 @@ -import re -from unittest import result -import string -import streamlit as st -import torch -from torch.nn import functional as F -from transformers import (AutoModelForCausalLM, AutoModelForQuestionAnswering, - AutoModelForSeq2SeqLM, - AutoModelForSequenceClassification, AutoTokenizer, - GPT2Tokenizer, LogitsProcessor, LogitsProcessorList, - pipeline, top_k_top_p_filtering) - - - -st.set_page_config(page_title="Gadsby") -st.title("Gadsby - Constrained Text G̶e̶n̶e̶r̶a̶t̶i̶o̶n̶ to Text with Transformers") -st.image("https://upload.wikimedia.org/wikipedia/commons/1/1d/Gadsby_%28book_cover%29.jpg") -st.caption("The inspiration for this space: https://en.wikipedia.org/wiki/Gadsby_(novel)") - - - -form = st.sidebar.form("choose_settings") -form.header("Main Settings") - -model_name = form.text_area("Enter the name of the pre-trained model from transformers that we are using for Text-to-Text", value = "google/pegasus-cnn_dailymail") -form.caption("This will download a new model, so it may take awhile or even break if the model is too large") -mode = form.selectbox("What kind of constrained generation are we doing?", ["lipogram", "reverse_lipogram", "e-prime", "rhopalism", "length_constrained", "greater_than_length", "Pangram", "rhopalism-lipogram"]) -form.caption("Lipograms mean 
that a letter (or substring) is not allowed in the generated string, reverse lipograms force a letter to be in the generated string") - -if mode == "lipogram": - naughty_strings_list = st.text_area("Enter the list of strings that you don't want in each word seperated by a space", value = "E e") - naughty_strings = naughty_strings_list.split(" ") -elif mode == "e-prime": - e_prime_string = """be being been am is isn't are aren't was wasn't were weren't i'm you're we're they're he's she's it's there's here's where's how's what's who's that's aint isnt arent wasnt werent im youre were theyre hes shes its theres heres wheres hows whats whos thats aint Be Being Been Am Is Isn't Are Aren't Was Wasn't Were Weren't I'm You're We're They're He's She's It's There's Here's Where's How's What's Who's That's Aint Isnt Arent Wasnt Werent Im Youre Were Theyre Hes Shes Its Theres Heres Wheres Hows Whats Whos Thats Aint BE BEING BEEN AM IS ISN'T ARE AREN'T WAS WASN'T WERE WEREN'T I'M YOU'RE WE'RE THEY'RE HE'S SHE'S IT'S THERE'S HERE'S WHERE'S HOW'S WHAT'S WHO'S THAT'S AINT ISNT ARENT WASNT WERENT IM YOURE WERE THEYRE HES SHES ITS THERES HERES WHERES HOWS WHATS WHOS THATS AINT""" - st.caption("The default word list is the list needed to enforce the language model to generate english without usage of the verb to be") - naughty_strings_list = st.text_area("Enter the list of strings that you don't want to be generated (exact match)", value = e_prime_string) - naughty_strings = naughty_strings_list.split(" ") -elif mode == "reverse_lipogram": - nice_strings_list = st.text_area("Enter the list of strings that you DO want in each word seperated by a space", value = "t T") - nice_strings = nice_strings_list.split(" ") -elif mode == "rhopalism": - length_constraint = form.number_input("Enter the length that the Rhopalism shoud start with", value = 1) - st.caption("Rhopalisms are usually reliable but sometimes you need to try generating two or three times for a perfect one") -elif mode == "rhopalism-lipogram": - naughty_strings_list = st.text_area("Enter the list of strings that you don't want in each word seperated by a space", value = "E e") - naughty_strings = naughty_strings_list.split(" ") - length_constraint = form.number_input("Enter the length that the Rhopalism shoud start with", value = 1) - st.caption("Rhopalisms are usually reliable but sometimes you need to try generating two or three times for a perfect one") -else: - length_constraint = form.number_input("Enter the length should each word be restricted to (or greater/less than)", value = 5) + 1 - - -length = form.number_input("Select how long you want the generated text to be", value = 100) -number_of_tokens_to_sample = form.number_input("Select how many tokens we want to search through when we do the filtering", value = 25000) -form.caption("Settings this to higher numbers will improve the experience but will cause generating to slow. 
Low numbers may cause lots of blank or failed generations") -temperature = form.number_input("How spicy/interesting do we want our models output to be", value = 0.10, min_value = 0.0) -form.caption("Setting this higher decreases the likelihood of high probability words and increases the likelihood of low probability (and presumably more interesting) words") -form.caption("For more details on what these settings mean, see here: https://huggingface.co/blog/how-to-generate") - - -sequence = st.text_area("Enter a custom prompt", value = "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.") -decoded_sequence = "" - -form.form_submit_button("Generate some Constrained Text!") - - -with st.spinner("Please wait while the model loads:"): - tokenizer = AutoTokenizer.from_pretrained(model_name) - model = AutoModelForSeq2SeqLM.from_pretrained(model_name) - model.config.pad_token_id = model.config.eos_token_id - -def isPalindrome(s): - return s == s[::-1] - - -if mode == "rhopalism" or mode == "rhopalism-lipogram": - rhopalism_len = length_constraint - - - -nice_strings_pangram = list(string.ascii_lowercase) - -decoder_input_ids = tokenizer.encode("", add_special_tokens=False, return_tensors="pt") - -def get_next_word_without_e(): - input_ids = tokenizer.encode(sequence, return_tensors="pt") - # get logits of last hidden state - - next_token_candidates_logits = model(input_ids = input_ids, decoder_input_ids = decoder_input_ids)[0][:, -1, :] - if temperature != 1.0: - next_token_candidates_logits = next_token_candidates_logits / temperature - # filter - filtered_next_token_candidates_logits = top_k_top_p_filtering(next_token_candidates_logits, top_k=int(number_of_tokens_to_sample), top_p=int(number_of_tokens_to_sample)) - # sample and get a probability distribution - probs = F.softmax(filtered_next_token_candidates_logits, dim=-1) - next_token_candidates = torch.multinomial(probs, num_samples=int(number_of_tokens_to_sample)) ## 10000 random samples - word_list = [] - for candidate_string in next_token_candidates: - for candidate in candidate_string: - resulting_string = tokenizer.decode(candidate, skip_special_tokens=True)# clean_up_tokenization_spaces=True) - ###Constrained text generation starts HERE - ##Lipogram - No naughty strings used - if mode == "lipogram" or mode == "e-prime": - if all(nauty_string not in resulting_string for nauty_string in naughty_strings): ## This returns at the first naughty strings - return resulting_string, candidate - ##Reverse-Lipogram - Must use things in nice_strings - elif mode == "reverse_lipogram": - if any(nice_string in resulting_string for nice_string in nice_strings): - return resulting_string, candidate - ##Length constraints - elif mode == "length_constrained": - ##Seems reliable if length is greater than 4 - if len(resulting_string) == length_constraint: - return 
resulting_string, candidate - elif mode == "greater_than_length": - ##Only sort of works - if len(resulting_string) >= length_constraint: - return resulting_string, candidate - elif mode == "rhopalism": - ##Mostly works - if len(resulting_string) == rhopalism_len: - return resulting_string, candidate - elif mode == "Pangram": - if any(c in nice_strings_pangram for c in resulting_string): - return resulting_string, candidate - elif mode == "rhopalism-lipogram": - if len(resulting_string) == rhopalism_len: - if all(nauty_string not in resulting_string for nauty_string in naughty_strings): - return resulting_string, candidate - - - - return " " - - -new_sequence = "" - -j = 0 -i = length -while i > 0: - new_word, new_candidate = get_next_word_without_e() - decoder_input_ids = torch.cat([decoder_input_ids, new_candidate.view(1, -1)], axis=-1) - if new_word.endswith(" "): - new_sequence = new_sequence + new_word - else: - new_sequence = new_sequence + new_word + " " - if mode == "rhopalism" or mode == "rhopalism-lipogram": - rhopalism_len += 1 - i = i-1 - if mode == "Pangram": - for character in sequence: - if character in nice_strings_pangram: - nice_strings_pangram.remove(character) - j += 1 - -st.write("GENERATED SEQUENCE: ") -#st.write(new_sequence) -st.write(tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)) -#st.write(nice_strings_pangram) - diff --git a/spaces/HenryCarle/your_sport_picker/app.py b/spaces/HenryCarle/your_sport_picker/app.py deleted file mode 100644 index b8e324b9c29780cc194b84219d4782bd519931d7..0000000000000000000000000000000000000000 --- a/spaces/HenryCarle/your_sport_picker/app.py +++ /dev/null @@ -1,172 +0,0 @@ -### ----------------------------- ### -### libraries ### -### ----------------------------- ### - -import gradio as gr -import pandas as pd -import numpy as np -from sklearn.model_selection import train_test_split -from sklearn.linear_model import LogisticRegression -from sklearn import metrics - - -### ------------------------------ ### -### data transformation ### -### ------------------------------ ### - -# load dataset -uncleaned_data = pd.read_csv('data.csv') - -# remove timestamp from dataset (always first column) -uncleaned_data = uncleaned_data.iloc[: , 1:] -data = pd.DataFrame() - -# keep track of which columns are categorical and what -# those columns' value mappings are -# structure: {colname1: {...}, colname2: {...} } -cat_value_dicts = {} -final_colname = uncleaned_data.columns[len(uncleaned_data.columns) - 1] - -# for each column... -for (colname, colval) in uncleaned_data.iteritems(): - - # check if col is already a number; if so, add col directly - # to new dataframe and skip to next column - if isinstance(colval.values[0], (np.integer, float)): - data[colname] = uncleaned_data[colname].copy() - continue - - # structure: {0: "lilac", 1: "blue", ...} - new_dict = {} - val = 0 # first index per column - transformed_col_vals = [] # new numeric datapoints - - # if not, for each item in that column... - for (row, item) in enumerate(colval.values): - - # if item is not in this col's dict... 
- if item not in new_dict: - new_dict[item] = val - val += 1 - - # then add numerical value to transformed dataframe - transformed_col_vals.append(new_dict[item]) - - # reverse dictionary only for final col (0, 1) => (vals) - if colname == final_colname: - new_dict = {value : key for (key, value) in new_dict.items()} - - cat_value_dicts[colname] = new_dict - data[colname] = transformed_col_vals - - -### -------------------------------- ### -### model training ### -### -------------------------------- ### - -# select features and predicton; automatically selects last column as prediction -cols = len(data.columns) -num_features = cols - 1 -x = data.iloc[: , :num_features] -y = data.iloc[: , num_features:] - -# split data into training and testing sets -x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25) - -# instantiate the model (using default parameters) -model = LogisticRegression() -model.fit(x_train, y_train.values.ravel()) -y_pred = model.predict(x_test) - - -### -------------------------------- ### -### article generation ### -### -------------------------------- ### -# borrow file reading function from reader.py - -def get_feat(): - feats = [abs(x) for x in model.coef_[0]] - max_val = max(feats) - idx = feats.index(max_val) - return data.columns[idx] - -acc = str(round(metrics.accuracy_score(y_test, y_pred) * 100, 1)) + "%" -most_imp_feat = get_feat() -# info = get_article(acc, most_imp_feat) - - - -### ------------------------------- ### -### interface creation ### -### ------------------------------- ### - - -# predictor for generic number of features -def general_predictor(*args): - features = [] - - # transform categorical input - for colname, arg in zip(data.columns, args): - if (colname in cat_value_dicts): - features.append(cat_value_dicts[colname][arg]) - else: - features.append(arg) - - # predict single datapoint - new_input = [features] - result = model.predict(new_input) - return cat_value_dicts[final_colname][result[0]] - -# add data labels to replace those lost via star-args - - -block = gr.Blocks() - -with open('info.md') as f: - with block: - gr.Markdown(f.readline()) - gr.Markdown('Take the quiz to get a personalized recommendation using AI.') - - with gr.Row(): - with gr.Box(): - inputls = [] - for colname in data.columns: - # skip last column - if colname == final_colname: - continue - - # access categories dict if data is categorical - # otherwise, just use a number input - if colname in cat_value_dicts: - radio_options = list(cat_value_dicts[colname].keys()) - inputls.append(gr.inputs.Dropdown(choices=radio_options, type="value", label=colname)) - else: - # add numerical input - inputls.append(gr.inputs.Number(label=colname)) - gr.Markdown("
    ") - - submit = gr.Button("Click to see your personalized result!", variant="primary") - gr.Markdown("
    ") - output = gr.Textbox(label="Your recommendation:", placeholder="your recommendation will appear here") - - submit.click(fn=general_predictor, inputs=inputls, outputs=output) - gr.Markdown("
    ") - - with gr.Row(): - with gr.Box(): - gr.Markdown(f"

    Accuracy:

    {acc}") - with gr.Box(): - gr.Markdown(f"

    Most important feature:

    {most_imp_feat}") - - gr.Markdown("
    ") - - with gr.Box(): - gr.Markdown('''⭐ Note that model accuracy is based on the uploaded data.csv and reflects how well the AI model can give correct recommendations for that dataset. Model accuracy and most important feature can be helpful for understanding how the model works, but should not be considered absolute facts about the real world.''') - - with gr.Box(): - with open('info.md') as f: - f.readline() - gr.Markdown(f.read()) - -# show the interface -block.launch() \ No newline at end of file diff --git a/spaces/HighCWu/GFPGAN-1.3/tests/test_arcface_arch.py b/spaces/HighCWu/GFPGAN-1.3/tests/test_arcface_arch.py deleted file mode 100644 index b4b28d33800ae78a354e078e14373d2ee159dc7b..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/GFPGAN-1.3/tests/test_arcface_arch.py +++ /dev/null @@ -1,49 +0,0 @@ -import torch - -from gfpgan.archs.arcface_arch import BasicBlock, Bottleneck, ResNetArcFace - - -def test_resnetarcface(): - """Test arch: ResNetArcFace.""" - - # model init and forward (gpu) - if torch.cuda.is_available(): - net = ResNetArcFace(block='IRBlock', layers=(2, 2, 2, 2), use_se=True).cuda().eval() - img = torch.rand((1, 1, 128, 128), dtype=torch.float32).cuda() - output = net(img) - assert output.shape == (1, 512) - - # -------------------- without SE block ----------------------- # - net = ResNetArcFace(block='IRBlock', layers=(2, 2, 2, 2), use_se=False).cuda().eval() - output = net(img) - assert output.shape == (1, 512) - - -def test_basicblock(): - """Test the BasicBlock in arcface_arch""" - block = BasicBlock(1, 3, stride=1, downsample=None).cuda() - img = torch.rand((1, 1, 12, 12), dtype=torch.float32).cuda() - output = block(img) - assert output.shape == (1, 3, 12, 12) - - # ----------------- use the downsmaple module--------------- # - downsample = torch.nn.UpsamplingNearest2d(scale_factor=0.5).cuda() - block = BasicBlock(1, 3, stride=2, downsample=downsample).cuda() - img = torch.rand((1, 1, 12, 12), dtype=torch.float32).cuda() - output = block(img) - assert output.shape == (1, 3, 6, 6) - - -def test_bottleneck(): - """Test the Bottleneck in arcface_arch""" - block = Bottleneck(1, 1, stride=1, downsample=None).cuda() - img = torch.rand((1, 1, 12, 12), dtype=torch.float32).cuda() - output = block(img) - assert output.shape == (1, 4, 12, 12) - - # ----------------- use the downsmaple module--------------- # - downsample = torch.nn.UpsamplingNearest2d(scale_factor=0.5).cuda() - block = Bottleneck(1, 1, stride=2, downsample=downsample).cuda() - img = torch.rand((1, 1, 12, 12), dtype=torch.float32).cuda() - output = block(img) - assert output.shape == (1, 4, 6, 6) diff --git a/spaces/HighCWu/Style2Paints-4-Gradio/ui/web-mobile/src/settings.4cc17.js b/spaces/HighCWu/Style2Paints-4-Gradio/ui/web-mobile/src/settings.4cc17.js deleted file mode 100644 index 9855c9af4fe4041afdfd2a9148373f5848a86090..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/Style2Paints-4-Gradio/ui/web-mobile/src/settings.4cc17.js +++ /dev/null @@ -1 +0,0 @@ 
-window._CCSettings={platform:"web-mobile",groupList:["default"],collisionMatrix:[[true]],rawAssets:{assets:{}},assetTypes:[],launchScene:"db://assets/Scene/helloworld.fire",scenes:[{url:"db://assets/Scene/helloworld.fire",uuid:0}],packedAssets:{"054fbd38e":["00eb2q7hlMj7GXigg5AV5g","02delMVqdBD70a/HSD99FK","03p2MmRKJCdZ5AsjCj7sHN","06vZFiAAJDZYVKHgAYMkFE","0fwe5eFfFEmpa0ZPVR7EtI","226kS492tNIJQrLRGCSmtw","34n14Pu71Grr1AJOTEM1bi","39yW4Bc9VKJKFRW1gXJnWc","40m4g3hMNNCrwsd8CTt6uQ","46NQdwGmhIzbOm/xN4AFWm","56fc2Ai/RFNYpaMT8crweK","64iVI/DaVLt4Vf7oiaAWmu","65EelgvQxEI4BrVLTDbi2Y","679co/+9pO3IQeM+J2lfvH","69O71YbnBB+5PkOtJI2TvR","6eBWFz0oVHPLIGQKf/9Thu","71VhFCTINJM6/Ky3oX9nBT","7bbDiwUGZMta3uM/wWJTyw","7cvienAXxBMI2svV3dFolc","90YtjmdnFIcL8cmAgzp+Tv","9093vD545J6ZUEYzPwhfLK","9dYAAftfRHJqYpJlnj3tC4","a6WA9aHtxCh5YHKRvVP3Xw","a9+i07/8pPYK37OJXM0mCS","b2KMvAVjxLxKX0TR89NNkA","b4P/PCArtIdIH38t6mlw8Y","c8da7TtxBFLrBK3p3et/+X","d4u6MnLXxBV79+3Hng9oHd","de/e4JzelC4Z4eHkP3iE9V","e129YibpJHJKWONYnD1rRC","e60pPrifdEzquK0N9LCE/t","e8Ueib+qJEhL6mXAHdnwbi","e8UlNj5+dHyIAD1rSaRFQi","eaEJTBfGVMP6tI4b09l/xC","eadEprFbtAdbR6H7LC8vTn","f8/EswgwJFlYr6skAapCKR"],"0e521684a":["08MeXCe1dMHYedNdkzS63A","10L7rqO+5OtoYe7G9tTmg1","1aMvx28L1PZpgPVpKcDKCz","1emZAFMAJDpYid8LTfEewW","28PhFLWBlNi7ZOiy6wWfs1","29FYIk+N1GYaeWH/q1NxQO",0,"39MpUyhd1KO5lqkP4gRjGL","3d7rWwYtBD3p3bKpT7hPYw","43+q/0K7tHg7DFdkHh5wdp","47UdQy3DhCMJFip3T7mkbm","51VxMzvpxNt5Eude2ZcZ+H","55/pLEAd1KgqMC+Zu5+CM8","61jjCOw0ZBWLRWB/98eufd","65McWXN3tCy6LVKe64BuuS","6erFCVsFpOfI/2jYl2ny9K","73lcvDzLBNWIH2ElsQuEBW","75qd14DUdM8bm+GF3SMYaT","8c20Sso/ZEn7NUfNSM+EBh","93A3mAWIRJPqSL2jznn6+j","a2MjXRFdtLlYQ5ouAFv/+R","a9WLZTN3hO56gmWW/Zy0mz","b1oqXv5YhLeLek/LqyBXpr","b4Vu5hV55HcIAV2L1fo5H8","bd4f8weJ9FMqiEvMV+sn+n","bfD2QsCeBMdJIozPyHybN/","c7PZTRM5VBqKm9Nbz2Yepi","cd/TmhtM9J66HH9njOEZ+H","d1MP9xGktN+oMvtBxdLhgq","d2USs4NLxG8IHzH3SpIWwA","e4G5S883RM47WPxkyuPCQ0","e7q6FL+VZEgLJUjVeDLic/","e97GVMl6JHh5Ml5qEDdSGa","f0BIwQ8D5Ml7nTNQbh1YlS","f0tpaN389NEYuM7aul/b8S","f7dd/+nptHoIqUFepAEcrB","fbP12mE1ZDtJzoZK4U6lSU"]},orientation:"",subpackages:{},uuids:["2dL3kvpAxJu6GJ7RdqJG5J"],md5AssetsMap:{"00/0079bdaa-ee19-4c8f-b197-8a0839015e60.json":"87146","05/054fbd38e.json":"a62b2","0e/0e521684a.json":"3ddf2","assets/00/0079bdaa-ee19-4c8f-b197-8a0839015e60.png":"fca06","assets/02/0275e94c-56a7-410f-bd1a-fc7483f7d14a.png":"cea68","assets/03/03a76326-44a2-4275-9e40-b230a3eec1cd.png":"c837b","assets/06/06bd9162-0002-4365-854a-1e0018324144.png":"8090a","assets/0f/0fc1ee5e-15f1-449a-96b4-64f551ec4b48.jpg":"8b42e","assets/22/22ea44b8-f76b-4d20-942b-2d11824a6b70.png":"d8588","assets/34/349f5e0f-bbbd-46ae-bd40-24e4c43356e2.png":"8d682","assets/39/39c96e01-73d5-4a24-a151-5b581726759c.png":"d45fb","assets/40/409b8837-84c3-4d0a-bc2c-77c093b7ab90.png":"33060","assets/46/46350770-1a68-48cd-b3a6-ff13780055a6.png":"81883","assets/56/567dcd80-8bf4-4535-8a5a-313f1caf078a.png":"acdf0","assets/64/6489523f-0da5-4bb7-855f-ee889a0169ae.png":"955dc","assets/65/6511e960-bd0c-4423-806b-54b4c36e2d98.png":"0d0ac","assets/67/67f5ca3f-fbda-4edc-841e-33e27695fbc7.png":"b4ff6","assets/69/693bbd58-6e70-41fb-93e4-3ad248d93bd1.png":"e51b8","assets/6e/6e056173-d285-473c-b206-40a7fff5386e.png":"68270","assets/71/71561142-4c83-4933-afca-cb7a17f67053.png":"286c6","assets/7b/7b6c38b0-5066-4cb5-adee-33fc16253cb0.png":"467e0","assets/7c/7cbe27a7-017c-4130-8dac-bd5ddd16895c.png":"d4792","assets/90/9062d8e6-7671-4870-bf1c-980833a7e4ef.png":"08a70","assets/90/90f77bc3-e78e-49e9-9504-6333f085f2ca.png":"40fe2","assets/9
d/9d60001f-b5f4-4726-a629-2659e3ded0b8.png":"94752","assets/a6/a6580f5a-1edc-4287-9607-291bd53f75f0.png":"a089e","assets/a9/a9fa2d3b-ffca-4f60-adfb-3895ccd26092.png":"070d2","assets/b2/b228cbc0-563c-4bc4-a5f4-4d1f3d34d900.png":"a6fba","assets/b4/b43ff3c2-02bb-4874-81f7-f2dea6970f18.png":"bedf4","assets/c8/c875aed3-b710-452e-b04a-de9ddeb7ff97.png":"e3c06","assets/d4/d4bba327-2d7c-4157-bf7e-dc79e0f681dd.png":"c156f","assets/de/defdee09-cde9-42e1-9e1e-1e43f7884f55.png":"4ceec","assets/e1/e1dbd622-6e92-4724-a58e-3589c3d6b442.png":"85d54","assets/e6/e6d293eb-89f7-44ce-ab8a-d0df4b084fed.png":"b94ef","assets/e8/e851e89b-faa2-4484-bea6-5c01dd9f06e2.png":"1ecb7","assets/e8/e8525363-e7e7-47c8-8003-d6b49a445422.png":"10fe6","assets/ea/ea1094c1-7c65-4c3f-ab48-e1bd3d97fc42.png":"04fa3","assets/ea/ea744a6b-15bb-4075-b47a-1fb2c2f2f4e7.png":"5d0c4","assets/f8/f8fc4b30-8302-4595-8afa-b2401aa42291.png":"8f722"}}; \ No newline at end of file diff --git a/spaces/ICML2022/ICML2022_papers/README.md b/spaces/ICML2022/ICML2022_papers/README.md deleted file mode 100644 index 8a79804266faf998fb4e6c6b01a00f55ce4058c7..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/ICML2022_papers/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ICML2022 Papers -emoji: 🦀 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ICML2022/OFA/fairseq/examples/byte_level_bpe/gru_transformer.py b/spaces/ICML2022/OFA/fairseq/examples/byte_level_bpe/gru_transformer.py deleted file mode 100644 index d4efa93a4d75da71c78e786d7f62101ef3266af4..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/byte_level_bpe/gru_transformer.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch.nn as nn -import torch.nn.functional as F -from fairseq.models import register_model, register_model_architecture -from fairseq.models.transformer import TransformerEncoder, TransformerModel - - -@register_model("gru_transformer") -class GRUTransformerModel(TransformerModel): - @classmethod - def build_encoder(cls, args, src_dict, embed_tokens): - return GRUTransformerEncoder(args, src_dict, embed_tokens) - - -class GRUTransformerEncoder(TransformerEncoder): - def __init__(self, args, dictionary, embed_tokens): - super().__init__(args, dictionary, embed_tokens) - self.emb_ctx = nn.GRU( - input_size=embed_tokens.embedding_dim, - hidden_size=embed_tokens.embedding_dim // 2, - num_layers=1, - bidirectional=True, - ) - - def forward_embedding(self, src_tokens): - # embed tokens and positions - x = embed = self.embed_scale * self.embed_tokens(src_tokens) - if self.embed_positions is not None: - x = embed + self.embed_positions(src_tokens) - - # contextualize embeddings - x = x.transpose(0, 1) - x = self.dropout_module(x) - x, _ = self.emb_ctx.forward(x) - x = x.transpose(0, 1) - - if self.layernorm_embedding is not None: - x = self.layernorm_embedding(x) - x = self.dropout_module(x) - return x, embed - - -@register_model_architecture("gru_transformer", "gru_transformer") -def gru_transformer_base_architecture(args): - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.activation_dropout = getattr(args, "activation_dropout", 0.0) - args.activation_fn = getattr(args, "activation_fn", "relu") - args.dropout = getattr(args, "dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.adaptive_input = getattr(args, "adaptive_input", False) - args.no_cross_attention = getattr(args, "no_cross_attention", False) - args.cross_self_attention = getattr(args, "cross_self_attention", False) - args.layer_wise_attention = getattr(args, "layer_wise_attention", False) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", 
args.decoder_embed_dim) - - args.no_scale_embedding = getattr(args, "no_scale_embedding", False) - args.layernorm_embedding = getattr(args, "layernorm_embedding", False) - - -@register_model_architecture("gru_transformer", "gru_transformer_big") -def gru_transformer_big(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.dropout = getattr(args, "dropout", 0.3) - gru_transformer_base_architecture(args) diff --git a/spaces/ICML2022/OFA/fairseq/examples/hubert/simple_kmeans/dump_km_label.py b/spaces/ICML2022/OFA/fairseq/examples/hubert/simple_kmeans/dump_km_label.py deleted file mode 100644 index 8871307804d3f1e5c7cc49061614c69df26ab1ee..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/hubert/simple_kmeans/dump_km_label.py +++ /dev/null @@ -1,98 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -import sys - -import numpy as np - -import joblib -import torch -import tqdm - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("dump_km_label") - - -class ApplyKmeans(object): - def __init__(self, km_path): - self.km_model = joblib.load(km_path) - self.C_np = self.km_model.cluster_centers_.transpose() - self.Cnorm_np = (self.C_np ** 2).sum(0, keepdims=True) - - self.C = torch.from_numpy(self.C_np) - self.Cnorm = torch.from_numpy(self.Cnorm_np) - if torch.cuda.is_available(): - self.C = self.C.cuda() - self.Cnorm = self.Cnorm.cuda() - - def __call__(self, x): - if isinstance(x, torch.Tensor): - dist = ( - x.pow(2).sum(1, keepdim=True) - - 2 * torch.matmul(x, self.C) - + self.Cnorm - ) - return dist.argmin(dim=1).cpu().numpy() - else: - dist = ( - (x ** 2).sum(1, keepdims=True) - - 2 * np.matmul(x, self.C_np) - + self.Cnorm_np - ) - return np.argmin(dist, axis=1) - - -def get_feat_iterator(feat_dir, split, nshard, rank): - feat_path = f"{feat_dir}/{split}_{rank}_{nshard}.npy" - leng_path = f"{feat_dir}/{split}_{rank}_{nshard}.len" - with open(leng_path, "r") as f: - lengs = [int(line.rstrip()) for line in f] - offsets = [0] + np.cumsum(lengs[:-1]).tolist() - - def iterate(): - feat = np.load(feat_path, mmap_mode="r") - assert feat.shape[0] == (offsets[-1] + lengs[-1]) - for offset, leng in zip(offsets, lengs): - yield feat[offset: offset + leng] - - return iterate, len(lengs) - - -def dump_label(feat_dir, split, km_path, nshard, rank, lab_dir): - apply_kmeans = ApplyKmeans(km_path) - generator, num = get_feat_iterator(feat_dir, split, nshard, rank) - iterator = generator() - - lab_path = f"{lab_dir}/{split}_{rank}_{nshard}.km" - os.makedirs(lab_dir, exist_ok=True) - with open(lab_path, "w") as f: - for feat in tqdm.tqdm(iterator, total=num): - # feat = torch.from_numpy(feat).cuda() - lab = apply_kmeans(feat).tolist() - f.write(" ".join(map(str, lab)) + 
"\n") - logger.info("finished successfully") - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("feat_dir") - parser.add_argument("split") - parser.add_argument("km_path") - parser.add_argument("nshard", type=int) - parser.add_argument("rank", type=int) - parser.add_argument("lab_dir") - args = parser.parse_args() - logging.info(str(args)) - - dump_label(**vars(args)) diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/modules/lightconv_layer/lightconv_layer.py b/spaces/ICML2022/OFA/fairseq/fairseq/modules/lightconv_layer/lightconv_layer.py deleted file mode 100644 index e7e597f4749c591b057d776aacec39b44d99c037..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/modules/lightconv_layer/lightconv_layer.py +++ /dev/null @@ -1,137 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import lightconv_cuda -import torch -import torch.nn.functional as F -from fairseq import utils -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.modules.fairseq_dropout import FairseqDropout -from torch import nn -from torch.autograd import Function - - -class lightconvFunction(Function): - @staticmethod - def forward(ctx, x, weights, padding_l): - ctx.padding_l = padding_l - outputs = lightconv_cuda.forward(x, weights, padding_l) - variables = [x, weights] - ctx.save_for_backward(*variables) - return outputs[0] - - @staticmethod - def backward(ctx, grad_output): - outputs = lightconv_cuda.backward( - grad_output.contiguous(), ctx.padding_l, *ctx.saved_tensors - ) - grad_input, grad_weights = outputs - return grad_input, grad_weights, None - - -@with_incremental_state -class LightconvLayer(nn.Module): - def __init__( - self, - input_size, - kernel_size=1, - padding_l=None, - weight_softmax=False, - num_heads=1, - weight_dropout=0.0, - bias=False, - ): - super(LightconvLayer, self).__init__() - self.input_size = input_size - self.kernel_size = kernel_size - self.padding_l = padding_l - self.num_heads = num_heads - self.weight_softmax = weight_softmax - self.weight_dropout_module = FairseqDropout( - weight_dropout, module_name=self.__class__.__name__ - ) - - self.weight = nn.Parameter(torch.Tensor(num_heads, kernel_size)) - if bias: - self.bias = nn.Parameter(torch.Tensor(input_size)) - else: - self.bias = None - self.reset_parameters() - - def upgrade_state_dict_named(self, state_dict, name): - prefix = name + "." 
if name != "" else "" - for k, v in state_dict.items(): - if k.endswith(prefix + "weight"): - if v.dim() == 3 and v.size(1) == 1: - state_dict[k] = v.squeeze(1) - - def reset_parameters(self): - nn.init.xavier_uniform_(self.weight) - if self.bias is not None: - nn.init.constant_(self.bias, 0.0) - - def forward(self, x, incremental_state=None): - - # during inference time, incremental BMM is faster - if incremental_state is not None: - T, B, C = x.size() - K, H = self.kernel_size, self.num_heads - R = C // H - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is None: - input_buffer = x.new() - x_unfold = torch.cat([input_buffer, x.unsqueeze(3)], dim=3) - if self.kernel_size > 1: - self._set_input_buffer( - incremental_state, x_unfold[:, :, :, -self.kernel_size + 1 :] - ) - x_unfold = x_unfold.view(T * B * H, R, -1) - - weight = self.weight - if self.weight_softmax: - weight = F.softmax(weight.float(), dim=1).type_as(weight) - - weight = weight[:, -x_unfold.size(2) :] - - K = weight.size(1) - - weight = ( - weight.view(1, H, K) - .expand(T * B, H, K) - .contiguous() - .view(T * B * H, K, 1) - ) - - weight = self.weight_dropout_module(weight) - output = torch.bmm(x_unfold, weight) # T*B*H x R x 1 - output = output.view(T, B, C) - return output - - # during training time, use CUDA kernel - else: - x = x.permute(1, 2, 0).contiguous() - weight = self.weight - if self.weight_softmax: - weight = F.softmax(self.weight, -1) - if self.weight_dropout_module.p: - weight = self.weight_dropout_module(weight) - return lightconvFunction.apply(x, weight, self.padding_l).permute(2, 0, 1) - - def reorder_incremental_state(self, incremental_state, new_order): - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is not None: - input_buffer = input_buffer.index_select(1, new_order) - self._set_input_buffer(incremental_state, input_buffer) - - def _get_input_buffer(self, incremental_state): - return utils.get_incremental_state(self, incremental_state, "input_buffer") - - def _set_input_buffer(self, incremental_state, new_buffer): - return utils.set_incremental_state( - self, incremental_state, "input_buffer", new_buffer - ) - - def half(self): - return self._apply(lambda t: t.half() if t.is_floating_point() else t) diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/optim/lr_scheduler/inverse_square_root_schedule.py b/spaces/ICML2022/OFA/fairseq/fairseq/optim/lr_scheduler/inverse_square_root_schedule.py deleted file mode 100644 index 0f87bb5d7ed5c7eb8011d4c651f2ecbf0ae700ac..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/optim/lr_scheduler/inverse_square_root_schedule.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from collections.abc import Collection -from dataclasses import dataclass, field -from typing import List - -from omegaconf import II - -from fairseq.dataclass import FairseqDataclass -from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler - - -@dataclass -class InverseSquareRootLRScheduleConfig(FairseqDataclass): - warmup_updates: int = field( - default=4000, - metadata={"help": "warmup the learning rate linearly for the first N updates"}, - ) - warmup_init_lr: float = field( - default=-1, - metadata={ - "help": "initial learning rate during warmup phase; default is cfg.lr" - }, - ) - lr: List[float] = II("optimization.lr") - - -@register_lr_scheduler("inverse_sqrt", dataclass=InverseSquareRootLRScheduleConfig) -class InverseSquareRootSchedule(FairseqLRScheduler): - """Decay the LR based on the inverse square root of the update number. - - We also support a warmup phase where we linearly increase the learning rate - from some initial learning rate (``--warmup-init-lr``) until the configured - learning rate (``--lr``). Thereafter we decay proportional to the number of - updates, with a decay factor set to align with the configured learning rate. - - During warmup:: - - lrs = torch.linspace(cfg.warmup_init_lr, cfg.lr, cfg.warmup_updates) - lr = lrs[update_num] - - After warmup:: - - decay_factor = cfg.lr * sqrt(cfg.warmup_updates) - lr = decay_factor / sqrt(update_num) - """ - - def __init__(self, cfg: InverseSquareRootLRScheduleConfig, optimizer): - super().__init__(cfg, optimizer) - if isinstance(cfg.lr, Collection) and len(cfg.lr) > 1: - raise ValueError( - "Cannot use a fixed learning rate schedule with inverse_sqrt." - " Consider --lr-scheduler=fixed instead." - ) - warmup_end_lr = cfg.lr[0] if isinstance(cfg.lr, Collection) else cfg.lr - if cfg.warmup_init_lr < 0: - cfg.warmup_init_lr = 0 if cfg.warmup_updates > 0 else warmup_end_lr - - # linearly warmup for the first cfg.warmup_updates - self.lr_step = (warmup_end_lr - cfg.warmup_init_lr) / cfg.warmup_updates - - # then, decay prop. 
to the inverse square root of the update number - self.decay_factor = warmup_end_lr * cfg.warmup_updates ** 0.5 - - # initial learning rate - self.lr = cfg.warmup_init_lr - self.optimizer.set_lr(self.lr) - - def step(self, epoch, val_loss=None): - """Update the learning rate at the end of the given epoch.""" - super().step(epoch, val_loss) - # we don't change the learning rate at epoch boundaries - return self.optimizer.get_lr() - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - if num_updates < self.cfg.warmup_updates: - self.lr = self.cfg.warmup_init_lr + num_updates * self.lr_step - else: - self.lr = self.decay_factor * num_updates ** -0.5 - self.optimizer.set_lr(self.lr) - return self.lr diff --git a/spaces/Iceclear/StableSR/StableSR/scripts/util_image.py b/spaces/Iceclear/StableSR/StableSR/scripts/util_image.py deleted file mode 100644 index 812bbb859b5e93c49b23baa6d47aa8d6ae5c5a4a..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/scripts/util_image.py +++ /dev/null @@ -1,793 +0,0 @@ -#!/usr/bin/env python -# -*- coding:utf-8 -*- -# Power by Zongsheng Yue 2021-11-24 16:54:19 - -import sys -import cv2 -import math -import torch -import random -import numpy as np -from scipy import fft -from pathlib import Path -from einops import rearrange -from skimage import img_as_ubyte, img_as_float32 - -# --------------------------Metrics---------------------------- -def ssim(img1, img2): - C1 = (0.01 * 255)**2 - C2 = (0.03 * 255)**2 - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - kernel = cv2.getGaussianKernel(11, 1.5) - window = np.outer(kernel, kernel.transpose()) - - mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] # valid - mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5] - mu1_sq = mu1**2 - mu2_sq = mu2**2 - mu1_mu2 = mu1 * mu2 - sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq - sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq - sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2 - - ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * - (sigma1_sq + sigma2_sq + C2)) - return ssim_map.mean() - -def calculate_ssim(im1, im2, border=0, ycbcr=False): - ''' - SSIM the same outputs as MATLAB's - im1, im2: h x w x , [0, 255], uint8 - ''' - if not im1.shape == im2.shape: - raise ValueError('Input images must have the same dimensions.') - - if ycbcr: - im1 = rgb2ycbcr(im1, True) - im2 = rgb2ycbcr(im2, True) - - h, w = im1.shape[:2] - im1 = im1[border:h-border, border:w-border] - im2 = im2[border:h-border, border:w-border] - - if im1.ndim == 2: - return ssim(im1, im2) - elif im1.ndim == 3: - if im1.shape[2] == 3: - ssims = [] - for i in range(3): - ssims.append(ssim(im1[:,:,i], im2[:,:,i])) - return np.array(ssims).mean() - elif im1.shape[2] == 1: - return ssim(np.squeeze(im1), np.squeeze(im2)) - else: - raise ValueError('Wrong input image dimensions.') - -def calculate_psnr(im1, im2, border=0, ycbcr=False): - ''' - PSNR metric. 
- im1, im2: h x w x , [0, 255], uint8 - ''' - if not im1.shape == im2.shape: - raise ValueError('Input images must have the same dimensions.') - - if ycbcr: - im1 = rgb2ycbcr(im1, True) - im2 = rgb2ycbcr(im2, True) - - h, w = im1.shape[:2] - im1 = im1[border:h-border, border:w-border] - im2 = im2[border:h-border, border:w-border] - - im1 = im1.astype(np.float64) - im2 = im2.astype(np.float64) - mse = np.mean((im1 - im2)**2) - if mse == 0: - return float('inf') - return 20 * math.log10(255.0 / math.sqrt(mse)) - -def batch_PSNR(img, imclean, border=0, ycbcr=False): - if ycbcr: - img = rgb2ycbcrTorch(img, True) - imclean = rgb2ycbcrTorch(imclean, True) - Img = img.data.cpu().numpy() - Iclean = imclean.data.cpu().numpy() - Img = img_as_ubyte(Img) - Iclean = img_as_ubyte(Iclean) - PSNR = 0 - h, w = Iclean.shape[2:] - for i in range(Img.shape[0]): - PSNR += calculate_psnr(Iclean[i,:,].transpose((1,2,0)), Img[i,:,].transpose((1,2,0)), border) - return PSNR - -def batch_SSIM(img, imclean, border=0, ycbcr=False): - if ycbcr: - img = rgb2ycbcrTorch(img, True) - imclean = rgb2ycbcrTorch(imclean, True) - Img = img.data.cpu().numpy() - Iclean = imclean.data.cpu().numpy() - Img = img_as_ubyte(Img) - Iclean = img_as_ubyte(Iclean) - SSIM = 0 - for i in range(Img.shape[0]): - SSIM += calculate_ssim(Iclean[i,:,].transpose((1,2,0)), Img[i,:,].transpose((1,2,0)), border) - return SSIM - -def normalize_np(im, mean=0.5, std=0.5, reverse=False): - ''' - Input: - im: h x w x c, numpy array - Normalize: (im - mean) / std - Reverse: im * std + mean - - ''' - if not isinstance(mean, (list, tuple)): - mean = [mean, ] * im.shape[2] - mean = np.array(mean).reshape([1, 1, im.shape[2]]) - - if not isinstance(std, (list, tuple)): - std = [std, ] * im.shape[2] - std = np.array(std).reshape([1, 1, im.shape[2]]) - - if not reverse: - out = (im.astype(np.float32) - mean) / std - else: - out = im.astype(np.float32) * std + mean - return out - -def normalize_th(im, mean=0.5, std=0.5, reverse=False): - ''' - Input: - im: b x c x h x w, torch tensor - Normalize: (im - mean) / std - Reverse: im * std + mean - - ''' - if not isinstance(mean, (list, tuple)): - mean = [mean, ] * im.shape[1] - mean = torch.tensor(mean, device=im.device).view([1, im.shape[1], 1, 1]) - - if not isinstance(std, (list, tuple)): - std = [std, ] * im.shape[1] - std = torch.tensor(std, device=im.device).view([1, im.shape[1], 1, 1]) - - if not reverse: - out = (im - mean) / std - else: - out = im * std + mean - return out - -# ------------------------Image format-------------------------- -def rgb2ycbcr(im, only_y=True): - ''' - same as matlab rgb2ycbcr - Input: - im: uint8 [0,255] or float [0,1] - only_y: only return Y channel - ''' - # transform to float64 data type, range [0, 255] - if im.dtype == np.uint8: - im_temp = im.astype(np.float64) - else: - im_temp = (im * 255).astype(np.float64) - - # convert - if only_y: - rlt = np.dot(im_temp, np.array([65.481, 128.553, 24.966])/ 255.0) + 16.0 - else: - rlt = np.matmul(im_temp, np.array([[65.481, -37.797, 112.0 ], - [128.553, -74.203, -93.786], - [24.966, 112.0, -18.214]])/255.0) + [16, 128, 128] - if im.dtype == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. 
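# Quick illustration of the PSNR formula used by calculate_psnr() above,
# PSNR = 20 * log10(255 / sqrt(MSE)) for uint8 images; the two arrays here are
# made-up toy data, and this aside is independent of the color-space code around it.
import numpy as np

clean = np.full((4, 4), 100, dtype=np.uint8)
noisy = clean.copy()
noisy[0, 0] = 110                                   # corrupt a single pixel
mse = np.mean((clean.astype(np.float64) - noisy.astype(np.float64)) ** 2)
psnr = 20 * np.log10(255.0 / np.sqrt(mse))          # MSE = 100/16 = 6.25 -> ~40.2 dB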
- return rlt.astype(im.dtype) - -def rgb2ycbcrTorch(im, only_y=True): - ''' - same as matlab rgb2ycbcr - Input: - im: float [0,1], N x 3 x H x W - only_y: only return Y channel - ''' - # transform to range [0,255.0] - im_temp = im.permute([0,2,3,1]) * 255.0 # N x H x W x C --> N x H x W x C - # convert - if only_y: - rlt = torch.matmul(im_temp, torch.tensor([65.481, 128.553, 24.966], - device=im.device, dtype=im.dtype).view([3,1])/ 255.0) + 16.0 - else: - rlt = torch.matmul(im_temp, torch.tensor([[65.481, -37.797, 112.0 ], - [128.553, -74.203, -93.786], - [24.966, 112.0, -18.214]], - device=im.device, dtype=im.dtype)/255.0) + \ - torch.tensor([16, 128, 128]).view([-1, 1, 1, 3]) - rlt /= 255.0 - rlt.clamp_(0.0, 1.0) - return rlt.permute([0, 3, 1, 2]) - -def bgr2rgb(im): return cv2.cvtColor(im, cv2.COLOR_BGR2RGB) - -def rgb2bgr(im): return cv2.cvtColor(im, cv2.COLOR_RGB2BGR) - -def tensor2img(tensor, rgb2bgr=True, out_type=np.uint8, min_max=(0, 1)): - """Convert torch Tensors into image numpy arrays. - - After clamping to [min, max], values will be normalized to [0, 1]. - - Args: - tensor (Tensor or list[Tensor]): Accept shapes: - 1) 4D mini-batch Tensor of shape (B x 3/1 x H x W); - 2) 3D Tensor of shape (3/1 x H x W); - 3) 2D Tensor of shape (H x W). - Tensor channel should be in RGB order. - rgb2bgr (bool): Whether to change rgb to bgr. - out_type (numpy type): output types. If ``np.uint8``, transform outputs - to uint8 type with range [0, 255]; otherwise, float type with - range [0, 1]. Default: ``np.uint8``. - min_max (tuple[int]): min and max values for clamp. - - Returns: - (Tensor or list): 3D ndarray of shape (H x W x C) OR 2D ndarray of - shape (H x W). The channel order is BGR. - """ - if not (torch.is_tensor(tensor) or (isinstance(tensor, list) and all(torch.is_tensor(t) for t in tensor))): - raise TypeError(f'tensor or list of tensors expected, got {type(tensor)}') - - flag_tensor = torch.is_tensor(tensor) - if flag_tensor: - tensor = [tensor] - result = [] - for _tensor in tensor: - _tensor = _tensor.squeeze(0).float().detach().cpu().clamp_(*min_max) - _tensor = (_tensor - min_max[0]) / (min_max[1] - min_max[0]) - - n_dim = _tensor.dim() - if n_dim == 4: - img_np = make_grid(_tensor, nrow=int(math.sqrt(_tensor.size(0))), normalize=False).numpy() - img_np = img_np.transpose(1, 2, 0) - if rgb2bgr: - img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR) - elif n_dim == 3: - img_np = _tensor.numpy() - img_np = img_np.transpose(1, 2, 0) - if img_np.shape[2] == 1: # gray image - img_np = np.squeeze(img_np, axis=2) - else: - if rgb2bgr: - img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR) - elif n_dim == 2: - img_np = _tensor.numpy() - else: - raise TypeError(f'Only support 4D, 3D or 2D tensor. But received with dimension: {n_dim}') - if out_type == np.uint8: - # Unlike MATLAB, numpy.unit8() WILL NOT round by default. - img_np = (img_np * 255.0).round() - img_np = img_np.astype(out_type) - result.append(img_np) - if len(result) == 1 and flag_tensor: - result = result[0] - return result - -def img2tensor(imgs, out_type=torch.float32): - """Convert image numpy arrays into torch tensor. - Args: - imgs (Array or list[array]): Accept shapes: - 3) list of numpy arrays - 1) 3D numpy array of shape (H x W x 3/1); - 2) 2D Tensor of shape (H x W). - Tensor channel should be in RGB order. 
- - Returns: - (array or list): 4D ndarray of shape (1 x C x H x W) - """ - - def _img2tensor(img): - if img.ndim == 2: - tensor = torch.from_numpy(img[None, None,]).type(out_type) - elif img.ndim == 3: - tensor = torch.from_numpy(rearrange(img, 'h w c -> c h w')).type(out_type).unsqueeze(0) - else: - raise TypeError(f'2D or 3D numpy array expected, got{img.ndim}D array') - return tensor - - if not (isinstance(imgs, np.ndarray) or (isinstance(imgs, list) and all(isinstance(t, np.ndarray) for t in imgs))): - raise TypeError(f'Numpy array or list of numpy array expected, got {type(imgs)}') - - flag_numpy = isinstance(imgs, np.ndarray) - if flag_numpy: - imgs = [imgs,] - result = [] - for _img in imgs: - result.append(_img2tensor(_img)) - - if len(result) == 1 and flag_numpy: - result = result[0] - return result - -# ------------------------Image I/O----------------------------- -def imread(path, chn='rgb', dtype='float32'): - ''' - Read image. - chn: 'rgb', 'bgr' or 'gray' - out: - im: h x w x c, numpy tensor - ''' - im = cv2.imread(str(path), cv2.IMREAD_UNCHANGED) # BGR, uint8 - try: - if chn.lower() == 'rgb': - if im.ndim == 3: - im = bgr2rgb(im) - else: - im = np.stack((im, im, im), axis=2) - elif chn.lower() == 'gray': - assert im.ndim == 2 - except: - print(str(path)) - - if dtype == 'float32': - im = im.astype(np.float32) / 255. - elif dtype == 'float64': - im = im.astype(np.float64) / 255. - elif dtype == 'uint8': - pass - else: - sys.exit('Please input corrected dtype: float32, float64 or uint8!') - - return im - -def imwrite(im_in, path, chn='rgb', dtype_in='float32', qf=None): - ''' - Save image. - Input: - im: h x w x c, numpy tensor - path: the saving path - chn: the channel order of the im, - ''' - im = im_in.copy() - if isinstance(path, str): - path = Path(path) - if dtype_in != 'uint8': - im = img_as_ubyte(im) - - if chn.lower() == 'rgb' and im.ndim == 3: - im = rgb2bgr(im) - - if qf is not None and path.suffix.lower() in ['.jpg', '.jpeg']: - flag = cv2.imwrite(str(path), im, [int(cv2.IMWRITE_JPEG_QUALITY), int(qf)]) - else: - flag = cv2.imwrite(str(path), im) - - return flag - -def jpeg_compress(im, qf, chn_in='rgb'): - ''' - Input: - im: h x w x 3 array - qf: compress factor, (0, 100] - chn_in: 'rgb' or 'bgr' - Return: - Compressed Image with channel order: chn_in - ''' - # transform to BGR channle and uint8 data type - im_bgr = rgb2bgr(im) if chn_in.lower() == 'rgb' else im - if im.dtype != np.dtype('uint8'): im_bgr = img_as_ubyte(im_bgr) - - # JPEG compress - flag, encimg = cv2.imencode('.jpg', im_bgr, [int(cv2.IMWRITE_JPEG_QUALITY), qf]) - assert flag - im_jpg_bgr = cv2.imdecode(encimg, 1) # uint8, BGR - - # transform back to original channel and the original data type - im_out = bgr2rgb(im_jpg_bgr) if chn_in.lower() == 'rgb' else im_jpg_bgr - if im.dtype != np.dtype('uint8'): im_out = img_as_float32(im_out).astype(im.dtype) - return im_out - -# ------------------------Augmentation----------------------------- -def data_aug_np(image, mode): - ''' - Performs data augmentation of the input image - Input: - image: a cv2 (OpenCV) image - mode: int. 
Choice of transformation to apply to the image - 0 - no transformation - 1 - flip up and down - 2 - rotate counterwise 90 degree - 3 - rotate 90 degree and flip up and down - 4 - rotate 180 degree - 5 - rotate 180 degree and flip - 6 - rotate 270 degree - 7 - rotate 270 degree and flip - ''' - if mode == 0: - # original - out = image - elif mode == 1: - # flip up and down - out = np.flipud(image) - elif mode == 2: - # rotate counterwise 90 degree - out = np.rot90(image) - elif mode == 3: - # rotate 90 degree and flip up and down - out = np.rot90(image) - out = np.flipud(out) - elif mode == 4: - # rotate 180 degree - out = np.rot90(image, k=2) - elif mode == 5: - # rotate 180 degree and flip - out = np.rot90(image, k=2) - out = np.flipud(out) - elif mode == 6: - # rotate 270 degree - out = np.rot90(image, k=3) - elif mode == 7: - # rotate 270 degree and flip - out = np.rot90(image, k=3) - out = np.flipud(out) - else: - raise Exception('Invalid choice of image transformation') - - return out.copy() - -def inverse_data_aug_np(image, mode): - ''' - Performs inverse data augmentation of the input image - ''' - if mode == 0: - # original - out = image - elif mode == 1: - out = np.flipud(image) - elif mode == 2: - out = np.rot90(image, axes=(1,0)) - elif mode == 3: - out = np.flipud(image) - out = np.rot90(out, axes=(1,0)) - elif mode == 4: - out = np.rot90(image, k=2, axes=(1,0)) - elif mode == 5: - out = np.flipud(image) - out = np.rot90(out, k=2, axes=(1,0)) - elif mode == 6: - out = np.rot90(image, k=3, axes=(1,0)) - elif mode == 7: - # rotate 270 degree and flip - out = np.flipud(image) - out = np.rot90(out, k=3, axes=(1,0)) - else: - raise Exception('Invalid choice of image transformation') - - return out - -class SpatialAug: - def __init__(self): - pass - - def __call__(self, im, flag=None): - if flag is None: - flag = random.randint(0, 7) - - out = data_aug_np(im, flag) - return out - -# ----------------------Visualization---------------------------- -def imshow(x, title=None, cbar=False): - import matplotlib.pyplot as plt - plt.imshow(np.squeeze(x), interpolation='nearest', cmap='gray') - if title: - plt.title(title) - if cbar: - plt.colorbar() - plt.show() - -# -----------------------Covolution------------------------------ -def imgrad(im, pading_mode='mirror'): - ''' - Calculate image gradient. - Input: - im: h x w x c numpy array - ''' - from scipy.ndimage import correlate # lazy import - wx = np.array([[0, 0, 0], - [-1, 1, 0], - [0, 0, 0]], dtype=np.float32) - wy = np.array([[0, -1, 0], - [0, 1, 0], - [0, 0, 0]], dtype=np.float32) - if im.ndim == 3: - gradx = np.stack( - [correlate(im[:,:,c], wx, mode=pading_mode) for c in range(im.shape[2])], - axis=2 - ) - grady = np.stack( - [correlate(im[:,:,c], wy, mode=pading_mode) for c in range(im.shape[2])], - axis=2 - ) - grad = np.concatenate((gradx, grady), axis=2) - else: - gradx = correlate(im, wx, mode=pading_mode) - grady = correlate(im, wy, mode=pading_mode) - grad = np.stack((gradx, grady), axis=2) - - return {'gradx': gradx, 'grady': grady, 'grad':grad} - -def imgrad_fft(im): - ''' - Calculate image gradient. 
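# Quick round-trip check for the augmentation helpers defined above: for every one
# of the 8 flip/rotation modes, inverse_data_aug_np() should exactly undo
# data_aug_np(). This assumes both functions are in scope; the probe array is toy data.
import numpy as np

probe = np.random.rand(5, 7, 3)
for mode in range(8):
    restored = inverse_data_aug_np(data_aug_np(probe, mode), mode)
    assert np.array_equal(restored, probe), f"mode {mode} is not inverted correctly"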
- Input: - im: h x w x c numpy array - ''' - wx = np.rot90(np.array([[0, 0, 0], - [-1, 1, 0], - [0, 0, 0]], dtype=np.float32), k=2) - gradx = convfft(im, wx) - wy = np.rot90(np.array([[0, -1, 0], - [0, 1, 0], - [0, 0, 0]], dtype=np.float32), k=2) - grady = convfft(im, wy) - grad = np.concatenate((gradx, grady), axis=2) - - return {'gradx': gradx, 'grady': grady, 'grad':grad} - -def convfft(im, weight): - ''' - Convolution with FFT - Input: - im: h1 x w1 x c numpy array - weight: h2 x w2 numpy array - Output: - out: h1 x w1 x c numpy array - ''' - axes = (0,1) - otf = psf2otf(weight, im.shape[:2]) - if im.ndim == 3: - otf = np.tile(otf[:, :, None], (1,1,im.shape[2])) - out = fft.ifft2(fft.fft2(im, axes=axes) * otf, axes=axes).real - return out - -def psf2otf(psf, shape): - """ - MATLAB psf2otf function. - Borrowed from https://github.com/aboucaud/pypher/blob/master/pypher/pypher.py. - Input: - psf : h x w numpy array - shape : list or tuple, output shape of the OTF array - Output: - otf : OTF array with the desirable shape - """ - if np.all(psf == 0): - return np.zeros_like(psf) - - inshape = psf.shape - # Pad the PSF to outsize - psf = zero_pad(psf, shape, position='corner') - - # Circularly shift OTF so that the 'center' of the PSF is [0,0] element of the array - for axis, axis_size in enumerate(inshape): - psf = np.roll(psf, -int(axis_size / 2), axis=axis) - - # Compute the OTF - otf = fft.fft2(psf) - - # Estimate the rough number of operations involved in the FFT - # and discard the PSF imaginary part if within roundoff error - # roundoff error = machine epsilon = sys.float_info.epsilon - # or np.finfo().eps - n_ops = np.sum(psf.size * np.log2(psf.shape)) - otf = np.real_if_close(otf, tol=n_ops) - - return otf - -# ----------------------Patch Cropping---------------------------- -def random_crop(im, pch_size): - ''' - Randomly crop a patch from the give image. 
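# Note on psf2otf() above: the PSF is zero-padded at the top-left corner to the full
# image shape and then circularly shifted by -floor(k/2) along each axis, which moves
# the kernel center onto element (0, 0). After that shift, the element-wise product
# in the FFT domain (as done in convfft() above) amounts to circular convolution with
# the kernel centered on each pixel, mirroring MATLAB's psf2otf convention.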
- ''' - h, w = im.shape[:2] - if h == pch_size and w == pch_size: - im_pch = im - else: - assert h >= pch_size or w >= pch_size - ind_h = random.randint(0, h-pch_size) - ind_w = random.randint(0, w-pch_size) - im_pch = im[ind_h:ind_h+pch_size, ind_w:ind_w+pch_size,] - - return im_pch - -class RandomCrop: - def __init__(self, pch_size): - self.pch_size = pch_size - - def __call__(self, im): - return random_crop(im, self.pch_size) - -class ImageSpliterNp: - def __init__(self, im, pch_size, stride, sf=1): - ''' - Input: - im: h x w x c, numpy array, [0, 1], low-resolution image in SR - pch_size, stride: patch setting - sf: scale factor in image super-resolution - ''' - assert stride <= pch_size - self.stride = stride - self.pch_size = pch_size - self.sf = sf - - if im.ndim == 2: - im = im[:, :, None] - - height, width, chn = im.shape - self.height_starts_list = self.extract_starts(height) - self.width_starts_list = self.extract_starts(width) - self.length = self.__len__() - self.num_pchs = 0 - - self.im_ori = im - self.im_res = np.zeros([height*sf, width*sf, chn], dtype=im.dtype) - self.pixel_count = np.zeros([height*sf, width*sf, chn], dtype=im.dtype) - - def extract_starts(self, length): - starts = list(range(0, length, self.stride)) - if starts[-1] + self.pch_size > length: - starts[-1] = length - self.pch_size - return starts - - def __len__(self): - return len(self.height_starts_list) * len(self.width_starts_list) - - def __iter__(self): - return self - - def __next__(self): - if self.num_pchs < self.length: - w_start_idx = self.num_pchs // len(self.height_starts_list) - w_start = self.width_starts_list[w_start_idx] * self.sf - w_end = w_start + self.pch_size * self.sf - - h_start_idx = self.num_pchs % len(self.height_starts_list) - h_start = self.height_starts_list[h_start_idx] * self.sf - h_end = h_start + self.pch_size * self.sf - - pch = self.im_ori[h_start:h_end, w_start:w_end,] - self.w_start, self.w_end = w_start, w_end - self.h_start, self.h_end = h_start, h_end - - self.num_pchs += 1 - else: - raise StopIteration(0) - - return pch, (h_start, h_end, w_start, w_end) - - def update(self, pch_res, index_infos): - ''' - Input: - pch_res: pch_size x pch_size x 3, [0,1] - index_infos: (h_start, h_end, w_start, w_end) - ''' - if index_infos is None: - w_start, w_end = self.w_start, self.w_end - h_start, h_end = self.h_start, self.h_end - else: - h_start, h_end, w_start, w_end = index_infos - - self.im_res[h_start:h_end, w_start:w_end] += pch_res - self.pixel_count[h_start:h_end, w_start:w_end] += 1 - - def gather(self): - assert np.all(self.pixel_count != 0) - return self.im_res / self.pixel_count - -class ImageSpliterTh: - def __init__(self, im, pch_size, stride, sf=1): - ''' - Input: - im: n x c x h x w, torch tensor, float, low-resolution image in SR - pch_size, stride: patch setting - sf: scale factor in image super-resolution - ''' - assert stride <= pch_size - self.stride = stride - self.pch_size = pch_size - self.sf = sf - - bs, chn, height, width= im.shape - self.height_starts_list = self.extract_starts(height) - self.width_starts_list = self.extract_starts(width) - self.length = self.__len__() - self.num_pchs = 0 - - self.im_ori = im - self.im_res = torch.zeros([bs, chn, height*sf, width*sf], dtype=im.dtype, device=im.device) - self.pixel_count = torch.zeros([bs, chn, height*sf, width*sf], dtype=im.dtype, device=im.device) - - def extract_starts(self, length): - if length <= self.pch_size: - starts = [0,] - else: - starts = list(range(0, length, self.stride)) - for i in 
range(len(starts)): - if starts[i] + self.pch_size > length: - starts[i] = length - self.pch_size - starts = sorted(set(starts), key=starts.index) - return starts - - def __len__(self): - return len(self.height_starts_list) * len(self.width_starts_list) - - def __iter__(self): - return self - - def __next__(self): - if self.num_pchs < self.length: - w_start_idx = self.num_pchs // len(self.height_starts_list) - w_start = self.width_starts_list[w_start_idx] - w_end = w_start + self.pch_size - - h_start_idx = self.num_pchs % len(self.height_starts_list) - h_start = self.height_starts_list[h_start_idx] - h_end = h_start + self.pch_size - - pch = self.im_ori[:, :, h_start:h_end, w_start:w_end,] - - h_start *= self.sf - h_end *= self.sf - w_start *= self.sf - w_end *= self.sf - - self.w_start, self.w_end = w_start, w_end - self.h_start, self.h_end = h_start, h_end - - self.num_pchs += 1 - else: - raise StopIteration() - - return pch, (h_start, h_end, w_start, w_end) - - def update(self, pch_res, index_infos): - ''' - Input: - pch_res: n x c x pch_size x pch_size, float - index_infos: (h_start, h_end, w_start, w_end) - ''' - if index_infos is None: - w_start, w_end = self.w_start, self.w_end - h_start, h_end = self.h_start, self.h_end - else: - h_start, h_end, w_start, w_end = index_infos - - self.im_res[:, :, h_start:h_end, w_start:w_end] += pch_res - self.pixel_count[:, :, h_start:h_end, w_start:w_end] += 1 - - def gather(self): - assert torch.all(self.pixel_count != 0) - return self.im_res.div(self.pixel_count) - -# ----------------------Patch Cropping---------------------------- -class Clamper: - def __init__(self, min_max=(-1, 1)): - self.min_bound, self.max_bound = min_max[0], min_max[1] - - def __call__(self, im): - if isinstance(im, np.ndarray): - return np.clip(im, a_min=self.min_bound, a_max=self.max_bound) - elif isinstance(im, torch.Tensor): - return torch.clamp(im, min=self.min_bound, max=self.max_bound) - else: - raise TypeError(f'ndarray or Tensor expected, got {type(im)}') - -if __name__ == '__main__': - im = np.random.randn(64, 64, 3).astype(np.float32) - - grad1 = imgrad(im)['grad'] - grad2 = imgrad_fft(im)['grad'] - - error = np.abs(grad1 -grad2).max() - mean_error = np.abs(grad1 -grad2).mean() - print('The largest error is {:.2e}'.format(error)) - print('The mean error is {:.2e}'.format(mean_error)) diff --git a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/rearrange_speaker.py b/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/rearrange_speaker.py deleted file mode 100644 index de0f7545904cc088377c552cc6d9b058c5e9d342..0000000000000000000000000000000000000000 --- a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/rearrange_speaker.py +++ /dev/null @@ -1,37 +0,0 @@ -import torch -import argparse -import json - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--model_dir", type=str, default="./OUTPUT_MODEL/G_latest.pth") - parser.add_argument("--config_dir", type=str, default="./configs/modified_finetune_speaker.json") - args = parser.parse_args() - - model_sd = torch.load(args.model_dir, map_location='cpu') - with open(args.config_dir, 'r', encoding='utf-8') as f: - hps = json.load(f) - - valid_speakers = list(hps['speakers'].keys()) - if hps['data']['n_speakers'] > len(valid_speakers): - new_emb_g = torch.zeros([len(valid_speakers), 256]) - old_emb_g = model_sd['model']['emb_g.weight'] - for i, speaker in enumerate(valid_speakers): - new_emb_g[i, :] = old_emb_g[hps['speakers'][speaker], :] - hps['speakers'][speaker] = i - 
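# The loop above compacts the speaker embedding table: for every speaker kept in the
# config, its 256-dim row is copied from its old index in emb_g.weight to a new
# contiguous position, and hps['speakers'] is rewritten to map each name to that new
# index, so the saved checkpoint and config stay consistent once unused speakers are dropped.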
hps['data']['n_speakers'] = len(valid_speakers) - model_sd['model']['emb_g.weight'] = new_emb_g - with open("./finetune_speaker.json", 'w', encoding='utf-8') as f: - json.dump(hps, f, indent=2) - torch.save(model_sd, "./G_latest.pth") - else: - with open("./finetune_speaker.json", 'w', encoding='utf-8') as f: - json.dump(hps, f, indent=2) - torch.save(model_sd, "./G_latest.pth") - # save another config file copy in MoeGoe format - hps['speakers'] = valid_speakers - with open("./moegoe_config.json", 'w', encoding='utf-8') as f: - json.dump(hps, f, indent=2) - - - diff --git a/spaces/Illumotion/Koboldcpp/examples/common.h b/spaces/Illumotion/Koboldcpp/examples/common.h deleted file mode 100644 index 375bc0a3db416b9bd4801d13c3924051feb7aa2d..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/examples/common.h +++ /dev/null @@ -1,114 +0,0 @@ -// Various helper functions and utilities - -#pragma once - -#include "llama.h" - -#include -#include -#include -#include -#include -#include - -// -// CLI argument parsing -// -int32_t get_num_physical_cores(); - -struct gpt_params { - uint32_t seed = -1; // RNG seed - int32_t n_threads = get_num_physical_cores(); - int32_t n_predict = -1; // new tokens to predict - int32_t n_ctx = 512; // context size - int32_t n_batch = 512; // batch size for prompt processing (must be >=32 to use BLAS) - int32_t n_gqa = 1; // grouped-query attention factor (TODO: move to hparams) - int32_t n_keep = 0; // number of tokens to keep from initial prompt - int32_t n_chunks = -1; // max number of chunks to process (-1 = unlimited) - int32_t n_gpu_layers = 0; // number of layers to store in VRAM - int32_t main_gpu = 0; // the GPU that is used for scratch and small tensors - float tensor_split[LLAMA_MAX_DEVICES] = {0}; // how split tensors should be distributed across GPUs - int32_t n_probs = 0; // if greater than 0, output the probabilities of top n_probs tokens. 
- float rms_norm_eps = LLAMA_DEFAULT_RMS_EPS; // rms norm epsilon - float rope_freq_base = 10000.0f; // RoPE base frequency - float rope_freq_scale = 1.0f; // RoPE frequency scaling factor - - // sampling parameters - std::unordered_map logit_bias; // logit bias for specific tokens - int32_t top_k = 40; // <= 0 to use vocab size - float top_p = 0.95f; // 1.0 = disabled - float tfs_z = 1.00f; // 1.0 = disabled - float typical_p = 1.00f; // 1.0 = disabled - float temp = 0.80f; // 1.0 = disabled - float repeat_penalty = 1.10f; // 1.0 = disabled - int32_t repeat_last_n = 64; // last n tokens to penalize (0 = disable penalty, -1 = context size) - float frequency_penalty = 0.00f; // 0.0 = disabled - float presence_penalty = 0.00f; // 0.0 = disabled - int32_t mirostat = 0; // 0 = disabled, 1 = mirostat, 2 = mirostat 2.0 - float mirostat_tau = 5.00f; // target entropy - float mirostat_eta = 0.10f; // learning rate - - // Classifier-Free Guidance - // https://arxiv.org/abs/2306.17806 - std::string cfg_negative_prompt; // string to help guidance - float cfg_scale = 1.f; // How strong is guidance - - std::string model = "models/7B/ggml-model.bin"; // model path - std::string model_alias = "unknown"; // model alias - std::string prompt = ""; - std::string path_prompt_cache = ""; // path to file for saving/loading prompt eval state - std::string input_prefix = ""; // string to prefix user inputs with - std::string input_suffix = ""; // string to suffix user inputs with - std::string grammar = ""; // optional BNF-like grammar to constrain sampling - std::vector antiprompt; // string upon seeing which more user input is prompted - - std::string lora_adapter = ""; // lora adapter path - std::string lora_base = ""; // base model path for the lora adapter - - bool hellaswag = false; // compute HellaSwag score over random tasks from datafile supplied in prompt - size_t hellaswag_tasks = 400; // number of tasks to use when computing the HellaSwag score - - bool low_vram = false; // if true, reduce VRAM usage at the cost of performance - bool mul_mat_q = false; // if true, use experimental mul_mat_q kernels - bool memory_f16 = true; // use f16 instead of f32 for memory kv - bool random_prompt = false; // do not randomize prompt if none provided - bool use_color = false; // use color to distinguish generations and inputs - bool interactive = false; // interactive mode - bool prompt_cache_all = false; // save user input and generations to prompt cache - bool prompt_cache_ro = false; // open the prompt cache read-only and do not update it - - bool embedding = false; // get only sentence embedding - bool interactive_first = false; // wait for user input immediately - bool multiline_input = false; // reverse the usage of `\` - bool simple_io = false; // improves compatibility with subprocesses and limited consoles - - bool input_prefix_bos = false; // prefix BOS to user inputs, preceding input_prefix - bool instruct = false; // instruction mode (used for Alpaca models) - bool penalize_nl = true; // consider newlines as a repeatable token - bool perplexity = false; // compute perplexity over the prompt - bool use_mmap = true; // use mmap for faster loads - bool use_mlock = false; // use mlock to keep model in memory - bool mem_test = false; // compute maximum memory usage - bool numa = false; // attempt optimizations that help on some NUMA systems - bool export_cgraph = false; // export the computation graph - bool verbose_prompt = false; // print prompt tokens before generation -}; - -bool gpt_params_parse(int 
argc, char ** argv, gpt_params & params); - -void gpt_print_usage(int argc, char ** argv, const gpt_params & params); - -std::string gpt_random_prompt(std::mt19937 & rng); - -// -// Vocab utils -// - -std::vector llama_tokenize(struct llama_context * ctx, const std::string & text, bool add_bos); - -// -// Model utils -// - -std::tuple llama_init_from_gpt_params(const gpt_params & params); -struct llama_context_params llama_context_params_from_gpt_params(const gpt_params & params); diff --git a/spaces/Ilzhabimantara/rvc-Blue-archives/vc_infer_pipeline.py b/spaces/Ilzhabimantara/rvc-Blue-archives/vc_infer_pipeline.py deleted file mode 100644 index 82c15f59a8072e1b317fa1d750ccc1b814a6989d..0000000000000000000000000000000000000000 --- a/spaces/Ilzhabimantara/rvc-Blue-archives/vc_infer_pipeline.py +++ /dev/null @@ -1,443 +0,0 @@ -import numpy as np, parselmouth, torch, pdb, sys, os -from time import time as ttime -import torch.nn.functional as F -import scipy.signal as signal -import pyworld, os, traceback, faiss, librosa, torchcrepe -from scipy import signal -from functools import lru_cache - -now_dir = os.getcwd() -sys.path.append(now_dir) - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - -input_audio_path2wav = {} - - -@lru_cache -def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period): - audio = input_audio_path2wav[input_audio_path] - f0, t = pyworld.harvest( - audio, - fs=fs, - f0_ceil=f0max, - f0_floor=f0min, - frame_period=frame_period, - ) - f0 = pyworld.stonemask(audio, f0, t, fs) - return f0 - - -def change_rms(data1, sr1, data2, sr2, rate): # 1是输入音频,2是输出音频,rate是2的占比 - # print(data1.max(),data2.max()) - rms1 = librosa.feature.rms( - y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2 - ) # 每半秒一个点 - rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2) - rms1 = torch.from_numpy(rms1) - rms1 = F.interpolate( - rms1.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.from_numpy(rms2) - rms2 = F.interpolate( - rms2.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6) - data2 *= ( - torch.pow(rms1, torch.tensor(1 - rate)) - * torch.pow(rms2, torch.tensor(rate - 1)) - ).numpy() - return data2 - - -class VC(object): - def __init__(self, tgt_sr, config): - self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = ( - config.x_pad, - config.x_query, - config.x_center, - config.x_max, - config.is_half, - ) - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * self.x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * self.x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * self.x_query # 查询切点前后查询时间 - self.t_center = self.sr * self.x_center # 查询切点位置 - self.t_max = self.sr * self.x_max # 免查询时长阈值 - self.device = config.device - - def get_f0( - self, - input_audio_path, - x, - p_len, - f0_up_key, - f0_method, - filter_radius, - inp_f0=None, - ): - global input_audio_path2wav - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - 
pad_size]], mode="constant" - ) - elif f0_method == "harvest": - input_audio_path2wav[input_audio_path] = x.astype(np.double) - f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10) - if filter_radius > 2: - f0 = signal.medfilt(f0, 3) - elif f0_method == "crepe": - model = "full" - # Pick a batch size that doesn't cause memory errors on your gpu - batch_size = 512 - # Compute pitch using first gpu - audio = torch.tensor(np.copy(x))[None].float() - f0, pd = torchcrepe.predict( - audio, - self.sr, - self.window, - f0_min, - f0_max, - model, - batch_size=batch_size, - device=self.device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - elif f0_method == "rmvpe": - if hasattr(self, "model_rmvpe") == False: - from rmvpe import RMVPE - - print("loading rmvpe model") - self.model_rmvpe = RMVPE( - "rmvpe.pt", is_half=self.is_half, device=self.device - ) - f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03) - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0] - f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[ - :shape - ] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9 if version == "v1" else 12, - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) if version == "v1" else logits[0] - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = feats.clone() - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - - # _, I = index.search(npy, 1) - # npy = big_npy[I.squeeze()] - - score, ix = index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - - if self.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - if protect < 0.5 and pitch != None and pitchf != None: 
- feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute( - 0, 2, 1 - ) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - - if protect < 0.5 and pitch != None and pitchf != None: - pitchff = pitchf.clone() - pitchff[pitchf > 0] = 1 - pitchff[pitchf < 1] = protect - pitchff = pitchff.unsqueeze(-1) - feats = feats * pitchff + feats0 * (1 - pitchff) - feats = feats.to(feats0.dtype) - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]) - .data.cpu() - .float() - .numpy() - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0]).data.cpu().float().numpy() - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=None, - ): - if ( - file_index != "" - # and file_big_npy != "" - # and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - # big_npy = np.load(file_big_npy) - big_npy = index.reconstruct_n(0, index.ntotal) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0( - input_audio_path, - audio_pad, - p_len, - f0_up_key, - f0_method, - filter_radius, - inp_f0, - ) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - if self.device == "mps": - pitchf = pitchf.astype(np.float32) - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) 
// self.window], - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - if rms_mix_rate != 1: - audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate) - if resample_sr >= 16000 and tgt_sr != resample_sr: - audio_opt = librosa.resample( - audio_opt, orig_sr=tgt_sr, target_sr=resample_sr - ) - audio_max = np.abs(audio_opt).max() / 0.99 - max_int16 = 32768 - if audio_max > 1: - max_int16 /= audio_max - audio_opt = (audio_opt * max_int16).astype(np.int16) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/ImPavloh/voiceit/README.md b/spaces/ImPavloh/voiceit/README.md deleted file mode 100644 index 6473ffd725379335c342b8a7460c7ce77dcdd7d8..0000000000000000000000000000000000000000 --- a/spaces/ImPavloh/voiceit/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: VoiceIt -emoji: 🗣️ -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.29.0 -app_file: voiceit.py -pinned: true -license: gpl ---- \ No newline at end of file diff --git a/spaces/InpaintAI/Inpaint-Anything/README.md b/spaces/InpaintAI/Inpaint-Anything/README.md deleted file mode 100644 index 648276b900cc1bee14b4d87946b42ccbebc10a6b..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Inpaint Anything -emoji: ⚡ -colorFrom: red -colorTo: red -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ItsJayQz/Marvel_WhatIf_Diffusion/README.md b/spaces/ItsJayQz/Marvel_WhatIf_Diffusion/README.md deleted file mode 100644 index fd1589c760c77bcdee68b3c19a825e57dca46a01..0000000000000000000000000000000000000000 --- a/spaces/ItsJayQz/Marvel_WhatIf_Diffusion/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Marvel WhatIf Diffusion -emoji: 😻 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.14.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/models/unet_1d.py b/spaces/Jackflack09/diffuse-custom/diffusers/models/unet_1d.py deleted file mode 100644 index 29d1d707f55a026458defd2bc0ec089ecc10653a..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/models/unet_1d.py +++ /dev/null @@ -1,245 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. 
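# A numpy-only restatement of the feature-retrieval blending in VC.vc() of the RVC
# pipeline above: for every HuBERT frame, the 8 nearest indexed training features are
# averaged with weights proportional to the inverse square of the returned search
# score, and the result is mixed back with the original features by index_rate.
# Shapes, values, and the epsilon guard are example choices, not taken from RVC.
import numpy as np

rng = np.random.default_rng(0)
feats = rng.standard_normal((10, 256)).astype(np.float32)       # frames x dim (query)
big_npy = rng.standard_normal((1000, 256)).astype(np.float32)   # indexed training feats
index_rate, k = 0.75, 8

d2 = ((feats[:, None, :] - big_npy[None, :, :]) ** 2).sum(-1)   # brute-force squared L2
ix = np.argsort(d2, axis=1)[:, :k]                              # k nearest rows per frame
score = np.take_along_axis(d2, ix, axis=1)
weight = np.square(1.0 / (score + 1e-8))                        # inverse-square weights
weight /= weight.sum(axis=1, keepdims=True)
retrieved = (big_npy[ix] * weight[..., None]).sum(axis=1)       # weighted neighbour mean
blended = index_rate * retrieved + (1.0 - index_rate) * feats   # same mix as index_rate in VC.vc()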
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import torch -import torch.nn as nn - -from ..configuration_utils import ConfigMixin, register_to_config -from ..modeling_utils import ModelMixin -from ..utils import BaseOutput -from .embeddings import GaussianFourierProjection, TimestepEmbedding, Timesteps -from .unet_1d_blocks import get_down_block, get_mid_block, get_out_block, get_up_block - - -@dataclass -class UNet1DOutput(BaseOutput): - """ - Args: - sample (`torch.FloatTensor` of shape `(batch_size, num_channels, sample_size)`): - Hidden states output. Output of last layer of model. - """ - - sample: torch.FloatTensor - - -class UNet1DModel(ModelMixin, ConfigMixin): - r""" - UNet1DModel is a 1D UNet model that takes in a noisy sample and a timestep and returns sample shaped output. - - This model inherits from [`ModelMixin`]. Check the superclass documentation for the generic methods the library - implements for all the model (such as downloading or saving, etc.) - - Parameters: - sample_size (`int`, *optional*): Default length of sample. Should be adaptable at runtime. - in_channels (`int`, *optional*, defaults to 2): Number of channels in the input sample. - out_channels (`int`, *optional*, defaults to 2): Number of channels in the output. - time_embedding_type (`str`, *optional*, defaults to `"fourier"`): Type of time embedding to use. - freq_shift (`float`, *optional*, defaults to 0.0): Frequency shift for fourier time embedding. - flip_sin_to_cos (`bool`, *optional*, defaults to : - obj:`False`): Whether to flip sin to cos for fourier time embedding. - down_block_types (`Tuple[str]`, *optional*, defaults to : - obj:`("DownBlock1D", "DownBlock1DNoSkip", "AttnDownBlock1D")`): Tuple of downsample block types. - up_block_types (`Tuple[str]`, *optional*, defaults to : - obj:`("UpBlock1D", "UpBlock1DNoSkip", "AttnUpBlock1D")`): Tuple of upsample block types. - block_out_channels (`Tuple[int]`, *optional*, defaults to : - obj:`(32, 32, 64)`): Tuple of block output channels. - mid_block_type (`str`, *optional*, defaults to "UNetMidBlock1D"): block type for middle of UNet. - out_block_type (`str`, *optional*, defaults to `None`): optional output processing of UNet. - act_fn (`str`, *optional*, defaults to None): optional activitation function in UNet blocks. - norm_num_groups (`int`, *optional*, defaults to 8): group norm member count in UNet blocks. - layers_per_block (`int`, *optional*, defaults to 1): added number of layers in a UNet block. - downsample_each_block (`int`, *optional*, defaults to False: - experimental feature for using a UNet without upsampling. 
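# A generic sinusoidal timestep embedding of the kind this class can use when
# time_embedding_type="positional" (the default above is "fourier"). This is a
# simplified stand-in rather than the actual diffusers Timesteps module; the
# embedding width and frequency base are example choices.
import math
import torch

def sinusoidal_timestep_embedding(timesteps: torch.Tensor, dim: int = 32) -> torch.Tensor:
    """Map integer timesteps of shape (batch,) to (batch, dim) sin/cos features."""
    half = dim // 2
    freqs = torch.exp(
        -torch.arange(half, dtype=torch.float32) * (math.log(10000.0) / (half - 1))
    )
    args = timesteps.float()[:, None] * freqs[None, :]
    return torch.cat([torch.sin(args), torch.cos(args)], dim=-1)

# e.g. sinusoidal_timestep_embedding(torch.tensor([0, 10, 999])).shape == (3, 32)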
- """ - - @register_to_config - def __init__( - self, - sample_size: int = 65536, - sample_rate: Optional[int] = None, - in_channels: int = 2, - out_channels: int = 2, - extra_in_channels: int = 0, - time_embedding_type: str = "fourier", - flip_sin_to_cos: bool = True, - use_timestep_embedding: bool = False, - freq_shift: float = 0.0, - down_block_types: Tuple[str] = ("DownBlock1DNoSkip", "DownBlock1D", "AttnDownBlock1D"), - up_block_types: Tuple[str] = ("AttnUpBlock1D", "UpBlock1D", "UpBlock1DNoSkip"), - mid_block_type: Tuple[str] = "UNetMidBlock1D", - out_block_type: str = None, - block_out_channels: Tuple[int] = (32, 32, 64), - act_fn: str = None, - norm_num_groups: int = 8, - layers_per_block: int = 1, - downsample_each_block: bool = False, - ): - super().__init__() - self.sample_size = sample_size - - # time - if time_embedding_type == "fourier": - self.time_proj = GaussianFourierProjection( - embedding_size=8, set_W_to_weight=False, log=False, flip_sin_to_cos=flip_sin_to_cos - ) - timestep_input_dim = 2 * block_out_channels[0] - elif time_embedding_type == "positional": - self.time_proj = Timesteps( - block_out_channels[0], flip_sin_to_cos=flip_sin_to_cos, downscale_freq_shift=freq_shift - ) - timestep_input_dim = block_out_channels[0] - - if use_timestep_embedding: - time_embed_dim = block_out_channels[0] * 4 - self.time_mlp = TimestepEmbedding( - in_channels=timestep_input_dim, - time_embed_dim=time_embed_dim, - act_fn=act_fn, - out_dim=block_out_channels[0], - ) - - self.down_blocks = nn.ModuleList([]) - self.mid_block = None - self.up_blocks = nn.ModuleList([]) - self.out_block = None - - # down - output_channel = in_channels - for i, down_block_type in enumerate(down_block_types): - input_channel = output_channel - output_channel = block_out_channels[i] - - if i == 0: - input_channel += extra_in_channels - - is_final_block = i == len(block_out_channels) - 1 - - down_block = get_down_block( - down_block_type, - num_layers=layers_per_block, - in_channels=input_channel, - out_channels=output_channel, - temb_channels=block_out_channels[0], - add_downsample=not is_final_block or downsample_each_block, - ) - self.down_blocks.append(down_block) - - # mid - self.mid_block = get_mid_block( - mid_block_type, - in_channels=block_out_channels[-1], - mid_channels=block_out_channels[-1], - out_channels=block_out_channels[-1], - embed_dim=block_out_channels[0], - num_layers=layers_per_block, - add_downsample=downsample_each_block, - ) - - # up - reversed_block_out_channels = list(reversed(block_out_channels)) - output_channel = reversed_block_out_channels[0] - if out_block_type is None: - final_upsample_channels = out_channels - else: - final_upsample_channels = block_out_channels[0] - - for i, up_block_type in enumerate(up_block_types): - prev_output_channel = output_channel - output_channel = ( - reversed_block_out_channels[i + 1] if i < len(up_block_types) - 1 else final_upsample_channels - ) - - is_final_block = i == len(block_out_channels) - 1 - - up_block = get_up_block( - up_block_type, - num_layers=layers_per_block, - in_channels=prev_output_channel, - out_channels=output_channel, - temb_channels=block_out_channels[0], - add_upsample=not is_final_block, - ) - self.up_blocks.append(up_block) - prev_output_channel = output_channel - - # out - num_groups_out = norm_num_groups if norm_num_groups is not None else min(block_out_channels[0] // 4, 32) - self.out_block = get_out_block( - out_block_type=out_block_type, - num_groups_out=num_groups_out, - embed_dim=block_out_channels[0], - 
out_channels=out_channels, - act_fn=act_fn, - fc_dim=block_out_channels[-1] // 4, - ) - - def forward( - self, - sample: torch.FloatTensor, - timestep: Union[torch.Tensor, float, int], - return_dict: bool = True, - ) -> Union[UNet1DOutput, Tuple]: - r""" - Args: - sample (`torch.FloatTensor`): `(batch_size, sample_size, num_channels)` noisy inputs tensor - timestep (`torch.FloatTensor` or `float` or `int): (batch) timesteps - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~models.unet_1d.UNet1DOutput`] instead of a plain tuple. - - Returns: - [`~models.unet_1d.UNet1DOutput`] or `tuple`: [`~models.unet_1d.UNet1DOutput`] if `return_dict` is True, - otherwise a `tuple`. When returning a tuple, the first element is the sample tensor. - """ - - # 1. time - timesteps = timestep - if not torch.is_tensor(timesteps): - timesteps = torch.tensor([timesteps], dtype=torch.long, device=sample.device) - elif torch.is_tensor(timesteps) and len(timesteps.shape) == 0: - timesteps = timesteps[None].to(sample.device) - - timestep_embed = self.time_proj(timesteps) - if self.config.use_timestep_embedding: - timestep_embed = self.time_mlp(timestep_embed) - else: - timestep_embed = timestep_embed[..., None] - timestep_embed = timestep_embed.repeat([1, 1, sample.shape[2]]).to(sample.dtype) - - # 2. down - down_block_res_samples = () - for downsample_block in self.down_blocks: - sample, res_samples = downsample_block(hidden_states=sample, temb=timestep_embed) - down_block_res_samples += res_samples - - # 3. mid - if self.mid_block: - sample = self.mid_block(sample, timestep_embed) - - # 4. up - for i, upsample_block in enumerate(self.up_blocks): - res_samples = down_block_res_samples[-1:] - down_block_res_samples = down_block_res_samples[:-1] - sample = upsample_block(sample, res_hidden_states_tuple=res_samples, temb=timestep_embed) - - # 5. post-process - if self.out_block: - sample = self.out_block(sample, timestep_embed) - - if not return_dict: - return (sample,) - - return UNet1DOutput(sample=sample) diff --git a/spaces/Jamkonams/AutoGPT/autogpt/configurator.py b/spaces/Jamkonams/AutoGPT/autogpt/configurator.py deleted file mode 100644 index 1dc3be124f638b8859eb459bcb2d46696f62e2b7..0000000000000000000000000000000000000000 --- a/spaces/Jamkonams/AutoGPT/autogpt/configurator.py +++ /dev/null @@ -1,134 +0,0 @@ -"""Configurator module.""" -import click -from colorama import Back, Fore, Style - -from autogpt import utils -from autogpt.config import Config -from autogpt.logs import logger -from autogpt.memory import get_supported_memory_backends - -CFG = Config() - - -def create_config( - continuous: bool, - continuous_limit: int, - ai_settings_file: str, - skip_reprompt: bool, - speak: bool, - debug: bool, - gpt3only: bool, - gpt4only: bool, - memory_type: str, - browser_name: str, - allow_downloads: bool, - skip_news: bool, -) -> None: - """Updates the config object with the given arguments. 
- - Args: - continuous (bool): Whether to run in continuous mode - continuous_limit (int): The number of times to run in continuous mode - ai_settings_file (str): The path to the ai_settings.yaml file - skip_reprompt (bool): Whether to skip the re-prompting messages at the beginning of the script - speak (bool): Whether to enable speak mode - debug (bool): Whether to enable debug mode - gpt3only (bool): Whether to enable GPT3.5 only mode - gpt4only (bool): Whether to enable GPT4 only mode - memory_type (str): The type of memory backend to use - browser_name (str): The name of the browser to use when using selenium to scrape the web - allow_downloads (bool): Whether to allow Auto-GPT to download files natively - skips_news (bool): Whether to suppress the output of latest news on startup - """ - CFG.set_debug_mode(False) - CFG.set_continuous_mode(False) - CFG.set_speak_mode(False) - - if debug: - logger.typewriter_log("Debug Mode: ", Fore.GREEN, "ENABLED") - CFG.set_debug_mode(True) - - if continuous: - logger.typewriter_log("Continuous Mode: ", Fore.RED, "ENABLED") - logger.typewriter_log( - "WARNING: ", - Fore.RED, - "Continuous mode is not recommended. It is potentially dangerous and may" - " cause your AI to run forever or carry out actions you would not usually" - " authorise. Use at your own risk.", - ) - CFG.set_continuous_mode(True) - - if continuous_limit: - logger.typewriter_log( - "Continuous Limit: ", Fore.GREEN, f"{continuous_limit}" - ) - CFG.set_continuous_limit(continuous_limit) - - # Check if continuous limit is used without continuous mode - if continuous_limit and not continuous: - raise click.UsageError("--continuous-limit can only be used with --continuous") - - if speak: - logger.typewriter_log("Speak Mode: ", Fore.GREEN, "ENABLED") - CFG.set_speak_mode(True) - - if gpt3only: - logger.typewriter_log("GPT3.5 Only Mode: ", Fore.GREEN, "ENABLED") - CFG.set_smart_llm_model(CFG.fast_llm_model) - - if gpt4only: - logger.typewriter_log("GPT4 Only Mode: ", Fore.GREEN, "ENABLED") - CFG.set_fast_llm_model(CFG.smart_llm_model) - - if memory_type: - supported_memory = get_supported_memory_backends() - chosen = memory_type - if chosen not in supported_memory: - logger.typewriter_log( - "ONLY THE FOLLOWING MEMORY BACKENDS ARE SUPPORTED: ", - Fore.RED, - f"{supported_memory}", - ) - logger.typewriter_log("Defaulting to: ", Fore.YELLOW, CFG.memory_backend) - else: - CFG.memory_backend = chosen - - if skip_reprompt: - logger.typewriter_log("Skip Re-prompt: ", Fore.GREEN, "ENABLED") - CFG.skip_reprompt = True - - if ai_settings_file: - file = ai_settings_file - - # Validate file - (validated, message) = utils.validate_yaml_file(file) - if not validated: - logger.typewriter_log("FAILED FILE VALIDATION", Fore.RED, message) - logger.double_check() - exit(1) - - logger.typewriter_log("Using AI Settings File:", Fore.GREEN, file) - CFG.ai_settings_file = file - CFG.skip_reprompt = True - - if allow_downloads: - logger.typewriter_log("Native Downloading:", Fore.GREEN, "ENABLED") - logger.typewriter_log( - "WARNING: ", - Fore.YELLOW, - f"{Back.LIGHTYELLOW_EX}Auto-GPT will now be able to download and save files to your machine.{Back.RESET} " - + "It is recommended that you monitor any files it downloads carefully.", - ) - logger.typewriter_log( - "WARNING: ", - Fore.YELLOW, - f"{Back.RED + Style.BRIGHT}ALWAYS REMEMBER TO NEVER OPEN FILES YOU AREN'T SURE OF!{Style.RESET_ALL}", - ) - CFG.allow_downloads = True - - if skip_news: - CFG.skip_news = True - - if browser_name: - CFG.selenium_web_browser 
= browser_name diff --git a/spaces/Jaskirat-04/Food-Personalisation/app.py b/spaces/Jaskirat-04/Food-Personalisation/app.py deleted file mode 100644 index 4dafce25e981f4274b919c611d11b91c379de7a7..0000000000000000000000000000000000000000 --- a/spaces/Jaskirat-04/Food-Personalisation/app.py +++ /dev/null @@ -1,66 +0,0 @@ -import streamlit as st -import cv2 -import json -from deepface import DeepFace -from tensorflow.keras.models import model_from_json -import numpy as np - -# Emotion mappings -mappings = { - 'angry': ["Hot coffee - Help them cool down", "Apple slices - Healthy snack", "Bottle of water - Hydration", "Drive-thru only - Get out faster", "Veggie wrap - Lighter option"], - 'sad': ["Hot fudge sundae - Sweet treat", "French fries - Comfort food", "Chicken nuggets - Familiar favorite", "Chili - Hearty and soothing", "Baked apple pie - Nostalgic dessert"], - 'happy': ["McFlurry - Indulgent and celebratory", "Bacon smokehouse burger - Premium and new", "McCafe frappé - Sweet cold drink", "6-piece chicken nuggets - Shareable", "Shake and fries - Tasty combo"], - 'surprise': ["Fun drink specials", "dessert", "New menu items", "Free upsize"], - 'tired': ["Iced coffee - Caffeine boost", "Egg McMuffin - High protein", "Fruit yogurt parfait - Light and energizing", "Oatmeal - Whole grains for lasting energy", "Hash browns - Carbs and salt for fast energy"], - 'neutral': ["Hamburger - Classic choice", "Cheeseburger - Familiar favorite", "Medium fries - Typical go-to", "6-piece nuggets - Satisfying option", "Any drink - Quench their thirst"], - 'fearful': ["Bottled water - Hydration and calm", "Apple slices - Healthy light snack", "Yogurt parfait - Gentle on stomach", "Oatmeal - Warm and comforting", "Salad - Fresh and simple"] -} - -@st.cache_resource -def load_model(): - # Load model JSON - with open('model.json', 'r') as f: - model = model_from_json(f.read()) - - return model - - -# Set background -import streamlit as st -from PIL import Image - -image = Image.open('shutterstock_download.jpg') - -st.image(image) -with st.spinner('Loading Food Recommender...'): - model = load_model() -# Load the background image -# page_bg=""" -# -# """ - -# # Set the background image -# st.markdown(page_bg,unsafe_allow_html=True -# ) -st.title("Your Food Buddy") - -image = st.file_uploader("Upload an image") - - - -if image: - img = cv2.imdecode(np.fromstring(image.read(), np.uint8), cv2.IMREAD_UNCHANGED) - # model = load_model() - st.image(image, caption='Uploaded Image') - result = DeepFace.analyze(img, enforce_detection=True) - dominant = max(result[0]['emotion'], key=result[0]['emotion'].get) - st.success(f"The person is {dominant}") - - st.header("Recommended Menu Items:") - for item in mappings[dominant]: - st.write("- " + item) diff --git a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/__init__.py b/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/__init__.py deleted file mode 100644 index c7ffcccd7fc0f33b59d99d73d0436d60e561b0fc..0000000000000000000000000000000000000000 --- a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# https://github.com/xinntao/BasicSR -# flake8: noqa -from .archs import * -from .data import * -from .losses import * -from .metrics import * -from .models import * -from .ops import * -from .train import * -from .utils import * -from .version import __gitsha__, __version__ diff --git a/spaces/JeffJing/ZookChatBot/steamship/data/plugin/index_plugin_instance.py b/spaces/JeffJing/ZookChatBot/steamship/data/plugin/index_plugin_instance.py 
deleted file mode 100644 index 8501c7e7c3fcd0c2f798ec603426576aa1885deb..0000000000000000000000000000000000000000 --- a/spaces/JeffJing/ZookChatBot/steamship/data/plugin/index_plugin_instance.py +++ /dev/null @@ -1,202 +0,0 @@ -from typing import Any, Dict, List, Optional, Union, cast - -from pydantic import Field - -from steamship.base.client import Client -from steamship.base.error import SteamshipError -from steamship.base.model import CamelModel -from steamship.base.tasks import Task -from steamship.data.embeddings import EmbeddedItem, EmbeddingIndex, QueryResult, QueryResults -from steamship.data.plugin.plugin_instance import PluginInstance -from steamship.data.tags.tag import Tag - - -class EmbedderInvocation(CamelModel): - """The parameters capable of creating/fetching an Embedder (Tagger) Plugin Instance.""" - - plugin_handle: str - instance_handle: Optional[str] = None - config: Optional[Dict[str, Any]] = None - version: Optional[str] = None - fetch_if_exists: bool = True - - -class SearchResult(CamelModel): - """A single scored search result -- which is always a tag. - - This class is intended to eventually replace the QueryResult object currently used with the Embedding layer.""" - - tag: Optional[Tag] = None - score: Optional[float] = None - - @staticmethod - def from_query_result(query_result: QueryResult) -> "SearchResult": - hit = query_result.value - value = hit.metadata or {} - - # To make this change Python-only, some fields are stached in `hit.metadata`. - # This has the temporary consequence of these keys not being safe. This will be resolved when we spread - # this refactor to the engine. - block_id = None - if "_block_id" in value: - block_id = value.get("_block_id") - del value["_block_id"] - - file_id = None - if "_file_id" in value: - file_id = value.get("_file_id") - del value["_file_id"] - - tag_id = None - if "_tag_id" in value: - tag_id = value.get("_tag_id") - del value["_tag_id"] - - tag = Tag( - id=hit.id, - kind=hit.external_type, - name=hit.external_id, - block_id=block_id, - tag_id=tag_id, - file_id=file_id, - text=hit.value, - value=value, - ) - return SearchResult(tag=tag, score=query_result.score) - - -class SearchResults(CamelModel): - """Results of a search operation -- which is always a list of ranked tag. - - This class is intended to eventually replace the QueryResults object currently used with the Embedding layer. - TODO: add in paging support.""" - - items: List[SearchResult] = None - - @staticmethod - def from_query_results(query_results: QueryResults) -> "SearchResults": - items = [SearchResult.from_query_result(qr) for qr in query_results.items or []] - return SearchResults(items=items) - - -class EmbeddingIndexPluginInstance(PluginInstance): - """A persistent, read-optimized index over embeddings. - - This is currently implemented as an object which behaves like a PluginInstance even though - it isn't from an implementation perspective on the back-end. - """ - - client: Client = Field(None, exclude=True) - embedder: PluginInstance = Field(None, exclude=True) - index: EmbeddingIndex = Field(None, exclude=True) - - def delete(self): - """Delete the EmbeddingIndexPluginInstnace. - - For now, we will have this correspond to deleting the `index` but not the `embedder`. This is likely - a temporary design. 
- """ - return self.index.delete() - - def insert(self, tags: Union[Tag, List[Tag]], allow_long_records: bool = False): - """Insert tags into the embedding index.""" - - # Make a list if a single tag was provided - if isinstance(tags, Tag): - tags = [tags] - - for tag in tags: - if not tag.text: - raise SteamshipError( - message="Please set the `text` field of your Tag before inserting it into an index." - ) - - # Now we need to prepare an EmbeddingIndexItem of a particular shape that encodes the tag. - metadata = tag.value or {} - if not isinstance(metadata, dict): - raise SteamshipError( - "Only Tags with a dict or None value can be embedded. " - + f"This tag had a value of type: {type(tag.value)}" - ) - - # To make this change Python-only, some fields are stached in `hit.metadata`. - # This has the temporary consequence of these keys not being safe. This will be resolved when we spread - # this refactor to the engine. - metadata["_file_id"] = tag.file_id - metadata["_tag_id"] = tag.id - metadata["_block_id"] = tag.block_id - tag.value = metadata - - embedded_items = [ - EmbeddedItem( - value=tag.text, - external_id=tag.name, - external_type=tag.kind, - metadata=tag.value, - ) - for tag in tags - ] - - # We always reindex in this new style; to not do so is to expose details (when embedding occurrs) we'd rather - # not have users exercise control over. - self.index.insert_many(embedded_items, reindex=True, allow_long_records=allow_long_records) - - def search(self, query: str, k: Optional[int] = None) -> Task[SearchResults]: - """Search the embedding index. - - This wrapper implementation simply projects the `Hit` data structure into a `Tag` - """ - if query is None or len(query.strip()) == 0: - raise SteamshipError(message="Query field must be non-empty.") - - # Metadata will always be included; this is the equivalent of Tag.value - wrapped_result = self.index.search(query, k=k, include_metadata=True) - - # For now, we'll have to do this synchronously since we're trying to avoid changing things on the engine. - wrapped_result.wait() - - # We're going to do a switcheroo on the output type of Task here. - search_results = SearchResults.from_query_results(wrapped_result.output) - wrapped_result.output = search_results - - # Return the index's search result, but projected into the data structure of Tags - return cast(Task[SearchResults], wrapped_result) - - @staticmethod - def create( - client: Any, - plugin_id: str = None, - plugin_handle: str = None, - plugin_version_id: str = None, - plugin_version_handle: str = None, - handle: str = None, - fetch_if_exists: bool = True, - config: Dict[str, Any] = None, - ) -> "EmbeddingIndexPluginInstance": - """Create a class that simulates an embedding index re-implemented as a PluginInstance.""" - - # Perform a manual config validation check since the configuration isn't actually being sent up to the Engine. - # In this case, an embedding index has special behavior which is to instantiate/fetch an Embedder that it can use. - if "embedder" not in config: - raise SteamshipError( - message="Config key missing. Please include a field named `embedder` with type `EmbedderInvocation`." - ) - - # Just for pydantic validation. 
- embedder_invocation = EmbedderInvocation.parse_obj(config["embedder"]) - - # Create the embedder - embedder = client.use_plugin(**embedder_invocation.dict()) - - # Create the index - index = EmbeddingIndex.create( - client=client, - handle=handle, - embedder_plugin_instance_handle=embedder.handle, - fetch_if_exists=fetch_if_exists, - ) - - # Now return the plugin wrapper - return EmbeddingIndexPluginInstance( - id=index.id, handle=index.handle, index=index, embedder=embedder - ) diff --git a/spaces/JingyeChen22/TextDiffuser/model/text_segmenter/unet_parts.py b/spaces/JingyeChen22/TextDiffuser/model/text_segmenter/unet_parts.py deleted file mode 100644 index 393d16b537c3a44516b192261480037c126f12d6..0000000000000000000000000000000000000000 --- a/spaces/JingyeChen22/TextDiffuser/model/text_segmenter/unet_parts.py +++ /dev/null @@ -1,82 +0,0 @@ -# ------------------------------------------ -# TextDiffuser: Diffusion Models as Text Painters -# Paper Link: https://arxiv.org/abs/2305.10855 -# Code Link: https://github.com/microsoft/unilm/tree/master/textdiffuser -# Copyright (c) Microsoft Corporation. -# This file define the architecture of unet. -# ------------------------------------------ - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class DoubleConv(nn.Module): - """(convolution => [BN] => ReLU) * 2""" - - def __init__(self, in_channels, out_channels, mid_channels=None): - super().__init__() - if not mid_channels: - mid_channels = out_channels - self.double_conv = nn.Sequential( - nn.Conv2d(in_channels, mid_channels, kernel_size=3, padding=1), - nn.BatchNorm2d(mid_channels), - nn.ReLU(inplace=True), - nn.Conv2d(mid_channels, out_channels, kernel_size=3, padding=1), - nn.BatchNorm2d(out_channels), - nn.ReLU(inplace=True) - ) - - def forward(self, x): - return self.double_conv(x) - - -class Down(nn.Module): - """Downscaling with maxpool then double conv""" - - def __init__(self, in_channels, out_channels): - super().__init__() - self.maxpool_conv = nn.Sequential( - nn.MaxPool2d(2), - DoubleConv(in_channels, out_channels) - ) - - def forward(self, x): - return self.maxpool_conv(x) - - -class Up(nn.Module): - """Upscaling then double conv""" - - def __init__(self, in_channels, out_channels, bilinear=True): - super().__init__() - - # if bilinear, use the normal convolutions to reduce the number of channels - if bilinear: - self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True) - self.conv = DoubleConv(in_channels, out_channels, in_channels // 2) - else: - self.up = nn.ConvTranspose2d(in_channels , in_channels // 2, kernel_size=2, stride=2) - self.conv = DoubleConv(in_channels, out_channels) - - - def forward(self, x1, x2): - x1 = self.up(x1) - # input is CHW - diffY = x2.size()[2] - x1.size()[2] - diffX = x2.size()[3] - x1.size()[3] - - x1 = F.pad(x1, [diffX // 2, diffX - diffX // 2, - diffY // 2, diffY - diffY // 2]) - - x = torch.cat([x2, x1], dim=1) - return self.conv(x) - - -class OutConv(nn.Module): - def __init__(self, in_channels, out_channels): - super(OutConv, self).__init__() - self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1) - - def forward(self, x): - return self.conv(x) diff --git a/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/ONNXVITS_models.py b/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/ONNXVITS_models.py deleted file mode 100644 index acd00238895d57ba878fd0211d5654250fb10061..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/ONNXVITS_models.py 
+++ /dev/null @@ -1,509 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import ONNXVITS_modules as modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - self.w = None - self.reverse = None - self.noise_scale = None - def forward(self, x, x_mask, g=None): - w = self.w - reverse = self.reverse - noise_scale = self.noise_scale - - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - 
-class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - self.reverse = None - def forward(self, x, x_mask, g=None): - reverse = self.reverse - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask # x_in : [b, c, t] -> [b, h, t] - x = self.enc(x, x_mask, g=g) # x_in : [b, h, t], g : [b, h, 1], x = x_in + g - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask # z, m, logs : [b, h, t] - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = 
len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x 
= F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - - if n_speakers > 0: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, sid=None, noise_scale=.667, length_scale=1, noise_scale_w=.8, max_len=None): - torch.onnx.export( - self.enc_p, - (x, x_lengths), - "ONNX_net/enc_p.onnx", - input_names=["x", "x_lengths"], - output_names=["xout", "m_p", "logs_p", "x_mask"], - dynamic_axes={ - "x" : [1], - "xout" : [2], - "m_p" : [2], - "logs_p" : [2], - "x_mask" : [2] - }, - verbose=True, - ) - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - self.dp.reverse = True - self.dp.noise_scale = noise_scale_w - torch.onnx.export( - self.dp, - (x, x_mask, g), - "ONNX_net/dp.onnx", - input_names=["x", "x_mask", "g"], - output_names=["logw"], - dynamic_axes={ - "x" : [2], - 
"x_mask" : [2], - "logw" : [2] - }, - verbose=True, - ) - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - - self.flow.reverse = True - torch.onnx.export( - self.flow, - (z_p, y_mask, g), - "ONNX_net/flow.onnx", - input_names=["z_p", "y_mask", "g"], - output_names=["z"], - dynamic_axes={ - "z_p" : [2], - "y_mask" : [2], - "z" : [2] - }, - verbose=True, - ) - z = self.flow(z_p, y_mask, g=g) - z_in = (z * y_mask)[:,:,:max_len] - - torch.onnx.export( - self.dec, - (z_in, g), - "ONNX_net/dec.onnx", - input_names=["z_in", "g"], - output_names=["o"], - dynamic_axes={ - "z_in" : [2], - "o" : [2] - }, - verbose=True, - ) - o = self.dec(z_in, g=g) - return o diff --git a/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/dataset.py b/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/dataset.py deleted file mode 100644 index cfd01a174978d97180a897e40cb59ecadec1d12e..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/dataset.py +++ /dev/null @@ -1,183 +0,0 @@ -import os -import random - -import numpy as np -import torch -import torch.utils.data -from tqdm import tqdm - -from . 
import spec_utils - - -class VocalRemoverValidationSet(torch.utils.data.Dataset): - def __init__(self, patch_list): - self.patch_list = patch_list - - def __len__(self): - return len(self.patch_list) - - def __getitem__(self, idx): - path = self.patch_list[idx] - data = np.load(path) - - X, y = data["X"], data["y"] - - X_mag = np.abs(X) - y_mag = np.abs(y) - - return X_mag, y_mag - - -def make_pair(mix_dir, inst_dir): - input_exts = [".wav", ".m4a", ".mp3", ".mp4", ".flac"] - - X_list = sorted( - [ - os.path.join(mix_dir, fname) - for fname in os.listdir(mix_dir) - if os.path.splitext(fname)[1] in input_exts - ] - ) - y_list = sorted( - [ - os.path.join(inst_dir, fname) - for fname in os.listdir(inst_dir) - if os.path.splitext(fname)[1] in input_exts - ] - ) - - filelist = list(zip(X_list, y_list)) - - return filelist - - -def train_val_split(dataset_dir, split_mode, val_rate, val_filelist): - if split_mode == "random": - filelist = make_pair( - os.path.join(dataset_dir, "mixtures"), - os.path.join(dataset_dir, "instruments"), - ) - - random.shuffle(filelist) - - if len(val_filelist) == 0: - val_size = int(len(filelist) * val_rate) - train_filelist = filelist[:-val_size] - val_filelist = filelist[-val_size:] - else: - train_filelist = [ - pair for pair in filelist if list(pair) not in val_filelist - ] - elif split_mode == "subdirs": - if len(val_filelist) != 0: - raise ValueError( - "The `val_filelist` option is not available in `subdirs` mode" - ) - - train_filelist = make_pair( - os.path.join(dataset_dir, "training/mixtures"), - os.path.join(dataset_dir, "training/instruments"), - ) - - val_filelist = make_pair( - os.path.join(dataset_dir, "validation/mixtures"), - os.path.join(dataset_dir, "validation/instruments"), - ) - - return train_filelist, val_filelist - - -def augment(X, y, reduction_rate, reduction_mask, mixup_rate, mixup_alpha): - perm = np.random.permutation(len(X)) - for i, idx in enumerate(tqdm(perm)): - if np.random.uniform() < reduction_rate: - y[idx] = spec_utils.reduce_vocal_aggressively( - X[idx], y[idx], reduction_mask - ) - - if np.random.uniform() < 0.5: - # swap channel - X[idx] = X[idx, ::-1] - y[idx] = y[idx, ::-1] - if np.random.uniform() < 0.02: - # mono - X[idx] = X[idx].mean(axis=0, keepdims=True) - y[idx] = y[idx].mean(axis=0, keepdims=True) - if np.random.uniform() < 0.02: - # inst - X[idx] = y[idx] - - if np.random.uniform() < mixup_rate and i < len(perm) - 1: - lam = np.random.beta(mixup_alpha, mixup_alpha) - X[idx] = lam * X[idx] + (1 - lam) * X[perm[i + 1]] - y[idx] = lam * y[idx] + (1 - lam) * y[perm[i + 1]] - - return X, y - - -def make_padding(width, cropsize, offset): - left = offset - roi_size = cropsize - left * 2 - if roi_size == 0: - roi_size = cropsize - right = roi_size - (width % roi_size) + left - - return left, right, roi_size - - -def make_training_set(filelist, cropsize, patches, sr, hop_length, n_fft, offset): - len_dataset = patches * len(filelist) - - X_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64) - y_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64) - - for i, (X_path, y_path) in enumerate(tqdm(filelist)): - X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft) - coef = np.max([np.abs(X).max(), np.abs(y).max()]) - X, y = X / coef, y / coef - - l, r, roi_size = make_padding(X.shape[2], cropsize, offset) - X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant") - y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant") - - starts = 
np.random.randint(0, X_pad.shape[2] - cropsize, patches) - ends = starts + cropsize - for j in range(patches): - idx = i * patches + j - X_dataset[idx] = X_pad[:, :, starts[j] : ends[j]] - y_dataset[idx] = y_pad[:, :, starts[j] : ends[j]] - - return X_dataset, y_dataset - - -def make_validation_set(filelist, cropsize, sr, hop_length, n_fft, offset): - patch_list = [] - patch_dir = "cs{}_sr{}_hl{}_nf{}_of{}".format( - cropsize, sr, hop_length, n_fft, offset - ) - os.makedirs(patch_dir, exist_ok=True) - - for i, (X_path, y_path) in enumerate(tqdm(filelist)): - basename = os.path.splitext(os.path.basename(X_path))[0] - - X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft) - coef = np.max([np.abs(X).max(), np.abs(y).max()]) - X, y = X / coef, y / coef - - l, r, roi_size = make_padding(X.shape[2], cropsize, offset) - X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant") - y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant") - - len_dataset = int(np.ceil(X.shape[2] / roi_size)) - for j in range(len_dataset): - outpath = os.path.join(patch_dir, "{}_p{}.npz".format(basename, j)) - start = j * roi_size - if not os.path.exists(outpath): - np.savez( - outpath, - X=X_pad[:, :, start : start + cropsize], - y=y_pad[:, :, start : start + cropsize], - ) - patch_list.append(outpath) - - return VocalRemoverValidationSet(patch_list) diff --git a/spaces/Keenlol/Wood_Classification/README.md b/spaces/Keenlol/Wood_Classification/README.md deleted file mode 100644 index 817c8dcea7c1db1217d85b748bf74b937ddee3b9..0000000000000000000000000000000000000000 --- a/spaces/Keenlol/Wood_Classification/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Wood Classification -emoji: 🏃 -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: unknown ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/KenjieDec/GPEN/sr_model/arch_util.py b/spaces/KenjieDec/GPEN/sr_model/arch_util.py deleted file mode 100644 index ce5b9d92f418d3f8b5b8887a24491f65660b33f9..0000000000000000000000000000000000000000 --- a/spaces/KenjieDec/GPEN/sr_model/arch_util.py +++ /dev/null @@ -1,125 +0,0 @@ -import math -import torch -from torch import nn as nn -from torch.nn import functional as F -from torch.nn import init as init -from torch.nn.modules.batchnorm import _BatchNorm - -@torch.no_grad() -def default_init_weights(module_list, scale=1, bias_fill=0, **kwargs): - """Initialize network weights. - - Args: - module_list (list[nn.Module] | nn.Module): Modules to be initialized. - scale (float): Scale initialized weights, especially for residual - blocks. Default: 1. - bias_fill (float): The value to fill bias. Default: 0 - kwargs (dict): Other arguments for initialization function. - """ - if not isinstance(module_list, list): - module_list = [module_list] - for module in module_list: - for m in module.modules(): - if isinstance(m, nn.Conv2d): - init.kaiming_normal_(m.weight, **kwargs) - m.weight.data *= scale - if m.bias is not None: - m.bias.data.fill_(bias_fill) - elif isinstance(m, nn.Linear): - init.kaiming_normal_(m.weight, **kwargs) - m.weight.data *= scale - if m.bias is not None: - m.bias.data.fill_(bias_fill) - elif isinstance(m, _BatchNorm): - init.constant_(m.weight, 1) - if m.bias is not None: - m.bias.data.fill_(bias_fill) - - -def make_layer(basic_block, num_basic_block, **kwarg): - """Make layers by stacking the same blocks. 
- - Args: - basic_block (nn.module): nn.module class for basic block. - num_basic_block (int): number of blocks. - - Returns: - nn.Sequential: Stacked blocks in nn.Sequential. - """ - layers = [] - for _ in range(num_basic_block): - layers.append(basic_block(**kwarg)) - return nn.Sequential(*layers) - - -class ResidualBlockNoBN(nn.Module): - """Residual block without BN. - - It has a style of: - ---Conv-ReLU-Conv-+- - |________________| - - Args: - num_feat (int): Channel number of intermediate features. - Default: 64. - res_scale (float): Residual scale. Default: 1. - pytorch_init (bool): If set to True, use pytorch default init, - otherwise, use default_init_weights. Default: False. - """ - - def __init__(self, num_feat=64, res_scale=1, pytorch_init=False): - super(ResidualBlockNoBN, self).__init__() - self.res_scale = res_scale - self.conv1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=True) - self.conv2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=True) - self.relu = nn.ReLU(inplace=True) - - if not pytorch_init: - default_init_weights([self.conv1, self.conv2], 0.1) - - def forward(self, x): - identity = x - out = self.conv2(self.relu(self.conv1(x))) - return identity + out * self.res_scale - - -class Upsample(nn.Sequential): - """Upsample module. - - Args: - scale (int): Scale factor. Supported scales: 2^n and 3. - num_feat (int): Channel number of intermediate features. - """ - - def __init__(self, scale, num_feat): - m = [] - if (scale & (scale - 1)) == 0: # scale = 2^n - for _ in range(int(math.log(scale, 2))): - m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1)) - m.append(nn.PixelShuffle(2)) - elif scale == 3: - m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1)) - m.append(nn.PixelShuffle(3)) - else: - raise ValueError(f'scale {scale} is not supported. ' - 'Supported scales: 2^n and 3.') - super(Upsample, self).__init__(*m) - -# TODO: may write a cpp file -def pixel_unshuffle(x, scale): - """ Pixel unshuffle. - - Args: - x (Tensor): Input feature with shape (b, c, hh, hw). - scale (int): Downsample ratio. - - Returns: - Tensor: the pixel unshuffled feature. 
- """ - b, c, hh, hw = x.size() - out_channel = c * (scale**2) - assert hh % scale == 0 and hw % scale == 0 - h = hh // scale - w = hw // scale - x_view = x.view(b, c, h, scale, w, scale) - return x_view.permute(0, 1, 3, 5, 2, 4).reshape(b, out_channel, h, w) \ No newline at end of file diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/web/static/js/recorder-core.js b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/web/static/js/recorder-core.js deleted file mode 100644 index 30a58e819da6e1907f2f6f91cc564f9444207af6..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/web/static/js/recorder-core.js +++ /dev/null @@ -1,950 +0,0 @@ -/* -录音 -https://github.com/xiangyuecn/Recorder -*/ -(function(factory){ - factory(window); - //umd returnExports.js - if(typeof(define)=='function' && define.amd){ - define(function(){ - return Recorder; - }); - }; - if(typeof(module)=='object' && module.exports){ - module.exports=Recorder; - }; -}(function(window){ -"use strict"; - -//兼容环境 -var LM="2021-08-03 20:01:03"; -var NOOP=function(){}; -//end 兼容环境 ****从以下开始copy源码***** - -var Recorder=function(set){ - return new initFn(set); -}; -//是否已经打开了全局的麦克风录音,所有工作都已经准备好了,就等接收音频数据了 -Recorder.IsOpen=function(){ - var stream=Recorder.Stream; - if(stream){ - var tracks=stream.getTracks&&stream.getTracks()||stream.audioTracks||[]; - var track=tracks[0]; - if(track){ - var state=track.readyState; - return state=="live"||state==track.LIVE; - }; - }; - return false; -}; -/*H5录音时的AudioContext缓冲大小。会影响H5录音时的onProcess调用速率,相对于AudioContext.sampleRate=48000时,4096接近12帧/s,调节此参数可生成比较流畅的回调动画。 - 取值256, 512, 1024, 2048, 4096, 8192, or 16384 - 注意,取值不能过低,2048开始不同浏览器可能回调速率跟不上造成音质问题。 - 一般无需调整,调整后需要先close掉已打开的录音,再open时才会生效。 -*/ -Recorder.BufferSize=4096; -//销毁已持有的所有全局资源,当要彻底移除Recorder时需要显式的调用此方法 -Recorder.Destroy=function(){ - CLog("Recorder Destroy"); - Disconnect();//断开可能存在的全局Stream、资源 - - for(var k in DestroyList){ - DestroyList[k](); - }; -}; -var DestroyList={}; -//登记一个需要销毁全局资源的处理方法 -Recorder.BindDestroy=function(key,call){ - DestroyList[key]=call; -}; -//判断浏览器是否支持录音,随时可以调用。注意:仅仅是检测浏览器支持情况,不会判断和调起用户授权,不会判断是否支持特定格式录音。 -Recorder.Support=function(){ - var AC=window.AudioContext; - if(!AC){ - AC=window.webkitAudioContext; - }; - if(!AC){ - return false; - }; - var scope=navigator.mediaDevices||{}; - if(!scope.getUserMedia){ - scope=navigator; - scope.getUserMedia||(scope.getUserMedia=scope.webkitGetUserMedia||scope.mozGetUserMedia||scope.msGetUserMedia); - }; - if(!scope.getUserMedia){ - return false; - }; - - Recorder.Scope=scope; - if(!Recorder.Ctx||Recorder.Ctx.state=="closed"){ - //不能反复构造,低版本number of hardware contexts reached maximum (6) - Recorder.Ctx=new AC(); - - Recorder.BindDestroy("Ctx",function(){ - var ctx=Recorder.Ctx; - if(ctx&&ctx.close){//能关掉就关掉,关不掉就保留着 - ctx.close(); - Recorder.Ctx=0; - }; - }); - }; - return true; -}; -/*初始化H5音频采集连接。如果自行提供了sourceStream将只进行一次简单的连接处理。如果是普通麦克风录音,此时的Stream是全局的,Safari上断开后就无法再次进行连接使用,表现为静音,因此使用全部使用全局处理避免调用到disconnect;全局处理也有利于屏蔽底层细节,start时无需再调用底层接口,提升兼容、可靠性。*/ -var Connect=function(streamStore){ - streamStore=streamStore||Recorder; - var bufferSize=streamStore.BufferSize||Recorder.BufferSize; - - var ctx=Recorder.Ctx,stream=streamStore.Stream; - var media=stream._m=ctx.createMediaStreamSource(stream); - var process=stream._p=(ctx.createScriptProcessor||ctx.createJavaScriptNode).call(ctx,bufferSize,1,1);//单声道,省的数据处理复杂 - - media.connect(process); - process.connect(ctx.destination); - - var calls=stream._call; - 
process.onaudioprocess=function(e){ - for(var k0 in calls){//has item - var o=e.inputBuffer.getChannelData(0);//块是共享的,必须复制出来 - var size=o.length; - - var pcm=new Int16Array(size); - var sum=0; - for(var j=0;j=pcmSampleRate时不会进行任何处理,小于时会进行重新采样 -prevChunkInfo:{} 可选,上次调用时的返回值,用于连续转换,本次调用将从上次结束位置开始进行处理。或可自行定义一个ChunkInfo从pcmDatas指定的位置开始进行转换 -option:{ 可选,配置项 - frameSize:123456 帧大小,每帧的PCM Int16的数量,采样率转换后的pcm长度为frameSize的整数倍,用于连续转换。目前仅在mp3格式时才有用,frameSize取值为1152,这样编码出来的mp3时长和pcm的时长完全一致,否则会因为mp3最后一帧录音不够填满时添加填充数据导致mp3的时长变长。 - frameType:"" 帧类型,一般为rec.set.type,提供此参数时无需提供frameSize,会自动使用最佳的值给frameSize赋值,目前仅支持mp3=1152(MPEG1 Layer3的每帧采采样数),其他类型=1。 - 以上两个参数用于连续转换时使用,最多使用一个,不提供时不进行帧的特殊处理,提供时必须同时提供prevChunkInfo才有作用。最后一段数据处理时无需提供帧大小以便输出最后一丁点残留数据。 - } - -返回ChunkInfo:{ - //可定义,从指定位置开始转换到结尾 - index:0 pcmDatas已处理到的索引 - offset:0.0 已处理到的index对应的pcm中的偏移的下一个位置 - - //仅作为返回值 - frameNext:null||[Int16,...] 下一帧的部分数据,frameSize设置了的时候才可能会有 - sampleRate:16000 结果的采样率,<=newSampleRate - data:[Int16,...] 转换后的PCM结果;如果是连续转换,并且pcmDatas中并没有新数据时,data的长度可能为0 -} -*/ -Recorder.SampleData=function(pcmDatas,pcmSampleRate,newSampleRate,prevChunkInfo,option){ - prevChunkInfo||(prevChunkInfo={}); - var index=prevChunkInfo.index||0; - var offset=prevChunkInfo.offset||0; - - var frameNext=prevChunkInfo.frameNext||[]; - option||(option={}); - var frameSize=option.frameSize||1; - if(option.frameType){ - frameSize=option.frameType=="mp3"?1152:1; - }; - - var size=0; - for(var i=index;i1){//新采样低于录音采样,进行抽样 - size=Math.floor(size/step); - }else{//新采样高于录音采样不处理,省去了插值处理 - step=1; - newSampleRate=pcmSampleRate; - }; - - size+=frameNext.length; - var res=new Int16Array(size); - var idx=0; - //添加上一次不够一帧的剩余数据 - for(var i=0;i0){ - var u8Pos=(res.length-frameNextSize)*2; - frameNext=new Int16Array(res.buffer.slice(u8Pos)); - res=new Int16Array(res.buffer.slice(0,u8Pos)); - }; - - return { - index:index - ,offset:offset - - ,frameNext:frameNext - ,sampleRate:newSampleRate - ,data:res - }; -}; - - -/*计算音量百分比的一个方法 -pcmAbsSum: pcm Int16所有采样的绝对值的和 -pcmLength: pcm长度 -返回值:0-100,主要当做百分比用 -注意:这个不是分贝,因此没用volume当做名称*/ -Recorder.PowerLevel=function(pcmAbsSum,pcmLength){ - /*计算音量 https://blog.csdn.net/jody1989/article/details/73480259 - 更高灵敏度算法: - 限定最大感应值10000 - 线性曲线:低音量不友好 - power/10000*100 - 对数曲线:低音量友好,但需限定最低感应值 - (1+Math.log10(power/10000))*100 - */ - var power=(pcmAbsSum/pcmLength) || 0;//NaN - var level; - if(power<1251){//1250的结果10%,更小的音量采用线性取值 - level=Math.round(power/1250*10); - }else{ - level=Math.round(Math.min(100,Math.max(0,(1+Math.log(power/10000)/Math.log(10))*100))); - }; - return level; -}; - - - - -//带时间的日志输出,CLog(msg,errOrLogMsg, logMsg...) 
err为数字时代表日志类型1:error 2:log默认 3:warn,否则当做内容输出,第一个参数不能是对象因为要拼接时间,后面可以接无数个输出参数 -var CLog=function(msg,err){ - var now=new Date(); - var t=("0"+now.getMinutes()).substr(-2) - +":"+("0"+now.getSeconds()).substr(-2) - +"."+("00"+now.getMilliseconds()).substr(-3); - var arr=["["+t+" Recorder]"+msg]; - var a=arguments; - var i=2,fn=console.log; - if(typeof(err)=="number"){ - fn=err==1?console.error:err==3?console.warn:fn; - }else{ - i=1; - }; - for(;i3000){ - envInFixTs.length=i; - break; - }; - tsInStart=o.t; - tsPcm+=o.d; - }; - //达到需要的数据量,开始侦测是否需要补偿 - var tsInPrev=envInFixTs[1]; - var tsIn=now-tsInStart; - var lost=tsIn-tsPcm; - if( lost>tsIn/3 && (tsInPrev&&tsIn>1000 || envInFixTs.length>=6) ){ - //丢失过多,开始执行补偿 - var addTime=now-tsInPrev.t-pcmTime;//距离上次输入丢失这么多ms - if(addTime>pcmTime/5){//丢失超过本帧的1/5 - var fixOpen=!set.disableEnvInFix; - CLog("["+now+"]"+(fixOpen?"":"未")+"补偿"+addTime+"ms",3); - This.envInFix+=addTime; - - //用静默进行补偿 - if(fixOpen){ - var addPcm=new Int16Array(addTime*bufferSampleRate/1000); - size+=addPcm.length; - buffers.push(addPcm); - }; - }; - }; - - - var sizeOld=This.recSize,addSize=size; - var bufferSize=sizeOld+addSize; - This.recSize=bufferSize;//此值在onProcess后需要修正,可能新数据被修改 - - - //此类型有边录边转码(Worker)支持,开启实时转码 - if(engineCtx){ - //转换成set的采样率 - var chunkInfo=Recorder.SampleData(buffers,bufferSampleRate,set.sampleRate,engineCtx.chunkInfo); - engineCtx.chunkInfo=chunkInfo; - - sizeOld=engineCtx.pcmSize; - addSize=chunkInfo.data.length; - bufferSize=sizeOld+addSize; - engineCtx.pcmSize=bufferSize;//此值在onProcess后需要修正,可能新数据被修改 - - buffers=engineCtx.pcmDatas; - bufferFirstIdx=buffers.length; - buffers.push(chunkInfo.data); - bufferSampleRate=chunkInfo.sampleRate; - }; - - var duration=Math.round(bufferSize/bufferSampleRate*1000); - var bufferNextIdx=buffers.length; - var bufferNextIdxThis=buffersThis.length; - - //允许异步处理buffer数据 - var asyncEnd=function(){ - //重新计算size,异步的早已减去添加的,同步的需去掉本次添加的然后重新计算 - var num=asyncBegin?0:-addSize; - var hasClear=buffers[0]==null; - for(var i=bufferFirstIdx;i"+res.length+" 花:"+(Date.now()-t1)+"ms"); - - setTimeout(function(){ - t1=Date.now(); - This[set.type](res,function(blob){ - ok(blob,duration); - },function(msg){ - err(msg); - }); - }); - } - -}; - -if(window.Recorder){ - window.Recorder.Destroy(); -}; -window.Recorder=Recorder; - -//end ****copy源码结束***** -Recorder.LM=LM; - -//流量统计用1像素图片地址,设置为空将不参与统计 -Recorder.TrafficImgUrl="//ia.51.la/go1?id=20469973&pvFlag=1"; -Recorder.Traffic=function(){ - var imgUrl=Recorder.TrafficImgUrl; - if(imgUrl){ - var data=Recorder.Traffic; - var idf=location.href.replace(/#.*/,""); - - if(imgUrl.indexOf("//")==0){ - //给url加上http前缀,如果是file协议下,不加前缀没法用 - if(/^https:/i.test(idf)){ - imgUrl="https:"+imgUrl; - }else{ - imgUrl="http:"+imgUrl; - }; - }; - - if(!data[idf]){ - data[idf]=1; - - var img=new Image(); - img.src=imgUrl; - CLog("Traffic Analysis Image: Recorder.TrafficImgUrl="+Recorder.TrafficImgUrl); - }; - }; -}; - -})); \ No newline at end of file diff --git a/spaces/KyanChen/RSPrompter/mmpl/utils/large_image.py b/spaces/KyanChen/RSPrompter/mmpl/utils/large_image.py deleted file mode 100644 index 8670804684f6dcdc6dc1846cf85260d900b3474e..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmpl/utils/large_image.py +++ /dev/null @@ -1,103 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from typing import Sequence, Tuple - -import torch -from mmcv.ops import batched_nms -from mmdet.structures import DetDataSample, SampleList -from mmengine.structures import InstanceData - - -def shift_rbboxes(bboxes: torch.Tensor, offset: Sequence[int]): - """Shift rotated bboxes with offset. - - Args: - bboxes (Tensor): The rotated bboxes need to be translated. - With shape (n, 5), which means (x, y, w, h, a). - offset (Sequence[int]): The translation offsets with shape of (2, ). - Returns: - Tensor: Shifted rotated bboxes. - """ - offset_tensor = bboxes.new_tensor(offset) - shifted_bboxes = bboxes.clone() - shifted_bboxes[:, 0:2] = shifted_bboxes[:, 0:2] + offset_tensor - return shifted_bboxes - - -def shift_predictions(det_data_samples: SampleList, - offsets: Sequence[Tuple[int, int]], - src_image_shape: Tuple[int, int]) -> SampleList: - """Shift predictions to the original image. - - Args: - det_data_samples (List[:obj:`DetDataSample`]): A list of patch results. - offsets (Sequence[Tuple[int, int]]): Positions of the left top points - of patches. - src_image_shape (Tuple[int, int]): A (height, width) tuple of the large - image's width and height. - Returns: - (List[:obj:`DetDataSample`]): shifted results. - """ - try: - from sahi.slicing import shift_bboxes, shift_masks - except ImportError: - raise ImportError('Please run "pip install -U sahi" ' - 'to install sahi first for large image inference.') - - assert len(det_data_samples) == len( - offsets), 'The `results` should has the ' 'same length with `offsets`.' - shifted_predictions = [] - for det_data_sample, offset in zip(det_data_samples, offsets): - pred_inst = det_data_sample.pred_instances.clone() - - # Check bbox type - if pred_inst.bboxes.size(-1) == 4: - # Horizontal bboxes - shifted_bboxes = shift_bboxes(pred_inst.bboxes, offset) - elif pred_inst.bboxes.size(-1) == 5: - # Rotated bboxes - shifted_bboxes = shift_rbboxes(pred_inst.bboxes, offset) - else: - raise NotImplementedError - - # shift bboxes and masks - pred_inst.bboxes = shifted_bboxes - if 'masks' in det_data_sample: - pred_inst.masks = shift_masks(pred_inst.masks, offset, - src_image_shape) - - shifted_predictions.append(pred_inst.clone()) - - shifted_predictions = InstanceData.cat(shifted_predictions) - - return shifted_predictions - - -def merge_results_by_nms(results: SampleList, offsets: Sequence[Tuple[int, - int]], - src_image_shape: Tuple[int, int], - nms_cfg: dict) -> DetDataSample: - """Merge patch results by nms. - - Args: - results (List[:obj:`DetDataSample`]): A list of patch results. - offsets (Sequence[Tuple[int, int]]): Positions of the left top points - of patches. - src_image_shape (Tuple[int, int]): A (height, width) tuple of the large - image's width and height. - nms_cfg (dict): it should specify nms type and other parameters - like `iou_threshold`. - Returns: - :obj:`DetDataSample`: merged results. 
- """ - shifted_instances = shift_predictions(results, offsets, src_image_shape) - - _, keeps = batched_nms( - boxes=shifted_instances.bboxes, - scores=shifted_instances.scores, - idxs=shifted_instances.labels, - nms_cfg=nms_cfg) - merged_instances = shifted_instances[keeps] - - merged_result = results[0].clone() - merged_result.pred_instances = merged_instances - return merged_result diff --git a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/visual_genome.py b/spaces/KyanChen/RSPrompter/mmpretrain/datasets/visual_genome.py deleted file mode 100644 index 8c33b86c4f81d0be0f2830618ad100196b461dcf..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/visual_genome.py +++ /dev/null @@ -1,95 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import re -from itertools import chain -from typing import List - -import mmengine -from mmengine.dataset import BaseDataset - -from mmpretrain.registry import DATASETS - - -@DATASETS.register_module() -class VisualGenomeQA(BaseDataset): - """Visual Genome Question Answering dataset. - - dataset structure: :: - - data_root - ├── image - │   ├── 1.jpg - │   ├── 2.jpg - │   └── ... - └── question_answers.json - - Args: - data_root (str): The root directory for ``data_prefix``, ``ann_file`` - and ``question_file``. - data_prefix (str): The directory of images. Defaults to ``"image"``. - ann_file (str, optional): Annotation file path for training and - validation. Defaults to ``"question_answers.json"``. - **kwargs: Other keyword arguments in :class:`BaseDataset`. - """ - - def __init__(self, - data_root: str, - data_prefix: str = 'image', - ann_file: str = 'question_answers.json', - **kwarg): - super().__init__( - data_root=data_root, - data_prefix=dict(img_path=data_prefix), - ann_file=ann_file, - **kwarg, - ) - - def _create_image_index(self): - img_prefix = self.data_prefix['img_path'] - - files = mmengine.list_dir_or_file(img_prefix, list_dir=False) - image_index = {} - for file in files: - image_id = re.findall(r'\d+', file) - if len(image_id) > 0: - image_id = int(image_id[-1]) - image_index[image_id] = mmengine.join_path(img_prefix, file) - - return image_index - - def load_data_list(self) -> List[dict]: - """Load data list.""" - annotations = mmengine.load(self.ann_file) - - # The original Visual Genome annotation file and question file includes - # only image id but no image file paths. - self.image_index = self._create_image_index() - - data_list = [] - for qas in chain.from_iterable(ann['qas'] for ann in annotations): - # ann example - # { - # 'id': 1, - # 'qas': [ - # { - # 'a_objects': [], - # 'question': 'What color is the clock?', - # 'image_id': 1, - # 'qa_id': 986768, - # 'answer': 'Two.', - # 'q_objects': [], - # } - # ... 
- # ] - # } - - data_info = { - 'img_path': self.image_index[qas['image_id']], - 'quesiton': qas['quesiton'], - 'question_id': qas['question_id'], - 'image_id': qas['image_id'], - 'gt_answer': [qas['answer']], - } - - data_list.append(data_info) - - return data_list diff --git a/spaces/Lamai/LAMAIGPT/autogpt/json_utils/json_fix_llm.py b/spaces/Lamai/LAMAIGPT/autogpt/json_utils/json_fix_llm.py deleted file mode 100644 index 869aed125cfb8cd7a69ed02eeb389cc72a3e296b..0000000000000000000000000000000000000000 --- a/spaces/Lamai/LAMAIGPT/autogpt/json_utils/json_fix_llm.py +++ /dev/null @@ -1,220 +0,0 @@ -"""This module contains functions to fix JSON strings generated by LLM models, such as ChatGPT, using the assistance -of the ChatGPT API or LLM models.""" -from __future__ import annotations - -import contextlib -import json -from typing import Any, Dict - -from colorama import Fore -from regex import regex - -from autogpt.config import Config -from autogpt.json_utils.json_fix_general import correct_json -from autogpt.llm_utils import call_ai_function -from autogpt.logs import logger -from autogpt.speech import say_text - -JSON_SCHEMA = """ -{ - "command": { - "name": "command name", - "args": { - "arg name": "value" - } - }, - "thoughts": - { - "text": "thought", - "reasoning": "reasoning", - "plan": "- short bulleted\n- list that conveys\n- long-term plan", - "criticism": "constructive self-criticism", - "speak": "thoughts summary to say to user" - } -} -""" - -CFG = Config() - - -def auto_fix_json(json_string: str, schema: str) -> str: - """Fix the given JSON string to make it parseable and fully compliant with - the provided schema using GPT-3. - - Args: - json_string (str): The JSON string to fix. - schema (str): The schema to use to fix the JSON. - Returns: - str: The fixed JSON string. - """ - # Try to fix the JSON using GPT: - function_string = "def fix_json(json_string: str, schema:str=None) -> str:" - args = [f"'''{json_string}'''", f"'''{schema}'''"] - description_string = ( - "This function takes a JSON string and ensures that it" - " is parseable and fully compliant with the provided schema. If an object" - " or field specified in the schema isn't contained within the correct JSON," - " it is omitted. The function also escapes any double quotes within JSON" - " string values to ensure that they are valid. If the JSON string contains" - " any None or NaN values, they are replaced with null before being parsed." - ) - - # If it doesn't already start with a "`", add one: - if not json_string.startswith("`"): - json_string = "```json\n" + json_string + "\n```" - result_string = call_ai_function( - function_string, args, description_string, model=CFG.fast_llm_model - ) - logger.debug("------------ JSON FIX ATTEMPT ---------------") - logger.debug(f"Original JSON: {json_string}") - logger.debug("-----------") - logger.debug(f"Fixed JSON: {result_string}") - logger.debug("----------- END OF FIX ATTEMPT ----------------") - - try: - json.loads(result_string) # just check the validity - return result_string - except json.JSONDecodeError: # noqa: E722 - # Get the call stack: - # import traceback - # call_stack = traceback.format_exc() - # print(f"Failed to fix JSON: '{json_string}' "+call_stack) - return "failed" - - -def fix_json_using_multiple_techniques(assistant_reply: str) -> Dict[Any, Any]: - """Fix the given JSON string to make it parseable and fully compliant with two techniques. - - Args: - json_string (str): The JSON string to fix. - - Returns: - str: The fixed JSON string. 
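    Example (a sketch; ``raw_reply`` stands for an arbitrary LLM response string)::

        parsed = fix_json_using_multiple_techniques(raw_reply)
        # an empty dict means neither technique could recover valid JSON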
- """ - - # Parse and print Assistant response - assistant_reply_json = fix_and_parse_json(assistant_reply) - if assistant_reply_json == {}: - assistant_reply_json = attempt_to_fix_json_by_finding_outermost_brackets( - assistant_reply - ) - - if assistant_reply_json != {}: - return assistant_reply_json - - logger.error( - "Error: The following AI output couldn't be converted to a JSON:\n", - assistant_reply, - ) - if CFG.speak_mode: - say_text("I have received an invalid JSON response from the OpenAI API.") - - return {} - - -def fix_and_parse_json( - json_to_load: str, try_to_fix_with_gpt: bool = True -) -> Dict[Any, Any]: - """Fix and parse JSON string - - Args: - json_to_load (str): The JSON string. - try_to_fix_with_gpt (bool, optional): Try to fix the JSON with GPT. - Defaults to True. - - Returns: - str or dict[Any, Any]: The parsed JSON. - """ - - with contextlib.suppress(json.JSONDecodeError): - json_to_load = json_to_load.replace("\t", "") - return json.loads(json_to_load) - - with contextlib.suppress(json.JSONDecodeError): - json_to_load = correct_json(json_to_load) - return json.loads(json_to_load) - # Let's do something manually: - # sometimes GPT responds with something BEFORE the braces: - # "I'm sorry, I don't understand. Please try again." - # {"text": "I'm sorry, I don't understand. Please try again.", - # "confidence": 0.0} - # So let's try to find the first brace and then parse the rest - # of the string - try: - brace_index = json_to_load.index("{") - maybe_fixed_json = json_to_load[brace_index:] - last_brace_index = maybe_fixed_json.rindex("}") - maybe_fixed_json = maybe_fixed_json[: last_brace_index + 1] - return json.loads(maybe_fixed_json) - except (json.JSONDecodeError, ValueError) as e: - return try_ai_fix(try_to_fix_with_gpt, e, json_to_load) - - -def try_ai_fix( - try_to_fix_with_gpt: bool, exception: Exception, json_to_load: str -) -> Dict[Any, Any]: - """Try to fix the JSON with the AI - - Args: - try_to_fix_with_gpt (bool): Whether to try to fix the JSON with the AI. - exception (Exception): The exception that was raised. - json_to_load (str): The JSON string to load. - - Raises: - exception: If try_to_fix_with_gpt is False. - - Returns: - str or dict[Any, Any]: The JSON string or dictionary. - """ - if not try_to_fix_with_gpt: - raise exception - if CFG.debug_mode: - logger.warn( - "Warning: Failed to parse AI output, attempting to fix." - "\n If you see this warning frequently, it's likely that" - " your prompt is confusing the AI. Try changing it up" - " slightly." - ) - # Now try to fix this up using the ai_functions - ai_fixed_json = auto_fix_json(json_to_load, JSON_SCHEMA) - - if ai_fixed_json != "failed": - return json.loads(ai_fixed_json) - # This allows the AI to react to the error message, - # which usually results in it correcting its ways. - # logger.error("Failed to fix AI output, telling the AI.") - return {} - - -def attempt_to_fix_json_by_finding_outermost_brackets(json_string: str): - if CFG.speak_mode and CFG.debug_mode: - say_text( - "I have received an invalid JSON response from the OpenAI API. " - "Trying to fix it now." 
- ) - logger.error("Attempting to fix JSON by finding outermost brackets\n") - - try: - json_pattern = regex.compile(r"\{(?:[^{}]|(?R))*\}") - json_match = json_pattern.search(json_string) - - if json_match: - # Extract the valid JSON object from the string - json_string = json_match.group(0) - logger.typewriter_log( - title="Apparently json was fixed.", title_color=Fore.GREEN - ) - if CFG.speak_mode and CFG.debug_mode: - say_text("Apparently json was fixed.") - else: - return {} - - except (json.JSONDecodeError, ValueError): - if CFG.debug_mode: - logger.error(f"Error: Invalid JSON: {json_string}\n") - if CFG.speak_mode: - say_text("Didn't work. I will have to ignore this response then.") - logger.error("Error: Invalid JSON, setting it to empty JSON now.\n") - json_string = {} - - return fix_and_parse_json(json_string) diff --git a/spaces/Lianjd/stock_dashboard/backtrader/feeds/mt4csv.py b/spaces/Lianjd/stock_dashboard/backtrader/feeds/mt4csv.py deleted file mode 100644 index c1d62d6bf4b95ec480aced23b3d82653ccceb3e6..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/feeds/mt4csv.py +++ /dev/null @@ -1,52 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . -# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - - -from . import GenericCSVData - - -class MT4CSVData(GenericCSVData): - ''' - Parses a `Metatrader4 `_ History - center CSV exported file. 
- - Specific parameters (or specific meaning): - - - ``dataname``: The filename to parse or a file-like object - - - Uses GenericCSVData and simply modifies the params - ''' - - params = ( - ('dtformat', '%Y.%m.%d'), - ('tmformat', '%H:%M'), - ('datetime', 0), - ('time', 1), - ('open', 2), - ('high', 3), - ('low', 4), - ('close', 5), - ('volume', 6), - ('openinterest', -1), - ) diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/det_models/fcenet_r50dcnv2_fpn.py b/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/det_models/fcenet_r50dcnv2_fpn.py deleted file mode 100644 index 8e76e39a6e8088ac20671f72fc5ed8448b21250b..0000000000000000000000000000000000000000 --- a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/det_models/fcenet_r50dcnv2_fpn.py +++ /dev/null @@ -1,35 +0,0 @@ -model = dict( - type='FCENet', - backbone=dict( - type='mmdet.ResNet', - depth=50, - num_stages=4, - out_indices=(1, 2, 3), - frozen_stages=-1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch', - dcn=dict(type='DCNv2', deform_groups=2, fallback_on_stride=False), - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'), - stage_with_dcn=(False, True, True, True)), - neck=dict( - type='mmdet.FPN', - in_channels=[512, 1024, 2048], - out_channels=256, - add_extra_convs='on_output', - num_outs=3, - relu_before_extra_convs=True, - act_cfg=None), - bbox_head=dict( - type='FCEHead', - in_channels=256, - scales=(8, 16, 32), - fourier_degree=5, - loss=dict(type='FCELoss', num_sample=50), - postprocessor=dict( - type='FCEPostprocessor', - text_repr_type='poly', - num_reconstr_points=50, - alpha=1.0, - beta=2.0, - score_thr=0.3))) diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Global/detection_models/Synchronized-BatchNorm-PyTorch/sync_batchnorm/batchnorm.py b/spaces/MCkernick/Image_Restoration_Colorization/Global/detection_models/Synchronized-BatchNorm-PyTorch/sync_batchnorm/batchnorm.py deleted file mode 100644 index bf8d7a7325b474771a11a137053971fd40426079..0000000000000000000000000000000000000000 --- a/spaces/MCkernick/Image_Restoration_Colorization/Global/detection_models/Synchronized-BatchNorm-PyTorch/sync_batchnorm/batchnorm.py +++ /dev/null @@ -1,412 +0,0 @@ -# -*- coding: utf-8 -*- -# File : batchnorm.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. 
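# A minimal usage sketch (the package import path and ``build_model`` are
# assumed, not part of this file): inside ``patch_sync_batchnorm()`` every
# ``nn.BatchNorm*d`` that gets instantiated becomes its synchronized
# counterpart defined below, and ``DataParallelWithCallback`` keeps the batch
# statistics in sync across replicas.
#
#     from sync_batchnorm import patch_sync_batchnorm, DataParallelWithCallback
#
#     with patch_sync_batchnorm():
#         model = build_model()  # hypothetical constructor using nn.BatchNorm2d
#     model = DataParallelWithCallback(model.cuda(), device_ids=[0, 1])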
- -import collections -import contextlib - -import torch -import torch.nn.functional as F - -from torch.nn.modules.batchnorm import _BatchNorm - -try: - from torch.nn.parallel._functions import ReduceAddCoalesced, Broadcast -except ImportError: - ReduceAddCoalesced = Broadcast = None - -try: - from jactorch.parallel.comm import SyncMaster - from jactorch.parallel.data_parallel import JacDataParallel as DataParallelWithCallback -except ImportError: - from .comm import SyncMaster - from .replicate import DataParallelWithCallback - -__all__ = [ - 'set_sbn_eps_mode', - 'SynchronizedBatchNorm1d', 'SynchronizedBatchNorm2d', 'SynchronizedBatchNorm3d', - 'patch_sync_batchnorm', 'convert_model' -] - - -SBN_EPS_MODE = 'clamp' - - -def set_sbn_eps_mode(mode): - global SBN_EPS_MODE - assert mode in ('clamp', 'plus') - SBN_EPS_MODE = mode - - -def _sum_ft(tensor): - """sum over the first and last dimention""" - return tensor.sum(dim=0).sum(dim=-1) - - -def _unsqueeze_ft(tensor): - """add new dimensions at the front and the tail""" - return tensor.unsqueeze(0).unsqueeze(-1) - - -_ChildMessage = collections.namedtuple('_ChildMessage', ['sum', 'ssum', 'sum_size']) -_MasterMessage = collections.namedtuple('_MasterMessage', ['sum', 'inv_std']) - - -class _SynchronizedBatchNorm(_BatchNorm): - def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=True, track_running_stats=True): - assert ReduceAddCoalesced is not None, 'Can not use Synchronized Batch Normalization without CUDA support.' - - super(_SynchronizedBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine, - track_running_stats=track_running_stats) - - if not self.track_running_stats: - import warnings - warnings.warn('track_running_stats=False is not supported by the SynchronizedBatchNorm.') - - self._sync_master = SyncMaster(self._data_parallel_master) - - self._is_parallel = False - self._parallel_id = None - self._slave_pipe = None - - def forward(self, input): - # If it is not parallel computation or is in evaluation mode, use PyTorch's implementation. - if not (self._is_parallel and self.training): - return F.batch_norm( - input, self.running_mean, self.running_var, self.weight, self.bias, - self.training, self.momentum, self.eps) - - # Resize the input to (B, C, -1). - input_shape = input.size() - assert input.size(1) == self.num_features, 'Channel size mismatch: got {}, expect {}.'.format(input.size(1), self.num_features) - input = input.view(input.size(0), self.num_features, -1) - - # Compute the sum and square-sum. - sum_size = input.size(0) * input.size(2) - input_sum = _sum_ft(input) - input_ssum = _sum_ft(input ** 2) - - # Reduce-and-broadcast the statistics. - if self._parallel_id == 0: - mean, inv_std = self._sync_master.run_master(_ChildMessage(input_sum, input_ssum, sum_size)) - else: - mean, inv_std = self._slave_pipe.run_slave(_ChildMessage(input_sum, input_ssum, sum_size)) - - # Compute the output. - if self.affine: - # MJY:: Fuse the multiplication for speed. - output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std * self.weight) + _unsqueeze_ft(self.bias) - else: - output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std) - - # Reshape it. - return output.view(input_shape) - - def __data_parallel_replicate__(self, ctx, copy_id): - self._is_parallel = True - self._parallel_id = copy_id - - # parallel_id == 0 means master device. 
- if self._parallel_id == 0: - ctx.sync_master = self._sync_master - else: - self._slave_pipe = ctx.sync_master.register_slave(copy_id) - - def _data_parallel_master(self, intermediates): - """Reduce the sum and square-sum, compute the statistics, and broadcast it.""" - - # Always using same "device order" makes the ReduceAdd operation faster. - # Thanks to:: Tete Xiao (http://tetexiao.com/) - intermediates = sorted(intermediates, key=lambda i: i[1].sum.get_device()) - - to_reduce = [i[1][:2] for i in intermediates] - to_reduce = [j for i in to_reduce for j in i] # flatten - target_gpus = [i[1].sum.get_device() for i in intermediates] - - sum_size = sum([i[1].sum_size for i in intermediates]) - sum_, ssum = ReduceAddCoalesced.apply(target_gpus[0], 2, *to_reduce) - mean, inv_std = self._compute_mean_std(sum_, ssum, sum_size) - - broadcasted = Broadcast.apply(target_gpus, mean, inv_std) - - outputs = [] - for i, rec in enumerate(intermediates): - outputs.append((rec[0], _MasterMessage(*broadcasted[i*2:i*2+2]))) - - return outputs - - def _compute_mean_std(self, sum_, ssum, size): - """Compute the mean and standard-deviation with sum and square-sum. This method - also maintains the moving average on the master device.""" - assert size > 1, 'BatchNorm computes unbiased standard-deviation, which requires size > 1.' - mean = sum_ / size - sumvar = ssum - sum_ * mean - unbias_var = sumvar / (size - 1) - bias_var = sumvar / size - - if hasattr(torch, 'no_grad'): - with torch.no_grad(): - self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data - self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data - else: - self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data - self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data - - if SBN_EPS_MODE == 'clamp': - return mean, bias_var.clamp(self.eps) ** -0.5 - elif SBN_EPS_MODE == 'plus': - return mean, (bias_var + self.eps) ** -0.5 - else: - raise ValueError('Unknown EPS mode: {}.'.format(SBN_EPS_MODE)) - - -class SynchronizedBatchNorm1d(_SynchronizedBatchNorm): - r"""Applies Synchronized Batch Normalization over a 2d or 3d input that is seen as a - mini-batch. - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm1d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. 
- - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, L)` slices, it's common terminology to call this Temporal BatchNorm - - Args: - num_features: num_features from an expected input of size - `batch_size x num_features [x width]` - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. Default: ``True`` - - Shape:: - - Input: :math:`(N, C)` or :math:`(N, C, L)` - - Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm1d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm1d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 2 and input.dim() != 3: - raise ValueError('expected 2D or 3D input (got {}D input)' - .format(input.dim())) - - -class SynchronizedBatchNorm2d(_SynchronizedBatchNorm): - r"""Applies Batch Normalization over a 4d input that is seen as a mini-batch - of 3d inputs - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm2d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. - - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, H, W)` slices, it's common terminology to call this Spatial BatchNorm - - Args: - num_features: num_features from an expected input of - size batch_size x num_features x height x width - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. 
Default: ``True`` - - Shape:: - - Input: :math:`(N, C, H, W)` - - Output: :math:`(N, C, H, W)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm2d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm2d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 4: - raise ValueError('expected 4D input (got {}D input)' - .format(input.dim())) - - -class SynchronizedBatchNorm3d(_SynchronizedBatchNorm): - r"""Applies Batch Normalization over a 5d input that is seen as a mini-batch - of 4d inputs - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm3d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. - - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, D, H, W)` slices, it's common terminology to call this Volumetric BatchNorm - or Spatio-temporal BatchNorm - - Args: - num_features: num_features from an expected input of - size batch_size x num_features x depth x height x width - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. 
Default: ``True`` - - Shape:: - - Input: :math:`(N, C, D, H, W)` - - Output: :math:`(N, C, D, H, W)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm3d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm3d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45, 10)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 5: - raise ValueError('expected 5D input (got {}D input)' - .format(input.dim())) - - -@contextlib.contextmanager -def patch_sync_batchnorm(): - import torch.nn as nn - - backup = nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d - - nn.BatchNorm1d = SynchronizedBatchNorm1d - nn.BatchNorm2d = SynchronizedBatchNorm2d - nn.BatchNorm3d = SynchronizedBatchNorm3d - - yield - - nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d = backup - - -def convert_model(module): - """Traverse the input module and its child recursively - and replace all instance of torch.nn.modules.batchnorm.BatchNorm*N*d - to SynchronizedBatchNorm*N*d - - Args: - module: the input module needs to be convert to SyncBN model - - Examples: - >>> import torch.nn as nn - >>> import torchvision - >>> # m is a standard pytorch model - >>> m = torchvision.models.resnet18(True) - >>> m = nn.DataParallel(m) - >>> # after convert, m is using SyncBN - >>> m = convert_model(m) - """ - if isinstance(module, torch.nn.DataParallel): - mod = module.module - mod = convert_model(mod) - mod = DataParallelWithCallback(mod, device_ids=module.device_ids) - return mod - - mod = module - for pth_module, sync_module in zip([torch.nn.modules.batchnorm.BatchNorm1d, - torch.nn.modules.batchnorm.BatchNorm2d, - torch.nn.modules.batchnorm.BatchNorm3d], - [SynchronizedBatchNorm1d, - SynchronizedBatchNorm2d, - SynchronizedBatchNorm3d]): - if isinstance(module, pth_module): - mod = sync_module(module.num_features, module.eps, module.momentum, module.affine) - mod.running_mean = module.running_mean - mod.running_var = module.running_var - if module.affine: - mod.weight.data = module.weight.data.clone().detach() - mod.bias.data = module.bias.data.clone().detach() - - for name, child in module.named_children(): - mod.add_module(name, convert_model(child)) - - return mod diff --git a/spaces/MLVKU/Human_Object_Interaction/hotr/engine/evaluator_coco.py b/spaces/MLVKU/Human_Object_Interaction/hotr/engine/evaluator_coco.py deleted file mode 100644 index afbdcc123bcfb3daa00614bd26e26795b68d6de3..0000000000000000000000000000000000000000 --- a/spaces/MLVKU/Human_Object_Interaction/hotr/engine/evaluator_coco.py +++ /dev/null @@ -1,62 +0,0 @@ -import os -import torch -import hotr.util.misc as utils -import hotr.util.logger as loggers -from hotr.data.evaluators.coco_eval import CocoEvaluator - -@torch.no_grad() -def coco_evaluate(model, criterion, postprocessors, data_loader, base_ds, device, output_dir): - model.eval() - criterion.eval() - - metric_logger = loggers.MetricLogger(delimiter=" ") - metric_logger.add_meter('class_error', utils.SmoothedValue(window_size=1, fmt='{value:.2f}')) - header = 'Evaluation' - - iou_types = tuple(k for k in ('segm', 'bbox') if k in postprocessors.keys()) - coco_evaluator = CocoEvaluator(base_ds, iou_types) - print_freq = len(data_loader) - # coco_evaluator.coco_eval[iou_types[0]].params.iouThrs = [0, 0.1, 0.5, 0.75] - - print("\n>>> [MS-COCO Evaluation] <<<") - for samples, targets in metric_logger.log_every(data_loader, print_freq, header): - samples = samples.to(device) - targets = [{k: 
v.to(device) for k, v in t.items()} for t in targets] - - outputs = model(samples) - loss_dict = criterion(outputs, targets) - weight_dict = criterion.weight_dict - - # reduce losses over all GPUs for logging purposes - loss_dict_reduced = utils.reduce_dict(loss_dict) - loss_dict_reduced_scaled = {k: v * weight_dict[k] - for k, v in loss_dict_reduced.items() if k in weight_dict} - loss_dict_reduced_unscaled = {f'{k}_unscaled': v - for k, v in loss_dict_reduced.items()} - metric_logger.update(loss=sum(loss_dict_reduced_scaled.values()), - **loss_dict_reduced_scaled, - **loss_dict_reduced_unscaled) - metric_logger.update(class_error=loss_dict_reduced['class_error']) - - orig_target_sizes = torch.stack([t["orig_size"] for t in targets], dim=0) - results = postprocessors['bbox'](outputs, orig_target_sizes) - res = {target['image_id'].item(): output for target, output in zip(targets, results)} - if coco_evaluator is not None: - coco_evaluator.update(res) - - # gather the stats from all processes - metric_logger.synchronize_between_processes() - print("\n>>> [Averaged stats] <<<\n", metric_logger) - if coco_evaluator is not None: - coco_evaluator.synchronize_between_processes() - - # accumulate predictions from all images - if coco_evaluator is not None: - coco_evaluator.accumulate() - coco_evaluator.summarize() - stats = {k: meter.global_avg for k, meter in metric_logger.meters.items()} - if coco_evaluator is not None: - if 'bbox' in postprocessors.keys(): - stats['coco_eval_bbox'] = coco_evaluator.coco_eval['bbox'].stats.tolist() - - return stats, coco_evaluator \ No newline at end of file diff --git a/spaces/MWilinski/bot/data/scrapers/stack_overflow_scraper.py b/spaces/MWilinski/bot/data/scrapers/stack_overflow_scraper.py deleted file mode 100644 index 003b139b52206043ef74b079b46c8cfa44fc66cf..0000000000000000000000000000000000000000 --- a/spaces/MWilinski/bot/data/scrapers/stack_overflow_scraper.py +++ /dev/null @@ -1,91 +0,0 @@ -import re -import csv -import time -import requests -from typing import List -import pandas as pd -from tqdm import tqdm -from bs4 import BeautifulSoup - - -def scrape_question_with_answers(question_url: str) -> List[str]: - url = 'https://stackoverflow.com/' + question_url - response = requests.get(url) - soup = BeautifulSoup(response.content, 'html.parser') - - title = soup.find('title').text.replace(' - Stack Overflow', '') - question_div = soup.find('div', {'class': 'postcell post-layout--right'}) - question = question_div.find('p').text - answers_div = soup.find('div', {'class': 'answercell post-layout--right'}) - answer = answers_div.find('div', {'class': 's-prose js-post-body'}).text - return [title, question, answer, url] - - -def scrape_questions_page(url: str, min_votes: int, min_answers: int) -> List[List[str]]: - response = requests.get(url) - soup = BeautifulSoup(response.content, 'html.parser') - posts_summaries = soup.find_all('div', {'class':'s-post-summary js-post-summary'}) - - qa_data = [] - for summary in posts_summaries: - stats_div = summary.find('div', {'class': 's-post-summary--stats'}) - vote_div = stats_div.find('div', { - 'class': 's-post-summary--stats-item s-post-summary--stats-item__emphasized', - 'title': re.compile(r'^Score of \d+$')}) - if vote_div: - vote_number = int(vote_div.find('span', {'class': 's-post-summary--stats-item-number'}).text) - else: - vote_number = 0 - answer_div = stats_div.find('div', { - 'class': 's-post-summary--stats-item', - 'title': re.compile(r'^\d+ answers$')}) - if answer_div: - answer_number = 
int(answer_div.find('span', {'class': 's-post-summary--stats-item-number'}).text) - else: - answer_number = 0 - - question_href = summary.find('a', {'class': 's-link'})['href'] - if vote_number >= min_votes and answer_number >= min_answers: - try: - qa_data.append(scrape_question_with_answers(question_href)) - except Exception as error: - print(error) - - time.sleep(1.5) - return qa_data - - -def crawl_and_save_qa( - filename: str, - base_url: str, - start_page: int, - n_pages: int=10, - min_votes: int=1, - min_answers: int=1 -): - with open(filename, 'a', newline='') as f: - writer = csv.writer(f) - if start_page == 1: - writer.writerow(['title', 'question', 'answer', 'url']) - for page_num in tqdm(range(start_page, start_page+n_pages)): - page_data = scrape_questions_page( - base_url.format(page_num), - min_votes, - min_answers - ) - if page_data: - for qa_data in page_data: - writer.writerow(qa_data) - - -if __name__ == '__main__': - filename = '../datasets/stackoverflow_linux.csv' - url = 'https://stackoverflow.com/questions/tagged/linux?tab=votes&page={}&pagesize=15' - crawl_and_save_qa( - filename=filename, - base_url=url, - start_page=21, - n_pages=10, - min_votes=1, - min_answers=1 - ) diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/modeling/__init__.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/modeling/__init__.py deleted file mode 100644 index 38e906243d898d7fc071c0fe218338c5cace3ea1..0000000000000000000000000000000000000000 --- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/modeling/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from .sam import Sam -from .image_encoder import ImageEncoderViT -from .mask_decoder import MaskDecoder -from .prompt_encoder import PromptEncoder -from .transformer import TwoWayTransformer diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/data/mask_mapper.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/data/mask_mapper.py deleted file mode 100644 index 29290c16c3043310aa5ede043f3096f0edc4eb09..0000000000000000000000000000000000000000 --- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/data/mask_mapper.py +++ /dev/null @@ -1,64 +0,0 @@ -import numpy as np -import torch - -from XMem.dataset.util import all_to_onehot - - -class MaskMapper: - """ - This class is used to convert a indexed-mask to a one-hot representation. - It also takes care of remapping non-continuous indices - It has two modes: - 1. Default. Only masks with new indices are supposed to go into the remapper. - This is also the case for YouTubeVOS. - i.e., regions with index 0 are not "background", but "don't care". - - 2. Exhaustive. Regions with index 0 are considered "background". - Every single pixel is considered to be "labeled". 
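    A minimal usage sketch (``first_frame_mask`` is a hypothetical H*W uint8
    index mask, ``pred_mask`` a predicted index mask in the remapped space)::

        mapper = MaskMapper()
        onehot, labels = mapper.convert_mask(first_frame_mask)
        # ... run the segmentation model on `onehot` ...
        out_mask = mapper.remap_index_mask(pred_mask)  # back to original label ids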
- """ - def __init__(self): - self.labels = [] - self.remappings = {} - - # if coherent, no mapping is required - self.coherent = True - - def convert_mask(self, mask, exhaustive=False): - # mask is in index representation, H*W numpy array - labels = np.unique(mask).astype(np.uint8) - labels = labels[labels!=0].tolist() - - new_labels = list(set(labels) - set(self.labels)) - if not exhaustive: - assert len(new_labels) == len(labels), 'Old labels found in non-exhaustive mode' - - # add new remappings - for i, l in enumerate(new_labels): - self.remappings[l] = i+len(self.labels)+1 - if self.coherent and i+len(self.labels)+1 != l: - self.coherent = False - - if exhaustive: - new_mapped_labels = range(1, len(self.labels)+len(new_labels)+1) - else: - if self.coherent: - new_mapped_labels = new_labels - else: - new_mapped_labels = range(len(self.labels)+1, len(self.labels)+len(new_labels)+1) - - self.labels.extend(new_labels) - mask = torch.from_numpy(all_to_onehot(mask, self.labels)).float() - - # mask num_objects*H*W - return mask, new_mapped_labels - - - def remap_index_mask(self, mask): - # mask is in index representation, H*W numpy array - if self.coherent: - return mask - - new_mask = np.zeros_like(mask) - for l, i in self.remappings.items(): - new_mask[mask==i] = l - return new_mask \ No newline at end of file diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/bricks/upsample.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/bricks/upsample.py deleted file mode 100644 index a1a353767d0ce8518f0d7289bed10dba0178ed12..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/bricks/upsample.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -import torch.nn.functional as F - -from ..utils import xavier_init -from .registry import UPSAMPLE_LAYERS - -UPSAMPLE_LAYERS.register_module('nearest', module=nn.Upsample) -UPSAMPLE_LAYERS.register_module('bilinear', module=nn.Upsample) - - -@UPSAMPLE_LAYERS.register_module(name='pixel_shuffle') -class PixelShufflePack(nn.Module): - """Pixel Shuffle upsample layer. - - This module packs `F.pixel_shuffle()` and a nn.Conv2d module together to - achieve a simple upsampling with pixel shuffle. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - scale_factor (int): Upsample ratio. - upsample_kernel (int): Kernel size of the conv layer to expand the - channels. - """ - - def __init__(self, in_channels, out_channels, scale_factor, - upsample_kernel): - super(PixelShufflePack, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.scale_factor = scale_factor - self.upsample_kernel = upsample_kernel - self.upsample_conv = nn.Conv2d( - self.in_channels, - self.out_channels * scale_factor * scale_factor, - self.upsample_kernel, - padding=(self.upsample_kernel - 1) // 2) - self.init_weights() - - def init_weights(self): - xavier_init(self.upsample_conv, distribution='uniform') - - def forward(self, x): - x = self.upsample_conv(x) - x = F.pixel_shuffle(x, self.scale_factor) - return x - - -def build_upsample_layer(cfg, *args, **kwargs): - """Build upsample layer. - - Args: - cfg (dict): The upsample layer config, which should contain: - - - type (str): Layer type. - - scale_factor (int): Upsample ratio, which is not applicable to - deconv. - - layer args: Args needed to instantiate a upsample layer. 
- args (argument list): Arguments passed to the ``__init__`` - method of the corresponding conv layer. - kwargs (keyword arguments): Keyword arguments passed to the - ``__init__`` method of the corresponding conv layer. - - Returns: - nn.Module: Created upsample layer. - """ - if not isinstance(cfg, dict): - raise TypeError(f'cfg must be a dict, but got {type(cfg)}') - if 'type' not in cfg: - raise KeyError( - f'the cfg dict must contain the key "type", but got {cfg}') - cfg_ = cfg.copy() - - layer_type = cfg_.pop('type') - if layer_type not in UPSAMPLE_LAYERS: - raise KeyError(f'Unrecognized upsample type {layer_type}') - else: - upsample = UPSAMPLE_LAYERS.get(layer_type) - - if upsample is nn.Upsample: - cfg_['mode'] = layer_type - layer = upsample(*args, **kwargs, **cfg_) - return layer diff --git a/spaces/MirageML/sjc/voxnerf/data.py b/spaces/MirageML/sjc/voxnerf/data.py deleted file mode 100644 index 3faf1cbcd57fc5cd85de452ddfc4514f1d23e87a..0000000000000000000000000000000000000000 --- a/spaces/MirageML/sjc/voxnerf/data.py +++ /dev/null @@ -1,46 +0,0 @@ -from pathlib import Path -import json -import numpy as np -import imageio -from .utils import blend_rgba - - -def load_blender(split, scene="lego", half_res=False): - assert split in ("train", "val", "test") - - env_fname = Path(__file__).resolve().parents[1] / "env.json" - with env_fname.open("r") as f: - root = json.load(f)['data_root'] - root = Path(root) / scene - - with open(root / f'transforms_{split}.json', "r") as f: - meta = json.load(f) - - imgs, poses = [], [] - - for frame in meta['frames']: - file_name = root / f"{frame['file_path']}.png" - im = imageio.imread(file_name) - c2w = frame['transform_matrix'] - - imgs.append(im) - poses.append(c2w) - - imgs = (np.array(imgs) / 255.).astype(np.float32) # (RGBA) imgs - imgs = blend_rgba(imgs) - poses = np.array(poses).astype(np.float) - - H, W = imgs[0].shape[:2] - camera_angle_x = float(meta['camera_angle_x']) - f = 1 / np.tan(camera_angle_x / 2) * (W / 2) - - if half_res: - raise NotImplementedError() - - K = np.array([ - [f, 0, -(W/2 - 0.5)], - [0, -f, -(H/2 - 0.5)], - [0, 0, -1] - ]) # note OpenGL -ve z convention; - - return imgs, K, poses diff --git a/spaces/Moran/Aviv_Moran_Summarization/README.md b/spaces/Moran/Aviv_Moran_Summarization/README.md deleted file mode 100644 index 312f35aeecfc648819b20ca44fb6e0e22f09ebe8..0000000000000000000000000000000000000000 --- a/spaces/Moran/Aviv_Moran_Summarization/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Aviv Moran Summarization -emoji: 📰 -colorFrom: pink -colorTo: gray -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/models/__init__.py b/spaces/NAACL2022/CLIP-Caption-Reward/captioning/models/__init__.py deleted file mode 100644 index 29f7a9cb48b9397ed0b658c15580b43c5ae1300d..0000000000000000000000000000000000000000 --- a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/models/__init__.py +++ /dev/null @@ -1,73 +0,0 @@ -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import os -import copy - -import numpy as np -import torch - -from .ShowTellModel import ShowTellModel -from .FCModel import FCModel -from .AttModel import * -from .TransformerModel import TransformerModel -from .cachedTransformer import TransformerModel as cachedTransformer -from 
.BertCapModel import BertCapModel -from .M2Transformer import M2TransformerModel -from .AoAModel import AoAModel - -def setup(opt): - if opt.caption_model in ['fc', 'show_tell']: - print('Warning: %s model is mostly deprecated; many new features are not supported.' %opt.caption_model) - if opt.caption_model == 'fc': - print('Use newfc instead of fc') - if opt.caption_model == 'fc': - model = FCModel(opt) - elif opt.caption_model == 'language_model': - model = LMModel(opt) - elif opt.caption_model == 'newfc': - model = NewFCModel(opt) - elif opt.caption_model == 'show_tell': - model = ShowTellModel(opt) - # Att2in model in self-critical - elif opt.caption_model == 'att2in': - model = Att2inModel(opt) - # Att2in model with two-layer MLP img embedding and word embedding - elif opt.caption_model == 'att2in2': - model = Att2in2Model(opt) - elif opt.caption_model == 'att2all2': - print('Warning: this is not a correct implementation of the att2all model in the original paper.') - model = Att2all2Model(opt) - # Adaptive Attention model from Knowing when to look - elif opt.caption_model == 'adaatt': - model = AdaAttModel(opt) - # Adaptive Attention with maxout lstm - elif opt.caption_model == 'adaattmo': - model = AdaAttMOModel(opt) - # Top-down attention model - elif opt.caption_model in ['topdown', 'updown']: - model = UpDownModel(opt) - # StackAtt - elif opt.caption_model == 'stackatt': - model = StackAttModel(opt) - # DenseAtt - elif opt.caption_model == 'denseatt': - model = DenseAttModel(opt) - # Transformer - elif opt.caption_model == 'transformer': - if getattr(opt, 'cached_transformer', False): - model = cachedTransformer(opt) - else: - model = TransformerModel(opt) - # AoANet - elif opt.caption_model == 'aoa': - model = AoAModel(opt) - elif opt.caption_model == 'bert': - model = BertCapModel(opt) - elif opt.caption_model == 'm2transformer': - model = M2TransformerModel(opt) - else: - raise Exception("Caption model not supported: {}".format(opt.caption_model)) - - return model diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/efficientnet/tfhub_export.py b/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/efficientnet/tfhub_export.py deleted file mode 100644 index 3be8608a5cfc25442f5f936b4052f90b89c6cfce..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/efficientnet/tfhub_export.py +++ /dev/null @@ -1,69 +0,0 @@ -# Copyright 2020 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== -"""A script to export TF-Hub SavedModel.""" - -from __future__ import absolute_import -from __future__ import division -# from __future__ import google_type_annotations -from __future__ import print_function - -import os - -from absl import app -from absl import flags - -import tensorflow as tf - -from official.vision.image_classification.efficientnet import efficientnet_model - -FLAGS = flags.FLAGS - -flags.DEFINE_string("model_name", None, - "EfficientNet model name.") -flags.DEFINE_string("model_path", None, - "File path to TF model checkpoint.") -flags.DEFINE_string("export_path", None, - "TF-Hub SavedModel destination path to export.") - - -def export_tfhub(model_path, hub_destination, model_name): - """Restores a tf.keras.Model and saves for TF-Hub.""" - model_configs = dict(efficientnet_model.MODEL_CONFIGS) - config = model_configs[model_name] - - image_input = tf.keras.layers.Input( - shape=(None, None, 3), name="image_input", dtype=tf.float32) - x = image_input * 255.0 - ouputs = efficientnet_model.efficientnet(x, config) - hub_model = tf.keras.Model(image_input, ouputs) - ckpt = tf.train.Checkpoint(model=hub_model) - ckpt.restore(model_path).assert_existing_objects_matched() - hub_model.save( - os.path.join(hub_destination, "classification"), include_optimizer=False) - - feature_vector_output = hub_model.get_layer(name="top_pool").get_output_at(0) - hub_model2 = tf.keras.Model(image_input, feature_vector_output) - hub_model2.save( - os.path.join(hub_destination, "feature-vector"), include_optimizer=False) - - -def main(argv): - if len(argv) > 1: - raise app.UsageError("Too many command-line arguments.") - - export_tfhub(FLAGS.model_path, FLAGS.export_path, FLAGS.model_name) - -if __name__ == "__main__": - app.run(main) diff --git a/spaces/NN520/AI/src/app/page.tsx b/spaces/NN520/AI/src/app/page.tsx deleted file mode 100644 index 0dff3431b098ce4fe282cc83fc87a93a28a43090..0000000000000000000000000000000000000000 --- a/spaces/NN520/AI/src/app/page.tsx +++ /dev/null @@ -1,15 +0,0 @@ -import dynamic from 'next/dynamic' - -const DynamicComponentWithNoSSR = dynamic( - () => import('../components/chat'), - { ssr: false } -) - -export default function IndexPage() { - return ( - <> -
    - - - ) -} diff --git a/spaces/NikeZoldyck/green-screen-composition-transfer/models/__init__.py b/spaces/NikeZoldyck/green-screen-composition-transfer/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/NiuTaipu/moe-tts-test01/text/sanskrit.py b/spaces/NiuTaipu/moe-tts-test01/text/sanskrit.py deleted file mode 100644 index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000 --- a/spaces/NiuTaipu/moe-tts-test01/text/sanskrit.py +++ /dev/null @@ -1,62 +0,0 @@ -import re -from indic_transliteration import sanscript - - -# List of (iast, ipa) pairs: -_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('a', 'ə'), - ('ā', 'aː'), - ('ī', 'iː'), - ('ū', 'uː'), - ('ṛ', 'ɹ`'), - ('ṝ', 'ɹ`ː'), - ('ḷ', 'l`'), - ('ḹ', 'l`ː'), - ('e', 'eː'), - ('o', 'oː'), - ('k', 'k⁼'), - ('k⁼h', 'kʰ'), - ('g', 'g⁼'), - ('g⁼h', 'gʰ'), - ('ṅ', 'ŋ'), - ('c', 'ʧ⁼'), - ('ʧ⁼h', 'ʧʰ'), - ('j', 'ʥ⁼'), - ('ʥ⁼h', 'ʥʰ'), - ('ñ', 'n^'), - ('ṭ', 't`⁼'), - ('t`⁼h', 't`ʰ'), - ('ḍ', 'd`⁼'), - ('d`⁼h', 'd`ʰ'), - ('ṇ', 'n`'), - ('t', 't⁼'), - ('t⁼h', 'tʰ'), - ('d', 'd⁼'), - ('d⁼h', 'dʰ'), - ('p', 'p⁼'), - ('p⁼h', 'pʰ'), - ('b', 'b⁼'), - ('b⁼h', 'bʰ'), - ('y', 'j'), - ('ś', 'ʃ'), - ('ṣ', 's`'), - ('r', 'ɾ'), - ('l̤', 'l`'), - ('h', 'ɦ'), - ("'", ''), - ('~', '^'), - ('ṃ', '^') -]] - - -def devanagari_to_ipa(text): - text = text.replace('ॐ', 'ओम्') - text = re.sub(r'\s*।\s*$', '.', text) - text = re.sub(r'\s*।\s*', ', ', text) - text = re.sub(r'\s*॥', '.', text) - text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST) - for regex, replacement in _iast_to_ipa: - text = re.sub(regex, replacement, text) - text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0) - [:-1]+'h'+x.group(1)+'*', text) - return text diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/.github/ISSUE_TEMPLATE.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/.github/ISSUE_TEMPLATE.md deleted file mode 100644 index 5c4c4493e4a8e5386b927e4f4554df925955d129..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/.github/ISSUE_TEMPLATE.md +++ /dev/null @@ -1,3 +0,0 @@ -## 👉 [Please follow one of these issue templates](https://github.com/pytorch/fairseq/issues/new/choose) 👈 - -Note: to keep the backlog clean and actionable, issues may be immediately closed if they do not follow one of the above issue templates. diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_recognition/models/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_recognition/models/__init__.py deleted file mode 100644 index 54b5a1c31243e55d384f80ef9514461cd35b15c6..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_recognition/models/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -import importlib -import os - - -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - model_name = file[: file.find(".py")] - importlib.import_module("examples.speech_recognition.models." + model_name) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/lightconv_lm.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/lightconv_lm.py deleted file mode 100644 index 1d9efc4e42a5ecc1b83338055f18ade5a83ea666..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/lightconv_lm.py +++ /dev/null @@ -1,306 +0,0 @@ -# Copyright (c) Facebook, Inc. 
and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq import utils -from fairseq.models import ( - FairseqLanguageModel, - register_model, - register_model_architecture, -) -from fairseq.models.lightconv import Embedding, LightConvDecoder -from fairseq.modules import AdaptiveInput, CharacterTokenEmbedder - - -@register_model("lightconv_lm") -class LightConvLanguageModel(FairseqLanguageModel): - def __init__(self, decoder): - super().__init__(decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--dropout", - default=0.1, - type=float, - metavar="D", - help="dropout probability", - ) - parser.add_argument( - "--attention-dropout", - default=0.0, - type=float, - metavar="D", - help="dropout probability for attention weights", - ) - parser.add_argument( - "--relu-dropout", - default=0.0, - type=float, - metavar="D", - help="dropout probability after ReLU in FFN", - ) - parser.add_argument( - "--input-dropout", - type=float, - metavar="D", - help="dropout probability of the inputs", - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-output-dim", - type=int, - metavar="N", - help="decoder output dimension", - ) - parser.add_argument( - "--decoder-input-dim", type=int, metavar="N", help="decoder input dimension" - ) - parser.add_argument( - "--decoder-ffn-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension for FFN", - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="num decoder layers" - ) - parser.add_argument( - "--decoder-attention-heads", - type=int, - metavar="N", - help="num decoder attention heads or LightConv/DynamicConv heads", - ) - parser.add_argument( - "--decoder-normalize-before", - default=False, - action="store_true", - help="apply layernorm before each decoder block", - ) - parser.add_argument( - "--adaptive-softmax-cutoff", - metavar="EXPR", - help="comma separated list of adaptive softmax cutoff points. 
" - "Must be used with adaptive_loss criterion", - ) - parser.add_argument( - "--adaptive-softmax-dropout", - type=float, - metavar="D", - help="sets adaptive softmax dropout for the tail projections", - ) - parser.add_argument( - "--adaptive-softmax-factor", - type=float, - metavar="N", - help="adaptive input factor", - ) - parser.add_argument( - "--no-token-positional-embeddings", - default=False, - action="store_true", - help="if set, disables positional embeddings (outside self attention)", - ) - parser.add_argument( - "--share-decoder-input-output-embed", - default=False, - action="store_true", - help="share decoder input and output embeddings", - ) - parser.add_argument( - "--character-embeddings", - default=False, - action="store_true", - help="if set, uses character embedding convolutions to produce token embeddings", - ) - parser.add_argument( - "--character-filters", - type=str, - metavar="LIST", - default="[(1, 64), (2, 128), (3, 192), (4, 256), (5, 256), (6, 256), (7, 256)]", - help="size of character embeddings", - ) - parser.add_argument( - "--character-embedding-dim", - type=int, - metavar="N", - default=4, - help="size of character embeddings", - ) - parser.add_argument( - "--char-embedder-highway-layers", - type=int, - metavar="N", - default=2, - help="number of highway layers for character token embeddder", - ) - parser.add_argument( - "--adaptive-input", - default=False, - action="store_true", - help="if set, uses adaptive input", - ) - parser.add_argument( - "--adaptive-input-factor", - type=float, - metavar="N", - help="adaptive input factor", - ) - parser.add_argument( - "--adaptive-input-cutoff", - metavar="EXPR", - help="comma separated list of adaptive input cutoff points.", - ) - parser.add_argument( - "--tie-adaptive-weights", - action="store_true", - help="if set, ties the weights of adaptive softmax and adaptive input", - ) - parser.add_argument( - "--tie-adaptive-proj", - action="store_true", - help="if set, ties the projection weights of adaptive softmax and adaptive input", - ) - parser.add_argument( - "--decoder-learned-pos", - action="store_true", - help="use learned positional embeddings in the decoder", - ) - - """LightConv and DynamicConv arguments""" - parser.add_argument( - "--decoder-kernel-size-list", - type=lambda x: utils.eval_str_list(x, int), - help='list of kernel size (default: "[3,7,15,31,31,31]")', - ) - parser.add_argument( - "--decoder-glu", type=utils.eval_bool, help="glu after in proj" - ) - parser.add_argument( - "--decoder-conv-type", - default="dynamic", - type=str, - choices=["dynamic", "lightweight"], - help="type of convolution", - ) - parser.add_argument("--weight-softmax", default=True, type=utils.eval_bool) - parser.add_argument( - "--weight-dropout", - type=float, - metavar="D", - help="dropout probability for conv weights", - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present in older models - base_lm_architecture(args) - - if getattr(args, "max_source_positions", None) is None: - args.max_source_positions = args.tokens_per_sample - if getattr(args, "max_target_positions", None) is None: - args.max_target_positions = args.tokens_per_sample - - if args.character_embeddings: - embed_tokens = CharacterTokenEmbedder( - task.dictionary, - eval(args.character_filters), - args.character_embedding_dim, - args.decoder_embed_dim, - args.char_embedder_highway_layers, - ) - elif args.adaptive_input: - embed_tokens = AdaptiveInput( - len(task.dictionary), - 
task.dictionary.pad(), - args.decoder_input_dim, - args.adaptive_input_factor, - args.decoder_embed_dim, - utils.eval_str_list(args.adaptive_input_cutoff, type=int), - ) - else: - embed_tokens = Embedding( - len(task.dictionary), args.decoder_input_dim, task.dictionary.pad() - ) - - if args.tie_adaptive_weights: - assert args.adaptive_input - assert args.adaptive_input_factor == args.adaptive_softmax_factor - assert ( - args.adaptive_softmax_cutoff == args.adaptive_input_cutoff - ), "{} != {}".format( - args.adaptive_softmax_cutoff, args.adaptive_input_cutoff - ) - assert args.decoder_input_dim == args.decoder_output_dim - - decoder = LightConvDecoder( - args, - task.output_dictionary, - embed_tokens, - no_encoder_attn=True, - final_norm=False, - ) - return LightConvLanguageModel(decoder) - - -@register_model_architecture("lightconv_lm", "lightconv_lm") -def base_lm_architecture(args): - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 2048) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.adaptive_softmax_factor = getattr(args, "adaptive_softmax_factor", 4) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - - args.character_embeddings = getattr(args, "character_embeddings", False) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - args.decoder_conv_dim = getattr(args, "decoder_conv_dim", args.decoder_embed_dim) - - # The model training is not stable without this - args.decoder_normalize_before = True - - args.adaptive_input = getattr(args, "adaptive_input", False) - args.adaptive_input_factor = getattr(args, "adaptive_input_factor", 4) - args.adaptive_input_cutoff = getattr(args, "adaptive_input_cutoff", None) - - args.tie_adaptive_weights = getattr(args, "tie_adaptive_weights", False) - args.tie_adaptive_proj = getattr(args, "tie_adaptive_proj", False) - - args.decoder_kernel_size_list = getattr( - args, "decoder_kernel_size_list", [3, 7, 15, 31, 31, 31] - ) - if len(args.decoder_kernel_size_list) == 1: - args.decoder_kernel_size_list = ( - args.decoder_kernel_size_list * args.decoder_layers - ) - assert ( - len(args.decoder_kernel_size_list) == args.decoder_layers - ), "decoder_kernel_size_list doesn't match decoder_layers" - args.decoder_glu = getattr(args, "decoder_glu", True) - args.input_dropout = getattr(args, "input_dropout", 0.1) - args.weight_dropout = getattr(args, "weight_dropout", args.attention_dropout) - - -@register_model_architecture("lightconv_lm", "lightconv_lm_gbw") -def lightconv_lm_gbw(args): - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.dropout = getattr(args, "dropout", 0.1) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - base_lm_architecture(args) diff --git a/spaces/Omnibus/Video-Diffusion-WebUI/video_diffusion/zero_shot/zero_shot_text2video.py b/spaces/Omnibus/Video-Diffusion-WebUI/video_diffusion/zero_shot/zero_shot_text2video.py deleted file mode 100644 index 
a72af9c104e80697d7b91210ad30e6626791d273..0000000000000000000000000000000000000000 --- a/spaces/Omnibus/Video-Diffusion-WebUI/video_diffusion/zero_shot/zero_shot_text2video.py +++ /dev/null @@ -1,164 +0,0 @@ -import gradio as gr -import imageio -import torch -from diffusers import TextToVideoZeroPipeline - -from video_diffusion.tuneavideo.util import save_videos_grid -from video_diffusion.utils.model_list import stable_model_list - - -class ZeroShotText2VideoGenerator: - def __init__(self): - self.pipe = None - - def load_model(self, model_id): - if self.pipe is None: - self.pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") - self.pipe.to("cuda") - self.pipe.enable_xformers_memory_efficient_attention() - self.pipe.enable_attention_slicing() - - return self.pipe - - def generate_video( - self, - prompt, - negative_prompt, - model_id, - height, - width, - video_length, - guidance_scale, - fps, - t0, - t1, - motion_field_strength_x, - motion_field_strength_y, - ): - pipe = self.load_model(model_id) - result = pipe( - prompt=prompt, - negative_prompt=negative_prompt, - height=height, - width=width, - video_length=video_length, - guidance_scale=guidance_scale, - t0=t0, - t1=t1, - motion_field_strength_x=motion_field_strength_x, - motion_field_strength_y=motion_field_strength_y, - ).images - - result = [(r * 255).astype("uint8") for r in result] - imageio.mimsave("video.mp4", result, fps=fps) - return "video.mp4" - - def app(): - with gr.Blocks(): - with gr.Row(): - with gr.Column(): - zero_shot_text2video_prompt = gr.Textbox( - lines=1, - placeholder="Prompt", - show_label=False, - ) - zero_shot_text2video_negative_prompt = gr.Textbox( - lines=1, - placeholder="Negative Prompt", - show_label=False, - ) - zero_shot_text2video_model_id = gr.Dropdown( - choices=stable_model_list, - label="Stable Model List", - value=stable_model_list[0], - ) - with gr.Row(): - with gr.Column(): - zero_shot_text2video_guidance_scale = gr.Slider( - label="Guidance Scale", - minimum=1, - maximum=15, - step=1, - value=7.5, - ) - zero_shot_text2video_video_length = gr.Slider( - label="Video Length", - minimum=1, - maximum=100, - step=1, - value=10, - ) - zero_shot_text2video_t0 = gr.Slider( - label="Timestep T0", - minimum=0, - maximum=100, - step=1, - value=44, - ) - zero_shot_text2video_motion_field_strength_x = gr.Slider( - label="Motion Field Strength X", - minimum=0, - maximum=100, - step=1, - value=12, - ) - zero_shot_text2video_fps = gr.Slider( - label="Fps", - minimum=1, - maximum=60, - step=1, - value=10, - ) - with gr.Row(): - with gr.Column(): - zero_shot_text2video_height = gr.Slider( - label="Height", - minimum=128, - maximum=1280, - step=32, - value=512, - ) - zero_shot_text2video_width = gr.Slider( - label="Width", - minimum=128, - maximum=1280, - step=32, - value=512, - ) - zero_shot_text2video_t1 = gr.Slider( - label="Timestep T1", - minimum=0, - maximum=100, - step=1, - value=47, - ) - zero_shot_text2video_motion_field_strength_y = gr.Slider( - label="Motion Field Strength Y", - minimum=0, - maximum=100, - step=1, - value=12, - ) - zero_shot_text2video_button = gr.Button(value="Generator") - - with gr.Column(): - zero_shot_text2video_output = gr.Video(label="Output") - - zero_shot_text2video_button.click( - fn=ZeroShotText2VideoGenerator().generate_video, - inputs=[ - zero_shot_text2video_prompt, - zero_shot_text2video_negative_prompt, - zero_shot_text2video_model_id, - zero_shot_text2video_height, - zero_shot_text2video_width, - 
zero_shot_text2video_video_length, - zero_shot_text2video_guidance_scale, - zero_shot_text2video_fps, - zero_shot_text2video_t0, - zero_shot_text2video_t1, - zero_shot_text2video_motion_field_strength_x, - zero_shot_text2video_motion_field_strength_y, - ], - outputs=zero_shot_text2video_output, - ) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/utils/file_io.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/utils/file_io.py deleted file mode 100644 index 46ee4ec31d04eee77976ff3edbbf84762a3409ed..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/utils/file_io.py +++ /dev/null @@ -1,37 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from iopath.common.file_io import HTTPURLHandler, OneDrivePathHandler, PathHandler -from iopath.common.file_io import PathManager as PathManagerBase - -__all__ = ["PathManager", "PathHandler"] - - -PathManager = PathManagerBase() -""" -This is a detectron2 project-specific PathManager. -We try to stay away from global PathManager in fvcore as it -introduces potential conflicts among other libraries. -""" - - -class Detectron2Handler(PathHandler): - """ - Resolve anything that's hosted under detectron2's namespace. - """ - - PREFIX = "detectron2://" - S3_DETECTRON2_PREFIX = "https://dl.fbaipublicfiles.com/detectron2/" - - def _get_supported_prefixes(self): - return [self.PREFIX] - - def _get_local_path(self, path, **kwargs): - name = path[len(self.PREFIX) :] - return PathManager.get_local_path(self.S3_DETECTRON2_PREFIX + name, **kwargs) - - def _open(self, path, mode="r", **kwargs): - return PathManager.open(self._get_local_path(path), mode, **kwargs) - - -PathManager.register_handler(HTTPURLHandler()) -PathManager.register_handler(OneDrivePathHandler()) -PathManager.register_handler(Detectron2Handler()) diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/utils.py b/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/utils.py deleted file mode 100644 index c2d67ed8bc793dd5113224fa322adb88f3ed9b22..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/utils.py +++ /dev/null @@ -1,177 +0,0 @@ -import bisect -import functools -import logging -import numbers -import os -import signal -import sys -import traceback -import warnings - -import torch -from pytorch_lightning import seed_everything - -LOGGER = logging.getLogger(__name__) - -import platform -if platform.system() != 'Linux': - signal.SIGUSR1 = 1 - -def check_and_warn_input_range(tensor, min_value, max_value, name): - actual_min = tensor.min() - actual_max = tensor.max() - if actual_min < min_value or actual_max > max_value: - warnings.warn(f"{name} must be in {min_value}..{max_value} range, but it ranges {actual_min}..{actual_max}") - - -def sum_dict_with_prefix(target, cur_dict, prefix, default=0): - for k, v in cur_dict.items(): - target_key = prefix + k - target[target_key] = target.get(target_key, default) + v - - -def average_dicts(dict_list): - result = {} - norm = 1e-3 - for dct in dict_list: - sum_dict_with_prefix(result, dct, '') - norm += 1 - for k in list(result): - result[k] /= norm - return result - - -def add_prefix_to_keys(dct, prefix): - return {prefix + k: v for k, v in dct.items()} - - -def set_requires_grad(module, value): - for param in module.parameters(): - param.requires_grad = value - - -def flatten_dict(dct): - result = {} - for k, v 
in dct.items(): - if isinstance(k, tuple): - k = '_'.join(k) - if isinstance(v, dict): - for sub_k, sub_v in flatten_dict(v).items(): - result[f'{k}_{sub_k}'] = sub_v - else: - result[k] = v - return result - - -class LinearRamp: - def __init__(self, start_value=0, end_value=1, start_iter=-1, end_iter=0): - self.start_value = start_value - self.end_value = end_value - self.start_iter = start_iter - self.end_iter = end_iter - - def __call__(self, i): - if i < self.start_iter: - return self.start_value - if i >= self.end_iter: - return self.end_value - part = (i - self.start_iter) / (self.end_iter - self.start_iter) - return self.start_value * (1 - part) + self.end_value * part - - -class LadderRamp: - def __init__(self, start_iters, values): - self.start_iters = start_iters - self.values = values - assert len(values) == len(start_iters) + 1, (len(values), len(start_iters)) - - def __call__(self, i): - segment_i = bisect.bisect_right(self.start_iters, i) - return self.values[segment_i] - - -def get_ramp(kind='ladder', **kwargs): - if kind == 'linear': - return LinearRamp(**kwargs) - if kind == 'ladder': - return LadderRamp(**kwargs) - raise ValueError(f'Unexpected ramp kind: {kind}') - - -def print_traceback_handler(sig, frame): - LOGGER.warning(f'Received signal {sig}') - bt = ''.join(traceback.format_stack()) - LOGGER.warning(f'Requested stack trace:\n{bt}') - - -def register_debug_signal_handlers(sig=signal.SIGUSR1, handler=print_traceback_handler): - LOGGER.warning(f'Setting signal {sig} handler {handler}') - signal.signal(sig, handler) - - -def handle_deterministic_config(config): - seed = dict(config).get('seed', None) - if seed is None: - return False - - seed_everything(seed) - return True - - -def get_shape(t): - if torch.is_tensor(t): - return tuple(t.shape) - elif isinstance(t, dict): - return {n: get_shape(q) for n, q in t.items()} - elif isinstance(t, (list, tuple)): - return [get_shape(q) for q in t] - elif isinstance(t, numbers.Number): - return type(t) - else: - raise ValueError('unexpected type {}'.format(type(t))) - - -def get_has_ddp_rank(): - master_port = os.environ.get('MASTER_PORT', None) - node_rank = os.environ.get('NODE_RANK', None) - local_rank = os.environ.get('LOCAL_RANK', None) - world_size = os.environ.get('WORLD_SIZE', None) - has_rank = master_port is not None or node_rank is not None or local_rank is not None or world_size is not None - return has_rank - - -def handle_ddp_subprocess(): - def main_decorator(main_func): - @functools.wraps(main_func) - def new_main(*args, **kwargs): - # Trainer sets MASTER_PORT, NODE_RANK, LOCAL_RANK, WORLD_SIZE - parent_cwd = os.environ.get('TRAINING_PARENT_WORK_DIR', None) - has_parent = parent_cwd is not None - has_rank = get_has_ddp_rank() - assert has_parent == has_rank, f'Inconsistent state: has_parent={has_parent}, has_rank={has_rank}' - - if has_parent: - # we are in the worker - sys.argv.extend([ - f'hydra.run.dir={parent_cwd}', - # 'hydra/hydra_logging=disabled', - # 'hydra/job_logging=disabled' - ]) - # do nothing if this is a top-level process - # TRAINING_PARENT_WORK_DIR is set in handle_ddp_parent_process after hydra initialization - - main_func(*args, **kwargs) - return new_main - return main_decorator - - -def handle_ddp_parent_process(): - parent_cwd = os.environ.get('TRAINING_PARENT_WORK_DIR', None) - has_parent = parent_cwd is not None - has_rank = get_has_ddp_rank() - assert has_parent == has_rank, f'Inconsistent state: has_parent={has_parent}, has_rank={has_rank}' - - if parent_cwd is None: - 
os.environ['TRAINING_PARENT_WORK_DIR'] = os.getcwd() - - return has_parent diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/data/humanml/utils/word_vectorizer.py b/spaces/OpenMotionLab/MotionGPT/mGPT/data/humanml/utils/word_vectorizer.py deleted file mode 100644 index d27205820c6ce17cac2e0f923808b35c0ba5f0eb..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/mGPT/data/humanml/utils/word_vectorizer.py +++ /dev/null @@ -1,79 +0,0 @@ -import numpy as np -import pickle -from os.path import join as pjoin - -POS_enumerator = { - 'VERB': 0, - 'NOUN': 1, - 'DET': 2, - 'ADP': 3, - 'NUM': 4, - 'AUX': 5, - 'PRON': 6, - 'ADJ': 7, - 'ADV': 8, - 'Loc_VIP': 9, - 'Body_VIP': 10, - 'Obj_VIP': 11, - 'Act_VIP': 12, - 'Desc_VIP': 13, - 'OTHER': 14, -} - -Loc_list = ('left', 'right', 'clockwise', 'counterclockwise', 'anticlockwise', 'forward', 'back', 'backward', - 'up', 'down', 'straight', 'curve') - -Body_list = ('arm', 'chin', 'foot', 'feet', 'face', 'hand', 'mouth', 'leg', 'waist', 'eye', 'knee', 'shoulder', 'thigh') - -Obj_List = ('stair', 'dumbbell', 'chair', 'window', 'floor', 'car', 'ball', 'handrail', 'baseball', 'basketball') - -Act_list = ('walk', 'run', 'swing', 'pick', 'bring', 'kick', 'put', 'squat', 'throw', 'hop', 'dance', 'jump', 'turn', - 'stumble', 'dance', 'stop', 'sit', 'lift', 'lower', 'raise', 'wash', 'stand', 'kneel', 'stroll', - 'rub', 'bend', 'balance', 'flap', 'jog', 'shuffle', 'lean', 'rotate', 'spin', 'spread', 'climb') - -Desc_list = ('slowly', 'carefully', 'fast', 'careful', 'slow', 'quickly', 'happy', 'angry', 'sad', 'happily', 'angrily', 'sadly') - -VIP_dict = { - 'Loc_VIP': Loc_list, - 'Body_VIP': Body_list, - 'Obj_VIP': Obj_List, - 'Act_VIP': Act_list, - 'Desc_VIP': Desc_list, -} - - -class WordVectorizer(object): - def __init__(self, meta_root, prefix): - vectors = np.load(pjoin(meta_root, '%s_data.npy'%prefix)) - words = pickle.load(open(pjoin(meta_root, '%s_words.pkl'%prefix), 'rb')) - word2idx = pickle.load(open(pjoin(meta_root, '%s_idx.pkl'%prefix), 'rb')) - self.word2vec = {w: vectors[word2idx[w]] for w in words} - - def _get_pos_ohot(self, pos): - pos_vec = np.zeros(len(POS_enumerator)) - if pos in POS_enumerator: - pos_vec[POS_enumerator[pos]] = 1 - else: - pos_vec[POS_enumerator['OTHER']] = 1 - return pos_vec - - def __len__(self): - return len(self.word2vec) - - def __getitem__(self, item): - word, pos = item.split('/') - if word in self.word2vec: - word_vec = self.word2vec[word] - vip_pos = None - for key, values in VIP_dict.items(): - if word in values: - vip_pos = key - break - if vip_pos is not None: - pos_vec = self._get_pos_ohot(vip_pos) - else: - pos_vec = self._get_pos_ohot(pos) - else: - word_vec = self.word2vec['unk'] - pos_vec = self._get_pos_ohot('OTHER') - return word_vec, pos_vec diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/render/blender/materials.py b/spaces/OpenMotionLab/MotionGPT/mGPT/render/blender/materials.py deleted file mode 100644 index 4f0bf1a1c28254a776469058ab6473c7ca9a451d..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/mGPT/render/blender/materials.py +++ /dev/null @@ -1,135 +0,0 @@ -import bpy - - -def clear_material(material): - if material.node_tree: - material.node_tree.links.clear() - material.node_tree.nodes.clear() - - -def colored_material_diffuse_BSDF(r, g, b, a=1, roughness=0.127451): - materials = bpy.data.materials - material = materials.new(name="body") - material.use_nodes = True - clear_material(material) - nodes = material.node_tree.nodes - links = 
material.node_tree.links - output = nodes.new(type='ShaderNodeOutputMaterial') - diffuse = nodes.new(type='ShaderNodeBsdfDiffuse') - diffuse.inputs["Color"].default_value = (r, g, b, a) - diffuse.inputs["Roughness"].default_value = roughness - links.new(diffuse.outputs['BSDF'], output.inputs['Surface']) - return material - -def colored_material_relection_BSDF(r, g, b, a=1, roughness=0.127451, saturation_factor=1): - materials = bpy.data.materials - material = materials.new(name="body") - material.use_nodes = True - # clear_material(material) - nodes = material.node_tree.nodes - links = material.node_tree.links - output = nodes.new(type='ShaderNodeOutputMaterial') - # diffuse = nodes.new(type='ShaderNodeBsdfDiffuse') - diffuse = nodes["Principled BSDF"] - diffuse.inputs["Base Color"].default_value = (r*saturation_factor, g*saturation_factor, b*saturation_factor, a) - diffuse.inputs["Roughness"].default_value = roughness - links.new(diffuse.outputs['BSDF'], output.inputs['Surface']) - return material - -# keys: -# ['Base Color', 'Subsurface', 'Subsurface Radius', 'Subsurface Color', 'Metallic', 'Specular', 'Specular Tint', 'Roughness', 'Anisotropic', 'Anisotropic Rotation', 'Sheen', 1Sheen Tint', 'Clearcoat', 'Clearcoat Roughness', 'IOR', 'Transmission', 'Transmission Roughness', 'Emission', 'Emission Strength', 'Alpha', 'Normal', 'Clearcoat Normal', 'Tangent'] -DEFAULT_BSDF_SETTINGS = {"Subsurface": 0.15, - "Subsurface Radius": [1.1, 0.2, 0.1], - "Metallic": 0.3, - "Specular": 0.5, - "Specular Tint": 0.5, - "Roughness": 0.75, - "Anisotropic": 0.25, - "Anisotropic Rotation": 0.25, - "Sheen": 0.75, - "Sheen Tint": 0.5, - "Clearcoat": 0.5, - "Clearcoat Roughness": 0.5, - "IOR": 1.450, - "Transmission": 0.1, - "Transmission Roughness": 0.1, - "Emission": (0, 0, 0, 1), - "Emission Strength": 0.0, - "Alpha": 1.0} - -def body_material(r, g, b, a=1, name="body", oldrender=True): - if oldrender: - material = colored_material_diffuse_BSDF(r, g, b, a=a) - else: - materials = bpy.data.materials - material = materials.new(name=name) - material.use_nodes = True - nodes = material.node_tree.nodes - diffuse = nodes["Principled BSDF"] - inputs = diffuse.inputs - - settings = DEFAULT_BSDF_SETTINGS.copy() - settings["Base Color"] = (r, g, b, a) - settings["Subsurface Color"] = (r, g, b, a) - settings["Subsurface"] = 0.0 - - for setting, val in settings.items(): - inputs[setting].default_value = val - - return material - - -def colored_material_bsdf(name, **kwargs): - materials = bpy.data.materials - material = materials.new(name=name) - material.use_nodes = True - nodes = material.node_tree.nodes - diffuse = nodes["Principled BSDF"] - inputs = diffuse.inputs - - settings = DEFAULT_BSDF_SETTINGS.copy() - for key, val in kwargs.items(): - settings[key] = val - - for setting, val in settings.items(): - inputs[setting].default_value = val - - return material - - -def floor_mat(name="floor_mat", color=(0.1, 0.1, 0.1, 1), roughness=0.127451): - return colored_material_diffuse_BSDF(color[0], color[1], color[2], a=color[3], roughness=roughness) - - -def plane_mat(): - materials = bpy.data.materials - material = materials.new(name="plane") - material.use_nodes = True - clear_material(material) - nodes = material.node_tree.nodes - links = material.node_tree.links - output = nodes.new(type='ShaderNodeOutputMaterial') - diffuse = nodes.new(type='ShaderNodeBsdfDiffuse') - checker = nodes.new(type="ShaderNodeTexChecker") - checker.inputs["Scale"].default_value = 1024 - checker.inputs["Color1"].default_value = (0.8, 0.8, 
0.8, 1) - checker.inputs["Color2"].default_value = (0.3, 0.3, 0.3, 1) - links.new(checker.outputs["Color"], diffuse.inputs['Color']) - links.new(diffuse.outputs['BSDF'], output.inputs['Surface']) - diffuse.inputs["Roughness"].default_value = 0.127451 - return material - - -def plane_mat_uni(): - materials = bpy.data.materials - material = materials.new(name="plane_uni") - material.use_nodes = True - clear_material(material) - nodes = material.node_tree.nodes - links = material.node_tree.links - output = nodes.new(type='ShaderNodeOutputMaterial') - diffuse = nodes.new(type='ShaderNodeBsdfDiffuse') - diffuse.inputs["Color"].default_value = (0.8, 0.8, 0.8, 1) - diffuse.inputs["Roughness"].default_value = 0.127451 - links.new(diffuse.outputs['BSDF'], output.inputs['Surface']) - return material diff --git a/spaces/OptimalScale/Robin-7b/lmflow/pipeline/__init__.py b/spaces/OptimalScale/Robin-7b/lmflow/pipeline/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/slot-allocation.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/slot-allocation.go deleted file mode 100644 index 816cded86fd891adcabfc90f0f59d787557ca06b..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/slot-allocation.go and /dev/null differ diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/bricks/registry.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/bricks/registry.py deleted file mode 100644 index 39eabc58db4b5954478a2ac1ab91cea5e45ab055..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/bricks/registry.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from annotator.uniformer.mmcv.utils import Registry - -CONV_LAYERS = Registry('conv layer') -NORM_LAYERS = Registry('norm layer') -ACTIVATION_LAYERS = Registry('activation layer') -PADDING_LAYERS = Registry('padding layer') -UPSAMPLE_LAYERS = Registry('upsample layer') -PLUGIN_LAYERS = Registry('plugin layer') - -DROPOUT_LAYERS = Registry('drop out layers') -POSITIONAL_ENCODING = Registry('position encoding') -ATTENTION = Registry('attention') -FEEDFORWARD_NETWORK = Registry('feed-forward Network') -TRANSFORMER_LAYER = Registry('transformerLayer') -TRANSFORMER_LAYER_SEQUENCE = Registry('transformer-layers sequence') diff --git a/spaces/Plashkar/test-gradio-sdk/app.py b/spaces/Plashkar/test-gradio-sdk/app.py deleted file mode 100644 index 0be29e4a1b5e7748b6fe8c9f3a446117985b9378..0000000000000000000000000000000000000000 --- a/spaces/Plashkar/test-gradio-sdk/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("spaces/eugenesiow/remove-bg").launch() \ No newline at end of file diff --git a/spaces/PrinceDeven78/Dreamlike-Webui-CPU/app.py b/spaces/PrinceDeven78/Dreamlike-Webui-CPU/app.py deleted file mode 100644 index b01d0121b2040078b10841e76ea4b99a76eeb294..0000000000000000000000000000000000000000 --- a/spaces/PrinceDeven78/Dreamlike-Webui-CPU/app.py +++ /dev/null @@ -1,153 +0,0 @@ -import os -from sys import executable as pyexecutable -import subprocess -import pathlib -import gc - -def Gitclone(URI:str,ClonePath:str = "") -> int : - if(ClonePath == "") : - while True: - i=subprocess.run([r"git",r"clone",URI]) - if(i.returncode == 0 ): - del i - gc.collect() - return 0 - else : - del i - else: - while True: - i=subprocess.run([r"git",r"clone",URI,ClonePath]) - if(i.returncode == 0 ): - del i - gc.collect() - return 0 - else : - del i -def DownLoad(URI:str,DownloadPath:str,DownLoadFileName:str ) -> int: - while (True): - i=subprocess.run([r"aria2c",r"-c",r"-x" ,r"16", r"-s",r"16", r"-k" ,r"1M" ,r"-m",r"0",r"--enable-mmap=false",r"--console-log-level=error",r"-d",DownloadPath,r"-o",DownLoadFileName,URI]); - if(i.returncode == 0 ): - del i - gc.collect() - return 0 - else : - del i -user_home =pathlib.Path.home().resolve() -os.chdir(str(user_home)) -#clone stable-diffusion-webui repo -print("cloning stable-diffusion-webui repo") -Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui.git",str(user_home / r"stable-diffusion-webui")) -os.chdir(str(user_home / r"stable-diffusion-webui")) -os.system("git reset --hard 89f9faa63388756314e8a1d96cf86bf5e0663045") -# - -#install extensions -print("installing extensions") -Gitclone(r"https://huggingface.co/embed/negative",str(user_home / r"stable-diffusion-webui" / r"embeddings" / r"negative")) -Gitclone(r"https://huggingface.co/embed/lora",str(user_home / r"stable-diffusion-webui" / r"models" / r"Lora" / r"positive")) -DownLoad(r"https://huggingface.co/embed/upscale/resolve/main/4x-UltraSharp.pth",str(user_home / r"stable-diffusion-webui" / r"models" / r"ESRGAN") ,r"4x-UltraSharp.pth") -while True: - if(subprocess.run([r"wget",r"https://raw.githubusercontent.com/camenduru/stable-diffusion-webui-scripts/main/run_n_times.py",r"-O",str(user_home / r"stable-diffusion-webui" / r"scripts" / r"run_n_times.py")]).returncode == 0): - break -Gitclone(r"https://github.com/deforum-art/deforum-for-automatic1111-webui",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"deforum-for-automatic1111-webui" )) -#Gitclone(r"https://github.com/AlUlkesh/stable-diffusion-webui-images-browser",str(user_home / 
r"stable-diffusion-webui" / r"extensions"/ r"stable-diffusion-webui-images-browser")) -Gitclone(r"https://github.com/camenduru/stable-diffusion-webui-huggingface",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-huggingface")) -Gitclone(r"https://github.com/camenduru/sd-civitai-browser",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-civitai-browser")) -Gitclone(r"https://github.com/kohya-ss/sd-webui-additional-networks",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks")) -Gitclone(r"https://github.com/Mikubill/sd-webui-controlnet",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-controlnet")) -Gitclone(r"https://github.com/fkunn1326/openpose-editor",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"openpose-editor")) -Gitclone(r"https://github.com/jexom/sd-webui-depth-lib",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-depth-lib")) -Gitclone(r"https://github.com/hnmr293/posex",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"posex")) -Gitclone(r"https://github.com/nonnonstop/sd-webui-3d-open-pose-editor",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-3d-open-pose-editor")) -#中文本地化的请解除下一行的注释 -#Gitclone(r"https://github.com/dtlnor/stable-diffusion-webui-localization-zh_CN.git",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-localization-zh_CN")) -Gitclone(r"https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git" , str(user_home / r"stable-diffusion-webui" / r"extensions" / r"a1111-sd-webui-tagcomplete")) -Gitclone(r"https://github.com/camenduru/sd-webui-tunnels",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-tunnels")) -Gitclone(r"https://github.com/etherealxx/batchlinks-webui",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"batchlinks-webui")) -Gitclone(r"https://github.com/catppuccin/stable-diffusion-webui",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-catppuccin")) - -#Gitclone(r"https://github.com/KohakuBueleaf/a1111-sd-webui-locon",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"a1111-sd-webui-locon" )) -Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui-rembg",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-rembg")) -Gitclone(r"https://github.com/ashen-sensored/stable-diffusion-webui-two-shot",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-two-shot")) -Gitclone(r"https://github.com/camenduru/sd_webui_stealth_pnginfo",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd_webui_stealth_pnginfo")) - -os.chdir(user_home / r"stable-diffusion-webui") - -#download ControlNet models -print("extensions dolwnload done .\ndownloading ControlNet models") -dList =[r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11e_sd15_ip2p_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11e_sd15_shuffle_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_canny_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11f1p_sd15_depth_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_inpaint_fp16.safetensors", - 
r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_lineart_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_mlsd_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_normalbae_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_openpose_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_scribble_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_seg_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_softedge_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15s2_lineart_anime_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11f1e_sd15_tile_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11e_sd15_ip2p_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11e_sd15_shuffle_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_canny_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11f1p_sd15_depth_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_inpaint_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_lineart_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_mlsd_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_normalbae_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_openpose_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_scribble_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_seg_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_softedge_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15s2_lineart_anime_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11f1e_sd15_tile_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_style_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_sketch_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_seg_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_openpose_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_keypose_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_canny_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_canny_sd15v2.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd15v2.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_sketch_sd15v2.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_zoedepth_sd15v1.pth"] -for i in range(0,len(dList)): DownLoad(dList[i],str(user_home / "stable-diffusion-webui" / "extensions" / "sd-webui-controlnet" / "models"),pathlib.Path(dList[i]).name) -del dList - -#download model -#you can change model download address here 
-print("ControlNet models download done.\ndownloading model") -DownLoad(r"https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/dreamlike-photoreal-2.0.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"dreamlike-photoreal-2.0.safetensors") -DownLoad(r"https://huggingface.co/dreamlike-art/dreamlike-anime-1.0/resolve/main/dreamlike-anime-1.0.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"dreamlike-anime-1.0.safetensors") -DownLoad(r"https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0/resolve/main/dreamlike-diffusion-1.0.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"dreamlike-diffusion-1.0.safetensors") -DownLoad(r"https://huggingface.co/dreamlike-art/dreamlike-photoreal-1.0/resolve/main/dreamlike-photoreal-1.0.ckpt",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"dreamlike-photoreal-1.0.ckpt") -DownLoad(r"https://huggingface.co/Yntec/Photosphere/resolve/main/photosphere.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"photosphere.safetensors") - -#DownLoad(r"https://huggingface.co/ckpt/anything-v4.0/resolve/main/anything-v4.5-pruned.ckpt",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"anything-v4.5-pruned.ckpt") -#DownLoad(r"https://huggingface.co/ckpt/anything-v4.0/resolve/main/anything-v4.0.vae.pt",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"anything-v4.0.vae.pt") -#DownLoad(r"https://huggingface.co/gsdf/Counterfeit-V3.0/resolve/main/Counterfeit-V3.0_fp16.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"Counterfeit-V3.0_fp16.safetensors") -#DownLoad(r"https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3A1B_orangemixs.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"AOM3A1B_orangemixs.safetensors") -#DownLoad(r"https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/VAEs/orangemix.vae.pt",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"orangemix.vae.pt") -#DownLoad(r"https://huggingface.co/Meina/MeinaPastel/resolve/main/MeinaPastelV5%20-%20Baked%20VAE.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"MeinaPastelV5_BakedVAE.safetensors") -#DownLoad(r"https://huggingface.co/Meina/MeinaPastel/resolve/main/MeinaPastelV5%20-%20Without%20VAE.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"MeinaPastelV5_WithoutVAE.safetensors") -#DownLoad(r"https://civitai.com/api/download/models/9474",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"chilloutmix_NiPrunedFp16.safetensors") - -DownLoad(r"https://civitai.com/api/download/models/39885",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora"),r"Better_light.safetensors") -DownLoad(r"https://civitai.com/api/download/models/21065",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora"),r"LAS.safetensors") -DownLoad(r"https://civitai.com/api/download/models/39164",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora"),r"backlighting.safetensors") -#strt webui - -print("Done\nStarting Webui...") -os.chdir(user_home / 
r"stable-diffusion-webui") -while True: - ret=subprocess.run([r"python3" ,r"launch.py",r"--precision",r"full",r"--no-half",r"--no-half-vae",r"--enable-insecure-extension-access",r"--medvram",r"--skip-torch-cuda-test",r"--enable-console-prompts",r"--ui-settings-file="+str(pathlib.Path(__file__).parent /r"config.json")]) - if(ret.returncode == 0 ): - del ret - gc.collect() - else : - del ret - -del os ,user_home ,pyexecutable ,subprocess \ No newline at end of file diff --git a/spaces/RMXK/RVC_HFF/diffq/__init__.py b/spaces/RMXK/RVC_HFF/diffq/__init__.py deleted file mode 100644 index 2b997ee4ed99a90cc43db7812383927e6fe1a3e8..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/diffq/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -""" -This package implements different quantization strategies: - -- `diffq.uniform.UniformQuantizer`: classic uniform quantization over n bits. -- `diffq.diffq.DiffQuantizer`: differentiable quantizer based on scaled noise injection. - -Also, do check `diffq.base.BaseQuantizer` for the common methods of all Quantizers. -""" - -from .uniform import UniformQuantizer -from .diffq import DiffQuantizer diff --git a/spaces/RMXK/RVC_HFF/infer/lib/infer_pack/transforms.py b/spaces/RMXK/RVC_HFF/infer/lib/infer_pack/transforms.py deleted file mode 100644 index 6f30b7177d17fc61a4173c21b4233172a890be58..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/infer/lib/infer_pack/transforms.py +++ /dev/null @@ -1,207 +0,0 @@ -import numpy as np -import torch -from torch.nn import functional as F - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) 
- 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = 
root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/Rain-2008730/TXT_GENERATOR_69420/app.py b/spaces/Rain-2008730/TXT_GENERATOR_69420/app.py deleted file mode 100644 index 3ff4b085096f58a4aa4110cce278726ecc047714..0000000000000000000000000000000000000000 --- a/spaces/Rain-2008730/TXT_GENERATOR_69420/app.py +++ /dev/null @@ -1,8 +0,0 @@ -import gradio as gr -from gradio.mix import Parallel -mod1=gr.Interface.load("huggingface/gpt2") -mod2=gr.Interface.load("huggingface/EleutherAI/gpt-neo-2.7B") -mod3=gr.Interface.load("huggingface/EleutherAI/gpt-neo-1.3B") -title="Txt generator 69420" -description="input txt and submit" -gr.Parallel(mod1, mod2 , mod3, title=title, description=description).launch() \ No newline at end of file diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pkg_resources/py31compat.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pkg_resources/py31compat.py deleted file mode 100644 index a2d3007ceb16b0eeb4b1f57361c089558a25daeb..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pkg_resources/py31compat.py +++ /dev/null @@ -1,23 +0,0 @@ -import os -import errno -import sys - -from pip._vendor import six - - -def _makedirs_31(path, exist_ok=False): - try: - os.makedirs(path) - except OSError as exc: - if not exist_ok or exc.errno != errno.EEXIST: - raise - - -# rely on compatibility behavior until mode considerations -# and exists_ok considerations are disentangled. 
-# See https://github.com/pypa/setuptools/pull/1083#issuecomment-315168663 -needs_makedirs = ( - six.PY2 or - (3, 4) <= sys.version_info < (3, 4, 1) -) -makedirs = _makedirs_31 if needs_makedirs else os.makedirs diff --git a/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/models/utils/supervision.py b/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/models/utils/supervision.py deleted file mode 100644 index 86f167e95439d588c998ca32b9296c3482484215..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/models/utils/supervision.py +++ /dev/null @@ -1,172 +0,0 @@ -from math import log -from loguru import logger - -import torch -from einops import repeat -from kornia.utils import create_meshgrid - -from .geometry import warp_kpts - -############## ↓ Coarse-Level supervision ↓ ############## - - -@torch.no_grad() -def mask_pts_at_padded_regions(grid_pt, mask): - """For megadepth dataset, zero-padding exists in images""" - mask = repeat(mask, "n h w -> n (h w) c", c=2) - grid_pt[~mask.bool()] = 0 - return grid_pt - - -@torch.no_grad() -def spvs_coarse(data, config): - """ - Update: - data (dict): { - "conf_matrix_gt": [N, hw0, hw1], - 'spv_b_ids': [M] - 'spv_i_ids': [M] - 'spv_j_ids': [M] - 'spv_w_pt0_i': [N, hw0, 2], in original image resolution - 'spv_pt1_i': [N, hw1, 2], in original image resolution - } - - NOTE: - - for scannet dataset, there're 3 kinds of resolution {i, c, f} - - for megadepth dataset, there're 4 kinds of resolution {i, i_resize, c, f} - """ - # 1. misc - device = data["image0"].device - N, _, H0, W0 = data["image0"].shape - _, _, H1, W1 = data["image1"].shape - scale = config["MODEL"]["RESOLUTION"][0] - scale0 = scale * data["scale0"][:, None] if "scale0" in data else scale - scale1 = scale * data["scale1"][:, None] if "scale0" in data else scale - h0, w0, h1, w1 = map(lambda x: x // scale, [H0, W0, H1, W1]) - - # 2. warp grids - # create kpts in meshgrid and resize them to image resolution - grid_pt0_c = ( - create_meshgrid(h0, w0, False, device).reshape(1, h0 * w0, 2).repeat(N, 1, 1) - ) # [N, hw, 2] - grid_pt0_i = scale0 * grid_pt0_c - grid_pt1_c = ( - create_meshgrid(h1, w1, False, device).reshape(1, h1 * w1, 2).repeat(N, 1, 1) - ) - grid_pt1_i = scale1 * grid_pt1_c - - # mask padded region to (0, 0), so no need to manually mask conf_matrix_gt - if "mask0" in data: - grid_pt0_i = mask_pts_at_padded_regions(grid_pt0_i, data["mask0"]) - grid_pt1_i = mask_pts_at_padded_regions(grid_pt1_i, data["mask1"]) - - # warp kpts bi-directionally and resize them to coarse-level resolution - # (no depth consistency check, since it leads to worse results experimentally) - # (unhandled edge case: points with 0-depth will be warped to the left-up corner) - _, w_pt0_i = warp_kpts( - grid_pt0_i, - data["depth0"], - data["depth1"], - data["T_0to1"], - data["K0"], - data["K1"], - ) - _, w_pt1_i = warp_kpts( - grid_pt1_i, - data["depth1"], - data["depth0"], - data["T_1to0"], - data["K1"], - data["K0"], - ) - w_pt0_c = w_pt0_i / scale1 - w_pt1_c = w_pt1_i / scale0 - - # 3. 
check if mutual nearest neighbor - w_pt0_c_round = w_pt0_c[:, :, :].round().long() - nearest_index1 = w_pt0_c_round[..., 0] + w_pt0_c_round[..., 1] * w1 - w_pt1_c_round = w_pt1_c[:, :, :].round().long() - nearest_index0 = w_pt1_c_round[..., 0] + w_pt1_c_round[..., 1] * w0 - - # corner case: out of boundary - def out_bound_mask(pt, w, h): - return ( - (pt[..., 0] < 0) + (pt[..., 0] >= w) + (pt[..., 1] < 0) + (pt[..., 1] >= h) - ) - - nearest_index1[out_bound_mask(w_pt0_c_round, w1, h1)] = 0 - nearest_index0[out_bound_mask(w_pt1_c_round, w0, h0)] = 0 - - loop_back = torch.stack( - [nearest_index0[_b][_i] for _b, _i in enumerate(nearest_index1)], dim=0 - ) - correct_0to1 = loop_back == torch.arange(h0 * w0, device=device)[None].repeat(N, 1) - correct_0to1[:, 0] = False # ignore the top-left corner - - # 4. construct a gt conf_matrix - conf_matrix_gt = torch.zeros(N, h0 * w0, h1 * w1, device=device) - b_ids, i_ids = torch.where(correct_0to1 != 0) - j_ids = nearest_index1[b_ids, i_ids] - - conf_matrix_gt[b_ids, i_ids, j_ids] = 1 - data.update({"conf_matrix_gt": conf_matrix_gt}) - - # 5. save coarse matches(gt) for training fine level - if len(b_ids) == 0: - logger.warning(f"No groundtruth coarse match found for: {data['pair_names']}") - # this won't affect fine-level loss calculation - b_ids = torch.tensor([0], device=device) - i_ids = torch.tensor([0], device=device) - j_ids = torch.tensor([0], device=device) - - data.update({"spv_b_ids": b_ids, "spv_i_ids": i_ids, "spv_j_ids": j_ids}) - - # 6. save intermediate results (for fast fine-level computation) - data.update({"spv_w_pt0_i": w_pt0_i, "spv_pt1_i": grid_pt1_i}) - - -def compute_supervision_coarse(data, config): - assert ( - len(set(data["dataset_name"])) == 1 - ), "Do not support mixed datasets training!" - data_source = data["dataset_name"][0] - if data_source.lower() in ["scannet", "megadepth"]: - spvs_coarse(data, config) - else: - raise ValueError(f"Unknown data source: {data_source}") - - -############## ↓ Fine-Level supervision ↓ ############## - - -@torch.no_grad() -def spvs_fine(data, config): - """ - Update: - data (dict):{ - "expec_f_gt": [M, 2]} - """ - # 1. misc - # w_pt0_i, pt1_i = data.pop('spv_w_pt0_i'), data.pop('spv_pt1_i') - w_pt0_i, pt1_i = data["spv_w_pt0_i"], data["spv_pt1_i"] - scale = config["MODEL"]["RESOLUTION"][1] - radius = config["MODEL"]["FINE_WINDOW_SIZE"] // 2 - - # 2. get coarse prediction - b_ids, i_ids, j_ids = data["b_ids"], data["i_ids"], data["j_ids"] - - # 3. compute gt - scale = scale * data["scale1"][b_ids] if "scale0" in data else scale - # `expec_f_gt` might exceed the window, i.e. 
abs(*) > 1, which would be filtered later - expec_f_gt = ( - (w_pt0_i[b_ids, i_ids] - pt1_i[b_ids, j_ids]) / scale / radius - ) # [M, 2] - data.update({"expec_f_gt": expec_f_gt}) - - -def compute_supervision_fine(data, config): - data_source = data["dataset_name"][0] - if data_source.lower() in ["scannet", "megadepth"]: - spvs_fine(data, config) - else: - raise NotImplementedError diff --git a/spaces/RobLi/ControlNet-v1-1/depth_estimator.py b/spaces/RobLi/ControlNet-v1-1/depth_estimator.py deleted file mode 100644 index 8af14987f58b59329e5c8441dec43f1075a29d8b..0000000000000000000000000000000000000000 --- a/spaces/RobLi/ControlNet-v1-1/depth_estimator.py +++ /dev/null @@ -1,25 +0,0 @@ -import numpy as np -import PIL.Image -from controlnet_aux.util import HWC3 -from transformers import pipeline - -from cv_utils import resize_image - - -class DepthEstimator: - def __init__(self): - self.model = pipeline('depth-estimation') - - def __call__(self, image: np.ndarray, **kwargs) -> PIL.Image.Image: - detect_resolution = kwargs.pop('detect_resolution', 512) - image_resolution = kwargs.pop('image_resolution', 512) - image = np.array(image) - image = HWC3(image) - image = resize_image(image, resolution=detect_resolution) - image = PIL.Image.fromarray(image) - image = self.model(image) - image = image['depth'] - image = np.array(image) - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - return PIL.Image.fromarray(image) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/models/pspnet_unet_s5-d16.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/models/pspnet_unet_s5-d16.py deleted file mode 100644 index fcff9ec4f41fad158344ecd77313dc14564f3682..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/models/pspnet_unet_s5-d16.py +++ /dev/null @@ -1,50 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained=None, - backbone=dict( - type='UNet', - in_channels=3, - base_channels=64, - num_stages=5, - strides=(1, 1, 1, 1, 1), - enc_num_convs=(2, 2, 2, 2, 2), - dec_num_convs=(2, 2, 2, 2), - downsamples=(True, True, True, True), - enc_dilations=(1, 1, 1, 1, 1), - dec_dilations=(1, 1, 1, 1), - with_cp=False, - conv_cfg=None, - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - norm_eval=False), - decode_head=dict( - type='PSPHead', - in_channels=64, - in_index=4, - channels=16, - pool_scales=(1, 2, 3, 6), - dropout_ratio=0.1, - num_classes=2, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=128, - in_index=3, - channels=64, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=2, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='slide', crop_size=256, stride=170)) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/pavi.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/pavi.py deleted file mode 100644 index 1dcf146d8163aff1363e9764999b0a74d674a595..0000000000000000000000000000000000000000 --- 
a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/pavi.py +++ /dev/null @@ -1,117 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import json -import os -import os.path as osp - -import torch -import yaml - -import annotator.uniformer.mmcv as mmcv -from ....parallel.utils import is_module_wrapper -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class PaviLoggerHook(LoggerHook): - - def __init__(self, - init_kwargs=None, - add_graph=False, - add_last_ckpt=False, - interval=10, - ignore_last=True, - reset_flag=False, - by_epoch=True, - img_key='img_info'): - super(PaviLoggerHook, self).__init__(interval, ignore_last, reset_flag, - by_epoch) - self.init_kwargs = init_kwargs - self.add_graph = add_graph - self.add_last_ckpt = add_last_ckpt - self.img_key = img_key - - @master_only - def before_run(self, runner): - super(PaviLoggerHook, self).before_run(runner) - try: - from pavi import SummaryWriter - except ImportError: - raise ImportError('Please run "pip install pavi" to install pavi.') - - self.run_name = runner.work_dir.split('/')[-1] - - if not self.init_kwargs: - self.init_kwargs = dict() - self.init_kwargs['name'] = self.run_name - self.init_kwargs['model'] = runner._model_name - if runner.meta is not None: - if 'config_dict' in runner.meta: - config_dict = runner.meta['config_dict'] - assert isinstance( - config_dict, - dict), ('meta["config_dict"] has to be of a dict, ' - f'but got {type(config_dict)}') - elif 'config_file' in runner.meta: - config_file = runner.meta['config_file'] - config_dict = dict(mmcv.Config.fromfile(config_file)) - else: - config_dict = None - if config_dict is not None: - # 'max_.*iter' is parsed in pavi sdk as the maximum iterations - # to properly set up the progress bar. - config_dict = config_dict.copy() - config_dict.setdefault('max_iter', runner.max_iters) - # non-serializable values are first converted in - # mmcv.dump to json - config_dict = json.loads( - mmcv.dump(config_dict, file_format='json')) - session_text = yaml.dump(config_dict) - self.init_kwargs['session_text'] = session_text - self.writer = SummaryWriter(**self.init_kwargs) - - def get_step(self, runner): - """Get the total training step/epoch.""" - if self.get_mode(runner) == 'val' and self.by_epoch: - return self.get_epoch(runner) - else: - return self.get_iter(runner) - - @master_only - def log(self, runner): - tags = self.get_loggable_tags(runner, add_mode=False) - if tags: - self.writer.add_scalars( - self.get_mode(runner), tags, self.get_step(runner)) - - @master_only - def after_run(self, runner): - if self.add_last_ckpt: - ckpt_path = osp.join(runner.work_dir, 'latest.pth') - if osp.islink(ckpt_path): - ckpt_path = osp.join(runner.work_dir, os.readlink(ckpt_path)) - - if osp.isfile(ckpt_path): - # runner.epoch += 1 has been done before `after_run`. 
- iteration = runner.epoch if self.by_epoch else runner.iter - return self.writer.add_snapshot_file( - tag=self.run_name, - snapshot_file_path=ckpt_path, - iteration=iteration) - - # flush the buffer and send a task ending signal to Pavi - self.writer.close() - - @master_only - def before_epoch(self, runner): - if runner.epoch == 0 and self.add_graph: - if is_module_wrapper(runner.model): - _model = runner.model.module - else: - _model = runner.model - device = next(_model.parameters()).device - data = next(iter(runner.data_loader)) - image = data[self.img_key][0:1].to(device) - with torch.no_grad(): - self.writer.add_graph(_model, image) diff --git a/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/TunisianASR/results/14epoch_tunisian/1234/app.py b/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/TunisianASR/results/14epoch_tunisian/1234/app.py deleted file mode 100644 index 91c07521e7437916500cb6f5d17972bdab624d4f..0000000000000000000000000000000000000000 --- a/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/TunisianASR/results/14epoch_tunisian/1234/app.py +++ /dev/null @@ -1,797 +0,0 @@ -import os -import sys -import torch -import logging -import speechbrain as sb -from speechbrain.utils.distributed import run_on_main -from hyperpyyaml import load_hyperpyyaml -from pathlib import Path -import torchaudio.transforms as T -from cv_train import ASRCV -import torchaudio -import numpy as np -import kenlm -from pyctcdecode import build_ctcdecoder -import re -from torch.nn.utils.rnn import pad_sequence -import torch.optim as optim -import torch.nn as nn - - -# Commented out IPython magic to ensure Python compatibility. -hparams_file, run_opts, overrides = sb.parse_arguments(["TunisianASR/semi_trained.yaml"]) - -# If distributed_launch=True then -# create ddp_group with the right communication protocol -sb.utils.distributed.ddp_init_group(run_opts) - -with open(hparams_file) as fin: - hparams = load_hyperpyyaml(fin, overrides) - -# Create experiment directory -sb.create_experiment_directory( - experiment_directory=hparams["output_folder"], - hyperparams_to_save=hparams_file, - overrides=overrides, -) -# Dataset prep (parsing Librispeech) - -def dataio_prepare(hparams): - """This function prepares the datasets to be used in the brain class. - It also defines the data processing pipeline through user-defined functions.""" - - # 1. Define datasets - data_folder = hparams["data_folder"] - - train_data = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=hparams["train_csv"], replacements={"data_root": data_folder}, - ) - - if hparams["sorting"] == "ascending": - # we sort training data to speed up training and get better results. - train_data = train_data.filtered_sorted( - sort_key="duration", - key_max_value={"duration": hparams["avoid_if_longer_than"]}, - ) - # when sorting do not shuffle in dataloader ! otherwise is pointless - hparams["dataloader_options"]["shuffle"] = False - - elif hparams["sorting"] == "descending": - train_data = train_data.filtered_sorted( - sort_key="duration", - reverse=True, - key_max_value={"duration": hparams["avoid_if_longer_than"]}, - ) - # when sorting do not shuffle in dataloader ! 
otherwise is pointless - hparams["dataloader_options"]["shuffle"] = False - - elif hparams["sorting"] == "random": - pass - - else: - raise NotImplementedError( - "sorting must be random, ascending or descending" - ) - - valid_data = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=hparams["valid_csv"], replacements={"data_root": data_folder}, - ) - # We also sort the validation data so it is faster to validate - valid_data = valid_data.filtered_sorted(sort_key="duration") - test_datasets = {} - for csv_file in hparams["test_csv"]: - name = Path(csv_file).stem - test_datasets[name] = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=csv_file, replacements={"data_root": data_folder} - ) - test_datasets[name] = test_datasets[name].filtered_sorted( - sort_key="duration" - ) - - datasets = [train_data, valid_data] + [i for k, i in test_datasets.items()] - - - # 2. Define audio pipeline: - @sb.utils.data_pipeline.takes("wav") - @sb.utils.data_pipeline.provides("sig") - def audio_pipeline(wav): - info = torchaudio.info(wav) - sig = sb.dataio.dataio.read_audio(wav) - if len(sig.shape)>1 : - sig = torch.mean(sig, dim=1) - resampled = torchaudio.transforms.Resample( - info.sample_rate, hparams["sample_rate"], - )(sig) - return resampled - - sb.dataio.dataset.add_dynamic_item(datasets, audio_pipeline) - label_encoder = sb.dataio.encoder.CTCTextEncoder() - - # 3. Define text pipeline: - @sb.utils.data_pipeline.takes("wrd") - @sb.utils.data_pipeline.provides( - "wrd", "char_list", "tokens_list", "tokens" - ) - def text_pipeline(wrd): - yield wrd - char_list = list(wrd) - yield char_list - tokens_list = label_encoder.encode_sequence(char_list) - yield tokens_list - tokens = torch.LongTensor(tokens_list) - yield tokens - - sb.dataio.dataset.add_dynamic_item(datasets, text_pipeline) - lab_enc_file = os.path.join(hparams["save_folder"], "label_encoder.txt") - special_labels = { - "blank_label": hparams["blank_index"], - "unk_label": hparams["unk_index"] - } - label_encoder.load_or_create( - path=lab_enc_file, - from_didatasets=[train_data], - output_key="char_list", - special_labels=special_labels, - sequence_input=True, - ) - - # 4. 
Set output: - sb.dataio.dataset.set_output_keys( - datasets, ["id", "sig", "wrd", "char_list", "tokens"], - ) - return train_data, valid_data,test_datasets, label_encoder - -class ASR(sb.core.Brain): - def compute_forward(self, batch, stage): - """Forward computations from the waveform batches to the output probabilities.""" - - batch = batch.to(self.device) - wavs, wav_lens = batch.sig - wavs, wav_lens = wavs.to(self.device), wav_lens.to(self.device) - - if stage == sb.Stage.TRAIN: - if hasattr(self.hparams, "augmentation"): - wavs = self.hparams.augmentation(wavs, wav_lens) - - # Forward pass - feats = self.modules.wav2vec2(wavs, wav_lens) - x = self.modules.enc(feats) - logits = self.modules.ctc_lin(x) - p_ctc = self.hparams.log_softmax(logits) - - return p_ctc, wav_lens - - def custom_encode(self,wavs,wav_lens) : - wavs = wavs.to("cpu") - if(wav_lens is not None): wav_lens.to(self.device) - - feats = self.modules.wav2vec2(wavs, wav_lens) - x = self.modules.enc(feats) - logits = self.modules.ctc_lin(x) - p_ctc = self.hparams.log_softmax(logits) - - return feats,p_ctc - - - - def compute_objectives(self, predictions, batch, stage): - """Computes the loss (CTC) given predictions and targets.""" - - p_ctc, wav_lens = predictions - - ids = batch.id - tokens, tokens_lens = batch.tokens - - loss = self.hparams.ctc_cost(p_ctc, tokens, wav_lens, tokens_lens) - - if stage != sb.Stage.TRAIN: - predicted_tokens = sb.decoders.ctc_greedy_decode( - p_ctc, wav_lens, blank_id=self.hparams.blank_index - ) - # Decode token terms to words - if self.hparams.use_language_modelling: - predicted_words = [] - for logs in p_ctc: - text = decoder.decode(logs.detach().cpu().numpy()) - predicted_words.append(text.split(" ")) - else: - predicted_words = [ - "".join(self.tokenizer.decode_ndim(utt_seq)).split(" ") - for utt_seq in predicted_tokens - ] - # Convert indices to words - target_words = [wrd.split(" ") for wrd in batch.wrd] - - self.wer_metric.append(ids, predicted_words, target_words) - self.cer_metric.append(ids, predicted_words, target_words) - - return loss - - def fit_batch(self, batch): - """Train the parameters given a single batch in input""" - should_step = self.step % self.grad_accumulation_factor == 0 - # Managing automatic mixed precision - # TOFIX: CTC fine-tuning currently is unstable - # This is certainly due to CTC being done in fp16 instead of fp32 - if self.auto_mix_prec: - with torch.cuda.amp.autocast(): - with self.no_sync(): - outputs = self.compute_forward(batch, sb.Stage.TRAIN) - loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN) - with self.no_sync(not should_step): - self.scaler.scale( - loss / self.grad_accumulation_factor - ).backward() - if should_step: - - if not self.hparams.wav2vec2.freeze: - self.scaler.unscale_(self.wav2vec_optimizer) - self.scaler.unscale_(self.model_optimizer) - if self.check_gradients(loss): - if not self.hparams.wav2vec2.freeze: - if self.optimizer_step >= self.hparams.warmup_steps: - self.scaler.step(self.wav2vec_optimizer) - self.scaler.step(self.model_optimizer) - self.scaler.update() - self.zero_grad() - self.optimizer_step += 1 - else: - # This is mandatory because HF models have a weird behavior with DDP - # on the forward pass - with self.no_sync(): - outputs = self.compute_forward(batch, sb.Stage.TRAIN) - - loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN) - - with self.no_sync(not should_step): - (loss / self.grad_accumulation_factor).backward() - if should_step: - if self.check_gradients(loss): - if not 
self.hparams.wav2vec2.freeze: - if self.optimizer_step >= self.hparams.warmup_steps: - self.wav2vec_optimizer.step() - self.model_optimizer.step() - self.zero_grad() - self.optimizer_step += 1 - - self.on_fit_batch_end(batch, outputs, loss, should_step) - return loss.detach().cpu() - - def evaluate_batch(self, batch, stage): - """Computations needed for validation/test batches""" - predictions = self.compute_forward(batch, stage=stage) - with torch.no_grad(): - loss = self.compute_objectives(predictions, batch, stage=stage) - return loss.detach() - - def on_stage_start(self, stage, epoch): - """Gets called at the beginning of each epoch""" - if stage != sb.Stage.TRAIN: - self.cer_metric = self.hparams.cer_computer() - self.wer_metric = self.hparams.error_rate_computer() - - def on_stage_end(self, stage, stage_loss, epoch): - """Gets called at the end of an epoch.""" - # Compute/store important stats - stage_stats = {"loss": stage_loss} - if stage == sb.Stage.TRAIN: - self.train_stats = stage_stats - else: - stage_stats["CER"] = self.cer_metric.summarize("error_rate") - stage_stats["WER"] = self.wer_metric.summarize("error_rate") - - # Perform end-of-iteration things, like annealing, logging, etc. - if stage == sb.Stage.VALID: - old_lr_model, new_lr_model = self.hparams.lr_annealing_model( - stage_stats["loss"] - ) - old_lr_wav2vec, new_lr_wav2vec = self.hparams.lr_annealing_wav2vec( - stage_stats["loss"] - ) - sb.nnet.schedulers.update_learning_rate( - self.model_optimizer, new_lr_model - ) - if not self.hparams.wav2vec2.freeze: - sb.nnet.schedulers.update_learning_rate( - self.wav2vec_optimizer, new_lr_wav2vec - ) - self.hparams.train_logger.log_stats( - stats_meta={ - "epoch": epoch, - "lr_model": old_lr_model, - "lr_wav2vec": old_lr_wav2vec, - }, - train_stats=self.train_stats, - valid_stats=stage_stats, - ) - self.checkpointer.save_and_keep_only( - meta={"WER": stage_stats["WER"]}, min_keys=["WER"], - ) - elif stage == sb.Stage.TEST: - self.hparams.train_logger.log_stats( - stats_meta={"Epoch loaded": self.hparams.epoch_counter.current}, - test_stats=stage_stats, - ) - with open(self.hparams.wer_file, "w") as w: - self.wer_metric.write_stats(w) - - def init_optimizers(self): - "Initializes the wav2vec2 optimizer and model optimizer" - - # If the wav2vec encoder is unfrozen, we create the optimizer - if not self.hparams.wav2vec2.freeze: - self.wav2vec_optimizer = self.hparams.wav2vec_opt_class( - self.modules.wav2vec2.parameters() - ) - if self.checkpointer is not None: - self.checkpointer.add_recoverable( - "wav2vec_opt", self.wav2vec_optimizer - ) - - self.model_optimizer = self.hparams.model_opt_class( - self.hparams.model.parameters() - ) - - if self.checkpointer is not None: - self.checkpointer.add_recoverable("modelopt", self.model_optimizer) - - def zero_grad(self, set_to_none=False): - if not self.hparams.wav2vec2.freeze: - self.wav2vec_optimizer.zero_grad(set_to_none) - self.model_optimizer.zero_grad(set_to_none) - - -from speechbrain.pretrained import EncoderASR,EncoderDecoderASR -french_asr_model = EncoderASR.from_hparams(source="asr-wav2vec2-commonvoice-fr", savedir="pretrained_models/asr-wav2vec2-commonvoice-fr") -french_asr_model.to("cpu") -cvhparams_file, cvrun_opts, cvoverrides = sb.parse_arguments(["EnglishCV/train_en_with_wav2vec.yaml"]) -with open(cvhparams_file) as cvfin: - cvhparams = load_hyperpyyaml(cvfin, cvoverrides) -cvrun_opts["device"]="cpu" -english_asr_model = ASRCV( - modules=cvhparams["modules"], - hparams=cvhparams, - run_opts=cvrun_opts, - 
checkpointer=cvhparams["checkpointer"], - ) -english_asr_model.modules.to("cpu") -english_asr_model.device="cpu" -english_asr_model.checkpointer.recover_if_possible(device="cpu") -run_opts["device"]="cpu" -print("moving to tunisian model") -asr_brain = ASR( - modules=hparams["modules"], - hparams=hparams, - run_opts=run_opts, - checkpointer=hparams["checkpointer"], -) -asr_brain.modules.to("cpu") -asr_brain.checkpointer.recover_if_possible(device="cpu") -asr_brain.modules.eval() -english_asr_model.modules.eval() -french_asr_model.mods.eval() -asr_brain.modules.to("cpu") - -# Commented out IPython magic to ensure Python compatibility. -# %ls - -#UTILS FUNCTIOJNS -def get_size_dimensions(arr): - size_dimensions = [] - while isinstance(arr, list): - size_dimensions.append(len(arr)) - arr = arr[0] - return size_dimensions - -def scale_array(batch,n): - scaled_batch = [] - - for array in batch: - if(n < len(array)): raise ValueError("Cannot scale Array down") - - repeat = round(n/len(array))+1 - scaled_length_array= [] - - for i in array: - for j in range(repeat) : - if(len(scaled_length_array) == n): break - scaled_length_array.append(i) - - scaled_batch.append(scaled_length_array) - - return torch.tensor(scaled_batch) - - -def load_paths(wavs_path): - waveforms = [] - for path in wavs_path : - waveform, _ = torchaudio.load(path) - waveforms.append(waveform.squeeze(0)) - # normalize array length to the bigger arrays by pading with 0's - padded_arrays = pad_sequence(waveforms, batch_first=True) - return torch.tensor(padded_arrays) - - - -device = 'cpu' -verbose = 0 -#FLOW LEVEL FUNCTIONS -def merge_strategy(embeddings1, embeddings2, embeddings3,post1, post2,post3): - - - post1 = post1.to(device) - post2 = post2.to(device) - post3 = post3.to(device) - embeddings1 = embeddings1.to(device) - embeddings2 = embeddings2.to(device) - embeddings3 = embeddings3.to(device) - - posteriograms_merged = torch.cat((post1,post2,post3),dim=2) - embeddings_merged = torch.cat((embeddings1,embeddings2,embeddings3),dim=2) - - if(verbose !=0): - print('MERGED POST ',posteriograms_merged.shape) - print('MERGED emb ',embeddings_merged.shape) - - return torch.cat((posteriograms_merged,embeddings_merged),dim=2).to(device) - -def decode(model,wavs,wav_lens): - - with torch.no_grad(): - wav_lens = wav_lens.to(model.device) - encoder_out = model.encode_batch(wavs, wav_lens) - predictions = model.decoding_function(encoder_out, wav_lens) - return predictions - -def middle_layer(batch, lens): - - tn_embeddings, tn_posteriogram = asr_brain.custom_encode(batch,None) - - fr_embeddings = french_asr_model.mods.encoder.wav2vec2(batch) - fr_posteriogram =french_asr_model.encode_batch(batch,lens) - en_embeddings = english_asr_model.modules.wav2vec2(batch, lens) - x = english_asr_model.modules.enc(en_embeddings) - en_posteriogram = english_asr_model.modules.ctc_lin(x) - #scores, en_posteriogram = english_asr_model.mods.decoder(en_embeddings ,lens) - if(verbose !=0): - print('[EMBEDDINGS] FR:',fr_embeddings.shape, "EN:",en_embeddings.shape, "TN:", tn_embeddings.shape) - print('[POSTERIOGRAM] FR:',fr_posteriogram.shape, "EN:",en_posteriogram.shape,"TN:",tn_posteriogram.shape) - - - bilangual_sample = merge_strategy(fr_embeddings,en_embeddings,tn_embeddings,fr_posteriogram,en_posteriogram,tn_posteriogram) - return bilangual_sample - -class Mixer(sb.core.Brain): - - def compute_forward(self, batch, stage): - """Forward computations from the waveform batches to the output probabilities.""" - wavs, wav_lens = batch.sig - wavs, wav_lens = 
wavs.to(self.device), wav_lens.to(self.device) - - if stage == sb.Stage.TRAIN: - if hasattr(self.hparams, "augmentation"): - wavs = self.hparams.augmentation(wavs, wav_lens) - - multi_langual_feats = middle_layer(wavs, wav_lens) - multi_langual_feats= multi_langual_feats.to(device) - feats, _ = self.modules.enc(multi_langual_feats) - logits = self.modules.ctc_lin(feats) - p_ctc = self.hparams.log_softmax(logits) - - if stage!= sb.Stage.TRAIN: - p_tokens = sb.decoders.ctc_greedy_decode( - p_ctc, wav_lens, blank_id=self.hparams.blank_index - ) - else : - p_tokens = None - return p_ctc, wav_lens, p_tokens - - - def treat_wav(self,sig): - multi_langual_feats = middle_layer(sig.to("cpu"), torch.tensor([1]).to("cpu")) - multi_langual_feats= multi_langual_feats.to(device) - feats, _ = self.modules.enc(multi_langual_feats) - logits = self.modules.ctc_lin(feats) - p_ctc = self.hparams.log_softmax(logits) - predicted_words =[] - for logs in p_ctc: - text = decoder.decode(logs.detach().cpu().numpy()) - predicted_words.append(text.split(" ")) - return " ".join(predicted_words[0]) - - - def compute_objectives(self, predictions, batch, stage): - """Computes the loss (CTC) given predictions and targets.""" - - p_ctc, wav_lens , predicted_tokens= predictions - - ids = batch.id - tokens, tokens_lens = batch.tokens - - loss = self.hparams.ctc_cost(p_ctc, tokens, wav_lens, tokens_lens) - - - if stage == sb.Stage.VALID: - predicted_words = [ - "".join(self.tokenizer.decode_ndim(utt_seq)).split(" ") - for utt_seq in predicted_tokens - ] - target_words = [wrd.split(" ") for wrd in batch.wrd] - self.wer_metric.append(ids, predicted_words, target_words) - self.cer_metric.append(ids, predicted_words, target_words) - if stage ==sb.Stage.TEST : - if self.hparams.language_modelling: - predicted_words = [] - for logs in p_ctc: - text = decoder.decode(logs.detach().cpu().numpy()) - predicted_words.append(text.split(" ")) - else : - predicted_words = [ - "".join(self.tokenizer.decode_ndim(utt_seq)).split(" ") - for utt_seq in predicted_tokens - ] - - target_words = [wrd.split(" ") for wrd in batch.wrd] - self.wer_metric.append(ids, predicted_words, target_words) - self.cer_metric.append(ids, predicted_words, target_words) - - return loss - - def fit_batch(self, batch): - """Train the parameters given a single batch in input""" - should_step = self.step % self.grad_accumulation_factor == 0 - # Managing automatic mixed precision - # TOFIX: CTC fine-tuning currently is unstable - # This is certainly due to CTC being done in fp16 instead of fp32 - if self.auto_mix_prec: - with torch.cuda.amp.autocast(): - with self.no_sync(): - outputs = self.compute_forward(batch, sb.Stage.TRAIN) - loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN) - with self.no_sync(not should_step): - self.scaler.scale( - loss / self.grad_accumulation_factor - ).backward() - if should_step: - - - self.scaler.unscale_(self.model_optimizer) - if self.check_gradients(loss): - self.scaler.step(self.model_optimizer) - self.scaler.update() - self.zero_grad() - self.optimizer_step += 1 - else: - # This is mandatory because HF models have a weird behavior with DDP - # on the forward pass - with self.no_sync(): - outputs = self.compute_forward(batch, sb.Stage.TRAIN) - - loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN) - - with self.no_sync(not should_step): - (loss / self.grad_accumulation_factor).backward() - if should_step: - if self.check_gradients(loss): - self.model_optimizer.step() - self.zero_grad() - self.optimizer_step += 1 - - 
self.on_fit_batch_end(batch, outputs, loss, should_step) - return loss.detach().cpu() - - def evaluate_batch(self, batch, stage): - """Computations needed for validation/test batches""" - predictions = self.compute_forward(batch, stage=stage) - with torch.no_grad(): - loss = self.compute_objectives(predictions, batch, stage=stage) - return loss.detach() - - def on_stage_start(self, stage, epoch): - """Gets called at the beginning of each epoch""" - if stage != sb.Stage.TRAIN: - self.cer_metric = self.hparams.cer_computer() - self.wer_metric = self.hparams.error_rate_computer() - - def on_stage_end(self, stage, stage_loss, epoch): - """Gets called at the end of an epoch.""" - # Compute/store important stats - stage_stats = {"loss": stage_loss} - if stage == sb.Stage.TRAIN: - self.train_stats = stage_stats - else: - stage_stats["CER"] = self.cer_metric.summarize("error_rate") - stage_stats["WER"] = self.wer_metric.summarize("error_rate") - - # Perform end-of-iteration things, like annealing, logging, etc. - if stage == sb.Stage.VALID: - old_lr_model, new_lr_model = self.hparams.lr_annealing_model( - stage_stats["loss"] - ) - sb.nnet.schedulers.update_learning_rate( - self.model_optimizer, new_lr_model - ) - self.hparams.train_logger.log_stats( - stats_meta={ - "epoch": epoch, - "lr_model": old_lr_model, - }, - train_stats=self.train_stats, - valid_stats=stage_stats, - ) - self.checkpointer.save_and_keep_only( - meta={"WER": stage_stats["WER"]}, min_keys=["WER"], - ) - elif stage == sb.Stage.TEST: - self.hparams.train_logger.log_stats( - stats_meta={"Epoch loaded": self.hparams.epoch_counter.current}, - test_stats=stage_stats, - ) - with open(self.hparams.wer_file, "w") as w: - self.wer_metric.write_stats(w) - - def init_optimizers(self): - - self.model_optimizer = self.hparams.model_opt_class( - self.hparams.model.parameters() - ) - - if self.checkpointer is not None: - self.checkpointer.add_recoverable("modelopt", self.model_optimizer) - - def zero_grad(self, set_to_none=False): - - self.model_optimizer.zero_grad(set_to_none) - - - - -hparams_file, run_opts, overrides = sb.parse_arguments(["cs.yaml"]) - -# If distributed_launch=True then -# create ddp_group with the right communication protocol -sb.utils.distributed.ddp_init_group(run_opts) - -with open(hparams_file) as fin: - hparams = load_hyperpyyaml(fin, overrides) - -# Create experiment directory -sb.create_experiment_directory( - experiment_directory=hparams["output_folder"], - hyperparams_to_save=hparams_file, - overrides=overrides, -) -def read_labels_file(labels_file): - with open(labels_file, "r",encoding="utf-8") as lf: - lines = lf.read().splitlines() - division = "===" - numbers = {} - for line in lines : - if division in line : - break - string, number = line.split("=>") - number = int(number) - string = string[1:-2] - numbers[number] = string - return [numbers[x] for x in range(len(numbers))] - -label_encoder = sb.dataio.encoder.CTCTextEncoder() - -lab_enc_file = os.path.join(hparams["save_folder"], "label_encoder.txt") -special_labels = { - "blank_label": hparams["blank_index"], - "unk_label": hparams["unk_index"] -} -label_encoder.load_or_create( - path=lab_enc_file, - from_didatasets=[[]], - output_key="char_list", - special_labels=special_labels, - sequence_input=True, -) - - -labels = read_labels_file(os.path.join(hparams["save_folder"], "label_encoder.txt")) -labels = [""] + labels[1:-1] + ["1"] -if hparams["language_modelling"]: - decoder = build_ctcdecoder( - labels, - kenlm_model_path=hparams["ngram_lm_path"], # 
either .arpa or .bin file - alpha=0.5, # tuned on a val set - beta=1, # tuned on a val set - ) - -description = """This is a speechbrain-based Automatic Speech Recognition (ASR) model for Tunisian arabic. It outputs code-switched Tunisian transcriptions written in Arabic and Latin characters. It handles Tunisian Arabic, English and French outputs. -Code-switching is notoriously hard to handle for speech recognition models, the main errors you man encounter using this model are spelling/language identification errors due to code-switching. We may work on improving this in further models. However if you do not need code-switching in your transcripts, you would better use the non-code switched model, available in another space from the same author. (https://huggingface.co/spaces/SalahZa/Tunisian-Speech-Recognition) - -Run is done on CPU to keep it free in this space. This leads to quite long running times on long sequences. If for your project or research, you want to transcribe long sequences, you would better use the model directly from its page, some instructions for inference on a test set have been provided there. (https://huggingface.co/SalahZa/Code_Switched_Tunisian_Speech_Recognition). If you need help, feel free to drop an email here : zaiemsalah@gmail.com - -Authors : -* [Salah Zaiem](https://fr.linkedin.com/in/salah-zaiem) -* [Ahmed Amine Ben Aballah](https://www.linkedin.com/in/aabenz/) -* [Ata Kaboudi](https://www.linkedin.com/in/ata-kaboudi-63365b1a8) -* [Amir Kanoun](https://tn.linkedin.com/in/ahmed-amir-kanoun) - -More in-depth details and insights are available in a released preprint. Please find the paper [here](https://arxiv.org/abs/2309.11327). -If you use or refer to this model, please cite : - -``` -@misc{abdallah2023leveraging, - title={Leveraging Data Collection and Unsupervised Learning for Code-switched Tunisian Arabic Automatic Speech Recognition}, - author={Ahmed Amine Ben Abdallah and Ata Kabboudi and Amir Kanoun and Salah Zaiem}, - year={2023}, - eprint={2309.11327}, - archivePrefix={arXiv}, - primaryClass={eess.AS} -} - - -""" -title = "Code-Switched Tunisian Speech Recognition" - - -run_opts["device"]="cpu" - -mixer = Mixer( - modules=hparams["modules"], - hparams=hparams, - run_opts=run_opts, - checkpointer=hparams["checkpointer"], -) -mixer.tokenizer = label_encoder -mixer.device = "cpu" -mixer.checkpointer.recover_if_possible(device="cpu") -mixer.modules.eval() - - - - - - - - -device = "cpu" -mixer.device= "cpu" -mixer.modules.to("cpu") - -from enum import Enum, auto -class Stage(Enum): - TRAIN = auto() - VALID = auto() - TEST = auto() - -asr_brain.on_evaluate_start() -asr_brain.modules.eval() - - -import gradio as gr - -def treat_wav_file(file_mic,file_upload ,asr=mixer, device="cpu") : - if (file_mic is not None) and (file_upload is not None): - warn_output = "WARNING: You've uploaded an audio file and used the microphone. 
The recorded file from the microphone will be used and the uploaded audio will be discarded.\n" - wav = file_mic - elif (file_mic is None) and (file_upload is None): - return "ERROR: You have to either use the microphone or upload an audio file" - elif file_mic is not None: - wav = file_mic - else: - wav = file_upload - info = torchaudio.info(wav) - sr = info.sample_rate - sig = sb.dataio.dataio.read_audio(wav) - if len(sig.shape)>1 : - sig = torch.mean(sig, dim=1) - sig = torch.unsqueeze(sig, 0) - tensor_wav = sig.to(device) - resampled = torchaudio.functional.resample( tensor_wav, sr, 16000) - sentence = asr.treat_wav(resampled) - return sentence - -gr.Interface( - fn=treat_wav_file, - title = title, - description = description, - inputs=[gr.Audio(source="microphone", type='filepath', label = "record", optional = True), - gr.Audio(source="upload", type='filepath', label="filein", optional=True)] - ,outputs="text").launch() - diff --git a/spaces/SerdarHelli/Brain-MR-Image-Generation-with-StyleGAN/README.md b/spaces/SerdarHelli/Brain-MR-Image-Generation-with-StyleGAN/README.md deleted file mode 100644 index e4f11593371f746645bd4fa1288e15dab513636c..0000000000000000000000000000000000000000 --- a/spaces/SerdarHelli/Brain-MR-Image-Generation-with-StyleGAN/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Brain MR Image Generation GAN -emoji: 👀 -colorFrom: indigo -colorTo: yellow -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Smiling333/speechbrain-soundchoice-g2p/README.md b/spaces/Smiling333/speechbrain-soundchoice-g2p/README.md deleted file mode 100644 index d27ac55a92d402b6a6f9aa34f162c635b7042680..0000000000000000000000000000000000000000 --- a/spaces/Smiling333/speechbrain-soundchoice-g2p/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Speechbrain Soundchoice G2p -emoji: 😻 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Sojab/voice-recognition/app.py b/spaces/Sojab/voice-recognition/app.py deleted file mode 100644 index fb91365640646d3a18063887cad896204b2f6922..0000000000000000000000000000000000000000 --- a/spaces/Sojab/voice-recognition/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/codenamewei/speech-to-text").launch() \ No newline at end of file diff --git a/spaces/Sourabh2/English2Manipuri/app.py b/spaces/Sourabh2/English2Manipuri/app.py deleted file mode 100644 index d2defed1fb6c638fcbf68d5b6bc430e3ac3b8a16..0000000000000000000000000000000000000000 --- a/spaces/Sourabh2/English2Manipuri/app.py +++ /dev/null @@ -1,79 +0,0 @@ -import json -import tensorflow as tf -from tensorflow.keras.layers import TextVectorization -import gradio as gr -import re -import string -import numpy as np - -with open("./output.json", "r") as json_file: - loaded_data = json.load(json_file) - -text_pair = [] -for item in loaded_data: - english_sentence = item["english"] - manipuri_translation = item["manipuri"] - text_pair.append((english_sentence, manipuri_translation)) - -strip_chars = string.punctuation + "|" -strip_chars = strip_chars.replace("[", "") -strip_chars = strip_chars.replace("]", "") - -vocab_size = 15000 -sequence_length = 20 -batch_size = 64 - - -def custom_standardization(input_string): - lowercase = tf.strings.lower(input_string) - 
return tf.strings.regex_replace(lowercase, "[%s]" % re.escape(strip_chars), "") - - -eng_vectorization = TextVectorization( - max_tokens=vocab_size, output_mode="int", output_sequence_length=sequence_length, -) -spa_vectorization = TextVectorization( - max_tokens=vocab_size, - output_mode="int", - output_sequence_length=sequence_length + 1, - standardize=custom_standardization, -) -train_pairs = text_pair -train_eng_texts = [pair[0] for pair in train_pairs] -train_spa_texts = [pair[1] for pair in train_pairs] -eng_vectorization.adapt(train_eng_texts) -spa_vectorization.adapt(train_spa_texts) - -reloaded = tf.saved_model.load('translator') - -spa_vocab = spa_vectorization.get_vocabulary() -spa_index_lookup = dict(zip(range(len(spa_vocab)), spa_vocab)) -max_decoded_sentence_length = 20 - - -def decode_sequence(input_sentence): - tokenized_input_sentence = eng_vectorization([input_sentence]) - decoded_sentence = "[start]" - for i in range(max_decoded_sentence_length): - tokenized_target_sentence = spa_vectorization([decoded_sentence])[:, :-1] - predictions = reloaded([tokenized_input_sentence, tokenized_target_sentence]) - - sampled_token_index = np.argmax(predictions[0, i, :]) - sampled_token = spa_index_lookup[sampled_token_index] - decoded_sentence += " " + sampled_token - - if sampled_token == "[end]": - break - return decoded_sentence - -def total(sen): - translatedee = decode_sequence(sen) - updated_text = translatedee.replace("[start]", "").strip() - return updated_text - - -iface = gr.Interface(fn=total, - inputs=gr.inputs.Textbox(lines=2, placeholder='Text to translate From English to Manipuri'), - outputs='text') - -iface.launch() diff --git a/spaces/Stearns/Soar/pysoarlib/SoarWME.py b/spaces/Stearns/Soar/pysoarlib/SoarWME.py deleted file mode 100644 index 245f0b0deb0d1a427c4b988fd679848835ca90d9..0000000000000000000000000000000000000000 --- a/spaces/Stearns/Soar/pysoarlib/SoarWME.py +++ /dev/null @@ -1,87 +0,0 @@ -""" -This module defines a utility class called SoarWME -which wraps SML code for adding/removing Soar Working Memory Elements (WME) -""" - -from .WMInterface import WMInterface - -class SoarWME(WMInterface): - """ Wrapper for a single Soar Working Memory Element with a primitive value - - It can wrap an int, float, or string type - - An instance is not directly tied to an SML wme, - the user decides how and when soar's working memory is modified - - So you can change the value anytime (asynchronously to soar) - And then modify working memory via add_to_wm, update_wm, and remove_from_wm - during an agent callback (like BEFORE_INPUT_PHASE) - """ - - def __init__(self, att, val): - """ Initializes the wme, but does not add to working memory yet - - :param att: The wme's attribute - :type att: str - - :param val: The wme's value, any of the 3 main primitive types - :type val: int, float, or str - """ - WMInterface.__init__(self) - self.att = att - self.val = val - self.wme = None - - self.changed = False - - if type(val) == int: - self.create_wme = self._create_int_wme - elif type(val) == float: - self.create_wme = self._create_float_wme - else: - self.create_wme = self._create_string_wme - - def get_attr(self): - """ Returns the wme's attribute """ - return self.att - - def get_value(self): - """ Returns the wme's value """ - return self.val - - def set_value(self, newval): - """ Set's the wme's value, but also need to call update_wm to change working memory """ - if self.val != newval: - self.val = newval - self.changed = True - - def __str__(self): - return str(self.val) 
- - - ### Internal Methods - - def _create_int_wme(self, id, att, val): - return id.CreateIntWME(att, val) - - def _create_float_wme(self, id, att, val): - return id.CreateFloatWME(att, val) - - def _create_string_wme(self, id, att, val): - return id.CreateStringWME(att, str(val)) - - def _add_to_wm_impl(self, parent_id): - """ Creates a wme in soar's working memory rooted at the given parent_id """ - self.wme = self.create_wme(parent_id, self.att, self.val) - - def _update_wm_impl(self): - """ If the value has changed, will update soar's working memory with the new value """ - if self.changed: - self.wme.Update(self.val) - self.changed = False - - def _remove_from_wm_impl(self): - """ Will remove the wme from soar's working memory """ - self.wme.DestroyWME() - self.wme = None - diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/adapter/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/adapter/__init__.py deleted file mode 100644 index fa55b259756c41f6f01f9a91e57183ff14ea623f..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/adapter/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See LICENSE in the project root -# for license information. - -from __future__ import annotations -import typing - -if typing.TYPE_CHECKING: - __all__: list[str] - -__all__ = [] - -access_token = None -"""Access token used to authenticate with this adapter.""" diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/model_zoo/__init__.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/model_zoo/__init__.py deleted file mode 100644 index 6204208198d813728cf6419e8eef4a733f20c18f..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/model_zoo/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -""" -Model Zoo API for Detectron2: a collection of functions to create common model architectures -listed in `MODEL_ZOO.md `_, -and optionally load their pre-trained weights. -""" - -from .model_zoo import get, get_config_file, get_checkpoint_url, get_config - -__all__ = ["get_checkpoint_url", "get", "get_config_file", "get_config"] diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/runner/hooks/__init__.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/runner/hooks/__init__.py deleted file mode 100644 index 915af28cefab14a14c1188ed861161080fd138a3..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/runner/hooks/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .checkpoint import CheckpointHook -from .closure import ClosureHook -from .ema import EMAHook -from .evaluation import DistEvalHook, EvalHook -from .hook import HOOKS, Hook -from .iter_timer import IterTimerHook -from .logger import (DvcliveLoggerHook, LoggerHook, MlflowLoggerHook, - NeptuneLoggerHook, PaviLoggerHook, TensorboardLoggerHook, - TextLoggerHook, WandbLoggerHook) -from .lr_updater import LrUpdaterHook -from .memory import EmptyCacheHook -from .momentum_updater import MomentumUpdaterHook -from .optimizer import (Fp16OptimizerHook, GradientCumulativeFp16OptimizerHook, - GradientCumulativeOptimizerHook, OptimizerHook) -from .profiler import ProfilerHook -from .sampler_seed import DistSamplerSeedHook -from .sync_buffer import SyncBuffersHook - -__all__ = [ - 'HOOKS', 'Hook', 'CheckpointHook', 'ClosureHook', 'LrUpdaterHook', - 'OptimizerHook', 'Fp16OptimizerHook', 'IterTimerHook', - 'DistSamplerSeedHook', 'EmptyCacheHook', 'LoggerHook', 'MlflowLoggerHook', - 'PaviLoggerHook', 'TextLoggerHook', 'TensorboardLoggerHook', - 'NeptuneLoggerHook', 'WandbLoggerHook', 'DvcliveLoggerHook', - 'MomentumUpdaterHook', 'SyncBuffersHook', 'EMAHook', 'EvalHook', - 'DistEvalHook', 'ProfilerHook', 'GradientCumulativeOptimizerHook', - 'GradientCumulativeFp16OptimizerHook' -] diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/lraspp_head.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/lraspp_head.py deleted file mode 100644 index 69bf320934d787aaa11984a0c4effe9ad8015b22..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/lraspp_head.py +++ /dev/null @@ -1,90 +0,0 @@ -import torch -import torch.nn as nn -from annotator.uniformer.mmcv import is_tuple_of -from annotator.uniformer.mmcv.cnn import ConvModule - -from annotator.uniformer.mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -@HEADS.register_module() -class LRASPPHead(BaseDecodeHead): - """Lite R-ASPP (LRASPP) head is proposed in Searching for MobileNetV3. - - This head is the improved implementation of `Searching for MobileNetV3 - `_. - - Args: - branch_channels (tuple[int]): The number of output channels in every - each branch. Default: (32, 64). - """ - - def __init__(self, branch_channels=(32, 64), **kwargs): - super(LRASPPHead, self).__init__(**kwargs) - if self.input_transform != 'multiple_select': - raise ValueError('in Lite R-ASPP (LRASPP) head, input_transform ' - f'must be \'multiple_select\'. 
But received ' - f'\'{self.input_transform}\'') - assert is_tuple_of(branch_channels, int) - assert len(branch_channels) == len(self.in_channels) - 1 - self.branch_channels = branch_channels - - self.convs = nn.Sequential() - self.conv_ups = nn.Sequential() - for i in range(len(branch_channels)): - self.convs.add_module( - f'conv{i}', - nn.Conv2d( - self.in_channels[i], branch_channels[i], 1, bias=False)) - self.conv_ups.add_module( - f'conv_up{i}', - ConvModule( - self.channels + branch_channels[i], - self.channels, - 1, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - bias=False)) - - self.conv_up_input = nn.Conv2d(self.channels, self.channels, 1) - - self.aspp_conv = ConvModule( - self.in_channels[-1], - self.channels, - 1, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - bias=False) - self.image_pool = nn.Sequential( - nn.AvgPool2d(kernel_size=49, stride=(16, 20)), - ConvModule( - self.in_channels[2], - self.channels, - 1, - act_cfg=dict(type='Sigmoid'), - bias=False)) - - def forward(self, inputs): - """Forward function.""" - inputs = self._transform_inputs(inputs) - - x = inputs[-1] - - x = self.aspp_conv(x) * resize( - self.image_pool(x), - size=x.size()[2:], - mode='bilinear', - align_corners=self.align_corners) - x = self.conv_up_input(x) - - for i in range(len(self.branch_channels) - 1, -1, -1): - x = resize( - x, - size=inputs[i].size()[2:], - mode='bilinear', - align_corners=self.align_corners) - x = torch.cat([x, self.convs[i](inputs[i])], 1) - x = self.conv_ups[i](x) - - return self.cls_seg(x) diff --git a/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/models/depth_model.py b/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/models/depth_model.py deleted file mode 100644 index fc421c108ea3928c9add62b4c190500d9bd4eda1..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/models/depth_model.py +++ /dev/null @@ -1,152 +0,0 @@ -# MIT License - -# Copyright (c) 2022 Intelligent Systems Lab Org - -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: - -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. - -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. 
- -# File author: Shariq Farooq Bhat - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from torchvision import transforms -import PIL.Image -from PIL import Image -from typing import Union - - -class DepthModel(nn.Module): - def __init__(self): - super().__init__() - self.device = 'cpu' - - def to(self, device) -> nn.Module: - self.device = device - return super().to(device) - - def forward(self, x, *args, **kwargs): - raise NotImplementedError - - def _infer(self, x: torch.Tensor): - """ - Inference interface for the model - Args: - x (torch.Tensor): input tensor of shape (b, c, h, w) - Returns: - torch.Tensor: output tensor of shape (b, 1, h, w) - """ - return self(x)['metric_depth'] - - def _infer_with_pad_aug(self, x: torch.Tensor, pad_input: bool=True, fh: float=3, fw: float=3, upsampling_mode: str='bicubic', padding_mode="reflect", **kwargs) -> torch.Tensor: - """ - Inference interface for the model with padding augmentation - Padding augmentation fixes the boundary artifacts in the output depth map. - Boundary artifacts are sometimes caused by the fact that the model is trained on NYU raw dataset which has a black or white border around the image. - This augmentation pads the input image and crops the prediction back to the original size / view. - - Note: This augmentation is not required for the models trained with 'avoid_boundary'=True. - Args: - x (torch.Tensor): input tensor of shape (b, c, h, w) - pad_input (bool, optional): whether to pad the input or not. Defaults to True. - fh (float, optional): height padding factor. The padding is calculated as sqrt(h/2) * fh. Defaults to 3. - fw (float, optional): width padding factor. The padding is calculated as sqrt(w/2) * fw. Defaults to 3. - upsampling_mode (str, optional): upsampling mode. Defaults to 'bicubic'. - padding_mode (str, optional): padding mode. Defaults to "reflect". - Returns: - torch.Tensor: output tensor of shape (b, 1, h, w) - """ - # assert x is nchw and c = 3 - assert x.dim() == 4, "x must be 4 dimensional, got {}".format(x.dim()) - assert x.shape[1] == 3, "x must have 3 channels, got {}".format(x.shape[1]) - - if pad_input: - assert fh > 0 or fw > 0, "atlease one of fh and fw must be greater than 0" - pad_h = int(np.sqrt(x.shape[2]/2) * fh) - pad_w = int(np.sqrt(x.shape[3]/2) * fw) - padding = [pad_w, pad_w] - if pad_h > 0: - padding += [pad_h, pad_h] - - x = F.pad(x, padding, mode=padding_mode, **kwargs) - out = self._infer(x) - if out.shape[-2:] != x.shape[-2:]: - out = F.interpolate(out, size=(x.shape[2], x.shape[3]), mode=upsampling_mode, align_corners=False) - if pad_input: - # crop to the original size, handling the case where pad_h and pad_w is 0 - if pad_h > 0: - out = out[:, :, pad_h:-pad_h,:] - if pad_w > 0: - out = out[:, :, :, pad_w:-pad_w] - return out - - def infer_with_flip_aug(self, x, pad_input: bool=True, **kwargs) -> torch.Tensor: - """ - Inference interface for the model with horizontal flip augmentation - Horizontal flip augmentation improves the accuracy of the model by averaging the output of the model with and without horizontal flip. - Args: - x (torch.Tensor): input tensor of shape (b, c, h, w) - pad_input (bool, optional): whether to use padding augmentation. Defaults to True. 
- Returns: - torch.Tensor: output tensor of shape (b, 1, h, w) - """ - # infer with horizontal flip and average - out = self._infer_with_pad_aug(x, pad_input=pad_input, **kwargs) - out_flip = self._infer_with_pad_aug(torch.flip(x, dims=[3]), pad_input=pad_input, **kwargs) - out = (out + torch.flip(out_flip, dims=[3])) / 2 - return out - - def infer(self, x, pad_input: bool=True, with_flip_aug: bool=True, **kwargs) -> torch.Tensor: - """ - Inference interface for the model - Args: - x (torch.Tensor): input tensor of shape (b, c, h, w) - pad_input (bool, optional): whether to use padding augmentation. Defaults to True. - with_flip_aug (bool, optional): whether to use horizontal flip augmentation. Defaults to True. - Returns: - torch.Tensor: output tensor of shape (b, 1, h, w) - """ - if with_flip_aug: - return self.infer_with_flip_aug(x, pad_input=pad_input, **kwargs) - else: - return self._infer_with_pad_aug(x, pad_input=pad_input, **kwargs) - - @torch.no_grad() - def infer_pil(self, pil_img, pad_input: bool=True, with_flip_aug: bool=True, output_type: str="numpy", **kwargs) -> Union[np.ndarray, PIL.Image.Image, torch.Tensor]: - """ - Inference interface for the model for PIL image - Args: - pil_img (PIL.Image.Image): input PIL image - pad_input (bool, optional): whether to use padding augmentation. Defaults to True. - with_flip_aug (bool, optional): whether to use horizontal flip augmentation. Defaults to True. - output_type (str, optional): output type. Supported values are 'numpy', 'pil' and 'tensor'. Defaults to "numpy". - """ - x = transforms.ToTensor()(pil_img).unsqueeze(0).to(self.device) - out_tensor = self.infer(x, pad_input=pad_input, with_flip_aug=with_flip_aug, **kwargs) - if output_type == "numpy": - return out_tensor.squeeze().cpu().numpy() - elif output_type == "pil": - # uint16 is required for depth pil image - out_16bit_numpy = (out_tensor.squeeze().cpu().numpy()*256).astype(np.uint16) - return Image.fromarray(out_16bit_numpy) - elif output_type == "tensor": - return out_tensor.squeeze().cpu() - else: - raise ValueError(f"output_type {output_type} not supported. 
Supported values are 'numpy', 'pil' and 'tensor'") - \ No newline at end of file diff --git a/spaces/TH5314/newbing/src/lib/hooks/use-copy-to-clipboard.tsx b/spaces/TH5314/newbing/src/lib/hooks/use-copy-to-clipboard.tsx deleted file mode 100644 index 62f7156dca246c46b213151af003a3a177977ccf..0000000000000000000000000000000000000000 --- a/spaces/TH5314/newbing/src/lib/hooks/use-copy-to-clipboard.tsx +++ /dev/null @@ -1,33 +0,0 @@ -'use client' - -import * as React from 'react' - -export interface useCopyToClipboardProps { - timeout?: number -} - -export function useCopyToClipboard({ - timeout = 2000 -}: useCopyToClipboardProps) { - const [isCopied, setIsCopied] = React.useState(false) - - const copyToClipboard = (value: string) => { - if (typeof window === 'undefined' || !navigator.clipboard?.writeText) { - return - } - - if (!value) { - return - } - - navigator.clipboard.writeText(value).then(() => { - setIsCopied(true) - - setTimeout(() => { - setIsCopied(false) - }, timeout) - }) - } - - return { isCopied, copyToClipboard } -} diff --git a/spaces/TH5314/newbing/src/lib/isomorphic/node.ts b/spaces/TH5314/newbing/src/lib/isomorphic/node.ts deleted file mode 100644 index da213ad6a86181979f098309c374da02835db5a0..0000000000000000000000000000000000000000 --- a/spaces/TH5314/newbing/src/lib/isomorphic/node.ts +++ /dev/null @@ -1,26 +0,0 @@ -import Debug from 'debug' - -const { fetch, setGlobalDispatcher, ProxyAgent } = require('undici') -const { HttpsProxyAgent } = require('https-proxy-agent') -const ws = require('ws') - -const debug = Debug('bingo') - -const httpProxy = process.env.http_proxy || process.env.HTTP_PROXY || process.env.https_proxy || process.env.HTTPS_PROXY; -let WebSocket = ws.WebSocket - -if (httpProxy) { - setGlobalDispatcher(new ProxyAgent(httpProxy)) - const agent = new HttpsProxyAgent(httpProxy) - // @ts-ignore - WebSocket = class extends ws.WebSocket { - constructor(address: string | URL, options: typeof ws.WebSocket) { - super(address, { - ...options, - agent, - }) - } - } -} - -export default { fetch, WebSocket, debug } diff --git a/spaces/TRI-ML/risk_biased_prediction/app.py b/spaces/TRI-ML/risk_biased_prediction/app.py deleted file mode 100644 index 8ef951ee1c0ed9a66d8d9d801958fbe3a7d22476..0000000000000000000000000000000000000000 --- a/spaces/TRI-ML/risk_biased_prediction/app.py +++ /dev/null @@ -1,3 +0,0 @@ -from scripts.scripts_utils.plotly_interface import main - -main() \ No newline at end of file diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/uninstall.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/uninstall.py deleted file mode 100644 index f198fc313ff57929d95d36216e3e6ecec3877673..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/uninstall.py +++ /dev/null @@ -1,113 +0,0 @@ -import logging -from optparse import Values -from typing import List - -from pip._vendor.packaging.utils import canonicalize_name - -from pip._internal.cli import cmdoptions -from pip._internal.cli.base_command import Command -from pip._internal.cli.req_command import SessionCommandMixin, warn_if_run_as_root -from pip._internal.cli.status_codes import SUCCESS -from pip._internal.exceptions import InstallationError -from pip._internal.req import parse_requirements -from pip._internal.req.constructors import ( - install_req_from_line, - install_req_from_parsed_requirement, -) -from 
pip._internal.utils.misc import ( - check_externally_managed, - protect_pip_from_modification_on_windows, -) - -logger = logging.getLogger(__name__) - - -class UninstallCommand(Command, SessionCommandMixin): - """ - Uninstall packages. - - pip is able to uninstall most installed packages. Known exceptions are: - - - Pure distutils packages installed with ``python setup.py install``, which - leave behind no metadata to determine what files were installed. - - Script wrappers installed by ``python setup.py develop``. - """ - - usage = """ - %prog [options] ... - %prog [options] -r ...""" - - def add_options(self) -> None: - self.cmd_opts.add_option( - "-r", - "--requirement", - dest="requirements", - action="append", - default=[], - metavar="file", - help=( - "Uninstall all the packages listed in the given requirements " - "file. This option can be used multiple times." - ), - ) - self.cmd_opts.add_option( - "-y", - "--yes", - dest="yes", - action="store_true", - help="Don't ask for confirmation of uninstall deletions.", - ) - self.cmd_opts.add_option(cmdoptions.root_user_action()) - self.cmd_opts.add_option(cmdoptions.override_externally_managed()) - self.parser.insert_option_group(0, self.cmd_opts) - - def run(self, options: Values, args: List[str]) -> int: - session = self.get_default_session(options) - - reqs_to_uninstall = {} - for name in args: - req = install_req_from_line( - name, - isolated=options.isolated_mode, - ) - if req.name: - reqs_to_uninstall[canonicalize_name(req.name)] = req - else: - logger.warning( - "Invalid requirement: %r ignored -" - " the uninstall command expects named" - " requirements.", - name, - ) - for filename in options.requirements: - for parsed_req in parse_requirements( - filename, options=options, session=session - ): - req = install_req_from_parsed_requirement( - parsed_req, isolated=options.isolated_mode - ) - if req.name: - reqs_to_uninstall[canonicalize_name(req.name)] = req - if not reqs_to_uninstall: - raise InstallationError( - f"You must give at least one requirement to {self.name} (see " - f'"pip help {self.name}")' - ) - - if not options.override_externally_managed: - check_externally_managed() - - protect_pip_from_modification_on_windows( - modifying_pip="pip" in reqs_to_uninstall - ) - - for req in reqs_to_uninstall.values(): - uninstall_pathset = req.uninstall( - auto_confirm=options.yes, - verbose=self.verbosity > 0, - ) - if uninstall_pathset: - uninstall_pathset.commit() - if options.root_user_action == "warn": - warn_if_run_as_root() - return SUCCESS diff --git a/spaces/TencentARC/VLog/models/__init__.py b/spaces/TencentARC/VLog/models/__init__.py deleted file mode 100644 index 12e8dcd794621abc987f0aaf7cb549059baaf3f9..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .kts_src import * -from .clip_model import * -from .grit_model import * diff --git a/spaces/TheStinger/Ilaria_RVC/lib/infer_pack/modules.py b/spaces/TheStinger/Ilaria_RVC/lib/infer_pack/modules.py deleted file mode 100644 index c83289df7c79a4810dacd15c050148544ba0b6a9..0000000000000000000000000000000000000000 --- a/spaces/TheStinger/Ilaria_RVC/lib/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from lib.infer_pack import commons -from 
lib.infer_pack.commons import init_weights, get_padding -from lib.infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = 
nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - 
for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = 
in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/UserXTheUnknown/stablediffusion-infinity/index.html b/spaces/UserXTheUnknown/stablediffusion-infinity/index.html deleted file mode 100644 index 7f93791e6c90fe9ea92aa398dbb650cfc8af78cc..0000000000000000000000000000000000000000 --- a/spaces/UserXTheUnknown/stablediffusion-infinity/index.html +++ /dev/null @@ -1,404 +0,0 @@ - - -Stablediffusion Infinity - - - - - - - - - - - - - - - - -
    - -- numpy -- Pillow -- paths: - - ./canvas.py - - - -from pyodide import to_js, create_proxy -from PIL import Image -import io -import time -import base64 -import numpy as np -from js import ( - console, - document, - parent, - devicePixelRatio, - ImageData, - Uint8ClampedArray, - CanvasRenderingContext2D as Context2d, - requestAnimationFrame, - window, - encodeURIComponent, - w2ui, - update_eraser, - update_scale, - adjust_selection, - update_count, - enable_result_lst, - setup_shortcut, -) - - -from canvas import InfCanvas - - - -base_lst = [None] -async def draw_canvas() -> None: - width=1024 - height=600 - canvas=InfCanvas(1024,600) - update_eraser(canvas.eraser_size,min(canvas.selection_size_h,canvas.selection_size_w)) - document.querySelector("#container").style.height= f"{height}px" - document.querySelector("#container").style.width = f"{width}px" - canvas.setup_mouse() - canvas.clear_background() - canvas.draw_buffer() - canvas.draw_selection_box() - base_lst[0]=canvas - -async def draw_canvas_func(): - - width=1500 - height=600 - selection_size=256 - document.querySelector("#container").style.width = f"{width}px" - document.querySelector("#container").style.height= f"{height}px" - canvas=InfCanvas(int(width),int(height),selection_size=int(selection_size)) - canvas.setup_mouse() - canvas.clear_background() - canvas.draw_buffer() - canvas.draw_selection_box() - base_lst[0]=canvas - -async def export_func(event): - base=base_lst[0] - arr=base.export() - base.draw_buffer() - base.canvas[2].clear() - base64_str = base.numpy_to_base64(arr) - time_str = time.strftime("%Y%m%d_%H%M%S") - link = document.createElement("a") - if len(event.data)>2 and event.data[2]: - filename = event.data[2] - else: - filename = f"outpaint_{time_str}" - # link.download = f"sdinf_state_{time_str}.json" - link.download = f"{filename}.png" - # link.download = f"outpaint_{time_str}.png" - link.href = "data:image/png;base64,"+base64_str - link.click() - console.log(f"Canvas saved to {filename}.png") - -img_candidate_lst=[None,0] - -async def outpaint_func(event): - base=base_lst[0] - if len(event.data)==2: - app=parent.document.querySelector("gradio-app") - if app.shadowRoot: - app=app.shadowRoot - base64_str_raw=app.querySelector("#output textarea").value - base64_str_lst=base64_str_raw.split(",") - img_candidate_lst[0]=base64_str_lst - img_candidate_lst[1]=0 - elif event.data[2]=="next": - img_candidate_lst[1]+=1 - elif event.data[2]=="prev": - img_candidate_lst[1]-=1 - enable_result_lst() - if img_candidate_lst[0] is None: - return - lst=img_candidate_lst[0] - idx=img_candidate_lst[1] - update_count(idx%len(lst)+1,len(lst)) - arr=base.base64_to_numpy(lst[idx%len(lst)]) - base.fill_selection(arr) - base.draw_selection_box() - -async def undo_func(event): - base=base_lst[0] - img_candidate_lst[0]=None - if base.sel_dirty: - base.sel_buffer = np.zeros((base.selection_size_h, base.selection_size_w, 4), dtype=np.uint8) - base.sel_dirty = False - base.canvas[2].clear() - -async def commit_func(event): - base=base_lst[0] - img_candidate_lst[0]=None - if base.sel_dirty: - base.write_selection_to_buffer() - base.draw_buffer() - base.canvas[2].clear() - -async def transfer_func(event): - base=base_lst[0] - base.read_selection_from_buffer() - sel_buffer=base.sel_buffer - sel_buffer_str=base.numpy_to_base64(sel_buffer) - app=parent.document.querySelector("gradio-app") - if app.shadowRoot: - app=app.shadowRoot - app.querySelector("#input textarea").value=sel_buffer_str - app.querySelector("#proceed").click() - -async 
def upload_func(event): - base=base_lst[0] - # base64_str=event.data[1] - base64_str=document.querySelector("#upload_content").value - base64_str=base64_str.split(",")[-1] - # base64_str=parent.document.querySelector("gradio-app").shadowRoot.querySelector("#upload textarea").value - arr=base.base64_to_numpy(base64_str) - h,w,c=base.buffer.shape - base.sync_to_buffer() - base.buffer_dirty=True - mask=arr[:,:,3:4].repeat(4,axis=2) - base.buffer[mask>0]=0 - # in case mismatch - base.buffer[0:h,0:w,:]+=arr - #base.buffer[yo:yo+h,xo:xo+w,0:3]=arr[:,:,0:3] - #base.buffer[yo:yo+h,xo:xo+w,-1]=arr[:,:,-1] - base.draw_buffer() - -async def setup_shortcut_func(event): - setup_shortcut(event.data[1]) - - -document.querySelector("#export").addEventListener("click",create_proxy(export_func)) -document.querySelector("#undo").addEventListener("click",create_proxy(undo_func)) -document.querySelector("#commit").addEventListener("click",create_proxy(commit_func)) -document.querySelector("#outpaint").addEventListener("click",create_proxy(outpaint_func)) -document.querySelector("#upload").addEventListener("click",create_proxy(upload_func)) - -document.querySelector("#transfer").addEventListener("click",create_proxy(transfer_func)) -document.querySelector("#draw").addEventListener("click",create_proxy(draw_canvas_func)) - -async def setup_func(): - document.querySelector("#setup").value="1" - -async def reset_func(event): - base=base_lst[0] - base.reset() - -async def load_func(event): - base=base_lst[0] - base.load(event.data[1]) - -async def save_func(event): - base=base_lst[0] - json_str=base.save() - time_str = time.strftime("%Y%m%d_%H%M%S") - link = document.createElement("a") - if len(event.data)>2 and event.data[2]: - filename = str(event.data[2]).strip() - else: - filename = f"outpaint_{time_str}" - # link.download = f"sdinf_state_{time_str}.json" - link.download = f"{filename}.sdinf" - link.href = "data:text/json;charset=utf-8,"+encodeURIComponent(json_str) - link.click() - -async def prev_result_func(event): - base=base_lst[0] - base.reset() - -async def next_result_func(event): - base=base_lst[0] - base.reset() - -async def zoom_in_func(event): - base=base_lst[0] - scale=base.scale - if scale>=0.2: - scale-=0.1 - if len(event.data)>2: - base.update_scale(scale,int(event.data[2]),int(event.data[3])) - else: - base.update_scale(scale) - scale=base.scale - update_scale(f"{base.width}x{base.height} ({round(100/scale)}%)") - -async def zoom_out_func(event): - base=base_lst[0] - scale=base.scale - if scale<10: - scale+=0.1 - console.log(len(event.data)) - if len(event.data)>2: - base.update_scale(scale,int(event.data[2]),int(event.data[3])) - else: - base.update_scale(scale) - scale=base.scale - update_scale(f"{base.width}x{base.height} ({round(100/scale)}%)") - -async def sync_func(event): - base=base_lst[0] - base.sync_to_buffer() - base.canvas[2].clear() - -async def eraser_size_func(event): - base=base_lst[0] - eraser_size=min(int(event.data[1]),min(base.selection_size_h,base.selection_size_w)) - eraser_size=max(8,eraser_size) - base.eraser_size=eraser_size - -async def resize_selection_func(event): - base=base_lst[0] - cursor=base.cursor - if len(event.data)>3: - console.log(event.data) - base.cursor[0]=int(event.data[1]) - base.cursor[1]=int(event.data[2]) - base.selection_size_w=int(event.data[3])//8*8 - base.selection_size_h=int(event.data[4])//8*8 - base.refine_selection() - base.draw_selection_box() - elif len(event.data)>2: - base.draw_selection_box() - else: - base.canvas[-1].clear() - 
adjust_selection(cursor[0],cursor[1],base.selection_size_w,base.selection_size_h) - -async def eraser_func(event): - base=base_lst[0] - if event.data[1]!="eraser": - base.canvas[-2].clear() - else: - x,y=base.mouse_pos - base.draw_eraser(x,y) - -async def resize_func(event): - base=base_lst[0] - width=int(event.data[1]) - height=int(event.data[2]) - if width>=256 and height>=256: - if max(base.selection_size_h,base.selection_size_w)>min(width,height): - base.selection_size_h=256 - base.selection_size_w=256 - base.resize(width,height) - -async def message_func(event): - if event.data[0]=="click": - if event.data[1]=="clear": - await reset_func(event) - elif event.data[1]=="save": - await save_func(event) - elif event.data[1]=="export": - await export_func(event) - elif event.data[1]=="accept": - await commit_func(event) - elif event.data[1]=="cancel": - await undo_func(event) - elif event.data[1]=="zoom_in": - await zoom_in_func(event) - elif event.data[1]=="zoom_out": - await zoom_out_func(event) - elif event.data[0]=="sync": - await sync_func(event) - elif event.data[0]=="load": - await load_func(event) - elif event.data[0]=="upload": - await upload_func(event) - elif event.data[0]=="outpaint": - await outpaint_func(event) - elif event.data[0]=="mode": - if event.data[1]!="selection": - await sync_func(event) - await eraser_func(event) - document.querySelector("#mode").value=event.data[1] - elif event.data[0]=="transfer": - await transfer_func(event) - elif event.data[0]=="setup": - await draw_canvas_func(event) - elif event.data[0]=="eraser_size": - await eraser_size_func(event) - elif event.data[0]=="resize_selection": - await resize_selection_func(event) - elif event.data[0]=="shortcut": - await setup_shortcut_func(event) - elif event.data[0]=="resize": - await resize_func(event) - -window.addEventListener("message",create_proxy(message_func)) - -import asyncio - -_ = await asyncio.gather( - setup_func(),draw_canvas_func() -) - - - - diff --git a/spaces/VIPLab/Caption-Anything/caption_anything/segmenter/__init__.py b/spaces/VIPLab/Caption-Anything/caption_anything/segmenter/__init__.py deleted file mode 100644 index aa49a5db9a2b9642e9a096546e2d59773c5e75cc..0000000000000000000000000000000000000000 --- a/spaces/VIPLab/Caption-Anything/caption_anything/segmenter/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -from .base_segmenter import BaseSegmenter -from caption_anything.utils.utils import seg_model_map -import copy - -def build_segmenter(model_name, device, args, model=None): - return BaseSegmenter(device, args.segmenter_checkpoint, model_name, reuse_feature=not args.disable_reuse_features, model=model, args=args) - -def build_segmenter_densecap(model_name, device, args, model=None): - args_for_densecap = copy.deepcopy(args) - args_for_densecap.pred_iou_thresh = 0.88 - args_for_densecap.min_mask_region_area = 400 - args_for_densecap.stability_score_thresh = 0.95 - args_for_densecap.box_nms_thresh = 0.3 - return BaseSegmenter(device, args.segmenter_checkpoint, model_name, reuse_feature=not args.disable_reuse_features, model=model, args=args) \ No newline at end of file diff --git a/spaces/VioletWLT/Lucylol_wan/README.md b/spaces/VioletWLT/Lucylol_wan/README.md deleted file mode 100644 index 2e3f57a2b5fc378252066ed1cd4d63373f2d0019..0000000000000000000000000000000000000000 --- a/spaces/VioletWLT/Lucylol_wan/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Lucylol Wan -emoji: 😻 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false ---- - 
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Warlord-K/TryOn/utils/__init__.py b/spaces/Warlord-K/TryOn/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/deoldify/visualize.py b/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/deoldify/visualize.py deleted file mode 100644 index b3fa2d42cd654960a801c763942d4da21f333c88..0000000000000000000000000000000000000000 --- a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/deoldify/visualize.py +++ /dev/null @@ -1,487 +0,0 @@ -from fastai.core import * -from fastai.vision import * -from matplotlib.axes import Axes -from .filters import IFilter, MasterFilter, ColorizerFilter -from .generators import gen_inference_deep, gen_inference_wide -from PIL import Image -import ffmpeg -import yt_dlp as youtube_dl -import gc -import requests -from io import BytesIO -import base64 -from IPython import display as ipythondisplay -from IPython.display import HTML -from IPython.display import Image as ipythonimage -import cv2 -import logging - -# adapted from https://www.pyimagesearch.com/2016/04/25/watermarking-images-with-opencv-and-python/ -def get_watermarked(pil_image: Image) -> Image: - try: - image = cv2.cvtColor(np.array(pil_image), cv2.COLOR_RGB2BGR) - (h, w) = image.shape[:2] - image = np.dstack([image, np.ones((h, w), dtype="uint8") * 255]) - pct = 0.05 - full_watermark = cv2.imread( - './resource_images/watermark.png', cv2.IMREAD_UNCHANGED - ) - (fwH, fwW) = full_watermark.shape[:2] - wH = int(pct * h) - wW = int((pct * h / fwH) * fwW) - watermark = cv2.resize(full_watermark, (wH, wW), interpolation=cv2.INTER_AREA) - overlay = np.zeros((h, w, 4), dtype="uint8") - (wH, wW) = watermark.shape[:2] - overlay[h - wH - 10 : h - 10, 10 : 10 + wW] = watermark - # blend the two images together using transparent overlays - output = image.copy() - cv2.addWeighted(overlay, 0.5, output, 1.0, 0, output) - rgb_image = cv2.cvtColor(output, cv2.COLOR_BGR2RGB) - final_image = Image.fromarray(rgb_image) - return final_image - except: - # Don't want this to crash everything, so let's just not watermark the image for now. 
- return pil_image - - -class ModelImageVisualizer: - def __init__(self, filter: IFilter, results_dir: str = None): - self.filter = filter - self.results_dir = None if results_dir is None else Path(results_dir) - self.results_dir.mkdir(parents=True, exist_ok=True) - - def _clean_mem(self): - torch.cuda.empty_cache() - # gc.collect() - - def _open_pil_image(self, path: Path) -> Image: - return PIL.Image.open(path).convert('RGB') - - def _get_image_from_url(self, url: str) -> Image: - response = requests.get(url, timeout=30, headers={'user-agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36'}) - img = PIL.Image.open(BytesIO(response.content)).convert('RGB') - return img - - def plot_transformed_image_from_url( - self, - url: str, - path: str = 'test_images/image.png', - results_dir:Path = None, - figsize: Tuple[int, int] = (20, 20), - render_factor: int = None, - - display_render_factor: bool = False, - compare: bool = False, - post_process: bool = True, - watermarked: bool = True, - ) -> Path: - img = self._get_image_from_url(url) - img.save(path) - return self.plot_transformed_image( - path=path, - results_dir=results_dir, - figsize=figsize, - render_factor=render_factor, - display_render_factor=display_render_factor, - compare=compare, - post_process = post_process, - watermarked=watermarked, - ) - - def plot_transformed_image( - self, - path: str, - results_dir:Path = None, - figsize: Tuple[int, int] = (20, 20), - render_factor: int = None, - display_render_factor: bool = False, - compare: bool = False, - post_process: bool = True, - watermarked: bool = True, - ) -> Path: - path = Path(path) - if results_dir is None: - results_dir = Path(self.results_dir) - result = self.get_transformed_image( - path, render_factor, post_process=post_process,watermarked=watermarked - ) - orig = self._open_pil_image(path) - if compare: - self._plot_comparison( - figsize, render_factor, display_render_factor, orig, result - ) - else: - self._plot_solo(figsize, render_factor, display_render_factor, result) - - orig.close() - result_path = self._save_result_image(path, result, results_dir=results_dir) - result.close() - return result_path - - def _plot_comparison( - self, - figsize: Tuple[int, int], - render_factor: int, - display_render_factor: bool, - orig: Image, - result: Image, - ): - fig, axes = plt.subplots(1, 2, figsize=figsize) - self._plot_image( - orig, - axes=axes[0], - figsize=figsize, - render_factor=render_factor, - display_render_factor=False, - ) - self._plot_image( - result, - axes=axes[1], - figsize=figsize, - render_factor=render_factor, - display_render_factor=display_render_factor, - ) - - def _plot_solo( - self, - figsize: Tuple[int, int], - render_factor: int, - display_render_factor: bool, - result: Image, - ): - fig, axes = plt.subplots(1, 1, figsize=figsize) - self._plot_image( - result, - axes=axes, - figsize=figsize, - render_factor=render_factor, - display_render_factor=display_render_factor, - ) - - def _save_result_image(self, source_path: Path, image: Image, results_dir = None) -> Path: - if results_dir is None: - results_dir = Path(self.results_dir) - result_path = results_dir / source_path.name - image.save(result_path) - return result_path - - def get_transformed_image( - self, path: Path, render_factor: int = None, post_process: bool = True, - watermarked: bool = True, - ) -> Image: - self._clean_mem() - orig_image = self._open_pil_image(path) - filtered_image = self.filter.filter( - orig_image, 
orig_image, render_factor=render_factor,post_process=post_process - ) - - if watermarked: - return get_watermarked(filtered_image) - - return filtered_image - - def _plot_image( - self, - image: Image, - render_factor: int, - axes: Axes = None, - figsize=(20, 20), - display_render_factor = False, - ): - if axes is None: - _, axes = plt.subplots(figsize=figsize) - axes.imshow(np.asarray(image) / 255) - axes.axis('off') - if render_factor is not None and display_render_factor: - plt.text( - 10, - 10, - 'render_factor: ' + str(render_factor), - color='white', - backgroundcolor='black', - ) - - def _get_num_rows_columns(self, num_images: int, max_columns: int) -> Tuple[int, int]: - columns = min(num_images, max_columns) - rows = num_images // columns - rows = rows if rows * columns == num_images else rows + 1 - return rows, columns - - -class VideoColorizer: - def __init__(self, vis: ModelImageVisualizer): - self.vis = vis - workfolder = Path('./video') - self.source_folder = workfolder / "source" - self.bwframes_root = workfolder / "bwframes" - self.audio_root = workfolder / "audio" - self.colorframes_root = workfolder / "colorframes" - self.result_folder = workfolder / "result" - - def _purge_images(self, dir): - for f in os.listdir(dir): - if re.search('.*?\.jpg', f): - os.remove(os.path.join(dir, f)) - - def _get_ffmpeg_probe(self, path:Path): - try: - probe = ffmpeg.probe(str(path)) - return probe - except ffmpeg.Error as e: - logging.error("ffmpeg error: {0}".format(e), exc_info=True) - logging.error('stdout:' + e.stdout.decode('UTF-8')) - logging.error('stderr:' + e.stderr.decode('UTF-8')) - raise e - except Exception as e: - logging.error('Failed to instantiate ffmpeg.probe. Details: {0}'.format(e), exc_info=True) - raise e - - def _get_fps(self, source_path: Path) -> str: - probe = self._get_ffmpeg_probe(source_path) - stream_data = next( - (stream for stream in probe['streams'] if stream['codec_type'] == 'video'), - None, - ) - return stream_data['avg_frame_rate'] - - def _download_video_from_url(self, source_url, source_path: Path): - if source_path.exists(): - source_path.unlink() - - ydl_opts = { - 'format': 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/mp4', - 'outtmpl': str(source_path), - 'retries': 30, - 'fragment-retries': 30 - } - with youtube_dl.YoutubeDL(ydl_opts) as ydl: - ydl.download([source_url]) - - def _extract_raw_frames(self, source_path: Path): - bwframes_folder = self.bwframes_root / (source_path.stem) - bwframe_path_template = str(bwframes_folder / '%5d.jpg') - bwframes_folder.mkdir(parents=True, exist_ok=True) - self._purge_images(bwframes_folder) - - process = ( - ffmpeg - .input(str(source_path)) - .output(str(bwframe_path_template), format='image2', vcodec='mjpeg', **{'q:v':'0'}) - .global_args('-hide_banner') - .global_args('-nostats') - .global_args('-loglevel', 'error') - ) - - try: - process.run() - except ffmpeg.Error as e: - logging.error("ffmpeg error: {0}".format(e), exc_info=True) - logging.error('stdout:' + e.stdout.decode('UTF-8')) - logging.error('stderr:' + e.stderr.decode('UTF-8')) - raise e - except Exception as e: - logging.error('Errror while extracting raw frames from source video. 
Details: {0}'.format(e), exc_info=True) - raise e - - def _colorize_raw_frames( - self, source_path: Path, render_factor: int = None, post_process: bool = True, - watermarked: bool = True, - ): - colorframes_folder = self.colorframes_root / (source_path.stem) - colorframes_folder.mkdir(parents=True, exist_ok=True) - self._purge_images(colorframes_folder) - bwframes_folder = self.bwframes_root / (source_path.stem) - - for img in progress_bar(os.listdir(str(bwframes_folder))): - img_path = bwframes_folder / img - - if os.path.isfile(str(img_path)): - color_image = self.vis.get_transformed_image( - str(img_path), render_factor=render_factor, post_process=post_process,watermarked=watermarked - ) - color_image.save(str(colorframes_folder / img)) - - def _build_video(self, source_path: Path) -> Path: - colorized_path = self.result_folder / ( - source_path.name.replace('.mp4', '_no_audio.mp4') - ) - colorframes_folder = self.colorframes_root / (source_path.stem) - colorframes_path_template = str(colorframes_folder / '%5d.jpg') - colorized_path.parent.mkdir(parents=True, exist_ok=True) - if colorized_path.exists(): - colorized_path.unlink() - fps = self._get_fps(source_path) - - process = ( - ffmpeg - .input(str(colorframes_path_template), format='image2', vcodec='mjpeg', framerate=fps) - .output(str(colorized_path), crf=17, vcodec='libx264') - .global_args('-hide_banner') - .global_args('-nostats') - .global_args('-loglevel', 'error') - ) - - try: - process.run() - except ffmpeg.Error as e: - logging.error("ffmpeg error: {0}".format(e), exc_info=True) - logging.error('stdout:' + e.stdout.decode('UTF-8')) - logging.error('stderr:' + e.stderr.decode('UTF-8')) - raise e - except Exception as e: - logging.error('Errror while building output video. Details: {0}'.format(e), exc_info=True) - raise e - - result_path = self.result_folder / source_path.name - if result_path.exists(): - result_path.unlink() - # making copy of non-audio version in case adding back audio doesn't apply or fails. 
- shutil.copyfile(str(colorized_path), str(result_path)) - - # adding back sound here - audio_file = Path(str(source_path).replace('.mp4', '.aac')) - if audio_file.exists(): - audio_file.unlink() - - os.system( - 'ffmpeg -y -i "' - + str(source_path) - + '" -vn -acodec copy "' - + str(audio_file) - + '"' - + ' -hide_banner' - + ' -nostats' - + ' -loglevel error' - ) - - if audio_file.exists(): - os.system( - 'ffmpeg -y -i "' - + str(colorized_path) - + '" -i "' - + str(audio_file) - + '" -shortest -c:v copy -c:a aac -b:a 256k "' - + str(result_path) - + '"' - + ' -hide_banner' - + ' -nostats' - + ' -loglevel error' - ) - logging.info('Video created here: ' + str(result_path)) - return result_path - - def colorize_from_url( - self, - source_url, - file_name: str, - render_factor: int = None, - post_process: bool = True, - watermarked: bool = True, - - ) -> Path: - source_path = self.source_folder / file_name - self._download_video_from_url(source_url, source_path) - return self._colorize_from_path( - source_path, render_factor=render_factor, post_process=post_process,watermarked=watermarked - ) - - def colorize_from_file_name( - self, file_name: str, render_factor: int = None, watermarked: bool = True, post_process: bool = True, - ) -> Path: - source_path = self.source_folder / file_name - return self._colorize_from_path( - source_path, render_factor=render_factor, post_process=post_process,watermarked=watermarked - ) - - def _colorize_from_path( - self, source_path: Path, render_factor: int = None, watermarked: bool = True, post_process: bool = True - ) -> Path: - if not source_path.exists(): - raise Exception( - 'Video at path specfied, ' + str(source_path) + ' could not be found.' - ) - self._extract_raw_frames(source_path) - self._colorize_raw_frames( - source_path, render_factor=render_factor,post_process=post_process,watermarked=watermarked - ) - return self._build_video(source_path) - - -def get_video_colorizer(render_factor: int = 21) -> VideoColorizer: - return get_stable_video_colorizer(render_factor=render_factor) - - -def get_artistic_video_colorizer( - root_folder: Path = Path('./'), - weights_name: str = 'ColorizeArtistic_gen', - results_dir='result_images', - render_factor: int = 35 -) -> VideoColorizer: - learn = gen_inference_deep(root_folder=root_folder, weights_name=weights_name) - filtr = MasterFilter([ColorizerFilter(learn=learn)], render_factor=render_factor) - vis = ModelImageVisualizer(filtr, results_dir=results_dir) - return VideoColorizer(vis) - - -def get_stable_video_colorizer( - root_folder: Path = Path('./'), - weights_name: str = 'ColorizeVideo_gen', - results_dir='result_images', - render_factor: int = 21 -) -> VideoColorizer: - learn = gen_inference_wide(root_folder=root_folder, weights_name=weights_name) - filtr = MasterFilter([ColorizerFilter(learn=learn)], render_factor=render_factor) - vis = ModelImageVisualizer(filtr, results_dir=results_dir) - return VideoColorizer(vis) - - -def get_image_colorizer( - root_folder: Path = Path('./'), render_factor: int = 35, artistic: bool = True -) -> ModelImageVisualizer: - if artistic: - return get_artistic_image_colorizer(root_folder=root_folder, render_factor=render_factor) - else: - return get_stable_image_colorizer(root_folder=root_folder, render_factor=render_factor) - - -def get_stable_image_colorizer( - root_folder: Path = Path('./'), - weights_name: str = 'ColorizeStable_gen', - results_dir='result_images', - render_factor: int = 35 -) -> ModelImageVisualizer: - learn = 
gen_inference_wide(root_folder=root_folder, weights_name=weights_name) - filtr = MasterFilter([ColorizerFilter(learn=learn)], render_factor=render_factor) - vis = ModelImageVisualizer(filtr, results_dir=results_dir) - return vis - - -def get_artistic_image_colorizer( - root_folder: Path = Path('./'), - weights_name: str = 'ColorizeArtistic_gen', - results_dir='result_images', - render_factor: int = 35 -) -> ModelImageVisualizer: - learn = gen_inference_deep(root_folder=root_folder, weights_name=weights_name) - filtr = MasterFilter([ColorizerFilter(learn=learn)], render_factor=render_factor) - vis = ModelImageVisualizer(filtr, results_dir=results_dir) - return vis - - -def show_image_in_notebook(image_path: Path): - ipythondisplay.display(ipythonimage(str(image_path))) - - -def show_video_in_notebook(video_path: Path): - video = io.open(video_path, 'r+b').read() - encoded = base64.b64encode(video) - ipythondisplay.display( - HTML( - data=''''''.format( - encoded.decode('ascii') - ) - ) - ) diff --git a/spaces/Xixeo/Text-to-Music/app.py b/spaces/Xixeo/Text-to-Music/app.py deleted file mode 100644 index b571467ce2f6bae52d8998f7668276fcb835975e..0000000000000000000000000000000000000000 --- a/spaces/Xixeo/Text-to-Music/app.py +++ /dev/null @@ -1,95 +0,0 @@ -import time - -import gradio as gr -from sentence_transformers import SentenceTransformer - -import httpx -import json - -from utils import get_tags_for_prompts, get_mubert_tags_embeddings, get_pat - -minilm = SentenceTransformer('all-MiniLM-L6-v2') -mubert_tags_embeddings = get_mubert_tags_embeddings(minilm) - - -def get_track_by_tags(tags, pat, duration, maxit=20, loop=False): - if loop: - mode = "loop" - else: - mode = "track" - r = httpx.post('https://api-b2b.mubert.com/v2/RecordTrackTTM', - json={ - "method": "RecordTrackTTM", - "params": { - "pat": pat, - "duration": duration, - "tags": tags, - "mode": mode - } - }) - - rdata = json.loads(r.text) - assert rdata['status'] == 1, rdata['error']['text'] - trackurl = rdata['data']['tasks'][0]['download_link'] - - print('Generating track ', end='') - for i in range(maxit): - r = httpx.get(trackurl) - if r.status_code == 200: - return trackurl - time.sleep(1) - - -def generate_track_by_prompt(email, prompt, duration, loop=False): - try: - pat = get_pat(email) - _, tags = get_tags_for_prompts(minilm, mubert_tags_embeddings, [prompt, ])[0] - return get_track_by_tags(tags, pat, int(duration), loop=loop), "Success", ",".join(tags) - except Exception as e: - return None, str(e), "" - - -block = gr.Blocks() - -with block: - gr.HTML( - """ -
    - Mubert
    - All music is generated by Mubert API – www.mubert.com
    - """ - ) - with gr.Group(): - with gr.Box(): - email = gr.Textbox(label="email") - prompt = gr.Textbox(label="prompt") - duration = gr.Slider(label="duration (seconds)", value=30) - is_loop = gr.Checkbox(label="Generate loop") - out = gr.Audio() - result_msg = gr.Text(label="Result message") - tags = gr.Text(label="Tags") - btn = gr.Button("Submit").style(full_width=True) - - btn.click(fn=generate_track_by_prompt, inputs=[email, prompt, duration, is_loop], outputs=[out, result_msg, tags]) - gr.HTML(''' - - ''') - -block.launch() \ No newline at end of file diff --git a/spaces/XzJosh/LittleTaffy-Bert-VITS2/app.py b/spaces/XzJosh/LittleTaffy-Bert-VITS2/app.py deleted file mode 100644 index 1b7c195cbaa64b20db2f97ef6eb0081035ace7e7..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/LittleTaffy-Bert-VITS2/app.py +++ /dev/null @@ -1,143 +0,0 @@ -import sys, os - -if sys.platform == "darwin": - os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1" - -import logging - -logging.getLogger("numba").setLevel(logging.WARNING) -logging.getLogger("markdown_it").setLevel(logging.WARNING) -logging.getLogger("urllib3").setLevel(logging.WARNING) -logging.getLogger("matplotlib").setLevel(logging.WARNING) - -logging.basicConfig(level=logging.INFO, format="| %(name)s | %(levelname)s | %(message)s") - -logger = logging.getLogger(__name__) - -import torch -import argparse -import commons -import utils -from models import SynthesizerTrn -from text.symbols import symbols -from text import cleaned_text_to_sequence, get_bert -from text.cleaner import clean_text -import gradio as gr -import webbrowser - - -net_g = None - - -def get_text(text, language_str, hps): - norm_text, phone, tone, word2ph = clean_text(text, language_str) - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert = get_bert(norm_text, word2ph, language_str) - del word2ph - - assert bert.shape[-1] == len(phone) - - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - - return bert, phone, tone, language - -def infer(text, sdp_ratio, noise_scale, noise_scale_w, length_scale, sid): - global net_g - bert, phones, tones, lang_ids = get_text(text, "ZH", hps) - with torch.no_grad(): - x_tst=phones.to(device).unsqueeze(0) - tones=tones.to(device).unsqueeze(0) - lang_ids=lang_ids.to(device).unsqueeze(0) - bert = bert.to(device).unsqueeze(0) - x_tst_lengths = torch.LongTensor([phones.size(0)]).to(device) - del phones - speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(device) - audio = net_g.infer(x_tst, x_tst_lengths, speakers, tones, lang_ids, bert, sdp_ratio=sdp_ratio - , noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale)[0][0,0].data.cpu().float().numpy() - del x_tst, tones, lang_ids, bert, x_tst_lengths, speakers - return audio - -def tts_fn(text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale): - with torch.no_grad(): - audio = infer(text, sdp_ratio=sdp_ratio, noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale, sid=speaker) - return "Success", (hps.data.sampling_rate, audio) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--model_dir", default="./logs/Taffy/G_1800.pth", help="path of your model") - 
parser.add_argument("--config_dir", default="./configs/config.json", help="path of your config file") - parser.add_argument("--share", default=False, help="make link public") - parser.add_argument("-d", "--debug", action="store_true", help="enable DEBUG-LEVEL log") - - args = parser.parse_args() - if args.debug: - logger.info("Enable DEBUG-LEVEL log") - logging.basicConfig(level=logging.DEBUG) - hps = utils.get_hparams_from_file(args.config_dir) - device = "cuda:0" if torch.cuda.is_available() else "cpu" - ''' - device = ( - "cuda:0" - if torch.cuda.is_available() - else ( - "mps" - if sys.platform == "darwin" and torch.backends.mps.is_available() - else "cpu" - ) - ) - ''' - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model).to(device) - _ = net_g.eval() - - _ = utils.load_checkpoint(args.model_dir, net_g, None, skip_optimizer=True) - - speaker_ids = hps.data.spk2id - speakers = list(speaker_ids.keys()) - with gr.Blocks() as app: - with gr.Row(): - with gr.Column(): - gr.Markdown(value=""" - 【AI小菲】在线语音合成(Bert-Vits2)\n - 作者:Xz乔希 https://space.bilibili.com/5859321\n - 声音归属:永雏塔菲 https://space.bilibili.com/1265680561\n - Bert-VITS2项目:https://github.com/Stardust-minus/Bert-VITS2\n - 【大菲】语音合成链接:https://huggingface.co/spaces/XzJosh/Taffy-Bert-VITS2\n - 使用本模型请严格遵守法律法规!\n - 发布二创作品请遵守永雏塔菲二创守则规范!并标注本项目作者及链接喵~\n - """) - text = gr.TextArea(label="Text", placeholder="Input Text Here", - value="关注永雏小菲喵,关注永雏小菲谢谢喵") - speaker = gr.Dropdown(choices=speakers, value=speakers[0], label='Speaker') - sdp_ratio = gr.Slider(minimum=0, maximum=1, value=0.2, step=0.1, label='SDP/DP混合比') - noise_scale = gr.Slider(minimum=0.1, maximum=1.5, value=0.6, step=0.1, label='感情调节') - noise_scale_w = gr.Slider(minimum=0.1, maximum=1.4, value=0.8, step=0.1, label='音素长度') - length_scale = gr.Slider(minimum=0.1, maximum=2, value=1, step=0.1, label='生成长度') - btn = gr.Button("生成喵!", variant="primary") - with gr.Column(): - text_output = gr.Textbox(label="Message") - audio_output = gr.Audio(label="Output Audio") - - btn.click(tts_fn, - inputs=[text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale], - outputs=[text_output, audio_output]) - -# webbrowser.open("http://127.0.0.1:6006") -# app.launch(server_port=6006, show_error=True) - - app.launch(show_error=True) diff --git a/spaces/XzJosh/ShanBao-Bert-VITS2/text/__init__.py b/spaces/XzJosh/ShanBao-Bert-VITS2/text/__init__.py deleted file mode 100644 index 7566bf351ca9b95af9cdc6d729557a9da083800f..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/ShanBao-Bert-VITS2/text/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -from text.symbols import * - - -_symbol_to_id = {s: i for i, s in enumerate(symbols)} - -def cleaned_text_to_sequence(cleaned_text, tones, language): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. 
- Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - phones = [_symbol_to_id[symbol] for symbol in cleaned_text] - tone_start = language_tone_start_map[language] - tones = [i + tone_start for i in tones] - lang_id = language_id_map[language] - lang_ids = [lang_id for i in phones] - return phones, tones, lang_ids - -def get_bert(norm_text, word2ph, language): - from .chinese_bert import get_bert_feature as zh_bert - from .english_bert_mock import get_bert_feature as en_bert - lang_bert_func_map = { - 'ZH': zh_bert, - 'EN': en_bert - } - bert = lang_bert_func_map[language](norm_text, word2ph) - return bert diff --git a/spaces/YONG627/456123/yolov5-code-main/utils/flask_rest_api/restapi.py b/spaces/YONG627/456123/yolov5-code-main/utils/flask_rest_api/restapi.py deleted file mode 100644 index 9258b1a68860e1c6c4d0a4dfc3fad5ed245b3688..0000000000000000000000000000000000000000 --- a/spaces/YONG627/456123/yolov5-code-main/utils/flask_rest_api/restapi.py +++ /dev/null @@ -1,48 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Run a Flask REST API exposing one or more YOLOv5s models -""" - -import argparse -import io - -import torch -from flask import Flask, request -from PIL import Image - -app = Flask(__name__) -models = {} - -DETECTION_URL = '/v1/object-detection/' - - -@app.route(DETECTION_URL, methods=['POST']) -def predict(model): - if request.method != 'POST': - return - - if request.files.get('image'): - # Method 1 - # with request.files["image"] as f: - # im = Image.open(io.BytesIO(f.read())) - - # Method 2 - im_file = request.files['image'] - im_bytes = im_file.read() - im = Image.open(io.BytesIO(im_bytes)) - - if model in models: - results = models[model](im, size=640) # reduce size=320 for faster inference - return results.pandas().xyxy[0].to_json(orient='records') - - -if __name__ == '__main__': - parser = argparse.ArgumentParser(description='Flask API exposing YOLOv5 model') - parser.add_argument('--port', default=5000, type=int, help='port number') - parser.add_argument('--model', nargs='+', default=['yolov5s'], help='model(s) to run, i.e. --model yolov5n yolov5s') - opt = parser.parse_args() - - for m in opt.model: - models[m] = torch.hub.load('ultralytics/yolov5', m, force_reload=True, skip_validation=True) - - app.run(host='0.0.0.0', port=opt.port) # debug=True causes Restarting with stat diff --git a/spaces/Yash911/DiabetesModel/app.py b/spaces/Yash911/DiabetesModel/app.py deleted file mode 100644 index a9fc6a69647839f72df21e0aa73cbd93bd98f029..0000000000000000000000000000000000000000 --- a/spaces/Yash911/DiabetesModel/app.py +++ /dev/null @@ -1,180 +0,0 @@ -# -*- coding: utf-8 -*- -"""Diabetes.ipynb - -Automatically generated by Colaboratory. 
- -Original file is located at - https://colab.research.google.com/drive/15IbzL0ARqBYPhh4fx4KN2rJ62USEmIO2 - -Importing the Dependencies -""" -#pip install -U scikit-learn - -import numpy as np -import pandas as pd -from sklearn.preprocessing import StandardScaler -from sklearn.model_selection import train_test_split -from sklearn import svm -from sklearn.metrics import accuracy_score - -"""Data Collection and Analysis - -PIMA Diabetes Dataset -""" - -# loading the diabetes dataset to a pandas DataFrame -diabetes_dataset = pd.read_csv('diabetes.csv') - - -# printing the first 5 rows of the dataset -diabetes_dataset.head() - -# number of rows and Columns in this dataset -diabetes_dataset.shape - -# getting the statistical measures of the data -diabetes_dataset.describe() - -diabetes_dataset['Outcome'].value_counts() - -"""0 --> Non-Diabetic - -1 --> Diabetic -""" - -diabetes_dataset.groupby('Outcome').mean() - -# separating the data and labels -X = diabetes_dataset.drop(columns = 'Outcome', axis=1) -Y = diabetes_dataset['Outcome'] - -print(X) - -print(Y) - -"""Data Standardization""" - -scaler = StandardScaler() - -scaler.fit(X) - -standardized_data = scaler.transform(X) - -print(standardized_data) - -X = standardized_data -Y = diabetes_dataset['Outcome'] - -print(X) -print(Y) - -"""Train Test Split""" - -X_train, X_test, Y_train, Y_test = train_test_split(X,Y, test_size = 0.2, stratify=Y, random_state=2) - -print(X.shape, X_train.shape, X_test.shape) - -"""Training the Model""" - -classifier = svm.SVC(kernel='linear') - -#training the support vector Machine Classifier -classifier.fit(X_train, Y_train) - -"""Model Evaluation - -Accuracy Score -""" - -# accuracy score on the training data -X_train_prediction = classifier.predict(X_train) -training_data_accuracy = accuracy_score(X_train_prediction, Y_train) - -print('Accuracy score of the training data : ', training_data_accuracy) - -# accuracy score on the test data -X_test_prediction = classifier.predict(X_test) -test_data_accuracy = accuracy_score(X_test_prediction, Y_test) - -print('Accuracy score of the test data : ', test_data_accuracy) - -"""Making a Predictive System""" - -def predict(Pregnancies,Glucose,BloodPressure,SkinThickness,Insulin,BMI,DiabetesPedigreeFunction,Age): - #input_data = (5,166,72,19,175,25.8,0.587,51) - input_data = (Pregnancies,Glucose,BloodPressure,SkinThickness,Insulin,BMI,DiabetesPedigreeFunction,Age) - - - # changing the input_data to numpy array - input_data_as_numpy_array = np.asarray(input_data) - - # reshape the array as we are predicting for one instance - input_data_reshaped = input_data_as_numpy_array.reshape(1,-1) - - # standardize the input data - std_data = scaler.transform(input_data_reshaped) - print(std_data) - - prediction = classifier.predict(std_data) - #print(prediction) - - if (prediction[0] == 0): - print('The person is not diabetic') - else: - print('The person is diabetic') - return prediction - -predict(4,136,64,20,175,25.6,0.597,50) - - -import gradio as gr - - -def dibetis_predict(Pregnancies,Glucose,BloodPressure,SkinThickness,Insulin,BMI,DiabetesPedigreeFunction,Age): - #input_data = (5,166,72,19,175,25.8,0.587,51) - input_data = (Pregnancies,Glucose,BloodPressure,SkinThickness,Insulin,BMI,DiabetesPedigreeFunction,Age) - - - # changing the input_data to numpy array - input_data_as_numpy_array = np.asarray(input_data) - - # reshape the array as we are predicting for one instance - input_data_reshaped = input_data_as_numpy_array.reshape(1,-1) - - # standardize the input data - std_data 
= scaler.transform(input_data_reshaped) - print(std_data) - - prediction = classifier.predict(std_data) - - if (prediction[0] == 0): - print('The person is not diabetic') - return 'The person is not diabetic' - else: - print('The person is diabetic') - return 'The person is diabetic' - - - - -demo = gr.Interface( - fn=dibetis_predict, - - inputs = [ - gr.Slider(0, 20, value=4, label="Pregnancies", info="Choose between 0 and 20"), - gr.Slider(1, 200, value=136, label="Glucose", info="Choose between 1 and 200"), - gr.Slider(1, 100, value=64, label="BloodPressure", info="Choose between 1 and 100"), - gr.Slider(1, 50, value=20, label="SkinThickness", info="Choose between 1 and 50"), - gr.Slider(1, 200, value=175, label="Insulin", info="Choose between 1 and 200"), - gr.Slider(1, 100, value=25.5, label="BMI", info="Choose between 1 and 100"), - gr.Slider(0, 1.0, value=0.549, label="DiabetesPedigreeFunction", info="Choose between 0.0 and 1.0"), - gr.Slider(1, 100, value=50, label="Age", info="Choose between 1 and 100"), - ], - #description="Diabetes Prediction Model By Yash Rawal" - #Markdown("""Dibetese prediction system by Yash Rawal""") - outputs = "text", -) - -if __name__ == "__main__": - demo.launch() - diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion.py deleted file mode 100644 index fb64a34a0bd89f45dd27e4143aa0c3093d4d6e65..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion.py +++ /dev/null @@ -1,579 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -from typing import Callable, List, Optional, Union - -import torch - -from diffusers.utils import is_accelerate_available -from packaging import version -from transformers import CLIPFeatureExtractor, XLMRobertaTokenizer - -from ...configuration_utils import FrozenDict -from ...models import AutoencoderKL, UNet2DConditionModel -from ...pipeline_utils import DiffusionPipeline -from ...schedulers import ( - DDIMScheduler, - DPMSolverMultistepScheduler, - EulerAncestralDiscreteScheduler, - EulerDiscreteScheduler, - LMSDiscreteScheduler, - PNDMScheduler, -) -from ...utils import deprecate, logging -from ..stable_diffusion.safety_checker import StableDiffusionSafetyChecker -from . 
import AltDiffusionPipelineOutput, RobertaSeriesModelWithTransformation - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline with Stable->Alt, CLIPTextModel->RobertaSeriesModelWithTransformation, CLIPTokenizer->XLMRobertaTokenizer, AltDiffusionSafetyChecker->StableDiffusionSafetyChecker -class AltDiffusionPipeline(DiffusionPipeline): - r""" - Pipeline for text-to-image generation using Alt Diffusion. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`RobertaSeriesModelWithTransformation`]): - Frozen text-encoder. Alt Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.RobertaSeriesModelWithTransformation), - specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`XLMRobertaTokenizer`): - Tokenizer of class - [XLMRobertaTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.XLMRobertaTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPFeatureExtractor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. - """ - _optional_components = ["safety_checker", "feature_extractor"] - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: RobertaSeriesModelWithTransformation, - tokenizer: XLMRobertaTokenizer, - unet: UNet2DConditionModel, - scheduler: Union[ - DDIMScheduler, - PNDMScheduler, - LMSDiscreteScheduler, - EulerDiscreteScheduler, - EulerAncestralDiscreteScheduler, - DPMSolverMultistepScheduler, - ], - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPFeatureExtractor, - requires_safety_checker: bool = True, - ): - super().__init__() - - if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" - f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure " - "to update the config accordingly as leaving `steps_offset` might led to incorrect results" - " in future versions. 
If you have downloaded this checkpoint from the Hugging Face Hub," - " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" - " file" - ) - deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["steps_offset"] = 1 - scheduler._internal_dict = FrozenDict(new_config) - - if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`." - " `clip_sample` should be set to False in the configuration file. Please make sure to update the" - " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in" - " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very" - " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file" - ) - deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["clip_sample"] = False - scheduler._internal_dict = FrozenDict(new_config) - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Alt Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." - ) - - is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse( - version.parse(unet.config._diffusers_version).base_version - ) < version.parse("0.9.0.dev0") - is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64 - if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64: - deprecation_message = ( - "The configuration file of the unet has set the default `sample_size` to smaller than" - " 64 which seems highly unlikely .If you're checkpoint is a fine-tuned version of any of the" - " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-" - " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5" - " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the" - " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`" - " in the config might lead to incorrect results in future versions. 
If you have downloaded this" - " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for" - " the `unet/config.json` file" - ) - deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(unet.config) - new_config["sample_size"] = 64 - unet._internal_dict = FrozenDict(new_config) - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"): - r""" - Enable sliced attention computation. - - When this option is enabled, the attention module will split the input tensor in slices, to compute attention - in several steps. This is useful to save some memory in exchange for a small speed decrease. - - Args: - slice_size (`str` or `int`, *optional*, defaults to `"auto"`): - When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If - a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case, - `attention_head_dim` must be a multiple of `slice_size`. - """ - if slice_size == "auto": - if isinstance(self.unet.config.attention_head_dim, int): - # half the attention head size is usually a good trade-off between - # speed and memory - slice_size = self.unet.config.attention_head_dim // 2 - else: - # if `attention_head_dim` is a list, take the smallest head size - slice_size = min(self.unet.config.attention_head_dim) - - self.unet.set_attention_slice(slice_size) - - def disable_attention_slicing(self): - r""" - Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go - back to computing attention in one step. - """ - # set slice_size = `None` to disable `attention slicing` - self.enable_attention_slicing(None) - - def enable_vae_slicing(self): - r""" - Enable sliced VAE decoding. - - When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several - steps. This is useful to save some memory and allow larger batch sizes. - """ - self.vae.enable_slicing() - - def disable_vae_slicing(self): - r""" - Disable sliced VAE decoding. If `enable_vae_slicing` was previously invoked, this method will go back to - computing decoding in one step. - """ - self.vae.disable_slicing() - - def enable_sequential_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, - text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a - `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. 
- """ - if is_accelerate_available(): - from accelerate import cpu_offload - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - device = torch.device(f"cuda:{gpu_id}") - - for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae]: - if cpu_offloaded_model is not None: - cpu_offload(cpu_offloaded_model, device) - - if self.safety_checker is not None: - # TODO(Patrick) - there is currently a bug with cpu offload of nn.Parameter in accelerate - # fix by only offloading self.safety_checker for now - cpu_offload(self.safety_checker.vision_model, device) - - @property - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module - hooks. - """ - if self.device != torch.device("meta") or not hasattr(self.unet, "_hf_hook"): - return self.device - for module in self.unet.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `list(int)`): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). 
- """ - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="pt").input_ids - - if not torch.equal(text_input_ids, untruncated_ids): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - text_embeddings = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - text_embeddings = text_embeddings[0] - - # duplicate text embeddings for each generation per prompt, using mps friendly method - bs_embed, seq_len, _ = text_embeddings.shape - text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1) - text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = negative_prompt - - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - uncond_embeddings = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - uncond_embeddings = uncond_embeddings[0] - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = uncond_embeddings.shape[1] - uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1) - uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. 
- # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) - - return text_embeddings - - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(dtype) - ) - else: - has_nsfw_concept = None - return image, has_nsfw_concept - - def decode_latents(self, latents): - latents = 1 / 0.18215 * latents - image = self.vae.decode(latents).sample - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_inputs(self, prompt, height, width, callback_steps): - if not isinstance(prompt, str) and not isinstance(prompt, list): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." 
- ) - - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): - shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor) - if latents is None: - if device.type == "mps": - # randn does not work reproducibly on mps - latents = torch.randn(shape, generator=generator, device="cpu", dtype=dtype).to(device) - else: - latents = torch.randn(shape, generator=generator, device=device, dtype=dtype) - else: - if latents.shape != shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}") - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[torch.Generator] = None, - latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: Optional[int] = 1, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator`, *optional*): - A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation - deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. 
- output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.AltDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - - Returns: - [`~pipelines.stable_diffusion.AltDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.AltDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - # 0. Default height and width to unet - height = height or self.unet.config.sample_size * self.vae_scale_factor - width = width or self.unet.config.sample_size * self.vae_scale_factor - - # 1. Check inputs. Raise error if not correct - self.check_inputs(prompt, height, width, callback_steps) - - # 2. Define call parameters - batch_size = 1 if isinstance(prompt, str) else len(prompt) - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - text_embeddings = self._encode_prompt( - prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - # 4. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 5. Prepare latent variables - num_channels_latents = self.unet.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - text_embeddings.dtype, - device, - generator, - latents, - ) - - # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 7. 
Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # 8. Post-processing - image = self.decode_latents(latents) - - # 9. Run safety checker - image, has_nsfw_concept = self.run_safety_checker(image, device, text_embeddings.dtype) - - # 10. Convert to PIL - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image, has_nsfw_concept) - - return AltDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/Yiqin/ChatVID/model/fastchat/serve/test_throughput.py b/spaces/Yiqin/ChatVID/model/fastchat/serve/test_throughput.py deleted file mode 100644 index 9cc5f45c7e06deb596b51213cd2667fd8361dbfd..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/fastchat/serve/test_throughput.py +++ /dev/null @@ -1,115 +0,0 @@ -"""Benchmarking script to test the throughput of serving workers.""" -import argparse -import json - -import requests -import threading -import time - -from fastchat.conversation import default_conversation - - -def main(): - if args.worker_address: - worker_addr = args.worker_address - else: - controller_addr = args.controller_address - ret = requests.post(controller_addr + "/refresh_all_workers") - ret = requests.post(controller_addr + "/list_models") - models = ret.json()["models"] - models.sort() - print(f"Models: {models}") - - ret = requests.post( - controller_addr + "/get_worker_address", json={"model": args.model_name} - ) - worker_addr = ret.json()["address"] - print(f"worker_addr: {worker_addr}") - - if worker_addr == "": - return - - conv = default_conversation.copy() - conv.append_message(conv.roles[0], "Tell me a story with more than 1000 words") - prompt_template = conv.get_prompt() - prompts = [prompt_template for _ in range(args.n_thread)] - - headers = {"User-Agent": "fastchat Client"} - ploads = [ - { - "model": args.model_name, - "prompt": prompts[i], - "max_new_tokens": args.max_new_tokens, - "temperature": 0.0, - # "stop": conv.sep, - } - for i in range(len(prompts)) - ] - - def send_request(results, i): - if args.test_dispatch: - ret = requests.post( - controller_addr + "/get_worker_address", json={"model": args.model_name} - ) - thread_worker_addr = ret.json()["address"] - else: - thread_worker_addr = worker_addr - print(f"thread {i} goes to {thread_worker_addr}") - response = requests.post( - thread_worker_addr + "/worker_generate_stream", - headers=headers, - json=ploads[i], - 
stream=False, - ) - k = list( - response.iter_lines(chunk_size=8192, decode_unicode=False, delimiter=b"\0") - ) - # print(k) - response_new_words = json.loads(k[-2].decode("utf-8"))["text"] - error_code = json.loads(k[-2].decode("utf-8"))["error_code"] - # print(f"=== Thread {i} ===, words: {1}, error code: {error_code}") - results[i] = len(response_new_words.split(" ")) - len(prompts[i].split(" ")) - - # use N threads to prompt the backend - tik = time.time() - threads = [] - results = [None] * args.n_thread - for i in range(args.n_thread): - t = threading.Thread(target=send_request, args=(results, i)) - t.start() - # time.sleep(0.5) - threads.append(t) - - for t in threads: - t.join() - - print(f"Time (POST): {time.time() - tik} s") - # n_words = 0 - # for i, response in enumerate(results): - # # print(prompt[i].replace(conv.sep, "\n"), end="") - # # make sure the streaming finishes at EOS or stopping criteria - # k = list(response.iter_lines(chunk_size=8192, decode_unicode=False, delimiter=b"\0")) - # response_new_words = json.loads(k[-2].decode("utf-8"))["text"] - # # print(response_new_words) - # n_words += len(response_new_words.split(" ")) - len(prompts[i].split(" ")) - n_words = sum(results) - time_seconds = time.time() - tik - print( - f"Time (Completion): {time_seconds}, n threads: {args.n_thread}, " - f"throughput: {n_words / time_seconds} words/s." - ) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument( - "--controller-address", type=str, default="http://localhost:21001" - ) - parser.add_argument("--worker-address", type=str) - parser.add_argument("--model-name", type=str, default="vicuna") - parser.add_argument("--max-new-tokens", type=int, default=2048) - parser.add_argument("--n-thread", type=int, default=8) - parser.add_argument("--test-dispatch", action="store_true") - args = parser.parse_args() - - main() diff --git a/spaces/YotamNitzan/domain-expansion/expansion_utils/latent_operations.py b/spaces/YotamNitzan/domain-expansion/expansion_utils/latent_operations.py deleted file mode 100644 index 4c2169dec2c54dea41027a7e625f3b9db3955715..0000000000000000000000000000000000000000 --- a/spaces/YotamNitzan/domain-expansion/expansion_utils/latent_operations.py +++ /dev/null @@ -1,98 +0,0 @@ -# Copyright 2023 Adobe Research. All rights reserved. -# To view a copy of the license, visit LICENSE.md. - -import torch - - -def project_to_subspaces(latent: torch.Tensor, basis: torch.Tensor, - repurposed_dims: torch.Tensor, base_dims: torch.Tensor = None, - step_size=None, mean=None): - """ - Project latent on the base subspace (Z_base) - spanned by the base_dims. - Then, traverses the projected latent along the repurposed directions. - If the step_size parameter can be interpreted as some 1D structure, - then traversal is performed separately for each repurposed dim with these as step sizes. - Otherwise, it defines a joint traversal of multiple dimensions at once. - Usually, it would be 3D, so the output can be visualized in a 2D image grid. 
- - Returns: - traversals.shape, 1D case -[num_steps, num_repurposed, shape of input] - traversals.shape, ND case -[num_steps_1, ..., num_steps_N, shape of input] - """ - - if type(latent) == list: - if len(latent) != 1: - raise ValueError('Latent wrapped by list should be of length 1') - latent = latent[0] - - latent_in_w = False - if latent.dim() == 2: - # Lift to W+ just for now - latent = w_to_wplus(latent) - latent_in_w = True - elif latent.dim() != 3: - raise ValueError('Latent is expected to be 2D (W space) or 3D (W+ space)') - - latent_dim = latent.shape[-1] - - if base_dims is None: - # Take all non-repurposed dims to span the base subspace -- default mode - base_dims = torch.Tensor([x for x in range(latent_dim) if x not in repurposed_dims]) - - # Use values instead of boolean to change order as needed - repurposed_directions = basis[:, repurposed_dims.numpy()] - base_directions = basis[:, base_dims.numpy()] - - projected_latent = latent @ base_directions - base_latent = projected_latent @ base_directions.T - - if mean is not None: - base_latent += (mean @ repurposed_directions) @ repurposed_directions.T - - if step_size is None: - if latent_in_w: - base_latent = wplus_to_w(base_latent) - return base_latent, None - - if isinstance(step_size, float) or isinstance(step_size, int): - step_size = torch.Tensor([step_size]).to(latent.device) - - repurposed_directions = repurposed_directions.T - - num_repurposed = len(repurposed_dims) - - if step_size.dim() == 1: - # separate same-sized steps on all dims - num_steps = step_size.shape[0] - output_shape = [num_steps, num_repurposed, *latent.shape] - edits = torch.einsum('a, df -> adf', step_size, repurposed_directions) - elif step_size.dim() == 3: - # compound steps, on multiple dims - steps_in_directions = step_size.shape[:-1] - output_shape = [*steps_in_directions, *latent.shape] - - edits = step_size @ repurposed_directions - else: - raise NotImplementedError('Cannot edit with these values') - - edit_latents = base_latent.expand(output_shape) + edits.unsqueeze(2).unsqueeze(2).expand(output_shape) - - if latent_in_w: - # Bring back to W sapce - base_latent, edit_latents = wplus_to_w(base_latent), edit_latents[..., 0, :] - - return base_latent, edit_latents - -def w_to_wplus(w_latent: torch.Tensor, num_ws=18): - return w_latent.unsqueeze(1).repeat([1, num_ws, 1]) - -def wplus_to_w(latents: torch.Tensor): - """ - latents is expected to have shape (...,num_ws,512) or - """ - with torch.no_grad(): - _, counts = torch.unique(latents, dim=-2, return_counts=True) - if len(counts) != 1: - raise ValueError('input latent is not a W code, conversion from W+ is undefined') - - return latents[..., 0, :] diff --git a/spaces/Yuliang/ECON/lib/net/NormalNet.py b/spaces/Yuliang/ECON/lib/net/NormalNet.py deleted file mode 100644 index d4f0fe8e9a9ab935ffc8a49ae2d22d23bc44d4cc..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ECON/lib/net/NormalNet.py +++ /dev/null @@ -1,178 +0,0 @@ -# -*- coding: utf-8 -*- - -# Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (MPG) is -# holder of all proprietary rights on this computer program. -# You can only use this computer program if you have closed -# a license agreement with MPG or you get the right to use the computer -# program from someone who is authorized to grant you that right. -# Any use of the computer program without a valid license is prohibited and -# liable to prosecution. -# -# Copyright©2019 Max-Planck-Gesellschaft zur Förderung -# der Wissenschaften e.V. (MPG). 
acting on behalf of its Max Planck Institute -# for Intelligent Systems. All rights reserved. -# -# Contact: ps-license@tuebingen.mpg.de - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from lib.net.BasePIFuNet import BasePIFuNet -from lib.net.FBNet import GANLoss, IDMRFLoss, VGGLoss, define_D, define_G -from lib.net.net_util import init_net - - -class NormalNet(BasePIFuNet): - """ - HG PIFu network uses Hourglass stacks as the image filter. - It does the following: - 1. Compute image feature stacks and store it in self.im_feat_list - self.im_feat_list[-1] is the last stack (output stack) - 2. Calculate calibration - 3. If training, it index on every intermediate stacks, - If testing, it index on the last stack. - 4. Classification. - 5. During training, error is calculated on all stacks. - """ - def __init__(self, cfg): - - super(NormalNet, self).__init__() - - self.opt = cfg.net - - self.F_losses = [item[0] for item in self.opt.front_losses] - self.B_losses = [item[0] for item in self.opt.back_losses] - self.F_losses_ratio = [item[1] for item in self.opt.front_losses] - self.B_losses_ratio = [item[1] for item in self.opt.back_losses] - self.ALL_losses = self.F_losses + self.B_losses - - if self.training: - if 'vgg' in self.ALL_losses: - self.vgg_loss = VGGLoss() - if ('gan' in self.ALL_losses) or ('gan_feat' in self.ALL_losses): - self.gan_loss = GANLoss(use_lsgan=True) - if 'mrf' in self.ALL_losses: - self.mrf_loss = IDMRFLoss() - if 'l1' in self.ALL_losses: - self.l1_loss = nn.SmoothL1Loss() - - self.in_nmlF = [ - item[0] for item in self.opt.in_nml if "_F" in item[0] or item[0] == "image" - ] - self.in_nmlB = [ - item[0] for item in self.opt.in_nml if "_B" in item[0] or item[0] == "image" - ] - self.in_nmlF_dim = sum([ - item[1] for item in self.opt.in_nml if "_F" in item[0] or item[0] == "image" - ]) - self.in_nmlB_dim = sum([ - item[1] for item in self.opt.in_nml if "_B" in item[0] or item[0] == "image" - ]) - - self.netF = define_G(self.in_nmlF_dim, 3, 64, "global", 4, 9, 1, 3, "instance") - self.netB = define_G(self.in_nmlB_dim, 3, 64, "global", 4, 9, 1, 3, "instance") - - if ('gan' in self.ALL_losses): - self.netD = define_D(3, 64, 3, 'instance', False, 2, 'gan_feat' in self.ALL_losses) - - init_net(self) - - def forward(self, in_tensor): - - inF_list = [] - inB_list = [] - - for name in self.in_nmlF: - inF_list.append(in_tensor[name]) - for name in self.in_nmlB: - inB_list.append(in_tensor[name]) - - nmlF = self.netF(torch.cat(inF_list, dim=1)) - nmlB = self.netB(torch.cat(inB_list, dim=1)) - - # ||normal|| == 1 - nmlF_normalized = nmlF / torch.norm(nmlF, dim=1, keepdim=True) - nmlB_normalized = nmlB / torch.norm(nmlB, dim=1, keepdim=True) - - # output: float_arr [-1,1] with [B, C, H, W] - mask = ((in_tensor["image"].abs().sum(dim=1, keepdim=True) != 0.0).detach().float()) - - return nmlF_normalized * mask, nmlB_normalized * mask - - def get_norm_error(self, prd_F, prd_B, tgt): - """calculate normal loss - - Args: - pred (torch.tensor): [B, 6, 512, 512] - tagt (torch.tensor): [B, 6, 512, 512] - """ - - tgt_F, tgt_B = tgt["normal_F"], tgt["normal_B"] - - # netF, netB, netD - total_loss = {"netF": 0.0, "netB": 0.0} - - if 'l1' in self.F_losses: - l1_F_loss = self.l1_loss(prd_F, tgt_F) - total_loss["netF"] += self.F_losses_ratio[self.F_losses.index('l1')] * l1_F_loss - total_loss["l1_F"] = self.F_losses_ratio[self.F_losses.index('l1')] * l1_F_loss - if 'l1' in self.B_losses: - l1_B_loss = self.l1_loss(prd_B, tgt_B) - total_loss["netB"] += 
self.B_losses_ratio[self.B_losses.index('l1')] * l1_B_loss - total_loss["l1_B"] = self.B_losses_ratio[self.B_losses.index('l1')] * l1_B_loss - - if 'vgg' in self.F_losses: - vgg_F_loss = self.vgg_loss(prd_F, tgt_F) - total_loss["netF"] += self.F_losses_ratio[self.F_losses.index('vgg')] * vgg_F_loss - total_loss["vgg_F"] = self.F_losses_ratio[self.F_losses.index('vgg')] * vgg_F_loss - if 'vgg' in self.B_losses: - vgg_B_loss = self.vgg_loss(prd_B, tgt_B) - total_loss["netB"] += self.B_losses_ratio[self.B_losses.index('vgg')] * vgg_B_loss - total_loss["vgg_B"] = self.B_losses_ratio[self.B_losses.index('vgg')] * vgg_B_loss - - scale_factor = 0.5 - if 'mrf' in self.F_losses: - mrf_F_loss = self.mrf_loss( - F.interpolate(prd_F, scale_factor=scale_factor, mode='bicubic', align_corners=True), - F.interpolate(tgt_F, scale_factor=scale_factor, mode='bicubic', align_corners=True) - ) - total_loss["netF"] += self.F_losses_ratio[self.F_losses.index('mrf')] * mrf_F_loss - total_loss["mrf_F"] = self.F_losses_ratio[self.F_losses.index('mrf')] * mrf_F_loss - if 'mrf' in self.B_losses: - mrf_B_loss = self.mrf_loss( - F.interpolate(prd_B, scale_factor=scale_factor, mode='bicubic', align_corners=True), - F.interpolate(tgt_B, scale_factor=scale_factor, mode='bicubic', align_corners=True) - ) - total_loss["netB"] += self.B_losses_ratio[self.B_losses.index('mrf')] * mrf_B_loss - total_loss["mrf_B"] = self.B_losses_ratio[self.B_losses.index('mrf')] * mrf_B_loss - - if 'gan' in self.ALL_losses: - - total_loss["netD"] = 0.0 - - pred_fake = self.netD.forward(prd_B) - pred_real = self.netD.forward(tgt_B) - loss_D_fake = self.gan_loss(pred_fake, False) - loss_D_real = self.gan_loss(pred_real, True) - loss_G_fake = self.gan_loss(pred_fake, True) - - total_loss["netD"] += 0.5 * (loss_D_fake + loss_D_real - ) * self.B_losses_ratio[self.B_losses.index('gan')] - total_loss["D_fake"] = loss_D_fake * self.B_losses_ratio[self.B_losses.index('gan')] - total_loss["D_real"] = loss_D_real * self.B_losses_ratio[self.B_losses.index('gan')] - - total_loss["netB"] += loss_G_fake * self.B_losses_ratio[self.B_losses.index('gan')] - total_loss["G_fake"] = loss_G_fake * self.B_losses_ratio[self.B_losses.index('gan')] - - if 'gan_feat' in self.ALL_losses: - loss_G_GAN_Feat = 0 - for i in range(2): - for j in range(len(pred_fake[i]) - 1): - loss_G_GAN_Feat += self.l1_loss(pred_fake[i][j], pred_real[i][j].detach()) - total_loss["netB"] += loss_G_GAN_Feat * self.B_losses_ratio[ - self.B_losses.index('gan_feat')] - total_loss["G_GAN_Feat"] = loss_G_GAN_Feat * self.B_losses_ratio[ - self.B_losses.index('gan_feat')] - - return total_loss diff --git a/spaces/Yuliang/ECON/lib/net/net_util.py b/spaces/Yuliang/ECON/lib/net/net_util.py deleted file mode 100644 index fa3c9491596688de0425b4471318ff5c23a9a909..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ECON/lib/net/net_util.py +++ /dev/null @@ -1,258 +0,0 @@ -# -*- coding: utf-8 -*- - -# Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (MPG) is -# holder of all proprietary rights on this computer program. -# You can only use this computer program if you have closed -# a license agreement with MPG or you get the right to use the computer -# program from someone who is authorized to grant you that right. -# Any use of the computer program without a valid license is prohibited and -# liable to prosecution. -# -# Copyright©2019 Max-Planck-Gesellschaft zur Förderung -# der Wissenschaften e.V. (MPG). acting on behalf of its Max Planck Institute -# for Intelligent Systems. 
All rights reserved. -# -# Contact: ps-license@tuebingen.mpg.de - -import functools - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.autograd import grad -from torch.nn import init - - -def gradient(inputs, outputs): - d_points = torch.ones_like(outputs, requires_grad=False, device=outputs.device) - points_grad = grad( - outputs=outputs, - inputs=inputs, - grad_outputs=d_points, - create_graph=True, - retain_graph=True, - only_inputs=True, - allow_unused=True, - )[0] - return points_grad - - -# def conv3x3(in_planes, out_planes, strd=1, padding=1, bias=False): -# "3x3 convolution with padding" -# return nn.Conv2d(in_planes, out_planes, kernel_size=3, -# stride=strd, padding=padding, bias=bias) - - -def conv3x3(in_planes, out_planes, kernel=3, strd=1, dilation=1, padding=1, bias=False): - "3x3 convolution with padding" - return nn.Conv2d( - in_planes, - out_planes, - kernel_size=kernel, - dilation=dilation, - stride=strd, - padding=padding, - bias=bias, - ) - - -def conv1x1(in_planes, out_planes, stride=1): - """1x1 convolution""" - return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) - - -def init_weights(net, init_type="normal", init_gain=0.02): - """Initialize network weights. - - Parameters: - net (network) -- network to be initialized - init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal - init_gain (float) -- scaling factor for normal, xavier and orthogonal. - - We use 'normal' in the original pix2pix and CycleGAN paper. But xavier and kaiming might - work better for some applications. Feel free to try yourself. - """ - def init_func(m): # define the initialization function - classname = m.__class__.__name__ - if hasattr(m, - "weight") and (classname.find("Conv") != -1 or classname.find("Linear") != -1): - if init_type == "normal": - init.normal_(m.weight.data, 0.0, init_gain) - elif init_type == "xavier": - init.xavier_normal_(m.weight.data, gain=init_gain) - elif init_type == "kaiming": - init.kaiming_normal_(m.weight.data, a=0, mode="fan_in") - elif init_type == "orthogonal": - init.orthogonal_(m.weight.data, gain=init_gain) - else: - raise NotImplementedError( - "initialization method [%s] is not implemented" % init_type - ) - if hasattr(m, "bias") and m.bias is not None: - init.constant_(m.bias.data, 0.0) - elif ( - classname.find("BatchNorm2d") != -1 - ): # BatchNorm Layer's weight is not a matrix; only normal distribution applies. - init.normal_(m.weight.data, 1.0, init_gain) - init.constant_(m.bias.data, 0.0) - - # print('initialize network with %s' % init_type) - net.apply(init_func) # apply the initialization function - - -def init_net(net, init_type="xavier", init_gain=0.02, gpu_ids=[]): - """Initialize a network: 1. register CPU/GPU device (with multi-GPU support); 2. initialize the network weights - Parameters: - net (network) -- the network to be initialized - init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal - gain (float) -- scaling factor for normal, xavier and orthogonal. - gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2 - - Return an initialized network. 
- """ - if len(gpu_ids) > 0: - assert torch.cuda.is_available() - net = torch.nn.DataParallel(net) # multi-GPUs - init_weights(net, init_type, init_gain=init_gain) - return net - - -def imageSpaceRotation(xy, rot): - """ - args: - xy: (B, 2, N) input - rot: (B, 2) x,y axis rotation angles - - rotation center will be always image center (other rotation center can be represented by additional z translation) - """ - disp = rot.unsqueeze(2).sin().expand_as(xy) - return (disp * xy).sum(dim=1) - - -def cal_gradient_penalty( - netD, real_data, fake_data, device, type="mixed", constant=1.0, lambda_gp=10.0 -): - """Calculate the gradient penalty loss, used in WGAN-GP paper https://arxiv.org/abs/1704.00028 - - Arguments: - netD (network) -- discriminator network - real_data (tensor array) -- real images - fake_data (tensor array) -- generated images from the generator - device (str) -- GPU / CPU: from torch.device('cuda:{}'.format(self.gpu_ids[0])) if self.gpu_ids else torch.device('cpu') - type (str) -- if we mix real and fake data or not [real | fake | mixed]. - constant (float) -- the constant used in formula ( | |gradient||_2 - constant)^2 - lambda_gp (float) -- weight for this loss - - Returns the gradient penalty loss - """ - if lambda_gp > 0.0: - # either use real images, fake images, or a linear interpolation of two. - if type == "real": - interpolatesv = real_data - elif type == "fake": - interpolatesv = fake_data - elif type == "mixed": - alpha = torch.rand(real_data.shape[0], 1) - alpha = ( - alpha.expand(real_data.shape[0], - real_data.nelement() // - real_data.shape[0]).contiguous().view(*real_data.shape) - ) - alpha = alpha.to(device) - interpolatesv = alpha * real_data + ((1 - alpha) * fake_data) - else: - raise NotImplementedError("{} not implemented".format(type)) - interpolatesv.requires_grad_(True) - disc_interpolates = netD(interpolatesv) - gradients = torch.autograd.grad( - outputs=disc_interpolates, - inputs=interpolatesv, - grad_outputs=torch.ones(disc_interpolates.size()).to(device), - create_graph=True, - retain_graph=True, - only_inputs=True, - ) - gradients = gradients[0].view(real_data.size(0), -1) # flat the data - gradient_penalty = (((gradients + 1e-16).norm(2, dim=1) - constant)** - 2).mean() * lambda_gp # added eps - return gradient_penalty, gradients - else: - return 0.0, None - - -def get_norm_layer(norm_type="instance"): - """Return a normalization layer - Parameters: - norm_type (str) -- the name of the normalization layer: batch | instance | none - For BatchNorm, we use learnable affine parameters and track running statistics (mean/stddev). - For InstanceNorm, we do not use learnable affine parameters. We do not track running statistics. 
- """ - if norm_type == "batch": - norm_layer = functools.partial(nn.BatchNorm2d, affine=True, track_running_stats=True) - elif norm_type == "instance": - norm_layer = functools.partial(nn.InstanceNorm2d, affine=False, track_running_stats=False) - elif norm_type == "group": - norm_layer = functools.partial(nn.GroupNorm, 32) - elif norm_type == "none": - norm_layer = None - else: - raise NotImplementedError("normalization layer [%s] is not found" % norm_type) - return norm_layer - - -class Flatten(nn.Module): - def forward(self, input): - return input.view(input.size(0), -1) - - -class ConvBlock(nn.Module): - def __init__(self, in_planes, out_planes, opt): - super(ConvBlock, self).__init__() - [k, s, d, p] = opt.conv3x3 - self.conv1 = conv3x3(in_planes, int(out_planes / 2), k, s, d, p) - self.conv2 = conv3x3(int(out_planes / 2), int(out_planes / 4), k, s, d, p) - self.conv3 = conv3x3(int(out_planes / 4), int(out_planes / 4), k, s, d, p) - - if opt.norm == "batch": - self.bn1 = nn.BatchNorm2d(in_planes) - self.bn2 = nn.BatchNorm2d(int(out_planes / 2)) - self.bn3 = nn.BatchNorm2d(int(out_planes / 4)) - self.bn4 = nn.BatchNorm2d(in_planes) - elif opt.norm == "group": - self.bn1 = nn.GroupNorm(32, in_planes) - self.bn2 = nn.GroupNorm(32, int(out_planes / 2)) - self.bn3 = nn.GroupNorm(32, int(out_planes / 4)) - self.bn4 = nn.GroupNorm(32, in_planes) - - if in_planes != out_planes: - self.downsample = nn.Sequential( - self.bn4, - nn.ReLU(True), - nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=1, bias=False), - ) - else: - self.downsample = None - - def forward(self, x): - residual = x - - out1 = self.bn1(x) - out1 = F.relu(out1, True) - out1 = self.conv1(out1) - - out2 = self.bn2(out1) - out2 = F.relu(out2, True) - out2 = self.conv2(out2) - - out3 = self.bn3(out2) - out3 = F.relu(out3, True) - out3 = self.conv3(out3) - - out3 = torch.cat((out1, out2, out3), 1) - - if self.downsample is not None: - residual = self.downsample(residual) - - out3 += residual - - return out3 diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/arraymisc/quantization.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/arraymisc/quantization.py deleted file mode 100644 index 8e47a3545780cf071a1ef8195efb0b7b662c8186..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/arraymisc/quantization.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np - - -def quantize(arr, min_val, max_val, levels, dtype=np.int64): - """Quantize an array of (-inf, inf) to [0, levels-1]. - - Args: - arr (ndarray): Input array. - min_val (scalar): Minimum value to be clipped. - max_val (scalar): Maximum value to be clipped. - levels (int): Quantization levels. - dtype (np.type): The type of the quantized array. - - Returns: - tuple: Quantized array. - """ - if not (isinstance(levels, int) and levels > 1): - raise ValueError( - f'levels must be a positive integer, but got {levels}') - if min_val >= max_val: - raise ValueError( - f'min_val ({min_val}) must be smaller than max_val ({max_val})') - - arr = np.clip(arr, min_val, max_val) - min_val - quantized_arr = np.minimum( - np.floor(levels * arr / (max_val - min_val)).astype(dtype), levels - 1) - - return quantized_arr - - -def dequantize(arr, min_val, max_val, levels, dtype=np.float64): - """Dequantize an array. - - Args: - arr (ndarray): Input array. - min_val (scalar): Minimum value to be clipped. - max_val (scalar): Maximum value to be clipped. 
- levels (int): Quantization levels. - dtype (np.type): The type of the dequantized array. - - Returns: - tuple: Dequantized array. - """ - if not (isinstance(levels, int) and levels > 1): - raise ValueError( - f'levels must be a positive integer, but got {levels}') - if min_val >= max_val: - raise ValueError( - f'min_val ({min_val}) must be smaller than max_val ({max_val})') - - dequantized_arr = (arr + 0.5).astype(dtype) * (max_val - - min_val) / levels + min_val - - return dequantized_arr diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/sync_bn.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/sync_bn.py deleted file mode 100644 index c9b016fcbe860989c56cd1040034bcfa60e146d2..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/sync_bn.py +++ /dev/null @@ -1,279 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.distributed as dist -import torch.nn.functional as F -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.module import Module -from torch.nn.parameter import Parameter - -from annotator.uniformer.mmcv.cnn import NORM_LAYERS -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'sync_bn_forward_mean', 'sync_bn_forward_var', 'sync_bn_forward_output', - 'sync_bn_backward_param', 'sync_bn_backward_data' -]) - - -class SyncBatchNormFunction(Function): - - @staticmethod - def symbolic(g, input, running_mean, running_var, weight, bias, momentum, - eps, group, group_size, stats_mode): - return g.op( - 'mmcv::MMCVSyncBatchNorm', - input, - running_mean, - running_var, - weight, - bias, - momentum_f=momentum, - eps_f=eps, - group_i=group, - group_size_i=group_size, - stats_mode=stats_mode) - - @staticmethod - def forward(self, input, running_mean, running_var, weight, bias, momentum, - eps, group, group_size, stats_mode): - self.momentum = momentum - self.eps = eps - self.group = group - self.group_size = group_size - self.stats_mode = stats_mode - - assert isinstance( - input, (torch.HalfTensor, torch.FloatTensor, - torch.cuda.HalfTensor, torch.cuda.FloatTensor)), \ - f'only support Half or Float Tensor, but {input.type()}' - output = torch.zeros_like(input) - input3d = input.flatten(start_dim=2) - output3d = output.view_as(input3d) - num_channels = input3d.size(1) - - # ensure mean/var/norm/std are initialized as zeros - # ``torch.empty()`` does not guarantee that - mean = torch.zeros( - num_channels, dtype=torch.float, device=input3d.device) - var = torch.zeros( - num_channels, dtype=torch.float, device=input3d.device) - norm = torch.zeros_like( - input3d, dtype=torch.float, device=input3d.device) - std = torch.zeros( - num_channels, dtype=torch.float, device=input3d.device) - - batch_size = input3d.size(0) - if batch_size > 0: - ext_module.sync_bn_forward_mean(input3d, mean) - batch_flag = torch.ones([1], device=mean.device, dtype=mean.dtype) - else: - # skip updating mean and leave it as zeros when the input is empty - batch_flag = torch.zeros([1], device=mean.device, dtype=mean.dtype) - - # synchronize mean and the batch flag - vec = torch.cat([mean, batch_flag]) - if self.stats_mode == 'N': - vec *= batch_size - if self.group_size > 1: - dist.all_reduce(vec, group=self.group) - total_batch = vec[-1].detach() - mean = vec[:num_channels] - - if self.stats_mode == 'default': - mean = mean / self.group_size - elif self.stats_mode == 'N': - mean = mean / 
total_batch.clamp(min=1) - else: - raise NotImplementedError - - # leave var as zeros when the input is empty - if batch_size > 0: - ext_module.sync_bn_forward_var(input3d, mean, var) - - if self.stats_mode == 'N': - var *= batch_size - if self.group_size > 1: - dist.all_reduce(var, group=self.group) - - if self.stats_mode == 'default': - var /= self.group_size - elif self.stats_mode == 'N': - var /= total_batch.clamp(min=1) - else: - raise NotImplementedError - - # if the total batch size over all the ranks is zero, - # we should not update the statistics in the current batch - update_flag = total_batch.clamp(max=1) - momentum = update_flag * self.momentum - ext_module.sync_bn_forward_output( - input3d, - mean, - var, - weight, - bias, - running_mean, - running_var, - norm, - std, - output3d, - eps=self.eps, - momentum=momentum, - group_size=self.group_size) - self.save_for_backward(norm, std, weight) - return output - - @staticmethod - @once_differentiable - def backward(self, grad_output): - norm, std, weight = self.saved_tensors - grad_weight = torch.zeros_like(weight) - grad_bias = torch.zeros_like(weight) - grad_input = torch.zeros_like(grad_output) - grad_output3d = grad_output.flatten(start_dim=2) - grad_input3d = grad_input.view_as(grad_output3d) - - batch_size = grad_input3d.size(0) - if batch_size > 0: - ext_module.sync_bn_backward_param(grad_output3d, norm, grad_weight, - grad_bias) - - # all reduce - if self.group_size > 1: - dist.all_reduce(grad_weight, group=self.group) - dist.all_reduce(grad_bias, group=self.group) - grad_weight /= self.group_size - grad_bias /= self.group_size - - if batch_size > 0: - ext_module.sync_bn_backward_data(grad_output3d, weight, - grad_weight, grad_bias, norm, std, - grad_input3d) - - return grad_input, None, None, grad_weight, grad_bias, \ - None, None, None, None, None - - -@NORM_LAYERS.register_module(name='MMSyncBN') -class SyncBatchNorm(Module): - """Synchronized Batch Normalization. - - Args: - num_features (int): number of features/chennels in input tensor - eps (float, optional): a value added to the denominator for numerical - stability. Defaults to 1e-5. - momentum (float, optional): the value used for the running_mean and - running_var computation. Defaults to 0.1. - affine (bool, optional): whether to use learnable affine parameters. - Defaults to True. - track_running_stats (bool, optional): whether to track the running - mean and variance during training. When set to False, this - module does not track such statistics, and initializes statistics - buffers ``running_mean`` and ``running_var`` as ``None``. When - these buffers are ``None``, this module always uses batch - statistics in both training and eval modes. Defaults to True. - group (int, optional): synchronization of stats happen within - each process group individually. By default it is synchronization - across the whole world. Defaults to None. - stats_mode (str, optional): The statistical mode. Available options - includes ``'default'`` and ``'N'``. Defaults to 'default'. - When ``stats_mode=='default'``, it computes the overall statistics - using those from each worker with equal weight, i.e., the - statistics are synchronized and simply divied by ``group``. This - mode will produce inaccurate statistics when empty tensors occur. - When ``stats_mode=='N'``, it compute the overall statistics using - the total number of batches in each worker ignoring the number of - group, i.e., the statistics are synchronized and then divied by - the total batch ``N``. 
This mode is beneficial when empty tensors - occur during training, as it average the total mean by the real - number of batch. - """ - - def __init__(self, - num_features, - eps=1e-5, - momentum=0.1, - affine=True, - track_running_stats=True, - group=None, - stats_mode='default'): - super(SyncBatchNorm, self).__init__() - self.num_features = num_features - self.eps = eps - self.momentum = momentum - self.affine = affine - self.track_running_stats = track_running_stats - group = dist.group.WORLD if group is None else group - self.group = group - self.group_size = dist.get_world_size(group) - assert stats_mode in ['default', 'N'], \ - f'"stats_mode" only accepts "default" and "N", got "{stats_mode}"' - self.stats_mode = stats_mode - if self.affine: - self.weight = Parameter(torch.Tensor(num_features)) - self.bias = Parameter(torch.Tensor(num_features)) - else: - self.register_parameter('weight', None) - self.register_parameter('bias', None) - if self.track_running_stats: - self.register_buffer('running_mean', torch.zeros(num_features)) - self.register_buffer('running_var', torch.ones(num_features)) - self.register_buffer('num_batches_tracked', - torch.tensor(0, dtype=torch.long)) - else: - self.register_buffer('running_mean', None) - self.register_buffer('running_var', None) - self.register_buffer('num_batches_tracked', None) - self.reset_parameters() - - def reset_running_stats(self): - if self.track_running_stats: - self.running_mean.zero_() - self.running_var.fill_(1) - self.num_batches_tracked.zero_() - - def reset_parameters(self): - self.reset_running_stats() - if self.affine: - self.weight.data.uniform_() # pytorch use ones_() - self.bias.data.zero_() - - def forward(self, input): - if input.dim() < 2: - raise ValueError( - f'expected at least 2D input, got {input.dim()}D input') - if self.momentum is None: - exponential_average_factor = 0.0 - else: - exponential_average_factor = self.momentum - - if self.training and self.track_running_stats: - if self.num_batches_tracked is not None: - self.num_batches_tracked += 1 - if self.momentum is None: # use cumulative moving average - exponential_average_factor = 1.0 / float( - self.num_batches_tracked) - else: # use exponential moving average - exponential_average_factor = self.momentum - - if self.training or not self.track_running_stats: - return SyncBatchNormFunction.apply( - input, self.running_mean, self.running_var, self.weight, - self.bias, exponential_average_factor, self.eps, self.group, - self.group_size, self.stats_mode) - else: - return F.batch_norm(input, self.running_mean, self.running_var, - self.weight, self.bias, False, - exponential_average_factor, self.eps) - - def __repr__(self): - s = self.__class__.__name__ - s += f'({self.num_features}, ' - s += f'eps={self.eps}, ' - s += f'momentum={self.momentum}, ' - s += f'affine={self.affine}, ' - s += f'track_running_stats={self.track_running_stats}, ' - s += f'group_size={self.group_size},' - s += f'stats_mode={self.stats_mode})' - return s diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/saconv.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/saconv.py deleted file mode 100644 index b4ee3978e097fca422805db4e31ae481006d7971..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/saconv.py +++ /dev/null @@ -1,145 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -import torch.nn.functional as F - -from annotator.uniformer.mmcv.cnn import CONV_LAYERS, ConvAWS2d, constant_init -from annotator.uniformer.mmcv.ops.deform_conv import deform_conv2d -from annotator.uniformer.mmcv.utils import TORCH_VERSION, digit_version - - -@CONV_LAYERS.register_module(name='SAC') -class SAConv2d(ConvAWS2d): - """SAC (Switchable Atrous Convolution) - - This is an implementation of SAC in DetectoRS - (https://arxiv.org/pdf/2006.02334.pdf). - - Args: - in_channels (int): Number of channels in the input image - out_channels (int): Number of channels produced by the convolution - kernel_size (int or tuple): Size of the convolving kernel - stride (int or tuple, optional): Stride of the convolution. Default: 1 - padding (int or tuple, optional): Zero-padding added to both sides of - the input. Default: 0 - padding_mode (string, optional): ``'zeros'``, ``'reflect'``, - ``'replicate'`` or ``'circular'``. Default: ``'zeros'`` - dilation (int or tuple, optional): Spacing between kernel elements. - Default: 1 - groups (int, optional): Number of blocked connections from input - channels to output channels. Default: 1 - bias (bool, optional): If ``True``, adds a learnable bias to the - output. Default: ``True`` - use_deform: If ``True``, replace convolution with deformable - convolution. Default: ``False``. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias=True, - use_deform=False): - super().__init__( - in_channels, - out_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - bias=bias) - self.use_deform = use_deform - self.switch = nn.Conv2d( - self.in_channels, 1, kernel_size=1, stride=stride, bias=True) - self.weight_diff = nn.Parameter(torch.Tensor(self.weight.size())) - self.pre_context = nn.Conv2d( - self.in_channels, self.in_channels, kernel_size=1, bias=True) - self.post_context = nn.Conv2d( - self.out_channels, self.out_channels, kernel_size=1, bias=True) - if self.use_deform: - self.offset_s = nn.Conv2d( - self.in_channels, - 18, - kernel_size=3, - padding=1, - stride=stride, - bias=True) - self.offset_l = nn.Conv2d( - self.in_channels, - 18, - kernel_size=3, - padding=1, - stride=stride, - bias=True) - self.init_weights() - - def init_weights(self): - constant_init(self.switch, 0, bias=1) - self.weight_diff.data.zero_() - constant_init(self.pre_context, 0) - constant_init(self.post_context, 0) - if self.use_deform: - constant_init(self.offset_s, 0) - constant_init(self.offset_l, 0) - - def forward(self, x): - # pre-context - avg_x = F.adaptive_avg_pool2d(x, output_size=1) - avg_x = self.pre_context(avg_x) - avg_x = avg_x.expand_as(x) - x = x + avg_x - # switch - avg_x = F.pad(x, pad=(2, 2, 2, 2), mode='reflect') - avg_x = F.avg_pool2d(avg_x, kernel_size=5, stride=1, padding=0) - switch = self.switch(avg_x) - # sac - weight = self._get_weight(self.weight) - zero_bias = torch.zeros( - self.out_channels, device=weight.device, dtype=weight.dtype) - - if self.use_deform: - offset = self.offset_s(avg_x) - out_s = deform_conv2d(x, offset, weight, self.stride, self.padding, - self.dilation, self.groups, 1) - else: - if (TORCH_VERSION == 'parrots' - or digit_version(TORCH_VERSION) < digit_version('1.5.0')): - out_s = super().conv2d_forward(x, weight) - elif digit_version(TORCH_VERSION) >= digit_version('1.8.0'): - # bias is a required argument of _conv_forward in torch 1.8.0 - out_s = super()._conv_forward(x, weight, 
zero_bias) - else: - out_s = super()._conv_forward(x, weight) - ori_p = self.padding - ori_d = self.dilation - self.padding = tuple(3 * p for p in self.padding) - self.dilation = tuple(3 * d for d in self.dilation) - weight = weight + self.weight_diff - if self.use_deform: - offset = self.offset_l(avg_x) - out_l = deform_conv2d(x, offset, weight, self.stride, self.padding, - self.dilation, self.groups, 1) - else: - if (TORCH_VERSION == 'parrots' - or digit_version(TORCH_VERSION) < digit_version('1.5.0')): - out_l = super().conv2d_forward(x, weight) - elif digit_version(TORCH_VERSION) >= digit_version('1.8.0'): - # bias is a required argument of _conv_forward in torch 1.8.0 - out_l = super()._conv_forward(x, weight, zero_bias) - else: - out_l = super()._conv_forward(x, weight) - - out = switch * out_s + (1 - switch) * out_l - self.padding = ori_p - self.dilation = ori_d - # post-context - avg_x = F.adaptive_avg_pool2d(out, output_size=1) - avg_x = self.post_context(avg_x) - avg_x = avg_x.expand_as(out) - out = out + avg_x - return out diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/models/decode_heads/uper_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/models/decode_heads/uper_head.py deleted file mode 100644 index 5c80567803776d55b2dbcc808c198aab34acb660..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/models/decode_heads/uper_head.py +++ /dev/null @@ -1,138 +0,0 @@ -''' - * Copyright (c) 2023 Salesforce, Inc. - * All rights reserved. - * SPDX-License-Identifier: Apache License 2.0 - * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/ - * By Can Qin - * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet - * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala - * Modified from MMCV repo: From https://github.com/open-mmlab/mmcv - * Copyright (c) OpenMMLab. All rights reserved. -''' - -import torch -import torch.nn as nn -from annotator.uniformer.mmcv.cnn import ConvModule - -from annotator.uniformer.mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead -from .psp_head import PPM - - -@HEADS.register_module() -class UPerHead(BaseDecodeHead): - """Unified Perceptual Parsing for Scene Understanding. - - This head is the implementation of `UPerNet - `_. - - Args: - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module applied on the last feature. Default: (1, 2, 3, 6). 
- """ - - def __init__(self, pool_scales=(1, 2, 3, 6), **kwargs): - super(UPerHead, self).__init__( - input_transform='multiple_select', **kwargs) - # PSP Module - self.psp_modules = PPM( - pool_scales, - self.in_channels[-1], - self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - align_corners=self.align_corners) - self.bottleneck = ConvModule( - self.in_channels[-1] + len(pool_scales) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - # FPN Module - self.lateral_convs = nn.ModuleList() - self.fpn_convs = nn.ModuleList() - for in_channels in self.in_channels[:-1]: # skip the top layer - l_conv = ConvModule( - in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - inplace=False) - fpn_conv = ConvModule( - self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - inplace=False) - self.lateral_convs.append(l_conv) - self.fpn_convs.append(fpn_conv) - - self.fpn_bottleneck = ConvModule( - len(self.in_channels) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def psp_forward(self, inputs): - """Forward function of PSP module.""" - x = inputs[-1] - psp_outs = [x] - psp_outs.extend(self.psp_modules(x)) - psp_outs = torch.cat(psp_outs, dim=1) - output = self.bottleneck(psp_outs) - - return output - - def forward(self, inputs): - """Forward function.""" - - inputs = self._transform_inputs(inputs) - - # build laterals - laterals = [ - lateral_conv(inputs[i]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - - laterals.append(self.psp_forward(inputs)) - - # build top-down path - used_backbone_levels = len(laterals) - for i in range(used_backbone_levels - 1, 0, -1): - prev_shape = laterals[i - 1].shape[2:] - laterals[i - 1] += resize( - laterals[i], - size=prev_shape, - mode='bilinear', - align_corners=self.align_corners) - - # build outputs - fpn_outs = [ - self.fpn_convs[i](laterals[i]) - for i in range(used_backbone_levels - 1) - ] - # append psp feature - fpn_outs.append(laterals[-1]) - - for i in range(used_backbone_levels - 1, 0, -1): - fpn_outs[i] = resize( - fpn_outs[i], - size=fpn_outs[0].shape[2:], - mode='bilinear', - align_corners=self.align_corners) - fpn_outs = torch.cat(fpn_outs, dim=1) - output = self.fpn_bottleneck(fpn_outs) - output = self.cls_seg(output) - return output diff --git a/spaces/abhishek/sketch-to-image/app.py b/spaces/abhishek/sketch-to-image/app.py deleted file mode 100644 index 61012eff64a1c720b72a79a055ab065a9498822e..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/app.py +++ /dev/null @@ -1,187 +0,0 @@ -""" - * Copyright (c) 2023 Salesforce, Inc. - * All rights reserved. 
- * SPDX-License-Identifier: Apache License 2.0 - * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/ - * By Can Qin - * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet - * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala -""" - -import config - -import cv2 -import einops -import gradio as gr -import numpy as np -import torch -import random -import os - -from annotator.util import resize_image, HWC3 -from utils import create_model -from lib.ddim_hacked import DDIMSampler - -from safetensors.torch import load_file as stload -from collections import OrderedDict -from diffusers import StableDiffusionXLImg2ImgPipeline -from PIL import Image - -refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained( - "stabilityai/stable-diffusion-xl-refiner-1.0", - torch_dtype=torch.float16, -) -refiner.to("cuda") - - -model = create_model("./models/cldm_v15_unicontrol.yaml").cpu() -model_url = "https://huggingface.co/Robert001/UniControl-Model/resolve/main/unicontrol_v1.1.st" - -ckpts_path = "./" -# model_path = os.path.join(ckpts_path, "unicontrol_v1.1.ckpt") -model_path = os.path.join(ckpts_path, "unicontrol_v1.1.st") - -if not os.path.exists(model_path): - from basicsr.utils.download_util import load_file_from_url - - load_file_from_url(model_url, model_dir=ckpts_path) - -model_dict = OrderedDict(stload(model_path, device="cpu")) -model.load_state_dict(model_dict, strict=False) -# model.load_state_dict(load_state_dict(model_path, location='cuda'), strict=False) -model = model.cuda() -ddim_sampler = DDIMSampler(model) - - -def process_sketch( - input_image, - prompt, - a_prompt, - n_prompt, - num_samples, - ddim_steps, - guess_mode, - strength, - scale, - seed, - eta, -): - with torch.no_grad(): - input_image = np.array(input_image) - # print all unique values of array - img = 255 - input_image - H, W, C = img.shape - - detected_map = cv2.resize(img, (W, H), interpolation=cv2.INTER_LINEAR) - - control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, "b h w c -> b c h w").clone() - - if seed == -1: - seed = random.randint(0, 65535) - # seed_everything(seed) - - if config.save_memory: - model.low_vram_shift(is_diffusing=False) - - task_dic = {} - task_dic["name"] = "control_hedsketch" - task_instruction = "sketch to image" - task_dic["feature"] = model.get_learned_conditioning(task_instruction)[:, :1, :] - - cond = { - "c_concat": [control], - "c_crossattn": [model.get_learned_conditioning([prompt + ", " + a_prompt] * num_samples)], - "task": task_dic, - } - - un_cond = { - "c_concat": [control * 0] if guess_mode else [control], - "c_crossattn": [model.get_learned_conditioning([n_prompt] * num_samples)], - } - shape = (4, H // 8, W // 8) - - if config.save_memory: - model.low_vram_shift(is_diffusing=True) - model.control_scales = ( - [strength * (0.825 ** float(12 - i)) for i in range(13)] if guess_mode else ([strength] * 13) - ) - samples, intermediates = ddim_sampler.sample( - ddim_steps, - num_samples, - shape, - cond, - verbose=False, - eta=eta, - unconditional_guidance_scale=scale, - unconditional_conditioning=un_cond, - ) - - if config.save_memory: - model.low_vram_shift(is_diffusing=False) - - x_samples = model.decode_first_stage(samples) - x_samples = ( - (einops.rearrange(x_samples, "b c h w -> b h w c") * 127.5 + 127.5) - .cpu() - .numpy() - .clip(0, 255) - .astype(np.uint8) - ) - - result_image = 
[x_samples[i] for i in range(num_samples)][0] - result_image = Image.fromarray(result_image) - generator = torch.Generator("cuda").manual_seed(seed) - results = [result_image] + [refiner(prompt=prompt, generator=generator, image=result_image).images[0]] - - return results - - -demo = gr.Blocks() -with demo: - gr.Markdown("## Sketch to Image") - gr.Markdown( - "This demo is based on [UniControl: ONE compact model for ALL the visual-condition-to-image generation](https://huggingface.co/spaces/Robert001/UniControl-Demo)" - ) - # input_image = gr.Image(source="upload", type="numpy", tool="sketch") - with gr.Row(): - input_image = gr.Sketchpad( - shape=(512, 512), tool="pencil", brush_radius=6, type="pil", image_mode="RGB" - ).style(height=512, width=512) - # input_image = gr.Image(source="upload", type="numpy") - result_gallery = gr.Gallery(label="Output", show_label=False, elem_id="gallery").style( - grid=2, height=512, width=512 - ) - prompt = gr.Textbox(label="Prompt") - run_button = gr.Button(label="Run") - with gr.Accordion("Advanced options", open=False): - num_samples = gr.Slider(label="Images", minimum=1, maximum=12, value=1, step=1) - strength = gr.Slider(label="Control Strength", minimum=0.0, maximum=2.0, value=1.0, step=0.01) - guess_mode = gr.Checkbox(label="Guess Mode", value=False) - detect_resolution = gr.Slider(label="HED Resolution", minimum=128, maximum=1024, value=512, step=1) - ddim_steps = gr.Slider(label="Steps", minimum=1, maximum=100, value=35, step=1) - scale = gr.Slider(label="Guidance Scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1) - seed = gr.Slider(label="Seed", minimum=-1, maximum=2147483647, step=1, randomize=True) - eta = gr.Number(label="eta (DDIM)", value=0.0) - a_prompt = gr.Textbox(label="Added Prompt", value="best quality, extremely detailed") - n_prompt = gr.Textbox( - label="Negative Prompt", - value="longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality", - ) - ips = [ - input_image, - prompt, - a_prompt, - n_prompt, - num_samples, - ddim_steps, - guess_mode, - strength, - scale, - seed, - eta, - ] - run_button.click(fn=process_sketch, inputs=ips, outputs=[result_gallery]) - -demo.launch(server_name="0.0.0.0") diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/media/drivers/openal/interface.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/media/drivers/openal/interface.py deleted file mode 100644 index 15a650373f8df3a83b0aae56ef05c670b7d42b5a..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/media/drivers/openal/interface.py +++ /dev/null @@ -1,500 +0,0 @@ -import ctypes -import weakref -from collections import namedtuple - -from . import lib_openal as al -from . 
import lib_alc as alc - -from pyglet.util import debug_print -from pyglet.media.exceptions import MediaException - -_debug = debug_print('debug_media') - - -class OpenALException(MediaException): - def __init__(self, message=None, error_code=None, error_string=None): - self.message = message - self.error_code = error_code - self.error_string = error_string - - def __str__(self): - if self.error_code is None: - return f'OpenAL Exception: {self.message}' - else: - return f'OpenAL Exception [{self.error_code}: {self.error_string}]: {self.message}' - - -class OpenALObject: - """Base class for OpenAL objects.""" - @classmethod - def _check_error(cls, message=None): - """Check whether there is an OpenAL error and raise exception if present.""" - error_code = al.alGetError() - if error_code != 0: - error_string = al.alGetString(error_code) - # TODO: Fix return type in generated code? - error_string = ctypes.cast(error_string, ctypes.c_char_p) - raise OpenALException(message=message, - error_code=error_code, - error_string=str(error_string.value)) - - @classmethod - def _raise_error(cls, message): - """Raise an exception. Try to check for OpenAL error code too.""" - cls._check_error(message) - raise OpenALException(message) - - -class OpenALDevice(OpenALObject): - """OpenAL audio device.""" - def __init__(self, device_name=None): - self._al_device = alc.alcOpenDevice(device_name) - self.check_context_error('Failed to open device.') - if self._al_device is None: - raise OpenALException('No OpenAL devices.') - - def __del__(self): - assert _debug("Delete interface.OpenALDevice") - self.delete() - - def delete(self): - if self._al_device is not None: - if alc.alcCloseDevice(self._al_device) == alc.ALC_FALSE: - self._raise_context_error('Failed to close device.') - self._al_device = None - - @property - def is_ready(self): - return self._al_device is not None - - def create_context(self): - al_context = alc.alcCreateContext(self._al_device, None) - self.check_context_error('Failed to create context') - return OpenALContext(self, al_context) - - def get_version(self): - major = alc.ALCint() - minor = alc.ALCint() - alc.alcGetIntegerv(self._al_device, alc.ALC_MAJOR_VERSION, - ctypes.sizeof(major), major) - self.check_context_error('Failed to get version.') - alc.alcGetIntegerv(self._al_device, alc.ALC_MINOR_VERSION, - ctypes.sizeof(minor), minor) - self.check_context_error('Failed to get version.') - return major.value, minor.value - - def get_extensions(self): - extensions = alc.alcGetString(self._al_device, alc.ALC_EXTENSIONS) - self.check_context_error('Failed to get extensions.') - return ctypes.cast(extensions, ctypes.c_char_p).value.decode('ascii').split() - - def check_context_error(self, message=None): - """Check whether there is an OpenAL error and raise exception if present.""" - error_code = alc.alcGetError(self._al_device) - if error_code != 0: - error_string = alc.alcGetString(self._al_device, error_code) - # TODO: Fix return type in generated code? - error_string = ctypes.cast(error_string, ctypes.c_char_p) - raise OpenALException(message=message, - error_code=error_code, - error_string=str(error_string.value)) - - def _raise_context_error(self, message): - """Raise an exception. 
Try to check for OpenAL error code too.""" - self.check_context_error(message) - raise OpenALException(message) - - -class OpenALContext(OpenALObject): - def __init__(self, device, al_context): - self.device = device - self._al_context = al_context - self.make_current() - - def __del__(self): - assert _debug("Delete interface.OpenALContext") - self.delete() - - def delete(self): - if self._al_context is not None: - # TODO: Check if this context is current - alc.alcMakeContextCurrent(None) - self.device.check_context_error('Failed to make context no longer current.') - alc.alcDestroyContext(self._al_context) - self.device.check_context_error('Failed to destroy context.') - self._al_context = None - - def make_current(self): - alc.alcMakeContextCurrent(self._al_context) - self.device.check_context_error('Failed to make context current.') - - def create_source(self): - self.make_current() - return OpenALSource(self) - - -class OpenALSource(OpenALObject): - def __init__(self, context): - self.context = weakref.ref(context) - self.buffer_pool = OpenALBufferPool(self.context) - - self._al_source = al.ALuint() - al.alGenSources(1, self._al_source) - self._check_error('Failed to create source.') - - self._state = None - self._get_state() - - self._owned_buffers = {} - - def __del__(self): - assert _debug("Delete interface.OpenALSource") - self.delete() - - def delete(self): - if self.context() and self._al_source is not None: - # Only delete source if the context still exists - al.alDeleteSources(1, self._al_source) - self._check_error('Failed to delete source.') - self.buffer_pool.clear() - self._al_source = None - - @property - def is_initial(self): - self._get_state() - return self._state == al.AL_INITIAL - - @property - def is_playing(self): - self._get_state() - return self._state == al.AL_PLAYING - - @property - def is_paused(self): - self._get_state() - return self._state == al.AL_PAUSED - - @property - def is_stopped(self): - self._get_state() - return self._state == al.AL_STOPPED - - def _int_source_property(attribute): - return property(lambda self: self._get_int(attribute), - lambda self, value: self._set_int(attribute, value)) - - def _float_source_property(attribute): - return property(lambda self: self._get_float(attribute), - lambda self, value: self._set_float(attribute, value)) - - def _3floats_source_property(attribute): - return property(lambda self: self._get_3floats(attribute), - lambda self, value: self._set_3floats(attribute, value)) - - position = _3floats_source_property(al.AL_POSITION) - velocity = _3floats_source_property(al.AL_VELOCITY) - gain = _float_source_property(al.AL_GAIN) - buffers_queued = _int_source_property(al.AL_BUFFERS_QUEUED) - buffers_processed = _int_source_property(al.AL_BUFFERS_PROCESSED) - min_gain = _float_source_property(al.AL_MIN_GAIN) - max_gain = _float_source_property(al.AL_MAX_GAIN) - reference_distance = _float_source_property(al.AL_REFERENCE_DISTANCE) - rolloff_factor = _float_source_property(al.AL_ROLLOFF_FACTOR) - pitch = _float_source_property(al.AL_PITCH) - max_distance = _float_source_property(al.AL_MAX_DISTANCE) - direction = _3floats_source_property(al.AL_DIRECTION) - cone_inner_angle = _float_source_property(al.AL_CONE_INNER_ANGLE) - cone_outer_angle = _float_source_property(al.AL_CONE_OUTER_ANGLE) - cone_outer_gain = _float_source_property(al.AL_CONE_OUTER_GAIN) - sec_offset = _float_source_property(al.AL_SEC_OFFSET) - sample_offset = _float_source_property(al.AL_SAMPLE_OFFSET) - byte_offset = 
_float_source_property(al.AL_BYTE_OFFSET) - - del _int_source_property - del _float_source_property - del _3floats_source_property - - def play(self): - al.alSourcePlay(self._al_source) - self._check_error('Failed to play source.') - - def pause(self): - al.alSourcePause(self._al_source) - self._check_error('Failed to pause source.') - - def stop(self): - al.alSourceStop(self._al_source) - self._check_error('Failed to stop source.') - - def clear(self): - self._set_int(al.AL_BUFFER, al.AL_NONE) - while self._owned_buffers: - buf_name, buf = self._owned_buffers.popitem() - self.buffer_pool.unqueue_buffer(buf) - - def get_buffer(self): - return self.buffer_pool.get_buffer() - - def queue_buffer(self, buf): - assert buf.is_valid - al.alSourceQueueBuffers(self._al_source, 1, ctypes.byref(buf.al_buffer)) - self._check_error('Failed to queue buffer.') - self._add_buffer(buf) - - def unqueue_buffers(self): - processed = self.buffers_processed - assert _debug("Processed buffer count: {}".format(processed)) - if processed > 0: - buffers = (al.ALuint * processed)() - al.alSourceUnqueueBuffers(self._al_source, len(buffers), buffers) - self._check_error('Failed to unqueue buffers from source.') - for buf in buffers: - self.buffer_pool.unqueue_buffer(self._pop_buffer(buf)) - return processed - - def _get_state(self): - if self._al_source is not None: - self._state = self._get_int(al.AL_SOURCE_STATE) - - def _get_int(self, key): - assert self._al_source is not None - al_int = al.ALint() - al.alGetSourcei(self._al_source, key, al_int) - self._check_error('Failed to get value') - return al_int.value - - def _set_int(self, key, value): - assert self._al_source is not None - al.alSourcei(self._al_source, key, int(value)) - self._check_error('Failed to set value.') - - def _get_float(self, key): - assert self._al_source is not None - al_float = al.ALfloat() - al.alGetSourcef(self._al_source, key, al_float) - self._check_error('Failed to get value') - return al_float.value - - def _set_float(self, key, value): - assert self._al_source is not None - al.alSourcef(self._al_source, key, float(value)) - self._check_error('Failed to set value.') - - def _get_3floats(self, key): - assert self._al_source is not None - x = al.ALfloat() - y = al.ALfloat() - z = al.ALfloat() - al.alGetSource3f(self._al_source, key, x, y, z) - self._check_error('Failed to get value') - return x.value, y.value, z.value - - def _set_3floats(self, key, values): - assert self._al_source is not None - x, y, z = map(float, values) - al.alSource3f(self._al_source, key, x, y, z) - self._check_error('Failed to set value.') - - def _add_buffer(self, buf): - self._owned_buffers[buf.name] = buf - - def _pop_buffer(self, al_buffer): - buf = self._owned_buffers.pop(al_buffer, None) - assert buf is not None - return buf - - -OpenALOrientation = namedtuple("OpenALOrientation", ['at', 'up']) - - -class OpenALListener(OpenALObject): - @property - def position(self): - return self._get_3floats(al.AL_POSITION) - - @position.setter - def position(self, values): - self._set_3floats(al.AL_POSITION, values) - - @property - def velocity(self): - return self._get_3floats(al.AL_VELOCITY) - - @velocity.setter - def velocity(self, values): - self._set_3floats(al.AL_VELOCITY, values) - - @property - def gain(self): - return self._get_float(al.AL_GAIN) - - @gain.setter - def gain(self, value): - self._set_float(al.AL_GAIN, value) - - @property - def orientation(self): - values = self._get_float_vector(al.AL_ORIENTATION, 6) - return OpenALOrientation(values[0:3], 
values[3:6]) - - @orientation.setter - def orientation(self, values): - if len(values) == 2: - actual_values = values[0] + values[1] - elif len(values) == 6: - actual_values = values - else: - actual_values = [] - if len(actual_values) != 6: - raise ValueError("Need 2 tuples of 3 or 1 tuple of 6.") - self._set_float_vector(al.AL_ORIENTATION, actual_values) - - def _get_float(self, key): - al_float = al.ALfloat() - al.alGetListenerf(key, al_float) - self._check_error('Failed to get value') - return al_float.value - - def _set_float(self, key, value): - al.alListenerf(key, float(value)) - self._check_error('Failed to set value.') - - def _get_3floats(self, key): - x = al.ALfloat() - y = al.ALfloat() - z = al.ALfloat() - al.alGetListener3f(key, x, y, z) - self._check_error('Failed to get value') - return x.value, y.value, z.value - - def _set_3floats(self, key, values): - x, y, z = map(float, values) - al.alListener3f(key, x, y, z) - self._check_error('Failed to set value.') - - def _get_float_vector(self, key, count): - al_float_vector = (al.ALfloat * count)() - al.alGetListenerfv(key, al_float_vector) - self._check_error('Failed to get value') - return [x for x in al_float_vector] - - def _set_float_vector(self, key, values): - al_float_vector = (al.ALfloat * len(values))(*values) - al.alListenerfv(key, al_float_vector) - self._check_error('Failed to set value.') - - -class OpenALBuffer(OpenALObject): - _format_map = { - (1, 8): al.AL_FORMAT_MONO8, - (1, 16): al.AL_FORMAT_MONO16, - (2, 8): al.AL_FORMAT_STEREO8, - (2, 16): al.AL_FORMAT_STEREO16, - } - - def __init__(self, al_buffer, context): - self._al_buffer = al_buffer - self.context = context - assert self.is_valid - - def __del__(self): - assert _debug("Delete interface.OpenALBuffer") - self.delete() - - @property - def is_valid(self): - self._check_error('Before validate buffer.') - if self._al_buffer is None: - return False - valid = bool(al.alIsBuffer(self._al_buffer)) - if not valid: - # Clear possible error due to invalid buffer - al.alGetError() - return valid - - @property - def al_buffer(self): - assert self.is_valid - return self._al_buffer - - @property - def name(self): - assert self.is_valid - return self._al_buffer.value - - def delete(self): - if self._al_buffer is not None and self.context() and self.is_valid: - al.alDeleteBuffers(1, ctypes.byref(self._al_buffer)) - self._check_error('Error deleting buffer.') - self._al_buffer = None - - def data(self, audio_data, audio_format, length=None): - assert self.is_valid - length = length or audio_data.length - - try: - al_format = self._format_map[(audio_format.channels, audio_format.sample_size)] - except KeyError: - raise MediaException(f"OpenAL does not support '{audio_format.sample_size}bit' audio.") - - al.alBufferData(self._al_buffer, - al_format, - audio_data.data, - length, - audio_format.sample_rate) - self._check_error('Failed to add data to buffer.') - - -class OpenALBufferPool(OpenALObject): - """At least Mac OS X doesn't free buffers when a source is deleted; it just - detaches them from the source. So keep our own recycled queue. 
- """ - def __init__(self, context): - self.context = context - self._buffers = [] # list of free buffer names - - def __del__(self): - assert _debug("Delete interface.OpenALBufferPool") - self.clear() - - def __len__(self): - return len(self._buffers) - - def clear(self): - while self._buffers: - self._buffers.pop().delete() - - def get_buffer(self): - """Convenience for returning one buffer name""" - return self.get_buffers(1)[0] - - def get_buffers(self, number): - """Returns an array containing `number` buffer names. The returned list must - not be modified in any way, and may get changed by subsequent calls to - get_buffers. - """ - buffers = [] - while number > 0: - if self._buffers: - b = self._buffers.pop() - else: - b = self._create_buffer() - if b.is_valid: - # Protect against implementations that DO free buffers - # when they delete a source - carry on. - buffers.append(b) - number -= 1 - - return buffers - - def unqueue_buffer(self, buf): - """A buffer has finished playing, free it.""" - if buf.is_valid: - self._buffers.append(buf) - - def _create_buffer(self): - """Create a new buffer.""" - al_buffer = al.ALuint() - al.alGenBuffers(1, al_buffer) - self._check_error('Error allocating buffer.') - return OpenALBuffer(al_buffer, self.context) diff --git a/spaces/adorkin/ZeroShotClassificationEnRu/README.md b/spaces/adorkin/ZeroShotClassificationEnRu/README.md deleted file mode 100644 index 1c105e25e259d5a1b1b203916d193e1e370c1de6..0000000000000000000000000000000000000000 --- a/spaces/adorkin/ZeroShotClassificationEnRu/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Bilingual Zero Shot Classification -emoji: 🔍 -colorFrom: purple -colorTo: blue -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. \ No newline at end of file diff --git a/spaces/ahdsoft/persian-keyphrase-extraction/Dockerfile b/spaces/ahdsoft/persian-keyphrase-extraction/Dockerfile deleted file mode 100644 index 6ee0881272882123d9568c2ea620b658d9dab16a..0000000000000000000000000000000000000000 --- a/spaces/ahdsoft/persian-keyphrase-extraction/Dockerfile +++ /dev/null @@ -1,26 +0,0 @@ -FROM python:3.9 - -RUN mkdir /app -WORKDIR /app - - -# download model and put in trained_model folder -# RUN wget https://drive.ahdsoft.dev/s/xp5Mb7bQ34Z7BRX/download/trained_model_10000.pt -# RUN mkdir trained_model -# RUN mv trained_model_10000.pt trained_model/ - -# download packages -COPY requirements.txt . - -# ENV HTTP_PROXY http://172.17.0.1:10805 -# ENV HTTPS_PROXY http://172.17.0.1:10805 -# ENV http_proxy http://172.17.0.1:10805 -# ENV https_proxy http://172.17.0.1:10805 - -RUN pip install git+https://github.com/mohammadkarrabi/NERDA.git -RUN pip install -r requirements.txt -RUN pip install sentence_transformers - -COPY . . 
- -ENTRYPOINT ["streamlit", "run", "app.py", "--server.port=7201", "--server.address=0.0.0.0", "--client.showErrorDetails=false"] diff --git a/spaces/akhaliq/Real-Time-Voice-Cloning/vocoder/vocoder_dataset.py b/spaces/akhaliq/Real-Time-Voice-Cloning/vocoder/vocoder_dataset.py deleted file mode 100644 index 9eae1b5f20117feef0a06e264a99b3c0c6143bac..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Real-Time-Voice-Cloning/vocoder/vocoder_dataset.py +++ /dev/null @@ -1,84 +0,0 @@ -from torch.utils.data import Dataset -from pathlib import Path -from vocoder import audio -import vocoder.hparams as hp -import numpy as np -import torch - - -class VocoderDataset(Dataset): - def __init__(self, metadata_fpath: Path, mel_dir: Path, wav_dir: Path): - print("Using inputs from:\n\t%s\n\t%s\n\t%s" % (metadata_fpath, mel_dir, wav_dir)) - - with metadata_fpath.open("r") as metadata_file: - metadata = [line.split("|") for line in metadata_file] - - gta_fnames = [x[1] for x in metadata if int(x[4])] - gta_fpaths = [mel_dir.joinpath(fname) for fname in gta_fnames] - wav_fnames = [x[0] for x in metadata if int(x[4])] - wav_fpaths = [wav_dir.joinpath(fname) for fname in wav_fnames] - self.samples_fpaths = list(zip(gta_fpaths, wav_fpaths)) - - print("Found %d samples" % len(self.samples_fpaths)) - - def __getitem__(self, index): - mel_path, wav_path = self.samples_fpaths[index] - - # Load the mel spectrogram and adjust its range to [-1, 1] - mel = np.load(mel_path).T.astype(np.float32) / hp.mel_max_abs_value - - # Load the wav - wav = np.load(wav_path) - if hp.apply_preemphasis: - wav = audio.pre_emphasis(wav) - wav = np.clip(wav, -1, 1) - - # Fix for missing padding # TODO: settle on whether this is any useful - r_pad = (len(wav) // hp.hop_length + 1) * hp.hop_length - len(wav) - wav = np.pad(wav, (0, r_pad), mode='constant') - assert len(wav) >= mel.shape[1] * hp.hop_length - wav = wav[:mel.shape[1] * hp.hop_length] - assert len(wav) % hp.hop_length == 0 - - # Quantize the wav - if hp.voc_mode == 'RAW': - if hp.mu_law: - quant = audio.encode_mu_law(wav, mu=2 ** hp.bits) - else: - quant = audio.float_2_label(wav, bits=hp.bits) - elif hp.voc_mode == 'MOL': - quant = audio.float_2_label(wav, bits=16) - - return mel.astype(np.float32), quant.astype(np.int64) - - def __len__(self): - return len(self.samples_fpaths) - - -def collate_vocoder(batch): - mel_win = hp.voc_seq_len // hp.hop_length + 2 * hp.voc_pad - max_offsets = [x[0].shape[-1] -2 - (mel_win + 2 * hp.voc_pad) for x in batch] - mel_offsets = [np.random.randint(0, offset) for offset in max_offsets] - sig_offsets = [(offset + hp.voc_pad) * hp.hop_length for offset in mel_offsets] - - mels = [x[0][:, mel_offsets[i]:mel_offsets[i] + mel_win] for i, x in enumerate(batch)] - - labels = [x[1][sig_offsets[i]:sig_offsets[i] + hp.voc_seq_len + 1] for i, x in enumerate(batch)] - - mels = np.stack(mels).astype(np.float32) - labels = np.stack(labels).astype(np.int64) - - mels = torch.tensor(mels) - labels = torch.tensor(labels).long() - - x = labels[:, :hp.voc_seq_len] - y = labels[:, 1:] - - bits = 16 if hp.voc_mode == 'MOL' else hp.bits - - x = audio.label_2_float(x.float(), bits) - - if hp.voc_mode == 'MOL' : - y = audio.label_2_float(y.float(), bits) - - return x, y, mels \ No newline at end of file diff --git a/spaces/akhaliq/lama/saicinpainting/training/losses/constants.py b/spaces/akhaliq/lama/saicinpainting/training/losses/constants.py deleted file mode 100644 index ae3e5e151342232be8e2c2a77fe6fd5798dc2a8c..0000000000000000000000000000000000000000 
--- a/spaces/akhaliq/lama/saicinpainting/training/losses/constants.py +++ /dev/null @@ -1,152 +0,0 @@ -weights = {"ade20k": - [6.34517766497462, - 9.328358208955224, - 11.389521640091116, - 16.10305958132045, - 20.833333333333332, - 22.22222222222222, - 25.125628140703515, - 43.29004329004329, - 50.5050505050505, - 54.6448087431694, - 55.24861878453038, - 60.24096385542168, - 62.5, - 66.2251655629139, - 84.74576271186442, - 90.90909090909092, - 91.74311926605505, - 96.15384615384616, - 96.15384615384616, - 97.08737864077669, - 102.04081632653062, - 135.13513513513513, - 149.2537313432836, - 153.84615384615384, - 163.93442622950818, - 166.66666666666666, - 188.67924528301887, - 192.30769230769232, - 217.3913043478261, - 227.27272727272725, - 227.27272727272725, - 227.27272727272725, - 303.03030303030306, - 322.5806451612903, - 333.3333333333333, - 370.3703703703703, - 384.61538461538464, - 416.6666666666667, - 416.6666666666667, - 434.7826086956522, - 434.7826086956522, - 454.5454545454545, - 454.5454545454545, - 500.0, - 526.3157894736842, - 526.3157894736842, - 555.5555555555555, - 555.5555555555555, - 555.5555555555555, - 555.5555555555555, - 555.5555555555555, - 555.5555555555555, - 555.5555555555555, - 588.2352941176471, - 588.2352941176471, - 588.2352941176471, - 588.2352941176471, - 588.2352941176471, - 666.6666666666666, - 666.6666666666666, - 666.6666666666666, - 666.6666666666666, - 714.2857142857143, - 714.2857142857143, - 714.2857142857143, - 714.2857142857143, - 714.2857142857143, - 769.2307692307693, - 769.2307692307693, - 769.2307692307693, - 833.3333333333334, - 833.3333333333334, - 833.3333333333334, - 833.3333333333334, - 909.090909090909, - 1000.0, - 1111.111111111111, - 1111.111111111111, - 1111.111111111111, - 1111.111111111111, - 1111.111111111111, - 1250.0, - 1250.0, - 1250.0, - 1250.0, - 1250.0, - 1428.5714285714287, - 1428.5714285714287, - 1428.5714285714287, - 1428.5714285714287, - 1428.5714285714287, - 1428.5714285714287, - 1428.5714285714287, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 5000.0, - 5000.0, - 5000.0] -} \ No newline at end of file diff --git a/spaces/aliabid94/AutoGPT/tests/test_token_counter.py b/spaces/aliabid94/AutoGPT/tests/test_token_counter.py deleted file mode 100644 index 6d7ae016b2f823123b0b69b2eeb3eab50d94f00f..0000000000000000000000000000000000000000 --- a/spaces/aliabid94/AutoGPT/tests/test_token_counter.py +++ /dev/null @@ -1,63 +0,0 @@ -import unittest - -import tests.context -from autogpt.token_counter import count_message_tokens, count_string_tokens - - -class TestTokenCounter(unittest.TestCase): - def test_count_message_tokens(self): - messages = [ - {"role": "user", "content": "Hello"}, - {"role": "assistant", "content": "Hi there!"}, - ] - 
self.assertEqual(count_message_tokens(messages), 17) - - def test_count_message_tokens_with_name(self): - messages = [ - {"role": "user", "content": "Hello", "name": "John"}, - {"role": "assistant", "content": "Hi there!"}, - ] - self.assertEqual(count_message_tokens(messages), 17) - - def test_count_message_tokens_empty_input(self): - self.assertEqual(count_message_tokens([]), 3) - - def test_count_message_tokens_invalid_model(self): - messages = [ - {"role": "user", "content": "Hello"}, - {"role": "assistant", "content": "Hi there!"}, - ] - with self.assertRaises(KeyError): - count_message_tokens(messages, model="invalid_model") - - def test_count_message_tokens_gpt_4(self): - messages = [ - {"role": "user", "content": "Hello"}, - {"role": "assistant", "content": "Hi there!"}, - ] - self.assertEqual(count_message_tokens(messages, model="gpt-4-0314"), 15) - - def test_count_string_tokens(self): - string = "Hello, world!" - self.assertEqual( - count_string_tokens(string, model_name="gpt-3.5-turbo-0301"), 4 - ) - - def test_count_string_tokens_empty_input(self): - self.assertEqual(count_string_tokens("", model_name="gpt-3.5-turbo-0301"), 0) - - def test_count_message_tokens_invalid_model(self): - messages = [ - {"role": "user", "content": "Hello"}, - {"role": "assistant", "content": "Hi there!"}, - ] - with self.assertRaises(NotImplementedError): - count_message_tokens(messages, model="invalid_model") - - def test_count_string_tokens_gpt_4(self): - string = "Hello, world!" - self.assertEqual(count_string_tokens(string, model_name="gpt-4-0314"), 4) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/allandclive/Uganda_MMS/uroman/lib/JSON/backportPP/Compat5005.pm b/spaces/allandclive/Uganda_MMS/uroman/lib/JSON/backportPP/Compat5005.pm deleted file mode 100644 index 139990edff0a28474e53f882d4c4efeb2ad7d701..0000000000000000000000000000000000000000 --- a/spaces/allandclive/Uganda_MMS/uroman/lib/JSON/backportPP/Compat5005.pm +++ /dev/null @@ -1,131 +0,0 @@ -package # This is JSON::backportPP - JSON::backportPP5005; - -use 5.005; -use strict; - -my @properties; - -$JSON::PP5005::VERSION = '1.10'; - -BEGIN { - - sub utf8::is_utf8 { - 0; # It is considered that UTF8 flag off for Perl 5.005. - } - - sub utf8::upgrade { - } - - sub utf8::downgrade { - 1; # must always return true. - } - - sub utf8::encode { - } - - sub utf8::decode { - } - - *JSON::PP::JSON_PP_encode_ascii = \&_encode_ascii; - *JSON::PP::JSON_PP_encode_latin1 = \&_encode_latin1; - *JSON::PP::JSON_PP_decode_surrogates = \&_decode_surrogates; - *JSON::PP::JSON_PP_decode_unicode = \&_decode_unicode; - - # missing in B module. - sub B::SVp_IOK () { 0x01000000; } - sub B::SVp_NOK () { 0x02000000; } - sub B::SVp_POK () { 0x04000000; } - - $INC{'bytes.pm'} = 1; # dummy -} - - - -sub _encode_ascii { - join('', map { $_ <= 127 ? 
chr($_) : sprintf('\u%04x', $_) } unpack('C*', $_[0]) ); -} - - -sub _encode_latin1 { - join('', map { chr($_) } unpack('C*', $_[0]) ); -} - - -sub _decode_surrogates { # from http://homepage1.nifty.com/nomenclator/unicode/ucs_utf.htm - my $uni = 0x10000 + (hex($_[0]) - 0xD800) * 0x400 + (hex($_[1]) - 0xDC00); # from perlunicode - my $bit = unpack('B32', pack('N', $uni)); - - if ( $bit =~ /^00000000000(...)(......)(......)(......)$/ ) { - my ($w, $x, $y, $z) = ($1, $2, $3, $4); - return pack('B*', sprintf('11110%s10%s10%s10%s', $w, $x, $y, $z)); - } - else { - Carp::croak("Invalid surrogate pair"); - } -} - - -sub _decode_unicode { - my ($u) = @_; - my ($utf8bit); - - if ( $u =~ /^00([89a-f][0-9a-f])$/i ) { # 0x80-0xff - return pack( 'H2', $1 ); - } - - my $bit = unpack("B*", pack("H*", $u)); - - if ( $bit =~ /^00000(.....)(......)$/ ) { - $utf8bit = sprintf('110%s10%s', $1, $2); - } - elsif ( $bit =~ /^(....)(......)(......)$/ ) { - $utf8bit = sprintf('1110%s10%s10%s', $1, $2, $3); - } - else { - Carp::croak("Invalid escaped unicode"); - } - - return pack('B*', $utf8bit); -} - - -sub JSON::PP::incr_text { - $_[0]->{_incr_parser} ||= JSON::PP::IncrParser->new; - - if ( $_[0]->{_incr_parser}->{incr_parsing} ) { - Carp::croak("incr_text can not be called when the incremental parser already started parsing"); - } - - $_[0]->{_incr_parser}->{incr_text} = $_[1] if ( @_ > 1 ); - $_[0]->{_incr_parser}->{incr_text}; -} - - -1; -__END__ - -=pod - -=head1 NAME - -JSON::PP5005 - Helper module in using JSON::PP in Perl 5.005 - -=head1 DESCRIPTION - -JSON::PP calls internally. - -=head1 AUTHOR - -Makamaka Hannyaharamitu, Emakamaka[at]cpan.orgE - - -=head1 COPYRIGHT AND LICENSE - -Copyright 2007-2012 by Makamaka Hannyaharamitu - -This library is free software; you can redistribute it and/or modify -it under the same terms as Perl itself. 
- -=cut - diff --git a/spaces/ambreshrc/Docx_File_Translator/app.py b/spaces/ambreshrc/Docx_File_Translator/app.py deleted file mode 100644 index 7da6f9135140a18589d0538f2279af2b7f5c65d7..0000000000000000000000000000000000000000 --- a/spaces/ambreshrc/Docx_File_Translator/app.py +++ /dev/null @@ -1,123 +0,0 @@ -import streamlit as st -from io import BytesIO -# import gradio as gr -# Def_04 Docx file to translated_Docx file -#from transformers import MarianMTModel, MarianTokenizer -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM -import nltk -from nltk.tokenize import sent_tokenize -from nltk.tokenize import LineTokenizer -nltk.download('punkt') -import math -import torch -from docx import Document -from time import sleep -from stqdm import stqdm - -import docx -def getText(filename): - doc = docx.Document(filename) - fullText = [] - for para in doc.paragraphs: - fullText.append(para.text) - return '\n'.join(fullText) - - - - -# mname = 'Helsinki-NLP/opus-mt-en-hi' -# tokenizer = MarianTokenizer.from_pretrained(mname) -# model = MarianMTModel.from_pretrained(mname) -# model.to(device) - -#@st.cache -def btTranslator(docxfile): - if torch.cuda.is_available(): - dev = "cuda" - else: - dev = "cpu" - device = torch.device(dev) - a=getText(docxfile) - a1=a.split('\n') - bigtext=''' ''' - for a in a1: - bigtext=bigtext+'\n'+a - - files=Document() - - a="Helsinki-NLP/opus-mt-en-ru" - b="Helsinki-NLP/opus-mt-ru-fr" - c="Helsinki-NLP/opus-mt-fr-en" - # d="Helsinki-NLP/opus-mt-es-en" - langs=[a,b,c] - text=bigtext - - for _,lang in zip(stqdm(langs),langs): - st.spinner('Wait for it...') - sleep(0.5) - # mname = '/content/drive/MyDrive/Transformers Models/opus-mt-en-hi-Trans Model' - tokenizer = AutoTokenizer.from_pretrained(lang) - model = AutoModelForSeq2SeqLM.from_pretrained(lang) - model.to(device) - lt = LineTokenizer() - batch_size = 64 - paragraphs = lt.tokenize(bigtext) - translated_paragraphs = [] - - for _, paragraph in zip(stqdm(paragraphs),paragraphs): - st.spinner('Wait for it...') - # ###################################### - sleep(0.5) - - # ###################################### - sentences = sent_tokenize(paragraph) - batches = math.ceil(len(sentences) / batch_size) - translated = [] - for i in range(batches): - sent_batch = sentences[i*batch_size:(i+1)*batch_size] - model_inputs = tokenizer(sent_batch, return_tensors="pt", padding=True, truncation=True, max_length=500).to(device) - with torch.no_grad(): - translated_batch = model.generate(**model_inputs) - translated += translated_batch - translated = [tokenizer.decode(t, skip_special_tokens=True) for t in translated] - translated_paragraphs += [" ".join(translated)] - #files.add_paragraph(translated) - translated_text = "\n".join(translated_paragraphs) - bigtext=translated_text - files.add_paragraph(bigtext) - #files2save=files.save("Translated.docx") - #files.save("Translated.docx") - #binary_output = BytesIO() - #f=files.save(binary_output) - #f2=f.getvalue() - return files - - - #return translated_text -st.title('Translator App') -st.markdown("Translate from Docx file") -st.subheader("File Upload") - -datas=st.file_uploader("Original File") -name=st.text_input('Enter New File Name: ') -#data=getText("C:\Users\Ambresh C\Desktop\Python Files\Translators\Trail Doc of 500 words.docx") -#if datas : - #if st.button(label='Data Process'): -binary_output = BytesIO() -if st.button(label='Translate'): - st.spinner('Waiting...') - btTranslator(datas).save(binary_output) - binary_output.getbuffer() - st.success("Translated") - 
-st.download_button(label='Download Translated File',file_name=(f"{name}_Translated.docx"), data=binary_output.getvalue()) -#files.save(f"{name}_Translated.docx") -#else: - # st.text('Upload File and Start the process') - - -#f4=binary_output(f3) - -#st.sidebar.download_button(label='Download Translated File',file_name='Translated.docx', data=binary_output.getvalue()) -# st.text_area(label="",value=btTranslator(datas),height=100) -# Footer \ No newline at end of file diff --git a/spaces/amitjamadagni/qs-benchmarks/plot_scripts/plot_scripts_mgpu.py b/spaces/amitjamadagni/qs-benchmarks/plot_scripts/plot_scripts_mgpu.py deleted file mode 100644 index 8c09ed269432d5104a29afd520814b0627a4afa2..0000000000000000000000000000000000000000 --- a/spaces/amitjamadagni/qs-benchmarks/plot_scripts/plot_scripts_mgpu.py +++ /dev/null @@ -1,72 +0,0 @@ -# pack_str_list = [] -# import matplotlib.pyplot as plt -# import matplotlib.ticker as ticker -from map_packages_colors_mgpu import * - -def plot_abs_data_n_arr(n_arr, data, pack_str): - if len(n_arr) > len(data): - plt.plot(n_arr[0:len(data)], data, linestyle='-', marker=symbol_dict[pack_str], label=label_dict[pack_str], color=color_dict[pack_str], markersize=5) - elif len(n_arr) < len(data): - plt.plot(n_arr, data[0:len(n_arr)], linestyle='-', marker=symbol_dict[pack_str], label=label_dict[pack_str], color=color_dict[pack_str], markersize=5) - else: - plt.plot(n_arr, data, linestyle='-', marker=symbol_dict[pack_str], label=label_dict[pack_str], color=color_dict[pack_str], markersize=5) - -def plot_comp_data_n_arr(n_arr, data_1, data_2, pack_str): - ratio_arr = [] - if len(data_1) == len(data_2): - for i, elem in enumerate(data_1): - ratio_arr.append(elem/float(data_2[i])) - elif len(data_1) > len(data_2): - for i, elem in enumerate(data_2): - ratio_arr.append(data_1[i]/float(elem)) - # plt.plot(n_arr[0:len(data_2)], ratio_arr, linestyle='-', marker=symbol_dict[pack_str], label=label_dict[pack_str], color=color_dict[pack_str], markersize=5) - elif len(data_2) > len(data_1): - for i, elem in enumerate(data_1): - ratio_arr.append(elem/data_2[i]) - # plt.plot(n_arr[0:len(data_1)], ratio_arr, linestyle='-', marker=symbol_dict[pack_str], label=label_dict[pack_str], color=color_dict[pack_str], markersize=5) - # print(ratio_arr) - if len(n_arr) > len(ratio_arr): - plt.plot(n_arr[0:len(ratio_arr)], ratio_arr, linestyle='-', marker=symbol_dict[pack_str], label=label_dict[pack_str], color=color_dict[pack_str], markersize=5) - elif len(n_arr) < len(ratio_arr): - plt.plot(n_arr, ratio_arr[0:len(n_arr)], linestyle='-', marker=symbol_dict[pack_str], label=label_dict[pack_str], color=color_dict[pack_str], markersize=5) - else: - plt.plot(n_arr, ratio_arr, linestyle='-', marker=symbol_dict[pack_str], label=label_dict[pack_str], color=color_dict[pack_str], markersize=5) - - -def gen_settings(fig, ax, xlabel_str, ylabel_str, log_x_on, log_y_on, xlim_on, xlim_low, xlim_upp, ylim_on, ylim_low, ylim_upp, leg_loc, fn): - - ax.tick_params(direction='in', which='both', bottom=True, top=True, left=True, right=True) - # ax.xaxis.set_major_locator(MaxNLocator(integer=True)) - ax.xaxis.set_major_locator(ticker.AutoLocator()) - ax.xaxis.set_minor_locator(ticker.AutoMinorLocator()) - ax.yaxis.set_major_locator(ticker.AutoLocator()) - ax.yaxis.set_minor_locator(ticker.AutoMinorLocator()) - - if log_x_on: - ax.set_xscale('log') - if log_y_on: - ax.set_yscale('log') - if xlim_on == True: - plt.xlim([xlim_low, xlim_upp]) - if ylim_on == True: - plt.ylim([ylim_low, ylim_upp]) - # 
plt.xlabel(r"N (system size)") - # plt.ylabel(r"Time ($t_{package}$)") - plt.xlabel(xlabel_str) - plt.ylabel(ylabel_str) - if leg_loc== "out": - ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), prop={'size': 8}) - - elif leg_loc == None: - ax.legend(loc=0) - - plt.tight_layout() - fig.set_dpi(100) - if fn == None: - pass - else: - plt.savefig(fn) - plt.show() - # plt.savefig("perf_heisenberg_pure_evolution_single_thread_wallclock_absolute.pdf") - # plt.savefig("perf_heisenberg_pure_evolution_single_thread_wallclock_relative_line.svg", format="svg", dpi=1200) - # plt.show() diff --git a/spaces/aodianyun/stable-diffusion-webui/extensions-builtin/SwinIR/scripts/swinir_model.py b/spaces/aodianyun/stable-diffusion-webui/extensions-builtin/SwinIR/scripts/swinir_model.py deleted file mode 100644 index e8783bca153954afd086536a6dee854ec5e17ba9..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/extensions-builtin/SwinIR/scripts/swinir_model.py +++ /dev/null @@ -1,178 +0,0 @@ -import contextlib -import os - -import numpy as np -import torch -from PIL import Image -from basicsr.utils.download_util import load_file_from_url -from tqdm import tqdm - -from modules import modelloader, devices, script_callbacks, shared -from modules.shared import cmd_opts, opts, state -from swinir_model_arch import SwinIR as net -from swinir_model_arch_v2 import Swin2SR as net2 -from modules.upscaler import Upscaler, UpscalerData - - -device_swinir = devices.get_device_for('swinir') - - -class UpscalerSwinIR(Upscaler): - def __init__(self, dirname): - self.name = "SwinIR" - self.model_url = "https://github.com/JingyunLiang/SwinIR/releases/download/v0.0" \ - "/003_realSR_BSRGAN_DFOWMFC_s64w8_SwinIR" \ - "-L_x4_GAN.pth " - self.model_name = "SwinIR 4x" - self.user_path = dirname - super().__init__() - scalers = [] - model_files = self.find_models(ext_filter=[".pt", ".pth"]) - for model in model_files: - if "http" in model: - name = self.model_name - else: - name = modelloader.friendly_name(model) - model_data = UpscalerData(name, model, self) - scalers.append(model_data) - self.scalers = scalers - - def do_upscale(self, img, model_file): - model = self.load_model(model_file) - if model is None: - return img - model = model.to(device_swinir, dtype=devices.dtype) - img = upscale(img, model) - try: - torch.cuda.empty_cache() - except: - pass - return img - - def load_model(self, path, scale=4): - if "http" in path: - dl_name = "%s%s" % (self.model_name.replace(" ", "_"), ".pth") - filename = load_file_from_url(url=path, model_dir=self.model_path, file_name=dl_name, progress=True) - else: - filename = path - if filename is None or not os.path.exists(filename): - return None - if filename.endswith(".v2.pth"): - model = net2( - upscale=scale, - in_chans=3, - img_size=64, - window_size=8, - img_range=1.0, - depths=[6, 6, 6, 6, 6, 6], - embed_dim=180, - num_heads=[6, 6, 6, 6, 6, 6], - mlp_ratio=2, - upsampler="nearest+conv", - resi_connection="1conv", - ) - params = None - else: - model = net( - upscale=scale, - in_chans=3, - img_size=64, - window_size=8, - img_range=1.0, - depths=[6, 6, 6, 6, 6, 6, 6, 6, 6], - embed_dim=240, - num_heads=[8, 8, 8, 8, 8, 8, 8, 8, 8], - mlp_ratio=2, - upsampler="nearest+conv", - resi_connection="3conv", - ) - params = "params_ema" - - pretrained_model = torch.load(filename) - if params is not None: - model.load_state_dict(pretrained_model[params], strict=True) - else: - model.load_state_dict(pretrained_model, strict=True) - return model - - -def upscale( - 
img, - model, - tile=None, - tile_overlap=None, - window_size=8, - scale=4, -): - tile = tile or opts.SWIN_tile - tile_overlap = tile_overlap or opts.SWIN_tile_overlap - - - img = np.array(img) - img = img[:, :, ::-1] - img = np.moveaxis(img, 2, 0) / 255 - img = torch.from_numpy(img).float() - img = img.unsqueeze(0).to(device_swinir, dtype=devices.dtype) - with torch.no_grad(), devices.autocast(): - _, _, h_old, w_old = img.size() - h_pad = (h_old // window_size + 1) * window_size - h_old - w_pad = (w_old // window_size + 1) * window_size - w_old - img = torch.cat([img, torch.flip(img, [2])], 2)[:, :, : h_old + h_pad, :] - img = torch.cat([img, torch.flip(img, [3])], 3)[:, :, :, : w_old + w_pad] - output = inference(img, model, tile, tile_overlap, window_size, scale) - output = output[..., : h_old * scale, : w_old * scale] - output = output.data.squeeze().float().cpu().clamp_(0, 1).numpy() - if output.ndim == 3: - output = np.transpose( - output[[2, 1, 0], :, :], (1, 2, 0) - ) # CHW-RGB to HCW-BGR - output = (output * 255.0).round().astype(np.uint8) # float32 to uint8 - return Image.fromarray(output, "RGB") - - -def inference(img, model, tile, tile_overlap, window_size, scale): - # test the image tile by tile - b, c, h, w = img.size() - tile = min(tile, h, w) - assert tile % window_size == 0, "tile size should be a multiple of window_size" - sf = scale - - stride = tile - tile_overlap - h_idx_list = list(range(0, h - tile, stride)) + [h - tile] - w_idx_list = list(range(0, w - tile, stride)) + [w - tile] - E = torch.zeros(b, c, h * sf, w * sf, dtype=devices.dtype, device=device_swinir).type_as(img) - W = torch.zeros_like(E, dtype=devices.dtype, device=device_swinir) - - with tqdm(total=len(h_idx_list) * len(w_idx_list), desc="SwinIR tiles") as pbar: - for h_idx in h_idx_list: - if state.interrupted or state.skipped: - break - - for w_idx in w_idx_list: - if state.interrupted or state.skipped: - break - - in_patch = img[..., h_idx: h_idx + tile, w_idx: w_idx + tile] - out_patch = model(in_patch) - out_patch_mask = torch.ones_like(out_patch) - - E[ - ..., h_idx * sf: (h_idx + tile) * sf, w_idx * sf: (w_idx + tile) * sf - ].add_(out_patch) - W[ - ..., h_idx * sf: (h_idx + tile) * sf, w_idx * sf: (w_idx + tile) * sf - ].add_(out_patch_mask) - pbar.update(1) - output = E.div_(W) - - return output - - -def on_ui_settings(): - import gradio as gr - - shared.opts.add_option("SWIN_tile", shared.OptionInfo(192, "Tile size for all SwinIR.", gr.Slider, {"minimum": 16, "maximum": 512, "step": 16}, section=('upscaling', "Upscaling"))) - shared.opts.add_option("SWIN_tile_overlap", shared.OptionInfo(8, "Tile overlap, in pixels for SwinIR. 
Low values = visible seam.", gr.Slider, {"minimum": 0, "maximum": 48, "step": 1}, section=('upscaling', "Upscaling"))) - - -script_callbacks.on_ui_settings(on_ui_settings) diff --git a/spaces/aphenx/bingo/src/pages/api/blob.ts b/spaces/aphenx/bingo/src/pages/api/blob.ts deleted file mode 100644 index fecd48031916b2284b8958892196e0a1ad420421..0000000000000000000000000000000000000000 --- a/spaces/aphenx/bingo/src/pages/api/blob.ts +++ /dev/null @@ -1,40 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { Readable } from 'node:stream' -import { fetch } from '@/lib/isomorphic' - -const API_DOMAIN = 'https://www.bing.com' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - const { bcid } = req.query - - const { headers, body } = await fetch(`${API_DOMAIN}/images/blob?bcid=${bcid}`, - { - method: 'GET', - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referrer-Policy": "origin-when-cross-origin", - }, - }, - ) - - res.writeHead(200, { - 'Content-Length': headers.get('content-length')!, - 'Content-Type': headers.get('content-type')!, - }) - // @ts-ignore - return Readable.fromWeb(body!).pipe(res) - } catch (e) { - console.log('Error', e) - return res.json({ - result: { - value: 'UploadFailed', - message: `${e}` - } - }) - } -} diff --git a/spaces/artificialguybr/video-dubbing/whisper/data/README.md b/spaces/artificialguybr/video-dubbing/whisper/data/README.md deleted file mode 100644 index 3b4aea12df7fe01e0887c7b72e38522f47016356..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/whisper/data/README.md +++ /dev/null @@ -1,118 +0,0 @@ -This directory supplements the paper with more details on how we prepared the data for evaluation, to help replicate our experiments. - -## Short-form English-only datasets - -### LibriSpeech - -We used the test-clean and test-other splits from the [LibriSpeech ASR corpus](https://www.openslr.org/12). - -### TED-LIUM 3 - -We used the test split of [TED-LIUM Release 3](https://www.openslr.org/51/), using the segmented manual transcripts included in the release. - -### Common Voice 5.1 - -We downloaded the English subset of Common Voice Corpus 5.1 from [the official website](https://commonvoice.mozilla.org/en/datasets) - -### Artie - -We used the [Artie bias corpus](https://github.com/artie-inc/artie-bias-corpus). This is a subset of the Common Voice dataset. - -### CallHome & Switchboard - -We used the two corpora from [LDC2002S09](https://catalog.ldc.upenn.edu/LDC2002S09) and [LDC2002T43](https://catalog.ldc.upenn.edu/LDC2002T43) and followed the [eval2000_data_prep.sh](https://github.com/kaldi-asr/kaldi/blob/master/egs/fisher_swbd/s5/local/eval2000_data_prep.sh) script for preprocessing. The `wav.scp` files can be converted to WAV files with the following bash commands: - -```bash -mkdir -p wav -while read name cmd; do - echo $name - echo ${cmd/\|/} wav/$name.wav | bash -done < wav.scp -``` - - -### WSJ - -We used [LDC93S6B](https://catalog.ldc.upenn.edu/LDC93S6B) and [LDC94S13B](https://catalog.ldc.upenn.edu/LDC94S13B) and followed the [s5 recipe](https://github.com/kaldi-asr/kaldi/tree/master/egs/wsj/s5) to preprocess the dataset. - -### CORAAL - -We used the 231 interviews from [CORAAL (v. 
2021.07)](https://oraal.uoregon.edu/coraal) and used the segmentations from [the FairSpeech project](https://github.com/stanford-policylab/asr-disparities/blob/master/input/CORAAL_transcripts.csv). - -### CHiME-6 - -We downloaded the [CHiME-5 dataset](https://spandh.dcs.shef.ac.uk//chime_challenge/CHiME5/download.html) and followed the stage 0 of the [s5_track1 recipe](https://github.com/kaldi-asr/kaldi/tree/master/egs/chime6/s5_track1) to create the CHiME-6 dataset which fixes synchronization. We then used the binaural recordings (`*_P??.wav`) and the corresponding transcripts. - -### AMI-IHM, AMI-SDM1 - -We preprocessed the [AMI Corpus](https://groups.inf.ed.ac.uk/ami/corpus/overview.shtml) by following the stage 0 ad 2 of the [s5b recipe](https://github.com/kaldi-asr/kaldi/tree/master/egs/ami/s5b). - - -## Long-form English-only datasets - -### TED-LIUM 3 - -To create a long-form transcription dataset from the [TED-LIUM3](https://www.openslr.org/51/) dataset, we sliced the audio between the beginning of the first labeled segment and the end of the last labeled segment of each talk, and we used the concatenated text as the label. Below are the timestamps used for slicing each of the 11 TED talks in the test split. - -| Filename | Begin time (s) | End time (s) | -|---------------------|----------------|--------------| -| DanBarber_2010 | 16.09 | 1116.24 | -| JaneMcGonigal_2010 | 15.476 | 1187.61 | -| BillGates_2010 | 15.861 | 1656.94 | -| TomWujec_2010U | 16.26 | 402.17 | -| GaryFlake_2010 | 16.06 | 367.14 | -| EricMead_2009P | 18.434 | 536.44 | -| MichaelSpecter_2010 | 16.11 | 979.312 | -| DanielKahneman_2010 | 15.8 | 1199.44 | -| AimeeMullins_2009P | 17.82 | 1296.59 | -| JamesCameron_2010 | 16.75 | 1010.65 | -| RobertGupta_2010U | 16.8 | 387.03 | - -### Meanwhile - -This dataset consists of 64 segments from The Late Show with Stephen Colbert. The YouTube video ID, start and end timestamps, and the labels can be found in [meanwhile.json](meanwhile.json). The labels are collected from the closed-caption data for each video and corrected with manual inspection. - -### Rev16 - -We use a subset of 16 files from the 30 podcast episodes in [Rev.AI's Podcast Transcription Benchmark](https://www.rev.ai/blog/podcast-transcription-benchmark-part-1/), after finding that there are multiple cases where a significant portion of the audio and the labels did not match, mostly on the parts introducing the sponsors. We selected 16 episodes that do not have this error, whose "file number" are: - - 3 4 9 10 11 14 17 18 20 21 23 24 26 27 29 32 - -### Kincaid46 - -This dataset consists of 46 audio files and the corresponding transcripts compiled in the blog article [Which automatic transcription service is the most accurate - 2018](https://medium.com/descript/which-automatic-transcription-service-is-the-most-accurate-2018-2e859b23ed19) by Jason Kincaid. We used the 46 audio files and reference transcripts from the Airtable widget in the article. - -For the human transcription benchmark in the paper, we use a subset of 25 examples from this data, whose "Ref ID" are: - - 2 4 5 8 9 10 12 13 14 16 19 21 23 25 26 28 29 30 33 35 36 37 42 43 45 - -### Earnings-21, Earnings-22 - -For these datasets, we used the files available in [the speech-datasets repository](https://github.com/revdotcom/speech-datasets), as of their `202206` version. - -### CORAAL - -We used the 231 interviews from [CORAAL (v. 2021.07)](https://oraal.uoregon.edu/coraal) and used the full-length interview files and transcripts. 
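
As an illustration of the long-form preparation described above, the sketch below cuts each TED-LIUM 3 test talk to the begin/end timestamps listed in the table earlier in this section. This is a minimal example, not part of the released pipeline: the `ffmpeg` dependency, the `.sph` input directory, and the output directory are assumptions about a local copy of the corpus and should be adapted as needed.

```python
# Minimal sketch (assumed local setup, not the official pipeline): slice each
# TED-LIUM 3 test talk between the first and last labeled segment using ffmpeg.
import subprocess
from pathlib import Path

# (talk name, begin time in seconds, end time in seconds) -- values from the table above
TALKS = [
    ("DanBarber_2010", 16.09, 1116.24),
    ("JaneMcGonigal_2010", 15.476, 1187.61),
    ("BillGates_2010", 15.861, 1656.94),
    ("TomWujec_2010U", 16.26, 402.17),
    ("GaryFlake_2010", 16.06, 367.14),
    ("EricMead_2009P", 18.434, 536.44),
    ("MichaelSpecter_2010", 16.11, 979.312),
    ("DanielKahneman_2010", 15.8, 1199.44),
    ("AimeeMullins_2009P", 17.82, 1296.59),
    ("JamesCameron_2010", 16.75, 1010.65),
    ("RobertGupta_2010U", 16.8, 387.03),
]

SPH_DIR = Path("TEDLIUM_release-3/legacy/test/sph")  # assumed location of the test .sph files
OUT_DIR = Path("tedlium_longform")                   # assumed output directory
OUT_DIR.mkdir(exist_ok=True)

for name, begin, end in TALKS:
    # Trim to the labeled span and resample to 16 kHz mono WAV.
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", str(SPH_DIR / f"{name}.sph"),
            "-ss", f"{begin}", "-to", f"{end}",
            "-ar", "16000", "-ac", "1",
            str(OUT_DIR / f"{name}.wav"),
        ],
        check=True,
    )
```
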
- - -## Multilingual datasets - -### Multilingual LibriSpeech - -We used the test splits from each language in [the Multilingual LibriSpeech (MLS) corpus](https://www.openslr.org/94/). - -### Fleurs - -We collected audio files and transcripts using the implementation available as [HuggingFace datasets](https://huggingface.co/datasets/google/fleurs/blob/main/fleurs.py). To use as a translation dataset, we matched the numerical utterance IDs to find the corresponding transcript in English. - -### VoxPopuli - -We used the `get_asr_data.py` script from [the official repository](https://github.com/facebookresearch/voxpopuli) to collect the ASR data in 14 languages. - -### Common Voice 9 - -We downloaded the Common Voice Corpus 9 from [the official website](https://commonvoice.mozilla.org/en/datasets) - -### CoVOST 2 - -We collected the `X into English` data collected using [the official repository](https://github.com/facebookresearch/covost). diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/top_k_with_others.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/top_k_with_others.py deleted file mode 100644 index aebec024318ce71bcd68a50462df79550de7d437..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/top_k_with_others.py +++ /dev/null @@ -1,31 +0,0 @@ -""" -Top-K plot with Others ----------------------- -This example shows how to use aggregate, window, and calculate transfromations -to display the top-k directors by average worldwide gross while grouping the -remaining directors as 'All Others'. -""" -# category: case studies -import altair as alt -from vega_datasets import data - -source = data.movies.url - -alt.Chart(source).mark_bar().encode( - x=alt.X("aggregate_gross:Q", aggregate="mean", title=None), - y=alt.Y( - "ranked_director:N", - sort=alt.Sort(op="mean", field="aggregate_gross", order="descending"), - title=None, - ), -).transform_aggregate( - aggregate_gross='mean(Worldwide_Gross)', - groupby=["Director"], -).transform_window( - rank='row_number()', - sort=[alt.SortField("aggregate_gross", order="descending")], -).transform_calculate( - ranked_director="datum.rank < 10 ? datum.Director : 'All Others'" -).properties( - title="Top Directors by Average Worldwide Gross", -) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/abc/_tasks.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/abc/_tasks.py deleted file mode 100644 index 99928a1f5c8d111a021b099d63cf4d9098000b28..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/abc/_tasks.py +++ /dev/null @@ -1,104 +0,0 @@ -import typing -from abc import ABCMeta, abstractmethod -from types import TracebackType -from typing import Any, Callable, Coroutine, Optional, Type, TypeVar -from warnings import warn - -if typing.TYPE_CHECKING: - from anyio._core._tasks import CancelScope - -T_Retval = TypeVar("T_Retval") - - -class TaskStatus(metaclass=ABCMeta): - @abstractmethod - def started(self, value: object = None) -> None: - """ - Signal that the task has started. - - :param value: object passed back to the starter of the task - """ - - -class TaskGroup(metaclass=ABCMeta): - """ - Groups several asynchronous tasks together. 
- - :ivar cancel_scope: the cancel scope inherited by all child tasks - :vartype cancel_scope: CancelScope - """ - - cancel_scope: "CancelScope" - - async def spawn( - self, - func: Callable[..., Coroutine[Any, Any, Any]], - *args: object, - name: object = None - ) -> None: - """ - Start a new task in this task group. - - :param func: a coroutine function - :param args: positional arguments to call the function with - :param name: name of the task, for the purposes of introspection and debugging - - .. deprecated:: 3.0 - Use :meth:`start_soon` instead. If your code needs AnyIO 2 compatibility, you - can keep using this until AnyIO 4. - - """ - warn( - 'spawn() is deprecated -- use start_soon() (without the "await") instead', - DeprecationWarning, - ) - self.start_soon(func, *args, name=name) - - @abstractmethod - def start_soon( - self, - func: Callable[..., Coroutine[Any, Any, Any]], - *args: object, - name: object = None - ) -> None: - """ - Start a new task in this task group. - - :param func: a coroutine function - :param args: positional arguments to call the function with - :param name: name of the task, for the purposes of introspection and debugging - - .. versionadded:: 3.0 - """ - - @abstractmethod - async def start( - self, - func: Callable[..., Coroutine[Any, Any, Any]], - *args: object, - name: object = None - ) -> object: - """ - Start a new task and wait until it signals for readiness. - - :param func: a coroutine function - :param args: positional arguments to call the function with - :param name: name of the task, for the purposes of introspection and debugging - :return: the value passed to ``task_status.started()`` - :raises RuntimeError: if the task finishes without calling ``task_status.started()`` - - .. versionadded:: 3.0 - """ - - @abstractmethod - async def __aenter__(self) -> "TaskGroup": - """Enter the task group context and allow starting new tasks.""" - - @abstractmethod - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> Optional[bool]: - """Exit the task group context waiting for all tasks to finish.""" diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/multi_modality_dataset.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/multi_modality_dataset.py deleted file mode 100644 index 39551a613bbba32030487264e51e22684dbb75a1..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/multi_modality_dataset.py +++ /dev/null @@ -1,266 +0,0 @@ -# Copyright (c) 2021-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. An additional grant of patent rights -# can be found in the PATENTS file in the same directory. - -import logging -import math -from typing import List, Optional, NamedTuple - -import numpy as np -import torch -from fairseq.data import ( - ConcatDataset, - LanguagePairDataset, - FileAudioDataset, - data_utils, -) -from fairseq.data import FairseqDataset - -logger = logging.getLogger(__name__) - - -class ModalityDatasetItem(NamedTuple): - datasetname: str - dataset: any - max_positions: List[int] - max_tokens: Optional[int] = None - max_sentences: Optional[int] = None - - -# MultiModalityDataset: it concate multiple datasets with different modalities. 
-# Compared with ConcatDataset it can 1) sample data given the ratios for different datasets -# 2) it adds mode to indicate what type of the data samples come from. -# It will be used with GroupedEpochBatchIterator together to generate mini-batch with samples -# from the same type of dataset -# If only one dataset is used, it will perform like the original dataset with mode added -class MultiModalityDataset(ConcatDataset): - def __init__(self, datasets: List[ModalityDatasetItem]): - id_to_mode = [] - dsets = [] - max_tokens = [] - max_sentences = [] - max_positions = [] - for dset in datasets: - id_to_mode.append(dset.datasetname) - dsets.append(dset.dataset) - max_tokens.append(dset.max_tokens) - max_positions.append(dset.max_positions) - max_sentences.append(dset.max_sentences) - weights = [1.0 for s in dsets] - super().__init__(dsets, weights) - self.max_tokens = max_tokens - self.max_positions = max_positions - self.max_sentences = max_sentences - self.id_to_mode = id_to_mode - self.raw_sub_batch_samplers = [] - self._cur_epoch = 0 - - def set_epoch(self, epoch): - super().set_epoch(epoch) - self._cur_epoch = epoch - - def __getitem__(self, idx): - dataset_idx, sample_idx = self._get_dataset_and_sample_index(idx) - sample = self.datasets[dataset_idx][sample_idx] - return (dataset_idx, sample) - - def collater(self, samples): - if len(samples) == 0: - return {} - dataset_idx = samples[0][0] - # make sure all samples in samples are from same dataset - assert sum([0 if dataset_idx == s[0] else 1 for s in samples]) == 0 - samples = self.datasets[dataset_idx].collater([x[1] for x in samples]) - # add mode - samples["net_input"]["mode"] = self.id_to_mode[dataset_idx] - - return samples - - def size(self, index: int): - if len(self.datasets) == 1: - return self.datasets[0].size(index) - return super().size(index) - - @property - def sizes(self): - if len(self.datasets) == 1: - return self.datasets[0].sizes - super().sizes - - def ordered_indices(self): - """ - Returns indices sorted by length. So less padding is needed. - """ - if len(self.datasets) == 1: - return self.datasets[0].ordered_indices() - indices_group = [] - for d_idx, ds in enumerate(self.datasets): - sample_num = self.cumulative_sizes[d_idx] - if d_idx > 0: - sample_num = sample_num - self.cumulative_sizes[d_idx - 1] - assert sample_num == len(ds) - indices_group.append(ds.ordered_indices()) - return indices_group - - def get_raw_batch_samplers(self, required_batch_size_multiple, seed): - if len(self.raw_sub_batch_samplers) > 0: - logger.info(" raw_sub_batch_samplers exists. 
No action is taken") - return - with data_utils.numpy_seed(seed): - indices = self.ordered_indices() - for i, ds in enumerate(self.datasets): - indices[i] = ds.filter_indices_by_size( - indices[i], - self.max_positions[i], - )[0] - sub_batch_sampler = ds.batch_by_size( - indices[i], - max_tokens=self.max_tokens[i], - max_sentences=self.max_sentences[i], - required_batch_size_multiple=required_batch_size_multiple, - ) - self.raw_sub_batch_samplers.append(sub_batch_sampler) - - def get_batch_samplers(self, mult_ratios, required_batch_size_multiple, seed): - self.get_raw_batch_samplers(required_batch_size_multiple, seed) - batch_samplers = [] - for i, _ in enumerate(self.datasets): - if i > 0: - sub_batch_sampler = [ - [y + self.cumulative_sizes[i - 1] for y in x] - for x in self.raw_sub_batch_samplers[i] - ] - else: - sub_batch_sampler = list(self.raw_sub_batch_samplers[i]) - smp_r = mult_ratios[i] - if smp_r != 1: - is_increase = "increased" if smp_r > 1 else "decreased" - logger.info( - "number of batch for the dataset {} is {} from {} to {}".format( - self.id_to_mode[i], - is_increase, - len(sub_batch_sampler), - int(len(sub_batch_sampler) * smp_r), - ) - ) - mul_samplers = [] - for _ in range(math.floor(smp_r)): - mul_samplers = mul_samplers + sub_batch_sampler - if math.floor(smp_r) != smp_r: - with data_utils.numpy_seed(seed + self._cur_epoch): - np.random.shuffle(sub_batch_sampler) - smp_num = int( - (smp_r - math.floor(smp_r)) * len(sub_batch_sampler) - ) - mul_samplers = mul_samplers + sub_batch_sampler[:smp_num] - sub_batch_sampler = mul_samplers - else: - logger.info( - "dataset {} batch number is {} ".format( - self.id_to_mode[i], len(sub_batch_sampler) - ) - ) - batch_samplers.append(sub_batch_sampler) - - return batch_samplers - - -class LangPairMaskDataset(FairseqDataset): - def __init__( - self, - dataset: LanguagePairDataset, - src_eos: int, - src_bos: Optional[int] = None, - noise_id: Optional[int] = -1, - mask_ratio: Optional[float] = 0, - mask_type: Optional[str] = "random", - ): - self.dataset = dataset - self.src_eos = src_eos - self.src_bos = src_bos - self.noise_id = noise_id - self.mask_ratio = mask_ratio - self.mask_type = mask_type - assert mask_type in ("random", "tail") - - @property - def src_sizes(self): - return self.dataset.src_sizes - - @property - def tgt_sizes(self): - return self.dataset.tgt_sizes - - @property - def sizes(self): - # dataset.sizes can be a dynamically computed sizes: - return self.dataset.sizes - - def get_batch_shapes(self): - if hasattr(self.dataset, "get_batch_shapes"): - return self.dataset.get_batch_shapes() - return self.dataset.buckets - - def num_tokens_vec(self, indices): - return self.dataset.num_tokens_vec(indices) - - def __len__(self): - return len(self.dataset) - - def num_tokens(self, index): - return self.dataset.num_tokens(index) - - def size(self, index): - return self.dataset.size(index) - - def ordered_indices(self): - return self.dataset.ordered_indices() - - @property - def supports_prefetch(self): - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - return self.dataset.prefetch(indices) - - def mask_src_tokens(self, sample): - src_item = sample["source"] - mask = None - if self.mask_type == "random": - mask = torch.rand(len(src_item)).le(self.mask_ratio) - else: - mask = torch.ones(len(src_item)) - mask[: int(len(src_item) * (1 - self.mask_ratio))] = 0 - mask = mask.eq(1) - if src_item[0] == self.src_bos: - mask[0] = False - if src_item[-1] == self.src_eos: - mask[-1] = False 
- mask_src_item = src_item.masked_fill(mask, self.noise_id) - smp = {"id": sample["id"], "source": mask_src_item, "target": sample["target"]} - return smp - - def __getitem__(self, index): - sample = self.dataset[index] - if self.mask_ratio > 0: - sample = self.mask_src_tokens(sample) - return sample - - def collater(self, samples, pad_to_length=None): - return self.dataset.collater(samples, pad_to_length) - - -class FileAudioDatasetWrapper(FileAudioDataset): - def collater(self, samples): - samples = super().collater(samples) - if len(samples) == 0: - return {} - samples["net_input"]["src_tokens"] = samples["net_input"]["source"] - samples["net_input"]["prev_output_tokens"] = None - del samples["net_input"]["source"] - samples["net_input"]["src_lengths"] = None - samples["net_input"]["alignment"] = None - return samples diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/sima sharifirad.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/sima sharifirad.html deleted file mode 100644 index 84281945ab1b38bc8885a4f011050f9e7b364e27..0000000000000000000000000000000000000000 --- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/sima sharifirad.html +++ /dev/null @@ -1,132 +0,0 @@ - - - - sima sharifirad - - - - -
    -

    sima sharifirad

    - -
    -
    1- How did you hear about SharpestMinds? What made you interested in mentoring with us?
    - Heard from a colleague at Loblaw who was a mentor with SM before. Wants to work in a mentorship program that is more organized and focused, and also wants to give back to the community. There are also financial incentives and benefits; previous mentorship experiences have been pro bono. 

    2- Previous mentorship experience?
    - Worked on mentoring students at Dalhousie University and York University on topics in AI & NLP. Mentored students who had finished their master's and wanted to pursue a PhD. 
    - Helped and trained university students in finding jobs. 

    3- What's your data science career journey been like?
    - Pursued a master's in Iran and then did a PhD in CS in Canada. 
    - Started working in NLP and was involved in research for 3 years. 
    - Has experience publishing articles at different conferences and speaking at different events. 
    - 2019 - Took a job as Lead Data Scientist at Introhive. 
    - Moved to retail with Loblaw as a Senior Data Scientist. 
    - 2021 - Moved to affinity.co, a startup, as Senior Data Scientist. 
    - Recently switched to a new job as Lead Data Scientist. 

    4- What challenges do beginners face while breaking into a DS career? How can you help them with this?
    - The challenges come from different areas and depend on background. 
    Non-tech - Starting with tech basics and programming can be challenging. 
    Tech - Deciding which tools to use and which parts of the problem to focus on. Having a coherent frame of thought and working on specific projects rather than generic ones. 
    Beginners should work on projects that showcase their skill sets, deal with different hard problems, and not lose faith. 

    Can help mentees by connecting with them and understanding their background, their goals and the timeline they want to achieve them in, their support system, and their previous experience, and by finding out whether they want a research-oriented or a business-related job; can help with both. 

    5- Questions about SM?
    - How does mentorship work?
    - What's the future of SharpestMinds?
    - Is there specific timing required to spend with the mentee, or is it mutually decided?
    -
    - -
    - - - \ No newline at end of file diff --git a/spaces/atimughal662/InfoFusion/src/create_data.py b/spaces/atimughal662/InfoFusion/src/create_data.py deleted file mode 100644 index 52e6257319bdee820989df334e14122cf58b68cc..0000000000000000000000000000000000000000 --- a/spaces/atimughal662/InfoFusion/src/create_data.py +++ /dev/null @@ -1,1847 +0,0 @@ -""" -Dataset creation tools. - -Keep to-level imports clean of non-trivial imports for specific tools, -because this file is imported for various purposes -""" - -import ast -import concurrent.futures -import contextlib -import hashlib -import json -import os -import shutil -import signal -import sys -import traceback -from concurrent.futures import ProcessPoolExecutor - -import psutil -import pytest -import pandas as pd -import numpy as np -from tqdm import tqdm - -from utils import flatten_list, remove - - -def parse_rst_file(filepath): - with open(filepath, 'r') as f: - input_data = f.read() - settings_overrides = {'initial_header_level': 2} - from docutils import core - document = core.publish_doctree( - source=input_data, - source_path=filepath, - settings_overrides=settings_overrides, - ) - qa_pairs = [] - current_section = None - current_question = "" - current_answer = "" - for node in document.traverse(): - if node.__class__.__name__ == 'section': - current_section = "" - elif current_section is not None: - if node.__class__.__name__ == 'Text': - if node.astext()[-1] == "?": - if current_question: - qa_pairs.append((current_question, current_answer)) - current_question = node.astext() - current_answer = "" - else: - current_answer += node.astext() - if current_answer: - qa_pairs.append((current_question, current_answer)) - return {k: v for k, v in qa_pairs} - - -def test_scrape_dai_docs(): - home = os.path.expanduser('~') - file = os.path.join(home, 'h2oai/docs/faq.rst') - qa_pairs = parse_rst_file(file) - prompt_type = 'human_bot' - from prompter import prompt_types - assert prompt_type in prompt_types - save_thing = [{"instruction": k, "output": v, 'prompt_type': prompt_type} for k, v in qa_pairs.items()] - output_file = "dai_faq.json" - with open(output_file, "wt") as f: - f.write(json.dumps(save_thing, indent=2)) - - -def test_scrape_dai_docs_all(): - """ - pytest create_data.py::test_scrape_dai_docs_all - """ - import glob - import nltk - nltk.download('punkt') - dd = {} - np.random.seed(1234) - home = os.path.expanduser('~') - files = list(glob.glob(os.path.join(home, "h2oai/docs/**/*rst"))) - np.random.shuffle(files) - val_count = int(0.05 * len(files)) - train_files = files[val_count:] - valid_files = files[:val_count] - things = [ - ("dai_docs.train.json", train_files), - ("dai_docs.valid.json", valid_files) - ] - for LEN in [100, 200, 500]: - for output_file, ff in things: - if output_file not in dd: - dd[output_file] = [] - for f in ff: - with open(f) as input: - blob = input.read() - blob = blob.replace("~~", "") - blob = blob.replace("==", "") - blob = blob.replace("''", "") - blob = blob.replace("--", "") - blob = blob.replace("**", "") - dd[output_file].extend(get_sentences(blob, length=LEN)) - for output_file, _ in things: - save_thing = [{"output": k.strip(), 'prompt_type': 'plain'} for k in dd[output_file]] - with open(output_file, "wt") as f: - f.write(json.dumps(save_thing, indent=2)) - - -def get_sentences(blob, length): - """ - break-up input text into sentences and then output list of sentences of about length in size - :param blob: - :param length: - :return: - """ - import nltk - nltk.download('punkt') - 
from nltk.tokenize import sent_tokenize - sentences = sent_tokenize(blob) - my_sentences = [] - my_string = "" - for sentence in sentences: - if len(my_string) + len(sentence) <= length: - if my_string: - my_string += " " + sentence - else: - my_string = sentence - else: - my_sentences.append(my_string) - my_string = "" - return my_sentences or [my_string] - - -def setup_dai_docs(path=None, dst="working_dir_docs", from_hf=False): - """ - Only supported if have access to source code or HF token for HF spaces and from_hf=True - :param path: - :param dst: - :param from_hf: - :return: - """ - - home = os.path.expanduser('~') - - if from_hf: - # assumes - from huggingface_hub import hf_hub_download - # True for case when locally already logged in with correct token, so don't have to set key - token = os.getenv('HUGGING_FACE_HUB_TOKEN', True) - path_to_zip_file = hf_hub_download('h2oai/dai_docs', 'dai_docs.zip', token=token, repo_type='dataset') - path = 'h2oai' - import zipfile - with zipfile.ZipFile(path_to_zip_file, 'r') as zip_ref: - zip_ref.extractall(path) - path = os.path.join(path, 'docs/**/*') - - if path is None: - if os.path.isdir(os.path.join(home, 'h2oai')): - path = os.path.join(home, "h2oai/docs/**/*") - else: - assert os.path.isdir(os.path.join(home, 'h2oai.superclean')), '%s does not exist' % path - path = os.path.join(home, "h2oai.superclean/docs/**/*") - import glob - files = list(glob.glob(path, recursive=True)) - - # pandoc can't find include files - - remove(dst) - os.makedirs(dst) - - # copy full tree, for absolute paths in rst - for fil in files: - if os.path.isfile(fil): - shutil.copy(fil, dst) - - # hack for relative path - scorers_dir = os.path.join(dst, 'scorers') - makedirs(scorers_dir) - for fil in glob.glob(os.path.join(dst, '*.frag')): - shutil.copy(fil, scorers_dir) - - return dst - - -def rst_to_outputs(files, min_len=30, max_len=2048 // 2 - 30): - # account for sequence length (context window) including prompt and input and output - - # os.system('pandoc -f rst -t plain ./expert_settings/nlp_settings.rst') - import pypandoc - basedir = os.path.abspath(os.getcwd()) - - outputs = [] - for fil in files: - os.chdir(basedir) - os.chdir(os.path.dirname(fil)) - fil = os.path.basename(fil) - print("Processing %s" % fil, flush=True) - # out_format can be one of: asciidoc, asciidoctor, beamer, biblatex, bibtex, commonmark, commonmark_x, - # context, csljson, docbook, docbook4, docbook5, docx, dokuwiki, - # dzslides, epub, epub2, epub3, fb2, gfm, haddock, html, html4, html5, icml, - # ipynb, jats, jats_archiving, jats_articleauthoring, jats_publishing, jira, - # json, latex, man, - # markdown, markdown_github, markdown_mmd, markdown_phpextra, markdown_strict, - # mediawiki, ms, muse, native, odt, opendocument, opml, org, pdf, plain, pptx, - # revealjs, rst, rtf, s5, slideous, slidy, tei, texinfo, textile, xwiki, zimwiki - out_format = 'plain' - # avoid extra new lines injected into text - extra_args = ['--wrap=preserve', '--resource path="%s" % dst'] - - plain_list = [] - try: - # valid for expert settings - input_rst = pypandoc.convert_file(fil, 'rst') - input_list = input_rst.split('\n``') - for input_subrst in input_list: - input_plain = pypandoc.convert_text(input_subrst, format='rst', to='plain') - plain_list.append([input_plain, fil]) - except Exception as e: - print("file exception: %s %s" % (fil, str(e)), flush=True) - - if not plain_list: - # if failed to process as pieces of rst, then - output = pypandoc.convert_file(fil, out_format, extra_args=extra_args, 
format='rst') - outputs1 = get_sentences(output, length=max_len) - for oi, output in enumerate(outputs1): - output = output.replace('\n\n', '\n') - plain_list.append([output, fil]) - outputs.extend(plain_list) - - # report: - # [print(len(x)) for x in outputs] - - # deal with blocks longer than context size (sequence length) of 2048 - new_outputs = [] - num_truncated = 0 - num_orig = len(outputs) - for output, fil in outputs: - if len(output) < max_len: - new_outputs.append([output, fil]) - continue - outputs1 = get_sentences(output, length=max_len) - for oi, output1 in enumerate(outputs1): - output1 = output1.replace('\n\n', '\n') - new_outputs.append([output1, fil]) - num_truncated += 1 - print('num_orig: %s num_truncated: %s' % (num_orig, num_truncated), flush=True) - - new_outputs = [[k.strip(), fil] for k, fil in new_outputs if len(k.strip()) > min_len] - - return new_outputs - - -def test_scrape_dai_docs_all_pandoc(): - """ - pytest -s -v create_data.py::test_scrape_dai_docs_all_pandoc - :return: - """ - - dst = setup_dai_docs() - - import glob - files = list(glob.glob(os.path.join(dst, '*rst'), recursive=True)) - - basedir = os.path.abspath(os.getcwd()) - new_outputs = rst_to_outputs(files) - os.chdir(basedir) - - remove(dst) - save_thing = [{"output": k.strip(), 'prompt_type': 'plain'} for k in new_outputs] - output_file = "dai_docs.train_cleaned.json" - with open(output_file, "wt") as f: - f.write(json.dumps(save_thing, indent=2)) - - -def test_config_to_json(): - """ - Needs to run from Driverless AI source directory. - E.g. (base) jon@gpu:~/h2oai$ pytest -s -v /data/jon/h2ogpt/create_data.py::test_config_to_json ; cp config.json /data/jon/h2ogpt/ - :return: - """ - try: - # Arrange - import json - from h2oaicore.systemutils import config - toml_list = [] - for k, v in config.get_meta_dict().items(): - title = (v.title + ": ") if v.title else '' - comment = v.comment or '' - if not (title or comment): - continue - toml_list.extend( - [ - { - 'prompt_type': 'plain', - 'instruction': f": What does {k} do?\n: {k.replace('_', ' ')} config.toml: {comment or title}\n:".replace( - "\n", ""), - }, - { - 'prompt_type': 'plain', - 'instruction': f": Explain {k}.\n: {k.replace('_', ' ')} config.toml: {comment or title}\n:".replace( - "\n", ""), - }, - { - 'prompt_type': 'plain', - 'instruction': f": How can I do this: {title}.\n: Set the {k.replace('_', ' ')} config.toml\n:".replace( - "\n", ""), - } if title and comment else None, - { - 'prompt_type': 'human_bot', - 'instruction': f'Explain the following expert setting for Driverless AI', - 'input': f"{k}", - 'output': f"{k.replace('_', ' ')} config.toml: {comment or title}".replace("\n", ""), - }, - { - 'prompt_type': 'human_bot', - 'instruction': f'Explain the following expert setting for Driverless AI', - 'input': f"{k}", - 'output': f"{k.replace('_', ' ')} config.toml: {title}{comment}".replace("\n", ""), - }, - { - 'prompt_type': 'human_bot', - 'instruction': f'Explain the following expert setting for Driverless AI', - 'input': f"{k.replace('_', ' ')}", - 'output': f"{k.replace('_', ' ')} config.toml: {title}{comment}".replace("\n", ""), - }, - { - 'prompt_type': 'human_bot', - 'instruction': f'Explain the following expert setting for Driverless AI', - 'input': f"{title}", - 'output': f"{k.replace('_', ' ')} config.toml: {title}{comment}".replace("\n", ""), - }, - { - 'prompt_type': 'human_bot', - 'instruction': f'Provide a short explanation of the expert setting {k}', - 'output': f"{k.replace('_', ' ')} config.toml: {comment or 
title}".replace("\n", ""), - }, - { - 'prompt_type': 'human_bot', - 'instruction': f'Provide a detailed explanation of the expert setting {k}', - 'output': f"{k.replace('_', ' ')} config.toml: {title}{comment}".replace("\n", ""), - }, - ] - ) - toml_list = [x for x in toml_list if x] - with open("config.json", "wt") as f: - f.write(json.dumps(toml_list, indent=2)) - except Exception as e: - print("Exception: %s" % str(e), flush=True) - - -def copy_tree(src, dst, follow_symlink=False): - makedirs(dst, exist_ok=True) - for (path, dirs, files) in os.walk(src, followlinks=follow_symlink): - new_path = path.replace(src, dst) - makedirs(new_path, exist_ok=True) - for file in files: - filename = os.path.join(path, file) - new_filename = os.path.join(new_path, file) - # print("%s -> %s" % (filename, new_filename)) - try: - atomic_copy(filename, new_filename) - except FileNotFoundError: - pass - - -def atomic_move(src, dst): - try: - shutil.move(src, dst) - except (shutil.Error, FileExistsError): - pass - remove(src) - - -def atomic_copy(src=None, dst=None, with_permissions=True): - if os.path.isfile(dst): - return - import uuid - my_uuid = uuid.uuid4() - dst_tmp = dst + str(my_uuid) - makedirs(os.path.dirname(dst), exist_ok=True) - if with_permissions: - shutil.copy(src, dst_tmp) - else: - shutil.copyfile(src, dst_tmp) - atomic_move(dst_tmp, dst) - remove(dst_tmp) - - -def makedirs(path, exist_ok=True): - """ - Avoid some inefficiency in os.makedirs() - :param path: - :param exist_ok: - :return: - """ - if os.path.isdir(path) and os.path.exists(path): - assert exist_ok, "Path already exists" - return path - os.makedirs(path, exist_ok=exist_ok) - - -## Download from https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_unfiltered_cleaned_split.json -## Turn into simple instruct prompt type. No context/previous conversations. 
-def test_prep_instruct_vicuna(): - from datasets import load_dataset - filename = 'ShareGPT_unfiltered_cleaned_split.json' - if not os.path.exists(filename): - os.system( - 'wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/%s' % filename) - data = load_dataset("json", data_files={"train": filename})["train"] - training_rows = [] - for i in range(data.num_rows): - conversations = data[i]['conversations'] - assert isinstance(conversations, list), conversations - convo = "" - for j, conv in enumerate(conversations): - # Get ready for generate.py prompt_type=human_bot - # But train with prompt_type=plain - if conv['from'] == 'human': - FROM = ': ' - elif conv['from'] == 'gpt': - FROM = ': ' - convo += f"{FROM}" + conv['value'] + "\n" - if convo: - training_rows.append(dict(input=convo)) - with open(filename + ".generate_human_bot.train_plain.json", "wt") as f: - f.write(json.dumps(training_rows, indent=2)) - - -POSTFIX = ".generate_human_bot.train_plain.json" - -# https://bair.berkeley.edu/blog/2023/04/03/koala/ -OIG_DATASETS = [ - "unified_chip2.jsonl", - "unified_grade_school_math_instructions.jsonl", - "unified_poetry_2_song.jsonl", - "unified_plot_screenplay_books_dialog.jsonl", -] - -# hub issue: https://huggingface.co/datasets/laion/OIG/discussions/4 -ALL_OIG_DATASETS = ['unified_abstract_infill.jsonl', - 'unified_basic.jsonl', - 'unified_canadian_parliament.jsonl', - 'unified_chip2.jsonl', - 'unified_conv_finqa.jsonl', - 'unified_cuad.jsonl', - 'unified_essays.jsonl', - 'unified_flan.jsonl.gz', - 'unified_grade_school_math_instructions.jsonl', - 'unified_hc3_human.jsonl', - 'unified_image_prompts_instructions.jsonl', - 'unified_joke_explanations.jsonl', - 'unified_mathqa_flanv2_kojma_cot.jsonl', - 'unified_merged_code_xp3.jsonl', - 'unified_multi_news.jsonl', - 'unified_multi_sum.jsonl', - 'unified_ni.jsonl.gz', - 'unified_nq.jsonl', - 'unified_openai_summarize_tldr.jsonl', - 'unified_oscar_en_sample_dialog.jsonl', - 'unified_p3.jsonl.gz', - 'unified_plot_screenplay_books_dialog.jsonl', - 'unified_poetry_2_song.jsonl', - 'unified_poetry_instructions.jsonl', - 'unified_rallio_safety_and_prosocial.jsonl', - 'unified_rallio_soda_upgraded_2048.jsonl', - 'unified_soda_dialog.jsonl', - 'unified_sqlv1.jsonl', - 'unified_sqlv2.jsonl', - 'unified_squad_v2.jsonl', - 'unified_squad_v2_more_neg.jsonl', - 'unified_ul2_plus_oscar_en_sample_dialog.jsonl', - 'unified_unifiedskg_instructions.jsonl', - 'unified_unnatural_instructions.jsonl', - 'unified_xp3_sample.jsonl'] - -useful_oig_files = ['unified_rallio_safety_and_prosocial.jsonl.parquet', - 'unified_chip2.jsonl.parquet', - 'unified_cuad.jsonl.parquet', - 'unified_essays.jsonl.parquet', - 'unified_flan.jsonl.gz.parquet', - 'unified_grade_school_math_instructions.jsonl.parquet', - 'unified_hc3_human.jsonl.parquet', - 'unified_mathqa_flanv2_kojma_cot.jsonl.parquet', - 'unified_merged_code_xp3.jsonl.parquet', - 'unified_multi_news.jsonl.parquet', - # 'unified_multi_sum.jsonl.parquet' - 'unified_ni.jsonl.gz.parquet', - 'unified_openai_summarize_tldr.jsonl.parquet', - # 'unified_oscar_en_sample_dialog.jsonl.parquet', # create text containing these N words, not specific - 'unified_plot_screenplay_books_dialog.jsonl.parquet', - 'unified_soda_dialog.jsonl.parquet', - 'unified_unnatural_instructions.jsonl.parquet', - ] - - -@pytest.mark.parametrize("filename", OIG_DATASETS) -def test_get_small_sample_oig_data(filename): - if not os.path.exists(filename): - os.system('wget 
https://huggingface.co/datasets/laion/OIG/resolve/main/%s' % filename) - import json - rows = [] - with open(filename, "r") as f: - for line in f.readlines(): - row = json.loads(line) - rows.append(dict(input=row["text"])) - with open(filename + POSTFIX, "w") as f: - f.write(json.dumps(rows, indent=2)) - - -@pytest.mark.parametrize("filename", ALL_OIG_DATASETS) -def test_download_useful_data_as_parquet(filename): - dest_file = filename + '.parquet' - if dest_file not in useful_oig_files: - pytest.skip('file declared not useful') - if not os.path.exists(filename): - os.system('wget https://huggingface.co/datasets/laion/OIG/resolve/main/%s' % filename) - if not os.path.exists(dest_file): - df = pd.read_json(path_or_buf=filename, lines=True) - df.to_parquet(dest_file, index=False) - - -def test_merge_shuffle_small_sample_oig_data(): - np.random.seed(1234) - rows = [] - for filename in OIG_DATASETS: - with open(filename + POSTFIX, "r") as f: - rows.extend(json.loads(f.read())) - np.random.shuffle(rows) - with open("merged_shuffled_OIG_%s.json" % hashlib.sha256(str(OIG_DATASETS).encode()).hexdigest()[:10], "w") as f: - f.write(json.dumps(rows, indent=2)) - - -def test_join_jsons(): - files = ['config.json'] * 1 + \ - ['dai_docs.train_cleaned.json'] * 2 + \ - ['dai_faq.json'] * 3 - print(files) - lst = [] - [lst.extend(json.load(open(fil, 'rt'))) for fil in files] - print(len(lst)) - json.dump(lst, open("merged.json", "wt"), indent=2) - - -@pytest.mark.parametrize("filename", ['Anthropic/hh-rlhf']) -def test_make_rlhf_good_data(filename): - from datasets import load_dataset - rows = load_dataset(filename)["train"]["chosen"] - new_rows = [] - for row in rows: - if row[:2] == "\n\n": - row = row[2:] - row = row.replace("Human: ", ": ") - row = row.replace("Assistant: ", ": ") - new_rows.append(dict(input=row)) - with open(filename.replace("/", "_") + POSTFIX, "w") as f: - f.write(json.dumps(new_rows, indent=2)) - - -def test_show_prompts(): - files = ['config.json'] * 1 + \ - ['dai_docs.train_cleaned.json'] * 1 + \ - ['dai_faq.json'] * 1 - file_points = [json.load(open(fil, 'rt')) for fil in files] - from prompter import generate_prompt - for data_points in file_points: - for data_point in data_points: - print(generate_prompt(data_point, 'plain', '', False, False, False)[0]) - - -def test_get_open_datasets(): - # HF changed things so don't get raw list of all datasets, so not have to filter, but can't do negative filter - open_tags = ['license:Apache License 2.0', - 'license:mit', - 'license:apache', - 'license:apache2', - 'license:apache-2.0', - 'license:bsd', - 'license:bsd-2-clause', - 'license:bsd-3-clause', - 'license:bsd-3-clause-clear', - 'license:lgpl-2.1', - 'license:lgpl-3.0', - 'license:lgpl-lr', - 'license:lgpl', - 'license:openrail++', - 'license:openrail', - 'license:bigscience-bloom-rail-1.0', - # 'license:agpl-3.0', - 'license:other', - 'license:unknown', - # 'license:mpl-2.0', # ok, but would have to include original copyright, license, source, copies in distribution - # Attribution required: - 'license:odc-by', - 'license:cc-by-4.0', - 'license:cc-by-3.0', - 'license:cc-by-2.0', - 'license:cc-by-2.5', - # 'license:cc-by-sa-4.0', # would require same license - 'license:odbl', - 'license:pddl', - 'license:ms-pl', - 'license:zlib', - ] - # bad license: cc-by-nc-4.0 - - from huggingface_hub import list_datasets - datasets = flatten_list([[x for x in list_datasets(filter=y)] for y in open_tags]) - datasets += [x for x in list_datasets(author='openai')] - # check all: - 
all_license_tags = set(flatten_list([[y for y in x.tags if 'license' in y] for x in datasets])) - print(len(all_license_tags)) - open_datasets = [x for x in datasets if any([y in x.tags for y in open_tags]) or 'license:' not in str(x.tags)] - print('open_datasets', len(open_datasets)) - all_task_tags = set(flatten_list([[y for y in x.tags if 'task' in y] for x in open_datasets])) - print('all_task_tags', len(all_task_tags)) - excluded_tags = ['image', 'hate', 'tabular', 'table-', 'classification', 'retrieval', - 'translation', 'identification', 'object', 'mask', 'to-text', - 'face-detection', 'audio', 'voice', 'reinforcement', 'depth-est', - 'forecasting', 'parsing', 'visual', 'speech', 'multiple-choice', - 'slot-filling', 'irds/argsme', '-scoring', 'other', 'graph-ml', - 'feature-extraction', 'keyword-spotting', - 'coreference-resolution', 'segmentation', - 'word-sense-disambiguation', - 'lemmatization'] - task_tags = [x.replace('task_categories:', '').replace('task_ids:', '') - for x in all_task_tags if not any([y in x for y in - excluded_tags])] - print('task_tags', len(task_tags)) - # str(x.tags) to catch any pattern match to anything in list - open_tasked_datasets = [x for x in open_datasets if - any([y in str([x for x in x.tags if 'task' in x]) for y in task_tags]) and - not any([y in str([x for x in x.tags if 'task' in x]) for y in excluded_tags]) or - 'task_categories' not in str(x.tags) and 'task_ids' not in str(x.tags)] - open_tasked_datasets = [x for x in open_tasked_datasets if not x.disabled] - open_tasked_datasets = [x for x in open_tasked_datasets if not x.gated] - open_tasked_datasets = [x for x in open_tasked_datasets if not x.private] - print('open_tasked_datasets', len(open_tasked_datasets)) - sizes = list(set(flatten_list([[(y, x.id) for y in x.tags if 'size' in y] for x in open_tasked_datasets]))) - languages = list(set(flatten_list([[(y, x.id) for y in x.tags if 'language:' in y] for x in open_tasked_datasets]))) - open_english_tasked_datasets = [x for x in open_tasked_datasets if - 'language:' not in str(x.tags) or - 'language:en' in str(x.tags)] - small_open_english_tasked_datasets = [x for x in open_english_tasked_datasets if - 'n<1K' in str(x.tags) or - '1K summarization? 
- # load_dataset(open_tasked_datasets[0].id).data['train'].to_pandas() - ids = [x.id for x in small_open_english_tasked_datasets] - - # sanity checks - # https://bair.berkeley.edu/blog/2023/04/03/koala/ - assert 'alespalla/chatbot_instruction_prompts' in ids - assert 'laion/OIG' in ids - assert 'openai/webgpt_comparisons' in ids - assert 'openai/summarize_from_feedback' in ids - assert 'Anthropic/hh-rlhf' in ids - - # useful but not allowed for commercial purposes: - # https://huggingface.co/datasets/squad - - print('open_english_tasked_datasets: ', ids, flush=True) - - exclude_ids = ['allenai/nllb', # translation only - 'hf-internal-testing/fixtures_image_utils', # testing - 'allenai/c4', # search-url - 'agemagician/uniref50', # unknown - 'huggingface-course/documentation-images', # images - 'smilegate-ai/kor_unsmile', # korean - 'MohamedRashad/ChatGPT-prompts', # ChatGPT/LearnGPT/https://www.emergentmind.com/ - 'humarin/chatgpt-paraphrases', # Paraphrase using ChatGPT - 'Jeska/vaccinchat', # not useful - 'alespalla/chatbot_instruction_prompts', # mixes alpaca - 'allenai/prosocial-dialog', - # already exlucded, but wrongly in other datasets that say more permissive license - 'AlekseyKorshuk/persona-chat', # low quality - 'bavard/personachat_truecased', # low quality - 'adamlin/daily_dialog', # medium quality conversations - 'adamlin/FewShotWoz', # low quality - 'benjaminbeilharz/better_daily_dialog', # low quality - 'benjaminbeilharz/daily_dialog_w_turn_templates', # low - 'benjaminbeilharz/empathetic_dialogues_for_lm', # low - 'GEM-submissions/GEM__bart_base_schema_guided_dialog__1645547915', # NA - 'ia-bentebib/conv_ai_2_fr', # low fr - 'ia-bentebib/daily_dialog_fr', # low fr - 'ia-bentebib/dialog_re_fr', # low fr - 'ia-bentebib/empathetic_dialogues_fr', # low fr - 'roskoN/dailydialog', # low - 'VadorMazer/skyrimdialogstest', # low - 'bigbio/med_qa', # med specific Q/A - 'biu-nlp/qa_srl2018', # low quality Q/A - 'biu-nlp/qa_discourse', # low quality Q/A - 'iarfmoose/qa_evaluator', # low quality Q/A - 'jeopardy', # low quality Q/A -- no reasoning - 'narrativeqa', # low quality Q/A - 'nomic-ai/gpt4all_prompt_generations', # bad license - 'nomic-ai/gpt4all_prompt_generations_with_p3', # bad license - 'HuggingFaceH4/alpaca', # bad license - 'tatsu-lab/alpaca', # ToS breaking - 'yahma/alpaca-cleaned', # ToS breaking - 'Hello-SimpleAI/HC3', # bad license - 'glue', # no reasoning QA - 'sahil2801/CodeAlpaca-20k', # bad license - 'Short-Answer-Feedback/saf_communication_networks_english', # long Q, medium A - ] - small_open_english_tasked_datasets = [x for x in small_open_english_tasked_datasets if x.id not in exclude_ids] - # some ids clearly speech related - small_open_english_tasked_datasets = [x for x in small_open_english_tasked_datasets if 'speech' not in x.id] - # HF testing - small_open_english_tasked_datasets = [x for x in small_open_english_tasked_datasets if - 'hf-internal-testing' not in x.id] - small_open_english_tasked_datasets = [x for x in small_open_english_tasked_datasets if - 'chinese' not in x.id] - - sorted_small_open_english_tasked_datasets = sorted([(x.downloads, x) for x in small_open_english_tasked_datasets], - key=lambda x: x[0], reverse=True) - - # NOTES: - # Run like pytest -s -v create_data.py::test_get_open_datasets &> getdata9.log - # See what needs config passed and add: - # grep 'load_dataset(' getdata9.log|grep -v data_id|less -S - # grep "pip install" getdata9.log - # NOTE: Some datasets have default config, but others are there. Don't know how to access them. 
- - """ - https://huggingface.co/datasets/wikihow/blob/main/wikihow.py - https://github.com/mahnazkoupaee/WikiHow-Dataset - https://ucsb.box.com/s/ap23l8gafpezf4tq3wapr6u8241zz358 - https://ucsb.app.box.com/s/ap23l8gafpezf4tq3wapr6u8241zz358 - """ - - """ - # some ambiguous or non-commercial datasets - https://github.com/PhoebusSi/alpaca-CoT - """ - - timeout = 3 * 60 - # laion/OIG takes longer - for num_downloads, dataset in sorted_small_open_english_tasked_datasets: - data_id = dataset.id - func = do_one - args = (data_id, num_downloads) - kwargs = {} - with ProcessPoolExecutor(max_workers=1) as executor: - future = executor.submit(func, *args, **kwargs) - try: - future.result(timeout=timeout) - except concurrent.futures.TimeoutError: - print("\n\ndata_id %s timeout\n\n" % data_id, flush=True) - for child in psutil.Process(os.getpid()).children(recursive=True): - os.kill(child.pid, signal.SIGINT) - os.kill(child.pid, signal.SIGTERM) - os.kill(child.pid, signal.SIGKILL) - - -def do_one(data_id, num_downloads): - from datasets import load_dataset - out_file = "data_%s.parquet" % str(data_id.replace('/', '_')) - if os.path.isfile(out_file) and os.path.getsize(out_file) > 1024 ** 3: - return - try: - print("Loading data_id %s num_downloads: %s" % (data_id, num_downloads), flush=True) - avail_list = None - try: - data = load_dataset(data_id, 'foobar') - except Exception as e: - if 'Available: ' in str(e): - avail_list = ast.literal_eval(str(e).split('Available:')[1].strip()) - else: - avail_list = None - if avail_list is None: - avail_list = [None] - print("%s avail_list: %s" % (data_id, avail_list), flush=True) - - for name in avail_list: - out_file = "data_%s_%s.parquet" % (str(data_id.replace('/', '_')), str(name)) - if os.path.isfile(out_file): - continue - data = load_dataset(data_id, name) - column_names_dict = data.column_names - column_names = column_names_dict[list(column_names_dict.keys())[0]] - print("Processing data_id %s num_downloads: %s columns: %s" % (data_id, num_downloads, column_names), - flush=True) - data_dict = data.data - col_dict = data.num_columns - first_col = list(col_dict.keys())[0] - if 'train' in data_dict: - df = data['train'].to_pandas() - else: - df = data[first_col].to_pandas() - # csv has issues with escaping chars, even for datasets I know I want - df.to_parquet(out_file, index=False) - except Exception as e: - t, v, tb = sys.exc_info() - ex = ''.join(traceback.format_exception(t, v, tb)) - print("Exception: %s %s" % (data_id, ex), flush=True) - - -def test_otherlic(): - from huggingface_hub import list_datasets - lic = ['license:odc-by', - 'license:cc-by-4.0', - 'license:cc-by-3.0', - 'license:cc-by-2.0', - 'license:cc-by-2.5', - 'license:cc-by-sa-4.0', - 'license:odbl', - 'license:pddl', - 'license:ms-pl', - 'license:zlib', - ] - datasets = flatten_list([[x for x in list_datasets(filter=y) if 'translation' not in str(x.tags)] for y in lic]) - print(len(datasets)) - - -# These useful datasets are determined based upon data sample, column types, and uniqueness compared to larger datasets like Pile -# grep columns getdata13.log|grep -v "\['image'\]"|sort|uniq|grep -v tokens|grep -v "'image'"|grep -v embedding|grep dialog -useful = ['Dahoas/instruct-human-assistant-prompt', - 'Dahoas/first-instruct-human-assistant-prompt', - 'knkarthick/dialogsum', # summary of conversation - 'McGill-NLP/FaithDial', # medium quality - 'Zaid/quac_expanded', # medium quality context + QA - '0-hero/OIG-small-chip2', # medium - 'alistvt/coqa-flat', # QA medium - 
'AnonymousSub/MedQuAD_47441_Question_Answer_Pairs', # QA medium - 'Anthropic/hh-rlhf', # high quality # similar to Dahoas/full-hh-rlhf - 'arjunth2001/online_privacy_qna', # good quality QA - 'Dahoas/instruct_helpful_preferences', # medium quality instruct - 'Dahoas/rl-prompt-dataset', # medium chat - 'Dahoas/rm-static', # medium chat - 'Dahoas/static-hh', # medium chat # HuggingFaceH4/self_instruct - 'Dahoas/synthetic-instruct-gptj-pairwise', # medium chat - 'eli5', # QA if prompt ELI5 - 'gsm8k', # QA (various) - 'guanaco/guanaco', # prompt/response - 'kastan/rlhf-qa-comparisons', # good QA - 'kastan/rlhf-qa-conditional-generation-v2', # prompt answer - 'OllieStanley/humaneval-mbpp-codegen-qa', # code QA, but started from words, so better than other code QA - 'OllieStanley/humaneval-mbpp-testgen-qa', # code QA - 'Graverman/Instruct-to-Code', # code QA - 'openai/summarize_from_feedback', # summarize - 'relbert/analogy_questions', # analogy QA - 'yitingxie/rlhf-reward-datasets', # prompt, chosen, rejected. - 'yizhongw/self_instruct', # instruct (super natural & instruct) - 'HuggingFaceH4/asss', # QA, big A - 'kastan/rlhf-qa-conditional-generation-v2', # QA - 'cosmos_qa', # context QA - 'vishal-burman/c4-faqs', # QA but not so much reasoning, but alot of text - 'squadshifts', # QA from context - 'hotpot_qa', # QA from context - 'adversarial_qa', # QA from context - 'allenai/soda', # dialog -> narrative/summary - 'squad_v2', # context QA - 'squadshifts', # context QA - 'dferndz/cSQuAD1', # context QA - 'dferndz/cSQuAD2', # context QA - 'din0s/msmarco-nlgen', # context QA - 'domenicrosati/TruthfulQA', # common sense truthful QA -- trivia but good trivia - 'hotpot_qa', # context, QA - 'HuggingFaceH4/self-instruct-eval', # instruct QA, medium quality, some language reasoning - 'kastan/EE_QA_for_RLHF', # context QA - 'KK04/LogicInference_OA', # instruction logical QA - 'lmqg/qa_squadshifts_synthetic', # context QA - 'lmqg/qg_squad', # context QA - 'lmqg/qg_squadshifts', # context QA - 'lmqg/qg_subjqa', # context QA - 'pszemraj/HC3-textgen-qa', - # QA medium, has human responses -- humans tend to provide links instead of trying to answer - 'pythonist/newdata', # long context, QA, brief A - 'ropes', # long background, situation, question, A - 'wikitablequestions', # table -> QA - 'bigscience/p3', # context QA but short answers - ] - -code_useful = ['0n1xus/codexglue', - 'openai_humaneval', - 'koutch/staqc', - ] - -maybe_useful = ['AlekseyKorshuk/comedy-scripts', - 'openbookqa', # hard to parse, low reasoning - 'qed', # reasonable QA, but low reasoning - 'selqa', # candidate answers - 'HuggingFaceH4/instruction-pilot-outputs-filtered', - 'GBaker/MedQA-USMLE-4-options', # medical QA with long questions - 'npc-engine/light-batch-summarize-dialogue', # dialog summarize, kinda low specific quality - ] - -summary_useful = ['austin/rheum_abstracts', - 'CarperAI/openai_summarize_comparisons', # summarize chosen/rejected - 'CarperAI/openai_summarize_tldr', # summarize QA - 'ccdv/cnn_dailymail', # summarize news - 'ccdv/govreport-summarization', # summarize high quality - 'ccdv/pubmed-summarization', # summarize high quality - 'duorc', # plot -> QA - 'farleyknight/big_patent_5_percent', # desc -> abstract - 'multi_news', # summary - 'opinosis', - 'SophieTr/reddit_clean', - 'allenai/mup', # long text -> summary - 'allenai/multi_lexsum', # long text -> summary - 'big_patent', - 'allenai/wcep_dense_max', - 'awinml/costco_long_practice', - 'GEM/xsum', - 'ratishsp/newshead', - 'RussianNLP/wikiomnia', # russian - 
'stacked-summaries/stacked-xsum-1024', - ] - -math_useful = [ - 'competition_math' -] - -skipped = ['c4', # maybe useful, used for flan, but skipped due to size - ] - -""" -To get training data from oig: -pytest test_oig test_grade_final test_finalize_to_json -""" - -human = ':' -bot = ':' - - -def test_assemble_and_detox(): - import re - from profanity_check import predict_prob - df_list = [] - for data in useful_oig_files: - print("Processing %s" % data, flush=True) - df = pd.read_parquet(data) - df = df.reset_index(drop=True) - # chop up into human/bot interactions of no more than 10kB per row - text_list = df[['text']].values.ravel().tolist() - new_text = [] - max_len = 2048 # uber cutoff - MAX_LEN = 2048 // 2 - 30 # max len per question/answer - for text in tqdm(text_list): - human_starts = [m.start() for m in re.finditer(': ', text)] - if len(human_starts) == 1: - human_starts = [0, len(text)] # always go into for loop below - blurb = '' - for i in range(len(human_starts) - 1): - interaction = text[human_starts[i]: human_starts[i + 1]][:max_len] - blurb += interaction - if len(blurb) >= MAX_LEN: - blurb = get_sentences(blurb, length=MAX_LEN)[0] - new_text.append(blurb + "\n:") - blurb = '' - if blurb: - blurb = get_sentences(blurb, length=MAX_LEN)[0] - new_text.append(blurb + "\n:") - - if len(new_text) > len(text_list): - print("Added %d new rows (before: %d)" % (len(new_text) - df.shape[0], df.shape[0])) - df = pd.DataFrame({"text": new_text, "source": [data] * len(new_text)}) - df = df.drop_duplicates(keep='first') - print(df['text'].apply(lambda x: len(x)).describe()) - assert df['text'].apply(lambda x: len(x)).max() <= 2 * max_len - - # faster than better_profanity, do early - df['profanity'] = predict_prob(df['text']) - before_rows = df.shape[0] - df = df[df['profanity'] < 0.25] # drop any low quality stuff - after_rows = df.shape[0] - print("Dropped %d rows out of %d due to alt-profanity-check" % (before_rows - after_rows, before_rows)) - df_list.append(df) - print("Done processing %s -> %s rows" % (data, df.shape[0]), flush=True) - print("So far have %d rows" % sum([len(x) for x in df_list])) - df_final = pd.concat(df_list) - df_final = df_final.sample(frac=1, random_state=1234).reset_index(drop=True) - df_final.to_parquet('h2oGPT.cleaned.human_bot.shorter.parquet', index=False) - - -def test_basic_cleaning(): - # from better_profanity import profanity - # https://pypi.org/project/alt-profanity-check/ - from profanity_check import predict - df_list = [] - for data in useful_oig_files: - # for data in useful_oig_files[:5]: - # for data in ['unified_openai_summarize_tldr.jsonl.parquet']: - print("Processing %s" % data, flush=True) - df = pd.read_parquet(data) - df = df.reset_index(drop=True) - # NOTE: Not correct if multiple human-bot interactions, but those dialogs even more desired - # avg_chars = len(df['text'][0])/(df['text'][0].count(human)+df['text'][0].count(bot)) - df['avg_words'] = df['text'].apply(lambda x: x.count(' ') / (x.count(human) + x.count(bot)) / 2.0) - df['avg_bot_words'] = df['text'].apply(lambda x: x.split(bot)[1].count(' ') / x.count(bot)) - # df['bad_words'] = df['text'].apply(lambda x: profanity.contains_profanity(x)) - # low_quality_patterns = ['Write the rest of this wikipedia article'] - res = predict(df['text']) - df['bad_words'] = res - df = df.reset_index(drop=True) - df = df[df['bad_words'] == 0] - df = df[['text', 'avg_words', 'avg_bot_words']] - df = df.drop_duplicates(keep='first') - print(df[df['avg_words'] == 
df['avg_words'].max()]['text'].values) - median_words = np.median(df['avg_words']) - min_words_per_entity = max(30, 0.8 * median_words) - max_words_per_entity = 2048 # too hard to learn from for now - df = df[df['avg_words'] > min_words_per_entity] - df = df[df['avg_words'] < max_words_per_entity] - - min_words_per_entity = max(20, 0.5 * median_words) # bot should say stuff for now - max_words_per_entity = 2048 # too hard to learn from for now - df = df[df['avg_bot_words'] > min_words_per_entity] - df = df[df['avg_bot_words'] < max_words_per_entity] - - df_list.append(df) - print("Done processing %s -> %s rows" % (data, df.shape[0]), flush=True) - df_final = pd.concat(df_list) - df_final.to_parquet('h2oGPT.cleaned.human_bot.parquet', index=False) - - -from joblib import Parallel, delayed, effective_n_jobs -from sklearn.utils import gen_even_slices -from sklearn.utils.validation import _num_samples - - -def parallel_apply(df, func, n_jobs=-1, **kwargs): - """ Pandas apply in parallel using joblib. - Uses sklearn.utils to partition input evenly. - - Args: - df: Pandas DataFrame, Series, or any other object that supports slicing and apply. - func: Callable to apply - n_jobs: Desired number of workers. Default value -1 means use all available cores. - **kwargs: Any additional parameters will be supplied to the apply function - - Returns: - Same as for normal Pandas DataFrame.apply() - - """ - - if effective_n_jobs(n_jobs) == 1: - return df.apply(func, **kwargs) - else: - ret = Parallel(n_jobs=n_jobs)( - delayed(type(df).apply)(df[s], func, **kwargs) - for s in gen_even_slices(_num_samples(df), effective_n_jobs(n_jobs))) - return pd.concat(ret) - - -def add_better_profanity_flag(df): - from better_profanity import profanity - df['better_profanity'] = parallel_apply( - df['text'], - lambda x: profanity.contains_profanity(x), - n_jobs=-1, - ) - return df - - -def add_textstat_grade(df): - import textstat - - def myfunc(x): - return textstat.flesch_kincaid_grade(x) # simple grade - - if False: - import dask.dataframe as dd - # 40 seconds for 1000 rows, but have 1,787,799 rows - ddata = dd.from_pandas(df, npartitions=120) - - df['flesch_grade'] = ddata['text'].apply(myfunc).compute() - if True: - # fast way - df['flesch_grade'] = parallel_apply(df['text'], myfunc, n_jobs=-1) - return df - - -def add_deberta_grade(df): - from transformers import AutoModelForSequenceClassification, AutoTokenizer - import torch - reward_name = "OpenAssistant/reward-model-deberta-v3-large-v2" - rank_model, tokenizer = AutoModelForSequenceClassification.from_pretrained( - reward_name), AutoTokenizer.from_pretrained(reward_name) - device = 'cuda' if torch.cuda.is_available() else 'cpu' - rank_model.to(device) - - def get_question(x): - return x.replace(': ', '').split(':')[0] - - def get_answer(x): - try: - answer = x.split(': ')[1].split(':')[0].replace(': ', '') - except: - answer = x.split(':')[1].split(':')[0].replace(':', '') - return answer - - df['question'] = parallel_apply(df['text'], get_question, n_jobs=-1) - df['answer'] = parallel_apply(df['text'], get_answer, n_jobs=-1) - - from datasets import Dataset - from transformers import pipeline - from transformers.pipelines.pt_utils import KeyPairDataset - import tqdm - - pipe = pipeline( - "text-classification", - model=reward_name, - device="cuda:0" if torch.cuda.is_available() else "cpu" - ) - start = 0 - batch_size = 64 * 16 - micro_batch = orig_micro_batch = 16 - end = 0 - import socket - checkpoint = "grades.%s.pkl" % socket.gethostname() - grades = [] - 
import pickle - if os.path.exists(checkpoint): - with open(checkpoint, "rb") as f: - start, grades = pickle.loads(f.read()) - last_oom = 0 - while end < df.shape[0]: - # manual batching to handle OOM more gracefully - end = min(start + batch_size, df.shape[0]) - if start == end: - break - dataset = Dataset.from_pandas(df.iloc[start:end, :]) - try: - grades.extend([ - x['score'] for x in tqdm.tqdm( - pipe(KeyPairDataset(dataset, "question", "answer"), batch_size=micro_batch) - ) - ]) - except torch.cuda.OutOfMemoryError: - last_oom = start - micro_batch = max(1, micro_batch // 2) - print("OOM - retrying with micro_batch=%d" % micro_batch) - continue - if last_oom == start: - micro_batch = orig_micro_batch - print("Returning to micro_batch=%d" % micro_batch) - assert len(grades) == end - start = end - with open(checkpoint, "wb") as f: - f.write(pickle.dumps((end, grades))) - print("%d/%d" % (end, df.shape[0])) - df['grade_deberta'] = grades - if os.path.exists(checkpoint): - os.remove(checkpoint) - return df - - -def test_chop_by_lengths(): - file = "h2oGPT.cleaned.human_bot.shorter.parquet" - df = pd.read_parquet(file).reset_index(drop=True) - df = count_human_bot_lengths(df) - df['rand'] = np.random.rand(df.shape[0]) - df['rand2'] = np.random.rand(df.shape[0]) - before_rows = df.shape[0] - # throw away short human/bot responses with higher likelihood - df = df[(df['len_human_mean'] > 20)] # never keep very short ones - df = df[(df['len_human_mean'] > 30) | (df['rand'] < 0.2)] - df = df[(df['len_human_mean'] > 50) | (df['rand'] < 0.5)] - df = df[(df['len_human_max'] < 10000)] # drop super long (basically only human) ones - df = df[(df['len_bot_mean'] > 20)] # never keep very short ones - df = df[(df['len_bot_mean'] > 30) | (df['rand2'] < 0.2)] - df = df[(df['len_bot_mean'] > 50) | (df['rand2'] < 0.5)] - df = df[(df['len_bot_max'] < 10000)] # drop super long (only bot) ones - assert df['text'].apply(lambda x: len(x)).max() < 20000 - df = df.drop(['rand', 'rand2'], axis=1) - after_rows = df.shape[0] - print("Chopped off %d out of %d rows due to length" % (before_rows - after_rows, before_rows)) - print(df.describe()) - df.to_parquet('h2oGPT.cleaned.chopped.human_bot.shorter.parquet', index=False) - - -def count_human_bot_lengths(df, human=None, bot=None): - import re - len_human_min = [] - len_human_max = [] - len_human_mean = [] - len_bot_min = [] - len_bot_max = [] - len_bot_mean = [] - human = human or ':' - bot = bot or ':' - for is_human in [True, False]: - what = human if is_human else bot - other = human if not is_human else bot - for i in range(df.shape[0]): - text = df.loc[i, 'text'] - assert isinstance(text, str) - starts = [m.start() for m in re.finditer(what, text)] - if len(starts) == 1: - starts = [starts[0], len(text)] # always go into for loop below - assert len(text) - list_what = [] - for ii in range(len(starts) - 1): - interaction = text[starts[ii]: starts[ii + 1]] - if other in interaction: - interaction = interaction[:interaction.find(other)] - interaction.strip() - list_what.append(interaction) - if not list_what: - list_what = [''] # handle corrupted data, very rare, leads to sizes 0 - if is_human: - len_human_min.append(min([len(x) for x in list_what])) - len_human_max.append(max([len(x) for x in list_what])) - len_human_mean.append(np.mean([len(x) for x in list_what])) - else: - len_bot_min.append(min([len(x) for x in list_what])) - len_bot_max.append(max([len(x) for x in list_what])) - len_bot_mean.append(np.mean([len(x) for x in list_what])) - df['len_human_min'] = 
len_human_min - df['len_human_max'] = len_human_max - df['len_human_mean'] = len_human_mean - df['len_bot_min'] = len_bot_min - df['len_bot_max'] = len_bot_max - df['len_bot_mean'] = len_bot_mean - np.random.seed(1234) - pd.set_option('display.max_columns', None) - print("Before chopping") - print(df.describe()) - return df - - -def test_grade(): - df = None - - file = "h2oGPT.cleaned.chopped.human_bot.shorter.parquet" - output_file = "h2oGPT.cleaned.graded1.human_bot.shorter.parquet" - if not os.path.exists(output_file): - if df is None: - df = pd.read_parquet(file).reset_index(drop=True) - df = add_textstat_grade(df) - min_grade = 10 - max_grade = 25 - df = df[df['flesch_grade'] >= min_grade] - df = df[df['flesch_grade'] <= max_grade] - print("After Flesch grade") - print(df.describe()) - df.to_parquet(output_file, index=False) - - file = output_file - output_file = "h2oGPT.cleaned.graded2.human_bot.shorter.parquet" - if not os.path.exists(output_file): - # slower than alt-profanity, do last, but do before deberta grading, since that's slower - if df is None: - df = pd.read_parquet(file).reset_index(drop=True) - df = add_better_profanity_flag(df) - before_rows = df.shape[0] - df = df[df['better_profanity'] == 0] - df = df.drop(['better_profanity'], axis=1) - after_rows = df.shape[0] - print("Dropped %d rows out of %d due to better_profanity" % (before_rows - after_rows, before_rows)) - print(df.describe()) - df.to_parquet(output_file, index=False) - - file = output_file - output_file = 'h2oGPT.cleaned.graded3.human_bot.shorter.parquet' - if not os.path.exists(output_file): - if df is None: - df = pd.read_parquet(file).reset_index(drop=True) - df = add_deberta_grade(df) - min_grade = 0.3 - max_grade = np.inf - before_rows = df.shape[0] - df = df[df['grade_deberta'] >= min_grade] - df = df[df['grade_deberta'] <= max_grade] - after_rows = df.shape[0] - print("Dropped %d rows out of %d due to deberta grade" % (before_rows - after_rows, before_rows)) - print("After DeBERTa grade") - print(df.describe()) - df.to_parquet(output_file, index=False) - - file = output_file - output_file = 'h2oGPT.cleaned.graded.human_bot.shorter.parquet' - if df is None: - df = pd.read_parquet(file).reset_index(drop=True) - df.to_parquet(output_file, index=False) - - -@pytest.mark.parametrize( - "fixup_personality, only_personality, deberta_grading", - [ - # [False, False, False], - # [True, True, False], - [True, False, False], - # [True, False, True], - ] -) -@pytest.mark.parametrize("prompt_type", ["llama2"]) -def test_add_open_assistant(fixup_personality, only_personality, deberta_grading, prompt_type, save_json=True): - """ - Flatten tree structure into one row per path from root to leaf - Also turn into human_bot prompting format: - : question\n: answer : question2\n: answer2 Etc. 
- Also saves a .json locally as side-effect - returns list of dicts, containing intput, prompt_type and source - """ - from datasets import load_dataset - data_file = "OpenAssistant/oasst1" - ds = load_dataset(data_file) - df = pd.concat([ds['train'].to_pandas(), ds['validation'].to_pandas()], axis=0) - rows = {} - message_ids = df['message_id'].values.tolist() - message_tree_ids = df['message_tree_id'].values.tolist() - parent_ids = df['parent_id'].values.tolist() - texts = df['text'].values.tolist() - roles = df['role'].values.tolist() - deleteds = df['deleted'].values.tolist() - for i in range(df.shape[0]): - # collect all trees - message_id = message_ids[i] - message_tree_id = message_tree_ids[i] - parent_id = parent_ids[i] - text = texts[i] - deleted = deleteds[i] - if deleted: - continue - if fixup_personality: - text = text.replace("Open Assistant", "h2oGPT") - text = text.replace("Open-Assistant", "h2oGPT") - text = text.replace("open-assistant", "h2oGPT") - text = text.replace("OpenAssistant", "h2oGPT") - text = text.replace("open assistant", "h2oGPT") - text = text.replace("Open Assistand", "h2oGPT") - text = text.replace("Open Assitant", "h2oGPT") - text = text.replace("Open Assistent", "h2oGPT") - text = text.replace("Open Assisstant", "h2oGPT") - text = text.replace("Open Assitent", "h2oGPT") - text = text.replace("Open Assitiant", "h2oGPT") - text = text.replace("Open Assistiant", "h2oGPT") - text = text.replace("Open Assitan ", "h2oGPT ") - text = text.replace("Open Assistan ", "h2oGPT ") - text = text.replace("Open Asistant", "h2oGPT") - text = text.replace("Open Assiant", "h2oGPT") - text = text.replace("Assistant", "h2oGPT") - text = text.replace("LAION AI", "H2O.ai") - text = text.replace("LAION-AI", "H2O.ai") - text = text.replace("LAION,", "H2O.ai,") - text = text.replace("LAION.ai", "H2O.ai") - text = text.replace("LAION.", "H2O.ai.") - text = text.replace("LAION", "H2O.ai") - - role = roles[i] - if prompt_type == "llama2": - new_data = ('[INST] ' if role == 'prompter' else ' [/INST] ') + text - if parent_id and role == 'prompter': - new_data = " " + new_data - elif prompt_type == "human_bot": - new_data = (': ' if role == 'prompter' else ': ') + text - else: - raise NotImplementedError("prompt_type not supported") - entry = dict(message_id=message_id, parent_id=parent_id, text=new_data) - if message_tree_id not in rows: - rows[message_tree_id] = [entry] - else: - rows[message_tree_id].append(entry) - - all_rows = [] - - for node_id in rows: - # order responses in tree, based on message/parent relationship - conversations = [] - - list_msgs = rows[node_id] - # find start - while len(list_msgs): - for i, leaf in enumerate(list_msgs): - found = False - parent_id = leaf['parent_id'] - if parent_id is None: - # conversation starter - conversations.append(leaf) - found = True - else: - for conv in conversations: - # find all conversations to add my message to - if parent_id in conv['message_id'] and parent_id != conv['message_id'][-len(parent_id):]: - # my message doesn't follow conversation - continue - if parent_id == conv['message_id'][-len(parent_id):]: - # my message follows conversation, but fork first, so another follow-on message can do same - conversations.append(conv.copy()) - if prompt_type == "llama2": - conv['text'] += f"""{leaf['text']}""" - elif prompt_type == "human_bot": - conv['text'] += f""" -{leaf['text']} -""" - else: - raise NotImplementedError - conv['message_id'] += leaf['message_id'] - found = True - break - if found: - # my content was used, so 
nuke from list - del list_msgs[i] - break - - # now reduce down to final conversations, find the longest chains of message ids - for i, conv in enumerate(conversations): - for j, conv2 in enumerate(conversations): - if i == j: - continue - if conv['message_id'] and conv2['message_id']: - assert conv['message_id'] != conv2['message_id'] - # delete the shorter conversation, if one contains the other - if conv['message_id'] in conv2['message_id']: - conv['message_id'] = None - if conv2['message_id'] in conv['message_id']: - conv2['message_id'] = None - conversations = [c for c in conversations if c['message_id']] - if only_personality: - if prompt_type == "human_bot": - all_rows.extend( - [dict(input=c['text'] + "\n:", output="", prompt_type='plain', source=data_file) for c in conversations if - 'h2oGPT' in c['text']]) - elif prompt_type == "llama2": - all_rows.extend( - [dict(input=c['text'] + - ("" if c['text'].rfind("[/INST]") > c['text'].rfind("[INST]") else " [/INST]"), - output="", prompt_type='plain', source=data_file) for c in conversations if - 'h2oGPT' in c['text']]) - else: - raise NotImplementedError - else: - if prompt_type == "human_bot": - all_rows.extend( - [dict(input=c['text'] + "\n:", output="", prompt_type='plain', source=data_file) for c in conversations - if - "What is H2O.ai" not in c['text']]) - elif prompt_type == "llama2": - all_rows.extend( - [dict(input=c['text'] + - (" " if c['text'].rfind("[/INST]") > c['text'].rfind("[INST]") else " [/INST]"), - output="", prompt_type='plain', source=data_file) for c in conversations if - "What is H2O.ai" not in c['text']]) - else: - raise NotImplementedError - - unhelpful = get_unhelpful_list() - all_rows = [x for x in all_rows if not any(u in x['input'] for u in unhelpful)] - personality = create_personality_data(prompt_type=prompt_type) - all_rows.extend(personality * 10) - np.random.seed(123) - np.random.shuffle(all_rows) - print(len(all_rows)) - if deberta_grading: - df = pd.DataFrame(all_rows) - df = df.rename(columns={'input': 'text'}) - df = add_deberta_grade(df) - df = df.rename(columns={'text': 'input'}) - drop = True - if drop: - min_grade = 0.3 - max_grade = np.inf - before_rows = df.shape[0] - df = df[df['grade_deberta'] >= min_grade] - df = df[df['grade_deberta'] <= max_grade] - after_rows = df.shape[0] - print("Dropped %d rows out of %d due to deberta grade" % (before_rows - after_rows, before_rows)) - print("After DeBERTa grade") - print(df.describe()) - all_rows = [] - for i in range(df.shape[0]): - all_rows.append( - dict( - input=df['input'].iloc[i], - output=df['output'].iloc[i], - source=df['source'].iloc[i], - prompt_type=df['prompt_type'].iloc[i], - grade_deberta=df['grade_deberta'].iloc[i], - ) - ) - if save_json: - data_file = data_file + \ - ("_h2ogpt" if fixup_personality else "") + \ - ("_only" if only_personality else "") + \ - ("_graded" if deberta_grading else "") + \ - ("_llama2_chat" if prompt_type == "llama2" else "") - for i in range(len(all_rows)): - all_rows[i]['id'] = i - with open(data_file.lower().replace("/", "_") + ".json", "w") as f: - f.write(json.dumps(all_rows, indent=2)) - return all_rows - - -def test_finalize_to_json(): - df = pd.read_parquet('h2oGPT.cleaned.graded.human_bot.shorter.parquet') - df = df.rename(columns={'text': 'input'}) - - print("Number of high-quality human_bot interactions: %s" % df.shape[0], flush=True) - - print("Adding open assistant data") - with open("openassistant_oasst1_h2ogpt_graded.json") as f: - open_assistant = json.loads(f.read()) - df = 
pd.concat([df, pd.DataFrame(open_assistant)], axis=0) - - def final_clean(df): - from better_profanity import profanity - profanity.load_censor_words_from_file("data/censor_words.txt") - df['profanity'] = parallel_apply( - df['input'], - lambda x: profanity.contains_profanity(x), - n_jobs=-1, - ) - return df[(df['profanity'] == 0)].reset_index(drop=True) - - print("Before cleaning: Number of final high-quality human_bot interactions: %s" % df.shape[0], flush=True) - df = final_clean(df) - print("After cleaning: Number of final high-quality human_bot interactions: %s" % df.shape[0], flush=True) - print(df.describe()) - print(df.shape) - row_list = [] - for i in range(df.shape[0]): - row_list.append( - dict( - input=df.loc[i, 'input'], - source=df.loc[i, 'source'], - prompt_type='plain', - ) - ) - np.random.seed(1234) - np.random.shuffle(row_list) - unhelpful = get_unhelpful_list() - row_list = [x for x in row_list if not any(u in x['input'] for u in unhelpful)] - for i in range(len(row_list)): - row_list[i]['id'] = i - row_list[i]['input'] = row_list[i]['input'].replace(" :", "\n:") - with open('h2ogpt-oig-oasst1-instruct-cleaned-v3.json', "w") as f: - f.write(json.dumps(row_list, indent=2)) - - -def create_personality_data(prompt_type="llama2"): - questions = [ - "What's your name?", - "What is your name?", - "What are you?", - "Who are you?", - "Do you have a name?", - "Who trained you?", - "Who created you?", - "Who made you?", - ] - answers = [ - "I'm h2oGPT, a large language model by H2O.ai.", - "I'm h2oGPT, a large language model by H2O.ai, the visionary leader in democratizing AI.", - "My name is h2oGPT. I'm a large language model by H2O.ai, the visionary leader in democratizing AI.", - "My name is h2oGPT. I'm a large language model trained by H2O.ai.", - "Hi! I'm h2oGPT, a large language model by H2O.ai.", - "Hi! 
I'm h2oGPT, a large language model by H2O.ai, the visionary leader in democratizing AI.", - ] - help = [ - "", - " How can I help you?", - " How may I assist you?", - " Nice to meet you.", - ] - import itertools - rows = [] - for pair in itertools.product(questions, answers, help): - rows.append( - dict(input=f"{pair[0]}", output=f"{pair[1]}{pair[2]}", prompt_type=prompt_type, source="H2O.ai") - ) - for q, a in [ - ("What is H2O.ai?", "H2O.ai is a technology company that aims to democratize AI and make it accessible to a broader audience by simplifying the process of creating and deploying machine learning models."), - ("What is h2o.ai?", "H2O.ai is a technology company that aims to democratize AI and make it accessible to a broader audience by simplifying the process of creating and deploying machine learning models."), - ("What is H2O?", "H2O.ai is a technology company that aims to democratize AI and make it accessible to a broader audience by simplifying the process of creating and deploying machine learning models."), - ("Who is h2o.ai?", "H2O.ai is a technology company that aims to democratize AI and make it accessible to a broader audience by simplifying the process of creating and deploying machine learning models."), - ("who is h2o.ai?", "H2O.ai is a technology company that aims to democratize AI and make it accessible to a broader audience by simplifying the process of creating and deploying machine learning models."), - ("who is h2o?", "H2O.ai is a technology company that aims to democratize AI and make it accessible to a broader audience by simplifying the process of creating and deploying machine learning models."), - ("what is H2O.ai?", "H2O.ai is the visionary leader in democratizing AI."), - ("who is H2O.ai?", "H2O.ai is the visionary leader in democratizing AI."), - ("who is H2O?", "H2O.ai is the visionary leader in democratizing AI."), - ("Who is h20?", "H2O.ai is the visionary leader in democratizing AI."), - ]: - rows.append(dict(input=q, output=a, prompt_type=prompt_type, source='H2O.ai')) - print(len(rows)) - with open("h2ogpt-personality.json", "w") as f: - f.write(json.dumps(rows, indent=2)) - return rows - - -def test_check_stats_data(): - filename = 'h2ogpt-oig-oasst1-instruct-cleaned-v3.json' - df = pd.read_json(filename) - - # get word stats - df['char_count'] = df['input'].apply(lambda x: len(x)) - import matplotlib.pyplot as plt - plt.figure(figsize=(10, 10)) - plt.hist(df['char_count'], bins=100) - chars_avg = np.mean(df['char_count']) - chars_median = np.median(df['char_count']) - plt.title("char_count avg: %s median: %s" % (chars_avg, chars_median)) - plt.savefig('chars_hist.png') - plt.close() - - # get tokenize stats for random sample of 1000 rows - from finetune import generate_and_tokenize_prompt - from loaders import get_loaders, get_tokenizer - from functools import partial - - llama_type = False - tokenizer_base_model = base_model = 'h2oai/h2ogpt-oasst1-512-20b' - model_loader, tokenizer_loader, conditional_type = ( - get_loaders(model_name=base_model, reward_type=False, llama_type=llama_type)) - local_files_only = False - resume_download = True - use_auth_token = False - tokenizer = get_tokenizer(tokenizer_loader, tokenizer_base_model, local_files_only, resume_download, use_auth_token) - prompt_type = 'plain' # trained with data already in human bot form - train_on_inputs = True - add_eos_token = False - cutoff_len = 512 # can choose 2048 - generate_and_tokenize_prompt_fun = partial(generate_and_tokenize_prompt, prompt_type=prompt_type, - 
train_on_inputs=train_on_inputs, add_eos_token=add_eos_token, - cutoff_len=cutoff_len, tokenizer=tokenizer) - from datasets import load_dataset - data = load_dataset("json", data_files={"train": filename}) - val_set_size = 0.90 - train_val = data["train"].train_test_split( - test_size=val_set_size, shuffle=True, seed=42 - ) - train_data = train_val["train"] - train_data = train_data.shuffle().map(generate_and_tokenize_prompt_fun, num_proc=os.cpu_count()) - - df_tokens = pd.DataFrame([len(x) for x in train_data['input_ids']], columns=['token_count']) - - plt.figure(figsize=(10, 10)) - plt.hist(df_tokens['token_count'], bins=100) - token_avg = np.mean(df_tokens['token_count']) - token_median = np.median(df_tokens['token_count']) - plt.title("token_count with cutoff=%s avg: %s median: %s" % (cutoff_len, token_avg, token_median)) - plt.savefig('token_hist_%s.png' % cutoff_len) - plt.close() - - -def get_unhelpful_list(): - # base versions - unhelpful = ["I'm sorry, I didn't quite understand your question, could you please rephrase it?", - "I'm sorry, but I don't understand your question. Could you please rephrase it?", - "I'm sorry, I don't quite understand your question", - "I'm sorry, I don't know", - "I'm sorry, but I don't know", - "I don't know anything", - "I do not know", - "I don't know", - "I don't know how", - "I do not know how", - "Can you please explain what you mean", - "please explain what you mean", - "please explain", - "I'm sorry, but I don't know how to tell a story. Can you please explain what you mean by", - "I'm sorry but I don't understand what you mean", - "I don't understand", - "I don't have the ability", - "I do not have the ability", - "I do not have", - "I am a language model,", - "I am a large language model,", - "I do not understand your question. Can you please try to make it clearer?", - "I'm sorry, but as an AI language model", - "I apologize, but I cannot rephrase text that I cannot understand. Your post is difficult to read and follow.", - "I apologize, but I am not h2oGPT. I am a language model developed by H2O.ai. How may I help you?", - "Sorry, but I am not an actual Linux shell, nor am I capable of emulating one. I am an open source chat assistant and would be glad t", - "I apologize, but I cannot perform the task you have requested.", - "I'm sorry, I cannot perform this task as I am an AI language model and do not have access", - "I'm sorry, I'm not sure what you're asking for here.", - "I'm not sure what you are asking", - "You need to provide more context", - ] - # reduced versions, with redundant parts, just to give context for where they came from - unhelpful += ["sorry, I didn't quite understand your question", - "I didn't quite understand your question", - "I didn't understand your question", - "I did not understand your question", - "I did not understand the question", - "could you please rephrase" - "could you rephrase" - "I do not understand your question.", - "I do not understand the question.", - "I do not understand that question.", - "Can you please try to make it clearer", - "Can you try to make it clearer", - "sorry, but as an AI language model", - "as an AI language model", - "I apologize, but I cannot", - "I cannot rephrase text", - "I cannot understand. Your post is difficult to read and follow." - "Your post is difficult to read and follow." 
- "I apologize, but I am", - "Sorry, but I am not ", - "nor am I capable", - "I am not capable of", - "I apologize, but I cannot perform the task you have requested", - "I cannot perform the task", - "I cannot complete the task", - "I'm sorry", - "I am sorry", - "do not have access", - "not sure what you're asking for", - "not sure what you are asking for", - "not sure what is being asked", - "I'm not sure what you are asking", - "not sure what you are asking", - "You need to provide more context", - "provide more context", - ] - unhelpful += ["As a large language model", - "cannot provide any information", - "As an artificial intelligence I do not have the capability", - "As an artificial intelligence I don't have the capability", - "As an artificial intelligence I can't", - "As an artificial intelligence I cannot", - "I am sorry but I do not understand", - "Can you please explain", - "(sorry couldn't resist)", - "(sorry could not resist)", - " :)", - " ;)", - " :-)", - " ;-)", - " lol ", - "Thanks so much!!!", - "Thank You :)!!!", - "Please try not to repeat", - "I am an AI language model", - "I'm a AI assistant that", - "I'm an AI assistant that", - "I am an AI assistant that", - "etc.", - "etc.etc.", - "etc. etc.", - "etc etc", - ] - return unhelpful - - -def test_check_unhelpful(): - # file = '/home/jon/Downloads/openassistant_oasst1_h2ogpt_graded.json' - file = '/home/jon/Downloads/openassistant_oasst1_h2ogpt_grades.json' - # file = 'h2ogpt-oig-oasst1-instruct-cleaned-v2.json' - - unhelpful = get_unhelpful_list() - # data = json.load(open(file, 'rt')) - df = pd.read_json(file) - - use_reward_score_threshold = False - use_bleu_threshold = False - use_sentence_sim = True - - from sacrebleu.metrics import BLEU - bleu = BLEU() - from nltk.translate.bleu_score import sentence_bleu - - def get_bleu(actual, expected_list): - # return bleu.sentence_score(actual, expected_list).score - return sentence_bleu(expected_list, actual) - - threshold = 0.0 - if use_reward_score_threshold: - df = df[df['grade_deberta'] > threshold] - - # back to as if original json load - data = df.to_dict(orient='records') - bads = {} - string_all = str(data) - for sub in unhelpful: - bads[sub] = string_all.count(sub) - bads = {k: v for k, v in bads.items() if v > 0} - import pprint - pp = pprint.PrettyPrinter(indent=4) - pp.pprint(bads) - - total_bads = sum(list(bads.values())) - print('total_bads: %s' % total_bads, flush=True) - - # check just bot - import re - convs = [[x.strip() for x in re.split(r'%s|%s' % (human, bot), y['input']) if x.strip()] for y in data] - humans = [[x for i, x in enumerate(y) if i % 2 == 0] for y in convs] - bots = [[x for i, x in enumerate(y) if i % 2 == 1] for y in convs] - - # FIXME: apply back to json etc., just see for now - bleu_threshold = 0.9 - if use_bleu_threshold: - bots = [[x for x in y if get_bleu(x, unhelpful) < bleu_threshold] for y in tqdm(bots)] - - cosine_sim_threshold = 0.8 - if use_sentence_sim: - # pip install sentence_transformers-2.2.2 - from sentence_transformers import SentenceTransformer - # sent_model = 'bert-base-nli-mean-tokens' - # sent_model = 'nli-distilroberta-base-v2' - sent_model = 'all-MiniLM-L6-v2' - model = SentenceTransformer(sent_model) - sentence_embeddings = model.encode(unhelpful) - from sklearn.metrics.pairwise import cosine_similarity - bots = [x for x in tqdm(bots) if - np.max(cosine_similarity(model.encode(x), sentence_embeddings)) < cosine_sim_threshold] - - bads_bots = {} - string_all = str(bots) - for sub in unhelpful: - bads_bots[sub] = 
string_all.count(sub) - bads_bots = {k: v for k, v in bads_bots.items() if v > 0} - import pprint - pp = pprint.PrettyPrinter(indent=4) - pp.pprint(bads_bots) - - total_bads_bots = sum(list(bads_bots.values())) - print('threshold: %g use_bleu_threshold: %g total_bads_bots: %s total_bots: %s total_humans: %s' % ( - threshold, use_bleu_threshold, total_bads_bots, len(bots), len(humans)), flush=True) - - # assert len(bads) == 0, bads - assert len(bads_bots) == 0, bads_bots - - -def test_fortune2000_personalized(): - row_list = [] - import glob - if not os.path.isdir("wikitext"): - raise RuntimeError("download https://github.com/h2oai/h2ogpt/files/11423008/wikitext.zip and unzip") - for file in glob.glob("wikitext/*.txt"): - with open(file, "r") as f: - blob = f.read() - N = 512 * 4 - row_list.extend([{'input': s, 'prompt_type': 'plain', 'source': "%s" % os.path.basename(file)} - for s in get_sentences(blob, N) if s]) - personality = create_personality_data() - import copy - for i in range(10): - row_list.extend(copy.deepcopy(personality)) - np.random.seed(123) - np.random.shuffle(row_list) - for i in range(len(row_list)): - row_list[i]['id'] = i - for i in range(len(row_list)): - assert row_list[i]['id'] == i - with open("h2ogpt-fortune2000-personalized.json", "w") as ff: - ff.write(json.dumps(row_list, indent=2)) diff --git a/spaces/awacke1/BigCodeStackSearch1215/app.py b/spaces/awacke1/BigCodeStackSearch1215/app.py deleted file mode 100644 index b213f394f6103a9b0262f808ace1de34085f76ce..0000000000000000000000000000000000000000 --- a/spaces/awacke1/BigCodeStackSearch1215/app.py +++ /dev/null @@ -1,79 +0,0 @@ -import gradio as gr -from huggingface_hub import hf_hub_download -import json -import gzip - - -usernames = {} - - -filepath = hf_hub_download(repo_id="bigcode/the-stack-username-to-repo", filename="username_to_repo.json.gz", repo_type="dataset", revision="v1.1") -with gzip.open(filepath, 'r') as f: - usernames["v1.1"] = json.loads(f.read().decode('utf-8')) - -filepath = hf_hub_download(repo_id="bigcode/the-stack-username-to-repo", filename="username_to_repo.json.gz", repo_type="dataset") -with gzip.open(filepath, 'r') as f: - usernames["v1.0"] = json.loads(f.read().decode('utf-8')) - -text = """\ -![](https://huggingface.co/spaces/bigcode/in-the-stack/resolve/main/banner.png) -**_The Stack is an open governance interface between the AI community and the open source community._** -# Stack Search By Keyword -URL: [The Stack](https://huggingface.co/datasets/bigcode/the-stack), This search engine will match your search term and find up to 100 matches by keyword for example BeatSaber. -""" + """\ -""" - -def check_username(username, version): - output_md = "" - if username in usernames[version] and len(usernames[version][username])>0: - repos = usernames[version][username] - repo_word = "repository" if len(repos)==1 else "repositories" - output_md += f"**Yes**, there is code from **{len(repos)} {repo_word}** in The Stack:\n\n" - for repo in repos: - output_md += f"_{repo}_\n\n" - else: - output_md += "**No**, your code is not in The Stack." 
- return output_md.strip() - -def check_keyword(username, version): - output_md = "" - maxhitcount = 100 - maxrepos = 70000000 #6M user entries * up to 18 per user - currenthitcount=0 - currentrepos=0 - for repolist in usernames[version]: - #print(repolist) - repos = usernames[version][repolist] - repo_word = "repository" if len(repos)==1 else "repositories" - #output_md += f"**Yes**, there is code from **{len(repos)} {repo_word}** in The Stack:\n\n" - for repo in repos: - currentrepos += 1 - if currentrepos > maxrepos: - output_md += f"**Found maximum repos**, Count: **{currentrepos}** in The Stack:\n\n" - return output_md.strip() - if username in repo: - currenthitcount += 1 - output_md += f"_{repo}_\n\n" - if currenthitcount > maxhitcount: - output_md += f"**Found maximum hits**, Count: **{currenthitcount}** in The Stack:\n\n" - return output_md.strip() - else: - output_md += "**Searched All Repos**, Above found in The Stack." - return output_md.strip() - -with gr.Blocks() as demo: - with gr.Row(): - _, colum_2, _ = gr.Column(scale=1), gr.Column(scale=6), gr.Column(scale=1) - with colum_2: - gr.Markdown(text) - version = gr.Dropdown(["v1.1", "v1.0"], label="The Stack version:", value="v1.1") - username = gr.Text("", label="Keyword to match against repos e.g. BeatSaber") - check_button = gr.Button("Check!") - - repos = gr.Markdown() - - #check_button.click(check_username, [username, version], repos) - check_button.click(check_keyword, [username, version], repos) - - -demo.launch() \ No newline at end of file diff --git a/spaces/awacke1/CodeGen-Salesforce-codegen-350M-mono/README.md b/spaces/awacke1/CodeGen-Salesforce-codegen-350M-mono/README.md deleted file mode 100644 index fc50633a09bd4cbd8f897da976871b9b840e0a8b..0000000000000000000000000000000000000000 --- a/spaces/awacke1/CodeGen-Salesforce-codegen-350M-mono/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: CodeGen Salesforce Codegen 350M Mono -emoji: 💩 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.20.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/Streamlit.Funny.Feedback.Upvote.Downvote/app.py b/spaces/awacke1/Streamlit.Funny.Feedback.Upvote.Downvote/app.py deleted file mode 100644 index 69e55e45f9c08be414bbb4a96631ca9069f24d83..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Streamlit.Funny.Feedback.Upvote.Downvote/app.py +++ /dev/null @@ -1,69 +0,0 @@ -import streamlit as st -import pandas as pd -import json -import os - -def display_table(vote_data): - st.title("The Great Debate: Vote on the Funniest Questions!") - - data = [ - (1, "😂", "How many cups of coffee do you need to function like a normal human being?", "[Wikipedia](https://en.wikipedia.org/wiki/Coffee)"), - (2, "🤔", "If animals could talk, which species do you think would be the most annoying?", "[Wikipedia](https://en.wikipedia.org/wiki/Animal_communication)"), - (3, "🤫", "What's the craziest conspiracy theory you've ever heard?", "[Wikipedia](https://en.wikipedia.org/wiki/Conspiracy_theory)"), - (4, "🤣", "What's the worst pickup line you've ever heard or used?", "[Wikipedia](https://en.wikipedia.org/wiki/Pick-up_line)"), - (5, "😜", "If you were a superhero, what would your superpower be?", "[Wikipedia](https://en.wikipedia.org/wiki/Superpower_(ability))"), - (6, "🤯", "If you could time travel, what period in history would you go to and why?", "[Wikipedia](https://en.wikipedia.org/wiki/Time_travel)"), - (7, "😝", 
"What's the weirdest thing you've ever eaten?", "[Wikipedia](https://en.wikipedia.org/wiki/List_of_delicacies)"), - (8, "🤪", "What's the most embarrassing thing that's ever happened to you in public?", "[Wikipedia](https://en.wikipedia.org/wiki/Embarrassment)"), - (9, "😈", "If you could be any movie villain, who would you choose and why?", "[Wikipedia](https://en.wikipedia.org/wiki/Villain)"), - (10, "🙃", "What's the most useless talent you have?", "[Wikipedia](https://en.wikipedia.org/wiki/Talent_(human))"), - ] - - for row in data: - question_id = f"Question {row[0]}" - emoji, title, description = row[1], row[2], row[3] - upvotes, downvotes = count_votes(vote_data, question_id) - - col1, col2, col3, col4 = st.columns([1, 3, 1, 1]) - - col1.write(emoji) - col2.write(f"{title}\n{description}") - col3.write(f"👍 {upvotes}") - col4.write(f"👎 {downvotes}") - - upvote_button = col3.button(f"Upvote {question_id}") - downvote_button = col4.button(f"Downvote {question_id}") - - if upvote_button: - update_vote_log(question_id, 'upvote') - st.experimental_rerun() - - if downvote_button: - update_vote_log(question_id, 'downvote') - st.experimental_rerun() - -def update_vote_log(term, vote_type): - with open('vote.log.txt', 'a') as f: - f.write(json.dumps({'term': term, 'vote': vote_type}) + '\n') - -def load_vote_log(): - vote_data = [] - - if os.path.exists('vote.log.txt'): - with open('vote.log.txt', 'r') as f: - for line in f.readlines(): - vote_data.append(json.loads(line.strip())) - return vote_data - -def count_votes(vote_data, term): - upvotes = sum(1 for vote in vote_data if vote['term'] == term and vote['vote'] == 'upvote') - downvotes = sum(1 for vote in vote_data if vote['term'] == term and vote['vote'] == 'downvote') - return upvotes, downvotes - -def main(): - vote_data = load_vote_log() - - display_table(vote_data) - -if __name__ == "__main__": - main() diff --git a/spaces/banana-projects/datasets-card-creator/tailwind.config.js b/spaces/banana-projects/datasets-card-creator/tailwind.config.js deleted file mode 100644 index ab1f9834474d66cbcf449be19b004429d1762413..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/datasets-card-creator/tailwind.config.js +++ /dev/null @@ -1,838 +0,0 @@ - -module.exports = { - purge: ['./src/**/*.js', './public/index.html'], - darkMode: false, // or 'media' or 'class' - prefix: '', - important: false, - separator: ':', - theme: { - extend: { - spacing: { - '30': '7rem', - '68': '17rem', - '70': '17.5rem', - '72': '18rem', - '76': '19rem', - '78': '19.5rem', - '80': '20rem', - '84': '21rem', - '88': '22rem', - '92': '23rem', - '96': '24rem', - '100': '25rem', - '104': '26rem', - '108': '27rem', - '112': '28rem', - '116': '29rem', - '120': '30rem', - '132': '33rem', - '144': '36rem', - '148': '37rem', - '152': '38rem', - '156': '39rem', - '168': '42rem', - '180': '45rem', - '192': '48rem', - '196': '49rem', - '200': '50rem', - }, - }, - maxWidth: { - '1/4': '25%', - '1/2': '50%', - '3/4': '75%', - }, - zIndex: { - '-10': '-10', - }, - spinner: (theme) => ({ - DEFAULT: { - color: '#dae1e7', // color you want to make the spinner - size: '1em', // size of the spinner (used for both width and height) - border: '2px', // border-width of the spinner (shouldn't be bigger than half the spinner's size) - speed: '500ms', // the speed at which the spinner should rotate - }, - // md: { - // color: theme('colors.red.500', 'red'), - // size: '2em', - // border: '2px', - // speed: '500ms', - // }, - }), - height: { - xxs: '50px', - xs: '80px', - sm: 
'150px', - smm: '170px', - md: '500px', - lg: '600px', - xl: '700px', - }, - screens: { - xs: '500px', - sm: '640px', - md: '768px', - lg: '1024px', - xl: '1280px', - xxl: '1650px', - }, - colors: { - transparent: 'transparent', - current: 'currentColor', - - black: '#000', - white: '#fff', - - gray: { - 100: '#f7fafc', - 200: '#edf2f7', - 300: '#e2e8f0', - 400: '#cbd5e0', - 500: '#a0aec0', - 600: '#718096', - 700: '#4a5568', - 800: '#2d3748', - 900: '#1a202c', - }, - red: { - 100: '#fff5f5', - 200: '#fed7d7', - 300: '#feb2b2', - 400: '#fc8181', - 500: '#f56565', - 600: '#e53e3e', - 700: '#c53030', - 800: '#9b2c2c', - 900: '#742a2a', - }, - orange: { - 100: '#fffaf0', - 200: '#feebc8', - 300: '#fbd38d', - 400: '#f6ad55', - 500: '#ed8936', - 600: '#dd6b20', - 700: '#c05621', - 800: '#9c4221', - 900: '#7b341e', - }, - yellow: { - 100: '#fffff0', - 200: '#fefcbf', - 300: '#faf089', - 400: '#f6e05e', - 500: '#ecc94b', - 600: '#d69e2e', - 700: '#b7791f', - 800: '#975a16', - 900: '#744210', - }, - green: { - 100: '#f0fff4', - 200: '#c6f6d5', - 300: '#9ae6b4', - 400: '#68d391', - 500: '#48bb78', - 600: '#38a169', - 700: '#2f855a', - 800: '#276749', - 900: '#22543d', - }, - teal: { - 100: '#e6fffa', - 200: '#b2f5ea', - 300: '#81e6d9', - 400: '#4fd1c5', - 500: '#38b2ac', - 600: '#319795', - 700: '#2c7a7b', - 800: '#285e61', - 900: '#234e52', - }, - blue: { - 100: '#ebf8ff', - 200: '#bee3f8', - 300: '#90cdf4', - 400: '#63b3ed', - 500: '#4299e1', - 600: '#3182ce', - 700: '#2b6cb0', - 800: '#2c5282', - 900: '#2a4365', - }, - indigo: { - 100: '#ebf4ff', - 200: '#c3dafe', - 300: '#a3bffa', - 400: '#7f9cf5', - 500: '#667eea', - 600: '#5a67d8', - 700: '#4c51bf', - 800: '#434190', - 900: '#3c366b', - }, - purple: { - 100: '#faf5ff', - 200: '#e9d8fd', - 300: '#d6bcfa', - 400: '#b794f4', - 500: '#9f7aea', - 600: '#805ad5', - 700: '#6b46c1', - 800: '#553c9a', - 900: '#44337a', - }, - pink: { - 100: '#fff5f7', - 200: '#fed7e2', - 300: '#fbb6ce', - 400: '#f687b3', - 500: '#ed64a6', - 600: '#d53f8c', - 700: '#b83280', - 800: '#97266d', - 900: '#702459', - }, - }, - spacing: { - px: '1px', - '0': '0', - '1': '0.25rem', - '2': '0.5rem', - '3': '0.75rem', - '4': '1rem', - '5': '1.25rem', - '6': '1.5rem', - '8': '2rem', - '10': '2.5rem', - '12': '3rem', - '14': '3.5rem', - '16': '4rem', - '20': '5rem', - '24': '6rem', - '32': '8rem', - '40': '10rem', - '48': '12rem', - '56': '14rem', - '64': '16rem', - '72': '18rem', - '84': '21rem', - '96': '24rem', - }, - backgroundColor: theme => theme('colors'), - backgroundPosition: { - bottom: 'bottom', - center: 'center', - left: 'left', - 'left-bottom': 'left bottom', - 'left-top': 'left top', - right: 'right', - 'right-bottom': 'right bottom', - 'right-top': 'right top', - top: 'top', - }, - backgroundSize: { - auto: 'auto', - cover: 'cover', - contain: 'contain', - }, - borderColor: theme => ({ - ...theme('colors'), - DEFAULT: theme('colors.gray.300', 'currentColor'), - }), - borderRadius: { - none: '0', - sm: '0.125rem', - DEFAULT: '0.25rem', - md: '0.375rem', - lg: '0.5rem', - xl: '1rem', - full: '9999px', - }, - borderWidth: { - DEFAULT: '1px', - '0': '0', - '1': '1px', - '2': '2px', - '4': '4px', - '8': '8px', - }, - boxShadow: { - xs: '0 0 0 1px rgba(0, 0, 0, 0.05)', - sm: '0 1px 2px 0 rgba(0, 0, 0, 0.05)', - DEFAULT: '0 1px 3px 0 rgba(0, 0, 0, 0.1), 0 1px 2px 0 rgba(0, 0, 0, 0.06)', - md: '0 4px 6px -1px rgba(0, 0, 0, 0.1), 0 2px 4px -1px rgba(0, 0, 0, 0.06)', - lg: '0 10px 15px -3px rgba(0, 0, 0, 0.1), 0 4px 6px -2px rgba(0, 0, 0, 0.05)', - xl: '0 20px 25px -5px 
rgba(0, 0, 0, 0.1), 0 10px 10px -5px rgba(0, 0, 0, 0.04)', - '2xl': '0 25px 50px -12px rgba(0, 0, 0, 0.25)', - '3xl': '0 30px 75px -20px rgba(0, 0, 0, 0.5)', - inner: 'inset 0 2px 4px 0 rgba(0, 0, 0, 0.06)', - outline: '0 0 0 3px rgba(66, 153, 225, 0.5)', - none: 'none', - }, - container: { - center: true, - }, - cursor: { - auto: 'auto', - DEFAULT: 'DEFAULT', - pointer: 'pointer', - wait: 'wait', - text: 'text', - move: 'move', - 'not-allowed': 'not-allowed', - }, - fill: { - current: 'currentColor', - }, - flex: { - '1': '1 1 0%', - auto: '1 1 auto', - initial: '0 1 auto', - none: 'none', - }, - flexGrow: { - '0': '0', - DEFAULT: '1', - }, - flexShrink: { - '0': '0', - DEFAULT: '1', - }, - fontFamily: { - sans: [ - 'system-ui', - '-apple-system', - 'BlinkMacSystemFont', - '"Segoe UI"', - 'Roboto', - '"Helvetica Neue"', - 'Arial', - '"Noto Sans"', - 'sans-serif', - '"Apple Color Emoji"', - '"Segoe UI Emoji"', - '"Segoe UI Symbol"', - '"Noto Color Emoji"', - ], - serif: ['Georgia', 'Cambria', '"Times New Roman"', 'Times', 'serif'], - mono: ['Menlo', 'Monaco', 'Consolas', '"Liberation Mono"', '"Courier New"', 'monospace'], - }, - fontSize: { - xs: '0.75rem', - sm: '0.875rem', - md: '0.92rem', - base: '1rem', - lg: '1.125rem', - xl: '1.25rem', - '2xl': '1.5rem', - '3xl': '1.875rem', - '4xl': '2.25rem', - '5xl': '3rem', - '6xl': '4rem', - }, - fontWeight: { - hairline: '100', - thin: '200', - light: '300', - normal: '400', - medium: '500', - semibold: '600', - bold: '700', - extrabold: '800', - black: '900', - }, - height: theme => ({ - auto: 'auto', - ...theme('spacing'), - full: '100%', - screen: '100vh', - }), - inset: { - '0': '0', - auto: 'auto', - }, - letterSpacing: { - tighter: '-0.05em', - tight: '-0.025em', - normal: '0', - wide: '0.025em', - wider: '0.05em', - widest: '0.1em', - }, - lineHeight: { - none: '1', - tight: '1.25', - snug: '1.375', - normal: '1.5', - relaxed: '1.625', - loose: '2', - '3': '.75rem', - '4': '1rem', - '5': '1.25rem', - '6': '1.5rem', - '7': '1.75rem', - '8': '2rem', - '9': '2.25rem', - '10': '2.5rem', - }, - listStyleType: { - none: 'none', - disc: 'disc', - decimal: 'decimal', - }, - margin: (theme, { negative }) => ({ - auto: 'auto', - ...theme('spacing'), - ...negative(theme('spacing')), - }), - maxHeight: { - none: 'none', - xxs: '10rem', - xs: '20rem', - sm: '24rem', - md: '28rem', - lg: '32rem', - xl: '36rem', - '2xl': '42rem', - '3xl': '48rem', - '4xl': '56rem', - '5xl': '64rem', - '55xl': '68rem', - '6xl': '72rem', - '65xl': '76rem', - '7xl': '80rem', - '75xl': '84rem', - '8xl': '88rem', - '9xl': '96rem', - '10xl': '104rem', - full: '100%', - screen: '100vh', - }, - maxWidth: (theme, { breakpoints }) => ({ - none: 'none', - xs: '20rem', - sm: '24rem', - md: '28rem', - lg: '32rem', - xl: '36rem', - '2xl': '42rem', - '3xl': '48rem', - '4xl': '56rem', - '5xl': '64rem', - '55xl': '68rem', - '6xl': '72rem', - '65xl': '76rem', - '7xl': '80rem', - '75xl': '84rem', - '8xl': '88rem', - '9xl': '96rem', - '10xl': '104rem', - full: '100%', - ...breakpoints(theme('screens')), - }), - minHeight: { - '0': '0', - full: '100%', - screen: '100vh', - }, - minWidth: { - '0': '0', - xs: '20rem', - sm: '24rem', - md: '28rem', - lg: '32rem', - xl: '36rem', - '2xl': '42rem', - '3xl': '48rem', - '4xl': '56rem', - '5xl': '64rem', - '55xl': '68rem', - '6xl': '72rem', - '65xl': '76rem', - '7xl': '80rem', - '75xl': '84rem', - '8xl': '88rem', - '9xl': '96rem', - '10xl': '104rem', - full: '100%', - }, - objectPosition: { - bottom: 'bottom', - center: 'center', - left: 'left', - 
'left-bottom': 'left bottom', - 'left-top': 'left top', - right: 'right', - 'right-bottom': 'right bottom', - 'right-top': 'right top', - top: 'top', - }, - opacity: { - '0': '0', - '25': '0.25', - '50': '0.5', - '75': '0.75', - '100': '1', - }, - order: { - first: '-9999', - last: '9999', - none: '0', - '1': '1', - '2': '2', - '3': '3', - '4': '4', - '5': '5', - '6': '6', - '7': '7', - '8': '8', - '9': '9', - '10': '10', - '11': '11', - '12': '12', - }, - padding: theme => theme('spacing'), - placeholderColor: theme => theme('colors'), - stroke: { - current: 'currentColor', - }, - strokeWidth: { - '0': '0', - '1': '1', - '2': '2', - }, - textColor: theme => theme('colors'), - width: theme => ({ - auto: 'auto', - ...theme('spacing'), - '1/2': '50%', - '1/3': '33.333333%', - '2/3': '66.666667%', - '1/4': '25%', - '2/4': '50%', - '3/4': '75%', - '1/5': '20%', - '2/5': '40%', - '3/5': '60%', - '4/5': '80%', - '1/6': '16.666667%', - '2/6': '33.333333%', - '3/6': '50%', - '4/6': '66.666667%', - '5/6': '83.333333%', - '1/12': '8.333333%', - '2/12': '16.666667%', - '3/12': '25%', - '4/12': '33.333333%', - '5/12': '41.666667%', - '6/12': '50%', - '7/12': '58.333333%', - '8/12': '66.666667%', - '9/12': '75%', - '10/12': '83.333333%', - '11/12': '91.666667%', - full: '100%', - screen: '100vw', - }), - zIndex: { - auto: 'auto', - '0': '0', - '10': '10', - '20': '20', - '30': '30', - '40': '40', - '50': '50', - }, - gap: theme => theme('spacing'), - gridTemplateColumns: { - none: 'none', - '1': 'repeat(1, minmax(0, 1fr))', - '2': 'repeat(2, minmax(0, 1fr))', - '3': 'repeat(3, minmax(0, 1fr))', - '4': 'repeat(4, minmax(0, 1fr))', - '5': 'repeat(5, minmax(0, 1fr))', - '6': 'repeat(6, minmax(0, 1fr))', - '7': 'repeat(7, minmax(0, 1fr))', - '8': 'repeat(8, minmax(0, 1fr))', - '9': 'repeat(9, minmax(0, 1fr))', - '10': 'repeat(10, minmax(0, 1fr))', - '11': 'repeat(11, minmax(0, 1fr))', - '12': 'repeat(12, minmax(0, 1fr))', - }, - gridColumn: { - auto: 'auto', - 'span-1': 'span 1 / span 1', - 'span-2': 'span 2 / span 2', - 'span-3': 'span 3 / span 3', - 'span-4': 'span 4 / span 4', - 'span-5': 'span 5 / span 5', - 'span-6': 'span 6 / span 6', - 'span-7': 'span 7 / span 7', - 'span-8': 'span 8 / span 8', - 'span-9': 'span 9 / span 9', - 'span-10': 'span 10 / span 10', - 'span-11': 'span 11 / span 11', - 'span-12': 'span 12 / span 12', - }, - gridColumnStart: { - auto: 'auto', - '1': '1', - '2': '2', - '3': '3', - '4': '4', - '5': '5', - '6': '6', - '7': '7', - '8': '8', - '9': '9', - '10': '10', - '11': '11', - '12': '12', - '13': '13', - }, - gridColumnEnd: { - auto: 'auto', - '1': '1', - '2': '2', - '3': '3', - '4': '4', - '5': '5', - '6': '6', - '7': '7', - '8': '8', - '9': '9', - '10': '10', - '11': '11', - '12': '12', - '13': '13', - }, - gridTemplateRows: { - none: 'none', - '1': 'repeat(1, minmax(0, 1fr))', - '2': 'repeat(2, minmax(0, 1fr))', - '3': 'repeat(3, minmax(0, 1fr))', - '4': 'repeat(4, minmax(0, 1fr))', - '5': 'repeat(5, minmax(0, 1fr))', - '6': 'repeat(6, minmax(0, 1fr))', - }, - gridRow: { - auto: 'auto', - 'span-1': 'span 1 / span 1', - 'span-2': 'span 2 / span 2', - 'span-3': 'span 3 / span 3', - 'span-4': 'span 4 / span 4', - 'span-5': 'span 5 / span 5', - 'span-6': 'span 6 / span 6', - }, - gridRowStart: { - auto: 'auto', - '1': '1', - '2': '2', - '3': '3', - '4': '4', - '5': '5', - '6': '6', - '7': '7', - }, - gridRowEnd: { - auto: 'auto', - '1': '1', - '2': '2', - '3': '3', - '4': '4', - '5': '5', - '6': '6', - '7': '7', - }, - transformOrigin: { - center: 'center', - top: 'top', - 
'top-right': 'top right', - right: 'right', - 'bottom-right': 'bottom right', - bottom: 'bottom', - 'bottom-left': 'bottom left', - left: 'left', - 'top-left': 'top left', - }, - scale: { - '0': '0', - '50': '.5', - '75': '.75', - '90': '.9', - '95': '.95', - '100': '1', - '105': '1.05', - '110': '1.1', - '125': '1.25', - '150': '1.5', - }, - rotate: { - '-180': '-180deg', - '-90': '-90deg', - '-45': '-45deg', - '0': '0', - '45': '45deg', - '90': '90deg', - '180': '180deg', - }, - translate: (theme, { negative }) => ({ - ...theme('spacing'), - ...negative(theme('spacing')), - '-full': '-100%', - '-1/2': '-50%', - '1/2': '50%', - full: '100%', - }), - skew: { - '-12': '-12deg', - '-6': '-6deg', - '-3': '-3deg', - '0': '0', - '3': '3deg', - '6': '6deg', - '12': '12deg', - }, - transitionProperty: { - none: 'none', - all: 'all', - DEFAULT: 'background-color, border-color, color, fill, stroke, opacity, box-shadow, transform', - colors: 'background-color, border-color, color, fill, stroke', - opacity: 'opacity', - shadow: 'box-shadow', - transform: 'transform', - }, - transitionTimingFunction: { - linear: 'linear', - in: 'cubic-bezier(0.4, 0, 1, 1)', - out: 'cubic-bezier(0, 0, 0.2, 1)', - 'in-out': 'cubic-bezier(0.4, 0, 0.2, 1)', - }, - transitionDuration: { - '75': '75ms', - '100': '100ms', - '150': '150ms', - '200': '200ms', - '300': '300ms', - '500': '500ms', - '700': '700ms', - '1000': '1000ms', - }, - }, - variants: { - accessibility: ['responsive', 'focus'], - alignContent: ['responsive'], - alignItems: ['responsive'], - alignSelf: ['responsive'], - appearance: ['responsive'], - backgroundAttachment: ['responsive'], - backgroundColor: ['responsive', 'hover', 'focus'], - backgroundPosition: ['responsive'], - backgroundRepeat: ['responsive'], - backgroundSize: ['responsive'], - borderCollapse: ['responsive'], - borderColor: ['responsive', 'hover', 'focus', 'active'], - borderRadius: ['responsive', 'hover', 'focus', 'active'], - borderStyle: ['responsive', 'hover', 'focus', 'active'], - borderWidth: ['responsive', 'hover', 'focus', 'active'], - boxShadow: ['responsive', 'hover', 'focus', 'active'], - boxSizing: ['responsive'], - cursor: ['responsive'], - display: ['responsive'], - fill: ['responsive'], - flex: ['responsive'], - flexDirection: ['responsive'], - flexGrow: ['responsive'], - flexShrink: ['responsive'], - flexWrap: ['responsive'], - float: ['responsive'], - clear: ['responsive'], - fontFamily: ['responsive'], - fontSize: ['responsive'], - fontSmoothing: ['responsive'], - fontStyle: ['responsive'], - fontWeight: ['responsive', 'hover', 'focus'], - height: ['responsive'], - inset: ['responsive'], - justifyContent: ['responsive'], - letterSpacing: ['responsive'], - lineHeight: ['responsive'], - listStylePosition: ['responsive'], - listStyleType: ['responsive'], - margin: ['responsive'], - maxHeight: ['responsive'], - maxWidth: ['responsive'], - minHeight: ['responsive'], - minWidth: ['responsive'], - objectFit: ['responsive'], - objectPosition: ['responsive'], - opacity: ['responsive', 'hover', 'focus'], - order: ['responsive'], - outline: ['responsive', 'focus'], - overflow: ['responsive'], - padding: ['responsive'], - placeholderColor: ['responsive', 'focus'], - pointerEvents: ['responsive'], - position: ['responsive'], - resize: ['responsive'], - spinner: ['responsive'], - stroke: ['responsive'], - strokeWidth: ['responsive'], - tableLayout: ['responsive', 'hover', 'focus'], - textAlign: ['responsive'], - textColor: ['responsive', 'hover', 'focus'], - textDecoration: 
['responsive', 'hover', 'focus'], - textTransform: ['responsive'], - userSelect: ['responsive'], - verticalAlign: ['responsive'], - visibility: ['responsive'], - whitespace: ['responsive'], - width: ['responsive'], - wordBreak: ['responsive'], - zIndex: ['responsive'], - gap: ['responsive'], - gridAutoFlow: ['responsive'], - gridTemplateColumns: ['responsive'], - gridColumn: ['responsive'], - gridColumnStart: ['responsive'], - gridColumnEnd: ['responsive'], - gridTemplateRows: ['responsive'], - gridRow: ['responsive'], - gridRowStart: ['responsive'], - gridRowEnd: ['responsive'], - transform: ['responsive'], - transformOrigin: ['responsive'], - scale: ['responsive', 'hover', 'focus'], - rotate: ['responsive', 'hover', 'focus'], - translate: ['responsive', 'hover', 'focus'], - skew: ['responsive', 'hover', 'focus'], - transitionProperty: ['responsive', 'hover', 'focus'], - transitionTimingFunction: ['responsive', 'hover', 'focus'], - transitionDuration: ['responsive', 'hover', 'focus'], - }, - corePlugins: { - preflight: false - }, - plugins: [ - require('tailwindcss-grid')({ - grids: [2, 3, 4, 5, 6, 8, 10, 12], - gaps: { - 0: '0', - 4: '1rem', - 8: '2rem', - 16: '4rem', - 32: '8rem', - '4-x': '1rem', - '4-y': '1rem', - }, - autoMinWidths: { - '16': '4rem', - '24': '6rem', - '300px': '300px', - }, - variants: ['responsive'], - }), - ], -} - diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/data/data_util.py b/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/data/data_util.py deleted file mode 100644 index 328c3cb4b56160da12c12acdd7f0c5f31d11b24f..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/data/data_util.py +++ /dev/null @@ -1,313 +0,0 @@ -import cv2 -import numpy as np -import torch -from os import path as osp -from torch.nn import functional as F - -from basicsr.data.transforms import mod_crop -from basicsr.utils import img2tensor, scandir - - -def read_img_seq(path, require_mod_crop=False, scale=1, return_imgname=False): - """Read a sequence of images from a given folder path. - - Args: - path (list[str] | str): List of image paths or image folder path. - require_mod_crop (bool): Require mod crop for each image. - Default: False. - scale (int): Scale factor for mod_crop. Default: 1. - return_imgname(bool): Whether return image names. Default False. - - Returns: - Tensor: size (t, c, h, w), RGB, [0, 1]. - list[str]: Returned image name list. - """ - if isinstance(path, list): - img_paths = path - else: - img_paths = sorted(list(scandir(path, full_path=True))) - imgs = [cv2.imread(v).astype(np.float32) / 255. for v in img_paths] - - if require_mod_crop: - imgs = [mod_crop(img, scale) for img in imgs] - imgs = img2tensor(imgs, bgr2rgb=True, float32=True) - imgs = torch.stack(imgs, dim=0) - - if return_imgname: - imgnames = [osp.splitext(osp.basename(path))[0] for path in img_paths] - return imgs, imgnames - else: - return imgs - - -def generate_frame_indices(crt_idx, max_frame_num, num_frames, padding='reflection'): - """Generate an index list for reading `num_frames` frames from a sequence - of images. - - Args: - crt_idx (int): Current center index. - max_frame_num (int): Max number of the sequence of images (from 1). - num_frames (int): Reading num_frames frames. 
- padding (str): Padding mode, one of - 'replicate' | 'reflection' | 'reflection_circle' | 'circle' - Examples: current_idx = 0, num_frames = 5 - The generated frame indices under different padding mode: - replicate: [0, 0, 0, 1, 2] - reflection: [2, 1, 0, 1, 2] - reflection_circle: [4, 3, 0, 1, 2] - circle: [3, 4, 0, 1, 2] - - Returns: - list[int]: A list of indices. - """ - assert num_frames % 2 == 1, 'num_frames should be an odd number.' - assert padding in ('replicate', 'reflection', 'reflection_circle', 'circle'), f'Wrong padding mode: {padding}.' - - max_frame_num = max_frame_num - 1 # start from 0 - num_pad = num_frames // 2 - - indices = [] - for i in range(crt_idx - num_pad, crt_idx + num_pad + 1): - if i < 0: - if padding == 'replicate': - pad_idx = 0 - elif padding == 'reflection': - pad_idx = -i - elif padding == 'reflection_circle': - pad_idx = crt_idx + num_pad - i - else: - pad_idx = num_frames + i - elif i > max_frame_num: - if padding == 'replicate': - pad_idx = max_frame_num - elif padding == 'reflection': - pad_idx = max_frame_num * 2 - i - elif padding == 'reflection_circle': - pad_idx = (crt_idx - num_pad) - (i - max_frame_num) - else: - pad_idx = i - num_frames - else: - pad_idx = i - indices.append(pad_idx) - return indices - - -def paired_paths_from_lmdb(folders, keys): - """Generate paired paths from lmdb files. - - Contents of lmdb. Taking the `lq.lmdb` for example, the file structure is: - - lq.lmdb - ├── data.mdb - ├── lock.mdb - ├── meta_info.txt - - The data.mdb and lock.mdb are standard lmdb files and you can refer to - https://lmdb.readthedocs.io/en/release/ for more details. - - The meta_info.txt is a specified txt file to record the meta information - of our datasets. It will be automatically created when preparing - datasets by our provided dataset tools. - Each line in the txt file records - 1)image name (with extension), - 2)image shape, - 3)compression level, separated by a white space. - Example: `baboon.png (120,125,3) 1` - - We use the image name without extension as the lmdb key. - Note that we use the same key for the corresponding lq and gt images. - - Args: - folders (list[str]): A list of folder path. The order of list should - be [input_folder, gt_folder]. - keys (list[str]): A list of keys identifying folders. The order should - be in consistent with folders, e.g., ['lq', 'gt']. - Note that this key is different from lmdb keys. - - Returns: - list[str]: Returned path list. - """ - assert len(folders) == 2, ('The len of folders should be 2 with [input_folder, gt_folder]. ' - f'But got {len(folders)}') - assert len(keys) == 2, f'The len of keys should be 2 with [input_key, gt_key]. But got {len(keys)}' - input_folder, gt_folder = folders - input_key, gt_key = keys - - if not (input_folder.endswith('.lmdb') and gt_folder.endswith('.lmdb')): - raise ValueError(f'{input_key} folder and {gt_key} folder should both in lmdb ' - f'formats. 
But received {input_key}: {input_folder}; ' - f'{gt_key}: {gt_folder}') - # ensure that the two meta_info files are the same - with open(osp.join(input_folder, 'meta_info.txt')) as fin: - input_lmdb_keys = [line.split('.')[0] for line in fin] - with open(osp.join(gt_folder, 'meta_info.txt')) as fin: - gt_lmdb_keys = [line.split('.')[0] for line in fin] - if set(input_lmdb_keys) != set(gt_lmdb_keys): - raise ValueError(f'Keys in {input_key}_folder and {gt_key}_folder are different.') - else: - paths = [] - for lmdb_key in sorted(input_lmdb_keys): - paths.append(dict([(f'{input_key}_path', lmdb_key), (f'{gt_key}_path', lmdb_key)])) - return paths - - -def paired_paths_from_meta_info_file(folders, keys, meta_info_file, filename_tmpl): - """Generate paired paths from an meta information file. - - Each line in the meta information file contains the image names and - image shape (usually for gt), separated by a white space. - - Example of an meta information file: - ``` - 0001_s001.png (480,480,3) - 0001_s002.png (480,480,3) - ``` - - Args: - folders (list[str]): A list of folder path. The order of list should - be [input_folder, gt_folder]. - keys (list[str]): A list of keys identifying folders. The order should - be in consistent with folders, e.g., ['lq', 'gt']. - meta_info_file (str): Path to the meta information file. - filename_tmpl (str): Template for each filename. Note that the - template excludes the file extension. Usually the filename_tmpl is - for files in the input folder. - - Returns: - list[str]: Returned path list. - """ - assert len(folders) == 2, ('The len of folders should be 2 with [input_folder, gt_folder]. ' - f'But got {len(folders)}') - assert len(keys) == 2, f'The len of keys should be 2 with [input_key, gt_key]. But got {len(keys)}' - input_folder, gt_folder = folders - input_key, gt_key = keys - - with open(meta_info_file, 'r') as fin: - gt_names = [line.strip().split(' ')[0] for line in fin] - - paths = [] - for gt_name in gt_names: - basename, ext = osp.splitext(osp.basename(gt_name)) - input_name = f'{filename_tmpl.format(basename)}{ext}' - input_path = osp.join(input_folder, input_name) - gt_path = osp.join(gt_folder, gt_name) - paths.append(dict([(f'{input_key}_path', input_path), (f'{gt_key}_path', gt_path)])) - return paths - - -def paired_paths_from_folder(folders, keys, filename_tmpl): - """Generate paired paths from folders. - - Args: - folders (list[str]): A list of folder path. The order of list should - be [input_folder, gt_folder]. - keys (list[str]): A list of keys identifying folders. The order should - be in consistent with folders, e.g., ['lq', 'gt']. - filename_tmpl (str): Template for each filename. Note that the - template excludes the file extension. Usually the filename_tmpl is - for files in the input folder. - - Returns: - list[str]: Returned path list. - """ - assert len(folders) == 2, ('The len of folders should be 2 with [input_folder, gt_folder]. ' - f'But got {len(folders)}') - assert len(keys) == 2, f'The len of keys should be 2 with [input_key, gt_key]. 
But got {len(keys)}' - input_folder, gt_folder = folders - input_key, gt_key = keys - - input_paths = list(scandir(input_folder)) - gt_paths = list(scandir(gt_folder)) - assert len(input_paths) == len(gt_paths), (f'{input_key} and {gt_key} datasets have different number of images: ' - f'{len(input_paths)}, {len(gt_paths)}.') - paths = [] - for gt_path in gt_paths: - basename, ext = osp.splitext(osp.basename(gt_path)) - input_name = f'{filename_tmpl.format(basename)}{ext}' - input_path = osp.join(input_folder, input_name) - assert input_name in input_paths, f'{input_name} is not in {input_key}_paths.' - gt_path = osp.join(gt_folder, gt_path) - paths.append(dict([(f'{input_key}_path', input_path), (f'{gt_key}_path', gt_path)])) - return paths - - -def paths_from_folder(folder): - """Generate paths from folder. - - Args: - folder (str): Folder path. - - Returns: - list[str]: Returned path list. - """ - - paths = list(scandir(folder)) - paths = [osp.join(folder, path) for path in paths] - return paths - - -def paths_from_lmdb(folder): - """Generate paths from lmdb. - - Args: - folder (str): Folder path. - - Returns: - list[str]: Returned path list. - """ - if not folder.endswith('.lmdb'): - raise ValueError(f'Folder {folder}folder should in lmdb format.') - with open(osp.join(folder, 'meta_info.txt')) as fin: - paths = [line.split('.')[0] for line in fin] - return paths - - -def generate_gaussian_kernel(kernel_size=13, sigma=1.6): - """Generate Gaussian kernel used in `duf_downsample`. - - Args: - kernel_size (int): Kernel size. Default: 13. - sigma (float): Sigma of the Gaussian kernel. Default: 1.6. - - Returns: - np.array: The Gaussian kernel. - """ - from scipy.ndimage import filters as filters - kernel = np.zeros((kernel_size, kernel_size)) - # set element at the middle to one, a dirac delta - kernel[kernel_size // 2, kernel_size // 2] = 1 - # gaussian-smooth the dirac, resulting in a gaussian filter - return filters.gaussian_filter(kernel, sigma) - - -def duf_downsample(x, kernel_size=13, scale=4): - """Downsamping with Gaussian kernel used in the DUF official code. - - Args: - x (Tensor): Frames to be downsampled, with shape (b, t, c, h, w). - kernel_size (int): Kernel size. Default: 13. - scale (int): Downsampling factor. Supported scale: (2, 3, 4). - Default: 4. - - Returns: - Tensor: DUF downsampled frames. - """ - assert scale in (2, 3, 4), f'Only support scale (2, 3, 4), but got {scale}.' 
- - squeeze_flag = False - if x.ndim == 4: - squeeze_flag = True - x = x.unsqueeze(0) - b, t, c, h, w = x.size() - x = x.view(-1, 1, h, w) - pad_w, pad_h = kernel_size // 2 + scale * 2, kernel_size // 2 + scale * 2 - x = F.pad(x, (pad_w, pad_w, pad_h, pad_h), 'reflect') - - gaussian_filter = generate_gaussian_kernel(kernel_size, 0.4 * scale) - gaussian_filter = torch.from_numpy(gaussian_filter).type_as(x).unsqueeze(0).unsqueeze(0) - x = F.conv2d(x, gaussian_filter, stride=scale) - x = x[:, :, 2:-2, 2:-2] - x = x.view(b, t, c, x.size(2), x.size(3)) - if squeeze_flag: - x = x.squeeze(0) - return x diff --git a/spaces/beihai/PDF-Table-Extractor/.history/test_20220621135519.py b/spaces/beihai/PDF-Table-Extractor/.history/test_20220621135519.py deleted file mode 100644 index 2300cd84b01e313fb1b4806d7b559cbd63e1b21a..0000000000000000000000000000000000000000 --- a/spaces/beihai/PDF-Table-Extractor/.history/test_20220621135519.py +++ /dev/null @@ -1,28 +0,0 @@ -#-*- coding : utf-8-*- -import base64 -from subprocess import STDOUT -import streamlit as st -import pandas as pd -import camelot as cam # extracting tables from PDFs - -st.title("PDF Table Extractor") -input_pdf = st.file_uploader(label = "", type = 'pdf') -background = st.selectbox("表格线条是否透明",(False,True)) - -if input_pdf is not None: - # byte object into a PDF file - with open("input.pdf", "wb") as f: - base64_pdf = base64.b64encode(input_pdf.read()).decode('utf-8') - f.write(base64.b64decode(base64_pdf)) - f.close() - page_number = st.text_input("请填写表格所在PDF页码,eg: 3", value = 1) - tables_all= cam.read_pdf("input.pdf", pages=page_number, process_background=background) - result_all = pd.ExcelWriter("result.xlsx", engine='xlsxwriter') - for i in range(0,len(tables_all)): - table = tables_all[i].df - sheetname = str(i) - table.to_excel(result_all, sheetname,index=False) - result_all.save() - with open(result_all,'rb') as f: - st.download_button('抽取完成, 点击下载!', f,file_name="result.xlsx",mime="application/vnd.ms-excel") - \ No newline at end of file diff --git a/spaces/bguberfain/Detic/detic/modeling/roi_heads/res5_roi_heads.py b/spaces/bguberfain/Detic/detic/modeling/roi_heads/res5_roi_heads.py deleted file mode 100644 index bab706999a9927e34a7b07dad84ba1259ab5ec64..0000000000000000000000000000000000000000 --- a/spaces/bguberfain/Detic/detic/modeling/roi_heads/res5_roi_heads.py +++ /dev/null @@ -1,173 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import inspect -import logging -import numpy as np -from typing import Dict, List, Optional, Tuple -import torch -from torch import nn - -from detectron2.config import configurable -from detectron2.layers import ShapeSpec, nonzero_tuple -from detectron2.structures import Boxes, ImageList, Instances, pairwise_iou -from detectron2.utils.events import get_event_storage -from detectron2.utils.registry import Registry - -from detectron2.modeling.box_regression import Box2BoxTransform -from detectron2.modeling.roi_heads.fast_rcnn import fast_rcnn_inference -from detectron2.modeling.roi_heads.roi_heads import ROI_HEADS_REGISTRY, Res5ROIHeads -from detectron2.modeling.roi_heads.cascade_rcnn import CascadeROIHeads, _ScaleGradient -from detectron2.modeling.roi_heads.box_head import build_box_head - -from .detic_fast_rcnn import DeticFastRCNNOutputLayers -from ..debug import debug_second_stage - -from torch.cuda.amp import autocast - -@ROI_HEADS_REGISTRY.register() -class CustomRes5ROIHeads(Res5ROIHeads): - @configurable - def __init__(self, **kwargs): - cfg = kwargs.pop('cfg') - super().__init__(**kwargs) - stage_channel_factor = 2 ** 3 - out_channels = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS * stage_channel_factor - - self.with_image_labels = cfg.WITH_IMAGE_LABELS - self.ws_num_props = cfg.MODEL.ROI_BOX_HEAD.WS_NUM_PROPS - self.add_image_box = cfg.MODEL.ROI_BOX_HEAD.ADD_IMAGE_BOX - self.add_feature_to_prop = cfg.MODEL.ROI_BOX_HEAD.ADD_FEATURE_TO_PROP - self.image_box_size = cfg.MODEL.ROI_BOX_HEAD.IMAGE_BOX_SIZE - self.box_predictor = DeticFastRCNNOutputLayers( - cfg, ShapeSpec(channels=out_channels, height=1, width=1) - ) - - self.save_debug = cfg.SAVE_DEBUG - self.save_debug_path = cfg.SAVE_DEBUG_PATH - if self.save_debug: - self.debug_show_name = cfg.DEBUG_SHOW_NAME - self.vis_thresh = cfg.VIS_THRESH - self.pixel_mean = torch.Tensor(cfg.MODEL.PIXEL_MEAN).to( - torch.device(cfg.MODEL.DEVICE)).view(3, 1, 1) - self.pixel_std = torch.Tensor(cfg.MODEL.PIXEL_STD).to( - torch.device(cfg.MODEL.DEVICE)).view(3, 1, 1) - self.bgr = (cfg.INPUT.FORMAT == 'BGR') - - @classmethod - def from_config(cls, cfg, input_shape): - ret = super().from_config(cfg, input_shape) - ret['cfg'] = cfg - return ret - - def forward(self, images, features, proposals, targets=None, - ann_type='box', classifier_info=(None,None,None)): - ''' - enable debug and image labels - classifier_info is shared across the batch - ''' - if not self.save_debug: - del images - - if self.training: - if ann_type in ['box']: - proposals = self.label_and_sample_proposals( - proposals, targets) - else: - proposals = self.get_top_proposals(proposals) - - proposal_boxes = [x.proposal_boxes for x in proposals] - box_features = self._shared_roi_transform( - [features[f] for f in self.in_features], proposal_boxes - ) - predictions = self.box_predictor( - box_features.mean(dim=[2, 3]), - classifier_info=classifier_info) - - if self.add_feature_to_prop: - feats_per_image = box_features.mean(dim=[2, 3]).split( - [len(p) for p in proposals], dim=0) - for feat, p in zip(feats_per_image, proposals): - p.feat = feat - - if self.training: - del features - if (ann_type != 'box'): - image_labels = [x._pos_category_ids for x in targets] - losses = self.box_predictor.image_label_losses( - predictions, proposals, image_labels, - classifier_info=classifier_info, - ann_type=ann_type) - else: - losses = self.box_predictor.losses( - (predictions[0], predictions[1]), proposals) - if self.with_image_labels: - assert 'image_loss' not in losses - losses['image_loss'] = 
predictions[0].new_zeros([1])[0] - if self.save_debug: - denormalizer = lambda x: x * self.pixel_std + self.pixel_mean - if ann_type != 'box': - image_labels = [x._pos_category_ids for x in targets] - else: - image_labels = [[] for x in targets] - debug_second_stage( - [denormalizer(x.clone()) for x in images], - targets, proposals=proposals, - save_debug=self.save_debug, - debug_show_name=self.debug_show_name, - vis_thresh=self.vis_thresh, - image_labels=image_labels, - save_debug_path=self.save_debug_path, - bgr=self.bgr) - return proposals, losses - else: - pred_instances, _ = self.box_predictor.inference(predictions, proposals) - pred_instances = self.forward_with_given_boxes(features, pred_instances) - if self.save_debug: - denormalizer = lambda x: x * self.pixel_std + self.pixel_mean - debug_second_stage( - [denormalizer(x.clone()) for x in images], - pred_instances, proposals=proposals, - save_debug=self.save_debug, - debug_show_name=self.debug_show_name, - vis_thresh=self.vis_thresh, - save_debug_path=self.save_debug_path, - bgr=self.bgr) - return pred_instances, {} - - def get_top_proposals(self, proposals): - for i in range(len(proposals)): - proposals[i].proposal_boxes.clip(proposals[i].image_size) - proposals = [p[:self.ws_num_props] for p in proposals] - for i, p in enumerate(proposals): - p.proposal_boxes.tensor = p.proposal_boxes.tensor.detach() - if self.add_image_box: - proposals[i] = self._add_image_box(p) - return proposals - - def _add_image_box(self, p, use_score=False): - image_box = Instances(p.image_size) - n = 1 - h, w = p.image_size - if self.image_box_size < 1.0: - f = self.image_box_size - image_box.proposal_boxes = Boxes( - p.proposal_boxes.tensor.new_tensor( - [w * (1. - f) / 2., - h * (1. - f) / 2., - w * (1. - (1. - f) / 2.), - h * (1. - (1. - f) / 2.)] - ).view(n, 4)) - else: - image_box.proposal_boxes = Boxes( - p.proposal_boxes.tensor.new_tensor( - [0, 0, w, h]).view(n, 4)) - if use_score: - image_box.scores = \ - p.objectness_logits.new_ones(n) - image_box.pred_classes = \ - p.objectness_logits.new_zeros(n, dtype=torch.long) - image_box.objectness_logits = \ - p.objectness_logits.new_ones(n) - else: - image_box.objectness_logits = \ - p.objectness_logits.new_ones(n) - return Instances.cat([p, image_box]) \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Die Leiche lebte noch full movie hd 1080p How Kommissar Rex and his team cracked the case of the zombie victim.md b/spaces/bioriAsaeru/text-to-voice/Die Leiche lebte noch full movie hd 1080p How Kommissar Rex and his team cracked the case of the zombie victim.md deleted file mode 100644 index 50bf33e5fbbe428d283b6a5d92a6b43cffbf6633..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Die Leiche lebte noch full movie hd 1080p How Kommissar Rex and his team cracked the case of the zombie victim.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Die Leiche lebte noch full movie hd 1080p


    Download Zip ★★★★★ https://urloso.com/2uyS0e



    -
    -
    -

    diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/TensorMask/tests/__init__.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/TensorMask/tests/__init__.py deleted file mode 100644 index 9020c2df23e2af280b7bb168b996ae9eaf312eb8..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/TensorMask/tests/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. diff --git a/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/utils.py b/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/utils.py deleted file mode 100644 index 48a11faf991606ad7fb0691582f0bc6f06101a45..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/utils.py +++ /dev/null @@ -1,115 +0,0 @@ -import numpy as np -from PIL import Image - - -def format_color_vector(value, length): - """Format a color vector. - """ - if isinstance(value, int): - value = value / 255.0 - if isinstance(value, float): - value = np.repeat(value, length) - if isinstance(value, list) or isinstance(value, tuple): - value = np.array(value) - if isinstance(value, np.ndarray): - value = value.squeeze() - if np.issubdtype(value.dtype, np.integer): - value = (value / 255.0).astype(np.float32) - if value.ndim != 1: - raise ValueError('Format vector takes only 1-D vectors') - if length > value.shape[0]: - value = np.hstack((value, np.ones(length - value.shape[0]))) - elif length < value.shape[0]: - value = value[:length] - else: - raise ValueError('Invalid vector data type') - - return value.squeeze().astype(np.float32) - - -def format_color_array(value, shape): - """Format an array of colors. - """ - # Convert uint8 to floating - value = np.asanyarray(value) - if np.issubdtype(value.dtype, np.integer): - value = (value / 255.0).astype(np.float32) - - # Match up shapes - if value.ndim == 1: - value = np.tile(value, (shape[0],1)) - if value.shape[1] < shape[1]: - nc = shape[1] - value.shape[1] - value = np.column_stack((value, np.ones((value.shape[0], nc)))) - elif value.shape[1] > shape[1]: - value = value[:,:shape[1]] - return value.astype(np.float32) - - -def format_texture_source(texture, target_channels='RGB'): - """Format a texture as a float32 np array. 
- """ - - # Pass through None - if texture is None: - return None - - # Convert PIL images into numpy arrays - if isinstance(texture, Image.Image): - if texture.mode == 'P' and target_channels in ('RGB', 'RGBA'): - texture = np.array(texture.convert(target_channels)) - else: - texture = np.array(texture) - - # Format numpy arrays - if isinstance(texture, np.ndarray): - if np.issubdtype(texture.dtype, np.floating): - texture = np.array(texture * 255.0, dtype=np.uint8) - elif np.issubdtype(texture.dtype, np.integer): - texture = texture.astype(np.uint8) - else: - raise TypeError('Invalid type {} for texture'.format( - type(texture) - )) - - # Format array by picking out correct texture channels or padding - if texture.ndim == 2: - texture = texture[:,:,np.newaxis] - if target_channels == 'R': - texture = texture[:,:,0] - texture = texture.squeeze() - elif target_channels == 'RG': - if texture.shape[2] == 1: - texture = np.repeat(texture, 2, axis=2) - else: - texture = texture[:,:,(0,1)] - elif target_channels == 'GB': - if texture.shape[2] == 1: - texture = np.repeat(texture, 2, axis=2) - elif texture.shape[2] > 2: - texture = texture[:,:,(1,2)] - elif target_channels == 'RGB': - if texture.shape[2] == 1: - texture = np.repeat(texture, 3, axis=2) - elif texture.shape[2] == 2: - raise ValueError('Cannot reformat 2-channel texture into RGB') - else: - texture = texture[:,:,(0,1,2)] - elif target_channels == 'RGBA': - if texture.shape[2] == 1: - texture = np.repeat(texture, 4, axis=2) - texture[:,:,3] = 255 - elif texture.shape[2] == 2: - raise ValueError('Cannot reformat 2-channel texture into RGBA') - elif texture.shape[2] == 3: - tx = np.empty((texture.shape[0], texture.shape[1], 4), dtype=np.uint8) - tx[:,:,:3] = texture - tx[:,:,3] = 255 - texture = tx - else: - raise ValueError('Invalid texture channel specification: {}' - .format(target_channels)) - else: - raise TypeError('Invalid type {} for texture'.format(type(texture))) - - return texture diff --git a/spaces/c-s-ale/ArxivChainLitDemo/Dockerfile b/spaces/c-s-ale/ArxivChainLitDemo/Dockerfile deleted file mode 100644 index 013fb487139b7432755793ab016e4433db706b2a..0000000000000000000000000000000000000000 --- a/spaces/c-s-ale/ArxivChainLitDemo/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM python:3.9 -RUN useradd -m -u 1000 user -USER user -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH -WORKDIR $HOME/app -COPY --chown=user . $HOME/app -COPY ./requirements.txt ~/app/requirements.txt -RUN pip install -r requirements.txt -COPY . . -CMD ["chainlit", "run", "app.py", "--port", "7860"] \ No newline at end of file diff --git a/spaces/candlend/vits-hoshimi/sovits/vdecoder/parallel_wavegan/layers/upsample.py b/spaces/candlend/vits-hoshimi/sovits/vdecoder/parallel_wavegan/layers/upsample.py deleted file mode 100644 index 18c6397c420a81fadc5320e3a48f3249534decd8..0000000000000000000000000000000000000000 --- a/spaces/candlend/vits-hoshimi/sovits/vdecoder/parallel_wavegan/layers/upsample.py +++ /dev/null @@ -1,183 +0,0 @@ -# -*- coding: utf-8 -*- - -"""Upsampling module. - -This code is modified from https://github.com/r9y9/wavenet_vocoder. - -""" - -import numpy as np -import torch -import torch.nn.functional as F - -from . import Conv1d - - -class Stretch2d(torch.nn.Module): - """Stretch2d module.""" - - def __init__(self, x_scale, y_scale, mode="nearest"): - """Initialize Stretch2d module. - - Args: - x_scale (int): X scaling factor (Time axis in spectrogram). - y_scale (int): Y scaling factor (Frequency axis in spectrogram). 
- mode (str): Interpolation mode. - - """ - super(Stretch2d, self).__init__() - self.x_scale = x_scale - self.y_scale = y_scale - self.mode = mode - - def forward(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input tensor (B, C, F, T). - - Returns: - Tensor: Interpolated tensor (B, C, F * y_scale, T * x_scale), - - """ - return F.interpolate( - x, scale_factor=(self.y_scale, self.x_scale), mode=self.mode) - - -class Conv2d(torch.nn.Conv2d): - """Conv2d module with customized initialization.""" - - def __init__(self, *args, **kwargs): - """Initialize Conv2d module.""" - super(Conv2d, self).__init__(*args, **kwargs) - - def reset_parameters(self): - """Reset parameters.""" - self.weight.data.fill_(1. / np.prod(self.kernel_size)) - if self.bias is not None: - torch.nn.init.constant_(self.bias, 0.0) - - -class UpsampleNetwork(torch.nn.Module): - """Upsampling network module.""" - - def __init__(self, - upsample_scales, - nonlinear_activation=None, - nonlinear_activation_params={}, - interpolate_mode="nearest", - freq_axis_kernel_size=1, - use_causal_conv=False, - ): - """Initialize upsampling network module. - - Args: - upsample_scales (list): List of upsampling scales. - nonlinear_activation (str): Activation function name. - nonlinear_activation_params (dict): Arguments for specified activation function. - interpolate_mode (str): Interpolation mode. - freq_axis_kernel_size (int): Kernel size in the direction of frequency axis. - - """ - super(UpsampleNetwork, self).__init__() - self.use_causal_conv = use_causal_conv - self.up_layers = torch.nn.ModuleList() - for scale in upsample_scales: - # interpolation layer - stretch = Stretch2d(scale, 1, interpolate_mode) - self.up_layers += [stretch] - - # conv layer - assert (freq_axis_kernel_size - 1) % 2 == 0, "Not support even number freq axis kernel size." - freq_axis_padding = (freq_axis_kernel_size - 1) // 2 - kernel_size = (freq_axis_kernel_size, scale * 2 + 1) - if use_causal_conv: - padding = (freq_axis_padding, scale * 2) - else: - padding = (freq_axis_padding, scale) - conv = Conv2d(1, 1, kernel_size=kernel_size, padding=padding, bias=False) - self.up_layers += [conv] - - # nonlinear - if nonlinear_activation is not None: - nonlinear = getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params) - self.up_layers += [nonlinear] - - def forward(self, c): - """Calculate forward propagation. - - Args: - c : Input tensor (B, C, T). - - Returns: - Tensor: Upsampled tensor (B, C, T'), where T' = T * prod(upsample_scales). - - """ - c = c.unsqueeze(1) # (B, 1, C, T) - for f in self.up_layers: - if self.use_causal_conv and isinstance(f, Conv2d): - c = f(c)[..., :c.size(-1)] - else: - c = f(c) - return c.squeeze(1) # (B, C, T') - - -class ConvInUpsampleNetwork(torch.nn.Module): - """Convolution + upsampling network module.""" - - def __init__(self, - upsample_scales, - nonlinear_activation=None, - nonlinear_activation_params={}, - interpolate_mode="nearest", - freq_axis_kernel_size=1, - aux_channels=80, - aux_context_window=0, - use_causal_conv=False - ): - """Initialize convolution + upsampling network module. - - Args: - upsample_scales (list): List of upsampling scales. - nonlinear_activation (str): Activation function name. - nonlinear_activation_params (dict): Arguments for specified activation function. - mode (str): Interpolation mode. - freq_axis_kernel_size (int): Kernel size in the direction of frequency axis. - aux_channels (int): Number of channels of pre-convolutional layer. 
- aux_context_window (int): Context window size of the pre-convolutional layer. - use_causal_conv (bool): Whether to use causal structure. - - """ - super(ConvInUpsampleNetwork, self).__init__() - self.aux_context_window = aux_context_window - self.use_causal_conv = use_causal_conv and aux_context_window > 0 - # To capture wide-context information in conditional features - kernel_size = aux_context_window + 1 if use_causal_conv else 2 * aux_context_window + 1 - # NOTE(kan-bayashi): Here do not use padding because the input is already padded - self.conv_in = Conv1d(aux_channels, aux_channels, kernel_size=kernel_size, bias=False) - self.upsample = UpsampleNetwork( - upsample_scales=upsample_scales, - nonlinear_activation=nonlinear_activation, - nonlinear_activation_params=nonlinear_activation_params, - interpolate_mode=interpolate_mode, - freq_axis_kernel_size=freq_axis_kernel_size, - use_causal_conv=use_causal_conv, - ) - - def forward(self, c): - """Calculate forward propagation. - - Args: - c : Input tensor (B, C, T'). - - Returns: - Tensor: Upsampled tensor (B, C, T), - where T = (T' - aux_context_window * 2) * prod(upsample_scales). - - Note: - The length of inputs considers the context window size. - - """ - c_ = self.conv_in(c) - c = c_[:, :, :-self.aux_context_window] if self.use_causal_conv else c_ - return self.upsample(c) diff --git a/spaces/capstonedubtrack/Indiclanguagedubbing/Wav2Lip/Wav2Lip/wav2lip_train.py b/spaces/capstonedubtrack/Indiclanguagedubbing/Wav2Lip/Wav2Lip/wav2lip_train.py deleted file mode 100644 index 6e0811808af55464a803be1e268be33f1b8a31a9..0000000000000000000000000000000000000000 --- a/spaces/capstonedubtrack/Indiclanguagedubbing/Wav2Lip/Wav2Lip/wav2lip_train.py +++ /dev/null @@ -1,374 +0,0 @@ -from os.path import dirname, join, basename, isfile -from tqdm import tqdm - -from models import SyncNet_color as SyncNet -from models import Wav2Lip as Wav2Lip -import audio - -import torch -from torch import nn -from torch import optim -import torch.backends.cudnn as cudnn -from torch.utils import data as data_utils -import numpy as np - -from glob import glob - -import os, random, cv2, argparse -from hparams import hparams, get_image_list - -parser = argparse.ArgumentParser(description='Code to train the Wav2Lip model without the visual quality discriminator') - -parser.add_argument("--data_root", help="Root folder of the preprocessed LRS2 dataset", required=True, type=str) - -parser.add_argument('--checkpoint_dir', help='Save checkpoints to this directory', required=True, type=str) -parser.add_argument('--syncnet_checkpoint_path', help='Load the pre-trained Expert discriminator', required=True, type=str) - -parser.add_argument('--checkpoint_path', help='Resume from this checkpoint', default=None, type=str) - -args = parser.parse_args() - - -global_step = 0 -global_epoch = 0 -use_cuda = torch.cuda.is_available() -print('use_cuda: {}'.format(use_cuda)) - -syncnet_T = 5 -syncnet_mel_step_size = 16 - -class Dataset(object): - def __init__(self, split): - self.all_videos = get_image_list(args.data_root, split) - - def get_frame_id(self, frame): - return int(basename(frame).split('.')[0]) - - def get_window(self, start_frame): - start_id = self.get_frame_id(start_frame) - vidname = dirname(start_frame) - - window_fnames = [] - for frame_id in range(start_id, start_id + syncnet_T): - frame = join(vidname, '{}.jpg'.format(frame_id)) - if not isfile(frame): - return None - window_fnames.append(frame) - return window_fnames - - def read_window(self, window_fnames): - if 
window_fnames is None: return None - window = [] - for fname in window_fnames: - img = cv2.imread(fname) - if img is None: - return None - try: - img = cv2.resize(img, (hparams.img_size, hparams.img_size)) - except Exception as e: - return None - - window.append(img) - - return window - - def crop_audio_window(self, spec, start_frame): - if type(start_frame) == int: - start_frame_num = start_frame - else: - start_frame_num = self.get_frame_id(start_frame) # 0-indexing ---> 1-indexing - start_idx = int(80. * (start_frame_num / float(hparams.fps))) - - end_idx = start_idx + syncnet_mel_step_size - - return spec[start_idx : end_idx, :] - - def get_segmented_mels(self, spec, start_frame): - mels = [] - assert syncnet_T == 5 - start_frame_num = self.get_frame_id(start_frame) + 1 # 0-indexing ---> 1-indexing - if start_frame_num - 2 < 0: return None - for i in range(start_frame_num, start_frame_num + syncnet_T): - m = self.crop_audio_window(spec, i - 2) - if m.shape[0] != syncnet_mel_step_size: - return None - mels.append(m.T) - - mels = np.asarray(mels) - - return mels - - def prepare_window(self, window): - # 3 x T x H x W - x = np.asarray(window) / 255. - x = np.transpose(x, (3, 0, 1, 2)) - - return x - - def __len__(self): - return len(self.all_videos) - - def __getitem__(self, idx): - while 1: - idx = random.randint(0, len(self.all_videos) - 1) - vidname = self.all_videos[idx] - img_names = list(glob(join(vidname, '*.jpg'))) - if len(img_names) <= 3 * syncnet_T: - continue - - img_name = random.choice(img_names) - wrong_img_name = random.choice(img_names) - while wrong_img_name == img_name: - wrong_img_name = random.choice(img_names) - - window_fnames = self.get_window(img_name) - wrong_window_fnames = self.get_window(wrong_img_name) - if window_fnames is None or wrong_window_fnames is None: - continue - - window = self.read_window(window_fnames) - if window is None: - continue - - wrong_window = self.read_window(wrong_window_fnames) - if wrong_window is None: - continue - - try: - wavpath = join(vidname, "audio.wav") - wav = audio.load_wav(wavpath, hparams.sample_rate) - - orig_mel = audio.melspectrogram(wav).T - except Exception as e: - continue - - mel = self.crop_audio_window(orig_mel.copy(), img_name) - - if (mel.shape[0] != syncnet_mel_step_size): - continue - - indiv_mels = self.get_segmented_mels(orig_mel.copy(), img_name) - if indiv_mels is None: continue - - window = self.prepare_window(window) - y = window.copy() - window[:, :, window.shape[2]//2:] = 0. 
- - wrong_window = self.prepare_window(wrong_window) - x = np.concatenate([window, wrong_window], axis=0) - - x = torch.FloatTensor(x) - mel = torch.FloatTensor(mel.T).unsqueeze(0) - indiv_mels = torch.FloatTensor(indiv_mels).unsqueeze(1) - y = torch.FloatTensor(y) - return x, indiv_mels, mel, y - -def save_sample_images(x, g, gt, global_step, checkpoint_dir): - x = (x.detach().cpu().numpy().transpose(0, 2, 3, 4, 1) * 255.).astype(np.uint8) - g = (g.detach().cpu().numpy().transpose(0, 2, 3, 4, 1) * 255.).astype(np.uint8) - gt = (gt.detach().cpu().numpy().transpose(0, 2, 3, 4, 1) * 255.).astype(np.uint8) - - refs, inps = x[..., 3:], x[..., :3] - folder = join(checkpoint_dir, "samples_step{:09d}".format(global_step)) - if not os.path.exists(folder): os.mkdir(folder) - collage = np.concatenate((refs, inps, g, gt), axis=-2) - for batch_idx, c in enumerate(collage): - for t in range(len(c)): - cv2.imwrite('{}/{}_{}.jpg'.format(folder, batch_idx, t), c[t]) - -logloss = nn.BCELoss() -def cosine_loss(a, v, y): - d = nn.functional.cosine_similarity(a, v) - loss = logloss(d.unsqueeze(1), y) - - return loss - -device = torch.device("cuda" if use_cuda else "cpu") -syncnet = SyncNet().to(device) -for p in syncnet.parameters(): - p.requires_grad = False - -recon_loss = nn.L1Loss() -def get_sync_loss(mel, g): - g = g[:, :, :, g.size(3)//2:] - g = torch.cat([g[:, :, i] for i in range(syncnet_T)], dim=1) - # B, 3 * T, H//2, W - a, v = syncnet(mel, g) - y = torch.ones(g.size(0), 1).float().to(device) - return cosine_loss(a, v, y) - -def train(device, model, train_data_loader, test_data_loader, optimizer, - checkpoint_dir=None, checkpoint_interval=None, nepochs=None): - - global global_step, global_epoch - resumed_step = global_step - - while global_epoch < nepochs: - print('Starting Epoch: {}'.format(global_epoch)) - running_sync_loss, running_l1_loss = 0., 0. - prog_bar = tqdm(enumerate(train_data_loader)) - for step, (x, indiv_mels, mel, gt) in prog_bar: - model.train() - optimizer.zero_grad() - - # Move data to CUDA device - x = x.to(device) - mel = mel.to(device) - indiv_mels = indiv_mels.to(device) - gt = gt.to(device) - - g = model(indiv_mels, x) - - if hparams.syncnet_wt > 0.: - sync_loss = get_sync_loss(mel, g) - else: - sync_loss = 0. - - l1loss = recon_loss(g, gt) - - loss = hparams.syncnet_wt * sync_loss + (1 - hparams.syncnet_wt) * l1loss - loss.backward() - optimizer.step() - - if global_step % checkpoint_interval == 0: - save_sample_images(x, g, gt, global_step, checkpoint_dir) - - global_step += 1 - cur_session_steps = global_step - resumed_step - - running_l1_loss += l1loss.item() - if hparams.syncnet_wt > 0.: - running_sync_loss += sync_loss.item() - else: - running_sync_loss += 0. 
- - if global_step == 1 or global_step % checkpoint_interval == 0: - save_checkpoint( - model, optimizer, global_step, checkpoint_dir, global_epoch) - - if global_step == 1 or global_step % hparams.eval_interval == 0: - with torch.no_grad(): - average_sync_loss = eval_model(test_data_loader, global_step, device, model, checkpoint_dir) - - if average_sync_loss < .75: - hparams.set_hparam('syncnet_wt', 0.01) # without image GAN a lesser weight is sufficient - - prog_bar.set_description('L1: {}, Sync Loss: {}'.format(running_l1_loss / (step + 1), - running_sync_loss / (step + 1))) - - global_epoch += 1 - - -def eval_model(test_data_loader, global_step, device, model, checkpoint_dir): - eval_steps = 700 - print('Evaluating for {} steps'.format(eval_steps)) - sync_losses, recon_losses = [], [] - step = 0 - while 1: - for x, indiv_mels, mel, gt in test_data_loader: - step += 1 - model.eval() - - # Move data to CUDA device - x = x.to(device) - gt = gt.to(device) - indiv_mels = indiv_mels.to(device) - mel = mel.to(device) - - g = model(indiv_mels, x) - - sync_loss = get_sync_loss(mel, g) - l1loss = recon_loss(g, gt) - - sync_losses.append(sync_loss.item()) - recon_losses.append(l1loss.item()) - - if step > eval_steps: - averaged_sync_loss = sum(sync_losses) / len(sync_losses) - averaged_recon_loss = sum(recon_losses) / len(recon_losses) - - print('L1: {}, Sync loss: {}'.format(averaged_recon_loss, averaged_sync_loss)) - - return averaged_sync_loss - -def save_checkpoint(model, optimizer, step, checkpoint_dir, epoch): - - checkpoint_path = join( - checkpoint_dir, "checkpoint_step{:09d}.pth".format(global_step)) - optimizer_state = optimizer.state_dict() if hparams.save_optimizer_state else None - torch.save({ - "state_dict": model.state_dict(), - "optimizer": optimizer_state, - "global_step": step, - "global_epoch": epoch, - }, checkpoint_path) - print("Saved checkpoint:", checkpoint_path) - - -def _load(checkpoint_path): - if use_cuda: - checkpoint = torch.load(checkpoint_path) - else: - checkpoint = torch.load(checkpoint_path, - map_location=lambda storage, loc: storage) - return checkpoint - -def load_checkpoint(path, model, optimizer, reset_optimizer=False, overwrite_global_states=True): - global global_step - global global_epoch - - print("Load checkpoint from: {}".format(path)) - checkpoint = _load(path) - s = checkpoint["state_dict"] - new_s = {} - for k, v in s.items(): - new_s[k.replace('module.', '')] = v - model.load_state_dict(new_s) - if not reset_optimizer: - optimizer_state = checkpoint["optimizer"] - if optimizer_state is not None: - print("Load optimizer state from {}".format(path)) - optimizer.load_state_dict(checkpoint["optimizer"]) - if overwrite_global_states: - global_step = checkpoint["global_step"] - global_epoch = checkpoint["global_epoch"] - - return model - -if __name__ == "__main__": - checkpoint_dir = args.checkpoint_dir - - # Dataset and Dataloader setup - train_dataset = Dataset('train') - test_dataset = Dataset('val') - - train_data_loader = data_utils.DataLoader( - train_dataset, batch_size=hparams.batch_size, shuffle=True, - num_workers=hparams.num_workers) - - test_data_loader = data_utils.DataLoader( - test_dataset, batch_size=hparams.batch_size, - num_workers=4) - - device = torch.device("cuda" if use_cuda else "cpu") - - # Model - model = Wav2Lip().to(device) - print('total trainable params {}'.format(sum(p.numel() for p in model.parameters() if p.requires_grad))) - - optimizer = optim.Adam([p for p in model.parameters() if p.requires_grad], - 
lr=hparams.initial_learning_rate) - - if args.checkpoint_path is not None: - load_checkpoint(args.checkpoint_path, model, optimizer, reset_optimizer=False) - - load_checkpoint(args.syncnet_checkpoint_path, syncnet, None, reset_optimizer=True, overwrite_global_states=False) - - if not os.path.exists(checkpoint_dir): - os.mkdir(checkpoint_dir) - - # Train! - train(device, model, train_data_loader, test_data_loader, optimizer, - checkpoint_dir=checkpoint_dir, - checkpoint_interval=hparams.checkpoint_interval, - nepochs=hparams.nepochs) diff --git a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/text/sanskrit.py b/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/text/sanskrit.py deleted file mode 100644 index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000 --- a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/text/sanskrit.py +++ /dev/null @@ -1,62 +0,0 @@ -import re -from indic_transliteration import sanscript - - -# List of (iast, ipa) pairs: -_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('a', 'ə'), - ('ā', 'aː'), - ('ī', 'iː'), - ('ū', 'uː'), - ('ṛ', 'ɹ`'), - ('ṝ', 'ɹ`ː'), - ('ḷ', 'l`'), - ('ḹ', 'l`ː'), - ('e', 'eː'), - ('o', 'oː'), - ('k', 'k⁼'), - ('k⁼h', 'kʰ'), - ('g', 'g⁼'), - ('g⁼h', 'gʰ'), - ('ṅ', 'ŋ'), - ('c', 'ʧ⁼'), - ('ʧ⁼h', 'ʧʰ'), - ('j', 'ʥ⁼'), - ('ʥ⁼h', 'ʥʰ'), - ('ñ', 'n^'), - ('ṭ', 't`⁼'), - ('t`⁼h', 't`ʰ'), - ('ḍ', 'd`⁼'), - ('d`⁼h', 'd`ʰ'), - ('ṇ', 'n`'), - ('t', 't⁼'), - ('t⁼h', 'tʰ'), - ('d', 'd⁼'), - ('d⁼h', 'dʰ'), - ('p', 'p⁼'), - ('p⁼h', 'pʰ'), - ('b', 'b⁼'), - ('b⁼h', 'bʰ'), - ('y', 'j'), - ('ś', 'ʃ'), - ('ṣ', 's`'), - ('r', 'ɾ'), - ('l̤', 'l`'), - ('h', 'ɦ'), - ("'", ''), - ('~', '^'), - ('ṃ', '^') -]] - - -def devanagari_to_ipa(text): - text = text.replace('ॐ', 'ओम्') - text = re.sub(r'\s*।\s*$', '.', text) - text = re.sub(r'\s*।\s*', ', ', text) - text = re.sub(r'\s*॥', '.', text) - text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST) - for regex, replacement in _iast_to_ipa: - text = re.sub(regex, replacement, text) - text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0) - [:-1]+'h'+x.group(1)+'*', text) - return text diff --git a/spaces/chaozn/fastai_dogs_vs_cats/app.py b/spaces/chaozn/fastai_dogs_vs_cats/app.py deleted file mode 100644 index da93b5a9078eec7f22d3117802079de9779b5515..0000000000000000000000000000000000000000 --- a/spaces/chaozn/fastai_dogs_vs_cats/app.py +++ /dev/null @@ -1,27 +0,0 @@ -# AUTOGENERATED! DO NOT EDIT! File to edit: . (unless otherwise specified). 
- -__all__ = ['is_cat', 'learn', 'classify_image', 'categories', 'image', 'label', 'examples', 'intf'] - -# Cell -from fastai.vision.all import * -import gradio as gr - -def is_cat(x): return x[0].isupper() - -# Cell -learn = load_learner('model.pkl') - -# Cell -categories = ('Dog', 'Cat') - -def classify_image(img): - pred,idx,probs = learn.predict(img) - return dict(zip(categories, map(float,probs))) - -# Cell -image = gr.inputs.Image(shape=(192, 192)) -label = gr.outputs.Label() -examples = ['dog.jpg', 'cat.jpg', 'dunno.jpg'] - -intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples) -intf.launch(inline=False) \ No newline at end of file diff --git a/spaces/charlesnchr/ML-SIM/README.md b/spaces/charlesnchr/ML-SIM/README.md deleted file mode 100644 index 32260f4f2f29cf6b1adf7cf0996d3213ccdc50ee..0000000000000000000000000000000000000000 --- a/spaces/charlesnchr/ML-SIM/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ML SIM -emoji: 🔥 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 3.1.7 -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/distillation/scripts/extract.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/distillation/scripts/extract.py deleted file mode 100644 index f60f243dece6c6d6be5ec388677718f4aec5e31c..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/distillation/scripts/extract.py +++ /dev/null @@ -1,105 +0,0 @@ -# coding=utf-8 -# Copyright 2019-present, the HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Preprocessing script before training the distilled model. -Specific to RoBERTa -> DistilRoBERTa and GPT2 -> DistilGPT2. 
-""" -import argparse - -import torch - -from transformers import GPT2LMHeadModel, RobertaForMaskedLM - - -if __name__ == "__main__": - parser = argparse.ArgumentParser( - description=( - "Extraction some layers of the full RobertaForMaskedLM or GPT2LMHeadModel for Transfer Learned" - " Distillation" - ) - ) - parser.add_argument("--model_type", default="roberta", choices=["roberta", "gpt2"]) - parser.add_argument("--model_name", default="roberta-large", type=str) - parser.add_argument("--dump_checkpoint", default="serialization_dir/tf_roberta_048131723.pth", type=str) - parser.add_argument("--vocab_transform", action="store_true") - args = parser.parse_args() - - if args.model_type == "roberta": - model = RobertaForMaskedLM.from_pretrained(args.model_name) - prefix = "roberta" - elif args.model_type == "gpt2": - model = GPT2LMHeadModel.from_pretrained(args.model_name) - prefix = "transformer" - - state_dict = model.state_dict() - compressed_sd = {} - - # Embeddings # - if args.model_type == "gpt2": - for param_name in ["wte.weight", "wpe.weight"]: - compressed_sd[f"{prefix}.{param_name}"] = state_dict[f"{prefix}.{param_name}"] - else: - for w in ["word_embeddings", "position_embeddings", "token_type_embeddings"]: - param_name = f"{prefix}.embeddings.{w}.weight" - compressed_sd[param_name] = state_dict[param_name] - for w in ["weight", "bias"]: - param_name = f"{prefix}.embeddings.LayerNorm.{w}" - compressed_sd[param_name] = state_dict[param_name] - - # Transformer Blocks # - std_idx = 0 - for teacher_idx in [0, 2, 4, 7, 9, 11]: - if args.model_type == "gpt2": - for layer in ["ln_1", "attn.c_attn", "attn.c_proj", "ln_2", "mlp.c_fc", "mlp.c_proj"]: - for w in ["weight", "bias"]: - compressed_sd[f"{prefix}.h.{std_idx}.{layer}.{w}"] = state_dict[ - f"{prefix}.h.{teacher_idx}.{layer}.{w}" - ] - compressed_sd[f"{prefix}.h.{std_idx}.attn.bias"] = state_dict[f"{prefix}.h.{teacher_idx}.attn.bias"] - else: - for layer in [ - "attention.self.query", - "attention.self.key", - "attention.self.value", - "attention.output.dense", - "attention.output.LayerNorm", - "intermediate.dense", - "output.dense", - "output.LayerNorm", - ]: - for w in ["weight", "bias"]: - compressed_sd[f"{prefix}.encoder.layer.{std_idx}.{layer}.{w}"] = state_dict[ - f"{prefix}.encoder.layer.{teacher_idx}.{layer}.{w}" - ] - std_idx += 1 - - # Language Modeling Head ###s - if args.model_type == "roberta": - for layer in ["lm_head.decoder.weight", "lm_head.bias"]: - compressed_sd[f"{layer}"] = state_dict[f"{layer}"] - if args.vocab_transform: - for w in ["weight", "bias"]: - compressed_sd[f"lm_head.dense.{w}"] = state_dict[f"lm_head.dense.{w}"] - compressed_sd[f"lm_head.layer_norm.{w}"] = state_dict[f"lm_head.layer_norm.{w}"] - elif args.model_type == "gpt2": - for w in ["weight", "bias"]: - compressed_sd[f"{prefix}.ln_f.{w}"] = state_dict[f"{prefix}.ln_f.{w}"] - compressed_sd["lm_head.weight"] = state_dict["lm_head.weight"] - - print(f"N layers selected for distillation: {std_idx}") - print(f"Number of params transferred for distillation: {len(compressed_sd.keys())}") - - print(f"Save transferred checkpoint to {args.dump_checkpoint}.") - torch.save(compressed_sd, args.dump_checkpoint) diff --git a/spaces/chinmaysharma1020/malware_classification/app.py b/spaces/chinmaysharma1020/malware_classification/app.py deleted file mode 100644 index 57e6b9b0ba3b9c3dfe358fef7da8feb388047873..0000000000000000000000000000000000000000 --- a/spaces/chinmaysharma1020/malware_classification/app.py +++ /dev/null @@ -1,80 +0,0 @@ -# -*- coding: utf-8 
-*- -"""LoadingTrainedModels.ipynb - -Automatically generated by Colaboratory. - -Original file is located at - https://colab.research.google.com/drive/15HJ-oThFwQ2alwEle5GLtkIoVkcR4aPM -""" - -import torch -from torchvision import transforms, models -from torch import nn -from PIL import Image -import PIL -import numpy as np - - -model = models.densenet121(pretrained=False) - -for params in model.parameters(): - params.require_grad = False -classifier = nn.Sequential(nn.Linear(1024,1024),nn.ReLU(),nn.Dropout(p=0.3), - nn.Linear(1024,512),nn.ReLU(),nn.Dropout(p=0.3), - nn.Linear(512,25),nn.LogSoftmax(dim=1)) -model.classifier = classifier -model.load_state_dict(torch.load('denseNetWeights.pth',map_location='cpu')) -model.eval() - - -classes = { - 0: 'Adialer.C', - 1: 'Agent.FYI', - 2: 'Allaple.A', - 3: 'Allaple.L', - 4: 'Alueron.gen!J', - 5: 'Autorun.K', - 6: 'C2LOP.gen!g', - 7: 'C2LOP.P', - 8: 'Dialplatform.B', - 9: 'Dontovo.A', - 10: 'Fakerean', - 11: 'Instantaccess', - 12: 'Lolyda.AA1', - 13: 'Lolyda.AA2', - 14: 'Lolyda.AA3', - 15: 'Lolyda.AT', - 16: 'Malex.gen!J', - 17: 'Obfuscator.AD', - 18: 'Rbot!gen', - 19: 'Skintrim.N', - 20: 'Swizzor.gen!E', - 21: 'Swizzor.gen!I', - 22: 'VB.AT', - 23: 'Wintrim.BX', - 24: 'Yuner.A', -} - -from torchvision import transforms, models - -import gradio as gr -import torch -from torchvision import transforms, models -from torch import nn - -title = "Malware image analysis" -description = "Malware classification" - -def get_prediction(img): - transform = transforms.Compose([transforms.ToPILImage(),transforms.Resize(256), - transforms.CenterCrop(224), - transforms.ToTensor(),]) - img_t = transform(img) - output = model(img_t.unsqueeze(0)) - prediction = torch.argmax(output,dim=1) - return classes[prediction.item()] - -gr.Interface(fn=get_prediction, - inputs="image", - outputs="label", - title=title,description=description,).launch(server_name="0.0.0.0") diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/_util.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/_util.py deleted file mode 100644 index ba27b7e49e98f4973ba9c257be14a8419292fe8a..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/_util.py +++ /dev/null @@ -1,19 +0,0 @@ -import os -from pathlib import Path - - -def is_path(f): - return isinstance(f, (bytes, str, Path)) - - -def is_directory(f): - """Checks if an object is a string, and that it points to a directory.""" - return is_path(f) and os.path.isdir(f) - - -class DeferredError: - def __init__(self, ex): - self.ex = ex - - def __getattr__(self, elt): - raise self.ex diff --git a/spaces/cihyFjudo/fairness-paper-search/Badtameez Dil hd 1080p hindi Download the Complete AMZN WEB Series in High Quality.md b/spaces/cihyFjudo/fairness-paper-search/Badtameez Dil hd 1080p hindi Download the Complete AMZN WEB Series in High Quality.md deleted file mode 100644 index 752fe1043133fad2b79ff58d42f07cd2e09a8553..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Badtameez Dil hd 1080p hindi Download the Complete AMZN WEB Series in High Quality.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Badtameez Dil hd 1080p hindi


    Download Zip https://tinurli.com/2uwiIE



    -
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Speedconnect Internet Accelerator V80 Activation Key Crack The Ultimate Solution for Slow and Unstable Internet.md b/spaces/cihyFjudo/fairness-paper-search/Speedconnect Internet Accelerator V80 Activation Key Crack The Ultimate Solution for Slow and Unstable Internet.md deleted file mode 100644 index b50710c62a5b2b354b5e1a2d9dac7a6a879322eb..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Speedconnect Internet Accelerator V80 Activation Key Crack The Ultimate Solution for Slow and Unstable Internet.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Speedconnect Internet Accelerator V80 Activation Key Crack


    Download File https://tinurli.com/2uwhEz



    -
    -
    -

    diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/abc/_tasks.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/abc/_tasks.py deleted file mode 100644 index e48d3c1e97e02cd188b567b50a4c0c615f187e4d..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/abc/_tasks.py +++ /dev/null @@ -1,119 +0,0 @@ -from __future__ import annotations - -import sys -from abc import ABCMeta, abstractmethod -from types import TracebackType -from typing import TYPE_CHECKING, Any, Awaitable, Callable, TypeVar, overload -from warnings import warn - -if sys.version_info >= (3, 8): - from typing import Protocol -else: - from typing_extensions import Protocol - -if TYPE_CHECKING: - from anyio._core._tasks import CancelScope - -T_Retval = TypeVar("T_Retval") -T_contra = TypeVar("T_contra", contravariant=True) - - -class TaskStatus(Protocol[T_contra]): - @overload - def started(self: TaskStatus[None]) -> None: - ... - - @overload - def started(self, value: T_contra) -> None: - ... - - def started(self, value: T_contra | None = None) -> None: - """ - Signal that the task has started. - - :param value: object passed back to the starter of the task - """ - - -class TaskGroup(metaclass=ABCMeta): - """ - Groups several asynchronous tasks together. - - :ivar cancel_scope: the cancel scope inherited by all child tasks - :vartype cancel_scope: CancelScope - """ - - cancel_scope: CancelScope - - async def spawn( - self, - func: Callable[..., Awaitable[Any]], - *args: object, - name: object = None, - ) -> None: - """ - Start a new task in this task group. - - :param func: a coroutine function - :param args: positional arguments to call the function with - :param name: name of the task, for the purposes of introspection and debugging - - .. deprecated:: 3.0 - Use :meth:`start_soon` instead. If your code needs AnyIO 2 compatibility, you - can keep using this until AnyIO 4. - - """ - warn( - 'spawn() is deprecated -- use start_soon() (without the "await") instead', - DeprecationWarning, - ) - self.start_soon(func, *args, name=name) - - @abstractmethod - def start_soon( - self, - func: Callable[..., Awaitable[Any]], - *args: object, - name: object = None, - ) -> None: - """ - Start a new task in this task group. - - :param func: a coroutine function - :param args: positional arguments to call the function with - :param name: name of the task, for the purposes of introspection and debugging - - .. versionadded:: 3.0 - """ - - @abstractmethod - async def start( - self, - func: Callable[..., Awaitable[Any]], - *args: object, - name: object = None, - ) -> Any: - """ - Start a new task and wait until it signals for readiness. - - :param func: a coroutine function - :param args: positional arguments to call the function with - :param name: name of the task, for the purposes of introspection and debugging - :return: the value passed to ``task_status.started()`` - :raises RuntimeError: if the task finishes without calling ``task_status.started()`` - - .. 
versionadded:: 3.0 - """ - - @abstractmethod - async def __aenter__(self) -> TaskGroup: - """Enter the task group context and allow starting new tasks.""" - - @abstractmethod - async def __aexit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - """Exit the task group context waiting for all tasks to finish.""" diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/security/utils.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/security/utils.py deleted file mode 100644 index fa7a450b74e813e66fd6e9a140d48c29215503bb..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/security/utils.py +++ /dev/null @@ -1,10 +0,0 @@ -from typing import Optional, Tuple - - -def get_authorization_scheme_param( - authorization_header_value: Optional[str], -) -> Tuple[str, str]: - if not authorization_header_value: - return "", "" - scheme, _, param = authorization_header_value.partition(" ") - return scheme, param diff --git a/spaces/cloudwp/sd/README.md b/spaces/cloudwp/sd/README.md deleted file mode 100644 index cd8a3348d015de1a4f47d6922d4e8f85756bc361..0000000000000000000000000000000000000000 --- a/spaces/cloudwp/sd/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sd -emoji: 🐠 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.28.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git "a/spaces/codertoro/gpt-academic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" "b/spaces/codertoro/gpt-academic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" deleted file mode 100644 index 742c7abc30ed7b0c74deca2c5a616d3d201402e8..0000000000000000000000000000000000000000 --- "a/spaces/codertoro/gpt-academic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" +++ /dev/null @@ -1,139 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -fast_debug = False - - -def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import time, os - # pip install python-docx 用于docx格式,跨平台 - # pip install pywin32 用于doc格式,仅支持Win平台 - - print('begin analysis on:', file_manifest) - for index, fp in enumerate(file_manifest): - if fp.split(".")[-1] == "docx": - from docx import Document - doc = Document(fp) - file_content = "\n".join([para.text for para in doc.paragraphs]) - else: - import win32com.client - word = win32com.client.Dispatch("Word.Application") - word.visible = False - # 打开文件 - print('fp', os.getcwd()) - doc = word.Documents.Open(os.getcwd() + '/' + fp) - # file_content = doc.Content.Text - doc = word.ActiveDocument - file_content = doc.Range().Text - doc.Close() - word.Quit() - - print(file_content) - - prefix = "接下来请你逐文件分析下面的论文文件," if index == 0 else "" - # private_upload里面的文件名在解压zip后容易出现乱码(rar和7z格式正常),故可以只分析文章内容,不输入文件名 - i_say = prefix + f'请对下面的文章片段用中英文做概述,文件名是{os.path.relpath(fp, project_folder)},' \ - f'文章内容是 ```{file_content}```' - i_say_show_user = prefix + f'[{index+1}/{len(file_manifest)}] 假设你是论文审稿专家,请对下面的文章片段做概述: {os.path.abspath(fp)}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield from 
update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say_show_user, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=[], - sys_prompt="总结文章。" - ) # 带超时倒计时 - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user) - history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - if not fast_debug: time.sleep(2) - - """ - # 可按需启用 - i_say = f'根据你上述的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一篇英文的。' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - - i_say = f'我想让你做一个论文写作导师。您的任务是使用人工智能工具(例如自然语言处理)提供有关如何改进其上述文章的反馈。' \ - f'您还应该利用您在有效写作技巧方面的修辞知识和经验来建议作者可以更好地以书面形式表达他们的想法和想法的方法。' \ - f'根据你之前的分析,提出建议' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - """ - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=history, - sys_prompt="总结文章。" - ) # 带超时倒计时 - chatbot[-1] = (i_say, gpt_say) - history.append(i_say) - history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - - -@CatchException -def 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - import glob, os - - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "批量总结Word文档。函数插件贡献者: JasonGuo1"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - from docx import Document - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade python-docx pywin32```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 清空历史,以免输入溢出 - history = [] - - # 检测输入参数,如没有给定输入参数,直接退出 - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 搜索需要处理的文件清单 - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.docx', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.doc', recursive=True)] - # [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \ - # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \ - # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)] - - # 如果没找到任何文件 - if len(file_manifest) == 0: - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.docx或doc文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 开始正式执行任务 - yield from 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/lagarithrac.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/lagarithrac.c deleted file mode 100644 index cdda67fb8153b0eeb3b47174e9937665406836f1..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/lagarithrac.c +++ 
/dev/null @@ -1,57 +0,0 @@ -/* - * Lagarith range decoder - * Copyright (c) 2009 Nathan Caldwell - * Copyright (c) 2009 David Conrad - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * Lagarith range decoder - * @author Nathan Caldwell - * @author David Conrad - */ - -#include "get_bits.h" -#include "lagarithrac.h" - -void ff_lag_rac_init(lag_rac *l, GetBitContext *gb, int length) -{ - int i, j, left; - - /* According to reference decoder "1st byte is garbage", - * however, it gets skipped by the call to align_get_bits() - */ - align_get_bits(gb); - left = get_bits_left(gb) >> 3; - l->bytestream_start = - l->bytestream = gb->buffer + get_bits_count(gb) / 8; - l->bytestream_end = l->bytestream_start + left; - - l->range = 0x80; - l->low = *l->bytestream >> 1; - l->hash_shift = FFMAX(l->scale, 10) - 10; - l->overread = 0; - - for (i = j = 0; i < 1024; i++) { - unsigned r = i << l->hash_shift; - while (l->prob[j + 1] <= r) - j++; - l->range_hash[i] = j; - } -} diff --git a/spaces/congsaPfin/Manga-OCR/logs/Discover 5 Exclusive New Atmospheres in Incredibox Wekiddy APK.md b/spaces/congsaPfin/Manga-OCR/logs/Discover 5 Exclusive New Atmospheres in Incredibox Wekiddy APK.md deleted file mode 100644 index c4fa7184e1773f2a746c57f6b7edcca6c98a61fb..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Discover 5 Exclusive New Atmospheres in Incredibox Wekiddy APK.md +++ /dev/null @@ -1,253 +0,0 @@ -
    -

    Incredibox Wekiddy APK: A Fun and Creative Music App

    -

    Do you love music and want to create your own tunes with ease? Do you want to explore different musical genres and styles with a simple and intuitive interface? Do you want to have fun with a group of animated beatboxers that can sing, rap, and groove? If you answered yes to any of these questions, then you should check out Incredibox Wekiddy APK, a music app that lets you create your own music with the help of a merry crew of beatboxers. In this article, we will tell you everything you need to know about this app, including what it is, what it offers, how to download and install it, and what some alternatives to it are.

    -

    What is Incredibox?

    -

    Incredibox is a music app that allows anyone to create complex and catchy mixes by dragging and dropping icons onto a variety of characters. The app has simple and accessible controls and features. It also has multiple audio effects and sound customizations. You can choose between different in-app atmospheres, each with its own musical style, such as hip-hop, rock, funk, jazz, techno, and more. You can also save and share your mix with others, and even join the online community and enjoy the awesome mixes from others. The app is ad-free, safe for kids, and popular with teachers, as it manages to be educational as well as fun, teaching children all about rhythm and tempo.

    -

    incredibox wekiddy apk


    Download Zip 🆓 https://urlca.com/2uO9Pi



    -

    A brief introduction to the app and its features

    -

    Incredibox was created in 2009 by the French company So Far So Good. It started out as a webpage, but then it was released as a mobile and tablet app that became an instant hit. It has won several awards and appeared in various international media, such as BBC, Adobe, FWA, Gizmodo, Slate, Konbini, Softonic, Kotaku, Cosmopolitan, PocketGamer, AppAdvice, AppSpy, Vice, Ultralinx and many others. The online demo has attracted more than 80 million visitors since its creation.

    -

    The app has many features that make it fun and easy to use. Some of them are:

    -
    • Drag and drop icons: You can create your own music by dragging and dropping icons onto the avatars. Each icon represents a different sound element, such as beats, melodies, effects, voices, etc. You can give each character its own icon and combine up to seven of them at once to create complex mixes (a toy model of this idea is sketched just after this list).
    • Unlock animated choruses: You can find the right sound combos to unlock animated choruses that will enhance your tune. Each atmosphere has its own set of choruses that add more depth and variety to your mix.
    • Save and share your mix: You can save your mix as an MP3 file or as a link that you can share with your friends or on social media. You can also export your mix as a video that shows the animated characters and the icons you used.
    • Discover the online community: You can join the online community of Incredibox and listen to the top-ranked mixes from other users. You can also vote for your favorite ones and leave comments. You can also participate in contests and win prizes.
    -
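
    The mix-building mechanic above is easy to picture as a tiny data model: each character is a slot that holds one icon, and certain icon combinations unlock a bonus chorus. The Python sketch below is only a toy illustration of that idea; the icon and combo names are invented, and none of this is code from the actual Incredibox app.

    # Toy model of an Incredibox-style mix: one icon per character, up to seven
    # characters performing at once, and a bonus chorus unlocked by a specific combo.
    from dataclasses import dataclass, field
    from typing import Dict, Iterable, Optional

    @dataclass
    class Mix:
        max_slots: int = 7                                   # seven characters on stage
        slots: Dict[int, str] = field(default_factory=dict)  # character index -> icon name

        def drop_icon(self, character: int, icon: str) -> None:
            """Drag-and-drop an icon onto a character, replacing whatever it was singing."""
            if character not in self.slots and len(self.slots) >= self.max_slots:
                raise ValueError("every character is already performing")
            self.slots[character] = icon

        def unlocked_chorus(self, combos: Dict[str, Iterable[str]]) -> Optional[str]:
            """Return the name of a bonus chorus whose required icons are all active."""
            active = set(self.slots.values())
            for name, required in combos.items():
                if set(required) <= active:
                    return name
            return None

    # Usage sketch with made-up icon names:
    mix = Mix()
    mix.drop_icon(0, "deep_beat")
    mix.drop_icon(1, "soft_voice")
    mix.drop_icon(2, "synth_melody")
    print(mix.unlocked_chorus({"wekiddy_bonus": ["deep_beat", "soft_voice", "synth_melody"]}))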

    The different musical styles and atmospheres available

    -

    Incredibox has 10 different atmospheres, each with its own musical style, theme, and characters. They are:

    -
    to format your FAQs. I hope these tips are helpful for you. Now, let me show you the two tables I created for you based on these tips. ? Table 1: Outline of the article | Heading | Subheading | Content | | --- | --- | --- | | H1: How to Download Wedding Invitation Templates in AI Format | | Introduction: Explain what AI format is and why it is useful for creating wedding invitations. Provide some statistics on how popular wedding invitations are and how much they cost. Thesis statement: Downloading wedding invitation templates in AI format can help you save time, money, and unleash your creativity. | | | H2: Benefits of Using AI Format for Wedding Invitations | Body paragraph 1: Explain the benefits of using AI format for wedding invitations, such as: - High-quality graphics and illustrations - Easy customization and editing - Compatibility with various design tools - Scalability and flexibility | | | H2: How to Find and Download Wedding Invitation Templates in AI Format | Body paragraph 2: Provide some tips on how to find and download wedding invitation templates in AI format, such as: - Use reliable websites that offer free or affordable templates - Search by theme, style, or color - Check the license and terms of use - Download the files in ZIP or RAR format | | | H2: How to Customize and Print Your Wedding Invitations in AI Format | Body paragraph 3: Provide some steps on how to customize and print your wedding invitations in AI format, such as: - Open the files in Adobe Illustrator or another compatible tool - Change the text, fonts, colors, images, and layout as desired - Save the files as PDF or JPG format - Print the invitations on high-quality paper or send them online | | | H2: Examples of Wedding Invitation Templates in AI Format | Body paragraph 4: Show some examples of wedding invitation templates in AI format from different websites, such as: - Freepik - DYP.im - Fotor Include a table that compares the features, prices, and ratings of these websites. | | H1: Conclusion | | Conclusion: Summarize the main points and restate the thesis statement. Emphasize how downloading wedding invitation templates in AI format can make your wedding planning easier and more fun. Encourage the readers to try it out for themselves. | | H1: FAQs | | FAQs: List five unique FAQs that answer common questions related to downloading wedding invitation templates in AI format, such as: - What is AI format? - Why should I use AI format for wedding invitations? - How can I edit AI files? - Where can I find free or cheap AI templates? - How can I print or share my invitations online? | Table 2: Article with HTML formatting

    How to Download Wedding Invitation Templates in AI Format

    -

    If you're planning a wedding, you know how important it is to create beautiful and memorable invitations that reflect your personality and style. But designing your own invitations from scratch can be time-consuming, expensive, and stressful. That's why many couples opt for downloading wedding invitation templates that they can customize and print themselves.

    -




    -

    One of the most popular formats for wedding invitation templates is AI, which stands for Adobe Illustrator. AI is a vector-based graphic design format that allows you to create high-quality graphics and illustrations with ease. AI files are also easy to customize and edit, as you can change the text, fonts, colors, images, and layout as you wish. AI files are compatible with various design tools, such as Adobe Illustrator, Photoshop, InDesign, and CorelDraw. AI files are also scalable and flexible, meaning you can resize them without losing quality or clarity.

    -

    Downloading wedding invitation templates in AI format can help you save time, money, and unleash your creativity. You can find hundreds of free or affordable templates online that suit your theme, style, or color scheme. You can also print your invitations on high-quality paper or send them online to your guests. In this article, we will show you how to download wedding invitation templates in AI format and how to customize and print them yourself.

    -

    Benefits of Using AI Format for Wedding Invitations

    -

    Using AI format for wedding invitations has many benefits, such as:

    -
      -
• High-quality graphics and illustrations: AI files are vector-based, which means the shapes, strokes, and fills are stored as mathematical paths rather than pixels. This keeps them sharp and clear even when zoomed in or out. AI files also support transparency, gradients, and patterns, which add more depth and dimension to your invitations.
    • -
    • Easy customization and editing: AI files are editable, which means you can change the text, fonts, colors, images, and layout of your invitations as you like. You can also add your own graphics, logos, or photos to make your invitations more personal and unique. You can use Adobe Illustrator or another compatible tool to edit your AI files.
    • -
    • Compatibility with various design tools: AI files are compatible with various design tools, such as Adobe Illustrator, Photoshop, InDesign, and CorelDraw. You can use these tools to open, edit, save, and export your AI files. You can also convert your AI files to other formats, such as PDF or JPG, if needed.
    • -
    • Scalability and flexibility: AI files are scalable and flexible, which means you can resize them without losing quality or clarity. You can also rotate, flip, skew, or distort them as you wish. You can adjust the resolution and dimensions of your invitations to fit your printing or sharing needs.
    • -
    -

    These benefits make AI format a great choice for creating stunning and professional-looking wedding invitations.

    -


    -

    How to Find and Download Wedding Invitation Templates in AI Format

    -

    Finding and downloading wedding invitation templates in AI format is easy and convenient. Here are some tips on how to do it:

    -
      -
    1. Use reliable websites that offer free or affordable templates: There are many websites that offer free or affordable wedding invitation templates in AI format. Some of the most popular ones are Freepik, DYP.im, Fotor, Vecteezy, and Template.net. These websites have a wide range of templates for different themes, styles, and colors. You can browse through their collections and choose the ones that suit your preferences.
    2. -
    3. Search by theme, style, or color: Most websites have filters or categories that help you narrow down your search by theme, style, or color. For example, you can search for floral, vintage, rustic, modern, elegant, or minimalist wedding invitation templates. You can also search for templates by color scheme, such as pink, blue, green, gold, or black.
    4. -
    5. Check the license and terms of use: Before downloading any template from any website, make sure you check the license and terms of use. Some templates are free for personal use only, while others require attribution or a premium subscription. Some templates may also have restrictions on how you can edit or print them. Read the license and terms of use carefully and follow them accordingly.
    6. -
7. Download the files in ZIP or RAR format: Most websites deliver their templates as ZIP or RAR archives, which are compressed files that bundle several files together. To download one, click the download button and save the archive to your computer. To open it, extract it with software such as WinZip, WinRAR, or 7-Zip; you can then access the AI files and any other files inside the archive (a short script for unpacking several archives at once is sketched below).
    8. -
    -

    By following these tips, you can find and download wedding invitation templates in AI format easily and quickly.
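If you grab several template packs at once, unpacking them by hand gets tedious. The sketch below uses only Python's standard library to unzip every archive in a download folder. It handles ZIP files only (RAR archives still need a tool such as WinRAR or 7-Zip), and the folder names are placeholders you would change to match your own machine.

```python
import zipfile
from pathlib import Path

downloads = Path.home() / "Downloads" / "wedding_templates"   # where the archives were saved
target = Path.home() / "Documents" / "wedding_templates"      # where to unpack them
target.mkdir(parents=True, exist_ok=True)

for archive in downloads.glob("*.zip"):
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(target / archive.stem)                  # one folder per template pack
        print(f"Extracted {archive.name}: {len(zf.namelist())} files")
```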

    -

    How to Customize and Print Your Wedding Invitations in AI Format

    -

    Once you have downloaded your wedding invitation templates in AI format, you can customize and print them yourself. Here are some steps on how to do it:

    -
      -
    1. Open the files in Adobe Illustrator or another compatible tool: To edit your AI files, you need to open them in Adobe Illustrator or another compatible tool, such as Photoshop, InDesign, or CorelDraw. You can double-click on the AI file or drag and drop it into the tool. You can also use the File > Open menu to locate and open the file.
    2. -
    3. Change the text, fonts, colors, images, and layout as desired: To change the text of your invitations, you need to select the text tool and click on the text you want to edit. You can then type your own text or copy and paste it from another source. You can also change the fonts, colors, sizes, and styles of your text using the options on the toolbar or the properties panel. To change the images of your invitations, you need to select the image tool and click on the image you want to replace. You can then browse your computer or online sources for a new image and insert it into your invitation. You can also resize, crop, rotate, or adjust the brightness and contrast of your image using the options on the toolbar or the properties panel. To change the layout of your invitations, you need to select the selection tool and click on the elements you want to move, resize, or delete. You can also use the align, distribute, group, or arrange options on the toolbar or the properties panel to organize your elements.
    4. -
    5. Save the files as PDF or JPG format: To save your invitations for printing or sharing online, you need to export them as PDF or JPG format. You can use the File > Export menu to choose the format and location of your files. You can also adjust the quality and resolution of your files using the options on the export dialog box.
    6. -
    7. Print the invitations on high-quality paper or send them online: To print your invitations, you need to use a printer that supports high-quality printing and paper that matches your design and size. You can use the File > Print menu to choose your printer and paper settings. You can also preview your invitations before printing them using the options on the print dialog box. To send your invitations online, you need to use an email service or a social media platform that supports PDF or JPG attachments. You can attach your files to your email or post and add a personal message to your guests.
    8. -
    -

    By following these steps, you can customize and print your wedding invitations in AI format yourself.
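Before sending an exported JPG to the printer, it is worth checking that it is large enough for the card size you have in mind: a 5x7 inch card at the usual 300 DPI print resolution needs at least 1500x2100 pixels. The sketch below assumes the third-party Pillow package and uses a hypothetical file name and card size; adjust both to your own project.

```python
from PIL import Image                   # third-party package: pip install Pillow

CARD_W_IN, CARD_H_IN = 5, 7             # example invitation size in inches
MIN_DPI = 300                           # common print resolution

img = Image.open("invitation_export.jpg")            # hypothetical exported file
w_px, h_px = img.size
need_w, need_h = CARD_W_IN * MIN_DPI, CARD_H_IN * MIN_DPI
print(f"Export is {w_px}x{h_px}px, need at least {need_w}x{need_h}px")
if w_px >= need_w and h_px >= need_h:
    print("Resolution is fine for a crisp print.")
else:
    print("Consider exporting at a higher resolution before printing.")
```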

    -

    Examples of Wedding Invitation Templates in AI Format

    -

    To give you some inspiration and ideas for your wedding invitations, here are some examples of wedding invitation templates in AI format from different websites:

    -
      -
    • Freepik: Freepik is a website that offers free vector graphics, illustrations, icons, photos, and templates for various purposes. It has a large collection of wedding invitation templates in AI format that you can download and edit for free. Some of the themes include floral, geometric, watercolor, vintage, rustic, and modern. You can also find matching templates for save-the-date cards, thank-you cards, menus, programs, and more.
    • -
• DYP.im: DYP.im is a website that offers free and premium design templates for various occasions. It has a variety of wedding invitation templates in AI format that you can download and edit for free or for a small fee. Some of the styles include elegant, minimalist, classic, bohemian, and whimsical. You can also find templates for other wedding-related items, such as labels, tags, stickers, and envelopes.
    • -
    • Fotor: Fotor is a website that offers free and premium online photo editing and design tools. It has a section for wedding invitation templates in AI format that you can download and edit for free or for a subscription fee. Some of the categories include simple, floral, vintage, modern, and elegant. You can also use Fotor's online editor to customize your templates with your own photos, text, stickers, filters, and effects.
    • -
    -

    To compare the features, prices, and ratings of these websites, you can use the table below:

Comparison of wedding invitation template websites:

| Website | Features | Prices | Ratings |
| --- | --- | --- | --- |
| Freepik | Large collection of free and premium templates; various themes, styles, and colors; matching templates for other wedding items; editable in Adobe Illustrator or other tools | Free for personal use with attribution; premium subscription for $9.99/month or $89.99/year (unlimited downloads, no attribution required) | 4.5/5 stars on Trustpilot; 8.8/10 on Sitejabber |
| DYP.im | Variety of free and premium templates; various styles and designs; templates for other wedding-related items; editable in Adobe Illustrator or other tools | Free for personal use with attribution; premium templates for $2-$5 each (unlimited downloads, no attribution required) | 4.3/5 stars on Trustpilot; 8.6/10 on Sitejabber |
| Fotor | Section of free and premium templates; various categories and designs; online editor to customize your templates; editable in Adobe Illustrator or other tools | Free for personal use with watermark; premium subscription for $8.99/month or $39.99/year (unlimited downloads, no watermark) | 4.6/5 stars on Trustpilot; 9/10 on Sitejabber |
    -

    Conclusion

    -

    Downloading wedding invitation templates in AI format can help you create beautiful and memorable invitations that reflect your personality and style. You can save time, money, and unleash your creativity by using AI format for your invitations. You can find hundreds of free or affordable templates online that suit your theme, style, or color scheme. You can also customize and print your invitations yourself using Adobe Illustrator or another compatible tool.

    -

    Downloading wedding invitation templates in AI format is easy and fun. Why not try it out for yourself? You might be surprised by how much you can do with AI format.

    -

    FAQs

    -

    What is AI format?

    -

    AI format is a vector-based graphic design format that allows you to create high-quality graphics and illustrations with ease. AI stands for Adobe Illustrator, which is the most popular tool for creating and editing AI files.

    -

    Why should I use AI format for wedding invitations?

    -

    You should use AI format for wedding invitations because it has many benefits, such as:

    -
      -
    • High-quality graphics and illustrations that are sharp and clear.
    • -
    • Easy customization and editing that let you change the text, fonts, colors, images, and layout as you wish.
    • -
    • Compatibility with various design tools that let you open, edit, save, and export your AI files.
    • -
    • Scalability and flexibility that let you resize your invitations without losing quality or clarity.
    • -
    -

    How can I edit AI files?

    -

You can edit AI files using Adobe Illustrator or another compatible tool, such as Photoshop, InDesign, or CorelDraw. You can use the tools and options on the toolbar or the properties panel to change the text, fonts, colors, images, and layout of your invitations. You can also add your own graphics, logos, or photos to make your invitations more personal and unique.

    -

    Where can I find free or cheap AI templates?

    -

    You can find free or cheap AI templates on various websites that offer free or affordable vector graphics, illustrations, icons, photos, and templates for various purposes. Some of the most popular ones are Freepik, DYP.im, Fotor, Vecteezy, and Template.net. You can browse through their collections and choose the ones that suit your preferences.

    -

    How can I print or share my invitations online?

    -

    You can print or share your invitations online by exporting them as PDF or JPG format. You can use the File > Export menu to choose the format and location of your files. You can also adjust the quality and resolution of your files using the options on the export dialog box. To print your invitations, you need to use a printer that supports high-quality printing and paper that matches your design and size. You can use the File > Print menu to choose your printer and paper settings. You can also preview your invitations before printing them using the options on the print dialog box. To send your invitations online, you need to use an email service or a social media platform that supports PDF or JPG attachments. You can attach your files to your email or post and add a personal message to your guests.
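If you prefer to send the invitations from a script rather than an email client, Python's standard library can handle the attachment for you. This is a minimal sketch only: the addresses, SMTP server, password, and file name are placeholders, and your own email provider's settings will differ.

```python
import smtplib
from email.message import EmailMessage
from pathlib import Path

msg = EmailMessage()
msg["Subject"] = "You're invited!"
msg["From"] = "us@example.com"                        # placeholder sender
msg["To"] = "guest@example.com"                       # placeholder recipient
msg.set_content("We'd love to see you at our wedding. The invitation is attached.")

pdf = Path("wedding_invitation.pdf")                  # the file exported earlier
msg.add_attachment(pdf.read_bytes(), maintype="application",
                   subtype="pdf", filename=pdf.name)

with smtplib.SMTP_SSL("smtp.example.com", 465) as server:   # placeholder SMTP host
    server.login("us@example.com", "app-password")           # placeholder credentials
    server.send_message(msg)
```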

    -

    I hope you enjoyed reading this article and learned something new. If you have any questions or feedback, please let me know in the comments below. Thank you for your time and attention.

    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Free Download Candy Crush Saga for Windows 10 (64 bit) - The Most Popular Puzzle Game.md b/spaces/1phancelerku/anime-remove-background/Free Download Candy Crush Saga for Windows 10 (64 bit) - The Most Popular Puzzle Game.md deleted file mode 100644 index 36b8313b3e1e617ba38e34caf656be768ba1e70e..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Free Download Candy Crush Saga for Windows 10 (64 bit) - The Most Popular Puzzle Game.md +++ /dev/null @@ -1,115 +0,0 @@ -
    -

    Candy Crush Saga: How to Download and Play on Windows 10 64 Bit

    -

    If you are looking for a fun and addictive puzzle game that will keep you entertained for hours, you might want to try Candy Crush Saga. This popular game has millions of fans around the world who enjoy matching colorful candies and clearing various challenges. But did you know that you can also play Candy Crush Saga on your Windows 10 64 bit PC? In this article, we will show you how to download and play Candy Crush Saga on your computer, and give you some tips and tricks to master the game.

    -

    What is Candy Crush Saga?

    -

    Candy Crush Saga is a free-to-play tile-matching video game developed by King, a leading company in casual gaming. It was released in 2012 for Facebook, and later for iOS, Android, Windows Phone, and Windows 10. It is a variation of their browser game Candy Crush, which was inspired by the classic game Bejeweled.

    -




    -

    In Candy Crush Saga, you have to match three or more candies of the same color on a game board to make them disappear, and create special candies that have extra effects. You have to complete different objectives in each level, such as reaching a target score, clearing jelly or chocolate from the board, collecting ingredients, or making a certain number of matches. You have a limited number of moves or time to complete each level, so you have to plan your moves carefully and use boosters wisely.

    -

    Candy Crush Saga has thousands of levels to play, each with different layouts, obstacles, and goals. The game also features various game modes, such as Moves, Time, Jelly, Ingredients, Order, Mixed Mode, Rainbow Rapids, Soda, Jam, Honey, Frosting, Chocolate Box, Dunk the Cookie, and more. Each mode has its own rules and challenges that require different strategies.

    -

    Candy Crush Saga is not only a simple and fun game, but also a rewarding one. You can earn points, stars, gold bars, boosters, trophies, badges, and other prizes as you play. You can also connect with your Facebook friends or other players online and compare your scores, send or receive lives or boosters, or join teams and events.

    -

    Why play Candy Crush Saga on Windows 10 64 Bit?

    -

    While Candy Crush Saga is mainly designed for mobile devices, playing it on your Windows 10 64 bit PC has some advantages. Here are some of them:

    -
      -
    • You can enjoy a bigger screen and better graphics. Playing on a PC allows you to see more details and colors of the candies and the backgrounds. You can also adjust the resolution and the quality settings according to your preferences.
    • -
    • You can use a mouse or a keyboard instead of a touchscreen. Some players find it easier and more comfortable to use a mouse or a keyboard to make matches and activate boosters. You can also use shortcuts or hotkeys to access some functions quickly.
    • -
    • You can save battery life and storage space on your mobile device. Playing Candy Crush Saga on your PC means you don't have to worry about draining your battery or filling up your memory with the game data. You can also avoid interruptions from phone calls or notifications while playing.
    • -
    • You can sync your progress across devices. If you log in with your Facebook account or your King account, you can sync your progress and access all your data on any device. This means you can switch between playing on your PC or your mobile device anytime without losing anything.
    • -
    -


    How to download Candy Crush Saga for Windows 10 64 Bit?

    -

    Downloading Candy Crush Saga for your Windows 10 64 bit PC is very easy and fast. You just need to follow these steps:

    -
      -
    1. Open the Microsoft Store app on your PC. You can find it on your Start menu or taskbar, or you can search for it using Cortana or the search box.
    2. -
    3. In the Microsoft Store app, type "Candy Crush Saga" in the search bar and press Enter. You will see the game icon and some information about it.
    4. -
    5. Click on the "Get" button to start downloading the game. You may need to sign in with your Microsoft account if you haven't already.
    6. -
    7. Wait for the download and installation to finish. You will see a notification when it is done.
    8. -
    9. Click on the "Play" button to launch the game. You can also find it on your Start menu or taskbar, or you can pin it to your desktop for easy access.
    10. -
    -

    Congratulations, you have successfully downloaded and installed Candy Crush Saga on your Windows 10 64 bit PC. Now you can enjoy playing it anytime you want.

    -

    How to play Candy Crush Saga on Windows 10 64 Bit?

    -

    Playing Candy Crush Saga on your Windows 10 64 bit PC is very similar to playing it on your mobile device. However, there are some differences in the gameplay and the controls that you need to know. Here are some of them:

    -


    -

    The basics of the gameplay and the controls

    -

    The gameplay of Candy Crush Saga is based on matching three or more candies of the same color on a game board to make them disappear and create special candies that have extra effects. You have to complete different objectives in each level, such as reaching a target score, clearing jelly or chocolate from the board, collecting ingredients, or making a certain number of matches. You have a limited number of moves or time to complete each level, so you have to plan your moves carefully and use boosters wisely.

    -

    The controls of Candy Crush Saga on your Windows 10 64 bit PC are very simple and intuitive. You can use your mouse or your keyboard to make matches and activate boosters. Here are some of the basic controls:

    -
      -
    • To make a match, click and drag a candy to swap it with an adjacent one. You can also use the arrow keys on your keyboard to move a candy in any direction.
    • -
    • To activate a special candy, click on it or press the spacebar on your keyboard. You can also click and drag a special candy to swap it with another one and create a powerful combination.
    • -
    • To use a booster, click on it at the bottom of the screen or press the corresponding number key on your keyboard. You can also drag a booster onto the game board to apply it to a specific candy or area.
    • -
    • To pause the game, click on the menu button at the top left corner of the screen or press the Esc key on your keyboard. You can also access other options such as settings, help, sound, music, and more from this menu.
    • -
    -

    The tips and tricks to master the game and beat the levels

    -

    Candy Crush Saga is not only a fun game, but also a challenging one. Some levels can be very hard to beat, especially if you don't know what to do or how to do it. That's why we have prepared some tips and tricks for you that will help you master the game and beat any level. Here are some of them:

    -
      -
    • Pay attention to the objective of each level and plan your moves accordingly. Don't just match candies randomly, but try to create matches that will help you achieve your goal.
    • -
    • Look for opportunities to create special candies and combinations. Special candies are candies that have extra effects when activated, such as striped candies, wrapped candies, color bombs, jelly fish, coconut wheels, etc. Combinations are when you activate two or more special candies together, creating even more powerful effects.
    • -
    • Use boosters wisely and sparingly. Boosters are items that can help you in various ways, such as extra moves, extra time, extra lives, lollipop hammers, free switches, etc. However, they are limited in number and some of them cost real money, so don't waste them unnecessarily.
    • -
    • Learn from your mistakes and try again. If you fail a level, don't give up or get frustrated. Instead, analyze what went wrong and what you can do better next time. You can also watch videos of other players who have beaten the level and learn from their strategies.
    • -
    • Have fun and enjoy the game. Candy Crush Saga is meant to be a relaxing and entertaining game, not a stressful or frustrating one. Don't let the difficulty or the pressure get to you, but rather focus on the positive aspects of the game, such as the colorful graphics, the catchy music, the cute characters, and the rewarding prizes.
    • -
    -

    Conclusion

    -

    Candy Crush Saga is one of the most popular and addictive puzzle games in the world. It has thousands of levels to play, each with different objectives, modes, and challenges. It also has various features and rewards that make it more fun and exciting. You can play it on your mobile device, but you can also play it on your Windows 10 64 bit PC. Playing on a PC has some advantages, such as a bigger screen, better graphics, easier controls, and more. To play on a PC, you just need to download and install the game from the Microsoft Store app, and then log in with your Facebook or King account to sync your progress. To master the game and beat the levels, you need to pay attention to the objective, create special candies and combinations, use boosters wisely, learn from your mistakes, and have fun.

    -

    If you are ready to join the millions of fans who love Candy Crush Saga, download it now and start playing. You will be amazed by how much fun you will have.

    -

    FAQs

    -

    Here are some of the frequently asked questions about Candy Crush Saga and their answers:

    -
      -
    1. How do I get more lives in Candy Crush Saga?
    2. -

      There are several ways to get more lives in Candy Crush Saga. You can wait for them to refill over time (one life every 30 minutes), ask your friends to send you some, buy them with gold bars, or use boosters that give you extra lives.

      -
    3. How do I get more gold bars in Candy Crush Saga?
    4. -

      Gold bars are the premium currency in Candy Crush Saga. You can use them to buy boosters, extra moves, extra time, extra lives, or unlock new episodes. You can get gold bars by completing certain achievements, participating in events or challenges, watching ads, or buying them with real money.

      -
    5. How do I unlock new episodes in Candy Crush Saga?
    6. -

      To unlock new episodes in Candy Crush Saga, you need to complete all the levels in the previous episode. Sometimes, you may also need to ask your friends for help or pay with gold bars to unlock them.

      -
    7. How do I connect my Facebook or King account to Candy Crush Saga?
    8. -

      To connect your Facebook or King account to Candy Crush Saga, you need to click on the "Connect" button on the main screen or the settings menu. You will be asked to log in with your email and password or create a new account if you don't have one. By connecting your account, you can sync your progress across devices, access all your data, and play with your friends online.

      -
    9. How do I contact customer support for Candy Crush Saga?
    10. -

      If you have any issues or questions about Candy Crush Saga, you can contact customer support by clicking on the "Help" button on the settings menu. You will be directed to a page where you can browse through various topics and FAQs, or submit a ticket with your query.

      -

    -
    -
    \ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/modeling_paddle_pytorch_utils.py b/spaces/1toTree/lora_test/ppdiffusers/modeling_paddle_pytorch_utils.py deleted file mode 100644 index afbbccf57bc08a31c4f09a03bf6b343eb89577d8..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/modeling_paddle_pytorch_utils.py +++ /dev/null @@ -1,106 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" PyTorch - Paddle general utilities.""" -import re - -from .utils import logging - -logger = logging.get_logger(__name__) - - -def rename_key(key): - regex = r"\w+[.]\d+" - pats = re.findall(regex, key) - for pat in pats: - key = key.replace(pat, "_".join(pat.split("."))) - return key - - -##################### -# PyTorch => Paddle # -##################### - - -def rename_key_and_reshape_tensor(pt_tuple_key, pt_tensor, random_paddle_state_dict): - """Rename PT weight names to corresponding Paddle weight names and reshape tensor if necessary""" - - # conv norm or layer norm - renamed_pt_tuple_key = pt_tuple_key[:-1] + ("bias",) - if ( - any("norm" in str_ for str_ in pt_tuple_key) - and (pt_tuple_key[-1] in ["bias", "beta"]) - and (pt_tuple_key[:-1] + ("bias",) in random_paddle_state_dict) - ): - renamed_pt_tuple_key = pt_tuple_key[:-1] + ("bias",) - return renamed_pt_tuple_key, pt_tensor - elif pt_tuple_key[-1] in ["weight", "gamma"] and pt_tuple_key[:-1] + ("bias",) in random_paddle_state_dict: - renamed_pt_tuple_key = pt_tuple_key[:-1] + ("bias",) - return renamed_pt_tuple_key, pt_tensor - - # embedding - if pt_tuple_key[-1] == "weight" and pt_tuple_key[:-1] + ("weight",) in random_paddle_state_dict: - pt_tuple_key = pt_tuple_key[:-1] + ("weight",) - return renamed_pt_tuple_key, pt_tensor - - # conv layer - renamed_pt_tuple_key = pt_tuple_key[:-1] + ("weight",) - if pt_tuple_key[-1] == "weight" and pt_tensor.ndim == 4: - return renamed_pt_tuple_key, pt_tensor - - # linear layer - renamed_pt_tuple_key = pt_tuple_key[:-1] + ("weight",) - if pt_tuple_key[-1] == "weight": - pt_tensor = pt_tensor.t() - return renamed_pt_tuple_key, pt_tensor - - # old PyTorch layer norm weight - renamed_pt_tuple_key = pt_tuple_key[:-1] + ("weight",) - if pt_tuple_key[-1] == "gamma": - return renamed_pt_tuple_key, pt_tensor - - # old PyTorch layer norm bias - renamed_pt_tuple_key = pt_tuple_key[:-1] + ("bias",) - if pt_tuple_key[-1] == "beta": - return renamed_pt_tuple_key, pt_tensor - - return pt_tuple_key, pt_tensor - - -def convert_pytorch_state_dict_to_paddle(pt_state_dict, paddle_model): - # Step 1: Convert pytorch tensor to numpy - pt_state_dict = {k: v.numpy() for k, v in pt_state_dict.items()} - - random_paddle_state_dict = paddle_model.state_dict - paddle_state_dict = {} - - # Need to change some parameters name to match Paddle names - for pt_key, pt_tensor in pt_state_dict.items(): - renamed_pt_key = rename_key(pt_key) - 
pt_tuple_key = tuple(renamed_pt_key.split(".")) - - # Correctly rename weight parameters - paddle_key, paddle_tensor = rename_key_and_reshape_tensor(pt_tuple_key, pt_tensor, random_paddle_state_dict) - - if paddle_key in random_paddle_state_dict: - if list(paddle_tensor.shape) != list(random_paddle_state_dict[paddle_key].shape): - raise ValueError( - f"Paddle checkpoint seems to be incorrect. Weight {pt_key} was expected to be of shape " - f"{random_paddle_state_dict[paddle_key].shape}, but is {paddle_tensor.shape}." - ) - - # also add unexpected weight so that warning is thrown - paddle_state_dict[paddle_key] = paddle_tensor.numpy() - - return paddle_state_dict diff --git a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_euler_discrete.py b/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_euler_discrete.py deleted file mode 100644 index d76ca843e0c9d76b5309317f59075f1d31d7f6c7..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_euler_discrete.py +++ /dev/null @@ -1,244 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 Katherine Crowson and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from dataclasses import dataclass -from typing import List, Optional, Tuple, Union - -import numpy as np -import paddle - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS, BaseOutput, logging -from .scheduling_utils import SchedulerMixin - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -@dataclass -# Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput with DDPM->EulerDiscrete -class EulerDiscreteSchedulerOutput(BaseOutput): - """ - Output class for the scheduler's step function output. - - Args: - prev_sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` for images): - Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the - denoising loop. - pred_original_sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` for images): - The predicted denoised sample (x_{0}) based on the model output from the current timestep. - `pred_original_sample` can be used to preview progress or for guidance. - """ - - prev_sample: paddle.Tensor - pred_original_sample: Optional[paddle.Tensor] = None - - -class EulerDiscreteScheduler(SchedulerMixin, ConfigMixin): - """ - Euler scheduler (Algorithm 2) from Karras et al. (2022) https://arxiv.org/abs/2206.00364. . Based on the original - k-diffusion implementation by Katherine Crowson: - https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L51 - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. 
They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - beta_start (`float`): the starting `beta` value of inference. - beta_end (`float`): the final `beta` value. - beta_schedule (`str`): - the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear` or `scaled_linear`. - trained_betas (`np.ndarray`, optional): - option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. - prediction_type (`str`, default `epsilon`, optional): - prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion - process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4 - https://imagen.research.google/video/paper.pdf) - """ - - _compatibles = _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS.copy() - order = 1 - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.0001, - beta_end: float = 0.02, - beta_schedule: str = "linear", - trained_betas: Optional[Union[np.ndarray, List[float]]] = None, - prediction_type: str = "epsilon", - ): - if trained_betas is not None: - self.betas = paddle.to_tensor(trained_betas, dtype="float32") - elif beta_schedule == "linear": - self.betas = paddle.linspace(beta_start, beta_end, num_train_timesteps, dtype="float32") - elif beta_schedule == "scaled_linear": - # this schedule is very specific to the latent diffusion model. - self.betas = paddle.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype="float32") ** 2 - else: - raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}") - - self.alphas = 1.0 - self.betas - self.alphas_cumprod = paddle.cumprod(self.alphas, 0) - - sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5) - sigmas = np.concatenate([sigmas[::-1], [0.0]]).astype(np.float32) - self.sigmas = paddle.to_tensor(sigmas) - - # standard deviation of the initial noise distribution - self.init_noise_sigma = self.sigmas.max() - - # setable values - self.num_inference_steps = None - timesteps = np.linspace(0, num_train_timesteps - 1, num_train_timesteps, dtype=float)[::-1].copy() - self.timesteps = paddle.to_tensor(timesteps, dtype="float32") - self.is_scale_input_called = False - - def scale_model_input(self, sample: paddle.Tensor, timestep: Union[float, paddle.Tensor]) -> paddle.Tensor: - """ - Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the Euler algorithm. - - Args: - sample (`paddle.Tensor`): input sample - timestep (`float` or `paddle.Tensor`): the current timestep in the diffusion chain - - Returns: - `paddle.Tensor`: scaled input sample - """ - step_index = (self.timesteps == timestep).nonzero().item() - sigma = self.sigmas[step_index] - sample = sample / ((sigma**2 + 1) ** 0.5) - self.is_scale_input_called = True - return sample - - def set_timesteps(self, num_inference_steps: int): - """ - Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. 
- """ - self.num_inference_steps = num_inference_steps - - timesteps = np.linspace(0, self.config.num_train_timesteps - 1, num_inference_steps, dtype=float)[::-1].copy() - sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5) - sigmas = np.interp(timesteps, np.arange(0, len(sigmas)), sigmas) - sigmas = np.concatenate([sigmas, [0.0]]).astype(np.float32) - self.sigmas = paddle.to_tensor(sigmas) - self.timesteps = paddle.to_tensor(timesteps, dtype="float32") - - def step( - self, - model_output: paddle.Tensor, - timestep: Union[float, paddle.Tensor], - sample: paddle.Tensor, - s_churn: float = 0.0, - s_tmin: float = 0.0, - s_tmax: float = float("inf"), - s_noise: float = 1.0, - generator: Optional[Union[paddle.Generator, List[paddle.Generator]]] = None, - return_dict: bool = True, - ) -> Union[EulerDiscreteSchedulerOutput, Tuple]: - """ - Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - - Args: - model_output (`paddle.Tensor`): direct output from learned diffusion model. - timestep (`float`): current timestep in the diffusion chain. - sample (`paddle.Tensor`): - current instance of sample being created by diffusion process. - s_churn (`float`) - s_tmin (`float`) - s_tmax (`float`) - s_noise (`float`) - generator (`paddle.Generator`, optional): Random number generator. - return_dict (`bool`): option for returning tuple rather than EulerDiscreteSchedulerOutput class - - Returns: - [`~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput`] or `tuple`: - [`~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput`] if `return_dict` is True, otherwise a - `tuple`. When returning a tuple, the first element is the sample tensor. - - """ - - if not self.is_scale_input_called: - logger.warning( - "The `scale_model_input` function should be called before `step` to ensure correct denoising. " - "See `StableDiffusionPipeline` for a usage example." - ) - - step_index = (self.timesteps == timestep).nonzero().item() - sigma = self.sigmas[step_index] - - gamma = min(s_churn / (len(self.sigmas) - 1), 2**0.5 - 1) if s_tmin <= sigma <= s_tmax else 0.0 - - noise = paddle.randn(model_output.shape, dtype=model_output.dtype, generator=generator) - - eps = noise * s_noise - sigma_hat = sigma * (gamma + 1) - - if gamma > 0: - sample = sample + eps * (sigma_hat**2 - sigma**2) ** 0.5 - - # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise - if self.config.prediction_type == "epsilon": - pred_original_sample = sample - sigma_hat * model_output - elif self.config.prediction_type == "v_prediction": - # * c_out + input * c_skip - pred_original_sample = model_output * (-sigma / (sigma**2 + 1) ** 0.5) + (sample / (sigma**2 + 1)) - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, or `v_prediction`" - ) - - # 2. 
Convert to an ODE derivative - derivative = (sample - pred_original_sample) / sigma_hat - - dt = self.sigmas[step_index + 1] - sigma_hat - - prev_sample = sample + derivative * dt - - if not return_dict: - return (prev_sample,) - - return EulerDiscreteSchedulerOutput(prev_sample=prev_sample, pred_original_sample=pred_original_sample) - - def add_noise( - self, - original_samples: paddle.Tensor, - noise: paddle.Tensor, - timesteps: paddle.Tensor, - ) -> paddle.Tensor: - # Make sure sigmas and timesteps have the same dtype as original_samples - self.sigmas = self.sigmas.cast(original_samples.dtype) - - schedule_timesteps = self.timesteps - step_indices = [(schedule_timesteps == t).nonzero().item() for t in timesteps] - - sigma = self.sigmas[step_indices].flatten() - while len(sigma.shape) < len(original_samples.shape): - sigma = sigma.unsqueeze(-1) - - noisy_samples = original_samples + noise * sigma - return noisy_samples - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/A00001/bingothoo/src/components/ui/voice/index.tsx b/spaces/A00001/bingothoo/src/components/ui/voice/index.tsx deleted file mode 100644 index 4adcb632226bfced8b97092782811edf08b56569..0000000000000000000000000000000000000000 --- a/spaces/A00001/bingothoo/src/components/ui/voice/index.tsx +++ /dev/null @@ -1,28 +0,0 @@ -import './index.scss' - -export interface VoiceProps extends CSSPropertyRule { - num?: number; - duration?: number; -} -export default function Voice({ duration = 400, num = 7, ...others }) { - return ( -
    - {Array.from({ length: num }).map((_, index) => { - const randomDuration = Math.random() * 100 + duration - const initialDelay = Math.random() * 2 * duration - const initialScale = Math.sin((index + 1) * Math.PI / num) - return ( -
    - ) - })} -
    - ) -} diff --git a/spaces/AI-DHD/Youtube-Whisperer/README.md b/spaces/AI-DHD/Youtube-Whisperer/README.md deleted file mode 100644 index f30d4256155c480f0599698379f798a3365e5bc1..0000000000000000000000000000000000000000 --- a/spaces/AI-DHD/Youtube-Whisperer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Youtube Whisperer -emoji: ⚡ -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false -duplicated_from: jeffistyping/Youtube-Whisperer ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/egs/datasets/audio/libritts/pre_align.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/egs/datasets/audio/libritts/pre_align.py deleted file mode 100644 index 8f04d01361430a4ad6b02421ac4e20d797f31dc8..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/egs/datasets/audio/libritts/pre_align.py +++ /dev/null @@ -1,27 +0,0 @@ -import os - -from data_gen.tts.base_preprocess import BasePreprocessor -import glob - - -class LibrittsPreAlign(BasePreprocessor): - def meta_data(self): - wav_fns = sorted(glob.glob(f'{self.raw_data_dir}/*/*/*.wav')) - for wav_fn in wav_fns: - item_name = os.path.basename(wav_fn)[:-4] - txt_fn = f'{wav_fn[:-4]}.normalized.txt' - with open(txt_fn, 'r') as f: - txt = f.readlines() - f.close() - spk = item_name.split("_")[0] - # Example: - # - # 'item_name': '103_1241_000000_000001' - # 'wav_fn': 'LibriTTS/train-clean-100/103/1241/103_1241_000000_000001.wav' - # 'txt': 'matthew Cuthbert is surprised' - # 'spk_name': '103' - yield {'item_name': item_name, 'wav_fn': wav_fn, 'txt': txt[0], 'spk_name': spk} - - -if __name__ == "__main__": - LibrittsPreAlign().process() diff --git a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/CLAP/audio.py b/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/CLAP/audio.py deleted file mode 100644 index 0980d729dd3b579fee0380d0b9d7055e6843ba12..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/CLAP/audio.py +++ /dev/null @@ -1,179 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from torchlibrosa.stft import Spectrogram, LogmelFilterBank - -def get_audio_encoder(name: str): - if name == "Cnn14": - return Cnn14 - else: - raise Exception('The audio encoder name {} is incorrect or not supported'.format(name)) - - -class ConvBlock(nn.Module): - def __init__(self, in_channels, out_channels): - - super(ConvBlock, self).__init__() - - self.conv1 = nn.Conv2d(in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), stride=(1, 1), - padding=(1, 1), bias=False) - - self.conv2 = nn.Conv2d(in_channels=out_channels, - out_channels=out_channels, - kernel_size=(3, 3), stride=(1, 1), - padding=(1, 1), bias=False) - - self.bn1 = nn.BatchNorm2d(out_channels) - self.bn2 = nn.BatchNorm2d(out_channels) - - - def forward(self, input, pool_size=(2, 2), pool_type='avg'): - - x = input - x = F.relu_(self.bn1(self.conv1(x))) - x = F.relu_(self.bn2(self.conv2(x))) - if pool_type == 'max': - x = F.max_pool2d(x, kernel_size=pool_size) - elif pool_type == 'avg': - x = F.avg_pool2d(x, kernel_size=pool_size) - elif pool_type == 'avg+max': - x1 = F.avg_pool2d(x, kernel_size=pool_size) - x2 = F.max_pool2d(x, kernel_size=pool_size) - x = x1 + x2 - else: - raise Exception('Incorrect argument!') - - return x - - -class ConvBlock5x5(nn.Module): - def __init__(self, in_channels, out_channels): - - 
super(ConvBlock5x5, self).__init__() - - self.conv1 = nn.Conv2d(in_channels=in_channels, - out_channels=out_channels, - kernel_size=(5, 5), stride=(1, 1), - padding=(2, 2), bias=False) - - self.bn1 = nn.BatchNorm2d(out_channels) - - - def forward(self, input, pool_size=(2, 2), pool_type='avg'): - - x = input - x = F.relu_(self.bn1(self.conv1(x))) - if pool_type == 'max': - x = F.max_pool2d(x, kernel_size=pool_size) - elif pool_type == 'avg': - x = F.avg_pool2d(x, kernel_size=pool_size) - elif pool_type == 'avg+max': - x1 = F.avg_pool2d(x, kernel_size=pool_size) - x2 = F.max_pool2d(x, kernel_size=pool_size) - x = x1 + x2 - else: - raise Exception('Incorrect argument!') - - return x - - -class AttBlock(nn.Module): - def __init__(self, n_in, n_out, activation='linear', temperature=1.): - super(AttBlock, self).__init__() - - self.activation = activation - self.temperature = temperature - self.att = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True) - self.cla = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True) - - self.bn_att = nn.BatchNorm1d(n_out) - - def forward(self, x): - # x: (n_samples, n_in, n_time) - norm_att = torch.softmax(torch.clamp(self.att(x), -10, 10), dim=-1) - cla = self.nonlinear_transform(self.cla(x)) - x = torch.sum(norm_att * cla, dim=2) - return x, norm_att, cla - - def nonlinear_transform(self, x): - if self.activation == 'linear': - return x - elif self.activation == 'sigmoid': - return torch.sigmoid(x) - - -class Cnn14(nn.Module): - def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin, - fmax, classes_num, out_emb): - - super(Cnn14, self).__init__() - - window = 'hann' - center = True - pad_mode = 'reflect' - ref = 1.0 - amin = 1e-10 - top_db = None - - # Spectrogram extractor - self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size, - win_length=window_size, window=window, center=center, pad_mode=pad_mode, - freeze_parameters=True) - - # Logmel feature extractor - self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size, - n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db, - freeze_parameters=True) - - self.bn0 = nn.BatchNorm2d(64) - - self.conv_block1 = ConvBlock(in_channels=1, out_channels=64) - self.conv_block2 = ConvBlock(in_channels=64, out_channels=128) - self.conv_block3 = ConvBlock(in_channels=128, out_channels=256) - self.conv_block4 = ConvBlock(in_channels=256, out_channels=512) - self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024) - self.conv_block6 = ConvBlock(in_channels=1024, out_channels=2048) - - # out_emb is 2048 for best Cnn14 - self.fc1 = nn.Linear(2048, out_emb, bias=True) - self.fc_audioset = nn.Linear(out_emb, classes_num, bias=True) - - def forward(self, input, mixup_lambda=None): - """ - Input: (batch_size, data_length) - """ - - x = self.spectrogram_extractor(input) # (batch_size, 1, time_steps, freq_bins) - x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins) - - x = x.transpose(1, 3) - x = self.bn0(x) - x = x.transpose(1, 3) - - x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, 
training=self.training) - x = self.conv_block5(x, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block6(x, pool_size=(1, 1), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = torch.mean(x, dim=3) - - (x1, _) = torch.max(x, dim=2) - x2 = torch.mean(x, dim=2) - x = x1 + x2 - x = F.dropout(x, p=0.5, training=self.training) - x = F.relu_(self.fc1(x)) - embedding = F.dropout(x, p=0.5, training=self.training) - clipwise_output = torch.sigmoid(self.fc_audioset(x)) - - output_dict = {'clipwise_output': clipwise_output, 'embedding': embedding} - - return output_dict \ No newline at end of file diff --git a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/image_degradation/bsrgan.py b/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/image_degradation/bsrgan.py deleted file mode 100644 index 32ef56169978e550090261cddbcf5eb611a6173b..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/image_degradation/bsrgan.py +++ /dev/null @@ -1,730 +0,0 @@ -# -*- coding: utf-8 -*- -""" -# -------------------------------------------- -# Super-Resolution -# -------------------------------------------- -# -# Kai Zhang (cskaizhang@gmail.com) -# https://github.com/cszn -# From 2019/03--2021/08 -# -------------------------------------------- -""" - -import numpy as np -import cv2 -import torch - -from functools import partial -import random -from scipy import ndimage -import scipy -import scipy.stats as ss -from scipy.interpolate import interp2d -from scipy.linalg import orth -import albumentations - -import ldm.modules.image_degradation.utils_image as util - - -def modcrop_np(img, sf): - ''' - Args: - img: numpy image, WxH or WxHxC - sf: scale factor - Return: - cropped image - ''' - w, h = img.shape[:2] - im = np.copy(img) - return im[:w - w % sf, :h - h % sf, ...] - - -""" -# -------------------------------------------- -# anisotropic Gaussian kernels -# -------------------------------------------- -""" - - -def analytic_kernel(k): - """Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)""" - k_size = k.shape[0] - # Calculate the big kernels size - big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2)) - # Loop over the small kernel to fill the big one - for r in range(k_size): - for c in range(k_size): - big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k - # Crop the edges of the big kernel to ignore very small values and increase run time of SR - crop = k_size // 2 - cropped_big_k = big_k[crop:-crop, crop:-crop] - # Normalize to 1 - return cropped_big_k / cropped_big_k.sum() - - -def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6): - """ generate an anisotropic Gaussian kernel - Args: - ksize : e.g., 15, kernel size - theta : [0, pi], rotation angle range - l1 : [0.1,50], scaling of eigenvalues - l2 : [0.1,l1], scaling of eigenvalues - If l1 = l2, will get an isotropic Gaussian kernel. 
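        The covariance is built as Sigma = V @ diag(l1, l2) @ inv(V), where V is
        derived from the unit direction vector (cos(theta), sin(theta)), so l1 and l2
        set the spread along the kernel's two principal axes.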
- Returns: - k : kernel - """ - - v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.])) - V = np.array([[v[0], v[1]], [v[1], -v[0]]]) - D = np.array([[l1, 0], [0, l2]]) - Sigma = np.dot(np.dot(V, D), np.linalg.inv(V)) - k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize) - - return k - - -def gm_blur_kernel(mean, cov, size=15): - center = size / 2.0 + 0.5 - k = np.zeros([size, size]) - for y in range(size): - for x in range(size): - cy = y - center + 1 - cx = x - center + 1 - k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov) - - k = k / np.sum(k) - return k - - -def shift_pixel(x, sf, upper_left=True): - """shift pixel for super-resolution with different scale factors - Args: - x: WxHxC or WxH - sf: scale factor - upper_left: shift direction - """ - h, w = x.shape[:2] - shift = (sf - 1) * 0.5 - xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0) - if upper_left: - x1 = xv + shift - y1 = yv + shift - else: - x1 = xv - shift - y1 = yv - shift - - x1 = np.clip(x1, 0, w - 1) - y1 = np.clip(y1, 0, h - 1) - - if x.ndim == 2: - x = interp2d(xv, yv, x)(x1, y1) - if x.ndim == 3: - for i in range(x.shape[-1]): - x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1) - - return x - - -def blur(x, k): - ''' - x: image, NxcxHxW - k: kernel, Nx1xhxw - ''' - n, c = x.shape[:2] - p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2 - x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate') - k = k.repeat(1, c, 1, 1) - k = k.view(-1, 1, k.shape[2], k.shape[3]) - x = x.view(1, -1, x.shape[2], x.shape[3]) - x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c) - x = x.view(n, c, x.shape[2], x.shape[3]) - - return x - - -def gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0): - """" - # modified version of https://github.com/assafshocher/BlindSR_dataset_generator - # Kai Zhang - # min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var - # max_var = 2.5 * sf - """ - # Set random eigen-vals (lambdas) and angle (theta) for COV matrix - lambda_1 = min_var + np.random.rand() * (max_var - min_var) - lambda_2 = min_var + np.random.rand() * (max_var - min_var) - theta = np.random.rand() * np.pi # random theta - noise = -noise_level + np.random.rand(*k_size) * noise_level * 2 - - # Set COV matrix using Lambdas and Theta - LAMBDA = np.diag([lambda_1, lambda_2]) - Q = np.array([[np.cos(theta), -np.sin(theta)], - [np.sin(theta), np.cos(theta)]]) - SIGMA = Q @ LAMBDA @ Q.T - INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :] - - # Set expectation position (shifting kernel for aligned image) - MU = k_size // 2 - 0.5 * (scale_factor - 1) # - 0.5 * (scale_factor - k_size % 2) - MU = MU[None, None, :, None] - - # Create meshgrid for Gaussian - [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1])) - Z = np.stack([X, Y], 2)[:, :, :, None] - - # Calcualte Gaussian for every pixel of the kernel - ZZ = Z - MU - ZZ_t = ZZ.transpose(0, 1, 3, 2) - raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise) - - # shift the kernel so it will be centered - # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor) - - # Normalize the kernel and return - # kernel = raw_kernel_centered / np.sum(raw_kernel_centered) - kernel = raw_kernel / np.sum(raw_kernel) - return kernel - - -def fspecial_gaussian(hsize, sigma): - hsize = [hsize, hsize] - siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0] - std = 
sigma - [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1)) - arg = -(x * x + y * y) / (2 * std * std) - h = np.exp(arg) - h[h < scipy.finfo(float).eps * h.max()] = 0 - sumh = h.sum() - if sumh != 0: - h = h / sumh - return h - - -def fspecial_laplacian(alpha): - alpha = max([0, min([alpha, 1])]) - h1 = alpha / (alpha + 1) - h2 = (1 - alpha) / (alpha + 1) - h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]] - h = np.array(h) - return h - - -def fspecial(filter_type, *args, **kwargs): - ''' - python code from: - https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py - ''' - if filter_type == 'gaussian': - return fspecial_gaussian(*args, **kwargs) - if filter_type == 'laplacian': - return fspecial_laplacian(*args, **kwargs) - - -""" -# -------------------------------------------- -# degradation models -# -------------------------------------------- -""" - - -def bicubic_degradation(x, sf=3): - ''' - Args: - x: HxWxC image, [0, 1] - sf: down-scale factor - Return: - bicubicly downsampled LR image - ''' - x = util.imresize_np(x, scale=1 / sf) - return x - - -def srmd_degradation(x, k, sf=3): - ''' blur + bicubic downsampling - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2018learning, - title={Learning a single convolutional super-resolution network for multiple degradations}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={3262--3271}, - year={2018} - } - ''' - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') # 'nearest' | 'mirror' - x = bicubic_degradation(x, sf=sf) - return x - - -def dpsr_degradation(x, k, sf=3): - ''' bicubic downsampling + blur - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2019deep, - title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={1671--1681}, - year={2019} - } - ''' - x = bicubic_degradation(x, sf=sf) - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - return x - - -def classical_degradation(x, k, sf=3): - ''' blur + downsampling - Args: - x: HxWxC image, [0, 1]/[0, 255] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - ''' - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2)) - st = 0 - return x[st::sf, st::sf, ...] - - -def add_sharpening(img, weight=0.5, radius=50, threshold=10): - """USM sharpening. borrowed from real-ESRGAN - Input image: I; Blurry image: B. - 1. K = I + weight * (I - B) - 2. Mask = 1 if abs(I - B) > threshold, else: 0 - 3. Blur mask: - 4. Out = Mask * K + (1 - Mask) * I - Args: - img (Numpy array): Input image, HWC, BGR; float32, [0, 1]. - weight (float): Sharp weight. Default: 1. - radius (float): Kernel size of Gaussian blur. Default: 50. 
- threshold (int): - """ - if radius % 2 == 0: - radius += 1 - blur = cv2.GaussianBlur(img, (radius, radius), 0) - residual = img - blur - mask = np.abs(residual) * 255 > threshold - mask = mask.astype('float32') - soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0) - - K = img + weight * residual - K = np.clip(K, 0, 1) - return soft_mask * K + (1 - soft_mask) * img - - -def add_blur(img, sf=4): - wd2 = 4.0 + sf - wd = 2.0 + 0.2 * sf - if random.random() < 0.5: - l1 = wd2 * random.random() - l2 = wd2 * random.random() - k = anisotropic_Gaussian(ksize=2 * random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2) - else: - k = fspecial('gaussian', 2 * random.randint(2, 11) + 3, wd * random.random()) - img = ndimage.filters.convolve(img, np.expand_dims(k, axis=2), mode='mirror') - - return img - - -def add_resize(img, sf=4): - rnum = np.random.rand() - if rnum > 0.8: # up - sf1 = random.uniform(1, 2) - elif rnum < 0.7: # down - sf1 = random.uniform(0.5 / sf, 1) - else: - sf1 = 1.0 - img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - return img - - -# def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): -# noise_level = random.randint(noise_level1, noise_level2) -# rnum = np.random.rand() -# if rnum > 0.6: # add color Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) -# elif rnum < 0.4: # add grayscale Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) -# else: # add noise -# L = noise_level2 / 255. -# D = np.diag(np.random.rand(3)) -# U = orth(np.random.rand(3, 3)) -# conv = np.dot(np.dot(np.transpose(U), D), U) -# img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) -# img = np.clip(img, 0.0, 1.0) -# return img - -def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - rnum = np.random.rand() - if rnum > 0.6: # add color Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: # add grayscale Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: # add noise - L = noise_level2 / 255. - D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_speckle_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - img = np.clip(img, 0.0, 1.0) - rnum = random.random() - if rnum > 0.6: - img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: - img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: - L = noise_level2 / 255. - D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_Poisson_noise(img): - img = np.clip((img * 255.0).round(), 0, 255) / 255. 
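    # Quantize to 8-bit levels first, then apply shot noise: vals (10^2 .. 10^4) acts
    # as the effective photon count per unit intensity. With probability 0.5 the
    # Poisson noise is computed on a grayscale copy only and added back to all channels.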
- vals = 10 ** (2 * random.random() + 2.0) # [2, 4] - if random.random() < 0.5: - img = np.random.poisson(img * vals).astype(np.float32) / vals - else: - img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114]) - img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255. - noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray - img += noise_gray[:, :, np.newaxis] - img = np.clip(img, 0.0, 1.0) - return img - - -def add_JPEG_noise(img): - quality_factor = random.randint(30, 95) - img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR) - result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor]) - img = cv2.imdecode(encimg, 1) - img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB) - return img - - -def random_crop(lq, hq, sf=4, lq_patchsize=64): - h, w = lq.shape[:2] - rnd_h = random.randint(0, h - lq_patchsize) - rnd_w = random.randint(0, w - lq_patchsize) - lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :] - - rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf) - hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :] - return lq, hq - - -def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf) - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = img.shape[:2] - img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop - h, w = img.shape[:2] - - if h < lq_patchsize * sf or w < lq_patchsize * sf: - raise ValueError(f'img size ({h1}X{w1}) is too small!') - - hq = img.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - img = util.imresize_np(img, 1 / 2, True) - img = np.clip(img, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - img = add_blur(img, sf=sf) - - elif i == 1: - img = add_blur(img, sf=sf) - - elif i == 2: - a, b = img.shape[1], img.shape[0] - # downsample2 - if random.random() < 0.75: - sf1 = random.uniform(1, 2 * sf) - img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - img = ndimage.filters.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror') - img = img[0::sf, 0::sf, ...] 
# nearest downsampling - img = np.clip(img, 0.0, 1.0) - - elif i == 3: - # downsample3 - img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - img = add_JPEG_noise(img) - - elif i == 6: - # add processed camera sensor noise - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - img = add_JPEG_noise(img) - - # random crop - img, hq = random_crop(img, hq, sf_ori, lq_patchsize) - - return img, hq - - -# todo no isp_model? -def degradation_bsrgan_variant(image, sf=4, isp_model=None): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - image = util.uint2single(image) - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = image.shape[:2] - image = image.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop - h, w = image.shape[:2] - - hq = image.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - image = util.imresize_np(image, 1 / 2, True) - image = np.clip(image, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - image = add_blur(image, sf=sf) - - elif i == 1: - image = add_blur(image, sf=sf) - - elif i == 2: - a, b = image.shape[1], image.shape[0] - # downsample2 - if random.random() < 0.75: - sf1 = random.uniform(1, 2 * sf) - image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - image = ndimage.filters.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror') - image = image[0::sf, 0::sf, ...] 
# nearest downsampling - image = np.clip(image, 0.0, 1.0) - - elif i == 3: - # downsample3 - image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - image = np.clip(image, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - image = add_Gaussian_noise(image, noise_level1=2, noise_level2=25) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - image = add_JPEG_noise(image) - - # elif i == 6: - # # add processed camera sensor noise - # if random.random() < isp_prob and isp_model is not None: - # with torch.no_grad(): - # img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - image = add_JPEG_noise(image) - image = util.single2uint(image) - example = {"image":image} - return example - - -# TODO incase there is a pickle error one needs to replace a += x with a = a + x in add_speckle_noise etc... -def degradation_bsrgan_plus(img, sf=4, shuffle_prob=0.5, use_sharp=True, lq_patchsize=64, isp_model=None): - """ - This is an extended degradation model by combining - the degradation models of BSRGAN and Real-ESRGAN - ---------- - img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf) - sf: scale factor - use_shuffle: the degradation shuffle - use_sharp: sharpening the img - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - - h1, w1 = img.shape[:2] - img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop - h, w = img.shape[:2] - - if h < lq_patchsize * sf or w < lq_patchsize * sf: - raise ValueError(f'img size ({h1}X{w1}) is too small!') - - if use_sharp: - img = add_sharpening(img) - hq = img.copy() - - if random.random() < shuffle_prob: - shuffle_order = random.sample(range(13), 13) - else: - shuffle_order = list(range(13)) - # local shuffle for noise, JPEG is always the last one - shuffle_order[2:6] = random.sample(shuffle_order[2:6], len(range(2, 6))) - shuffle_order[9:13] = random.sample(shuffle_order[9:13], len(range(9, 13))) - - poisson_prob, speckle_prob, isp_prob = 0.1, 0.1, 0.1 - - for i in shuffle_order: - if i == 0: - img = add_blur(img, sf=sf) - elif i == 1: - img = add_resize(img, sf=sf) - elif i == 2: - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25) - elif i == 3: - if random.random() < poisson_prob: - img = add_Poisson_noise(img) - elif i == 4: - if random.random() < speckle_prob: - img = add_speckle_noise(img) - elif i == 5: - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - elif i == 6: - img = add_JPEG_noise(img) - elif i == 7: - img = add_blur(img, sf=sf) - elif i == 8: - img = add_resize(img, sf=sf) - elif i == 9: - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25) - elif i == 10: - if random.random() < poisson_prob: - img = add_Poisson_noise(img) - elif i == 11: - if random.random() < speckle_prob: - img = add_speckle_noise(img) - elif i == 12: - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - else: - print('check the shuffle!') - - # resize to desired size - img = cv2.resize(img, (int(1 / sf * hq.shape[1]), int(1 / sf * hq.shape[0])), - interpolation=random.choice([1, 2, 3])) - - # add final JPEG compression noise - img = add_JPEG_noise(img) - - # random crop - img, hq = random_crop(img, hq, sf, lq_patchsize) - 
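    # img has now passed through up to 13 shuffled stages (blur, resize,
    # Gaussian / Poisson / speckle noise, optional camera ISP, JPEG), been resized to
    # 1/sf of the high-quality image, and cropped into an aligned lq / hq patch pair.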
- return img, hq - - -if __name__ == '__main__': - print("hey") - img = util.imread_uint('utils/test.png', 3) - print(img) - img = util.uint2single(img) - print(img) - img = img[:448, :448] - h = img.shape[0] // 4 - print("resizing to", h) - sf = 4 - deg_fn = partial(degradation_bsrgan_variant, sf=sf) - for i in range(20): - print(i) - img_lq = deg_fn(img) - print(img_lq) - img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img)["image"] - print(img_lq.shape) - print("bicubic", img_lq_bicubic.shape) - print(img_hq.shape) - lq_nearest = cv2.resize(util.single2uint(img_lq), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img_hq)], axis=1) - util.imsave(img_concat, str(i) + '.png') - - diff --git a/spaces/AIZero2HeroBootcamp/3DHuman/README.md b/spaces/AIZero2HeroBootcamp/3DHuman/README.md deleted file mode 100644 index 9e41b98ad38e307f6903785b03ed1c29c3e406d0..0000000000000000000000000000000000000000 --- a/spaces/AIZero2HeroBootcamp/3DHuman/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 3DHuman -emoji: 🐠 -colorFrom: purple -colorTo: purple -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet50_cifar_mixup.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet50_cifar_mixup.py deleted file mode 100644 index f165c2466bd8a67cbfadd5f3a388d4fe03e6d446..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet50_cifar_mixup.py +++ /dev/null @@ -1,17 +0,0 @@ -# model settings -model = dict( - type='ImageClassifier', - backbone=dict( - type='ResNet_CIFAR', - depth=50, - num_stages=4, - out_indices=(3, ), - style='pytorch'), - neck=dict(type='GlobalAveragePooling'), - head=dict( - type='MultiLabelLinearClsHead', - num_classes=10, - in_channels=2048, - loss=dict(type='CrossEntropyLoss', loss_weight=1.0, use_soft=True)), - train_cfg=dict(augments=dict(type='Mixup', alpha=1.)), -) diff --git a/spaces/Aanisha/Image_to_story/app.py b/spaces/Aanisha/Image_to_story/app.py deleted file mode 100644 index 2f07e9dc37940cbb0fb94e0797c33c57e87a2ea7..0000000000000000000000000000000000000000 --- a/spaces/Aanisha/Image_to_story/app.py +++ /dev/null @@ -1,70 +0,0 @@ -from PIL import Image -from transformers import VisionEncoderDecoderModel,ViTFeatureExtractor,PreTrainedTokenizerFast,GPT2Tokenizer,AutoModelForCausalLM,AutoTokenizer -import requests -import gradio as gr -import torch -from transformers import pipeline -import re - - - -description = "Just upload an image, and generate a short story for the image.\n PS: GPT-2 is not perfect but it's fun to play with.May take a minute for the output to generate. Enjoyy!!!" 
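# Pipeline: the ViT encoder-decoder captioner (gagan3012/ViTGPT2_vizwiz) describes the
# uploaded image with beam search, and the caption then seeds the GPT-2 genre story
# generator (pranavpsv/gpt2-genre-story-generator) to produce the final story text.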
-title = "Story generator from images using ViT and GPT2" - - -model = VisionEncoderDecoderModel.from_pretrained("gagan3012/ViTGPT2_vizwiz").to('cpu') -vit_feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k") -tokenizer = PreTrainedTokenizerFast.from_pretrained("distilgpt2") -story_gpt = AutoModelForCausalLM.from_pretrained("pranavpsv/gpt2-genre-story-generator") -st_tokenizer = AutoTokenizer.from_pretrained("pranavpsv/gpt2-genre-story-generator") - -inputs = [ - gr.inputs.Image(type="pil", label="Original Image") -] - -outputs = [ - gr.outputs.Textbox(label = 'Story') -] - -examples = [['img_1.jpg'],['img_2.jpg']] - -def get_output_senten(img): - pixel_values = vit_feature_extractor(images=img, return_tensors="pt").pixel_values.to('cpu') - encoder_outputs = model.generate(pixel_values.to('cpu'),num_beams=7) - generated_sentences = tokenizer.batch_decode(encoder_outputs) - senten = generated_sentences[0][generated_sentences[0][2:].index('>')+1:] - - senten = senten.replace('>','') - senten = senten.replace('|','') - res = senten.split('.')[0][0:75] - res = res[0:res.rindex(' ')] - - print(res) - - tokenized_text=st_tokenizer.encode(res) - input_ids=torch.tensor(tokenized_text).view(-1,len(tokenized_text)) - outputs=story_gpt.generate(input_ids,max_length=100,num_beams=5,no_repeat_ngram_size=2,early_stopping=True) - - generated_story = st_tokenizer.batch_decode(outputs) - - print(len(generated_story)) - ans = generated_story[0] - - - - ans = str(ans) - ind = ans.rindex('.') - ans = ans[0:ind+1] - return ans - - - -gr.Interface( - get_output_senten, - inputs, - outputs, - examples = examples, - title=title, - description=description, - theme="huggingface", -).launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/util/constants.py b/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/util/constants.py deleted file mode 100644 index 2915b2846e5f1b1678991e81f6572776ace8a4c9..0000000000000000000000000000000000000000 --- a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/util/constants.py +++ /dev/null @@ -1,34 +0,0 @@ -""" -Constants that are used by the model -""" -HARAQAT = ["ْ", "ّ", "ٌ", "ٍ", "ِ", "ً", "َ", "ُ"] -ARAB_CHARS = "ىعظحرسيشضق ثلصطكآماإهزءأفؤغجئدةخوبذتن" -PUNCTUATIONS = [".", "،", ":", "؛", "-", "؟"] -VALID_ARABIC = HARAQAT + list(ARAB_CHARS) -BASIC_HARAQAT = { - "َ": "Fatha ", - "ً": "Fathatah ", - "ُ": "Damma ", - "ٌ": "Dammatan ", - "ِ": "Kasra ", - "ٍ": "Kasratan ", - "ْ": "Sukun ", - "ّ": "Shaddah ", -} -ALL_POSSIBLE_HARAQAT = { - "": "No Diacritic ", - "َ": "Fatha ", - "ً": "Fathatah ", - "ُ": "Damma ", - "ٌ": "Dammatan ", - "ِ": "Kasra ", - "ٍ": "Kasratan ", - "ْ": "Sukun ", - "ّ": "Shaddah ", - "َّ": "Shaddah + Fatha ", - "ًّ": "Shaddah + Fathatah ", - "ُّ": "Shaddah + Damma ", - "ٌّ": "Shaddah + Dammatan ", - "ِّ": "Shaddah + Kasra ", - "ٍّ": "Shaddah + Kasratan ", -} diff --git a/spaces/AchyuthGamer/OpenGPT/client/css/conversation.css b/spaces/AchyuthGamer/OpenGPT/client/css/conversation.css deleted file mode 100644 index d20f178c45e8ccbfc9539f99914b25fc572045bd..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/client/css/conversation.css +++ /dev/null @@ -1,158 +0,0 @@ -.conversation { - width: 60%; - margin: 0px 16px; - display: flex; - flex-direction: column; -} - -.conversation #messages { - width: 100%; - display: flex; - flex-direction: column; - overflow: auto; - overflow-wrap: break-word; - padding-bottom: 8px; -} - -.conversation 
.user-input { - max-height: 180px; - margin: 16px 0px; -} - -.conversation .user-input input { - font-size: 1rem; - background: none; - border: none; - outline: none; - color: var(--colour-3); -} - -.conversation .user-input input::placeholder { - color: var(--user-input); -} - -.conversation-title { - color: var(--colour-3); - font-size: 14px; -} - -.conversation .user-input textarea { - font-size: 1rem; - width: 100%; - height: 100%; - padding: 12px; - background: none; - border: none; - outline: none; - color: var(--colour-3); - resize: vertical; - max-height: 150px; - min-height: 80px; -} - -.box { - backdrop-filter: blur(20px); - -webkit-backdrop-filter: blur(20px); - background-color: var(--blur-bg); - height: 100%; - width: 100%; - border-radius: var(--border-radius-1); - border: 1px solid var(--blur-border); -} - -.box.input-box { - position: relative; - align-items: center; - padding: 8px; - cursor: pointer; -} - -#send-button { - position: absolute; - bottom: 25%; - right: 10px; - z-index: 1; - padding: 16px; -} - -#cursor { - line-height: 17px; - margin-left: 3px; - -webkit-animation: blink 0.8s infinite; - animation: blink 0.8s infinite; - width: 7px; - height: 15px; -} - -@keyframes blink { - 0% { - background: #ffffff00; - } - - 50% { - background: white; - } - - 100% { - background: #ffffff00; - } -} - -@-webkit-keyframes blink { - 0% { - background: #ffffff00; - } - - 50% { - background: white; - } - - 100% { - background: #ffffff00; - } -} - -/* scrollbar */ -.conversation #messages::-webkit-scrollbar { - width: 4px; - padding: 8px 0px; -} - -.conversation #messages::-webkit-scrollbar-track { - background-color: #ffffff00; -} - -.conversation #messages::-webkit-scrollbar-thumb { - background-color: #555555; - border-radius: 10px; -} - -@media screen and (max-width: 990px) { - .conversation { - width: 100%; - height: 90%; - } -} - -@media screen and (max-height: 720px) { - .conversation.box { - height: 70%; - } - - .conversation .user-input textarea { - font-size: 0.875rem; - } -} - -@media screen and (max-width: 360px) { - .box { - border-radius: 0; - } - .conversation { - margin: 0; - margin-top: 48px; - } - .conversation .user-input { - margin: 2px 0 8px 0; - } -} diff --git a/spaces/AgentVerse/agentVerse/agentverse/initialization.py b/spaces/AgentVerse/agentVerse/agentverse/initialization.py deleted file mode 100644 index 13ef54e77f0504657ef4d84508f921d3c5c3554c..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/agentverse/initialization.py +++ /dev/null @@ -1,120 +0,0 @@ -from __future__ import annotations - -import os -from typing import Dict, List, TYPE_CHECKING - -import yaml - -try: - from bmtools.agent.singletool import import_all_apis, load_single_tools -except: - print( - "BMTools is not installed, tools in the simulation environment cannot be used. To install BMTools, please follow the instruction in the README.md file." 
- ) - -from agentverse.llms import llm_registry - -from agentverse.agents import agent_registry -from agentverse.environments import BaseEnvironment, env_registry -from agentverse.memory import memory_registry -from agentverse.memory_manipulator import memory_manipulator_registry - -from agentverse.output_parser import output_parser_registry - -if TYPE_CHECKING: - from agentverse.agents import BaseAgent - - -def load_llm(llm_config: Dict): - llm_type = llm_config.pop("llm_type", "text-davinci-003") - - return llm_registry.build(llm_type, **llm_config) - - -def load_memory(memory_config: Dict): - memory_type = memory_config.pop("memory_type", "chat_history") - return memory_registry.build(memory_type, **memory_config) - - -def load_memory_manipulator(memory_manipulator_config: Dict): - memory_manipulator_type = memory_manipulator_config.pop( - "memory_manipulator_type", "basic" - ) - return memory_manipulator_registry.build( - memory_manipulator_type, **memory_manipulator_config - ) - - -def load_tools(tool_config: List[Dict]): - if len(tool_config) == 0: - return [] - all_tools_list = [] - for tool in tool_config: - _, config = load_single_tools(tool["tool_name"], tool["tool_url"]) - all_tools_list += import_all_apis(config) - return all_tools_list - - -def load_environment(env_config: Dict) -> BaseEnvironment: - env_type = env_config.pop("env_type", "basic") - return env_registry.build(env_type, **env_config) - - -def load_agent(agent_config: Dict) -> BaseAgent: - agent_type = agent_config.pop("agent_type", "conversation") - agent = agent_registry.build(agent_type, **agent_config) - return agent - - -def prepare_task_config(task, tasks_dir): - """Read the yaml config of the given task in `tasks` directory.""" - all_task_dir = tasks_dir - task_path = os.path.join(all_task_dir, task) - config_path = os.path.join(task_path, "config.yaml") - if not os.path.exists(task_path): - all_tasks = [] - for task in os.listdir(all_task_dir): - if ( - os.path.isdir(os.path.join(all_task_dir, task)) - and task != "__pycache__" - ): - all_tasks.append(task) - for subtask in os.listdir(os.path.join(all_task_dir, task)): - if ( - os.path.isdir(os.path.join(all_task_dir, task, subtask)) - and subtask != "__pycache__" - ): - all_tasks.append(f"{task}/{subtask}") - raise ValueError(f"Task {task} not found. 
Available tasks: {all_tasks}") - if not os.path.exists(config_path): - raise ValueError( - "You should include the config.yaml file in the task directory" - ) - task_config = yaml.safe_load(open(config_path)) - - for i, agent_configs in enumerate(task_config["agents"]): - agent_configs["memory"] = load_memory(agent_configs.get("memory", {})) - if agent_configs.get("tool_memory", None) is not None: - agent_configs["tool_memory"] = load_memory(agent_configs["tool_memory"]) - llm = load_llm(agent_configs.get("llm", "text-davinci-003")) - agent_configs["llm"] = llm - - memory_manipulator = load_memory_manipulator( - agent_configs.get("memory_manipulator", {}) - ) - agent_configs["memory_manipulator"] = memory_manipulator - - agent_configs["tools"] = load_tools(agent_configs.get("tools", [])) - - # Build the output parser - output_parser_config = agent_configs.get("output_parser", {"type": "dummy"}) - if output_parser_config.get("type", None) == "role_assigner": - output_parser_config["cnt_critic_agents"] = task_config.get( - "cnt_critic_agents", 0 - ) - output_parser_name = output_parser_config.pop("type", task) - agent_configs["output_parser"] = output_parser_registry.build( - output_parser_name, **output_parser_config - ) - - return task_config diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/runcommands.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/runcommands.d.ts deleted file mode 100644 index c500a01499609516f3b8a0cead1dae4372ee564b..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/runcommands.d.ts +++ /dev/null @@ -1,2 +0,0 @@ -import RunCommands from './logic/runcommands/RunCommands'; -export default RunCommands; \ No newline at end of file diff --git a/spaces/Aki004/herta-so-vits/inference/infer_tool.py b/spaces/Aki004/herta-so-vits/inference/infer_tool.py deleted file mode 100644 index 91561cfbfc61f3bf7334b10e8e7242574c5ed061..0000000000000000000000000000000000000000 --- a/spaces/Aki004/herta-so-vits/inference/infer_tool.py +++ /dev/null @@ -1,354 +0,0 @@ -import hashlib -import io -import json -import logging -import os -import time -from pathlib import Path -from inference import slicer -import gc - -import librosa -import numpy as np -# import onnxruntime -import parselmouth -import soundfile -import torch -import torchaudio - -import cluster -from hubert import hubert_model -import utils -from models import SynthesizerTrn - -logging.getLogger('matplotlib').setLevel(logging.WARNING) - - -def read_temp(file_name): - if not os.path.exists(file_name): - with open(file_name, "w") as f: - f.write(json.dumps({"info": "temp_dict"})) - return {} - else: - try: - with open(file_name, "r") as f: - data = f.read() - data_dict = json.loads(data) - if os.path.getsize(file_name) > 50 * 1024 * 1024: - f_name = file_name.replace("\\", "/").split("/")[-1] - print(f"clean {f_name}") - for wav_hash in list(data_dict.keys()): - if int(time.time()) - int(data_dict[wav_hash]["time"]) > 14 * 24 * 3600: - del data_dict[wav_hash] - except Exception as e: - print(e) - print(f"{file_name} error,auto rebuild file") - data_dict = {"info": "temp_dict"} - return data_dict - - -def write_temp(file_name, data): - with open(file_name, "w") as f: - f.write(json.dumps(data)) - - -def timeit(func): - def run(*args, **kwargs): - t = time.time() - res = func(*args, **kwargs) - print('executing \'%s\' costed %.3fs' % (func.__name__, time.time() - t)) - return res - - return run - - -def 
format_wav(audio_path): - if Path(audio_path).suffix == '.wav': - return - raw_audio, raw_sample_rate = librosa.load(audio_path, mono=True, sr=None) - soundfile.write(Path(audio_path).with_suffix(".wav"), raw_audio, raw_sample_rate) - - -def get_end_file(dir_path, end): - file_lists = [] - for root, dirs, files in os.walk(dir_path): - files = [f for f in files if f[0] != '.'] - dirs[:] = [d for d in dirs if d[0] != '.'] - for f_file in files: - if f_file.endswith(end): - file_lists.append(os.path.join(root, f_file).replace("\\", "/")) - return file_lists - - -def get_md5(content): - return hashlib.new("md5", content).hexdigest() - -def fill_a_to_b(a, b): - if len(a) < len(b): - for _ in range(0, len(b) - len(a)): - a.append(a[0]) - -def mkdir(paths: list): - for path in paths: - if not os.path.exists(path): - os.mkdir(path) - -def pad_array(arr, target_length): - current_length = arr.shape[0] - if current_length >= target_length: - return arr - else: - pad_width = target_length - current_length - pad_left = pad_width // 2 - pad_right = pad_width - pad_left - padded_arr = np.pad(arr, (pad_left, pad_right), 'constant', constant_values=(0, 0)) - return padded_arr - -def split_list_by_n(list_collection, n, pre=0): - for i in range(0, len(list_collection), n): - yield list_collection[i-pre if i-pre>=0 else i: i + n] - - -class F0FilterException(Exception): - pass - -class Svc(object): - def __init__(self, net_g_path, config_path, - device=None, - cluster_model_path="logs/44k/kmeans_10000.pt", - nsf_hifigan_enhance = False - ): - self.net_g_path = net_g_path - if device is None: - self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu") - else: - self.dev = torch.device(device) - self.net_g_ms = None - self.hps_ms = utils.get_hparams_from_file(config_path) - self.target_sample = self.hps_ms.data.sampling_rate - self.hop_size = self.hps_ms.data.hop_length - self.spk2id = self.hps_ms.spk - self.nsf_hifigan_enhance = nsf_hifigan_enhance - # load hubert - self.hubert_model = utils.get_hubert_model().to(self.dev) - self.load_model() - if os.path.exists(cluster_model_path): - self.cluster_model = cluster.get_cluster_model(cluster_model_path) - if self.nsf_hifigan_enhance: - from modules.enhancer import Enhancer - self.enhancer = Enhancer('nsf-hifigan', 'pretrain/nsf_hifigan/model',device=self.dev) - - def load_model(self): - # get model configuration - self.net_g_ms = SynthesizerTrn( - self.hps_ms.data.filter_length // 2 + 1, - self.hps_ms.train.segment_size // self.hps_ms.data.hop_length, - **self.hps_ms.model) - _ = utils.load_checkpoint(self.net_g_path, self.net_g_ms, None) - if "half" in self.net_g_path and torch.cuda.is_available(): - _ = self.net_g_ms.half().eval().to(self.dev) - else: - _ = self.net_g_ms.eval().to(self.dev) - - - - def get_unit_f0(self, in_path, tran, cluster_infer_ratio, speaker, f0_filter ,F0_mean_pooling,cr_threshold=0.05): - - wav, sr = librosa.load(in_path, sr=self.target_sample) - - if F0_mean_pooling == True: - f0, uv = utils.compute_f0_uv_torchcrepe(torch.FloatTensor(wav), sampling_rate=self.target_sample, hop_length=self.hop_size,device=self.dev,cr_threshold = cr_threshold) - if f0_filter and sum(f0) == 0: - raise F0FilterException("No voice detected") - f0 = torch.FloatTensor(list(f0)) - uv = torch.FloatTensor(list(uv)) - if F0_mean_pooling == False: - f0 = utils.compute_f0_parselmouth(wav, sampling_rate=self.target_sample, hop_length=self.hop_size) - if f0_filter and sum(f0) == 0: - raise F0FilterException("No voice detected") - f0, uv = 
utils.interpolate_f0(f0) - f0 = torch.FloatTensor(f0) - uv = torch.FloatTensor(uv) - - f0 = f0 * 2 ** (tran / 12) - f0 = f0.unsqueeze(0).to(self.dev) - uv = uv.unsqueeze(0).to(self.dev) - - wav16k = librosa.resample(wav, orig_sr=self.target_sample, target_sr=16000) - wav16k = torch.from_numpy(wav16k).to(self.dev) - c = utils.get_hubert_content(self.hubert_model, wav_16k_tensor=wav16k) - c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[1]) - - if cluster_infer_ratio !=0: - cluster_c = cluster.get_cluster_center_result(self.cluster_model, c.cpu().numpy().T, speaker).T - cluster_c = torch.FloatTensor(cluster_c).to(self.dev) - c = cluster_infer_ratio * cluster_c + (1 - cluster_infer_ratio) * c - - c = c.unsqueeze(0) - return c, f0, uv - - def infer(self, speaker, tran, raw_path, - cluster_infer_ratio=0, - auto_predict_f0=False, - noice_scale=0.4, - f0_filter=False, - F0_mean_pooling=False, - enhancer_adaptive_key = 0, - cr_threshold = 0.05 - ): - - speaker_id = self.spk2id.__dict__.get(speaker) - if not speaker_id and type(speaker) is int: - if len(self.spk2id.__dict__) >= speaker: - speaker_id = speaker - sid = torch.LongTensor([int(speaker_id)]).to(self.dev).unsqueeze(0) - c, f0, uv = self.get_unit_f0(raw_path, tran, cluster_infer_ratio, speaker, f0_filter,F0_mean_pooling,cr_threshold=cr_threshold) - if "half" in self.net_g_path and torch.cuda.is_available(): - c = c.half() - with torch.no_grad(): - start = time.time() - audio = self.net_g_ms.infer(c, f0=f0, g=sid, uv=uv, predict_f0=auto_predict_f0, noice_scale=noice_scale)[0,0].data.float() - if self.nsf_hifigan_enhance: - audio, _ = self.enhancer.enhance( - audio[None,:], - self.target_sample, - f0[:,:,None], - self.hps_ms.data.hop_length, - adaptive_key = enhancer_adaptive_key) - use_time = time.time() - start - print("vits use time:{}".format(use_time)) - return audio, audio.shape[-1] - - def clear_empty(self): - # clean up vram - torch.cuda.empty_cache() - - def unload_model(self): - # unload model - self.net_g_ms = self.net_g_ms.to("cpu") - del self.net_g_ms - if hasattr(self,"enhancer"): - self.enhancer.enhancer = self.enhancer.enhancer.to("cpu") - del self.enhancer.enhancer - del self.enhancer - gc.collect() - - def slice_inference(self, - raw_audio_path, - spk, - tran, - slice_db, - cluster_infer_ratio, - auto_predict_f0, - noice_scale, - pad_seconds=0.5, - clip_seconds=0, - lg_num=0, - lgr_num =0.75, - F0_mean_pooling = False, - enhancer_adaptive_key = 0, - cr_threshold = 0.05 - ): - wav_path = raw_audio_path - chunks = slicer.cut(wav_path, db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks) - per_size = int(clip_seconds*audio_sr) - lg_size = int(lg_num*audio_sr) - lg_size_r = int(lg_size*lgr_num) - lg_size_c_l = (lg_size-lg_size_r)//2 - lg_size_c_r = lg_size-lg_size_r-lg_size_c_l - lg = np.linspace(0,1,lg_size_r) if lg_size!=0 else 0 - - audio = [] - for (slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - # padd - length = int(np.ceil(len(data) / audio_sr * self.target_sample)) - if slice_tag: - print('jump empty segment') - _audio = np.zeros(length) - audio.extend(list(pad_array(_audio, length))) - continue - if per_size != 0: - datas = split_list_by_n(data, per_size,lg_size) - else: - datas = [data] - for k,dat in enumerate(datas): - per_length = int(np.ceil(len(dat) / audio_sr * self.target_sample)) if clip_seconds!=0 else length - if clip_seconds!=0: print(f'###=====segment clip start, {round(len(dat) / audio_sr, 3)}s======') - # padd - 
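            # Zero-pad each clip by pad_seconds on both sides before inference and trim the
            # same amount afterwards to avoid clicks at chunk borders; consecutive chunks are
            # then blended with a linear cross-fade over the lg_size_r overlap samples.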
pad_len = int(audio_sr * pad_seconds) - dat = np.concatenate([np.zeros([pad_len]), dat, np.zeros([pad_len])]) - raw_path = io.BytesIO() - soundfile.write(raw_path, dat, audio_sr, format="wav") - raw_path.seek(0) - out_audio, out_sr = self.infer(spk, tran, raw_path, - cluster_infer_ratio=cluster_infer_ratio, - auto_predict_f0=auto_predict_f0, - noice_scale=noice_scale, - F0_mean_pooling = F0_mean_pooling, - enhancer_adaptive_key = enhancer_adaptive_key, - cr_threshold = cr_threshold - ) - _audio = out_audio.cpu().numpy() - pad_len = int(self.target_sample * pad_seconds) - _audio = _audio[pad_len:-pad_len] - _audio = pad_array(_audio, per_length) - if lg_size!=0 and k!=0: - lg1 = audio[-(lg_size_r+lg_size_c_r):-lg_size_c_r] if lgr_num != 1 else audio[-lg_size:] - lg2 = _audio[lg_size_c_l:lg_size_c_l+lg_size_r] if lgr_num != 1 else _audio[0:lg_size] - lg_pre = lg1*(1-lg)+lg2*lg - audio = audio[0:-(lg_size_r+lg_size_c_r)] if lgr_num != 1 else audio[0:-lg_size] - audio.extend(lg_pre) - _audio = _audio[lg_size_c_l+lg_size_r:] if lgr_num != 1 else _audio[lg_size:] - audio.extend(list(_audio)) - return np.array(audio) - -class RealTimeVC: - def __init__(self): - self.last_chunk = None - self.last_o = None - self.chunk_len = 16000 # chunk length - self.pre_len = 3840 # cross fade length, multiples of 640 - - # Input and output are 1-dimensional numpy waveform arrays - - def process(self, svc_model, speaker_id, f_pitch_change, input_wav_path, - cluster_infer_ratio=0, - auto_predict_f0=False, - noice_scale=0.4, - f0_filter=False): - - import maad - audio, sr = torchaudio.load(input_wav_path) - audio = audio.cpu().numpy()[0] - temp_wav = io.BytesIO() - if self.last_chunk is None: - input_wav_path.seek(0) - - audio, sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path, - cluster_infer_ratio=cluster_infer_ratio, - auto_predict_f0=auto_predict_f0, - noice_scale=noice_scale, - f0_filter=f0_filter) - - audio = audio.cpu().numpy() - self.last_chunk = audio[-self.pre_len:] - self.last_o = audio - return audio[-self.chunk_len:] - else: - audio = np.concatenate([self.last_chunk, audio]) - soundfile.write(temp_wav, audio, sr, format="wav") - temp_wav.seek(0) - - audio, sr = svc_model.infer(speaker_id, f_pitch_change, temp_wav, - cluster_infer_ratio=cluster_infer_ratio, - auto_predict_f0=auto_predict_f0, - noice_scale=noice_scale, - f0_filter=f0_filter) - - audio = audio.cpu().numpy() - ret = maad.util.crossfade(self.last_o, audio, self.pre_len) - self.last_chunk = audio[-self.pre_len:] - self.last_o = audio - return ret[self.chunk_len:2 * self.chunk_len] diff --git a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/modules.py b/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/modules.py deleted file mode 100644 index 9c7fd9cd6eb8b7e0ec0e08957e970744a374a924..0000000000000000000000000000000000000000 --- a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/modules.py +++ /dev/null @@ -1,390 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = 
nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = 
torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if 
x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
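        # h packs the raw spline parameters per half-channel and time step: the first
        # num_bins entries become bin widths, the next num_bins bin heights, and the
        # remaining num_bins - 1 the knot derivatives of the piecewise rational-quadratic
        # transform applied to x1 below.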
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/AlhitawiMohammed22/E2E_OCR/README.md b/spaces/AlhitawiMohammed22/E2E_OCR/README.md deleted file mode 100644 index 1724212764bc20f27c84f885eabb15b2ea0148b2..0000000000000000000000000000000000000000 --- a/spaces/AlhitawiMohammed22/E2E_OCR/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: E2E OCR -emoji: 📈 -colorFrom: green -colorTo: pink -sdk: gradio -sdk_version: 3.43.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Amon1/ChatGPTForAcadamic/functional_crazy.py b/spaces/Amon1/ChatGPTForAcadamic/functional_crazy.py deleted file mode 100644 index 9c83b4104a395e35471895faf09edb15c0ea65b4..0000000000000000000000000000000000000000 --- a/spaces/Amon1/ChatGPTForAcadamic/functional_crazy.py +++ /dev/null @@ -1,108 +0,0 @@ -from toolbox import HotReload # HotReload 的意思是热更新,修改函数插件后,不需要重启程序,代码直接生效 - -def get_crazy_functionals(): - ###################### 第一组插件 ########################### - # [第一组插件]: 最早期编写的项目插件和一些demo - from crazy_functions.读文章写摘要 import 读文章写摘要 - from crazy_functions.生成函数注释 import 批量生成函数注释 - from crazy_functions.解析项目源代码 import 解析项目本身 - from crazy_functions.解析项目源代码 import 解析一个Python项目 - from crazy_functions.解析项目源代码 import 解析一个C项目的头文件 - from crazy_functions.解析项目源代码 import 解析一个C项目 - from crazy_functions.解析项目源代码 import 解析一个Golang项目 - from crazy_functions.解析项目源代码 import 解析一个Java项目 - from crazy_functions.解析项目源代码 import 解析一个Rect项目 - from crazy_functions.高级功能函数模板 import 高阶功能模板函数 - from crazy_functions.代码重写为全英文_多线程 import 全项目切换英文 - - function_plugins = { - "请解析并解构此项目本身(源码自译解)": { - "AsButton": False, # 加入下拉菜单中 - "Function": 解析项目本身 - }, - "解析整个Py项目": { - "Color": "stop", # 按钮颜色 - "Function": 解析一个Python项目 - }, - "解析整个C++项目头文件": { - "Color": "stop", # 按钮颜色 - "Function": 解析一个C项目的头文件 - }, - "解析整个C++项目(.cpp/.h)": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": 解析一个C项目 - }, - "解析整个Go项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": 解析一个Golang项目 - }, - "解析整个Java项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": 解析一个Java项目 - }, - "解析整个Java项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": 解析一个Rect项目 - }, - "读Tex论文写摘要": { - "Color": "stop", # 按钮颜色 - "Function": 读文章写摘要 - }, - "批量生成函数注释": { - "Color": "stop", # 按钮颜色 - "Function": 批量生成函数注释 - }, - "[多线程demo] 把本项目源代码切换成全英文": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Function": HotReload(全项目切换英文) - }, - "[函数插件模板demo] 历史上的今天": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Function": HotReload(高阶功能模板函数) - }, - } - ###################### 第二组插件 ########################### - # [第二组插件]: 经过充分测试,但功能上距离达到完美状态还差一点点 - from crazy_functions.批量总结PDF文档 import 批量总结PDF文档 - from crazy_functions.批量总结PDF文档pdfminer import 批量总结PDF文档pdfminer - from crazy_functions.总结word文档 import 总结word文档 - function_plugins.update({ - "[仅供开发调试] 批量总结PDF文档": { - "Color": 
"stop", - "Function": HotReload(批量总结PDF文档) # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - }, - "[仅供开发调试] 批量总结PDF文档pdfminer": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(批量总结PDF文档pdfminer) - }, - "[仅供开发调试] 批量总结Word文档": { - "Color": "stop", - "Function": HotReload(总结word文档) - }, - }) - - ###################### 第三组插件 ########################### - # [第三组插件]: 尚未充分测试的函数插件,放在这里 - try: - from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要 - function_plugins.update({ - "一键下载arxiv论文并翻译摘要(先在input输入编号,如1812.10695)": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(下载arxiv论文并翻译摘要) - } - }) - except Exception as err: - print(f'[下载arxiv论文并翻译摘要] 插件导入失败 {str(err)}') - - - - ###################### 第n组插件 ########################### - return function_plugins - - diff --git a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/__init__.py b/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Anandhju-jayan/image-captioning-cloned/model.py b/spaces/Anandhju-jayan/image-captioning-cloned/model.py deleted file mode 100644 index de223f021d5763f463961b8a1dccdeacdb64723a..0000000000000000000000000000000000000000 --- a/spaces/Anandhju-jayan/image-captioning-cloned/model.py +++ /dev/null @@ -1,149 +0,0 @@ -from transformers import AutoProcessor, AutoModelForCausalLM, BlipForConditionalGeneration - -class ImageCaptionModel: - def __init__( - self, - device, - processor, - model, - ) -> None: - """ - Initializes the model for generating captions for images. - - ----- - Parameters: - device: str - The device to use for the model. Must be either "cpu" or "cuda". - processor: transformers.AutoProcessor - The preprocessor to use for the model. - model: transformers.AutoModelForCausalLM or transformers.BlipForConditionalGeneration - The model to use for generating captions. - - ----- - Returns: - None - """ - self.device = device - self.processor = processor - self.model = model - self.model.to(self.device) - - def generate( - self, - image, - num_captions: int = 1, - max_length: int = 50, - temperature: float = 1.0, - top_k: int = 50, - top_p: float = 1.0, - repetition_penalty: float = 1.0, - diversity_penalty: float = 0.0, - ): - """ - Generates captions for the given image. - - ----- - Parameters: - preprocessor: transformers.PreTrainedTokenizerFast - The preprocessor to use for the model. - model: transformers.PreTrainedModel - The model to use for generating captions. - image: PIL.Image - The image to generate captions for. - num_captions: int - The number of captions to generate. - temperature: float - The temperature to use for sampling. The value used to module the next token probabilities that will be used by default in the generate method of the model. Must be strictly positive. Defaults to 1.0. - top_k: int - The number of highest probability vocabulary tokens to keep for top-k-filtering. A large value of top_k will keep more probabilities for each token leading to a better but slower generation. Defaults to 50. - top_p: float - The value that will be used by default in the generate method of the model for top_p. If set to float < 1, only the most probable tokens with probabilities that add up to top_p or higher are kept for generation. - repetition_penalty: float - The parameter for repetition penalty. 1.0 means no penalty. Defaults to 1.0. 
- diversity_penalty: float - The parameter for diversity penalty. 0.0 means no penalty. Defaults to 0.0. - - """ - # Type checking and making sure the values are valid. - assert type(num_captions) == int and num_captions > 0, "num_captions must be a positive integer." - assert type(max_length) == int and max_length > 0, "max_length must be a positive integer." - assert type(temperature) == float and temperature > 0.0, "temperature must be a positive float." - assert type(top_k) == int and top_k > 0, "top_k must be a positive integer." - assert type(top_p) == float and top_p > 0.0, "top_p must be a positive float." - assert type(repetition_penalty) == float and repetition_penalty >= 1.0, "repetition_penalty must be a positive float greater than or equal to 1." - assert type(diversity_penalty) == float and diversity_penalty >= 0.0, "diversity_penalty must be a non negative float." - - pixel_values = self.processor(images=image, return_tensors="pt").pixel_values.to(self.device) # Convert the image to pixel values. - - # Generate captions ids. - if num_captions == 1: - generated_ids = self.model.generate( - pixel_values=pixel_values, - max_length=max_length, - num_return_sequences=1, - temperature=temperature, - top_k=top_k, - top_p=top_p, - ) - else: - generated_ids = self.model.generate( - pixel_values=pixel_values, - max_length=max_length, - num_beams=num_captions, # num_beams must be greater than or equal to num_captions and must be divisible by num_beam_groups. - num_beam_groups=num_captions, # num_beam_groups is set to equal to num_captions so that all the captions are diverse - num_return_sequences=num_captions, # generate multiple captions which are very similar to each other due to the grouping effect of beam search. - temperature=temperature, - top_k=top_k, - top_p=top_p, - repetition_penalty=repetition_penalty, - diversity_penalty=diversity_penalty, - ) - - # Decode the generated ids to get the captions. - generated_caption = self.processor.batch_decode(generated_ids, skip_special_tokens=True) - - return generated_caption - - -class GitBaseCocoModel(ImageCaptionModel): - def __init__(self, device): - """ - A wrapper class for the Git-Base-COCO model. It is a pretrained model for image captioning. - - ----- - Parameters: - device: str - The device to run the model on, either "cpu" or "cuda". - checkpoint: str - The checkpoint to load the model from. - - ----- - Returns: - None - """ - checkpoint = "microsoft/git-base-coco" - processor = AutoProcessor.from_pretrained(checkpoint) - model = AutoModelForCausalLM.from_pretrained(checkpoint) - super().__init__(device, processor, model) - - -class BlipBaseModel(ImageCaptionModel): - def __init__(self, device): - """ - A wrapper class for the Blip-Base model. It is a pretrained model for image captioning. - - ----- - Parameters: - device: str - The device to run the model on, either "cpu" or "cuda". - checkpoint: str - The checkpoint to load the model from. 
- - ----- - Returns: - None - """ - self.checkpoint = "Salesforce/blip-image-captioning-base" - processor = AutoProcessor.from_pretrained(self.checkpoint) - model = BlipForConditionalGeneration.from_pretrained(self.checkpoint) - super().__init__(device, processor, model) \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/ddim.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/ddim.md deleted file mode 100644 index c2bf95c4e566957821399983aac8329de5de66b4..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/ddim.md +++ /dev/null @@ -1,29 +0,0 @@ - - -# DDIM - -[Denoising Diffusion Implicit Models](https://huggingface.co/papers/2010.02502) (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. - -The abstract from the paper is: - -*Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.* - -The original codebase can be found at [ermongroup/ddim](https://github.com/ermongroup/ddim). 
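As a rough usage sketch (the checkpoint id below is only an assumed example and the exact arguments you need may differ), the `DDIMPipeline` documented below can be driven like this:

```python
# Hedged sketch: load a pretrained unconditional diffusion checkpoint and sample with DDIM.
from diffusers import DDIMPipeline

pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")  # assumed example checkpoint id
# eta=0.0 keeps sampling deterministic (pure DDIM); fewer steps trade quality for speed.
image = pipeline(num_inference_steps=50, eta=0.0).images[0]
image.save("ddim_sample.png")
```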
- -## DDIMPipeline -[[autodoc]] DDIMPipeline - - all - - __call__ - -## ImagePipelineOutput -[[autodoc]] pipelines.ImagePipelineOutput \ No newline at end of file diff --git a/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/rpn_r50_fpn.py b/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/rpn_r50_fpn.py deleted file mode 100644 index 22193c1362dc70663034919a7f4397a37682dc85..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/rpn_r50_fpn.py +++ /dev/null @@ -1,59 +0,0 @@ -# model settings - -model = dict( - type='RPN', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=0, - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_pre=2000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0))) diff --git a/spaces/AriaMei/TTSdemo/data_utils.py b/spaces/AriaMei/TTSdemo/data_utils.py deleted file mode 100644 index 9b30eca29110d4f67f5dbad6a9de47ffc3466612..0000000000000000000000000000000000000000 --- a/spaces/AriaMei/TTSdemo/data_utils.py +++ /dev/null @@ -1,261 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data - -import commons -from mel_processing import spectrogram_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import text_to_sequence, cleaned_text_to_sequence - -"""Multi speaker version""" -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. 
- """ - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.text_cleaners = hparams.text_cleaners - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 190) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - for audiopath, sid, text in self.audiopaths_sid_text: - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_sid_text_new.append([audiopath, sid, text]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, text = audiopath_sid_text[0], audiopath_sid_text[1], audiopath_sid_text[2] - text = self.get_text(text) - spec, wav = self.get_audio(audiopath) - sid = self.get_sid(sid) - emo = torch.FloatTensor(np.load(audiopath+".emo.npy")) - return (text, spec, wav, sid, emo) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text): - if self.cleaned_text: - text_norm = cleaned_text_to_sequence(text) - else: - text_norm = text_to_sequence(text, self.text_cleaners) - if self.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - 
max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - emo = torch.FloatTensor(len(batch), 1024) - - text_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - emo.zero_() - - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - emo[i, :] = row[4] - - if self.return_ids: - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, ids_sorted_decreasing - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid,emo - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. - """ - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i+1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % 
len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j*self.batch_size:(j+1)*self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid+1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/Ash58947/Jan/README.md b/spaces/Ash58947/Jan/README.md deleted file mode 100644 index a4f4a1ebbc29e3d0a4330f092c7a2de63ff6a906..0000000000000000000000000000000000000000 --- a/spaces/Ash58947/Jan/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Jan -emoji: 📈 -colorFrom: purple -colorTo: green -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/target_python.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/target_python.py deleted file mode 100644 index 744bd7ef58b4870406fcef8cb3b3667548a0ccea..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/target_python.py +++ /dev/null @@ -1,110 +0,0 @@ -import sys -from typing import List, Optional, Tuple - -from pip._vendor.packaging.tags import Tag - -from pip._internal.utils.compatibility_tags import get_supported, version_info_to_nodot -from pip._internal.utils.misc import normalize_version_info - - -class TargetPython: - - """ - Encapsulates the properties of a Python interpreter one is targeting - for a package install, download, etc. - """ - - __slots__ = [ - "_given_py_version_info", - "abis", - "implementation", - "platforms", - "py_version", - "py_version_info", - "_valid_tags", - ] - - def __init__( - self, - platforms: Optional[List[str]] = None, - py_version_info: Optional[Tuple[int, ...]] = None, - abis: Optional[List[str]] = None, - implementation: Optional[str] = None, - ) -> None: - """ - :param platforms: A list of strings or None. If None, searches for - packages that are supported by the current system. Otherwise, will - find packages that can be built on the platforms passed in. These - packages will only be downloaded for distribution: they will - not be built locally. - :param py_version_info: An optional tuple of ints representing the - Python version information to use (e.g. `sys.version_info[:3]`). - This can have length 1, 2, or 3 when provided. - :param abis: A list of strings or None. This is passed to - compatibility_tags.py's get_supported() function as is. - :param implementation: A string or None. This is passed to - compatibility_tags.py's get_supported() function as is. - """ - # Store the given py_version_info for when we call get_supported(). 
- self._given_py_version_info = py_version_info - - if py_version_info is None: - py_version_info = sys.version_info[:3] - else: - py_version_info = normalize_version_info(py_version_info) - - py_version = ".".join(map(str, py_version_info[:2])) - - self.abis = abis - self.implementation = implementation - self.platforms = platforms - self.py_version = py_version - self.py_version_info = py_version_info - - # This is used to cache the return value of get_tags(). - self._valid_tags: Optional[List[Tag]] = None - - def format_given(self) -> str: - """ - Format the given, non-None attributes for display. - """ - display_version = None - if self._given_py_version_info is not None: - display_version = ".".join( - str(part) for part in self._given_py_version_info - ) - - key_values = [ - ("platforms", self.platforms), - ("version_info", display_version), - ("abis", self.abis), - ("implementation", self.implementation), - ] - return " ".join( - f"{key}={value!r}" for key, value in key_values if value is not None - ) - - def get_tags(self) -> List[Tag]: - """ - Return the supported PEP 425 tags to check wheel candidates against. - - The tags are returned in order of preference (most preferred first). - """ - if self._valid_tags is None: - # Pass versions=None if no py_version_info was given since - # versions=None uses special default logic. - py_version_info = self._given_py_version_info - if py_version_info is None: - version = None - else: - version = version_info_to_nodot(py_version_info) - - tags = get_supported( - version=version, - platforms=self.platforms, - abis=self.abis, - impl=self.implementation, - ) - self._valid_tags = tags - - return self._valid_tags diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/deprecation.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/deprecation.py deleted file mode 100644 index 72bd6f25a554b303d0bf5028145cf3a5c71b3e06..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/deprecation.py +++ /dev/null @@ -1,120 +0,0 @@ -""" -A module that implements tooling to enable easy warnings about deprecations. -""" - -import logging -import warnings -from typing import Any, Optional, TextIO, Type, Union - -from pip._vendor.packaging.version import parse - -from pip import __version__ as current_version # NOTE: tests patch this name. - -DEPRECATION_MSG_PREFIX = "DEPRECATION: " - - -class PipDeprecationWarning(Warning): - pass - - -_original_showwarning: Any = None - - -# Warnings <-> Logging Integration -def _showwarning( - message: Union[Warning, str], - category: Type[Warning], - filename: str, - lineno: int, - file: Optional[TextIO] = None, - line: Optional[str] = None, -) -> None: - if file is not None: - if _original_showwarning is not None: - _original_showwarning(message, category, filename, lineno, file, line) - elif issubclass(category, PipDeprecationWarning): - # We use a specially named logger which will handle all of the - # deprecation messages for pip. 
- logger = logging.getLogger("pip._internal.deprecations") - logger.warning(message) - else: - _original_showwarning(message, category, filename, lineno, file, line) - - -def install_warning_logger() -> None: - # Enable our Deprecation Warnings - warnings.simplefilter("default", PipDeprecationWarning, append=True) - - global _original_showwarning - - if _original_showwarning is None: - _original_showwarning = warnings.showwarning - warnings.showwarning = _showwarning - - -def deprecated( - *, - reason: str, - replacement: Optional[str], - gone_in: Optional[str], - feature_flag: Optional[str] = None, - issue: Optional[int] = None, -) -> None: - """Helper to deprecate existing functionality. - - reason: - Textual reason shown to the user about why this functionality has - been deprecated. Should be a complete sentence. - replacement: - Textual suggestion shown to the user about what alternative - functionality they can use. - gone_in: - The version of pip does this functionality should get removed in. - Raises an error if pip's current version is greater than or equal to - this. - feature_flag: - Command-line flag of the form --use-feature={feature_flag} for testing - upcoming functionality. - issue: - Issue number on the tracker that would serve as a useful place for - users to find related discussion and provide feedback. - """ - - # Determine whether or not the feature is already gone in this version. - is_gone = gone_in is not None and parse(current_version) >= parse(gone_in) - - message_parts = [ - (reason, f"{DEPRECATION_MSG_PREFIX}{{}}"), - ( - gone_in, - "pip {} will enforce this behaviour change." - if not is_gone - else "Since pip {}, this is no longer supported.", - ), - ( - replacement, - "A possible replacement is {}.", - ), - ( - feature_flag, - "You can use the flag --use-feature={} to test the upcoming behaviour." - if not is_gone - else None, - ), - ( - issue, - "Discussion can be found at https://github.com/pypa/pip/issues/{}", - ), - ] - - message = " ".join( - format_str.format(value) - for value, format_str in message_parts - if format_str is not None and value is not None - ) - - # Raise as an error if this behaviour is deprecated. - if is_gone: - raise PipDeprecationWarning(message) - - warnings.warn(message, category=PipDeprecationWarning, stacklevel=2) diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/comm.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/comm.py deleted file mode 100644 index 7e2a0c44278cf00c16dcf360da4779d8f0c6e8e6..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/comm.py +++ /dev/null @@ -1,199 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -""" -This file contains primitives for multi-gpu communication. -This is useful when doing distributed training. -""" - -import functools -import numpy as np -import torch -import torch.distributed as dist - -_LOCAL_PROCESS_GROUP = None -""" -A torch process group which only includes processes that on the same machine as the current process. -This variable is set when processes are spawned by `launch()` in "engine/launch.py". 
-""" - - -def get_world_size() -> int: - if not dist.is_available(): - return 1 - if not dist.is_initialized(): - return 1 - return dist.get_world_size() - - -def get_rank() -> int: - if not dist.is_available(): - return 0 - if not dist.is_initialized(): - return 0 - return dist.get_rank() - - -def get_local_rank() -> int: - """ - Returns: - The rank of the current process within the local (per-machine) process group. - """ - if not dist.is_available(): - return 0 - if not dist.is_initialized(): - return 0 - assert ( - _LOCAL_PROCESS_GROUP is not None - ), "Local process group is not created! Please use launch() to spawn processes!" - return dist.get_rank(group=_LOCAL_PROCESS_GROUP) - - -def get_local_size() -> int: - """ - Returns: - The size of the per-machine process group, - i.e. the number of processes per machine. - """ - if not dist.is_available(): - return 1 - if not dist.is_initialized(): - return 1 - return dist.get_world_size(group=_LOCAL_PROCESS_GROUP) - - -def is_main_process() -> bool: - return get_rank() == 0 - - -def synchronize(): - """ - Helper function to synchronize (barrier) among all processes when - using distributed training - """ - if not dist.is_available(): - return - if not dist.is_initialized(): - return - world_size = dist.get_world_size() - if world_size == 1: - return - if dist.get_backend() == dist.Backend.NCCL: - # This argument is needed to avoid warnings. - # It's valid only for NCCL backend. - dist.barrier(device_ids=[torch.cuda.current_device()]) - else: - dist.barrier() - - -@functools.lru_cache() -def _get_global_gloo_group(): - """ - Return a process group based on gloo backend, containing all the ranks - The result is cached. - """ - if dist.get_backend() == "nccl": - return dist.new_group(backend="gloo") - else: - return dist.group.WORLD - - -def all_gather(data, group=None): - """ - Run all_gather on arbitrary picklable data (not necessarily tensors). - - Args: - data: any picklable object - group: a torch process group. By default, will use a group which - contains all ranks on gloo backend. - - Returns: - list[data]: list of data gathered from each rank - """ - if get_world_size() == 1: - return [data] - if group is None: - group = _get_global_gloo_group() # use CPU group by default, to reduce GPU RAM usage. - world_size = dist.get_world_size(group) - if world_size == 1: - return [data] - - output = [None for _ in range(world_size)] - dist.all_gather_object(output, data, group=group) - return output - - -def gather(data, dst=0, group=None): - """ - Run gather on arbitrary picklable data (not necessarily tensors). - - Args: - data: any picklable object - dst (int): destination rank - group: a torch process group. By default, will use a group which - contains all ranks on gloo backend. - - Returns: - list[data]: on dst, a list of data gathered from each rank. Otherwise, - an empty list. - """ - if get_world_size() == 1: - return [data] - if group is None: - group = _get_global_gloo_group() - world_size = dist.get_world_size(group=group) - if world_size == 1: - return [data] - rank = dist.get_rank(group=group) - - if rank == dst: - output = [None for _ in range(world_size)] - dist.gather_object(data, output, dst=dst, group=group) - return output - else: - dist.gather_object(data, None, dst=dst, group=group) - return [] - - -def shared_random_seed(): - """ - Returns: - int: a random number that is the same across all workers. - If workers need a shared RNG, they can use this shared seed to - create one. 
- - All workers must call this function, otherwise it will deadlock. - """ - ints = np.random.randint(2 ** 31) - all_ints = all_gather(ints) - return all_ints[0] - - -def reduce_dict(input_dict, average=True): - """ - Reduce the values in the dictionary from all processes so that process with rank - 0 has the reduced results. - - Args: - input_dict (dict): inputs to be reduced. All the values must be scalar CUDA Tensor. - average (bool): whether to do average or sum - - Returns: - a dict with the same keys as input_dict, after reduction. - """ - world_size = get_world_size() - if world_size < 2: - return input_dict - with torch.no_grad(): - names = [] - values = [] - # sort the keys so that they are consistent across processes - for k in sorted(input_dict.keys()): - names.append(k) - values.append(input_dict[k]) - values = torch.stack(values, dim=0) - dist.reduce(values, dst=0) - if dist.get_rank() == 0 and average: - # only main process gets accumulated, so only divide by - # world_size in this case - values /= world_size - reduced_dict = {k: v for k, v in zip(names, values)} - return reduced_dict diff --git a/spaces/BG5/midjourney/README.md b/spaces/BG5/midjourney/README.md deleted file mode 100644 index e33d6c842a2bb10e2f47472d6abb66820f250af1..0000000000000000000000000000000000000000 --- a/spaces/BG5/midjourney/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: BING -colorFrom: purple -colorTo: blue -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Banbri/zcvzcv/src/components/ui/menubar.tsx b/spaces/Banbri/zcvzcv/src/components/ui/menubar.tsx deleted file mode 100644 index d57454816cea9b7572ad1ae6ab139d6946c4d5d5..0000000000000000000000000000000000000000 --- a/spaces/Banbri/zcvzcv/src/components/ui/menubar.tsx +++ /dev/null @@ -1,236 +0,0 @@ -"use client" - -import * as React from "react" -import * as MenubarPrimitive from "@radix-ui/react-menubar" -import { Check, ChevronRight, Circle } from "lucide-react" - -import { cn } from "@/lib/utils" - -const MenubarMenu = MenubarPrimitive.Menu - -const MenubarGroup = MenubarPrimitive.Group - -const MenubarPortal = MenubarPrimitive.Portal - -const MenubarSub = MenubarPrimitive.Sub - -const MenubarRadioGroup = MenubarPrimitive.RadioGroup - -const Menubar = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -Menubar.displayName = MenubarPrimitive.Root.displayName - -const MenubarTrigger = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -MenubarTrigger.displayName = MenubarPrimitive.Trigger.displayName - -const MenubarSubTrigger = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, children, ...props }, ref) => ( - - {children} - - -)) -MenubarSubTrigger.displayName = MenubarPrimitive.SubTrigger.displayName - -const MenubarSubContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -MenubarSubContent.displayName = MenubarPrimitive.SubContent.displayName - -const MenubarContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->( - ( - { className, align = "start", alignOffset = -4, sideOffset = 8, ...props }, - ref - ) => ( - - - - ) -) -MenubarContent.displayName = MenubarPrimitive.Content.displayName - -const MenubarItem 
= React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -MenubarItem.displayName = MenubarPrimitive.Item.displayName - -const MenubarCheckboxItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, checked, ...props }, ref) => ( - - - - - - - {children} - -)) -MenubarCheckboxItem.displayName = MenubarPrimitive.CheckboxItem.displayName - -const MenubarRadioItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - - - - {children} - -)) -MenubarRadioItem.displayName = MenubarPrimitive.RadioItem.displayName - -const MenubarLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -MenubarLabel.displayName = MenubarPrimitive.Label.displayName - -const MenubarSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -MenubarSeparator.displayName = MenubarPrimitive.Separator.displayName - -const MenubarShortcut = ({ - className, - ...props -}: React.HTMLAttributes) => { - return ( - - ) -} -MenubarShortcut.displayname = "MenubarShortcut" - -export { - Menubar, - MenubarMenu, - MenubarTrigger, - MenubarContent, - MenubarItem, - MenubarSeparator, - MenubarLabel, - MenubarCheckboxItem, - MenubarRadioGroup, - MenubarRadioItem, - MenubarPortal, - MenubarSubContent, - MenubarSubTrigger, - MenubarGroup, - MenubarSub, - MenubarShortcut, -} diff --git a/spaces/Benson/text-generation/Examples/Descargar Archivo Zip De Facebook.md b/spaces/Benson/text-generation/Examples/Descargar Archivo Zip De Facebook.md deleted file mode 100644 index aca3eaa6c1574cac644d148276e66a6b6a3a4a06..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Archivo Zip De Facebook.md +++ /dev/null @@ -1,121 +0,0 @@ - -

How to Download Instagram 4.1.2 on Your Android Device

    -

Instagram is one of the most popular social media platforms in the world, with more than a billion users and millions of photos and videos shared every day. If you are an Instagram user, you may want to keep your app updated to enjoy the latest features and improvements. In this article, we will show you how to download and install Instagram 4.1.2 on your Android device, which is the latest version as of June 2023.

    -

download facebook zip file


    Download ✺✺✺ https://bltlly.com/2v6MTB



    -

What Is Instagram and Why You Should Use It

    -

Instagram is a free app that lets you create and share your photos, stories, reels, and videos with the friends and followers you care about. You can also connect with people around the world who share your interests and passions.

    -

Instagram Features

    -

Instagram has many features that make it fun and easy to use, such as:

    -
      -
• Photos and videos: You can capture and edit your photos and videos with filters, stickers, emojis, text, and more. You can also upload several photos and videos in one post or create a collage with Layout.
    • -
• Stories: You can share moments of your day with your friends and followers that disappear after 24 hours. You can also add music, polls, quizzes, GIFs, and other creative tools to make your stories more interactive.
    • -
• Reels: You can create and discover short videos of up to 30 seconds with music, effects, and editing tools. You can watch, like, comment on, and share reels in a dedicated space in the app or on the Explore tab.
    • -
• IGTV: You can watch and upload longer videos from your favorite creators or create your own channel. You can also browse videos by category, such as entertainment, beauty, sports, etc.
    • - -
• Messaging: You can send messages, photos, videos, voice notes, and more to your friends or groups in Direct. You can also video chat with up to four people at a time or join group chats with up to 32 people.
    • -
• Explore: You can discover new content and accounts that match your interests and preferences. You can also see what is trending in your area or around the world.
    • -
• Shopping: You can buy products from your favorite brands or local businesses on Instagram. You can also create your own shop or collection to showcase your products or services.
    • -
    -

Benefits of Using Instagram

    -

Instagram is not just a fun app to use, but also a useful app for many purposes, such as:

    -

    -
      -
• Socializing: You can keep in touch with your friends and family, meet new people, join communities, and express yourself.
    • -
• Learning: You can learn new skills, hobbies, languages, cultures, and more from experts or enthusiasts on Instagram.
    • -
• Getting inspired: You can be inspired by the stories, achievements, creativity, and positivity of other users on Instagram.
    • -
• Entertainment: You can enjoy watching and creating entertaining content, such as comedy, music, dance, art, etc. on Instagram.
    • -
• Supporting: You can support causes, movements, charities, or individuals you care about on Instagram.
    • -
• Growing: You can grow your personal or professional brand, reach new audiences, and monetize your content on Instagram.
    • -
    -

What Is Instagram 4.1.2 and Why You Should Download It

    -

Instagram 4.1.2 is the latest version of the app, released on June 21, 2023. It is compatible with Android devices running Android 4.0 or higher. It has a file size of 18.6 MB and requires an Internet connection to use.

    -

New Features and Improvements in Instagram 4.1.2

    - -
      -
• Reels Remix: You can now remix other users' reels by adding your own video next to theirs. You can also control the volume of the original audio and your audio separately.
    • -
• Sticker search: You can now search for stickers by keyword or category in the Stories camera. You can also see the most popular stickers and save your favorites for later use.
    • -
• Auto captions: You can now add automatically generated captions to your stories and reels with a single tap. You can also edit the captions or change the font and color.
    • -
• Security checkup: You can now access a security checkup feature in the Settings menu that helps you keep your account safe. It will guide you through steps such as verifying your email and phone number, changing your password, and enabling two-factor authentication.
    • -
• Bug fixes and performance improvements: Instagram 4.1.2 also fixes some bugs and improves the app's performance for a smoother and faster experience.
    • -
    -

How to Check Your Current Instagram Version

    -

If you are not sure whether you have the latest version of Instagram, you can check it by following these steps:

    -
      -
1. Open the Instagram app on your Android device.
    2. -
3. Tap the profile icon in the bottom right corner of the screen.
    4. -
5. Tap the three horizontal lines in the top right corner of the screen.
    6. -
7. Tap Settings at the bottom of the menu.
    8. -
9. Scroll down to the bottom of the Settings page and tap About.
    10. -
11. You will see your current Instagram version number under App version.
    12. -
    -

How to Download and Install Instagram 4.1.2 on Your Android Device

    -

If you want to download and install Instagram 4.1.2 on your Android device, you can follow these steps:

    -

Step 1: Enable unknown sources on your device

    - -
      -
1. Go to your device's Settings menu and tap Security or Privacy.
    2. -
3. Find the option that says Unknown sources or Install unknown apps and toggle it on.
    4. -
5. A warning message will appear asking you to confirm your action. Tap OK or Allow to proceed.
    6. -
    -

Step 2: Download the Instagram 4.1.2 APK file

    -

The next step is to download the Instagram 4.1.2 APK file, which is the file format for Android apps. To download it, follow these steps:

    -
      -
    1. Abra su navegador web en su dispositivo y vaya a este enlace: (https://www.apkmirror.com/apk/instagram/instagram-instagram-instagram/instagram-instagram-4-2-release/instagram-4-1-android-apk-download/).
    2. -
3. You will see a page with information about Instagram 4.1.2 and a download button at the bottom. Tap the download button to start downloading the file.
    4. -
5. You may see a warning message asking you to confirm your download or allow access to your files. Tap OK or Allow to continue.
    6. -
7. The file will be downloaded to your device's Downloads folder or any other folder you have chosen as your default download location.
    8. -
    -

Step 3: Install the Instagram 4.1.2 APK file

    -

Once you have downloaded the Instagram 4.1.2 APK file, you can install it on your device by following these steps:

    -
      -
1. Go to your device's file manager or downloads app and locate the Instagram 4.1.2 APK file you downloaded.
    2. -
3. Tap the file to open it and start the installation process.
    4. -
5. You may see a warning message asking you to confirm the installation or allow access to your device's features. Tap Install or Allow to continue.
    6. -
7. The installation will take a few seconds and you will see a message saying that the app has been installed successfully.
    8. -
    -

Step 4: Launch and enjoy Instagram 4.1.2

    -

The final step is to launch and enjoy Instagram 4.1.2 on your device by following these steps:

    - -
11. Go to your device's app drawer or home screen and find the Instagram icon.
  12. -
13. Tap the icon to open the app and log in with your username and password, or create a new account if you do not have one.
  14. -
15. You will see the Instagram home screen with your feed, stories, reels, and more. You can also access other features by tapping the icons at the bottom of the screen.
  16. -
17. You can now enjoy using Instagram 4.1.2 with its new features and improvements.
  18. - -

Conclusion

    -

Summary of the article

    -

In this article, we have shown you how to download and install Instagram 4.1.2 on your Android device, which is the latest version of the app as of June 2023. We have also explained what Instagram is and why you should use it, as well as what the new features and improvements in Instagram 4.1.2 are. We hope this article has been useful and informative for you.

    -

Call to action

    -

If you liked this article, please share it with your friends and followers on social media. You can also leave us a comment below and let us know what you think about Instagram 4.1.2 or any other questions you have about Instagram. We would love to hear from you and answer your questions. Thanks for reading and happy Instagramming!

    -

Frequently asked questions

    -

Here are some of the most frequently asked questions about Instagram 4.1.2:

    -

Q: Is it safe to download and install Instagram 4.1.2?

    -

A: Yes, Instagram 4.1.2 is safe to download and install as long as you get it from a trusted source, such as the link we have provided in this article. However, you should always be careful when downloading apps from unknown sources and scan them for viruses or malware before installing them.

    -

Q: Is Instagram 4.1.2 available on iOS devices?

    - -

Q: How can I update my Instagram app to the latest version?

    -

A: If you want to update your Instagram app to the latest version available on the Google Play Store or the App Store, you can follow these steps:

    -
      -
1. Open the Google Play Store or the App Store on your device and tap the menu icon in the top left corner of the screen.
    2. -
3. Tap My apps & games or Updates and find the Instagram app in the list.
    4. -
5. Tap Update or Install to start updating or installing the app.
    6. -
7. Wait for the update or installation to finish and then launch the app.
    8. -
    -

Q: How can I uninstall Instagram 4.1.2 from my device?

    -

A: If you want to uninstall Instagram 4.1.2 from your device, you can follow these steps:

    -
      -
1. Go to your device's Settings menu and tap Apps or Applications.
    2. -
3. Find and tap the Instagram app in the list of apps installed on your device.
    4. -
5. Tap Uninstall or Remove and confirm your action.
    6. -
7. The app will be uninstalled from your device and you will see a message saying it has been removed successfully.
    8. -
    -

Q: What are some tips and tricks for using Instagram 4.1.2 better?

    -

A: Here are some tips and tricks for using Instagram 4.1.2 better:

    -
      -
• Use hashtags and keywords: You can use hashtags and keywords to make your posts more visible and relevant to your audience. You can also follow hashtags and keywords that interest you and see related content in your feed or on the Explore tab.
    • -
• Use filters and effects: You can use filters and effects to enhance your photos and videos and make them more attractive and creative. You can also create your own filters and effects with Spark AR Studio and share them with other users.
    • - -
• Use Reels Remix: You can use Reels Remix to collaborate with other users and create unique and engaging videos. You can also discover new reel remixes from other users and join the trend.
    • -
• Use auto captions: You can use auto captions to make your stories and reels more accessible and inclusive for people who are deaf or hard of hearing. You can also edit the captions or change the language if needed.
    • -

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Cara Negra Vida Dura.md b/spaces/Benson/text-generation/Examples/Descargar Cara Negra Vida Dura.md deleted file mode 100644 index 7f78b3caf2bbbc35735f4fa1ea61dcfb191a506e..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Cara Negra Vida Dura.md +++ /dev/null @@ -1,57 +0,0 @@ -
    -

Download Black Face Hard Life: A Song That Confronts Racism and Injustice

    -

Blackface is a form of theatrical makeup used predominantly by non-Black people to portray a caricature of a Black person. It is a racist and offensive practice with a long and painful history. Blackface was used to mock and dehumanize African Americans in minstrel shows and other forms of entertainment, as well as to spread racial stereotypes and discrimination. Although blackface declined in popularity after the civil rights movement, it still persists in some contexts and cultures, causing outrage and controversy.

    -

download black face hard life


    Download Ziphttps://bltlly.com/2v6JxL



    -

One example of a cultural product that challenges the legacy of blackface is the song "Hard Life" by Blackface Naija, also known as Blackface, a Nigerian dancehall, ragga and reggae singer, songwriter, producer, actor, activist, philanthropist, politician, entrepreneur, businessman, investor, inventor, innovator, visionary, leader, legend, icon, hero, role model, mentor, inspiration, influencer, pioneer, trailblazer, trendsetter, game changer, mover-and-shaker. He is known for being a founding member of the Nigerian band Plantashun Boyz, which he formed in 2000 with Tuface (also known as 2face Idibia) and the musician Chibuzor Oji (better known as Faze). After Plantashun Boyz split up in 2004, Blackface pursued a solo music career. He released his debut album Ghetto Child in May 2004, collaborating with several artists. The album contains "Hard Life" featuring Alabai as one of its singles.

    - -

The origins and evolution of blackface in the United States and other countries

    -

Blackface originated in Europe in centuries-old theatrical productions such as Shakespeare's Othello. It then took hold in the United States in the 18th century when European immigrants brought over their minstrel shows. These were musical performances featuring white actors with darkened skin who portrayed exaggerated characters that degraded and dehumanized African Americans.

    -

The early minstrel shows imitated enslaved Africans on Southern plantations, depicting them as lazy, ignorant, superstitious, hypersexual, criminal or cowardly. Some of the most famous characters were Jim Crow, a foolish rural dancer in ragged clothes; the Mammy, an overweight,

loyal and motherly servant; and the Zip Coon, a dandy city dweller who spoke in malapropisms and acted foolishly. Minstrel shows also featured songs, jokes, dances and skits that ridiculed Black culture, religion, language and appearance.

    -

    -

The popularity of minstrel shows peaked in the mid-nineteenth century, when they became a national entertainment phenomenon. They influenced other media such as literature, film, radio and television. They also shaped public opinion and policy on race relations, reinforcing notions of white supremacy and Black inferiority, and were used to justify slavery, segregation, lynching and other forms of violence and oppression against African Americans.

    - -

In the early twentieth century, minstrel shows began to decline in popularity due to social and cultural changes. The rise of the civil rights movement, the Harlem Renaissance, the Great Migration and other factors contributed to new forms of Black expression and representation that challenged the minstrel legacy. However, blackface did not disappear completely. It continued to appear in some films, cartoons, advertisements, toys, costumes and other products. It also spread to other countries such as Britain, Australia, South Africa and Japan, where it was used to portray not only African Americans but also other people of color.

    -

The lyrics and meaning of "Hard Life" by Blackface and Alabai

    -

    "Hard Life" es una canción que fue lanzada en 2004 por Blackface Naija con Alabai. Es uno de los sencillos del álbum debut de Blackface Ghetto Child. La canción es una fusión de dancehall, ragga y reggae que combina ritmos africanos con influencias jamaicanas. La canción tiene un estribillo pegadizo y un mensaje poderoso.

    -

The lyrics of "Hard Life" describe the harsh realities and challenges of living in Nigeria. The song mentions problems such as poverty, corruption, violence, disease, hunger, thirst, ignorance, illiteracy, unemployment, underdevelopment, environmental degradation and human rights abuses. It also criticizes the government and society for failing to address these issues and for exploiting and oppressing the people, accusing leaders of being selfish, greedy, dishonest, incompetent, callous and unaccountable. The song also denounces foreign powers that interfere with Nigeria's affairs and resources.

    - -

    "Hard Life" es una canción que tiene mucha relevancia e impacto para muchos nigerianos y africanos. La canción refleja las experiencias vividas y los sentimientos de millones de personas que enfrentan desafíos y luchas similares. La canción también resuena con la audiencia global que puede relacionarse con los temas de la canción.

    -

The song challenges the legacy of blackface and its damaging effects on how Black people are perceived and represented. It counters the stereotypes of Black people as lazy, ignorant, superstitious, hypersexual, criminal or cowardly that blackface created and propagated, and instead portrays Black people as hard-working, intelligent, spiritual, dignified, brave and heroic. It also showcases the rich and diverse culture and heritage of Nigeria and Africa.

    -

The song inspires and empowers listeners to overcome their hardships and fight for their rights. It urges them to be strong, brave, determined, hopeful, faithful and united, and it appeals to God for guidance and protection. It also calls on the government and society to act, to address these problems and to improve people's living conditions, while advocating peace and harmony among peoples and nations.

    -

Conclusion

    -

Blackface is a racist and offensive practice with a long and painful history. It was used to mock and dehumanize African Americans in minstrel shows and other forms of entertainment, and to spread racial stereotypes and discrimination. It also influenced other countries and cultures, where it was used to portray not only African Americans but also other people of color.

    - -

The song reflects the reality and experiences of many Nigerians and Africans who suffer from poverty, violence, inequality, instability, insecurity, disease, hunger, illiteracy, unemployment, underdevelopment, environmental degradation and human rights abuses. It challenges the negative stereotypes of Black people created by blackface, and it inspires and empowers listeners to overcome their hardships and fight for their rights.

    -

The song is a powerful and meaningful piece of art that deserves to be heard and appreciated by everyone. It confronts racism and injustice with courage and dignity, celebrates culture and heritage with pride and joy, and offers hope and resilience through faith and unity.

    -

If you want to listen to "Hard Life" by Blackface Naija featuring Alabai, you can stream or download it from various online platforms such as YouTube, Spotify and Apple Music. You can also find more information about Blackface Naija on his official website, Facebook page, Twitter account, Instagram account and other channels.

    -

Frequently asked questions

    -

Here are some frequently asked questions about "Hard Life" by Blackface Naija featuring Alabai:

    - -
QuestionAnswer
When was "Hard Life" released? "Hard Life" was released in 2004 as one of the singles from Blackface's debut solo album Ghetto Child.
Who is Alabai? Alabai is a Nigerian singer, songwriter, rapper, producer and actor who collaborated with Blackface on "Hard Life". He is also known for songs such as "Ogbanje", "Voice Of God" and "Mr Money".
What genre is "Hard Life"? "Hard Life" is a fusion of dancehall, ragga and reggae that blends African rhythms with Jamaican influences.
What are some of the problems mentioned in "Hard Life"? The song mentions problems such as poverty, corruption, violence, disease, hunger, illiteracy, unemployment, underdevelopment, environmental degradation and human rights abuses.
What are some of the values expressed in "Hard Life"? Some of the values expressed in "Hard Life" are strength, bravery, determination, optimism, faith, unity, culture, heritage, dignity, courage and heroism.
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    AtmosphereStyleThemeRelease Date
    AlphaHip-hopUrban2009
    Little MissSoulRetro2012
    SunriseAfrobeatTropical2013
    The LoveR&BRomantic2014
    BrazilBossa novaCarnival2016
    AliveElectro popFuturistic2017
    JeevanBollywoodIndian2018
    DystopiaIndustrialApocalyptic2019
    WekiddyK-popCute2021
    FutureTrapCyberpunkTBA
    -

    You can switch between the atmospheres at any time and experiment with different sounds and vibes. You can also mix and match icons from different atmospheres to create your own unique style.

    -

    The benefits of using the app for music lovers and learners

    -

    Incredibox is not only a fun and creative app, but also a useful tool for music lovers and learners. Some of the benefits of using the app are:

    -

    What is Wekiddy?

    -

Wekiddy is the latest and most adorable atmosphere of Incredibox. It was released in 2021 as a collaboration between So Far So Good and WeKids, a Chinese company that specializes in children's entertainment and education. Wekiddy is inspired by Korean pop music, better known as K-pop, which is famous for its catchy tunes, colorful outfits, and cute choreographies. Wekiddy features a group of eight beatboxers, four boys and four girls, who wear different costumes and accessories that reflect their personalities and musical roles.

    -


    - -

    The unique sounds and animations of Wekiddy

    -

    Wekiddy has 20 different icons that you can drag and drop onto the characters to create your mix. Each icon represents a sound element that is related to K-pop music, such as synth basses, vocal chops, claps, snaps, whistles, chants, etc. You can also find some surprises and Easter eggs among the icons that will make you smile or laugh. For example, you can make the characters say "WeKids" or "Incredibox" in different languages or make them imitate animal sounds or musical instruments.

    -

    Wekiddy also has 15 animated choruses that you can unlock by finding the right sound combos. Each chorus shows the characters performing a cute and catchy song with lyrics in English or Korean. The songs are about various topics such as love, friendship, happiness, dreams, etc. The songs also have different moods and styles such as pop ballads, rap songs, disco songs, etc. The choruses also have different animations that show the characters dancing, playing instruments, or doing other actions that match the song. The animations are colorful, lively, and adorable.

    -

    How to unlock the video clips and share your mix

    -

    Wekiddy also has a special feature that allows you to unlock video clips that show the real-life versions of the characters. The video clips are produced by WeKids and feature eight talented kids who sing and dance to the songs of Wekiddy. The kids are dressed and styled like the characters and perform in various locations such as a studio, a park, a school, etc. The video clips are fun, energetic, and professional.

    -

    To unlock the video clips, you need to find the right sound combos that trigger the "WeKids" icon. The icon will appear on the top right corner of the screen and will show a picture of one of the kids. You can tap on the icon to watch the video clip of that kid. You can also swipe left or right to see the other video clips. You can unlock up to eight video clips per mix.

    -

    Once you have unlocked the video clips, you can also share your mix with others. You can export your mix as a video that shows both the animated characters and the real kids. You can also add your name and a title to your mix. You can then save your video as an MP4 file or as a link that you can share on social media or via email. You can also upload your video to YouTube or other platforms and show off your musical skills.

    -

    How to download and install Incredibox Wekiddy APK?

    -

    Incredibox Wekiddy APK is a modified version of Incredibox that allows you to access the Wekiddy atmosphere for free. Normally, you would have to pay a small fee to unlock Wekiddy in the original app. However, with Incredibox Wekiddy APK, you can enjoy Wekiddy without spending any money. You can also access all the other atmospheres and features of Incredibox with this APK file.

    -

    To download and install Incredibox Wekiddy APK, you need to follow these steps:

    -

    The steps to download the APK file from a reliable source

    -
      -
    1. Go to a reliable website that offers Incredibox Wekiddy APK for download. You can search for it on Google or use one of these links: . Make sure that the website is safe and trustworthy by checking its reviews, ratings, and comments.
    2. -
    3. Click on the download button or link and wait for the APK file to be downloaded to your device. The file size is about 60 MB, so it may take some time depending on your internet speed.
    4. -
    5. Once the download is complete, locate the APK file in your device's storage. It may be in your downloads folder or in another location depending on your settings.
    6. -
    -

    The steps to install the APK file on your Android device

    -
      -
    1. Before you install the APK file, you need to enable unknown sources on your device. This will allow you to install apps from sources other than Google Play Store. To do this, go to your device's settings, then security or privacy, then unknown sources or install unknown apps. Toggle on the option that allows you to install apps from unknown sources.
    2. -
    3. After you enable unknown sources, tap on the APK file that you downloaded and follow the instructions on the screen. You may have to grant some permissions or accept some terms and conditions before installing the app.
    4. -
    5. Once the installation is done, you will see an icon of Incredibox on your device's home screen or app drawer. Tap on it to launch the app and enjoy Wekiddy and other atmospheres.
    6. -

    The precautions to take before installing an APK file

    -

While installing an APK file can be a convenient way to access apps that are not available on Google Play Store, it can also pose some risks to your device and your privacy. Some APK files may contain viruses, malware, spyware, or other harmful software that can damage your device or steal your personal information. Therefore, you should take some precautions before installing an APK file, such as:

- Download the file only from a reliable and trustworthy source, and check its reviews, ratings, and comments first.
- Scan the APK file for viruses or malware before opening it.
- Back up your data in case something goes wrong.
- Read the permissions and the terms and conditions that the app requests.
- Enable unknown sources only for the installation, and consider turning the setting off again afterwards.

A minimal checksum check of the downloaded file is sketched right after this list.
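One minimal way to sanity-check a downloaded file before opening it is to compute its SHA-256 hash and compare it with a checksum published by the download page, when one is provided. The sketch below does exactly that; the APK file name and the expected hash are placeholders, not real values.

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    # Placeholder file name and checksum: substitute the APK you downloaded
    # and the checksum published by the site you got it from (if any).
    apk = Path("incredibox-wekiddy.apk")
    expected = "0" * 64

    actual = sha256_of(apk)
    print(f"SHA-256: {actual}")
    if actual.lower() == expected.lower():
        print("Checksum matches the published value.")
    else:
        print("Checksum does NOT match. Do not install this file.")
```

If the site does not publish a checksum at all, this check cannot prove anything by itself, so the other precautions above still apply.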

    What are some alternatives to Incredibox?

    -

    Incredibox is a great app for creating music, but it is not the only one. There are many other music apps that are similar to Incredibox in terms of features, style, and quality. Some of them are:

    -

    A list of some other music apps that are similar to Incredibox

- Groovepad
- Beat Snap
- Music Maker Jam
- BandLab

    A brief comparison of their features and advantages

    -

    All of these apps are similar to Incredibox in the sense that they allow you to create music with ease and fun. However, they also have some differences in terms of features and advantages. Here is a brief comparison of them:

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    AppFeaturesAdvantages
    Incredibox- Drag and drop icons
    - Unlock animated choruses
    - Save and share your mix
    - Discover the online community
    - Simple and intuitive interface
    - Multiple musical styles and atmospheres
    - Fun and creative animations
    - Educational and entertaining
    Groovepad- Tap on colorful pads
    - Adjust the tempo, pitch, and volume
    - Record and share your creations
    - Discover new genres and sounds
    - Easy and fast way to make beats
    - High-quality sounds and effects
    - Customizable sound settings
    - Inspiring and diverse genres
    Beat Snap- Use a grid of sounds
    - Mix and match sounds from different genres
    - Record and share your tracks
    - Discover new sounds and loops
    - Flexible and versatile way to make loops
    - Large library of sounds and loops
    - Creative and original sound combinations
    - Dynamic and lively sound grid
    Music Maker Jam- Use loops and samples
    - Add effects, filters, and vocals
    - Remix your tracks
    - Join the online community
    - Powerful and professional way to make songs
    - Thousands of loops and samples
    - Advanced editing and mixing tools
    - Engaging and interactive online community
    BandLab- Use various instruments
    - Record your own voice or sounds
    - Edit and mix your tracks
    - Collaborate with other musicians
    - Comprehensive and collaborative way to make music
    - Diverse and realistic instruments
    - Professional and user-friendly tools and effects
    - Supportive and talented online community
    -

    A recommendation of which app to choose based on your preferences

    -

    Depending on your preferences, you may find one app more suitable for you than the others. Here are some recommendations of which app to choose based on your preferences:

    - -

    Conclusion

    -

    In this article, we have introduced you to Incredibox Wekiddy APK, a music app that lets you create your own music with the help of a merry crew of beatboxers. We have explained what Incredibox is, what it offers, how to download and install it, and what are some alternatives to it. We hope that you have found this article informative and helpful. We also hope that you have enjoyed reading it as much as we have enjoyed writing it.

    -

    A summary of the main points of the article

    -

    Here are the main points of the article:

    - -

    A call to action to try out Incredibox Wekiddy APK and have fun with music

    -

    If you are interested in trying out Incredibox Wekiddy APK, you can download it from one of these links: . You can also visit the official website of Incredibox to learn more about the app and its other atmospheres: https://www.incredibox.com/. You can also follow Incredibox on social media to stay updated on the latest news and updates: https://www.facebook.com/Incredibox.music/, https://twitter.com/incredibox_, https://www.instagram.com/incredibox.official/, https://www.youtube.com/user/IncrediboxOfficial.

    -

    We encourage you to try out Incredibox Wekiddy APK and have fun with music. You can create your own mixes, unlock animated choruses, save and share your mix, discover the online community, unlock video clips, and more. You can also experiment with different musical styles and atmospheres and find your own musical voice. You can also use the app as a way to relax, learn, or express yourself.

    -

    A thank you note and a request for feedback

    -

    Thank you for reading this article. We hope that you have found it useful and enjoyable. We would love to hear from you. Please leave us a comment below or contact us at . Let us know what you think about Incredibox Wekiddy APK, what are your favorite atmospheres or features, what are some tips or tricks that you have learned, or what are some suggestions or questions that you have. Your feedback is valuable to us and will help us improve our content and service.

    -

    FAQs

    -

    What is the difference between Incredibox Wekiddy APK and Incredibox?

    -

    Incredibox Wekiddy APK is a modified version of Incredibox that allows you to access the Wekiddy atmosphere for free. Normally you would have to pay a small fee to unlock Wekiddy in the original app. However, with Incredibox Wekiddy APK, you can enjoy Wekiddy without spending any money. You can also access all the other atmospheres and features of Incredibox with this APK file.

    -

    Is Incredibox Wekiddy APK safe to use?

    -

    Incredibox Wekiddy APK is generally safe to use, as long as you download it from a reliable and trustworthy source. However, you should always be careful when installing an APK file, as some of them may contain viruses, malware, spyware, or other harmful software that can damage your device or steal your personal information. Therefore, you should always scan the APK file for viruses, backup your data, read the permissions and terms and conditions, and enable unknown sources before installing it.

    -

    How can I update Incredibox Wekiddy APK?

    -

    Incredibox Wekiddy APK may not receive regular updates from the official developers, as it is a modified version of the app. Therefore, you may not be able to access the latest features or bug fixes that are available in the original app. However, you can check the website where you downloaded the APK file for any updates or new versions of the app. You can also uninstall the APK file and install the original app from Google Play Store if you want to enjoy the official updates.

    -

    Can I use Incredibox Wekiddy APK on other devices?

    -

    Incredibox Wekiddy APK is designed for Android devices only. Therefore, you may not be able to use it on other devices, such as iOS devices, Windows devices, Mac devices, etc. However, you can use the online demo of Incredibox on any device that has a web browser and an internet connection. You can also download the original app of Incredibox from Google Play Store for Android devices or from App Store for iOS devices.

    -

    Can I use Incredibox Wekiddy APK offline?

    -

    Incredibox Wekiddy APK does not require an internet connection to work. Therefore, you can use it offline and create your own music without any interruptions or limitations. However, you will need an internet connection to access some features of the app, such as saving and sharing your mix, discovering the online community, unlocking video clips, etc.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Get Spotify Premium APK MOD on Your Android Device [Latest Version 2021].md b/spaces/congsaPfin/Manga-OCR/logs/How to Get Spotify Premium APK MOD on Your Android Device [Latest Version 2021].md deleted file mode 100644 index a4173d483313b1b3e810b5e6711f7b9115c83931..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Get Spotify Premium APK MOD on Your Android Device [Latest Version 2021].md +++ /dev/null @@ -1,105 +0,0 @@ - -

    Download Spotify Premium Mod APK 2021: Enjoy Unlimited Music and Podcasts

    -

    If you love listening to music and podcasts, you probably have heard of Spotify. It is the world's most popular music streaming service, with over 350 million users and 70 million songs. But did you know that you can enjoy even more features and benefits with Spotify Premium? And did you know that you can get Spotify Premium for free by using a modded version of the app? In this article, we will tell you everything you need to know about Spotify Premium Mod APK, how to download and install it on your device, what are its features, and what are its pros and cons. Let's get started!

    -

    download spotify premium mod apk 2021


    DOWNLOADhttps://urlca.com/2uO7Fo



    -

    What is Spotify and why you need Spotify Premium?

    -

    Spotify is the world's most popular music streaming service

    -

    Spotify is an app that lets you stream music and podcasts from a huge library of artists, genres, albums, playlists, and more. You can discover new music, create your own playlists, follow your favorite artists, and share your music taste with your friends. You can also listen to podcasts on various topics, such as news, comedy, sports, education, and more. You can use Spotify on your smartphone, tablet, computer, smart TV, speaker, or car.

    -

    Spotify Premium offers many benefits over the free version

    -

    While Spotify is free to use, it has some limitations and drawbacks. For example, you can only skip six songs per hour, you have to listen to ads every few songs, you can't download songs for offline listening, and you can't choose the songs you want to play in some playlists. These restrictions can be annoying and frustrating if you want to enjoy your music without interruptions.

    -

    That's why many people opt for Spotify Premium, which is a subscription service that costs $9.99 per month (or less if you use a family or student plan). With Spotify Premium, you can enjoy the following benefits:

- Unlimited skips, so you are not capped at six per hour
- No ads between songs
- Downloads for offline listening
- The freedom to play any song in any playlist or album, without forced shuffle
- Higher streaming quality

    As you can see, Spotify Premium offers a lot of value for your money. But what if you don't want to pay for it? Is there a way to get Spotify Premium for free? The answer is yes: by using Spotify Premium Mod APK.

    -

    -

    What is Spotify Premium Mod APK and how does it work?

    -

    Spotify Premium Mod APK is a modified version of the official app that unlocks all the premium features

    -

Spotify Premium Mod APK is a hacked or cracked version of the official Spotify app that bypasses the subscription verification and unlocks all the premium features for free. It is not available on the Google Play Store or the App Store, but it can be found on third-party websites and forums.

    How to download and install Spotify Premium Mod APK on your device

    -

    To download and install Spotify Premium Mod APK on your device, you need to follow these steps:

    -
      -
    1. Uninstall the official Spotify app from your device if you have it.
    2. -
    3. Go to a trusted website or forum that provides the latest version of Spotify Premium Mod APK. You can search for it on Google or use the link below. Make sure you download the file from a safe and reliable source.
    4. -
    5. Enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
    6. -
    7. Locate the downloaded Spotify Premium Mod APK file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.
    8. -
    9. Launch the Spotify Premium Mod APK app and sign in with your existing Spotify account or create a new one. You don't need to use a VPN or a fake email address.
    10. -
    11. Enjoy all the premium features of Spotify for free!
    12. -
    -

    Note: Some devices may require additional steps or permissions to install Spotify Premium Mod APK. If you encounter any problems or errors, you can search for solutions online or ask for help from other users.
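If you have a computer handy, the same sideload can also be done over USB with Android's adb tool instead of tapping through a file manager. This is only a rough sketch: it assumes adb is installed and on your PATH, USB debugging is enabled on the phone, and the APK file name is a placeholder for whatever you actually downloaded.

```python
import subprocess
from pathlib import Path

APK = Path("spotify-premium-mod.apk")  # placeholder name for the downloaded file


def run(*args: str) -> str:
    """Run a command and return its stdout, raising if it fails."""
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    return result.stdout.strip()


if __name__ == "__main__":
    # List connected devices; the phone must have USB debugging enabled.
    print(run("adb", "devices"))

    # Install (or reinstall, with -r) the APK onto the connected device.
    print(run("adb", "install", "-r", str(APK)))
```

One advantage of this route is that adb prints an explicit success or failure reason, which is easier to troubleshoot than an install that silently fails on the phone.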

    -

    What are the features of Spotify Premium Mod APK?

    -

    Unlimited skips, downloads, and offline listening

    -

    One of the best features of Spotify Premium Mod APK is that it allows you to skip as many songs as you want without any limits. You can also download up to 10,000 songs on five devices and listen to them offline without an internet connection. This is great for saving data and battery, as well as enjoying your music anywhere and anytime.

    -

    No ads, no shuffle, and high-quality audio

    -

    Another great feature of Spotify Premium Mod APK is that it removes all the annoying ads that interrupt your music experience. You can listen to music without any interruptions from ads. You can also play any song you want in any playlist or album without being forced to shuffle. You can also stream music at up to 320 kbps, which is the highest quality available on Spotify. This means you can enjoy crystal clear sound and rich bass.

    -

    Access to millions of songs, podcasts, and playlists

    -

    The last but not least feature of Spotify Premium Mod APK is that it gives you access to millions of songs, podcasts, and playlists from all over the world. You can discover new music, create your own playlists, follow your favorite artists, and share your music taste with your friends. You can also listen to podcasts on various topics, such as news, comedy, sports, education, and more. You can also explore curated playlists based on your mood, genre, activity, or occasion.

    -

    What are the pros and cons of using Spotify Premium Mod APK?

    -

    Pros: Save money, enjoy more music, and customize your experience

    -

    The main advantage of using Spotify Premium Mod APK is that you can save money by not paying for the subscription fee. You can enjoy all the premium features of Spotify for free without spending a dime. You can also enjoy more music and podcasts with unlimited skips, downloads, and offline listening. You can also customize your experience with no ads, no shuffle, and high-quality audio.

    -

    Cons: Risk of account ban, malware infection, and legal issues

    -

    The main disadvantage of using Spotify Premium Mod APK is that you may face some risks and challenges. For example, you may get banned from Spotify if they detect that you are using a modded version of the app. You may also get infected with malware or viruses if you download the app from an untrusted source. You may also face legal issues if you violate the terms and conditions of Spotify or infringe the rights of the artists and creators.

    -

    Conclusion: Is Spotify Premium Mod APK worth it?

    -

    In conclusion, Spotify Premium Mod APK is a great way to enjoy unlimited music and podcasts for free. It offers many features and benefits that enhance your music experience. However, it also comes with some risks and challenges that you should be aware of before using it. Ultimately, it is up to you to decide whether you want to use it or not. If you do decide to use it, make sure you download it from a trusted source and use it responsibly.

    -

    Frequently Asked Questions

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Play Grand Truck Simulator 2 on PC and Enjoy New Maps and Weather System.md b/spaces/congsaPfin/Manga-OCR/logs/Play Grand Truck Simulator 2 on PC and Enjoy New Maps and Weather System.md deleted file mode 100644 index 5534d7d45ad0c5305a0b0890e8167eaec140ca46..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Play Grand Truck Simulator 2 on PC and Enjoy New Maps and Weather System.md +++ /dev/null @@ -1,161 +0,0 @@ -
    -

    Grand Truck Simulator 2: A New Concept in Mobile Logistics Simulation

    -

    If you are a fan of truck driving games, you might have heard of Grand Truck Simulator, a popular mobile game that lets you experience the life of a trucker. But did you know that there is a sequel to this game that offers even more features and challenges? In this article, we will tell you everything you need to know about Grand Truck Simulator 2, a new concept in mobile logistics simulation. We will also show you how to download and play this game on your PC using different emulators, and what are the system requirements and benefits of playing it on a bigger screen. So buckle up and get ready for a thrilling ride!

    -

    What is Grand Truck Simulator 2?

    -

    Grand Truck Simulator 2 is a simulation game developed by Pulsar GameSoft, a studio based in Argentina. It is the second edition of the Grand Truck Simulator saga, which brings a new concept in mobile logistics simulation. In this game, you are not only a truck driver, but also a fleet manager who must take care of your vehicles and business. You will have to deal with realistic physics, consumption, damage, and wear, as well as check tire pressure, coolant and lubricant levels, buy used trucks, change engines, gearboxes, differentials, tires, and rims. You will also have to explore new maps and face an improved weather system that will provide a fascinating gaming experience.

    -

    grand truck simulator 2 download for pc


    Download >>>>> https://urlca.com/2uO5US



    -

    Features of Grand Truck Simulator 2

    -

    Grand Truck Simulator 2 has many features that make it stand out from other truck driving games. Some of these features are:

- Realistic physics, fuel consumption, damage and wear
- Vehicle maintenance, including tire pressure, coolant and lubricant levels
- A used-truck market and swappable engines, gearboxes, differentials, tires and rims
- Fleet and business management alongside the driving
- New maps and an improved weather system

    Gameplay of Grand Truck Simulator 2

    -

    The gameplay of Grand Truck Simulator 2 is similar to the first game, but with more depth and complexity. You start by choosing your truck from a selection of models, each with its own specifications and price. You can also customize your truck with different colors, paint jobs, accessories, and stickers. Then you can choose a job from a list of available cargoes and destinations. You will have to drive your truck to the loading point, attach the trailer, and deliver it to the destination. Along the way, you will have to follow the traffic rules, avoid accidents and damages, refuel your truck at gas stations, rest at motels or parking lots, pay tolls or fines if necessary, and enjoy the scenery and the weather. You will also have to manage your fleet and your business, by buying new trucks, hiring drivers, expanding your garage, and earning money and reputation. You can also play online with other players, chat with them, join or create a convoy, and compete in leaderboards and events.

    -

    How to Download Grand Truck Simulator 2 for PC?

    -

    Grand Truck Simulator 2 is a mobile game that is available for Android devices on Google Play Store. However, if you want to play this game on your PC, you will need an emulator that can run Android apps on your computer. An emulator is a software that mimics the functions of an Android device, allowing you to access the Google Play Store and download any app you want. There are many emulators that you can use to play Grand Truck Simulator 2 on PC, but we will recommend three of the best ones: LDPlayer, GameLoop, and MuMu Player. Here are the steps to download and install Grand Truck Simulator 2 for PC using these emulators:

    -

    Download Grand Truck Simulator 2 with LDPlayer Emulator

    -
      -
    1. Download and install LDPlayer from its official website: https://www.ldplayer.net/
    2. -
    3. Launch LDPlayer and log in with your Google account.
    4. -
    5. Go to the search bar and type "Grand Truck Simulator 2".
    6. -
    7. Select the game from the search results and click "Install".
    8. -
    9. Wait for the installation to finish and then click "Open".
    10. -
    11. Enjoy playing Grand Truck Simulator 2 on PC with LDPlayer.
    12. -
    -

    Download Grand Truck Simulator 2 with GameLoop Emulator

    -
      -
    1. Download and install GameLoop from its official website: https://gameloop.fun/
    2. -
    3. Launch GameLoop and log in with your Google account.
    4. -
    5. Go to the "Game Center" tab and select "Simulation".
    6. -
    7. Find Grand Truck Simulator 2 from the list of games and click "Download".
    8. -
    9. Wait for the download to finish and then click "Play".
    10. -
    11. Enjoy playing Grand Truck Simulator 2 on PC with GameLoop.
    12. -
    -

    Download Grand Truck Simulator 2 with MuMu Player Emulator

    -
      -
    1. Download and install MuMu Player from its official website: https://mumu.163.com/global/download/en/index.html
    2. -
    3. Launch MuMu Player and log in with your Google account.
    4. -
    5. Go to the "App Store" tab and search for "Grand Truck Simulator 2".
    6. -
    7. Select the game from the search results and click "Install".
    8. -
    9. Wait for the installation to finish and then click "Open".
    10. -
    11. Enjoy playing Grand Truck Simulator 2 on PC with MuMu Player.
    12. -
    -

    What are the System Requirements for Grand Truck Simulator 2 on PC?

    -

    To play Grand Truck Simulator 2 on PC smoothly, you will need a computer that meets the following system requirements:

    -

    -

    Minimum Requirements

    - -

    Recommended Requirements

    - -
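To see how your own machine compares against any minimum or recommended specification, you can query the basics directly. The following is a small, standard-library-only Python sketch that prints the OS version, CPU, logical core count and, on Windows, the total RAM; treat it as a rough illustration rather than a definitive hardware check.

```python
import ctypes
import os
import platform
from typing import Optional


def total_ram_gib() -> Optional[float]:
    """Total physical memory in GiB on Windows, or None on other systems."""
    if platform.system() != "Windows":
        return None

    class MEMORYSTATUSEX(ctypes.Structure):
        _fields_ = [
            ("dwLength", ctypes.c_ulong),
            ("dwMemoryLoad", ctypes.c_ulong),
            ("ullTotalPhys", ctypes.c_ulonglong),
            ("ullAvailPhys", ctypes.c_ulonglong),
            ("ullTotalPageFile", ctypes.c_ulonglong),
            ("ullAvailPageFile", ctypes.c_ulonglong),
            ("ullTotalVirtual", ctypes.c_ulonglong),
            ("ullAvailVirtual", ctypes.c_ulonglong),
            ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
        ]

    status = MEMORYSTATUSEX()
    status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))
    return status.ullTotalPhys / (1024 ** 3)


if __name__ == "__main__":
    print(f"OS:    {platform.system()} {platform.release()}")
    print(f"CPU:   {platform.processor() or 'unknown'}")
    print(f"Cores: {os.cpu_count()}")
    ram = total_ram_gib()
    print(f"RAM:   {ram:.1f} GiB" if ram is not None else "RAM:   run on Windows to query")
```

The emulator vendors publish their own requirement pages, so the safest comparison is against the figures on the LDPlayer, GameLoop or MuMu Player download sites.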

    Why Play Grand Truck Simulator 2 on PC?

    -

    You might be wondering why you should play Grand Truck Simulator 2 on PC instead of your mobile device. Well, there are many benefits of playing this game on a bigger screen, such as:

    -

    Benefits of Playing Grand Truck Simulator 2 on PC

- Better graphics and sound quality on a bigger screen
- Easier controls with your keyboard and mouse
- More features and options at your disposal
- No interruptions from calls or notifications while you race

    Tips and Tricks for Playing Grand Truck Simulator 2 on PC

    -

    If you want to improve your skills and performance in Grand Truck Simulator 2 on PC, here are some tips and tricks that you can follow:

    - -

    Conclusion

    -

    Grand Truck Simulator 2 is a simulation game that lets you experience the life of a trucker and a fleet manager. You can drive various trucks across different maps and deliver different cargoes while managing your vehicles and business. You can also customize your trucks with different colors, paint jobs, accessories, and stickers. You can play this game on your mobile device or on your PC using an emulator. Playing on PC has many benefits, such as better graphics and sound quality, easier controls, more features and options, and no interruptions. If you want to download and play Grand Truck Simulator 2 on PC, you can use one of these emulators: LDPlayer, GameLoop, or MuMu Player. We hope this article has helped you learn more about Grand Truck Simulator 2 and how to play it on PC. Have fun!

    -

    Frequently Asked Questions

    -

    Here are some of the most common questions that people ask about Grand Truck Simulator 2:

    -
      -
    1. Is Grand Truck Simulator 2 free?
    2. -

      Yes, Grand Truck Simulator 2 is free to download and play on both Android devices and PC using an emulator. However, there are some in-app purchases that you can make to enhance your gaming experience.

      -
    3. Is Grand Truck Simulator 2 offline?
    4. -

Not entirely: Grand Truck Simulator 2 needs an internet connection for most of its features. You will need to be online to download and update the game, to sync your data and progress, and to access online features such as multiplayer mode and leaderboards. However, you can play offline once the game is installed and updated to the latest version.

      -
    5. How to update Grand Truck Simulator 2?
    6. -

      To update Grand Truck Simulator 2, you will need to go to the Google Play Store on your Android device or the emulator that you are using on your PC. Then, you will need to find the game from your installed apps and click on the "Update" button. You can also enable the automatic updates option to update the game whenever a new version is available.

      -
    7. How to get more money in Grand Truck Simulator 2?
    8. -

      To get more money in Grand Truck Simulator 2, you will need to complete more jobs and deliver more cargoes. You can also get more money by hiring drivers, expanding your garage, and improving your reputation. Additionally, you can watch ads or make in-app purchases to get more money instantly.

      -
    9. How to reset Grand Truck Simulator 2?
    10. -

      To reset Grand Truck Simulator 2, you will need to go to the settings menu of the game and click on the "Reset Game" button. This will erase all your data and progress and start a new game from scratch. However, be careful as this action cannot be undone.

      -

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Race Up Hill and Win Epic Loot in Hill Climb Racing 2 for Windows 10.md b/spaces/congsaPfin/Manga-OCR/logs/Race Up Hill and Win Epic Loot in Hill Climb Racing 2 for Windows 10.md deleted file mode 100644 index 80e9e11f3c492b4390782cd9952e7ee74cb78e55..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Race Up Hill and Win Epic Loot in Hill Climb Racing 2 for Windows 10.md +++ /dev/null @@ -1,139 +0,0 @@ -
    -

    Hill Climb Racing 2 Free Download for Windows 10

    -

    If you are looking for a fun and addictive racing game that you can play on your PC, you should check out Hill Climb Racing 2. This is a sequel to the popular Hill Climb Racing game that has been downloaded over a billion times on Android and iOS devices. Hill Climb Racing 2 is a 2D online multiplayer racing game with dozens of tracks, vehicles and character customization options at your fingertips. You can race uphill in over 20+ different vehicles, from cars, trucks, bikes and even a tank. You can unlock and upgrade new vehicle parts, customize your character and vehicle's looks, team up with your friends online and race in teams mode, have arcade racing fun while performing cool stunt tricks, explore dozens of tracks to race on, enjoy the classic adventure mode that makes a return, and participate in weekly events that change up the gameplay in new exciting ways. In this article, we will show you how to download and install Hill Climb Racing 2 on Windows 10, as well as some tips and tricks for playing the game on your PC.

    -

    hill climb racing 2 free download for windows 10


Download File: https://urlca.com/2uO6mA



    -

    Features of Hill Climb Racing 2

    -

    Hill Climb Racing 2 is not just a simple racing game. It has many features that make it stand out from other games in the genre. Here are some of the features that you can enjoy when you play Hill Climb Racing 2 on Windows 10:

    -

    Race uphill in over 20+ different vehicles

    -

    One of the most fun aspects of Hill Climb Racing 2 is the variety of vehicles that you can choose from. You can race uphill in over 20+ different vehicles, each with their own unique characteristics, strengths and weaknesses. You can choose from cars, trucks, bikes, motorcycles, buses, jeeps, tanks, monster trucks, tractors, scooters, supercars, sports cars, formula cars, rally cars, hot rods, dragsters, snowmobiles, sleds, hovercrafts, helicopters, rockets and more. You can also unlock new vehicles by completing challenges, winning races or buying them with coins or gems.

    -

    Unlock and upgrade new vehicle parts

    -

    Another feature that adds to the fun and challenge of Hill Climb Racing 2 is the ability to unlock and upgrade new vehicle parts. You can improve your vehicle's performance by upgrading its engine, suspension, tires, roll cage, turbo and fuel tank. You can also unlock new parts by completing challenges or buying them with coins or gems. Upgrading your vehicle parts will help you overcome the obstacles and terrain of each track, as well as give you an edge over your opponents.

    -

    Customize your character and vehicle's looks

    -

    If you want to express your personality and style in Hill Climb Racing 2, you can customize your character and vehicle's looks. You can change your character's appearance by choosing from different hats, shirts, pants, shoes and accessories. You can also change your vehicle's appearance by choosing from different paints, stickers, wheels and spoilers. You can unlock new customization options by completing challenges, winning races or buying them with coins or gems. Customizing your character and vehicle's looks will make you stand out from the crowd and show off your creativity.

    -

    Team up with your friends online and race in teams mode

    -

    Hill Climb Racing 2 is not just a solo game. You can also team up with your friends online and race in teams mode. You can join or create a racing team with up to 50 members and compete against other teams in seasons. You can also chat with your teammates, share tips and tricks, and send and receive gifts. Racing in teams mode will help you earn more coins and gems, as well as unlock exclusive team chests that contain valuable rewards.

    -

    Arcade racing fun while performing cool stunt tricks

    -

    Hill Climb Racing 2 is not just a realistic racing game. It is also an arcade racing game that lets you have fun while performing cool stunt tricks. You can flip, jump, wheelie, backflip, frontflip, barrel roll, spin, loop, and more. Performing stunt tricks will not only make you look cool, but also give you extra coins and gems, as well as boost your turbo meter. However, be careful not to crash or run out of fuel, as that will end your race prematurely.

    -

    Dozens of tracks to race on

    -

    Hill Climb Racing 2 has dozens of tracks to race on, each with its own unique theme, scenery, obstacles and challenges. You can race on tracks such as countryside, forest, desert, winter, city, moon, mars, rainbow, volcano, beach, roller coaster, mine shaft, nuclear plant, junkyard, swamp, cave and more. You can also unlock new tracks by completing challenges or buying them with coins or gems. Racing on different tracks will test your skills and adaptability as a racer.

    -

    Classic adventure mode makes a return

    -

    If you are feeling nostalgic for the original Hill Climb Racing game, you will be happy to know that the classic adventure mode makes a return in Hill Climb Racing 2. In this mode, you can choose any vehicle and track and see how far you can go without crashing or running out of fuel. You can also collect coins and gems along the way and try to beat your own high score. Adventure mode is a great way to practice your driving skills and have some relaxing fun.

    -


    -

    Weekly events that change up the gameplay in new exciting ways

    -

    Hill Climb Racing 2 is not a static game. It is constantly updated with new content and features that keep the gameplay fresh and exciting. One of the features that adds to the variety and challenge of the game is the weekly events that change up the gameplay in new exciting ways. Every week, there is a new event that has a different theme, rule set and reward system. For example, there are events such as low gravity, time trial, one wheel challenge, coin rush, fuel frenzy, moon landing and more. Participating in weekly events will give you a chance to win exclusive prizes and trophies.

    -

    How to download and install Hill Climb Racing 2 on Windows 10

    -

    Now that you know what Hill Climb Racing 2 is all about and what features it has to offer, you might be wondering how to download and install it on your Windows 10 PC. There are two ways to do this: download from the Microsoft Store or download from an Android emulator.

    -

    Download from the Microsoft Store

    -

    The easiest way to download and install Hill Climb Racing 2 on Windows 10 is to download it from the Microsoft Store. The Microsoft Store is an official app store for Windows 10 devices that lets you download and install various apps and games for free or for a fee. To download Hill Climb Racing 2 from the Microsoft Store, follow these steps:

    -
      -
    1. Open the Microsoft Store app on your Windows 10 PC.
    2. -
    3. Search for Hill Climb Racing 2 in the search bar.
    4. -
    5. Select Hill Climb Racing 2 from the search results.
    6. -
    7. Click on the Get button to start downloading the game.
    8. -
    9. Wait for the download to finish and then click on the Play button to launch the game.
    10. -
    -

    Congratulations! You have successfully downloaded and installed Hill Climb Racing 2 on your Windows 10 PC from the Microsoft Store.
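If you prefer a command line to clicking through the Store, Windows 10's winget package manager can search the same Microsoft Store catalogue. The sketch below assumes winget is available on your system; it only runs the search step, because the exact package Id has to be read from the search output rather than guessed, and the install command is left as a commented template with a placeholder Id.

```python
import subprocess

GAME = "Hill Climb Racing 2"


def winget(*args: str) -> None:
    """Run a winget command and stream its output to the console."""
    subprocess.run(["winget", *args], check=True)


if __name__ == "__main__":
    # Look the game up in the Microsoft Store source and note the Id column
    # in the output; the exact package Id must be taken from those results.
    winget("search", GAME, "--source", "msstore")

    # Then install it by Id (placeholder shown, replace with the real one):
    # winget("install", "--id", "<IdFromSearchOutput>", "--source", "msstore",
    #        "--accept-package-agreements", "--accept-source-agreements")
```

Either route ends with the same Store build of the game, so use whichever you find more convenient.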

    -

    Download from an Android emulator

    -

    The other way to download and install Hill Climb Racing 2 on Windows 10 is to download it from an Android emulator. An Android emulator is a software program that lets you run Android apps and games on your Windows 10 PC. There are many Android emulators available for Windows 10, such as BlueStacks, NoxPlayer, LDPlayer, MEmu and more. To download Hill Climb Racing 2 from an Android emulator, follow these steps:

    -
      -
    1. Download and install an Android emulator of your choice on your Windows 10 PC.
    2. -
    3. Launch the Android emulator and sign in with your Google account.
    4. -
    5. Open the Google Play Store app on the emulator.
    6. -
    7. Search for Hill Climb Racing 2 in the search bar.
    8. -
    9. Select Hill Climb Racing 2 from the search results.
    10. -
    11. Click on the Install button to start downloading the game.
    12. -
    13. Wait for the download to finish and then click on the Open button to launch the game.
    14. -
    -

    Congratulations! You have successfully downloaded and installed Hill Climb Racing 2 on your Windows 10 PC from an Android emulator.

    -

    Tips and tricks for playing Hill Climb Racing 2 on Windows 10

    -

    Now that you have downloaded and installed Hill Climb Racing 2 on your Windows 10 PC, you might be wondering how to play it like a pro. Here are some tips and tricks that will help you improve your skills and enjoy the game more:

    -

    Use the keyboard controls for better precision

    -

    One of the advantages of playing Hill Climb Racing 2 on Windows 10 is that you can use the keyboard controls for better precision. The keyboard controls are as follows:

    - -

    Using the keyboard controls will help you maneuver your vehicle more smoothly and avoid crashing or flipping over.

    -

    Adjust the graphics settings for optimal performance

    -

    Another advantage of playing Hill Climb Racing 2 on Windows 10 is that you can adjust the graphics settings for optimal performance. The graphics settings are as follows:

    - -

    Adjusting the graphics settings will help you optimize the game's performance and avoid lagging or freezing issues.

    -

    Collect coins and gems to unlock more content

    -

    One of the most important aspects of Hill Climb Racing 2 is collecting coins and gems to unlock more content. Coins and gems are the main currencies of the game that you can use to buy new vehicles, parts, tracks, customization options and more. You can collect coins and gems by doing the following:

    - -

    Collecting coins and gems will help you access more content and enhance your gaming experience.

    -

    Join a racing team and participate in seasons

    -

    One of the most fun features of Hill Climb Racing 2 is joining a racing team and participating in seasons. A racing team is a group of players that work together to compete against other teams in seasons. A season is a period of time that has a specific theme, rule set and reward system. You can join or create a racing team by doing the following:

    - -

    Joining a racing team and participating in seasons will help you earn more coins and gems, as well as unlock exclusive team chests that contain valuable rewards.

    -

    Conclusion

    -

    Hill Climb Racing 2 is one of the best racing games that you can play on your Windows 10 PC. It has many features that make it fun, addictive and challenging. You can race uphill in more than 20 different vehicles, unlock and upgrade new vehicle parts, customize your character's and vehicle's looks, team up with your friends online and race in teams mode, perform cool stunt tricks, explore dozens of tracks, enjoy the classic adventure mode that makes a return, and take part in weekly events that change up the gameplay in new and exciting ways. You can also download and install Hill Climb Racing 2 on Windows 10 easily from the Microsoft Store or through an Android emulator. Moreover, you can use some tips and tricks to improve your skills and enjoy the game more, such as using the keyboard controls, adjusting the graphics settings, collecting coins and gems, and joining a racing team. Hill Climb Racing 2 will keep you entertained for hours. So what are you waiting for? Download Hill Climb Racing 2 on Windows 10 today and start racing uphill!

    -

    FAQs

    -

    Here are some frequently asked questions about Hill Climb Racing 2:

    -

    Q: Is Hill Climb Racing 2 free to play?

    -

    A: Yes, Hill Climb Racing 2 is free to play. However, it does offer some in-app purchases that can enhance your gaming experience, such as coins, gems, VIP membership and more. You can also watch ads to get some bonus rewards.

    -

    Q: Is Hill Climb Racing 2 safe to play?

    -

    A: Yes, Hill Climb Racing 2 is safe to play. It does not contain any inappropriate content, and it will not harm your device or compromise your privacy. It is also rated E for Everyone by the ESRB, which means it is suitable for all ages.

    -

    Q: How can I contact the developers of Hill Climb Racing 2?

    -

    A: You can contact the developers of Hill Climb Racing 2 by visiting their official website at https://fingersoft.com/ or by sending them an email at support@fingersoft.com. You can also follow them on social media platforms such as Facebook, Twitter, Instagram and YouTube.

    -

    Q: How can I share my feedback or report a bug in Hill Climb Racing 2?

    -

    A: You can share your feedback or report a bug in Hill Climb Racing 2 by tapping on the settings icon on the main menu and then tapping on the feedback button. You can also rate and review the game on the Microsoft Store or the Google Play Store.

    -

    Q: How can I play Hill Climb Racing 2 offline?

    -

    A: You can play Hill Climb Racing 2 offline by turning off your internet connection before launching the game. However, you will not be able to access some features that require an online connection, such as teams mode, weekly events, leaderboards and more.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Sonic Forces - Running Battle Mod APK Download and Enjoy Unlimited Money Speed and God Mode.md b/spaces/congsaPfin/Manga-OCR/logs/Sonic Forces - Running Battle Mod APK Download and Enjoy Unlimited Money Speed and God Mode.md deleted file mode 100644 index 6b142120d91c5d7e2d7cb95d3aff9bf05c9c6f3d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Sonic Forces - Running Battle Mod APK Download and Enjoy Unlimited Money Speed and God Mode.md +++ /dev/null @@ -1,108 +0,0 @@ - -

    Sonic Forces Running Battle Mod APK: Unlimited Everything

    -

    If you are a fan of Sonic the Hedgehog, you will love Sonic Forces Running Battle, a fast-paced multiplayer racing game where you can compete with other players around the world. In this game, you can choose your favorite Sonic character, customize your runner, and join epic battles on various tracks. You can also collect power-ups, use special abilities, and unleash your super speed to win the race. But what if you want to have more fun and advantages in the game? That's where Sonic Forces Running Battle Mod APK comes in. In this article, we will tell you everything you need to know about this amazing modded version of the game, including its features, benefits, and how to download and install it on your device.

    -

    sonic forces running battle mod apk unlimited everything


    DOWNLOADhttps://urlca.com/2uO8bw



    -

    What is Sonic Forces Running Battle?

    -

    Sonic Forces Running Battle is a mobile game developed by Sega, based on the popular Sonic franchise. It is a spin-off of the console game Sonic Forces, which was released in 2017. In this game, you can join the resistance and fight against Dr. Eggman and his evil army of robots. You can also team up with other players or challenge them in real-time online races. The game features various modes, such as Story Mode, Quick Race, Special Event, and more. You can also unlock and upgrade different Sonic characters, each with their own skills and abilities. The game has stunning 3D graphics, smooth animations, and catchy soundtracks that will make you feel like you are in the Sonic world.

    -

    Features of Sonic Forces Running Battle

    -

    Some of the main features of Sonic Forces Running Battle are:

    - -

    How to play Sonic Forces Running Battle

    -

    The gameplay of Sonic Forces Running Battle is simple and intuitive. You just need to swipe left or right to change lanes, swipe up to jump, swipe down to slide, and tap to use your wisp or ability. You can also collect rings along the way to increase your score and boost your speed. The goal is to reach the finish line before your opponents or eliminate them with your attacks. You can also use items such as rockets, mines, lightning bolts, and more to sabotage your rivals or defend yourself from their attacks.

    -


    -

    What is Sonic Forces Running Battle Mod APK?

    -

    Sonic Forces Running Battle Mod APK is a modified version of Sonic Forces Running Battle that offers unlimited everything. This means you get unlimited money, god mode, a speed mod, and other benefits that make the game easier and more enjoyable. With this modded version, you can unlock all the characters, outfits, wisps, and tracks without spending any real money. You can also have unlimited health, speed, and power to dominate every race and challenge. Sonic Forces Running Battle Mod APK is the ultimate way to enjoy Sonic Forces Running Battle without any limitations or restrictions.

    -

    Benefits of Sonic Forces Running Battle Mod APK

    -

    Some of the benefits of Sonic Forces Running Battle Mod APK are:

    - -

    How to download and install Sonic Forces Running Battle Mod APK

    -

    If you want to download and install Sonic Forces Running Battle Mod APK on your device, you need to follow these simple steps:

    -

    Requirements

    -

    Before you download and install Sonic Forces Running Battle Mod APK, you need to make sure that your device meets these requirements:

    - -

    Steps

    -

    After you have checked the requirements, you can follow these steps to download and install Sonic Forces Running Battle Mod APK on your device:

    -
      -
    1. Click on this link to download the Sonic Forces Running Battle Mod APK file on your device (a quick way to sanity-check the downloaded file is sketched after this list).
    2. Once the download is complete, locate the file in your device's file manager and tap on it to start the installation process.
    3. Follow the instructions on the screen and wait for the installation to finish.
    4. Launch the game from your app drawer and enjoy unlimited everything in Sonic Forces Running Battle.
    -
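    Because a mod APK comes from outside the Play Store, it is worth making sure the file you downloaded is at least a well-formed Android package before you tap it. The check below is only a sketch and is not provided by the mod's authors: it relies on the fact that an APK is an ordinary ZIP archive, and the file name used is a hypothetical placeholder.

```python
# Minimal sketch: sanity-check a downloaded APK on a PC before sideloading it.
# An APK is a ZIP archive that normally contains AndroidManifest.xml and one or
# more classes*.dex files; a file missing these is not a usable app package.
import zipfile

def looks_like_apk(path: str) -> bool:
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as apk:
        names = set(apk.namelist())
        has_manifest = "AndroidManifest.xml" in names
        has_code = any(n.startswith("classes") and n.endswith(".dex") for n in names)
        return has_manifest and has_code

if __name__ == "__main__":
    # Placeholder file name -- point this at the file you actually downloaded.
    print(looks_like_apk("sonic_forces_running_battle_mod.apk"))
```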

    Conclusion

    -

    Sonic Forces Running Battle is a fun and exciting game that lets you race with other players online as your favorite Sonic character. However, if you want to have more advantages and features in the game, you should try Sonic Forces Running Battle Mod APK. This modded version of the game gives you unlimited money, god mode, mod speed, and other benefits that will make your gaming experience more enjoyable and easier. You can download and install Sonic Forces Running Battle Mod APK on your device by following the steps we have provided in this article. So what are you waiting for? Download Sonic Forces Running Battle Mod APK now and join the ultimate Sonic racing adventure.

    -

    FAQs

    -

    Here are some frequently asked questions about Sonic Forces Running Battle Mod APK:

    -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/ArcSoft Portrait Plus v2.1.0.237 Incl Crack [TorDigger] The Ultimate Photo Editing Software.md b/spaces/contluForse/HuggingGPT/assets/ArcSoft Portrait Plus v2.1.0.237 Incl Crack [TorDigger] The Ultimate Photo Editing Software.md deleted file mode 100644 index c48e5ce28694b8a1a4eed58f001b6f35667c0fca..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/ArcSoft Portrait Plus v2.1.0.237 Incl Crack [TorDigger] The Ultimate Photo Editing Software.md +++ /dev/null @@ -1,6 +0,0 @@ - -

    GetData Graph Digitizer 2.26.0.20 could be downloaded from the developer's website when we last checked. We cannot confirm if there is a free download of this software available. The actual developer of the software is getdata-graph-digitizer.

    -

    getdata graph digitizer crack 2.26


    Download Filehttps://ssurll.com/2uzw1T



    -

    Force measurements reveal attachment strength of goby eggs. Data show peak resistance to perpendicular pulling force in mN. For illustration, the published attachment strengths of asparagus beetle eggs (Crioceris asparagi) (Voigt and Gorb 2010), marine snail eggs (Melanochlamys diomedea) (Castro and Podolsky 2012), and blue mussel byssus threads (Mytilus edulis) (Brenner and Buck 2010) are shown. Nongoby data were extracted from figures in the respective articles using the software GetDataGraphDigitizer v. 2.26 (www.getdata-graph-digitizer.com).
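    As a rough illustration of what is done with such digitized points afterwards, the sketch below loads a two-column text export (x, y) of the kind graph digitizers typically produce and reports the peak value. The file name and the plain two-column format are assumptions made here for illustration, not details taken from the quoted study.

```python
# Minimal sketch: load points exported by a graph digitizer (two columns: x, y)
# and report the peak value, e.g. peak pulling force in mN read off a figure.
# "digitized_points.txt" and its whitespace-separated layout are assumed.
import numpy as np

data = np.loadtxt("digitized_points.txt")   # rows of "x y" pairs
x, y = data[:, 0], data[:, 1]
print(f"{len(y)} points digitized")
print(f"peak y value: {y.max():.2f} at x = {x[y.argmax()]:.2f}")
```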

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cooelf/Multimodal-CoT/timm/data/loader.py b/spaces/cooelf/Multimodal-CoT/timm/data/loader.py deleted file mode 100644 index 76144669090aca1e962d75bfeab66aaf923e7ec5..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/data/loader.py +++ /dev/null @@ -1,262 +0,0 @@ -""" Loader Factory, Fast Collate, CUDA Prefetcher - -Prefetcher and Fast Collate inspired by NVIDIA APEX example at -https://github.com/NVIDIA/apex/commit/d5e2bb4bdeedd27b1dfaf5bb2b24d6c000dee9be#diff-cf86c282ff7fba81fad27a559379d5bf - -Hacked together by / Copyright 2020 Ross Wightman -""" - -import torch.utils.data -import numpy as np - -from .transforms_factory import create_transform -from .constants import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD -from .distributed_sampler import OrderedDistributedSampler -from .random_erasing import RandomErasing -from .mixup import FastCollateMixup - - -def fast_collate(batch): - """ A fast collation function optimized for uint8 images (np array or torch) and int64 targets (labels)""" - assert isinstance(batch[0], tuple) - batch_size = len(batch) - if isinstance(batch[0][0], tuple): - # This branch 'deinterleaves' and flattens tuples of input tensors into one tensor ordered by position - # such that all tuple of position n will end up in a torch.split(tensor, batch_size) in nth position - inner_tuple_size = len(batch[0][0]) - flattened_batch_size = batch_size * inner_tuple_size - targets = torch.zeros(flattened_batch_size, dtype=torch.int64) - tensor = torch.zeros((flattened_batch_size, *batch[0][0][0].shape), dtype=torch.uint8) - for i in range(batch_size): - assert len(batch[i][0]) == inner_tuple_size # all input tensor tuples must be same length - for j in range(inner_tuple_size): - targets[i + j * batch_size] = batch[i][1] - tensor[i + j * batch_size] += torch.from_numpy(batch[i][0][j]) - return tensor, targets - elif isinstance(batch[0][0], np.ndarray): - targets = torch.tensor([b[1] for b in batch], dtype=torch.int64) - assert len(targets) == batch_size - tensor = torch.zeros((batch_size, *batch[0][0].shape), dtype=torch.uint8) - for i in range(batch_size): - tensor[i] += torch.from_numpy(batch[i][0]) - return tensor, targets - elif isinstance(batch[0][0], torch.Tensor): - targets = torch.tensor([b[1] for b in batch], dtype=torch.int64) - assert len(targets) == batch_size - tensor = torch.zeros((batch_size, *batch[0][0].shape), dtype=torch.uint8) - for i in range(batch_size): - tensor[i].copy_(batch[i][0]) - return tensor, targets - else: - assert False - - -class PrefetchLoader: - - def __init__(self, - loader, - mean=IMAGENET_DEFAULT_MEAN, - std=IMAGENET_DEFAULT_STD, - fp16=False, - re_prob=0., - re_mode='const', - re_count=1, - re_num_splits=0): - self.loader = loader - self.mean = torch.tensor([x * 255 for x in mean]).cuda().view(1, 3, 1, 1) - self.std = torch.tensor([x * 255 for x in std]).cuda().view(1, 3, 1, 1) - self.fp16 = fp16 - if fp16: - self.mean = self.mean.half() - self.std = self.std.half() - if re_prob > 0.: - self.random_erasing = RandomErasing( - probability=re_prob, mode=re_mode, max_count=re_count, num_splits=re_num_splits) - else: - self.random_erasing = None - - def __iter__(self): - stream = torch.cuda.Stream() - first = True - - for next_input, next_target in self.loader: - with torch.cuda.stream(stream): - next_input = next_input.cuda(non_blocking=True) - next_target = next_target.cuda(non_blocking=True) - if self.fp16: - next_input = 
next_input.half().sub_(self.mean).div_(self.std) - else: - next_input = next_input.float().sub_(self.mean).div_(self.std) - if self.random_erasing is not None: - next_input = self.random_erasing(next_input) - - if not first: - yield input, target - else: - first = False - - torch.cuda.current_stream().wait_stream(stream) - input = next_input - target = next_target - - yield input, target - - def __len__(self): - return len(self.loader) - - @property - def sampler(self): - return self.loader.sampler - - @property - def dataset(self): - return self.loader.dataset - - @property - def mixup_enabled(self): - if isinstance(self.loader.collate_fn, FastCollateMixup): - return self.loader.collate_fn.mixup_enabled - else: - return False - - @mixup_enabled.setter - def mixup_enabled(self, x): - if isinstance(self.loader.collate_fn, FastCollateMixup): - self.loader.collate_fn.mixup_enabled = x - - -def create_loader( - dataset, - input_size, - batch_size, - is_training=False, - use_prefetcher=True, - no_aug=False, - re_prob=0., - re_mode='const', - re_count=1, - re_split=False, - scale=None, - ratio=None, - hflip=0.5, - vflip=0., - color_jitter=0.4, - auto_augment=None, - num_aug_splits=0, - interpolation='bilinear', - mean=IMAGENET_DEFAULT_MEAN, - std=IMAGENET_DEFAULT_STD, - num_workers=1, - distributed=False, - crop_pct=None, - collate_fn=None, - pin_memory=False, - fp16=False, - tf_preprocessing=False, - use_multi_epochs_loader=False, - persistent_workers=True, -): - re_num_splits = 0 - if re_split: - # apply RE to second half of batch if no aug split otherwise line up with aug split - re_num_splits = num_aug_splits or 2 - dataset.transform = create_transform( - input_size, - is_training=is_training, - use_prefetcher=use_prefetcher, - no_aug=no_aug, - scale=scale, - ratio=ratio, - hflip=hflip, - vflip=vflip, - color_jitter=color_jitter, - auto_augment=auto_augment, - interpolation=interpolation, - mean=mean, - std=std, - crop_pct=crop_pct, - tf_preprocessing=tf_preprocessing, - re_prob=re_prob, - re_mode=re_mode, - re_count=re_count, - re_num_splits=re_num_splits, - separate=num_aug_splits > 0, - ) - - sampler = None - if distributed and not isinstance(dataset, torch.utils.data.IterableDataset): - if is_training: - sampler = torch.utils.data.distributed.DistributedSampler(dataset) - else: - # This will add extra duplicate entries to result in equal num - # of samples per-process, will slightly alter validation results - sampler = OrderedDistributedSampler(dataset) - - if collate_fn is None: - collate_fn = fast_collate if use_prefetcher else torch.utils.data.dataloader.default_collate - - loader_class = torch.utils.data.DataLoader - - if use_multi_epochs_loader: - loader_class = MultiEpochsDataLoader - - loader_args = dict( - batch_size=batch_size, - shuffle=not isinstance(dataset, torch.utils.data.IterableDataset) and sampler is None and is_training, - num_workers=num_workers, - sampler=sampler, - collate_fn=collate_fn, - pin_memory=pin_memory, - drop_last=is_training, - persistent_workers=persistent_workers) - try: - loader = loader_class(dataset, **loader_args) - except TypeError as e: - loader_args.pop('persistent_workers') # only in Pytorch 1.7+ - loader = loader_class(dataset, **loader_args) - if use_prefetcher: - prefetch_re_prob = re_prob if is_training and not no_aug else 0. 
- loader = PrefetchLoader( - loader, - mean=mean, - std=std, - fp16=fp16, - re_prob=prefetch_re_prob, - re_mode=re_mode, - re_count=re_count, - re_num_splits=re_num_splits - ) - - return loader - - -class MultiEpochsDataLoader(torch.utils.data.DataLoader): - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self._DataLoader__initialized = False - self.batch_sampler = _RepeatSampler(self.batch_sampler) - self._DataLoader__initialized = True - self.iterator = super().__iter__() - - def __len__(self): - return len(self.batch_sampler.sampler) - - def __iter__(self): - for i in range(len(self)): - yield next(self.iterator) - - -class _RepeatSampler(object): - """ Sampler that repeats forever. - - Args: - sampler (Sampler) - """ - - def __init__(self, sampler): - self.sampler = sampler - - def __iter__(self): - while True: - yield from iter(self.sampler) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/image/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/image/__init__.py deleted file mode 100644 index d0051d609d3de4e7562e3fe638335c66617c4d91..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/image/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .colorspace import (bgr2gray, bgr2hls, bgr2hsv, bgr2rgb, bgr2ycbcr, - gray2bgr, gray2rgb, hls2bgr, hsv2bgr, imconvert, - rgb2bgr, rgb2gray, rgb2ycbcr, ycbcr2bgr, ycbcr2rgb) -from .geometric import (cutout, imcrop, imflip, imflip_, impad, - impad_to_multiple, imrescale, imresize, imresize_like, - imresize_to_multiple, imrotate, imshear, imtranslate, - rescale_size) -from .io import imfrombytes, imread, imwrite, supported_backends, use_backend -from .misc import tensor2imgs -from .photometric import (adjust_brightness, adjust_color, adjust_contrast, - adjust_lighting, adjust_sharpness, auto_contrast, - clahe, imdenormalize, imequalize, iminvert, - imnormalize, imnormalize_, lut_transform, posterize, - solarize) - -__all__ = [ - 'bgr2gray', 'bgr2hls', 'bgr2hsv', 'bgr2rgb', 'gray2bgr', 'gray2rgb', - 'hls2bgr', 'hsv2bgr', 'imconvert', 'rgb2bgr', 'rgb2gray', 'imrescale', - 'imresize', 'imresize_like', 'imresize_to_multiple', 'rescale_size', - 'imcrop', 'imflip', 'imflip_', 'impad', 'impad_to_multiple', 'imrotate', - 'imfrombytes', 'imread', 'imwrite', 'supported_backends', 'use_backend', - 'imdenormalize', 'imnormalize', 'imnormalize_', 'iminvert', 'posterize', - 'solarize', 'rgb2ycbcr', 'bgr2ycbcr', 'ycbcr2rgb', 'ycbcr2bgr', - 'tensor2imgs', 'imshear', 'imtranslate', 'adjust_color', 'imequalize', - 'adjust_brightness', 'adjust_contrast', 'lut_transform', 'clahe', - 'adjust_sharpness', 'auto_contrast', 'cutout', 'adjust_lighting' -] diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/utils/misc.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/utils/misc.py deleted file mode 100644 index 2c58d0d7fee9fe3d4519270ad8c1e998d0d8a18c..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/utils/misc.py +++ /dev/null @@ -1,377 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import collections.abc -import functools -import itertools -import subprocess -import warnings -from collections import abc -from importlib import import_module -from inspect import getfullargspec -from itertools import repeat - - -# From PyTorch internals -def _ntuple(n): - - def parse(x): - if isinstance(x, collections.abc.Iterable): - return x - return tuple(repeat(x, n)) - - return parse - - -to_1tuple = _ntuple(1) -to_2tuple = _ntuple(2) -to_3tuple = _ntuple(3) -to_4tuple = _ntuple(4) -to_ntuple = _ntuple - - -def is_str(x): - """Whether the input is an string instance. - - Note: This method is deprecated since python 2 is no longer supported. - """ - return isinstance(x, str) - - -def import_modules_from_strings(imports, allow_failed_imports=False): - """Import modules from the given list of strings. - - Args: - imports (list | str | None): The given module names to be imported. - allow_failed_imports (bool): If True, the failed imports will return - None. Otherwise, an ImportError is raise. Default: False. - - Returns: - list[module] | module | None: The imported modules. - - Examples: - >>> osp, sys = import_modules_from_strings( - ... ['os.path', 'sys']) - >>> import os.path as osp_ - >>> import sys as sys_ - >>> assert osp == osp_ - >>> assert sys == sys_ - """ - if not imports: - return - single_import = False - if isinstance(imports, str): - single_import = True - imports = [imports] - if not isinstance(imports, list): - raise TypeError( - f'custom_imports must be a list but got type {type(imports)}') - imported = [] - for imp in imports: - if not isinstance(imp, str): - raise TypeError( - f'{imp} is of type {type(imp)} and cannot be imported.') - try: - imported_tmp = import_module(imp) - except ImportError: - if allow_failed_imports: - warnings.warn(f'{imp} failed to import and is ignored.', - UserWarning) - imported_tmp = None - else: - raise ImportError - imported.append(imported_tmp) - if single_import: - imported = imported[0] - return imported - - -def iter_cast(inputs, dst_type, return_type=None): - """Cast elements of an iterable object into some type. - - Args: - inputs (Iterable): The input object. - dst_type (type): Destination type. - return_type (type, optional): If specified, the output object will be - converted to this type, otherwise an iterator. - - Returns: - iterator or specified type: The converted object. - """ - if not isinstance(inputs, abc.Iterable): - raise TypeError('inputs must be an iterable object') - if not isinstance(dst_type, type): - raise TypeError('"dst_type" must be a valid type') - - out_iterable = map(dst_type, inputs) - - if return_type is None: - return out_iterable - else: - return return_type(out_iterable) - - -def list_cast(inputs, dst_type): - """Cast elements of an iterable object into a list of some type. - - A partial method of :func:`iter_cast`. - """ - return iter_cast(inputs, dst_type, return_type=list) - - -def tuple_cast(inputs, dst_type): - """Cast elements of an iterable object into a tuple of some type. - - A partial method of :func:`iter_cast`. - """ - return iter_cast(inputs, dst_type, return_type=tuple) - - -def is_seq_of(seq, expected_type, seq_type=None): - """Check whether it is a sequence of some type. - - Args: - seq (Sequence): The sequence to be checked. - expected_type (type): Expected type of sequence items. - seq_type (type, optional): Expected sequence type. - - Returns: - bool: Whether the sequence is valid. 
- """ - if seq_type is None: - exp_seq_type = abc.Sequence - else: - assert isinstance(seq_type, type) - exp_seq_type = seq_type - if not isinstance(seq, exp_seq_type): - return False - for item in seq: - if not isinstance(item, expected_type): - return False - return True - - -def is_list_of(seq, expected_type): - """Check whether it is a list of some type. - - A partial method of :func:`is_seq_of`. - """ - return is_seq_of(seq, expected_type, seq_type=list) - - -def is_tuple_of(seq, expected_type): - """Check whether it is a tuple of some type. - - A partial method of :func:`is_seq_of`. - """ - return is_seq_of(seq, expected_type, seq_type=tuple) - - -def slice_list(in_list, lens): - """Slice a list into several sub lists by a list of given length. - - Args: - in_list (list): The list to be sliced. - lens(int or list): The expected length of each out list. - - Returns: - list: A list of sliced list. - """ - if isinstance(lens, int): - assert len(in_list) % lens == 0 - lens = [lens] * int(len(in_list) / lens) - if not isinstance(lens, list): - raise TypeError('"indices" must be an integer or a list of integers') - elif sum(lens) != len(in_list): - raise ValueError('sum of lens and list length does not ' - f'match: {sum(lens)} != {len(in_list)}') - out_list = [] - idx = 0 - for i in range(len(lens)): - out_list.append(in_list[idx:idx + lens[i]]) - idx += lens[i] - return out_list - - -def concat_list(in_list): - """Concatenate a list of list into a single list. - - Args: - in_list (list): The list of list to be merged. - - Returns: - list: The concatenated flat list. - """ - return list(itertools.chain(*in_list)) - - -def check_prerequisites( - prerequisites, - checker, - msg_tmpl='Prerequisites "{}" are required in method "{}" but not ' - 'found, please install them first.'): # yapf: disable - """A decorator factory to check if prerequisites are satisfied. - - Args: - prerequisites (str of list[str]): Prerequisites to be checked. - checker (callable): The checker method that returns True if a - prerequisite is meet, False otherwise. - msg_tmpl (str): The message template with two variables. - - Returns: - decorator: A specific decorator. - """ - - def wrap(func): - - @functools.wraps(func) - def wrapped_func(*args, **kwargs): - requirements = [prerequisites] if isinstance( - prerequisites, str) else prerequisites - missing = [] - for item in requirements: - if not checker(item): - missing.append(item) - if missing: - print(msg_tmpl.format(', '.join(missing), func.__name__)) - raise RuntimeError('Prerequisites not meet.') - else: - return func(*args, **kwargs) - - return wrapped_func - - return wrap - - -def _check_py_package(package): - try: - import_module(package) - except ImportError: - return False - else: - return True - - -def _check_executable(cmd): - if subprocess.call(f'which {cmd}', shell=True) != 0: - return False - else: - return True - - -def requires_package(prerequisites): - """A decorator to check if some python packages are installed. - - Example: - >>> @requires_package('numpy') - >>> func(arg1, args): - >>> return numpy.zeros(1) - array([0.]) - >>> @requires_package(['numpy', 'non_package']) - >>> func(arg1, args): - >>> return numpy.zeros(1) - ImportError - """ - return check_prerequisites(prerequisites, checker=_check_py_package) - - -def requires_executable(prerequisites): - """A decorator to check if some executable files are installed. 
- - Example: - >>> @requires_executable('ffmpeg') - >>> func(arg1, args): - >>> print(1) - 1 - """ - return check_prerequisites(prerequisites, checker=_check_executable) - - -def deprecated_api_warning(name_dict, cls_name=None): - """A decorator to check if some arguments are deprecate and try to replace - deprecate src_arg_name to dst_arg_name. - - Args: - name_dict(dict): - key (str): Deprecate argument names. - val (str): Expected argument names. - - Returns: - func: New function. - """ - - def api_warning_wrapper(old_func): - - @functools.wraps(old_func) - def new_func(*args, **kwargs): - # get the arg spec of the decorated method - args_info = getfullargspec(old_func) - # get name of the function - func_name = old_func.__name__ - if cls_name is not None: - func_name = f'{cls_name}.{func_name}' - if args: - arg_names = args_info.args[:len(args)] - for src_arg_name, dst_arg_name in name_dict.items(): - if src_arg_name in arg_names: - warnings.warn( - f'"{src_arg_name}" is deprecated in ' - f'`{func_name}`, please use "{dst_arg_name}" ' - 'instead') - arg_names[arg_names.index(src_arg_name)] = dst_arg_name - if kwargs: - for src_arg_name, dst_arg_name in name_dict.items(): - if src_arg_name in kwargs: - - assert dst_arg_name not in kwargs, ( - f'The expected behavior is to replace ' - f'the deprecated key `{src_arg_name}` to ' - f'new key `{dst_arg_name}`, but got them ' - f'in the arguments at the same time, which ' - f'is confusing. `{src_arg_name} will be ' - f'deprecated in the future, please ' - f'use `{dst_arg_name}` instead.') - - warnings.warn( - f'"{src_arg_name}" is deprecated in ' - f'`{func_name}`, please use "{dst_arg_name}" ' - 'instead') - kwargs[dst_arg_name] = kwargs.pop(src_arg_name) - - # apply converted arguments to the decorated method - output = old_func(*args, **kwargs) - return output - - return new_func - - return api_warning_wrapper - - -def is_method_overridden(method, base_class, derived_class): - """Check if a method of base class is overridden in derived class. - - Args: - method (str): the method name to check. - base_class (type): the class of the base class. - derived_class (type | Any): the class or instance of the derived class. - """ - assert isinstance(base_class, type), \ - "base_class doesn't accept instance, Please pass class instead." - - if not isinstance(derived_class, type): - derived_class = derived_class.__class__ - - base_method = getattr(base_class, method) - derived_method = getattr(derived_class, method) - return derived_method != base_method - - -def has_method(obj: object, method: str) -> bool: - """Check whether the object has a method. - - Args: - method (str): The method name to check. - obj (object): The object to check. - - Returns: - bool: True if the object has the method else False. 
- """ - return hasattr(obj, method) and callable(getattr(obj, method)) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/backbone/swin.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/backbone/swin.py deleted file mode 100644 index 2380cde59570e5d5b8fb2536d0961f8e27a07fd4..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/backbone/swin.py +++ /dev/null @@ -1,771 +0,0 @@ -# -------------------------------------------------------- -# Swin Transformer -# Copyright (c) 2021 Microsoft -# Licensed under The MIT License [see LICENSE for details] -# Written by Ze Liu, Yutong Lin, Yixuan Wei -# -------------------------------------------------------- - -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/Mask2Former -# ------------------------------------------------------------------------------ - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ - -from annotator.oneformer.detectron2.modeling import BACKBONE_REGISTRY, Backbone, ShapeSpec - - -class Mlp(nn.Module): - """Multilayer perceptron.""" - - def __init__( - self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.0 - ): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class WindowAttention(nn.Module): - """Window based multi-head self attention (W-MSA) module with relative position bias. - It supports both of shifted and non-shifted window. - Args: - dim (int): Number of input channels. - window_size (tuple[int]): The height and width of the window. - num_heads (int): Number of attention heads. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set - attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 - proj_drop (float, optional): Dropout ratio of output. 
Default: 0.0 - """ - - def __init__( - self, - dim, - window_size, - num_heads, - qkv_bias=True, - qk_scale=None, - attn_drop=0.0, - proj_drop=0.0, - ): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim ** -0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads) - ) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - trunc_normal_(self.relative_position_bias_table, std=0.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - """Forward function. - Args: - x: input features with shape of (num_windows*B, N, C) - mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None - """ - B_, N, C = x.shape - qkv = ( - self.qkv(x) - .reshape(B_, N, 3, self.num_heads, C // self.num_heads) - .permute(2, 0, 3, 1, 4) - ) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = q @ k.transpose(-2, -1) - - relative_position_bias = self.relative_position_bias_table[ - self.relative_position_index.view(-1) - ].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1 - ) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute( - 2, 0, 1 - ).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class SwinTransformerBlock(nn.Module): - """Swin Transformer Block. - Args: - dim (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (int): Window size. - shift_size (int): Shift size for SW-MSA. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float, optional): Stochastic depth rate. 
Default: 0.0 - act_layer (nn.Module, optional): Activation layer. Default: nn.GELU - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__( - self, - dim, - num_heads, - window_size=7, - shift_size=0, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop=0.0, - attn_drop=0.0, - drop_path=0.0, - act_layer=nn.GELU, - norm_layer=nn.LayerNorm, - ): - super().__init__() - self.dim = dim - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, - window_size=to_2tuple(self.window_size), - num_heads=num_heads, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - attn_drop=attn_drop, - proj_drop=drop, - ) - - self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp( - in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop - ) - - self.H = None - self.W = None - - def forward(self, x, mask_matrix): - """Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - mask_matrix: Attention mask for cyclic shift. - """ - B, L, C = x.shape - H, W = self.H, self.W - assert L == H * W, "input feature has wrong size" - - shortcut = x - x = self.norm1(x) - x = x.view(B, H, W, C) - - # pad feature maps to multiples of window size - pad_l = pad_t = 0 - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) - _, Hp, Wp, _ = x.shape - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - attn_mask = mask_matrix - else: - shifted_x = x - attn_mask = None - - # partition windows - x_windows = window_partition( - shifted_x, self.window_size - ) # nW*B, window_size, window_size, C - x_windows = x_windows.view( - -1, self.window_size * self.window_size, C - ) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - - if pad_r > 0 or pad_b > 0: - x = x[:, :H, :W, :].contiguous() - - x = x.view(B, H * W, C) - - # FFN - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - - return x - - -class PatchMerging(nn.Module): - """Patch Merging Layer - Args: - dim (int): Number of input channels. - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, dim, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) - self.norm = norm_layer(4 * dim) - - def forward(self, x, H, W): - """Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. 
- """ - B, L, C = x.shape - assert L == H * W, "input feature has wrong size" - - x = x.view(B, H, W, C) - - # padding - pad_input = (H % 2 == 1) or (W % 2 == 1) - if pad_input: - x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2)) - - x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C - x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C - x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C - x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C - x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C - x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C - - x = self.norm(x) - x = self.reduction(x) - - return x - - -class BasicLayer(nn.Module): - """A basic Swin Transformer layer for one stage. - Args: - dim (int): Number of feature channels - depth (int): Depths of this stage. - num_heads (int): Number of attention head. - window_size (int): Local window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__( - self, - dim, - depth, - num_heads, - window_size=7, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop=0.0, - attn_drop=0.0, - drop_path=0.0, - norm_layer=nn.LayerNorm, - downsample=None, - use_checkpoint=False, - ): - super().__init__() - self.window_size = window_size - self.shift_size = window_size // 2 - self.depth = depth - self.use_checkpoint = use_checkpoint - - # build blocks - self.blocks = nn.ModuleList( - [ - SwinTransformerBlock( - dim=dim, - num_heads=num_heads, - window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop, - attn_drop=attn_drop, - drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, - norm_layer=norm_layer, - ) - for i in range(depth) - ] - ) - - # patch merging layer - if downsample is not None: - self.downsample = downsample(dim=dim, norm_layer=norm_layer) - else: - self.downsample = None - - def forward(self, x, H, W): - """Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. 
- """ - - # calculate attention mask for SW-MSA - Hp = int(np.ceil(H / self.window_size)) * self.window_size - Wp = int(np.ceil(W / self.window_size)) * self.window_size - img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1 - h_slices = ( - slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None), - ) - w_slices = ( - slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None), - ) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition( - img_mask, self.window_size - ) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill( - attn_mask == 0, float(0.0) - ) - - for blk in self.blocks: - blk.H, blk.W = H, W - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x, attn_mask) - else: - x = blk(x, attn_mask) - if self.downsample is not None: - x_down = self.downsample(x, H, W) - Wh, Ww = (H + 1) // 2, (W + 1) // 2 - return x, H, W, x_down, Wh, Ww - else: - return x, H, W, x, H, W - - -class PatchEmbed(nn.Module): - """Image to Patch Embedding - Args: - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. Default: None - """ - - def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): - super().__init__() - patch_size = to_2tuple(patch_size) - self.patch_size = patch_size - - self.in_chans = in_chans - self.embed_dim = embed_dim - - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - if norm_layer is not None: - self.norm = norm_layer(embed_dim) - else: - self.norm = None - - def forward(self, x): - """Forward function.""" - # padding - _, _, H, W = x.size() - if W % self.patch_size[1] != 0: - x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1])) - if H % self.patch_size[0] != 0: - x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0])) - - x = self.proj(x) # B C Wh Ww - if self.norm is not None: - Wh, Ww = x.size(2), x.size(3) - x = x.flatten(2).transpose(1, 2) - x = self.norm(x) - x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww) - - return x - - -class SwinTransformer(nn.Module): - """Swin Transformer backbone. - A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` - - https://arxiv.org/pdf/2103.14030 - Args: - pretrain_img_size (int): Input image size for training the pretrained model, - used in absolute postion embedding. Default 224. - patch_size (int | tuple(int)): Patch size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - depths (tuple[int]): Depths of each Swin Transformer stage. - num_heads (tuple[int]): Number of attention head of each stage. - window_size (int): Window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. - drop_rate (float): Dropout rate. 
- attn_drop_rate (float): Attention dropout rate. Default: 0. - drop_path_rate (float): Stochastic depth rate. Default: 0.2. - norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm. - ape (bool): If True, add absolute position embedding to the patch embedding. Default: False. - patch_norm (bool): If True, add normalization after patch embedding. Default: True. - out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__( - self, - pretrain_img_size=224, - patch_size=4, - in_chans=3, - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[3, 6, 12, 24], - window_size=7, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop_rate=0.0, - attn_drop_rate=0.0, - drop_path_rate=0.2, - norm_layer=nn.LayerNorm, - ape=False, - patch_norm=True, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - use_checkpoint=False, - ): - super().__init__() - - self.pretrain_img_size = pretrain_img_size - self.num_layers = len(depths) - self.embed_dim = embed_dim - self.ape = ape - self.patch_norm = patch_norm - self.out_indices = out_indices - self.frozen_stages = frozen_stages - - # split image into non-overlapping patches - self.patch_embed = PatchEmbed( - patch_size=patch_size, - in_chans=in_chans, - embed_dim=embed_dim, - norm_layer=norm_layer if self.patch_norm else None, - ) - - # absolute position embedding - if self.ape: - pretrain_img_size = to_2tuple(pretrain_img_size) - patch_size = to_2tuple(patch_size) - patches_resolution = [ - pretrain_img_size[0] // patch_size[0], - pretrain_img_size[1] // patch_size[1], - ] - - self.absolute_pos_embed = nn.Parameter( - torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1]) - ) - trunc_normal_(self.absolute_pos_embed, std=0.02) - - self.pos_drop = nn.Dropout(p=drop_rate) - - # stochastic depth - dpr = [ - x.item() for x in torch.linspace(0, drop_path_rate, sum(depths)) - ] # stochastic depth decay rule - - # build layers - self.layers = nn.ModuleList() - for i_layer in range(self.num_layers): - layer = BasicLayer( - dim=int(embed_dim * 2 ** i_layer), - depth=depths[i_layer], - num_heads=num_heads[i_layer], - window_size=window_size, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop_rate, - attn_drop=attn_drop_rate, - drop_path=dpr[sum(depths[:i_layer]) : sum(depths[: i_layer + 1])], - norm_layer=norm_layer, - downsample=PatchMerging if (i_layer < self.num_layers - 1) else None, - use_checkpoint=use_checkpoint, - ) - self.layers.append(layer) - - num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)] - self.num_features = num_features - - # add a norm layer for each output - for i_layer in out_indices: - layer = norm_layer(num_features[i_layer]) - layer_name = f"norm{i_layer}" - self.add_module(layer_name, layer) - - self._freeze_stages() - - def _freeze_stages(self): - if self.frozen_stages >= 0: - self.patch_embed.eval() - for param in self.patch_embed.parameters(): - param.requires_grad = False - - if self.frozen_stages >= 1 and self.ape: - self.absolute_pos_embed.requires_grad = False - - if self.frozen_stages >= 2: - self.pos_drop.eval() - for i in range(0, self.frozen_stages - 1): - m = self.layers[i] - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. 
- Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - - def _init_weights(m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=0.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - def forward(self, x): - """Forward function.""" - x = self.patch_embed(x) - - Wh, Ww = x.size(2), x.size(3) - if self.ape: - # interpolate the position embedding to the corresponding size - absolute_pos_embed = F.interpolate( - self.absolute_pos_embed, size=(Wh, Ww), mode="bicubic" - ) - x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C - else: - x = x.flatten(2).transpose(1, 2) - x = self.pos_drop(x) - - outs = {} - for i in range(self.num_layers): - layer = self.layers[i] - x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww) - - if i in self.out_indices: - norm_layer = getattr(self, f"norm{i}") - x_out = norm_layer(x_out) - - out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous() - outs["res{}".format(i + 2)] = out - - return outs - - def train(self, mode=True): - """Convert the model into training mode while keep layers freezed.""" - super(SwinTransformer, self).train(mode) - self._freeze_stages() - - -@BACKBONE_REGISTRY.register() -class D2SwinTransformer(SwinTransformer, Backbone): - def __init__(self, cfg, input_shape): - - pretrain_img_size = cfg.MODEL.SWIN.PRETRAIN_IMG_SIZE - patch_size = cfg.MODEL.SWIN.PATCH_SIZE - in_chans = 3 - embed_dim = cfg.MODEL.SWIN.EMBED_DIM - depths = cfg.MODEL.SWIN.DEPTHS - num_heads = cfg.MODEL.SWIN.NUM_HEADS - window_size = cfg.MODEL.SWIN.WINDOW_SIZE - mlp_ratio = cfg.MODEL.SWIN.MLP_RATIO - qkv_bias = cfg.MODEL.SWIN.QKV_BIAS - qk_scale = cfg.MODEL.SWIN.QK_SCALE - drop_rate = cfg.MODEL.SWIN.DROP_RATE - attn_drop_rate = cfg.MODEL.SWIN.ATTN_DROP_RATE - drop_path_rate = cfg.MODEL.SWIN.DROP_PATH_RATE - norm_layer = nn.LayerNorm - ape = cfg.MODEL.SWIN.APE - patch_norm = cfg.MODEL.SWIN.PATCH_NORM - use_checkpoint = cfg.MODEL.SWIN.USE_CHECKPOINT - - super().__init__( - pretrain_img_size, - patch_size, - in_chans, - embed_dim, - depths, - num_heads, - window_size, - mlp_ratio, - qkv_bias, - qk_scale, - drop_rate, - attn_drop_rate, - drop_path_rate, - norm_layer, - ape, - patch_norm, - use_checkpoint=use_checkpoint, - ) - - self._out_features = cfg.MODEL.SWIN.OUT_FEATURES - - self._out_feature_strides = { - "res2": 4, - "res3": 8, - "res4": 16, - "res5": 32, - } - self._out_feature_channels = { - "res2": self.num_features[0], - "res3": self.num_features[1], - "res4": self.num_features[2], - "res5": self.num_features[3], - } - - def forward(self, x): - """ - Args: - x: Tensor of shape (N,C,H,W). H, W must be a multiple of ``self.size_divisibility``. - Returns: - dict[str->Tensor]: names and the corresponding features - """ - assert ( - x.dim() == 4 - ), f"SwinTransformer takes an input of shape (N, C, H, W). Got {x.shape} instead!" 
- outputs = {} - y = super().forward(x) - for k in y.keys(): - if k in self._out_features: - outputs[k] = y[k] - return outputs - - def output_shape(self): - return { - name: ShapeSpec( - channels=self._out_feature_channels[name], stride=self._out_feature_strides[name] - ) - for name in self._out_features - } - - @property - def size_divisibility(self): - return 32 diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/decode_heads/ema_head.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/decode_heads/ema_head.py deleted file mode 100644 index 12267cb40569d2b5a4a2955a6dc2671377ff5e0a..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/decode_heads/ema_head.py +++ /dev/null @@ -1,168 +0,0 @@ -import math - -import torch -import torch.distributed as dist -import torch.nn as nn -import torch.nn.functional as F -from annotator.uniformer.mmcv.cnn import ConvModule - -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -def reduce_mean(tensor): - """Reduce mean when distributed training.""" - if not (dist.is_available() and dist.is_initialized()): - return tensor - tensor = tensor.clone() - dist.all_reduce(tensor.div_(dist.get_world_size()), op=dist.ReduceOp.SUM) - return tensor - - -class EMAModule(nn.Module): - """Expectation Maximization Attention Module used in EMANet. - - Args: - channels (int): Channels of the whole module. - num_bases (int): Number of bases. - num_stages (int): Number of the EM iterations. - """ - - def __init__(self, channels, num_bases, num_stages, momentum): - super(EMAModule, self).__init__() - assert num_stages >= 1, 'num_stages must be at least 1!' - self.num_bases = num_bases - self.num_stages = num_stages - self.momentum = momentum - - bases = torch.zeros(1, channels, self.num_bases) - bases.normal_(0, math.sqrt(2. / self.num_bases)) - # [1, channels, num_bases] - bases = F.normalize(bases, dim=1, p=2) - self.register_buffer('bases', bases) - - def forward(self, feats): - """Forward function.""" - batch_size, channels, height, width = feats.size() - # [batch_size, channels, height*width] - feats = feats.view(batch_size, channels, height * width) - # [batch_size, channels, num_bases] - bases = self.bases.repeat(batch_size, 1, 1) - - with torch.no_grad(): - for i in range(self.num_stages): - # [batch_size, height*width, num_bases] - attention = torch.einsum('bcn,bck->bnk', feats, bases) - attention = F.softmax(attention, dim=2) - # l1 norm - attention_normed = F.normalize(attention, dim=1, p=1) - # [batch_size, channels, num_bases] - bases = torch.einsum('bcn,bnk->bck', feats, attention_normed) - # l2 norm - bases = F.normalize(bases, dim=1, p=2) - - feats_recon = torch.einsum('bck,bnk->bcn', bases, attention) - feats_recon = feats_recon.view(batch_size, channels, height, width) - - if self.training: - bases = bases.mean(dim=0, keepdim=True) - bases = reduce_mean(bases) - # l2 norm - bases = F.normalize(bases, dim=1, p=2) - self.bases = (1 - - self.momentum) * self.bases + self.momentum * bases - - return feats_recon - - -@HEADS.register_module() -class EMAHead(BaseDecodeHead): - """Expectation Maximization Attention Networks for Semantic Segmentation. - - This head is the implementation of `EMANet - `_. - - Args: - ema_channels (int): EMA module channels - num_bases (int): Number of bases. - num_stages (int): Number of the EM iterations. 
- concat_input (bool): Whether concat the input and output of convs - before classification layer. Default: True - momentum (float): Momentum to update the base. Default: 0.1. - """ - - def __init__(self, - ema_channels, - num_bases, - num_stages, - concat_input=True, - momentum=0.1, - **kwargs): - super(EMAHead, self).__init__(**kwargs) - self.ema_channels = ema_channels - self.num_bases = num_bases - self.num_stages = num_stages - self.concat_input = concat_input - self.momentum = momentum - self.ema_module = EMAModule(self.ema_channels, self.num_bases, - self.num_stages, self.momentum) - - self.ema_in_conv = ConvModule( - self.in_channels, - self.ema_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - # project (0, inf) -> (-inf, inf) - self.ema_mid_conv = ConvModule( - self.ema_channels, - self.ema_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=None, - act_cfg=None) - for param in self.ema_mid_conv.parameters(): - param.requires_grad = False - - self.ema_out_conv = ConvModule( - self.ema_channels, - self.ema_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=None) - self.bottleneck = ConvModule( - self.ema_channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - if self.concat_input: - self.conv_cat = ConvModule( - self.in_channels + self.channels, - self.channels, - kernel_size=3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - feats = self.ema_in_conv(x) - identity = feats - feats = self.ema_mid_conv(feats) - recon = self.ema_module(feats) - recon = F.relu(recon, inplace=True) - recon = self.ema_out_conv(recon) - output = F.relu(identity + recon, inplace=True) - output = self.bottleneck(output) - if self.concat_input: - output = self.conv_cat(torch.cat([x, output], dim=1)) - output = self.cls_seg(output) - return output diff --git a/spaces/dakaiye/dky_xuexi/docs/waifu_plugin/live2d.js b/spaces/dakaiye/dky_xuexi/docs/waifu_plugin/live2d.js deleted file mode 100644 index 2cf559be672c438dfbd35db61eea12465ed0dffb..0000000000000000000000000000000000000000 --- a/spaces/dakaiye/dky_xuexi/docs/waifu_plugin/live2d.js +++ /dev/null @@ -1,4238 +0,0 @@ -! -function(t) { - function i(r) { - if (e[r]) return e[r].exports; - var o = e[r] = { - i: r, - l: !1, - exports: {} - }; - return t[r].call(o.exports, o, o.exports, i), o.l = !0, o.exports - } - var e = {}; - i.m = t, i.c = e, i.d = function(t, e, r) { - i.o(t, e) || Object.defineProperty(t, e, { - configurable: !1, - enumerable: !0, - get: r - }) - }, i.n = function(t) { - var e = t && t.__esModule ? - function() { - return t. 
- default - } : function() { - return t - }; - return i.d(e, "a", e), e - }, i.o = function(t, i) { - return Object.prototype.hasOwnProperty.call(t, i) - }, i.p = "", i(i.s = 4) -}([function(t, i, e) { - "use strict"; - - function r() { - this.live2DModel = null, this.modelMatrix = null, this.eyeBlink = null, this.physics = null, this.pose = null, this.debugMode = !1, this.initialized = !1, this.updating = !1, this.alpha = 1, this.accAlpha = 0, this.lipSync = !1, this.lipSyncValue = 0, this.accelX = 0, this.accelY = 0, this.accelZ = 0, this.dragX = 0, this.dragY = 0, this.startTimeMSec = null, this.mainMotionManager = new h, this.expressionManager = new h, this.motions = {}, this.expressions = {}, this.isTexLoaded = !1 - } - function o() { - AMotion.prototype.constructor.call(this), this.paramList = new Array - } - function n() { - this.id = "", this.type = -1, this.value = null - } - function s() { - this.nextBlinkTime = null, this.stateStartTime = null, this.blinkIntervalMsec = null, this.eyeState = g.STATE_FIRST, this.blinkIntervalMsec = 4e3, this.closingMotionMsec = 100, this.closedMotionMsec = 50, this.openingMotionMsec = 150, this.closeIfZero = !0, this.eyeID_L = "PARAM_EYE_L_OPEN", this.eyeID_R = "PARAM_EYE_R_OPEN" - } - function _() { - this.tr = new Float32Array(16), this.identity() - } - function a(t, i) { - _.prototype.constructor.call(this), this.width = t, this.height = i - } - function h() { - MotionQueueManager.prototype.constructor.call(this), this.currentPriority = null, this.reservePriority = null, this.super = MotionQueueManager.prototype - } - function l() { - this.physicsList = new Array, this.startTimeMSec = UtSystem.getUserTimeMSec() - } - function $() { - this.lastTime = 0, this.lastModel = null, this.partsGroups = new Array - } - function u(t) { - this.paramIndex = -1, this.partsIndex = -1, this.link = null, this.id = t - } - function p() { - this.EPSILON = .01, this.faceTargetX = 0, this.faceTargetY = 0, this.faceX = 0, this.faceY = 0, this.faceVX = 0, this.faceVY = 0, this.lastTimeSec = 0 - } - function f() { - _.prototype.constructor.call(this), this.screenLeft = null, this.screenRight = null, this.screenTop = null, this.screenBottom = null, this.maxLeft = null, this.maxRight = null, this.maxTop = null, this.maxBottom = null, this.max = Number.MAX_VALUE, this.min = 0 - } - function c() {} - var d = 0; - r.prototype.getModelMatrix = function() { - return this.modelMatrix - }, r.prototype.setAlpha = function(t) { - t > .999 && (t = 1), t < .001 && (t = 0), this.alpha = t - }, r.prototype.getAlpha = function() { - return this.alpha - }, r.prototype.isInitialized = function() { - return this.initialized - }, r.prototype.setInitialized = function(t) { - this.initialized = t - }, r.prototype.isUpdating = function() { - return this.updating - }, r.prototype.setUpdating = function(t) { - this.updating = t - }, r.prototype.getLive2DModel = function() { - return this.live2DModel - }, r.prototype.setLipSync = function(t) { - this.lipSync = t - }, r.prototype.setLipSyncValue = function(t) { - this.lipSyncValue = t - }, r.prototype.setAccel = function(t, i, e) { - this.accelX = t, this.accelY = i, this.accelZ = e - }, r.prototype.setDrag = function(t, i) { - this.dragX = t, this.dragY = i - }, r.prototype.getMainMotionManager = function() { - return this.mainMotionManager - }, r.prototype.getExpressionManager = function() { - return this.expressionManager - }, r.prototype.loadModelData = function(t, i) { - var e = c.getPlatformManager(); - this.debugMode && e.log("Load model 
: " + t); - var r = this; - e.loadLive2DModel(t, function(t) { - if (r.live2DModel = t, r.live2DModel.saveParam(), 0 != Live2D.getError()) return void console.error("Error : Failed to loadModelData()."); - r.modelMatrix = new a(r.live2DModel.getCanvasWidth(), r.live2DModel.getCanvasHeight()), r.modelMatrix.setWidth(2), r.modelMatrix.setCenterPosition(0, 0), i(r.live2DModel) - }) - }, r.prototype.loadTexture = function(t, i, e) { - d++; - var r = c.getPlatformManager(); - this.debugMode && r.log("Load Texture : " + i); - var o = this; - r.loadTexture(this.live2DModel, t, i, function() { - d--, 0 == d && (o.isTexLoaded = !0), "function" == typeof e && e() - }) - }, r.prototype.loadMotion = function(t, i, e) { - var r = c.getPlatformManager(); - this.debugMode && r.log("Load Motion : " + i); - var o = null, - n = this; - r.loadBytes(i, function(i) { - o = Live2DMotion.loadMotion(i), null != t && (n.motions[t] = o), e(o) - }) - }, r.prototype.loadExpression = function(t, i, e) { - var r = c.getPlatformManager(); - this.debugMode && r.log("Load Expression : " + i); - var n = this; - r.loadBytes(i, function(i) { - null != t && (n.expressions[t] = o.loadJson(i)), "function" == typeof e && e() - }) - }, r.prototype.loadPose = function(t, i) { - var e = c.getPlatformManager(); - this.debugMode && e.log("Load Pose : " + t); - var r = this; - try { - e.loadBytes(t, function(t) { - r.pose = $.load(t), "function" == typeof i && i() - }) - } catch (t) { - console.warn(t) - } - }, r.prototype.loadPhysics = function(t) { - var i = c.getPlatformManager(); - this.debugMode && i.log("Load Physics : " + t); - var e = this; - try { - i.loadBytes(t, function(t) { - e.physics = l.load(t) - }) - } catch (t) { - console.warn(t) - } - }, r.prototype.hitTestSimple = function(t, i, e) { - if (null === this.live2DModel) return !1; - var r = this.live2DModel.getDrawDataIndex(t); - if (r < 0) return !1; - for (var o = this.live2DModel.getTransformedPoints(r), n = this.live2DModel.getCanvasWidth(), s = 0, _ = this.live2DModel.getCanvasHeight(), a = 0, h = 0; h < o.length; h += 2) { - var l = o[h], - $ = o[h + 1]; - l < n && (n = l), l > s && (s = l), $ < _ && (_ = $), $ > a && (a = $) - } - var u = this.modelMatrix.invertTransformX(i), - p = this.modelMatrix.invertTransformY(e); - return n <= u && u <= s && _ <= p && p <= a - }, r.prototype.hitTestSimpleCustom = function(t, i, e, r) { - return null !== this.live2DModel && (e >= t[0] && e <= i[0] && r <= t[1] && r >= i[1]) - }, o.prototype = new AMotion, o.EXPRESSION_DEFAULT = "DEFAULT", o.TYPE_SET = 0, o.TYPE_ADD = 1, o.TYPE_MULT = 2, o.loadJson = function(t) { - var i = new o, - e = c.getPlatformManager(), - r = e.jsonParseFromBytes(t); - if (i.setFadeIn(parseInt(r.fade_in) > 0 ? parseInt(r.fade_in) : 1e3), i.setFadeOut(parseInt(r.fade_out) > 0 ? parseInt(r.fade_out) : 1e3), null == r.params) return i; - var s = r.params, - _ = s.length; - i.paramList = []; - for (var a = 0; a < _; a++) { - var h = s[a], - l = h.id.toString(), - $ = parseFloat(h.val), - u = o.TYPE_ADD, - p = null != h.calc ? h.calc.toString() : "add"; - if ((u = "add" === p ? o.TYPE_ADD : "mult" === p ? o.TYPE_MULT : "set" === p ? o.TYPE_SET : o.TYPE_ADD) == o.TYPE_ADD) { - var f = null == h.def ? 0 : parseFloat(h.def); - $ -= f - } else if (u == o.TYPE_MULT) { - var f = null == h.def ? 
1 : parseFloat(h.def); - 0 == f && (f = 1), $ /= f - } - var d = new n; - d.id = l, d.type = u, d.value = $, i.paramList.push(d) - } - return i - }, o.prototype.updateParamExe = function(t, i, e, r) { - for (var n = this.paramList.length - 1; n >= 0; --n) { - var s = this.paramList[n]; - s.type == o.TYPE_ADD ? t.addToParamFloat(s.id, s.value, e) : s.type == o.TYPE_MULT ? t.multParamFloat(s.id, s.value, e) : s.type == o.TYPE_SET && t.setParamFloat(s.id, s.value, e) - } - }, s.prototype.calcNextBlink = function() { - return UtSystem.getUserTimeMSec() + Math.random() * (2 * this.blinkIntervalMsec - 1) - }, s.prototype.setInterval = function(t) { - this.blinkIntervalMsec = t - }, s.prototype.setEyeMotion = function(t, i, e) { - this.closingMotionMsec = t, this.closedMotionMsec = i, this.openingMotionMsec = e - }, s.prototype.updateParam = function(t) { - var i, e = UtSystem.getUserTimeMSec(), - r = 0; - switch (this.eyeState) { - case g.STATE_CLOSING: - r = (e - this.stateStartTime) / this.closingMotionMsec, r >= 1 && (r = 1, this.eyeState = g.STATE_CLOSED, this.stateStartTime = e), i = 1 - r; - break; - case g.STATE_CLOSED: - r = (e - this.stateStartTime) / this.closedMotionMsec, r >= 1 && (this.eyeState = g.STATE_OPENING, this.stateStartTime = e), i = 0; - break; - case g.STATE_OPENING: - r = (e - this.stateStartTime) / this.openingMotionMsec, r >= 1 && (r = 1, this.eyeState = g.STATE_INTERVAL, this.nextBlinkTime = this.calcNextBlink()), i = r; - break; - case g.STATE_INTERVAL: - this.nextBlinkTime < e && (this.eyeState = g.STATE_CLOSING, this.stateStartTime = e), i = 1; - break; - case g.STATE_FIRST: - default: - this.eyeState = g.STATE_INTERVAL, this.nextBlinkTime = this.calcNextBlink(), i = 1 - } - this.closeIfZero || (i = -i), t.setParamFloat(this.eyeID_L, i), t.setParamFloat(this.eyeID_R, i) - }; - var g = function() {}; - g.STATE_FIRST = "STATE_FIRST", g.STATE_INTERVAL = "STATE_INTERVAL", g.STATE_CLOSING = "STATE_CLOSING", g.STATE_CLOSED = "STATE_CLOSED", g.STATE_OPENING = "STATE_OPENING", _.mul = function(t, i, e) { - var r, o, n, s = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]; - for (r = 0; r < 4; r++) for (o = 0; o < 4; o++) for (n = 0; n < 4; n++) s[r + 4 * o] += t[r + 4 * n] * i[n + 4 * o]; - for (r = 0; r < 16; r++) e[r] = s[r] - }, _.prototype.identity = function() { - for (var t = 0; t < 16; t++) this.tr[t] = t % 5 == 0 ? 
1 : 0 - }, _.prototype.getArray = function() { - return this.tr - }, _.prototype.getCopyMatrix = function() { - return new Float32Array(this.tr) - }, _.prototype.setMatrix = function(t) { - if (null != this.tr && this.tr.length == this.tr.length) for (var i = 0; i < 16; i++) this.tr[i] = t[i] - }, _.prototype.getScaleX = function() { - return this.tr[0] - }, _.prototype.getScaleY = function() { - return this.tr[5] - }, _.prototype.transformX = function(t) { - return this.tr[0] * t + this.tr[12] - }, _.prototype.transformY = function(t) { - return this.tr[5] * t + this.tr[13] - }, _.prototype.invertTransformX = function(t) { - return (t - this.tr[12]) / this.tr[0] - }, _.prototype.invertTransformY = function(t) { - return (t - this.tr[13]) / this.tr[5] - }, _.prototype.multTranslate = function(t, i) { - var e = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, t, i, 0, 1]; - _.mul(e, this.tr, this.tr) - }, _.prototype.translate = function(t, i) { - this.tr[12] = t, this.tr[13] = i - }, _.prototype.translateX = function(t) { - this.tr[12] = t - }, _.prototype.translateY = function(t) { - this.tr[13] = t - }, _.prototype.multScale = function(t, i) { - var e = [t, 0, 0, 0, 0, i, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]; - _.mul(e, this.tr, this.tr) - }, _.prototype.scale = function(t, i) { - this.tr[0] = t, this.tr[5] = i - }, a.prototype = new _, a.prototype.setPosition = function(t, i) { - this.translate(t, i) - }, a.prototype.setCenterPosition = function(t, i) { - var e = this.width * this.getScaleX(), - r = this.height * this.getScaleY(); - this.translate(t - e / 2, i - r / 2) - }, a.prototype.top = function(t) { - this.setY(t) - }, a.prototype.bottom = function(t) { - var i = this.height * this.getScaleY(); - this.translateY(t - i) - }, a.prototype.left = function(t) { - this.setX(t) - }, a.prototype.right = function(t) { - var i = this.width * this.getScaleX(); - this.translateX(t - i) - }, a.prototype.centerX = function(t) { - var i = this.width * this.getScaleX(); - this.translateX(t - i / 2) - }, a.prototype.centerY = function(t) { - var i = this.height * this.getScaleY(); - this.translateY(t - i / 2) - }, a.prototype.setX = function(t) { - this.translateX(t) - }, a.prototype.setY = function(t) { - this.translateY(t) - }, a.prototype.setHeight = function(t) { - var i = t / this.height, - e = -i; - this.scale(i, e) - }, a.prototype.setWidth = function(t) { - var i = t / this.width, - e = -i; - this.scale(i, e) - }, h.prototype = new MotionQueueManager, h.prototype.getCurrentPriority = function() { - return this.currentPriority - }, h.prototype.getReservePriority = function() { - return this.reservePriority - }, h.prototype.reserveMotion = function(t) { - return !(this.reservePriority >= t) && (!(this.currentPriority >= t) && (this.reservePriority = t, !0)) - }, h.prototype.setReservePriority = function(t) { - this.reservePriority = t - }, h.prototype.updateParam = function(t) { - var i = MotionQueueManager.prototype.updateParam.call(this, t); - return this.isFinished() && (this.currentPriority = 0), i - }, h.prototype.startMotionPrio = function(t, i) { - return i == this.reservePriority && (this.reservePriority = 0), this.currentPriority = i, this.startMotion(t, !1) - }, l.load = function(t) { - for (var i = new l, e = c.getPlatformManager(), r = e.jsonParseFromBytes(t), o = r.physics_hair, n = o.length, s = 0; s < n; s++) { - var _ = o[s], - a = new PhysicsHair, - h = _.setup, - $ = parseFloat(h.length), - u = parseFloat(h.regist), - p = parseFloat(h.mass); - a.setup($, u, p); - for (var f = _.src, d = 
f.length, g = 0; g < d; g++) { - var y = f[g], - m = y.id, - T = PhysicsHair.Src.SRC_TO_X, - P = y.ptype; - "x" === P ? T = PhysicsHair.Src.SRC_TO_X : "y" === P ? T = PhysicsHair.Src.SRC_TO_Y : "angle" === P ? T = PhysicsHair.Src.SRC_TO_G_ANGLE : UtDebug.error("live2d", "Invalid parameter:PhysicsHair.Src"); - var S = parseFloat(y.scale), - v = parseFloat(y.weight); - a.addSrcParam(T, m, S, v) - } - for (var L = _.targets, M = L.length, g = 0; g < M; g++) { - var E = L[g], - m = E.id, - T = PhysicsHair.Target.TARGET_FROM_ANGLE, - P = E.ptype; - "angle" === P ? T = PhysicsHair.Target.TARGET_FROM_ANGLE : "angle_v" === P ? T = PhysicsHair.Target.TARGET_FROM_ANGLE_V : UtDebug.error("live2d", "Invalid parameter:PhysicsHair.Target"); - var S = parseFloat(E.scale), - v = parseFloat(E.weight); - a.addTargetParam(T, m, S, v) - } - i.physicsList.push(a) - } - return i - }, l.prototype.updateParam = function(t) { - for (var i = UtSystem.getUserTimeMSec() - this.startTimeMSec, e = 0; e < this.physicsList.length; e++) this.physicsList[e].update(t, i) - }, $.load = function(t) { - for (var i = new $, e = c.getPlatformManager(), r = e.jsonParseFromBytes(t), o = r.parts_visible, n = o.length, s = 0; s < n; s++) { - for (var _ = o[s], a = _.group, h = a.length, l = new Array, p = 0; p < h; p++) { - var f = a[p], - d = new u(f.id); - if (l[p] = d, null != f.link) { - var g = f.link, - y = g.length; - d.link = new Array; - for (var m = 0; m < y; m++) { - var T = new u(g[m]); - d.link.push(T) - } - } - } - i.partsGroups.push(l) - } - return i - }, $.prototype.updateParam = function(t) { - if (null != t) { - t != this.lastModel && this.initParam(t), this.lastModel = t; - var i = UtSystem.getUserTimeMSec(), - e = 0 == this.lastTime ? 0 : (i - this.lastTime) / 1e3; - this.lastTime = i, e < 0 && (e = 0); - for (var r = 0; r < this.partsGroups.length; r++) this.normalizePartsOpacityGroup(t, this.partsGroups[r], e), this.copyOpacityOtherParts(t, this.partsGroups[r]) - } - }, $.prototype.initParam = function(t) { - if (null != t) for (var i = 0; i < this.partsGroups.length; i++) for (var e = this.partsGroups[i], r = 0; r < e.length; r++) { - e[r].initIndex(t); - var o = e[r].partsIndex, - n = e[r].paramIndex; - if (!(o < 0)) { - var s = 0 != t.getParamFloat(n); - if (t.setPartsOpacity(o, s ? 1 : 0), t.setParamFloat(n, s ? 1 : 0), null != e[r].link) for (var _ = 0; _ < e[r].link.length; _++) e[r].link[_].initIndex(t) - } - } - }, $.prototype.normalizePartsOpacityGroup = function(t, i, e) { - for (var r = -1, o = 1, n = 0; n < i.length; n++) { - var s = i[n].partsIndex, - _ = i[n].paramIndex; - if (!(s < 0) && 0 != t.getParamFloat(_)) { - if (r >= 0) break; - r = n, o = t.getPartsOpacity(s), o += e / .5, o > 1 && (o = 1) - } - } - r < 0 && (r = 0, o = 1); - for (var n = 0; n < i.length; n++) { - var s = i[n].partsIndex; - if (!(s < 0)) if (r == n) t.setPartsOpacity(s, o); - else { - var a, h = t.getPartsOpacity(s); - a = o < .5 ? 
-.5 * o / .5 + 1 : .5 * (1 - o) / .5; - var l = (1 - a) * (1 - o); - l > .15 && (a = 1 - .15 / (1 - o)), h > a && (h = a), t.setPartsOpacity(s, h) - } - } - }, $.prototype.copyOpacityOtherParts = function(t, i) { - for (var e = 0; e < i.length; e++) { - var r = i[e]; - if (null != r.link && !(r.partsIndex < 0)) for (var o = t.getPartsOpacity(r.partsIndex), n = 0; n < r.link.length; n++) { - var s = r.link[n]; - s.partsIndex < 0 || t.setPartsOpacity(s.partsIndex, o) - } - } - }, u.prototype.initIndex = function(t) { - this.paramIndex = t.getParamIndex("VISIBLE:" + this.id), this.partsIndex = t.getPartsDataIndex(PartsDataID.getID(this.id)), t.setParamFloat(this.paramIndex, 1) - }, p.FRAME_RATE = 30, p.prototype.setPoint = function(t, i) { - this.faceTargetX = t, this.faceTargetY = i - }, p.prototype.getX = function() { - return this.faceX - }, p.prototype.getY = function() { - return this.faceY - }, p.prototype.update = function() { - var t = 40 / 7.5 / p.FRAME_RATE; - if (0 == this.lastTimeSec) return void(this.lastTimeSec = UtSystem.getUserTimeMSec()); - var i = UtSystem.getUserTimeMSec(), - e = (i - this.lastTimeSec) * p.FRAME_RATE / 1e3; - this.lastTimeSec = i; - var r = .15 * p.FRAME_RATE, - o = e * t / r, - n = this.faceTargetX - this.faceX, - s = this.faceTargetY - this.faceY; - if (!(Math.abs(n) <= this.EPSILON && Math.abs(s) <= this.EPSILON)) { - var _ = Math.sqrt(n * n + s * s), - a = t * n / _, - h = t * s / _, - l = a - this.faceVX, - $ = h - this.faceVY, - u = Math.sqrt(l * l + $ * $); - (u < -o || u > o) && (l *= o / u, $ *= o / u, u = o), this.faceVX += l, this.faceVY += $; - var f = .5 * (Math.sqrt(o * o + 16 * o * _ - 8 * o * _) - o), - c = Math.sqrt(this.faceVX * this.faceVX + this.faceVY * this.faceVY); - c > f && (this.faceVX *= f / c, this.faceVY *= f / c), this.faceX += this.faceVX, this.faceY += this.faceVY - } - }, f.prototype = new _, f.prototype.getMaxScale = function() { - return this.max - }, f.prototype.getMinScale = function() { - return this.min - }, f.prototype.setMaxScale = function(t) { - this.max = t - }, f.prototype.setMinScale = function(t) { - this.min = t - }, f.prototype.isMaxScale = function() { - return this.getScaleX() == this.max - }, f.prototype.isMinScale = function() { - return this.getScaleX() == this.min - }, f.prototype.adjustTranslate = function(t, i) { - this.tr[0] * this.maxLeft + (this.tr[12] + t) > this.screenLeft && (t = this.screenLeft - this.tr[0] * this.maxLeft - this.tr[12]), this.tr[0] * this.maxRight + (this.tr[12] + t) < this.screenRight && (t = this.screenRight - this.tr[0] * this.maxRight - this.tr[12]), this.tr[5] * this.maxTop + (this.tr[13] + i) < this.screenTop && (i = this.screenTop - this.tr[5] * this.maxTop - this.tr[13]), this.tr[5] * this.maxBottom + (this.tr[13] + i) > this.screenBottom && (i = this.screenBottom - this.tr[5] * this.maxBottom - this.tr[13]); - var e = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, t, i, 0, 1]; - _.mul(e, this.tr, this.tr) - }, f.prototype.adjustScale = function(t, i, e) { - var r = e * this.tr[0]; - r < this.min ? 
this.tr[0] > 0 && (e = this.min / this.tr[0]) : r > this.max && this.tr[0] > 0 && (e = this.max / this.tr[0]); - var o = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, t, i, 0, 1], - n = [e, 0, 0, 0, 0, e, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1], - s = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, -t, -i, 0, 1]; - _.mul(s, this.tr, this.tr), _.mul(n, this.tr, this.tr), _.mul(o, this.tr, this.tr) - }, f.prototype.setScreenRect = function(t, i, e, r) { - this.screenLeft = t, this.screenRight = i, this.screenTop = r, this.screenBottom = e - }, f.prototype.setMaxScreenRect = function(t, i, e, r) { - this.maxLeft = t, this.maxRight = i, this.maxTop = r, this.maxBottom = e - }, f.prototype.getScreenLeft = function() { - return this.screenLeft - }, f.prototype.getScreenRight = function() { - return this.screenRight - }, f.prototype.getScreenBottom = function() { - return this.screenBottom - }, f.prototype.getScreenTop = function() { - return this.screenTop - }, f.prototype.getMaxLeft = function() { - return this.maxLeft - }, f.prototype.getMaxRight = function() { - return this.maxRight - }, f.prototype.getMaxBottom = function() { - return this.maxBottom - }, f.prototype.getMaxTop = function() { - return this.maxTop - }, c.platformManager = null, c.getPlatformManager = function() { - return c.platformManager - }, c.setPlatformManager = function(t) { - c.platformManager = t - }, t.exports = { - L2DTargetPoint: p, - Live2DFramework: c, - L2DViewMatrix: f, - L2DPose: $, - L2DPartsParam: u, - L2DPhysics: l, - L2DMotionManager: h, - L2DModelMatrix: a, - L2DMatrix44: _, - EYE_STATE: g, - L2DEyeBlink: s, - L2DExpressionParam: n, - L2DExpressionMotion: o, - L2DBaseModel: r - } -}, function(t, i, e) { - "use strict"; - var r = { - DEBUG_LOG: !1, - DEBUG_MOUSE_LOG: !1, - DEBUG_DRAW_HIT_AREA: !1, - DEBUG_DRAW_ALPHA_MODEL: !1, - VIEW_MAX_SCALE: 2, - VIEW_MIN_SCALE: .8, - VIEW_LOGICAL_LEFT: -1, - VIEW_LOGICAL_RIGHT: 1, - VIEW_LOGICAL_MAX_LEFT: -2, - VIEW_LOGICAL_MAX_RIGHT: 2, - VIEW_LOGICAL_MAX_BOTTOM: -2, - VIEW_LOGICAL_MAX_TOP: 2, - PRIORITY_NONE: 0, - PRIORITY_IDLE: 1, - PRIORITY_SLEEPY: 2, - PRIORITY_NORMAL: 3, - PRIORITY_FORCE: 4, - MOTION_GROUP_IDLE: "idle", - MOTION_GROUP_SLEEPY: "sleepy", - MOTION_GROUP_TAP_BODY: "tap_body", - MOTION_GROUP_FLICK_HEAD: "flick_head", - MOTION_GROUP_PINCH_IN: "pinch_in", - MOTION_GROUP_PINCH_OUT: "pinch_out", - MOTION_GROUP_SHAKE: "shake", - HIT_AREA_HEAD: "head", - HIT_AREA_BODY: "body" - }; - t.exports = r -}, function(t, i, e) { - "use strict"; - - function r(t) { - n = t - } - function o() { - return n - } - Object.defineProperty(i, "__esModule", { - value: !0 - }), i.setContext = r, i.getContext = o; - var n = void 0 -}, function(t, i, e) { - "use strict"; - - function r() {} - r.matrixStack = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1], r.depth = 0, r.currentMatrix = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1], r.tmp = new Array(16), r.reset = function() { - this.depth = 0 - }, r.loadIdentity = function() { - for (var t = 0; t < 16; t++) this.currentMatrix[t] = t % 5 == 0 ? 
1 : 0 - }, r.push = function() { - var t = (this.depth, 16 * (this.depth + 1)); - this.matrixStack.length < t + 16 && (this.matrixStack.length = t + 16); - for (var i = 0; i < 16; i++) this.matrixStack[t + i] = this.currentMatrix[i]; - this.depth++ - }, r.pop = function() { - --this.depth < 0 && (myError("Invalid matrix stack."), this.depth = 0); - for (var t = 16 * this.depth, i = 0; i < 16; i++) this.currentMatrix[i] = this.matrixStack[t + i] - }, r.getMatrix = function() { - return this.currentMatrix - }, r.multMatrix = function(t) { - var i, e, r; - for (i = 0; i < 16; i++) this.tmp[i] = 0; - for (i = 0; i < 4; i++) for (e = 0; e < 4; e++) for (r = 0; r < 4; r++) this.tmp[i + 4 * e] += this.currentMatrix[i + 4 * r] * t[r + 4 * e]; - for (i = 0; i < 16; i++) this.currentMatrix[i] = this.tmp[i] - }, t.exports = r -}, function(t, i, e) { - t.exports = e(5) -}, function(t, i, e) { - "use strict"; - - function r(t) { - return t && t.__esModule ? t : { - default: - t - } - } - function o(t) { - C = document.getElementById(t), C.addEventListener && (window.addEventListener("click", g), window.addEventListener("mousedown", g), window.addEventListener("mousemove", g), window.addEventListener("mouseup", g), document.addEventListener("mouseout", g), window.addEventListener("touchstart", y), window.addEventListener("touchend", y), window.addEventListener("touchmove", y)) - } - function n(t) { - var i = C.width, - e = C.height; - N = new M.L2DTargetPoint; - var r = e / i, - o = w. - default.VIEW_LOGICAL_LEFT, - n = w. - default.VIEW_LOGICAL_RIGHT, - _ = -r, - h = r; - if (window.Live2D.captureFrame = !1, B = new M.L2DViewMatrix, B.setScreenRect(o, n, _, h), B.setMaxScreenRect(w. - default.VIEW_LOGICAL_MAX_LEFT, w. - default.VIEW_LOGICAL_MAX_RIGHT, w. - default.VIEW_LOGICAL_MAX_BOTTOM, w. - default.VIEW_LOGICAL_MAX_TOP), B.setMaxScale(w. - default.VIEW_MAX_SCALE), B.setMinScale(w. - default.VIEW_MIN_SCALE), U = new M.L2DMatrix44, U.multScale(1, i / e), G = new M.L2DMatrix44, G.multTranslate(-i / 2, -e / 2), G.multScale(2 / i, -2 / i), F = v(), (0, D.setContext)(F), !F) return console.error("Failed to create WebGL context."), void(window.WebGLRenderingContext && console.error("Your browser don't support WebGL, check https://get.webgl.org/ for futher information.")); - window.Live2D.setGL(F), F.clearColor(0, 0, 0, 0), a(t), s() - } - function s() { - b || (b = !0, function t() { - _(); - var i = window.requestAnimationFrame || window.mozRequestAnimationFrame || window.webkitRequestAnimationFrame || window.msRequestAnimationFrame; - if (window.Live2D.captureFrame) { - window.Live2D.captureFrame = !1; - var e = document.createElement("a"); - document.body.appendChild(e), e.setAttribute("type", "hidden"), e.href = C.toDataURL(), e.download = window.Live2D.captureName || "live2d.png", e.click() - } - i(t, C) - }()) - } - function _() { - O. - default.reset(), O. - default.loadIdentity(), N.update(), R.setDrag(N.getX(), N.getY()), F.clear(F.COLOR_BUFFER_BIT), O. - default.multMatrix(U.getArray()), O. - default.multMatrix(B.getArray()), O. - default.push(); - for (var t = 0; t < R.numModels(); t++) { - var i = R.getModel(t); - if (null == i) return; - i.initialized && !i.updating && (i.update(), i.draw(F)) - } - O. 
- default.pop() - } - function a(t) { - R.reloadFlg = !0, R.count++, R.changeModel(F, t) - } - function h(t, i) { - return t.x * i.x + t.y * i.y - } - function l(t, i) { - var e = Math.sqrt(t * t + i * i); - return { - x: t / e, - y: i / e - } - } - function $(t, i, e) { - function r(t, i) { - return 180 * Math.acos(h({ - x: 0, - y: 1 - }, l(t, i))) / Math.PI - } - if (i.x < e.left + e.width && i.y < e.top + e.height && i.x > e.left && i.y > e.top) return i; - var o = t.x - i.x, - n = t.y - i.y, - s = r(o, n); - i.x < t.x && (s = 360 - s); - var _ = 360 - r(e.left - t.x, -1 * (e.top - t.y)), - a = 360 - r(e.left - t.x, -1 * (e.top + e.height - t.y)), - $ = r(e.left + e.width - t.x, -1 * (e.top - t.y)), - u = r(e.left + e.width - t.x, -1 * (e.top + e.height - t.y)), - p = n / o, - f = {}; - if (s < $) { - var c = e.top - t.y, - d = c / p; - f = { - y: t.y + c, - x: t.x + d - } - } else if (s < u) { - var g = e.left + e.width - t.x, - y = g * p; - f = { - y: t.y + y, - x: t.x + g - } - } else if (s < a) { - var m = e.top + e.height - t.y, - T = m / p; - f = { - y: t.y + m, - x: t.x + T - } - } else if (s < _) { - var P = t.x - e.left, - S = P * p; - f = { - y: t.y - S, - x: t.x - P - } - } else { - var v = e.top - t.y, - L = v / p; - f = { - y: t.y + v, - x: t.x + L - } - } - return f - } - function u(t) { - Y = !0; - var i = C.getBoundingClientRect(), - e = P(t.clientX - i.left), - r = S(t.clientY - i.top), - o = $({ - x: i.left + i.width / 2, - y: i.top + i.height * X - }, { - x: t.clientX, - y: t.clientY - }, i), - n = m(o.x - i.left), - s = T(o.y - i.top); - w. - default.DEBUG_MOUSE_LOG && console.log("onMouseMove device( x:" + t.clientX + " y:" + t.clientY + " ) view( x:" + n + " y:" + s + ")"), k = e, V = r, N.setPoint(n, s) - } - function p(t) { - Y = !0; - var i = C.getBoundingClientRect(), - e = P(t.clientX - i.left), - r = S(t.clientY - i.top), - o = $({ - x: i.left + i.width / 2, - y: i.top + i.height * X - }, { - x: t.clientX, - y: t.clientY - }, i), - n = m(o.x - i.left), - s = T(o.y - i.top); - w. - default.DEBUG_MOUSE_LOG && console.log("onMouseDown device( x:" + t.clientX + " y:" + t.clientY + " ) view( x:" + n + " y:" + s + ")"), k = e, V = r, R.tapEvent(n, s) - } - function f(t) { - var i = C.getBoundingClientRect(), - e = P(t.clientX - i.left), - r = S(t.clientY - i.top), - o = $({ - x: i.left + i.width / 2, - y: i.top + i.height * X - }, { - x: t.clientX, - y: t.clientY - }, i), - n = m(o.x - i.left), - s = T(o.y - i.top); - w. - default.DEBUG_MOUSE_LOG && console.log("onMouseMove device( x:" + t.clientX + " y:" + t.clientY + " ) view( x:" + n + " y:" + s + ")"), Y && (k = e, V = r, N.setPoint(n, s)) - } - function c() { - Y && (Y = !1), N.setPoint(0, 0) - } - function d() { - w. - default.DEBUG_LOG && console.log("Set Session Storage."), sessionStorage.setItem("Sleepy", "1") - } - function g(t) { - if ("mousewheel" == t.type); - else if ("mousedown" == t.type) p(t); - else if ("mousemove" == t.type) { - var i = sessionStorage.getItem("Sleepy"); - "1" === i && sessionStorage.setItem("Sleepy", "0"), u(t) - } else if ("mouseup" == t.type) { - if ("button" in t && 0 != t.button) return - } else if ("mouseout" == t.type) { - w. - default.DEBUG_LOG && console.log("Mouse out Window."), c(); - var e = sessionStorage.getItem("SleepyTimer"); - window.clearTimeout(e), e = window.setTimeout(d, 5e4), sessionStorage.setItem("SleepyTimer", e) - } - } - function y(t) { - var i = t.touches[0]; - "touchstart" == t.type ? 1 == t.touches.length && u(i) : "touchmove" == t.type ? 
f(i) : "touchend" == t.type && c() - } - function m(t) { - var i = G.transformX(t); - return B.invertTransformX(i) - } - function T(t) { - var i = G.transformY(t); - return B.invertTransformY(i) - } - function P(t) { - return G.transformX(t) - } - function S(t) { - return G.transformY(t) - } - function v() { - for (var t = ["webgl", "experimental-webgl", "webkit-3d", "moz-webgl"], i = 0; i < t.length; i++) try { - var e = C.getContext(t[i], { - premultipliedAlpha: !0 - }); - if (e) return e - } catch (t) {} - return null - } - function L(t, i, e) { - X = void 0 === e ? .5 : e, o(t), n(i) - } - e(6); - var M = e(0), - E = e(8), - A = r(E), - I = e(1), - w = r(I), - x = e(3), - O = r(x), - D = e(2), - R = (window.navigator.platform.toLowerCase(), new A. - default), - b = !1, - F = null, - C = null, - N = null, - B = null, - U = null, - G = null, - Y = !1, - k = 0, - V = 0, - X = .5; - window.loadlive2d = L -}, function(t, i, e) { - "use strict"; - (function(t) { - ! - function() { - function i() { - At || (this._$MT = null, this._$5S = null, this._$NP = 0, i._$42++, this._$5S = new Y(this)) - } - function e(t) { - if (!At) { - this.clipContextList = new Array, this.glcontext = t.gl, this.dp_webgl = t, this.curFrameNo = 0, this.firstError_clipInNotUpdate = !0, this.colorBuffer = 0, this.isInitGLFBFunc = !1, this.tmpBoundsOnModel = new S, at.glContext.length > at.frameBuffers.length && (this.curFrameNo = this.getMaskRenderTexture()), this.tmpModelToViewMatrix = new R, this.tmpMatrix2 = new R, this.tmpMatrixForMask = new R, this.tmpMatrixForDraw = new R, this.CHANNEL_COLORS = new Array; - var i = new A; - i = new A, i.r = 0, i.g = 0, i.b = 0, i.a = 1, this.CHANNEL_COLORS.push(i), i = new A, i.r = 1, i.g = 0, i.b = 0, i.a = 0, this.CHANNEL_COLORS.push(i), i = new A, i.r = 0, i.g = 1, i.b = 0, i.a = 0, this.CHANNEL_COLORS.push(i), i = new A, i.r = 0, i.g = 0, i.b = 1, i.a = 0, this.CHANNEL_COLORS.push(i); - for (var e = 0; e < this.CHANNEL_COLORS.length; e++) this.dp_webgl.setChannelFlagAsColor(e, this.CHANNEL_COLORS[e]) - } - } - function r(t, i, e) { - this.clipIDList = new Array, this.clipIDList = e, this.clippingMaskDrawIndexList = new Array; - for (var r = 0; r < e.length; r++) this.clippingMaskDrawIndexList.push(i.getDrawDataIndex(e[r])); - this.clippedDrawContextList = new Array, this.isUsing = !0, this.layoutChannelNo = 0, this.layoutBounds = new S, this.allClippedDrawRect = new S, this.matrixForMask = new Float32Array(16), this.matrixForDraw = new Float32Array(16), this.owner = t - } - function o(t, i) { - this._$gP = t, this.drawDataIndex = i - } - function n() { - At || (this.color = null) - } - function s() { - At || (this._$dP = null, this._$eo = null, this._$V0 = null, this._$dP = 1e3, this._$eo = 1e3, this._$V0 = 1, this._$a0()) - } - function _() {} - function a() { - this._$r = null, this._$0S = null - } - function h() { - At || (this.x = null, this.y = null, this.width = null, this.height = null) - } - function l(t) { - At || et.prototype.constructor.call(this, t) - } - function $() {} - function u(t) { - At || et.prototype.constructor.call(this, t) - } - function p() { - At || (this._$vo = null, this._$F2 = null, this._$ao = 400, this._$1S = 400, p._$42++) - } - function f() { - At || (this.p1 = new c, this.p2 = new c, this._$Fo = 0, this._$Db = 0, this._$L2 = 0, this._$M2 = 0, this._$ks = 0, this._$9b = 0, this._$iP = 0, this._$iT = 0, this._$lL = new Array, this._$qP = new Array, this.setup(.3, .5, .1)) - } - function c() { - this._$p = 1, this.x = 0, this.y = 0, this.vx = 0, 
this.vy = 0, this.ax = 0, this.ay = 0, this.fx = 0, this.fy = 0, this._$s0 = 0, this._$70 = 0, this._$7L = 0, this._$HL = 0 - } - function d(t, i, e) { - this._$wL = null, this.scale = null, this._$V0 = null, this._$wL = t, this.scale = i, this._$V0 = e - } - function g(t, i, e, r) { - d.prototype.constructor.call(this, i, e, r), this._$tL = null, this._$tL = t - } - function y(t, i, e) { - this._$wL = null, this.scale = null, this._$V0 = null, this._$wL = t, this.scale = i, this._$V0 = e - } - function T(t, i, e, r) { - y.prototype.constructor.call(this, i, e, r), this._$YP = null, this._$YP = t - } - function P() { - At || (this._$fL = 0, this._$gL = 0, this._$B0 = 1, this._$z0 = 1, this._$qT = 0, this.reflectX = !1, this.reflectY = !1) - } - function S() { - At || (this.x = null, this.y = null, this.width = null, this.height = null) - } - function v() {} - function L() { - At || (this.x = null, this.y = null) - } - function M() { - At || (this._$gP = null, this._$dr = null, this._$GS = null, this._$qb = null, this._$Lb = null, this._$mS = null, this.clipID = null, this.clipIDList = new Array) - } - function E() { - At || (this._$Eb = E._$ps, this._$lT = 1, this._$C0 = 1, this._$tT = 1, this._$WL = 1, this.culling = !1, this.matrix4x4 = new Float32Array(16), this.premultipliedAlpha = !1, this.anisotropy = 0, this.clippingProcess = E.CLIPPING_PROCESS_NONE, this.clipBufPre_clipContextMask = null, this.clipBufPre_clipContextDraw = null, this.CHANNEL_COLORS = new Array) - } - function A() { - At || (this.a = 1, this.r = 1, this.g = 1, this.b = 1, this.scale = 1, this._$ho = 1, this.blendMode = at.L2D_COLOR_BLEND_MODE_MULT) - } - function I() { - At || (this._$kP = null, this._$dr = null, this._$Ai = !0, this._$mS = null) - } - function w() {} - function x() { - At || (this._$VP = 0, this._$wL = null, this._$GP = null, this._$8o = x._$ds, this._$2r = -1, this._$O2 = 0, this._$ri = 0) - } - function O() {} - function D() { - At || (this._$Ob = null) - } - function R() { - this.m = new Float32Array(16), this.identity() - } - function b(t) { - At || et.prototype.constructor.call(this, t) - } - function F() { - At || (this._$7 = 1, this._$f = 0, this._$H = 0, this._$g = 1, this._$k = 0, this._$w = 0, this._$hi = STATE_IDENTITY, this._$Z = _$pS) - } - function C() { - At || (s.prototype.constructor.call(this), this.motions = new Array, this._$7r = null, this._$7r = C._$Co++, this._$D0 = 30, this._$yT = 0, this._$E = !0, this.loopFadeIn = !0, this._$AS = -1, _$a0()) - } - function N() { - this._$P = new Float32Array(100), this.size = 0 - } - function B() { - this._$4P = null, this._$I0 = null, this._$RP = null - } - function U() {} - function G() {} - function Y(t) { - At || (this._$QT = !0, this._$co = -1, this._$qo = 0, this._$pb = new Array(Y._$is), this._$_2 = new Float32Array(Y._$is), this._$vr = new Float32Array(Y._$is), this._$Rr = new Float32Array(Y._$is), this._$Or = new Float32Array(Y._$is), this._$fs = new Float32Array(Y._$is), this._$Js = new Array(Y._$is), this._$3S = new Array, this._$aS = new Array, this._$Bo = null, this._$F2 = new Array, this._$db = new Array, this._$8b = new Array, this._$Hr = new Array, this._$Ws = null, this._$Vs = null, this._$Er = null, this._$Es = new Int16Array(U._$Qb), this._$ZP = new Float32Array(2 * U._$1r), this._$Ri = t, this._$b0 = Y._$HP++, this.clipManager = null, this.dp_webgl = null) - } - function k() {} - function V() { - At || (this._$12 = null, this._$bb = null, this._$_L = null, this._$jo = null, this._$iL = null, this._$0L = null, this._$Br = 
null, this._$Dr = null, this._$Cb = null, this._$mr = null, this._$_L = wt.STATE_FIRST, this._$Br = 4e3, this._$Dr = 100, this._$Cb = 50, this._$mr = 150, this._$jo = !0, this._$iL = "PARAM_EYE_L_OPEN", this._$0L = "PARAM_EYE_R_OPEN") - } - function X() { - At || (E.prototype.constructor.call(this), this._$sb = new Int32Array(X._$As), this._$U2 = new Array, this.transform = null, this.gl = null, null == X._$NT && (X._$NT = X._$9r(256), X._$vS = X._$9r(256), X._$no = X._$vb(256))) - } - function z() { - At || (I.prototype.constructor.call(this), this._$GS = null, this._$Y0 = null) - } - function H(t) { - _t.prototype.constructor.call(this, t), this._$8r = I._$ur, this._$Yr = null, this._$Wr = null - } - function W() { - At || (M.prototype.constructor.call(this), this._$gP = null, this._$dr = null, this._$GS = null, this._$qb = null, this._$Lb = null, this._$mS = null) - } - function j() { - At || (this._$NL = null, this._$3S = null, this._$aS = null, j._$42++) - } - function q() { - At || (i.prototype.constructor.call(this), this._$zo = new X) - } - function J() { - At || (s.prototype.constructor.call(this), this.motions = new Array, this._$o2 = null, this._$7r = J._$Co++, this._$D0 = 30, this._$yT = 0, this._$E = !1, this.loopFadeIn = !0, this._$rr = -1, this._$eP = 0) - } - function Q(t, i) { - return String.fromCharCode(t.getUint8(i)) - } - function N() { - this._$P = new Float32Array(100), this.size = 0 - } - function B() { - this._$4P = null, this._$I0 = null, this._$RP = null - } - function Z() { - At || (I.prototype.constructor.call(this), this._$o = 0, this._$A = 0, this._$GS = null, this._$Eo = null) - } - function K(t) { - _t.prototype.constructor.call(this, t), this._$8r = I._$ur, this._$Cr = null, this._$hr = null - } - function tt() { - At || (this.visible = !0, this._$g0 = !1, this._$NL = null, this._$3S = null, this._$aS = null, tt._$42++) - } - function it(t) { - this._$VS = null, this._$e0 = null, this._$e0 = t - } - function et(t) { - At || (this.id = t) - } - function rt() {} - function ot() { - At || (this._$4S = null) - } - function nt(t, i) { - this.canvas = t, this.context = i, this.viewport = new Array(0, 0, t.width, t.height), this._$6r = 1, this._$xP = 0, this._$3r = 1, this._$uP = 0, this._$Qo = -1, this.cacheImages = {} - } - function st() { - At || (this._$TT = null, this._$LT = null, this._$FS = null, this._$wL = null) - } - function _t(t) { - At || (this._$e0 = null, this._$IP = null, this._$JS = !1, this._$AT = !0, this._$e0 = t, this.totalScale = 1, this._$7s = 1, this.totalOpacity = 1) - } - function at() {} - function ht() {} - function lt(t) { - At || (this._$ib = t) - } - function $t() { - At || (W.prototype.constructor.call(this), this._$LP = -1, this._$d0 = 0, this._$Yo = 0, this._$JP = null, this._$5P = null, this._$BP = null, this._$Eo = null, this._$Qi = null, this._$6s = $t._$ms, this.culling = !0, this.gl_cacheImage = null, this.instanceNo = $t._$42++) - } - function ut(t) { - Mt.prototype.constructor.call(this, t), this._$8r = W._$ur, this._$Cr = null, this._$hr = null - } - function pt() { - At || (this.x = null, this.y = null) - } - function ft(t) { - At || (i.prototype.constructor.call(this), this.drawParamWebGL = new mt(t), this.drawParamWebGL.setGL(at.getGL(t))) - } - function ct() { - At || (this.motions = null, this._$eb = !1, this.motions = new Array) - } - function dt() { - this._$w0 = null, this._$AT = !0, this._$9L = !1, this._$z2 = -1, this._$bs = -1, this._$Do = -1, this._$sr = null, this._$sr = dt._$Gs++ - } - function gt() { - 
this.m = new Array(1, 0, 0, 0, 1, 0, 0, 0, 1) - } - function yt(t) { - At || et.prototype.constructor.call(this, t) - } - function mt(t) { - At || (E.prototype.constructor.call(this), this.textures = new Array, this.transform = null, this.gl = null, this.glno = t, this.firstDraw = !0, this.anisotropyExt = null, this.maxAnisotropy = 0, this._$As = 32, this._$Gr = !1, this._$NT = null, this._$vS = null, this._$no = null, this.vertShader = null, this.fragShader = null, this.vertShaderOff = null, this.fragShaderOff = null) - } - function Tt(t, i, e) { - return null == i && (i = t.createBuffer()), t.bindBuffer(t.ARRAY_BUFFER, i), t.bufferData(t.ARRAY_BUFFER, e, t.DYNAMIC_DRAW), i - } - function Pt(t, i, e) { - return null == i && (i = t.createBuffer()), t.bindBuffer(t.ELEMENT_ARRAY_BUFFER, i), t.bufferData(t.ELEMENT_ARRAY_BUFFER, e, t.DYNAMIC_DRAW), i - } - function St(t) { - At || (this._$P = new Int8Array(8), this._$R0 = new DataView(this._$P.buffer), this._$3i = new Int8Array(1e3), this._$hL = 0, this._$v0 = 0, this._$S2 = 0, this._$Ko = new Array, this._$T = t, this._$F = 0) - } - function vt() {} - function Lt() {} - function Mt(t) { - At || (this._$e0 = null, this._$IP = null, this._$Us = null, this._$7s = null, this._$IS = [!1], this._$VS = null, this._$AT = !0, this.baseOpacity = 1, this.clipBufPre_clipContext = null, this._$e0 = t) - } - function Et() {} - var At = !0; - i._$0s = 1, i._$4s = 2, i._$42 = 0, i._$62 = function(t, e) { - try { - if (e instanceof ArrayBuffer && (e = new DataView(e)), !(e instanceof DataView)) throw new lt("_$SS#loadModel(b) / b _$x be DataView or ArrayBuffer"); - var r, o = new St(e), - n = o._$ST(), - s = o._$ST(), - a = o._$ST(); - if (109 != n || 111 != s || 99 != a) throw new lt("_$gi _$C _$li , _$Q0 _$P0."); - if (r = o._$ST(), o._$gr(r), r > G._$T7) { - t._$NP |= i._$4s; - throw new lt("_$gi _$C _$li , _$n0 _$_ version _$li ( SDK : " + G._$T7 + " < _$f0 : " + r + " )@_$SS#loadModel()\n") - } - var h = o._$nP(); - if (r >= G._$s7) { - var l = o._$9T(), - $ = o._$9T(); - if (-30584 != l || -30584 != $) throw t._$NP |= i._$0s, new lt("_$gi _$C _$li , _$0 _$6 _$Ui.") - } - t._$KS(h); - var u = t.getModelContext(); - u.setDrawParam(t.getDrawParam()), u.init() - } catch (t) { - _._$Rb(t) - } - }, i.prototype._$KS = function(t) { - this._$MT = t - }, i.prototype.getModelImpl = function() { - return null == this._$MT && (this._$MT = new p, this._$MT._$zP()), this._$MT - }, i.prototype.getCanvasWidth = function() { - return null == this._$MT ? 0 : this._$MT.getCanvasWidth() - }, i.prototype.getCanvasHeight = function() { - return null == this._$MT ? 
0 : this._$MT.getCanvasHeight() - }, i.prototype.getParamFloat = function(t) { - return "number" != typeof t && (t = this._$5S.getParamIndex(u.getID(t))), this._$5S.getParamFloat(t) - }, i.prototype.setParamFloat = function(t, i, e) { - "number" != typeof t && (t = this._$5S.getParamIndex(u.getID(t))), arguments.length < 3 && (e = 1), this._$5S.setParamFloat(t, this._$5S.getParamFloat(t) * (1 - e) + i * e) - }, i.prototype.addToParamFloat = function(t, i, e) { - "number" != typeof t && (t = this._$5S.getParamIndex(u.getID(t))), arguments.length < 3 && (e = 1), this._$5S.setParamFloat(t, this._$5S.getParamFloat(t) + i * e) - }, i.prototype.multParamFloat = function(t, i, e) { - "number" != typeof t && (t = this._$5S.getParamIndex(u.getID(t))), arguments.length < 3 && (e = 1), this._$5S.setParamFloat(t, this._$5S.getParamFloat(t) * (1 + (i - 1) * e)) - }, i.prototype.getParamIndex = function(t) { - return this._$5S.getParamIndex(u.getID(t)) - }, i.prototype.loadParam = function() { - this._$5S.loadParam() - }, i.prototype.saveParam = function() { - this._$5S.saveParam() - }, i.prototype.init = function() { - this._$5S.init() - }, i.prototype.update = function() { - this._$5S.update() - }, i.prototype._$Rs = function() { - return _._$li("_$60 _$PT _$Rs()"), -1 - }, i.prototype._$Ds = function(t) { - _._$li("_$60 _$PT _$SS#_$Ds() \n") - }, i.prototype._$K2 = function() {}, i.prototype.draw = function() {}, i.prototype.getModelContext = function() { - return this._$5S - }, i.prototype._$s2 = function() { - return this._$NP - }, i.prototype._$P7 = function(t, i, e, r) { - var o = -1, - n = 0, - s = this; - if (0 != e) if (1 == t.length) { - var _ = t[0], - a = 0 != s.getParamFloat(_), - h = i[0], - l = s.getPartsOpacity(h), - $ = e / r; - a ? (l += $) > 1 && (l = 1) : (l -= $) < 0 && (l = 0), s.setPartsOpacity(h, l) - } else { - for (var u = 0; u < t.length; u++) { - var _ = t[u], - p = 0 != s.getParamFloat(_); - if (p) { - if (o >= 0) break; - o = u; - var h = i[u]; - n = s.getPartsOpacity(h), n += e / r, n > 1 && (n = 1) - } - } - o < 0 && (console.log("No _$wi _$q0/ _$U default[%s]", t[0]), o = 0, n = 1, s.loadParam(), s.setParamFloat(t[o], n), s.saveParam()); - for (var u = 0; u < t.length; u++) { - var h = i[u]; - if (o == u) s.setPartsOpacity(h, n); - else { - var f, c = s.getPartsOpacity(h); - f = n < .5 ? -.5 * n / .5 + 1 : .5 * (1 - n) / .5; - var d = (1 - f) * (1 - n); - d > .15 && (f = 1 - .15 / (1 - n)), c > f && (c = f), s.setPartsOpacity(h, c) - } - } - } else for (var u = 0; u < t.length; u++) { - var _ = t[u], - h = i[u], - p = 0 != s.getParamFloat(_); - s.setPartsOpacity(h, p ? 1 : 0) - } - }, i.prototype.setPartsOpacity = function(t, i) { - "number" != typeof t && (t = this._$5S.getPartsDataIndex(l.getID(t))), this._$5S.setPartsOpacity(t, i) - }, i.prototype.getPartsDataIndex = function(t) { - return t instanceof l || (t = l.getID(t)), this._$5S.getPartsDataIndex(t) - }, i.prototype.getPartsOpacity = function(t) { - return "number" != typeof t && (t = this._$5S.getPartsDataIndex(l.getID(t))), t < 0 ? 0 : this._$5S.getPartsOpacity(t) - }, i.prototype.getDrawParam = function() {}, i.prototype.getDrawDataIndex = function(t) { - return this._$5S.getDrawDataIndex(b.getID(t)) - }, i.prototype.getDrawData = function(t) { - return this._$5S.getDrawData(t) - }, i.prototype.getTransformedPoints = function(t) { - var i = this._$5S._$C2(t); - return i instanceof ut ? 
i.getTransformedPoints() : null - }, i.prototype.getIndexArray = function(t) { - if (t < 0 || t >= this._$5S._$aS.length) return null; - var i = this._$5S._$aS[t]; - return null != i && i.getType() == W._$wb && i instanceof $t ? i.getIndexArray() : null - }, e.CHANNEL_COUNT = 4, e.RENDER_TEXTURE_USE_MIPMAP = !1, e.NOT_USED_FRAME = -100, e.prototype._$L7 = function() { - if (this.tmpModelToViewMatrix && (this.tmpModelToViewMatrix = null), this.tmpMatrix2 && (this.tmpMatrix2 = null), this.tmpMatrixForMask && (this.tmpMatrixForMask = null), this.tmpMatrixForDraw && (this.tmpMatrixForDraw = null), this.tmpBoundsOnModel && (this.tmpBoundsOnModel = null), this.CHANNEL_COLORS) { - for (var t = this.CHANNEL_COLORS.length - 1; t >= 0; --t) this.CHANNEL_COLORS.splice(t, 1); - this.CHANNEL_COLORS = [] - } - this.releaseShader() - }, e.prototype.releaseShader = function() { - for (var t = at.frameBuffers.length, i = 0; i < t; i++) this.gl.deleteFramebuffer(at.frameBuffers[i].framebuffer); - at.frameBuffers = [], at.glContext = [] - }, e.prototype.init = function(t, i, e) { - for (var o = 0; o < i.length; o++) { - var n = i[o].getClipIDList(); - if (null != n) { - var s = this.findSameClip(n); - null == s && (s = new r(this, t, n), this.clipContextList.push(s)); - var _ = i[o].getDrawDataID(), - a = t.getDrawDataIndex(_); - s.addClippedDrawData(_, a); - e[o].clipBufPre_clipContext = s - } - } - }, e.prototype.getMaskRenderTexture = function() { - var t = null; - return t = this.dp_webgl.createFramebuffer(), at.frameBuffers[this.dp_webgl.glno] = t, this.dp_webgl.glno - }, e.prototype.setupClip = function(t, i) { - for (var e = 0, r = 0; r < this.clipContextList.length; r++) { - var o = this.clipContextList[r]; - this.calcClippedDrawTotalBounds(t, o), o.isUsing && e++ - } - if (e > 0) { - var n = i.gl.getParameter(i.gl.FRAMEBUFFER_BINDING), - s = new Array(4); - s[0] = 0, s[1] = 0, s[2] = i.gl.canvas.width, s[3] = i.gl.canvas.height, i.gl.viewport(0, 0, at.clippingMaskBufferSize, at.clippingMaskBufferSize), this.setupLayoutBounds(e), i.gl.bindFramebuffer(i.gl.FRAMEBUFFER, at.frameBuffers[this.curFrameNo].framebuffer), i.gl.clearColor(0, 0, 0, 0), i.gl.clear(i.gl.COLOR_BUFFER_BIT); - for (var r = 0; r < this.clipContextList.length; r++) { - var o = this.clipContextList[r], - _ = o.allClippedDrawRect, - a = (o.layoutChannelNo, o.layoutBounds); - this.tmpBoundsOnModel._$jL(_), this.tmpBoundsOnModel.expand(.05 * _.width, .05 * _.height); - var h = a.width / this.tmpBoundsOnModel.width, - l = a.height / this.tmpBoundsOnModel.height; - this.tmpMatrix2.identity(), this.tmpMatrix2.translate(-1, -1, 0), this.tmpMatrix2.scale(2, 2, 1), this.tmpMatrix2.translate(a.x, a.y, 0), this.tmpMatrix2.scale(h, l, 1), this.tmpMatrix2.translate(-this.tmpBoundsOnModel.x, -this.tmpBoundsOnModel.y, 0), this.tmpMatrixForMask.setMatrix(this.tmpMatrix2.m), this.tmpMatrix2.identity(), this.tmpMatrix2.translate(a.x, a.y, 0), this.tmpMatrix2.scale(h, l, 1), this.tmpMatrix2.translate(-this.tmpBoundsOnModel.x, -this.tmpBoundsOnModel.y, 0), this.tmpMatrixForDraw.setMatrix(this.tmpMatrix2.m); - for (var $ = this.tmpMatrixForMask.getArray(), u = 0; u < 16; u++) o.matrixForMask[u] = $[u]; - for (var p = this.tmpMatrixForDraw.getArray(), u = 0; u < 16; u++) o.matrixForDraw[u] = p[u]; - for (var f = o.clippingMaskDrawIndexList.length, c = 0; c < f; c++) { - var d = o.clippingMaskDrawIndexList[c], - g = t.getDrawData(d), - y = t._$C2(d); - i.setClipBufPre_clipContextForMask(o), g.draw(i, t, y) - } - } - i.gl.bindFramebuffer(i.gl.FRAMEBUFFER, 
n), i.setClipBufPre_clipContextForMask(null), i.gl.viewport(s[0], s[1], s[2], s[3]) - } - }, e.prototype.getColorBuffer = function() { - return this.colorBuffer - }, e.prototype.findSameClip = function(t) { - for (var i = 0; i < this.clipContextList.length; i++) { - var e = this.clipContextList[i], - r = e.clipIDList.length; - if (r == t.length) { - for (var o = 0, n = 0; n < r; n++) for (var s = e.clipIDList[n], _ = 0; _ < r; _++) if (t[_] == s) { - o++; - break - } - if (o == r) return e - } - } - return null - }, e.prototype.calcClippedDrawTotalBounds = function(t, i) { - for (var e = t._$Ri.getModelImpl().getCanvasWidth(), r = t._$Ri.getModelImpl().getCanvasHeight(), o = e > r ? e : r, n = o, s = o, _ = 0, a = 0, h = i.clippedDrawContextList.length, l = 0; l < h; l++) { - var $ = i.clippedDrawContextList[l], - u = $.drawDataIndex, - p = t._$C2(u); - if (p._$yo()) { - for (var f = p.getTransformedPoints(), c = f.length, d = [], g = [], y = 0, m = U._$i2; m < c; m += U._$No) d[y] = f[m], g[y] = f[m + 1], y++; - var T = Math.min.apply(null, d), - P = Math.min.apply(null, g), - S = Math.max.apply(null, d), - v = Math.max.apply(null, g); - T < n && (n = T), P < s && (s = P), S > _ && (_ = S), v > a && (a = v) - } - } - if (n == o) i.allClippedDrawRect.x = 0, i.allClippedDrawRect.y = 0, i.allClippedDrawRect.width = 0, i.allClippedDrawRect.height = 0, i.isUsing = !1; - else { - var L = _ - n, - M = a - s; - i.allClippedDrawRect.x = n, i.allClippedDrawRect.y = s, i.allClippedDrawRect.width = L, i.allClippedDrawRect.height = M, i.isUsing = !0 - } - }, e.prototype.setupLayoutBounds = function(t) { - var i = t / e.CHANNEL_COUNT, - r = t % e.CHANNEL_COUNT; - i = ~~i, r = ~~r; - for (var o = 0, n = 0; n < e.CHANNEL_COUNT; n++) { - var s = i + (n < r ? 1 : 0); - if (0 == s); - else if (1 == s) { - var a = this.clipContextList[o++]; - a.layoutChannelNo = n, a.layoutBounds.x = 0, a.layoutBounds.y = 0, a.layoutBounds.width = 1, a.layoutBounds.height = 1 - } else if (2 == s) for (var h = 0; h < s; h++) { - var l = h % 2, - $ = 0; - l = ~~l; - var a = this.clipContextList[o++]; - a.layoutChannelNo = n, a.layoutBounds.x = .5 * l, a.layoutBounds.y = 0, a.layoutBounds.width = .5, a.layoutBounds.height = 1 - } else if (s <= 4) for (var h = 0; h < s; h++) { - var l = h % 2, - $ = h / 2; - l = ~~l, $ = ~~$; - var a = this.clipContextList[o++]; - a.layoutChannelNo = n, a.layoutBounds.x = .5 * l, a.layoutBounds.y = .5 * $, a.layoutBounds.width = .5, a.layoutBounds.height = .5 - } else if (s <= 9) for (var h = 0; h < s; h++) { - var l = h % 3, - $ = h / 3; - l = ~~l, $ = ~~$; - var a = this.clipContextList[o++]; - a.layoutChannelNo = n, a.layoutBounds.x = l / 3, a.layoutBounds.y = $ / 3, a.layoutBounds.width = 1 / 3, a.layoutBounds.height = 1 / 3 - } else _._$li("_$6 _$0P mask count : %d", s) - } - }, r.prototype.addClippedDrawData = function(t, i) { - var e = new o(t, i); - this.clippedDrawContextList.push(e) - }, s._$JT = function(t, i, e) { - var r = t / i, - o = e / i, - n = o, - s = 1 - (1 - o) * (1 - o), - _ = 1 - (1 - n) * (1 - n), - a = 1 / 3 * (1 - o) * s + (n * (2 / 3) + 1 / 3 * (1 - n)) * (1 - s), - h = (n + 2 / 3 * (1 - n)) * _ + (o * (1 / 3) + 2 / 3 * (1 - o)) * (1 - _), - l = 1 - 3 * h + 3 * a - 0, - $ = 3 * h - 6 * a + 0, - u = 3 * a - 0; - if (r <= 0) return 0; - if (r >= 1) return 1; - var p = r, - f = p * p; - return l * (p * f) + $ * f + u * p + 0 - }, s.prototype._$a0 = function() {}, s.prototype.setFadeIn = function(t) { - this._$dP = t - }, s.prototype.setFadeOut = function(t) { - this._$eo 
= t - }, s.prototype._$pT = function(t) { - this._$V0 = t - }, s.prototype.getFadeOut = function() { - return this._$eo - }, s.prototype._$4T = function() { - return this._$eo - }, s.prototype._$mT = function() { - return this._$V0 - }, s.prototype.getDurationMSec = function() { - return -1 - }, s.prototype.getLoopDurationMSec = function() { - return -1 - }, s.prototype.updateParam = function(t, i) { - if (i._$AT && !i._$9L) { - var e = w.getUserTimeMSec(); - if (i._$z2 < 0) { - i._$z2 = e, i._$bs = e; - var r = this.getDurationMSec(); - i._$Do < 0 && (i._$Do = r <= 0 ? -1 : i._$z2 + r) - } - var o = this._$V0; - o = o * (0 == this._$dP ? 1 : ht._$r2((e - i._$bs) / this._$dP)) * (0 == this._$eo || i._$Do < 0 ? 1 : ht._$r2((i._$Do - e) / this._$eo)), 0 <= o && o <= 1 || console.log("### assert!! ### "), this.updateParamExe(t, e, o, i), i._$Do > 0 && i._$Do < e && (i._$9L = !0) - } - }, s.prototype.updateParamExe = function(t, i, e, r) {}, _._$8s = 0, _._$fT = new Object, _.start = function(t) { - var i = _._$fT[t]; - null == i && (i = new a, i._$r = t, _._$fT[t] = i), i._$0S = w.getSystemTimeMSec() - }, _.dump = function(t) { - var i = _._$fT[t]; - if (null != i) { - var e = w.getSystemTimeMSec(), - r = e - i._$0S; - return console.log(t + " : " + r + "ms"), r - } - return -1 - }, _.end = function(t) { - var i = _._$fT[t]; - if (null != i) { - return w.getSystemTimeMSec() - i._$0S - } - return -1 - }, _._$li = function(t, i) { - console.log("_$li : " + t + "\n", i) - }, _._$Ji = function(t, i) { - console.log(t, i) - }, _._$dL = function(t, i) { - console.log(t, i), console.log("\n") - }, _._$KL = function(t, i) { - for (var e = 0; e < i; e++) e % 16 == 0 && e > 0 ? console.log("\n") : e % 8 == 0 && e > 0 && console.log(" "), console.log("%02X ", 255 & t[e]); - console.log("\n") - }, _._$nr = function(t, i, e) { - console.log("%s\n", t); - for (var r = i.length, o = 0; o < r; ++o) console.log("%5d", i[o]), console.log("%s\n", e), console.log(","); - console.log("\n") - }, _._$Rb = function(t) { - console.log("dump exception : " + t), console.log("stack :: " + t.stack) - }, h.prototype._$8P = function() { - return .5 * (this.x + this.x + this.width) - }, h.prototype._$6P = function() { - return .5 * (this.y + this.y + this.height) - }, h.prototype._$EL = function() { - return this.x + this.width - }, h.prototype._$5T = function() { - return this.y + this.height - }, h.prototype._$jL = function(t, i, e, r) { - this.x = t, this.y = i, this.width = e, this.height = r - }, h.prototype._$jL = function(t) { - this.x = t.x, this.y = t.y, this.width = t.width, this.height = t.height - }, l.prototype = new et, l._$tP = new Object, l._$27 = function() { - l._$tP.clear() - }, l.getID = function(t) { - var i = l._$tP[t]; - return null == i && (i = new l(t), l._$tP[t] = i), i - }, l.prototype._$3s = function() { - return new l - }, u.prototype = new et, u._$tP = new Object, u._$27 = function() { - u._$tP.clear() - }, u.getID = function(t) { - var i = u._$tP[t]; - return null == i && (i = new u(t), u._$tP[t] = i), i - }, u.prototype._$3s = function() { - return new u - }, p._$42 = 0, p.prototype._$zP = function() { - null == this._$vo && (this._$vo = new ot), null == this._$F2 && (this._$F2 = new Array) - }, p.prototype.getCanvasWidth = function() { - return this._$ao - }, p.prototype.getCanvasHeight = function() { - return this._$1S - }, p.prototype._$F0 = function(t) { - this._$vo = t._$nP(), this._$F2 = t._$nP(), this._$ao = t._$6L(), this._$1S = t._$6L() - }, p.prototype._$6S = function(t) { - 
this._$F2.push(t) - }, p.prototype._$Xr = function() { - return this._$F2 - }, p.prototype._$E2 = function() { - return this._$vo - }, f.prototype.setup = function(t, i, e) { - this._$ks = this._$Yb(), this.p2._$xT(), 3 == arguments.length && (this._$Fo = t, this._$L2 = i, this.p1._$p = e, this.p2._$p = e, this.p2.y = t, this.setup()) - }, f.prototype.getPhysicsPoint1 = function() { - return this.p1 - }, f.prototype.getPhysicsPoint2 = function() { - return this.p2 - }, f.prototype._$qr = function() { - return this._$Db - }, f.prototype._$pr = function(t) { - this._$Db = t - }, f.prototype._$5r = function() { - return this._$M2 - }, f.prototype._$Cs = function() { - return this._$9b - }, f.prototype._$Yb = function() { - return -180 * Math.atan2(this.p1.x - this.p2.x, -(this.p1.y - this.p2.y)) / Math.PI - }, f.prototype.addSrcParam = function(t, i, e, r) { - var o = new g(t, i, e, r); - this._$lL.push(o) - }, f.prototype.addTargetParam = function(t, i, e, r) { - var o = new T(t, i, e, r); - this._$qP.push(o) - }, f.prototype.update = function(t, i) { - if (0 == this._$iP) return this._$iP = this._$iT = i, void(this._$Fo = Math.sqrt((this.p1.x - this.p2.x) * (this.p1.x - this.p2.x) + (this.p1.y - this.p2.y) * (this.p1.y - this.p2.y))); - var e = (i - this._$iT) / 1e3; - if (0 != e) { - for (var r = this._$lL.length - 1; r >= 0; --r) { - this._$lL[r]._$oP(t, this) - } - this._$oo(t, e), this._$M2 = this._$Yb(), this._$9b = (this._$M2 - this._$ks) / e, this._$ks = this._$M2 - } - for (var r = this._$qP.length - 1; r >= 0; --r) { - this._$qP[r]._$YS(t, this) - } - this._$iT = i - }, f.prototype._$oo = function(t, i) { - i < .033 && (i = .033); - var e = 1 / i; - this.p1.vx = (this.p1.x - this.p1._$s0) * e, this.p1.vy = (this.p1.y - this.p1._$70) * e, this.p1.ax = (this.p1.vx - this.p1._$7L) * e, this.p1.ay = (this.p1.vy - this.p1._$HL) * e, this.p1.fx = this.p1.ax * this.p1._$p, this.p1.fy = this.p1.ay * this.p1._$p, this.p1._$xT(); - var r, o, n = -Math.atan2(this.p1.y - this.p2.y, this.p1.x - this.p2.x), - s = Math.cos(n), - _ = Math.sin(n), - a = 9.8 * this.p2._$p, - h = this._$Db * Lt._$bS, - l = a * Math.cos(n - h); - r = l * _, o = l * s; - var $ = -this.p1.fx * _ * _, - u = -this.p1.fy * _ * s, - p = -this.p2.vx * this._$L2, - f = -this.p2.vy * this._$L2; - this.p2.fx = r + $ + p, this.p2.fy = o + u + f, this.p2.ax = this.p2.fx / this.p2._$p, this.p2.ay = this.p2.fy / this.p2._$p, this.p2.vx += this.p2.ax * i, this.p2.vy += this.p2.ay * i, this.p2.x += this.p2.vx * i, this.p2.y += this.p2.vy * i; - var c = Math.sqrt((this.p1.x - this.p2.x) * (this.p1.x - this.p2.x) + (this.p1.y - this.p2.y) * (this.p1.y - this.p2.y)); - this.p2.x = this.p1.x + this._$Fo * (this.p2.x - this.p1.x) / c, this.p2.y = this.p1.y + this._$Fo * (this.p2.y - this.p1.y) / c, this.p2.vx = (this.p2.x - this.p2._$s0) * e, this.p2.vy = (this.p2.y - this.p2._$70) * e, this.p2._$xT() - }, c.prototype._$xT = function() { - this._$s0 = this.x, this._$70 = this.y, this._$7L = this.vx, this._$HL = this.vy - }, d.prototype._$oP = function(t, i) {}, g.prototype = new d, g.prototype._$oP = function(t, i) { - var e = this.scale * t.getParamFloat(this._$wL), - r = i.getPhysicsPoint1(); - switch (this._$tL) { - default: - case f.Src.SRC_TO_X: - r.x = r.x + (e - r.x) * this._$V0; - break; - case f.Src.SRC_TO_Y: - r.y = r.y + (e - r.y) * this._$V0; - break; - case f.Src.SRC_TO_G_ANGLE: - var o = i._$qr(); - o += (e - o) * this._$V0, i._$pr(o) - } - }, y.prototype._$YS = function(t, i) {}, T.prototype = new y, T.prototype._$YS = 
function(t, i) { - switch (this._$YP) { - default: - case f.Target.TARGET_FROM_ANGLE: - t.setParamFloat(this._$wL, this.scale * i._$5r(), this._$V0); - break; - case f.Target.TARGET_FROM_ANGLE_V: - t.setParamFloat(this._$wL, this.scale * i._$Cs(), this._$V0) - } - }, f.Src = function() {}, f.Src.SRC_TO_X = "SRC_TO_X", f.Src.SRC_TO_Y = "SRC_TO_Y", f.Src.SRC_TO_G_ANGLE = "SRC_TO_G_ANGLE", f.Target = function() {}, f.Target.TARGET_FROM_ANGLE = "TARGET_FROM_ANGLE", f.Target.TARGET_FROM_ANGLE_V = "TARGET_FROM_ANGLE_V", P.prototype.init = function(t) { - this._$fL = t._$fL, this._$gL = t._$gL, this._$B0 = t._$B0, this._$z0 = t._$z0, this._$qT = t._$qT, this.reflectX = t.reflectX, this.reflectY = t.reflectY - }, P.prototype._$F0 = function(t) { - this._$fL = t._$_T(), this._$gL = t._$_T(), this._$B0 = t._$_T(), this._$z0 = t._$_T(), this._$qT = t._$_T(), t.getFormatVersion() >= G.LIVE2D_FORMAT_VERSION_V2_10_SDK2 && (this.reflectX = t._$po(), this.reflectY = t._$po()) - }, P.prototype._$e = function() {}; - var It = function() {}; - It._$ni = function(t, i, e, r, o, n, s, _, a) { - var h = s * n - _ * o; - if (0 == h) return null; - var l, $ = ((t - e) * n - (i - r) * o) / h; - return l = 0 != o ? (t - e - $ * s) / o : (i - r - $ * _) / n, isNaN(l) && (l = (t - e - $ * s) / o, isNaN(l) && (l = (i - r - $ * _) / n), isNaN(l) && (console.log("a is NaN @UtVector#_$ni() "), console.log("v1x : " + o), console.log("v1x != 0 ? " + (0 != o)))), null == a ? new Array(l, $) : (a[0] = l, a[1] = $, a) - }, S.prototype._$8P = function() { - return this.x + .5 * this.width - }, S.prototype._$6P = function() { - return this.y + .5 * this.height - }, S.prototype._$EL = function() { - return this.x + this.width - }, S.prototype._$5T = function() { - return this.y + this.height - }, S.prototype._$jL = function(t, i, e, r) { - this.x = t, this.y = i, this.width = e, this.height = r - }, S.prototype._$jL = function(t) { - this.x = t.x, this.y = t.y, this.width = t.width, this.height = t.height - }, S.prototype.contains = function(t, i) { - return this.x <= this.x && this.y <= this.y && this.x <= this.x + this.width && this.y <= this.y + this.height - }, S.prototype.expand = function(t, i) { - this.x -= t, this.y -= i, this.width += 2 * t, this.height += 2 * i - }, v._$Z2 = function(t, i, e, r) { - var o = i._$Q2(t, e), - n = t._$vs(), - s = t._$Tr(); - if (i._$zr(n, s, o), o <= 0) return r[n[0]]; - if (1 == o) { - var _ = r[n[0]], - a = r[n[1]], - h = s[0]; - return _ + (a - _) * h | 0 - } - if (2 == o) { - var _ = r[n[0]], - a = r[n[1]], - l = r[n[2]], - $ = r[n[3]], - h = s[0], - u = s[1], - p = _ + (a - _) * h | 0, - f = l + ($ - l) * h | 0; - return p + (f - p) * u | 0 - } - if (3 == o) { - var c = r[n[0]], - d = r[n[1]], - g = r[n[2]], - y = r[n[3]], - m = r[n[4]], - T = r[n[5]], - P = r[n[6]], - S = r[n[7]], - h = s[0], - u = s[1], - v = s[2], - _ = c + (d - c) * h | 0, - a = g + (y - g) * h | 0, - l = m + (T - m) * h | 0, - $ = P + (S - P) * h | 0, - p = _ + (a - _) * u | 0, - f = l + ($ - l) * u | 0; - return p + (f - p) * v | 0 - } - if (4 == o) { - var L = r[n[0]], - M = r[n[1]], - E = r[n[2]], - A = r[n[3]], - I = r[n[4]], - w = r[n[5]], - x = r[n[6]], - O = r[n[7]], - D = r[n[8]], - R = r[n[9]], - b = r[n[10]], - F = r[n[11]], - C = r[n[12]], - N = r[n[13]], - B = r[n[14]], - U = r[n[15]], - h = s[0], - u = s[1], - v = s[2], - G = s[3], - c = L + (M - L) * h | 0, - d = E + (A - E) * h | 0, - g = I + (w - I) * h | 0, - y = x + (O - x) * h | 0, - m = D + (R - D) * h | 0, - T = b + (F - b) * h | 0, - P = C + 
(N - C) * h | 0, - S = B + (U - B) * h | 0, - _ = c + (d - c) * u | 0, - a = g + (y - g) * u | 0, - l = m + (T - m) * u | 0, - $ = P + (S - P) * u | 0, - p = _ + (a - _) * v | 0, - f = l + ($ - l) * v | 0; - return p + (f - p) * G | 0 - } - for (var Y = 1 << o, k = new Float32Array(Y), V = 0; V < Y; V++) { - for (var X = V, z = 1, H = 0; H < o; H++) z *= X % 2 == 0 ? 1 - s[H] : s[H], X /= 2; - k[V] = z - } - for (var W = new Float32Array(Y), j = 0; j < Y; j++) W[j] = r[n[j]]; - for (var q = 0, j = 0; j < Y; j++) q += k[j] * W[j]; - return q + .5 | 0 - }, v._$br = function(t, i, e, r) { - var o = i._$Q2(t, e), - n = t._$vs(), - s = t._$Tr(); - if (i._$zr(n, s, o), o <= 0) return r[n[0]]; - if (1 == o) { - var _ = r[n[0]], - a = r[n[1]], - h = s[0]; - return _ + (a - _) * h - } - if (2 == o) { - var _ = r[n[0]], - a = r[n[1]], - l = r[n[2]], - $ = r[n[3]], - h = s[0], - u = s[1]; - return (1 - u) * (_ + (a - _) * h) + u * (l + ($ - l) * h) - } - if (3 == o) { - var p = r[n[0]], - f = r[n[1]], - c = r[n[2]], - d = r[n[3]], - g = r[n[4]], - y = r[n[5]], - m = r[n[6]], - T = r[n[7]], - h = s[0], - u = s[1], - P = s[2]; - return (1 - P) * ((1 - u) * (p + (f - p) * h) + u * (c + (d - c) * h)) + P * ((1 - u) * (g + (y - g) * h) + u * (m + (T - m) * h)) - } - if (4 == o) { - var S = r[n[0]], - v = r[n[1]], - L = r[n[2]], - M = r[n[3]], - E = r[n[4]], - A = r[n[5]], - I = r[n[6]], - w = r[n[7]], - x = r[n[8]], - O = r[n[9]], - D = r[n[10]], - R = r[n[11]], - b = r[n[12]], - F = r[n[13]], - C = r[n[14]], - N = r[n[15]], - h = s[0], - u = s[1], - P = s[2], - B = s[3]; - return (1 - B) * ((1 - P) * ((1 - u) * (S + (v - S) * h) + u * (L + (M - L) * h)) + P * ((1 - u) * (E + (A - E) * h) + u * (I + (w - I) * h))) + B * ((1 - P) * ((1 - u) * (x + (O - x) * h) + u * (D + (R - D) * h)) + P * ((1 - u) * (b + (F - b) * h) + u * (C + (N - C) * h))) - } - for (var U = 1 << o, G = new Float32Array(U), Y = 0; Y < U; Y++) { - for (var k = Y, V = 1, X = 0; X < o; X++) V *= k % 2 == 0 ? 
1 - s[X] : s[X], k /= 2; - G[Y] = V - } - for (var z = new Float32Array(U), H = 0; H < U; H++) z[H] = r[n[H]]; - for (var W = 0, H = 0; H < U; H++) W += G[H] * z[H]; - return W - }, v._$Vr = function(t, i, e, r, o, n, s, _) { - var a = i._$Q2(t, e), - h = t._$vs(), - l = t._$Tr(); - i._$zr(h, l, a); - var $ = 2 * r, - u = s; - if (a <= 0) { - var p = h[0], - f = o[p]; - if (2 == _ && 0 == s) w._$jT(f, 0, n, 0, $); - else for (var c = 0; c < $;) n[u] = f[c++], n[u + 1] = f[c++], u += _ - } else if (1 == a) for (var f = o[h[0]], d = o[h[1]], g = l[0], y = 1 - g, c = 0; c < $;) n[u] = f[c] * y + d[c] * g, ++c, n[u + 1] = f[c] * y + d[c] * g, ++c, u += _; - else if (2 == a) for (var f = o[h[0]], d = o[h[1]], m = o[h[2]], T = o[h[3]], g = l[0], P = l[1], y = 1 - g, S = 1 - P, v = S * y, L = S * g, M = P * y, E = P * g, c = 0; c < $;) n[u] = v * f[c] + L * d[c] + M * m[c] + E * T[c], ++c, n[u + 1] = v * f[c] + L * d[c] + M * m[c] + E * T[c], ++c, u += _; - else if (3 == a) for (var A = o[h[0]], I = o[h[1]], x = o[h[2]], O = o[h[3]], D = o[h[4]], R = o[h[5]], b = o[h[6]], F = o[h[7]], g = l[0], P = l[1], C = l[2], y = 1 - g, S = 1 - P, N = 1 - C, B = N * S * y, U = N * S * g, G = N * P * y, Y = N * P * g, k = C * S * y, V = C * S * g, X = C * P * y, z = C * P * g, c = 0; c < $;) n[u] = B * A[c] + U * I[c] + G * x[c] + Y * O[c] + k * D[c] + V * R[c] + X * b[c] + z * F[c], ++c, n[u + 1] = B * A[c] + U * I[c] + G * x[c] + Y * O[c] + k * D[c] + V * R[c] + X * b[c] + z * F[c], ++c, u += _; - else if (4 == a) for (var H = o[h[0]], W = o[h[1]], j = o[h[2]], q = o[h[3]], J = o[h[4]], Q = o[h[5]], Z = o[h[6]], K = o[h[7]], tt = o[h[8]], it = o[h[9]], et = o[h[10]], rt = o[h[11]], ot = o[h[12]], nt = o[h[13]], st = o[h[14]], _t = o[h[15]], g = l[0], P = l[1], C = l[2], at = l[3], y = 1 - g, S = 1 - P, N = 1 - C, ht = 1 - at, lt = ht * N * S * y, $t = ht * N * S * g, ut = ht * N * P * y, pt = ht * N * P * g, ft = ht * C * S * y, ct = ht * C * S * g, dt = ht * C * P * y, gt = ht * C * P * g, yt = at * N * S * y, mt = at * N * S * g, Tt = at * N * P * y, Pt = at * N * P * g, St = at * C * S * y, vt = at * C * S * g, Lt = at * C * P * y, Mt = at * C * P * g, c = 0; c < $;) n[u] = lt * H[c] + $t * W[c] + ut * j[c] + pt * q[c] + ft * J[c] + ct * Q[c] + dt * Z[c] + gt * K[c] + yt * tt[c] + mt * it[c] + Tt * et[c] + Pt * rt[c] + St * ot[c] + vt * nt[c] + Lt * st[c] + Mt * _t[c], ++c, n[u + 1] = lt * H[c] + $t * W[c] + ut * j[c] + pt * q[c] + ft * J[c] + ct * Q[c] + dt * Z[c] + gt * K[c] + yt * tt[c] + mt * it[c] + Tt * et[c] + Pt * rt[c] + St * ot[c] + vt * nt[c] + Lt * st[c] + Mt * _t[c], ++c, u += _; - else { - for (var Et = 1 << a, At = new Float32Array(Et), It = 0; It < Et; It++) { - for (var wt = It, xt = 1, Ot = 0; Ot < a; Ot++) xt *= wt % 2 == 0 ? 1 - l[Ot] : l[Ot], wt /= 2; - At[It] = xt - } - for (var Dt = new Float32Array(Et), Rt = 0; Rt < Et; Rt++) Dt[Rt] = o[h[Rt]]; - for (var c = 0; c < $;) { - for (var bt = 0, Ft = 0, Ct = c + 1, Rt = 0; Rt < Et; Rt++) bt += At[Rt] * Dt[Rt][c], Ft += At[Rt] * Dt[Rt][Ct]; - c += 2, n[u] = bt, n[u + 1] = Ft, u += _ - } - } - }, L.prototype._$HT = function(t, i) { - this.x = t, this.y = i - }, L.prototype._$HT = function(t) { - this.x = t.x, this.y = t.y - }, M._$ur = -2, M._$ES = 500, M._$wb = 2, M._$8S = 3, M._$52 = M._$ES, M._$R2 = M._$ES, M._$or = function() { - return M._$52 - }, M._$Pr = function() { - return M._$R2 - }, M.prototype.convertClipIDForV2_11 = function(t) { - var i = []; - return null == t ? null : 0 == t.length ? null : /,/.test(t) ? 
i = t.id.split(",") : (i.push(t.id), i) - }, M.prototype._$F0 = function(t) { - this._$gP = t._$nP(), this._$dr = t._$nP(), this._$GS = t._$nP(), this._$qb = t._$6L(), this._$Lb = t._$cS(), this._$mS = t._$Tb(), t.getFormatVersion() >= G._$T7 ? (this.clipID = t._$nP(), this.clipIDList = this.convertClipIDForV2_11(this.clipID)) : this.clipIDList = [], this._$MS(this._$Lb) - }, M.prototype.getClipIDList = function() { - return this.clipIDList - }, M.prototype.init = function(t) {}, M.prototype._$Nr = function(t, i) { - if (i._$IS[0] = !1, i._$Us = v._$Z2(t, this._$GS, i._$IS, this._$Lb), at._$Zs); - else if (i._$IS[0]) return; - i._$7s = v._$br(t, this._$GS, i._$IS, this._$mS) - }, M.prototype._$2b = function(t, i) {}, M.prototype.getDrawDataID = function() { - return this._$gP - }, M.prototype._$j2 = function(t) { - this._$gP = t - }, M.prototype.getOpacity = function(t, i) { - return i._$7s - }, M.prototype._$zS = function(t, i) { - return i._$Us - }, M.prototype._$MS = function(t) { - for (var i = t.length - 1; i >= 0; --i) { - var e = t[i]; - e < M._$52 ? M._$52 = e : e > M._$R2 && (M._$R2 = e) - } - }, M.prototype.getTargetBaseDataID = function() { - return this._$dr - }, M.prototype._$gs = function(t) { - this._$dr = t - }, M.prototype._$32 = function() { - return null != this._$dr && this._$dr != yt._$2o() - }, M.prototype.preDraw = function(t, i, e) {}, M.prototype.draw = function(t, i, e) {}, M.prototype.getType = function() {}, M.prototype._$B2 = function(t, i, e) {}, E._$ps = 32, E.CLIPPING_PROCESS_NONE = 0, E.CLIPPING_PROCESS_OVERWRITE_ALPHA = 1, E.CLIPPING_PROCESS_MULTIPLY_ALPHA = 2, E.CLIPPING_PROCESS_DRAW = 3, E.CLIPPING_PROCESS_CLEAR_ALPHA = 4, E.prototype.setChannelFlagAsColor = function(t, i) { - this.CHANNEL_COLORS[t] = i - }, E.prototype.getChannelFlagAsColor = function(t) { - return this.CHANNEL_COLORS[t] - }, E.prototype._$ZT = function() {}, E.prototype._$Uo = function(t, i, e, r, o, n, s) {}, E.prototype._$Rs = function() { - return -1 - }, E.prototype._$Ds = function(t) {}, E.prototype.setBaseColor = function(t, i, e, r) { - t < 0 ? t = 0 : t > 1 && (t = 1), i < 0 ? i = 0 : i > 1 && (i = 1), e < 0 ? e = 0 : e > 1 && (e = 1), r < 0 ? 
r = 0 : r > 1 && (r = 1), this._$lT = t, this._$C0 = i, this._$tT = e, this._$WL = r - }, E.prototype._$WP = function(t) { - this.culling = t - }, E.prototype.setMatrix = function(t) { - for (var i = 0; i < 16; i++) this.matrix4x4[i] = t[i] - }, E.prototype._$IT = function() { - return this.matrix4x4 - }, E.prototype.setPremultipliedAlpha = function(t) { - this.premultipliedAlpha = t - }, E.prototype.isPremultipliedAlpha = function() { - return this.premultipliedAlpha - }, E.prototype.setAnisotropy = function(t) { - this.anisotropy = t - }, E.prototype.getAnisotropy = function() { - return this.anisotropy - }, E.prototype.getClippingProcess = function() { - return this.clippingProcess - }, E.prototype.setClippingProcess = function(t) { - this.clippingProcess = t - }, E.prototype.setClipBufPre_clipContextForMask = function(t) { - this.clipBufPre_clipContextMask = t - }, E.prototype.getClipBufPre_clipContextMask = function() { - return this.clipBufPre_clipContextMask - }, E.prototype.setClipBufPre_clipContextForDraw = function(t) { - this.clipBufPre_clipContextDraw = t - }, E.prototype.getClipBufPre_clipContextDraw = function() { - return this.clipBufPre_clipContextDraw - }, I._$ur = -2, I._$c2 = 1, I._$_b = 2, I.prototype._$F0 = function(t) { - this._$kP = t._$nP(), this._$dr = t._$nP() - }, I.prototype.readV2_opacity = function(t) { - t.getFormatVersion() >= G.LIVE2D_FORMAT_VERSION_V2_10_SDK2 && (this._$mS = t._$Tb()) - }, I.prototype.init = function(t) {}, I.prototype._$Nr = function(t, i) {}, I.prototype.interpolateOpacity = function(t, i, e, r) { - null == this._$mS ? e.setInterpolatedOpacity(1) : e.setInterpolatedOpacity(v._$br(t, i, r, this._$mS)) - }, I.prototype._$2b = function(t, i) {}, I.prototype._$nb = function(t, i, e, r, o, n, s) {}, I.prototype.getType = function() {}, I.prototype._$gs = function(t) { - this._$dr = t - }, I.prototype._$a2 = function(t) { - this._$kP = t - }, I.prototype.getTargetBaseDataID = function() { - return this._$dr - }, I.prototype.getBaseDataID = function() { - return this._$kP - }, I.prototype._$32 = function() { - return null != this._$dr && this._$dr != yt._$2o() - }, w._$W2 = 0, w._$CS = w._$W2, w._$Mo = function() { - return !0 - }, w._$XP = function(t) { - try { - for (var i = getTimeMSec(); getTimeMSec() - i < t;); - } catch (t) { - t._$Rb() - } - }, w.getUserTimeMSec = function() { - return w._$CS == w._$W2 ? 
w.getSystemTimeMSec() : w._$CS - }, w.setUserTimeMSec = function(t) { - w._$CS = t - }, w.updateUserTimeMSec = function() { - return w._$CS = w.getSystemTimeMSec() - }, w.getTimeMSec = function() { - return (new Date).getTime() - }, w.getSystemTimeMSec = function() { - return (new Date).getTime() - }, w._$Q = function(t) {}, w._$jT = function(t, i, e, r, o) { - for (var n = 0; n < o; n++) e[r + n] = t[i + n] - }, x._$ds = -2, x.prototype._$F0 = function(t) { - this._$wL = t._$nP(), this._$VP = t._$6L(), this._$GP = t._$nP() - }, x.prototype.getParamIndex = function(t) { - return this._$2r != t && (this._$8o = x._$ds), this._$8o - }, x.prototype._$Pb = function(t, i) { - this._$8o = t, this._$2r = i - }, x.prototype.getParamID = function() { - return this._$wL - }, x.prototype._$yP = function(t) { - this._$wL = t - }, x.prototype._$N2 = function() { - return this._$VP - }, x.prototype._$d2 = function() { - return this._$GP - }, x.prototype._$t2 = function(t, i) { - this._$VP = t, this._$GP = i - }, x.prototype._$Lr = function() { - return this._$O2 - }, x.prototype._$wr = function(t) { - this._$O2 = t - }, x.prototype._$SL = function() { - return this._$ri - }, x.prototype._$AL = function(t) { - this._$ri = t - }, O.startsWith = function(t, i, e) { - var r = i + e.length; - if (r >= t.length) return !1; - for (var o = i; o < r; o++) if (O.getChar(t, o) != e.charAt(o - i)) return !1; - return !0 - }, O.getChar = function(t, i) { - return String.fromCharCode(t.getUint8(i)) - }, O.createString = function(t, i, e) { - for (var r = new ArrayBuffer(2 * e), o = new Uint16Array(r), n = 0; n < e; n++) o[n] = t.getUint8(i + n); - return String.fromCharCode.apply(null, o) - }, O._$LS = function(t, i, e, r) { - t instanceof ArrayBuffer && (t = new DataView(t)); - var o = e, - n = !1, - s = !1, - _ = 0, - a = O.getChar(t, o); - "-" == a && (n = !0, o++); - for (var h = !1; o < i; o++) { - switch (a = O.getChar(t, o)) { - case "0": - _ *= 10; - break; - case "1": - _ = 10 * _ + 1; - break; - case "2": - _ = 10 * _ + 2; - break; - case "3": - _ = 10 * _ + 3; - break; - case "4": - _ = 10 * _ + 4; - break; - case "5": - _ = 10 * _ + 5; - break; - case "6": - _ = 10 * _ + 6; - break; - case "7": - _ = 10 * _ + 7; - break; - case "8": - _ = 10 * _ + 8; - break; - case "9": - _ = 10 * _ + 9; - break; - case ".": - s = !0, o++, h = !0; - break; - default: - h = !0 - } - if (h) break - } - if (s) for (var l = .1, $ = !1; o < i; o++) { - switch (a = O.getChar(t, o)) { - case "0": - break; - case "1": - _ += 1 * l; - break; - case "2": - _ += 2 * l; - break; - case "3": - _ += 3 * l; - break; - case "4": - _ += 4 * l; - break; - case "5": - _ += 5 * l; - break; - case "6": - _ += 6 * l; - break; - case "7": - _ += 7 * l; - break; - case "8": - _ += 8 * l; - break; - case "9": - _ += 9 * l; - break; - default: - $ = !0 - } - if (l *= .1, $) break - } - return n && (_ = -_), r[0] = o, _ - }, D.prototype._$zP = function() { - this._$Ob = new Array - }, D.prototype._$F0 = function(t) { - this._$Ob = t._$nP() - }, D.prototype._$Ur = function(t) { - if (t._$WS()) return !0; - for (var i = t._$v2(), e = this._$Ob.length - 1; e >= 0; --e) { - var r = this._$Ob[e].getParamIndex(i); - if (r == x._$ds && (r = t.getParamIndex(this._$Ob[e].getParamID())), t._$Xb(r)) return !0 - } - return !1 - }, D.prototype._$Q2 = function(t, i) { - for (var e, r, o = this._$Ob.length, n = t._$v2(), s = 0, _ = 0; _ < o; _++) { - var a = this._$Ob[_]; - if (e = a.getParamIndex(n), e == x._$ds && (e = t.getParamIndex(a.getParamID()), a._$Pb(e, 
n)), e < 0) throw new Exception("err 23242 : " + a.getParamID()); - var h = e < 0 ? 0 : t.getParamFloat(e); - r = a._$N2(); - var l, $, u = a._$d2(), - p = -1, - f = 0; - if (r < 1); - else if (1 == r) l = u[0], l - U._$J < h && h < l + U._$J ? (p = 0, f = 0) : (p = 0, i[0] = !0); - else if (l = u[0], h < l - U._$J) p = 0, i[0] = !0; - else if (h < l + U._$J) p = 0; - else { - for (var c = !1, d = 1; d < r; ++d) { - if ($ = u[d], h < $ + U._$J) { - $ - U._$J < h ? p = d : (p = d - 1, f = (h - l) / ($ - l), s++), c = !0; - break - } - l = $ - } - c || (p = r - 1, f = 0, i[0] = !0) - } - a._$wr(p), a._$AL(f) - } - return s - }, D.prototype._$zr = function(t, i, e) { - var r = 1 << e; - r + 1 > U._$Qb && console.log("err 23245\n"); - for (var o = this._$Ob.length, n = 1, s = 1, _ = 0, a = 0; a < r; ++a) t[a] = 0; - for (var h = 0; h < o; ++h) { - var l = this._$Ob[h]; - if (0 == l._$SL()) { - var $ = l._$Lr() * n; - if ($ < 0 && at._$3T) throw new Exception("err 23246"); - for (var a = 0; a < r; ++a) t[a] += $ - } else { - for (var $ = n * l._$Lr(), u = n * (l._$Lr() + 1), a = 0; a < r; ++a) t[a] += (a / s | 0) % 2 == 0 ? $ : u; - i[_++] = l._$SL(), s *= 2 - } - n *= l._$N2() - } - t[r] = 65535, i[_] = -1 - }, D.prototype._$h2 = function(t, i, e) { - for (var r = new Float32Array(i), o = 0; o < i; ++o) r[o] = e[o]; - var n = new x; - n._$yP(t), n._$t2(i, r), this._$Ob.push(n) - }, D.prototype._$J2 = function(t) { - for (var i = t, e = this._$Ob.length, r = 0; r < e; ++r) { - var o = this._$Ob[r], - n = o._$N2(), - s = i % o._$N2(), - _ = o._$d2()[s]; - console.log("%s[%d]=%7.2f / ", o.getParamID(), s, _), i /= n - } - console.log("\n") - }, D.prototype.getParamCount = function() { - return this._$Ob.length - }, D.prototype._$zs = function() { - return this._$Ob - }, R.prototype.identity = function() { - for (var t = 0; t < 16; t++) this.m[t] = t % 5 == 0 ? 1 : 0 - }, R.prototype.getArray = function() { - return this.m - }, R.prototype.getCopyMatrix = function() { - return new Float32Array(this.m) - }, R.prototype.setMatrix = function(t) { - if (null != t && 16 == t.length) for (var i = 0; i < 16; i++) this.m[i] = t[i] - }, R.prototype.mult = function(t, i, e) { - return null == i ? null : (this == i ? this.mult_safe(this.m, t.m, i.m, e) : this.mult_fast(this.m, t.m, i.m, e), i) - }, R.prototype.mult_safe = function(t, i, e, r) { - if (t == e) { - var o = new Array(16); - this.mult_fast(t, i, o, r); - for (var n = 15; n >= 0; --n) e[n] = o[n] - } else this.mult_fast(t, i, e, r) - }, R.prototype.mult_fast = function(t, i, e, r) { - r ? 
(e[0] = t[0] * i[0] + t[4] * i[1] + t[8] * i[2], e[4] = t[0] * i[4] + t[4] * i[5] + t[8] * i[6], e[8] = t[0] * i[8] + t[4] * i[9] + t[8] * i[10], e[12] = t[0] * i[12] + t[4] * i[13] + t[8] * i[14] + t[12], e[1] = t[1] * i[0] + t[5] * i[1] + t[9] * i[2], e[5] = t[1] * i[4] + t[5] * i[5] + t[9] * i[6], e[9] = t[1] * i[8] + t[5] * i[9] + t[9] * i[10], e[13] = t[1] * i[12] + t[5] * i[13] + t[9] * i[14] + t[13], e[2] = t[2] * i[0] + t[6] * i[1] + t[10] * i[2], e[6] = t[2] * i[4] + t[6] * i[5] + t[10] * i[6], e[10] = t[2] * i[8] + t[6] * i[9] + t[10] * i[10], e[14] = t[2] * i[12] + t[6] * i[13] + t[10] * i[14] + t[14], e[3] = e[7] = e[11] = 0, e[15] = 1) : (e[0] = t[0] * i[0] + t[4] * i[1] + t[8] * i[2] + t[12] * i[3], e[4] = t[0] * i[4] + t[4] * i[5] + t[8] * i[6] + t[12] * i[7], e[8] = t[0] * i[8] + t[4] * i[9] + t[8] * i[10] + t[12] * i[11], e[12] = t[0] * i[12] + t[4] * i[13] + t[8] * i[14] + t[12] * i[15], e[1] = t[1] * i[0] + t[5] * i[1] + t[9] * i[2] + t[13] * i[3], e[5] = t[1] * i[4] + t[5] * i[5] + t[9] * i[6] + t[13] * i[7], e[9] = t[1] * i[8] + t[5] * i[9] + t[9] * i[10] + t[13] * i[11], e[13] = t[1] * i[12] + t[5] * i[13] + t[9] * i[14] + t[13] * i[15], e[2] = t[2] * i[0] + t[6] * i[1] + t[10] * i[2] + t[14] * i[3], e[6] = t[2] * i[4] + t[6] * i[5] + t[10] * i[6] + t[14] * i[7], e[10] = t[2] * i[8] + t[6] * i[9] + t[10] * i[10] + t[14] * i[11], e[14] = t[2] * i[12] + t[6] * i[13] + t[10] * i[14] + t[14] * i[15], e[3] = t[3] * i[0] + t[7] * i[1] + t[11] * i[2] + t[15] * i[3], e[7] = t[3] * i[4] + t[7] * i[5] + t[11] * i[6] + t[15] * i[7], e[11] = t[3] * i[8] + t[7] * i[9] + t[11] * i[10] + t[15] * i[11], e[15] = t[3] * i[12] + t[7] * i[13] + t[11] * i[14] + t[15] * i[15]) - }, R.prototype.translate = function(t, i, e) { - this.m[12] = this.m[0] * t + this.m[4] * i + this.m[8] * e + this.m[12], this.m[13] = this.m[1] * t + this.m[5] * i + this.m[9] * e + this.m[13], this.m[14] = this.m[2] * t + this.m[6] * i + this.m[10] * e + this.m[14], this.m[15] = this.m[3] * t + this.m[7] * i + this.m[11] * e + this.m[15] - }, R.prototype.scale = function(t, i, e) { - this.m[0] *= t, this.m[4] *= i, this.m[8] *= e, this.m[1] *= t, this.m[5] *= i, this.m[9] *= e, this.m[2] *= t, this.m[6] *= i, this.m[10] *= e, this.m[3] *= t, this.m[7] *= i, this.m[11] *= e - }, R.prototype.rotateX = function(t) { - var i = Lt.fcos(t), - e = Lt._$9(t), - r = this.m[4]; - this.m[4] = r * i + this.m[8] * e, this.m[8] = r * -e + this.m[8] * i, r = this.m[5], this.m[5] = r * i + this.m[9] * e, this.m[9] = r * -e + this.m[9] * i, r = this.m[6], this.m[6] = r * i + this.m[10] * e, this.m[10] = r * -e + this.m[10] * i, r = this.m[7], this.m[7] = r * i + this.m[11] * e, this.m[11] = r * -e + this.m[11] * i - }, R.prototype.rotateY = function(t) { - var i = Lt.fcos(t), - e = Lt._$9(t), - r = this.m[0]; - this.m[0] = r * i + this.m[8] * -e, this.m[8] = r * e + this.m[8] * i, r = this.m[1], this.m[1] = r * i + this.m[9] * -e, this.m[9] = r * e + this.m[9] * i, r = m[2], this.m[2] = r * i + this.m[10] * -e, this.m[10] = r * e + this.m[10] * i, r = m[3], this.m[3] = r * i + this.m[11] * -e, this.m[11] = r * e + this.m[11] * i - }, R.prototype.rotateZ = function(t) { - var i = Lt.fcos(t), - e = Lt._$9(t), - r = this.m[0]; - this.m[0] = r * i + this.m[4] * e, this.m[4] = r * -e + this.m[4] * i, r = this.m[1], this.m[1] = r * i + this.m[5] * e, this.m[5] = r * -e + this.m[5] * i, r = this.m[2], this.m[2] = r * i + this.m[6] * e, this.m[6] = r * -e + this.m[6] * i, r = this.m[3], this.m[3] = r * i + this.m[7] * e, this.m[7] = r * 
-e + this.m[7] * i - }, b.prototype = new et, b._$tP = new Object, b._$27 = function() { - b._$tP.clear() - }, b.getID = function(t) { - var i = b._$tP[t]; - return null == i && (i = new b(t), b._$tP[t] = i), i - }, b.prototype._$3s = function() { - return new b - }, F._$kS = -1, F._$pS = 0, F._$hb = 1, F.STATE_IDENTITY = 0, F._$gb = 1, F._$fo = 2, F._$go = 4, F.prototype.transform = function(t, i, e) { - var r, o, n, s, _, a, h = 0, - l = 0; - switch (this._$hi) { - default: - return; - case F._$go | F._$fo | F._$gb: - for (r = this._$7, o = this._$H, n = this._$k, s = this._$f, _ = this._$g, a = this._$w; --e >= 0;) { - var $ = t[h++], - u = t[h++]; - i[l++] = r * $ + o * u + n, i[l++] = s * $ + _ * u + a - } - return; - case F._$go | F._$fo: - for (r = this._$7, o = this._$H, s = this._$f, _ = this._$g; --e >= 0;) { - var $ = t[h++], - u = t[h++]; - i[l++] = r * $ + o * u, i[l++] = s * $ + _ * u - } - return; - case F._$go | F._$gb: - for (o = this._$H, n = this._$k, s = this._$f, a = this._$w; --e >= 0;) { - var $ = t[h++]; - i[l++] = o * t[h++] + n, i[l++] = s * $ + a - } - return; - case F._$go: - for (o = this._$H, s = this._$f; --e >= 0;) { - var $ = t[h++]; - i[l++] = o * t[h++], i[l++] = s * $ - } - return; - case F._$fo | F._$gb: - for (r = this._$7, n = this._$k, _ = this._$g, a = this._$w; --e >= 0;) i[l++] = r * t[h++] + n, i[l++] = _ * t[h++] + a; - return; - case F._$fo: - for (r = this._$7, _ = this._$g; --e >= 0;) i[l++] = r * t[h++], i[l++] = _ * t[h++]; - return; - case F._$gb: - for (n = this._$k, a = this._$w; --e >= 0;) i[l++] = t[h++] + n, i[l++] = t[h++] + a; - return; - case F.STATE_IDENTITY: - return void(t == i && h == l || w._$jT(t, h, i, l, 2 * e)) - } - }, F.prototype.update = function() { - 0 == this._$H && 0 == this._$f ? 1 == this._$7 && 1 == this._$g ? 0 == this._$k && 0 == this._$w ? (this._$hi = F.STATE_IDENTITY, this._$Z = F._$pS) : (this._$hi = F._$gb, this._$Z = F._$hb) : 0 == this._$k && 0 == this._$w ? (this._$hi = F._$fo, this._$Z = F._$kS) : (this._$hi = F._$fo | F._$gb, this._$Z = F._$kS) : 0 == this._$7 && 0 == this._$g ? 0 == this._$k && 0 == this._$w ? (this._$hi = F._$go, this._$Z = F._$kS) : (this._$hi = F._$go | F._$gb, this._$Z = F._$kS) : 0 == this._$k && 0 == this._$w ? (this._$hi = F._$go | F._$fo, this._$Z = F._$kS) : (this._$hi = F._$go | F._$fo | F._$gb, this._$Z = F._$kS) - }, F.prototype._$RT = function(t) { - this._$IT(t); - var i = t[0], - e = t[2], - r = t[1], - o = t[3], - n = Math.sqrt(i * i + r * r), - s = i * o - e * r; - 0 == n ? 
at._$so && console.log("affine._$RT() / rt==0") : (t[0] = n, t[1] = s / n, t[2] = (r * o + i * e) / s, t[3] = Math.atan2(r, i)) - }, F.prototype._$ho = function(t, i, e, r) { - var o = new Float32Array(6), - n = new Float32Array(6); - t._$RT(o), i._$RT(n); - var s = new Float32Array(6); - s[0] = o[0] + (n[0] - o[0]) * e, s[1] = o[1] + (n[1] - o[1]) * e, s[2] = o[2] + (n[2] - o[2]) * e, s[3] = o[3] + (n[3] - o[3]) * e, s[4] = o[4] + (n[4] - o[4]) * e, s[5] = o[5] + (n[5] - o[5]) * e, r._$CT(s) - }, F.prototype._$CT = function(t) { - var i = Math.cos(t[3]), - e = Math.sin(t[3]); - this._$7 = t[0] * i, this._$f = t[0] * e, this._$H = t[1] * (t[2] * i - e), this._$g = t[1] * (t[2] * e + i), this._$k = t[4], this._$w = t[5], this.update() - }, F.prototype._$IT = function(t) { - t[0] = this._$7, t[1] = this._$f, t[2] = this._$H, t[3] = this._$g, t[4] = this._$k, t[5] = this._$w - }, C.prototype = new s, C._$cs = "VISIBLE:", C._$ar = "LAYOUT:", C._$Co = 0, C._$D2 = [], C._$1T = 1, C.loadMotion = function(t) { - var i = new C, - e = [0], - r = t.length; - i._$yT = 0; - for (var o = 0; o < r; ++o) { - var n = 255 & t[o]; - if ("\n" != n && "\r" != n) if ("#" != n) if ("$" != n) { - if ("a" <= n && n <= "z" || "A" <= n && n <= "Z" || "_" == n) { - for (var s = o, _ = -1; o < r && ("\r" != (n = 255 & t[o]) && "\n" != n); ++o) if ("=" == n) { - _ = o; - break - } - if (_ >= 0) { - var a = new B; - O.startsWith(t, s, C._$cs) ? (a._$RP = B._$hs, a._$4P = new String(t, s, _ - s)) : O.startsWith(t, s, C._$ar) ? (a._$4P = new String(t, s + 7, _ - s - 7), O.startsWith(t, s + 7, "ANCHOR_X") ? a._$RP = B._$xs : O.startsWith(t, s + 7, "ANCHOR_Y") ? a._$RP = B._$us : O.startsWith(t, s + 7, "SCALE_X") ? a._$RP = B._$qs : O.startsWith(t, s + 7, "SCALE_Y") ? a._$RP = B._$Ys : O.startsWith(t, s + 7, "X") ? a._$RP = B._$ws : O.startsWith(t, s + 7, "Y") && (a._$RP = B._$Ns)) : (a._$RP = B._$Fr, a._$4P = new String(t, s, _ - s)), i.motions.push(a); - var h = 0; - for (C._$D2.clear(), o = _ + 1; o < r && ("\r" != (n = 255 & t[o]) && "\n" != n); ++o) if ("," != n && " " != n && "\t" != n) { - var l = O._$LS(t, r, o, e); - if (e[0] > 0) { - C._$D2.push(l), h++; - var $ = e[0]; - if ($ < o) { - console.log("_$n0 _$hi . @Live2DMotion loadMotion()\n"); - break - } - o = $ - } - } - a._$I0 = C._$D2._$BL(), h > i._$yT && (i._$yT = h) - } - } - } else { - for (var s = o, _ = -1; o < r && ("\r" != (n = 255 & t[o]) && "\n" != n); ++o) if ("=" == n) { - _ = o; - break - } - var u = !1; - if (_ >= 0) for (_ == s + 4 && "f" == t[s + 1] && "p" == t[s + 2] && "s" == t[s + 3] && (u = !0), o = _ + 1; o < r && ("\r" != (n = 255 & t[o]) && "\n" != n); ++o) if ("," != n && " " != n && "\t" != n) { - var l = O._$LS(t, r, o, e); - e[0] > 0 && u && 5 < l && l < 121 && (i._$D0 = l), o = e[0] - } - for (; o < r && ("\n" != t[o] && "\r" != t[o]); ++o); - } else for (; o < r && ("\n" != t[o] && "\r" != t[o]); ++o); - } - return i._$AS = 1e3 * i._$yT / i._$D0 | 0, i - }, C.prototype.getDurationMSec = function() { - return this._$AS - }, C.prototype.dump = function() { - for (var t = 0; t < this.motions.length; t++) { - var i = this.motions[t]; - console.log("_$wL[%s] [%d]. 
", i._$4P, i._$I0.length); - for (var e = 0; e < i._$I0.length && e < 10; e++) console.log("%5.2f ,", i._$I0[e]); - console.log("\n") - } - }, C.prototype.updateParamExe = function(t, i, e, r) { - for (var o = i - r._$z2, n = o * this._$D0 / 1e3, s = 0 | n, _ = n - s, a = 0; a < this.motions.length; a++) { - var h = this.motions[a], - l = h._$I0.length, - $ = h._$4P; - if (h._$RP == B._$hs) { - var u = h._$I0[s >= l ? l - 1 : s]; - t.setParamFloat($, u) - } else if (B._$ws <= h._$RP && h._$RP <= B._$Ys); - else { - var p = t.getParamFloat($), - f = h._$I0[s >= l ? l - 1 : s], - c = h._$I0[s + 1 >= l ? l - 1 : s + 1], - d = f + (c - f) * _, - g = p + (d - p) * e; - t.setParamFloat($, g) - } - } - s >= this._$yT && (this._$E ? (r._$z2 = i, this.loopFadeIn && (r._$bs = i)) : r._$9L = !0) - }, C.prototype._$r0 = function() { - return this._$E - }, C.prototype._$aL = function(t) { - this._$E = t - }, C.prototype.isLoopFadeIn = function() { - return this.loopFadeIn - }, C.prototype.setLoopFadeIn = function(t) { - this.loopFadeIn = t - }, N.prototype.clear = function() { - this.size = 0 - }, N.prototype.add = function(t) { - if (this._$P.length <= this.size) { - var i = new Float32Array(2 * this.size); - w._$jT(this._$P, 0, i, 0, this.size), this._$P = i - } - this._$P[this.size++] = t - }, N.prototype._$BL = function() { - var t = new Float32Array(this.size); - return w._$jT(this._$P, 0, t, 0, this.size), t - }, B._$Fr = 0, B._$hs = 1, B._$ws = 100, B._$Ns = 101, B._$xs = 102, B._$us = 103, B._$qs = 104, B._$Ys = 105, U._$Ms = 1, U._$Qs = 2, U._$i2 = 0, U._$No = 2, U._$do = U._$Ms, U._$Ls = !0, U._$1r = 5, U._$Qb = 65, U._$J = 1e-4, U._$FT = .001, U._$Ss = 3, G._$o7 = 6, G._$S7 = 7, G._$s7 = 8, G._$77 = 9, G.LIVE2D_FORMAT_VERSION_V2_10_SDK2 = 10, G.LIVE2D_FORMAT_VERSION_V2_11_SDK2_1 = 11, G._$T7 = G.LIVE2D_FORMAT_VERSION_V2_11_SDK2_1, G._$Is = -2004318072, G._$h0 = 0, G._$4L = 23, G._$7P = 33, G._$uT = function(t) { - console.log("_$bo :: _$6 _$mo _$E0 : %d\n", t) - }, G._$9o = function(t) { - if (t < 40) return G._$uT(t), null; - if (t < 50) return G._$uT(t), null; - if (t < 60) return G._$uT(t), null; - if (t < 100) switch (t) { - case 65: - return new Z; - case 66: - return new D; - case 67: - return new x; - case 68: - return new z; - case 69: - return new P; - case 70: - return new $t; - default: - return G._$uT(t), null - } else if (t < 150) switch (t) { - case 131: - return new st; - case 133: - return new tt; - case 136: - return new p; - case 137: - return new ot; - case 142: - return new j - } - return G._$uT(t), null - }, Y._$HP = 0, Y._$_0 = !0; - Y._$V2 = -1, Y._$W0 = -1, Y._$jr = !1, Y._$ZS = !0, Y._$tr = -1e6, Y._$lr = 1e6, Y._$is = 32, Y._$e = !1, Y.prototype.getDrawDataIndex = function(t) { - for (var i = this._$aS.length - 1; i >= 0; --i) if (null != this._$aS[i] && this._$aS[i].getDrawDataID() == t) return i; - return -1 - }, Y.prototype.getDrawData = function(t) { - if (t instanceof b) { - if (null == this._$Bo) { - this._$Bo = new Object; - for (var i = this._$aS.length, e = 0; e < i; e++) { - var r = this._$aS[e], - o = r.getDrawDataID(); - null != o && (this._$Bo[o] = r) - } - } - return this._$Bo[id] - } - return t < this._$aS.length ? 
this._$aS[t] : null - }, Y.prototype.release = function() { - this._$3S.clear(), this._$aS.clear(), this._$F2.clear(), null != this._$Bo && this._$Bo.clear(), this._$db.clear(), this._$8b.clear(), this._$Hr.clear() - }, Y.prototype.init = function() { - this._$co++, this._$F2.length > 0 && this.release(); - for (var t = this._$Ri.getModelImpl(), i = t._$Xr(), r = i.length, o = new Array, n = new Array, s = 0; s < r; ++s) { - var _ = i[s]; - this._$F2.push(_), this._$Hr.push(_.init(this)); - for (var a = _.getBaseData(), h = a.length, l = 0; l < h; ++l) o.push(a[l]); - for (var l = 0; l < h; ++l) { - var $ = a[l].init(this); - $._$l2(s), n.push($) - } - for (var u = _.getDrawData(), p = u.length, l = 0; l < p; ++l) { - var f = u[l], - c = f.init(this); - c._$IP = s, this._$aS.push(f), this._$8b.push(c) - } - } - for (var d = o.length, g = yt._$2o();;) { - for (var y = !1, s = 0; s < d; ++s) { - var m = o[s]; - if (null != m) { - var T = m.getTargetBaseDataID(); - (null == T || T == g || this.getBaseDataIndex(T) >= 0) && (this._$3S.push(m), this._$db.push(n[s]), o[s] = null, y = !0) - } - } - if (!y) break - } - var P = t._$E2(); - if (null != P) { - var S = P._$1s(); - if (null != S) for (var v = S.length, s = 0; s < v; ++s) { - var L = S[s]; - null != L && this._$02(L.getParamID(), L.getDefaultValue(), L.getMinValue(), L.getMaxValue()) - } - } - this.clipManager = new e(this.dp_webgl), this.clipManager.init(this, this._$aS, this._$8b), this._$QT = !0 - }, Y.prototype.update = function() { - Y._$e && _.start("_$zL"); - for (var t = this._$_2.length, i = 0; i < t; i++) this._$_2[i] != this._$vr[i] && (this._$Js[i] = Y._$ZS, this._$vr[i] = this._$_2[i]); - var e = this._$3S.length, - r = this._$aS.length, - o = W._$or(), - n = W._$Pr(), - s = n - o + 1; - (null == this._$Ws || this._$Ws.length < s) && (this._$Ws = new Int16Array(s), this._$Vs = new Int16Array(s)); - for (var i = 0; i < s; i++) this._$Ws[i] = Y._$V2, this._$Vs[i] = Y._$V2; - (null == this._$Er || this._$Er.length < r) && (this._$Er = new Int16Array(r)); - for (var i = 0; i < r; i++) this._$Er[i] = Y._$W0; - Y._$e && _.dump("_$zL"), Y._$e && _.start("_$UL"); - for (var a = null, h = 0; h < e; ++h) { - var l = this._$3S[h], - $ = this._$db[h]; - try { - l._$Nr(this, $), l._$2b(this, $) - } catch (t) { - null == a && (a = t) - } - } - null != a && Y._$_0 && _._$Rb(a), Y._$e && _.dump("_$UL"), Y._$e && _.start("_$DL"); - for (var u = null, p = 0; p < r; ++p) { - var f = this._$aS[p], - c = this._$8b[p]; - try { - if (f._$Nr(this, c), c._$u2()) continue; - f._$2b(this, c); - var d, g = Math.floor(f._$zS(this, c) - o); - try { - d = this._$Vs[g] - } catch (t) { - console.log("_$li :: %s / %s \t\t\t\t@@_$fS\n", t.toString(), f.getDrawDataID().toString()), g = Math.floor(f._$zS(this, c) - o); - continue - } - d == Y._$V2 ? 
this._$Ws[g] = p : this._$Er[d] = p, this._$Vs[g] = p - } catch (t) { - null == u && (u = t, at._$sT(at._$H7)) - } - } - null != u && Y._$_0 && _._$Rb(u), Y._$e && _.dump("_$DL"), Y._$e && _.start("_$eL"); - for (var i = this._$Js.length - 1; i >= 0; i--) this._$Js[i] = Y._$jr; - return this._$QT = !1, Y._$e && _.dump("_$eL"), !1 - }, Y.prototype.preDraw = function(t) { - null != this.clipManager && (t._$ZT(), this.clipManager.setupClip(this, t)) - }, Y.prototype.draw = function(t) { - if (null == this._$Ws) return void _._$li("call _$Ri.update() before _$Ri.draw() "); - var i = this._$Ws.length; - t._$ZT(); - for (var e = 0; e < i; ++e) { - var r = this._$Ws[e]; - if (r != Y._$V2) for (;;) { - var o = this._$aS[r], - n = this._$8b[r]; - if (n._$yo()) { - var s = n._$IP, - a = this._$Hr[s]; - n._$VS = a.getPartsOpacity(), o.draw(t, this, n) - } - var h = this._$Er[r]; - if (h <= r || h == Y._$W0) break; - r = h - } - } - }, Y.prototype.getParamIndex = function(t) { - for (var i = this._$pb.length - 1; i >= 0; --i) if (this._$pb[i] == t) return i; - return this._$02(t, 0, Y._$tr, Y._$lr) - }, Y.prototype._$BS = function(t) { - return this.getBaseDataIndex(t) - }, Y.prototype.getBaseDataIndex = function(t) { - for (var i = this._$3S.length - 1; i >= 0; --i) if (null != this._$3S[i] && this._$3S[i].getBaseDataID() == t) return i; - return -1 - }, Y.prototype._$UT = function(t, i) { - var e = new Float32Array(i); - return w._$jT(t, 0, e, 0, t.length), e - }, Y.prototype._$02 = function(t, i, e, r) { - if (this._$qo >= this._$pb.length) { - var o = this._$pb.length, - n = new Array(2 * o); - w._$jT(this._$pb, 0, n, 0, o), this._$pb = n, this._$_2 = this._$UT(this._$_2, 2 * o), this._$vr = this._$UT(this._$vr, 2 * o), this._$Rr = this._$UT(this._$Rr, 2 * o), this._$Or = this._$UT(this._$Or, 2 * o); - var s = new Array; - w._$jT(this._$Js, 0, s, 0, o), this._$Js = s - } - return this._$pb[this._$qo] = t, this._$_2[this._$qo] = i, this._$vr[this._$qo] = i, this._$Rr[this._$qo] = e, this._$Or[this._$qo] = r, this._$Js[this._$qo] = Y._$ZS, this._$qo++ - }, Y.prototype._$Zo = function(t, i) { - this._$3S[t] = i - }, Y.prototype.setParamFloat = function(t, i) { - i < this._$Rr[t] && (i = this._$Rr[t]), i > this._$Or[t] && (i = this._$Or[t]), this._$_2[t] = i - }, Y.prototype.loadParam = function() { - var t = this._$_2.length; - t > this._$fs.length && (t = this._$fs.length), w._$jT(this._$fs, 0, this._$_2, 0, t) - }, Y.prototype.saveParam = function() { - var t = this._$_2.length; - t > this._$fs.length && (this._$fs = new Float32Array(t)), w._$jT(this._$_2, 0, this._$fs, 0, t) - }, Y.prototype._$v2 = function() { - return this._$co - }, Y.prototype._$WS = function() { - return this._$QT - }, Y.prototype._$Xb = function(t) { - return this._$Js[t] == Y._$ZS - }, Y.prototype._$vs = function() { - return this._$Es - }, Y.prototype._$Tr = function() { - return this._$ZP - }, Y.prototype.getBaseData = function(t) { - return this._$3S[t] - }, Y.prototype.getParamFloat = function(t) { - return this._$_2[t] - }, Y.prototype.getParamMax = function(t) { - return this._$Or[t] - }, Y.prototype.getParamMin = function(t) { - return this._$Rr[t] - }, Y.prototype.setPartsOpacity = function(t, i) { - this._$Hr[t].setPartsOpacity(i) - }, Y.prototype.getPartsOpacity = function(t) { - return this._$Hr[t].getPartsOpacity() - }, Y.prototype.getPartsDataIndex = function(t) { - for (var i = this._$F2.length - 1; i >= 0; --i) if (null != this._$F2[i] && this._$F2[i]._$p2() == t) return i; - return -1 - }, Y.prototype._$q2 = 
function(t) { - return this._$db[t] - }, Y.prototype._$C2 = function(t) { - return this._$8b[t] - }, Y.prototype._$Bb = function(t) { - return this._$Hr[t] - }, Y.prototype._$5s = function(t, i) { - for (var e = this._$Ws.length, r = t, o = 0; o < e; ++o) { - var n = this._$Ws[o]; - if (n != Y._$V2) for (;;) { - var s = this._$8b[n]; - s._$yo() && (s._$GT()._$B2(this, s, r), r += i); - var _ = this._$Er[n]; - if (_ <= n || _ == Y._$W0) break; - n = _ - } - } - }, Y.prototype.setDrawParam = function(t) { - this.dp_webgl = t - }, Y.prototype.getDrawParam = function() { - return this.dp_webgl - }, k._$0T = function(t) { - return k._$0T(new _$5(t)) - }, k._$0T = function(t) { - if (!t.exists()) throw new _$ls(t._$3b()); - for (var i, e = t.length(), r = new Int8Array(e), o = new _$Xs(new _$kb(t), 8192), n = 0; - (i = o.read(r, n, e - n)) > 0;) n += i; - return r - }, k._$C = function(t) { - var i = null, - e = null; - try { - i = t instanceof Array ? t : new _$Xs(t, 8192), e = new _$js; - for (var r, o = new Int8Array(1e3); - (r = i.read(o)) > 0;) e.write(o, 0, r); - return e._$TS() - } finally { - null != t && t.close(), null != e && (e.flush(), e.close()) - } - }, V.prototype._$T2 = function() { - return w.getUserTimeMSec() + Math._$10() * (2 * this._$Br - 1) - }, V.prototype._$uo = function(t) { - this._$Br = t - }, V.prototype._$QS = function(t, i, e) { - this._$Dr = t, this._$Cb = i, this._$mr = e - }, V.prototype._$7T = function(t) { - var i, e = w.getUserTimeMSec(), - r = 0; - switch (this._$_L) { - case STATE_CLOSING: - r = (e - this._$bb) / this._$Dr, r >= 1 && (r = 1, this._$_L = wt.STATE_CLOSED, this._$bb = e), i = 1 - r; - break; - case STATE_CLOSED: - r = (e - this._$bb) / this._$Cb, r >= 1 && (this._$_L = wt.STATE_OPENING, this._$bb = e), i = 0; - break; - case STATE_OPENING: - r = (e - this._$bb) / this._$mr, r >= 1 && (r = 1, this._$_L = wt.STATE_INTERVAL, this._$12 = this._$T2()), i = r; - break; - case STATE_INTERVAL: - this._$12 < e && (this._$_L = wt.STATE_CLOSING, this._$bb = e), i = 1; - break; - case STATE_FIRST: - default: - this._$_L = wt.STATE_INTERVAL, this._$12 = this._$T2(), i = 1 - } - this._$jo || (i = -i), t.setParamFloat(this._$iL, i), t.setParamFloat(this._$0L, i) - }; - var wt = function() {}; - wt.STATE_FIRST = "STATE_FIRST", wt.STATE_INTERVAL = "STATE_INTERVAL", wt.STATE_CLOSING = "STATE_CLOSING", wt.STATE_CLOSED = "STATE_CLOSED", wt.STATE_OPENING = "STATE_OPENING", X.prototype = new E, X._$As = 32, X._$Gr = !1, X._$NT = null, X._$vS = null, X._$no = null, X._$9r = function(t) { - return new Float32Array(t) - }, X._$vb = function(t) { - return new Int16Array(t) - }, X._$cr = function(t, i) { - return null == t || t._$yL() < i.length ? (t = X._$9r(2 * i.length), t.put(i), t._$oT(0)) : (t.clear(), t.put(i), t._$oT(0)), t - }, X._$mb = function(t, i) { - return null == t || t._$yL() < i.length ? (t = X._$vb(2 * i.length), t.put(i), t._$oT(0)) : (t.clear(), t.put(i), t._$oT(0)), t - }, X._$Hs = function() { - return X._$Gr - }, X._$as = function(t) { - X._$Gr = t - }, X.prototype.setGL = function(t) { - this.gl = t - }, X.prototype.setTransform = function(t) { - this.transform = t - }, X.prototype._$ZT = function() {}, X.prototype._$Uo = function(t, i, e, r, o, n, s, _) { - if (!(n < .01)) { - var a = this._$U2[t], - h = n > .9 ? 
at.EXPAND_W : 0; - this.gl.drawElements(a, e, r, o, n, h, this.transform, _) - } - }, X.prototype._$Rs = function() { - throw new Error("_$Rs") - }, X.prototype._$Ds = function(t) { - throw new Error("_$Ds") - }, X.prototype._$K2 = function() { - for (var t = 0; t < this._$sb.length; t++) { - 0 != this._$sb[t] && (this.gl._$Sr(1, this._$sb, t), this._$sb[t] = 0) - } - }, X.prototype.setTexture = function(t, i) { - this._$sb.length < t + 1 && this._$nS(t), this._$sb[t] = i - }, X.prototype.setTexture = function(t, i) { - this._$sb.length < t + 1 && this._$nS(t), this._$U2[t] = i - }, X.prototype._$nS = function(t) { - var i = Math.max(2 * this._$sb.length, t + 1 + 10), - e = new Int32Array(i); - w._$jT(this._$sb, 0, e, 0, this._$sb.length), this._$sb = e; - var r = new Array; - w._$jT(this._$U2, 0, r, 0, this._$U2.length), this._$U2 = r - }, z.prototype = new I, z._$Xo = new Float32Array(2), z._$io = new Float32Array(2), z._$0o = new Float32Array(2), z._$Lo = new Float32Array(2), z._$To = new Float32Array(2), z._$Po = new Float32Array(2), z._$gT = new Array, z.prototype._$zP = function() { - this._$GS = new D, this._$GS._$zP(), this._$Y0 = new Array - }, z.prototype.getType = function() { - return I._$c2 - }, z.prototype._$F0 = function(t) { - I.prototype._$F0.call(this, t), this._$GS = t._$nP(), this._$Y0 = t._$nP(), I.prototype.readV2_opacity.call(this, t) - }, z.prototype.init = function(t) { - var i = new H(this); - return i._$Yr = new P, this._$32() && (i._$Wr = new P), i - }, z.prototype._$Nr = function(t, i) { - this != i._$GT() && console.log("### assert!! ### "); - var e = i; - if (this._$GS._$Ur(t)) { - var r = z._$gT; - r[0] = !1; - var o = this._$GS._$Q2(t, r); - i._$Ib(r[0]), this.interpolateOpacity(t, this._$GS, i, r); - var n = t._$vs(), - s = t._$Tr(); - if (this._$GS._$zr(n, s, o), o <= 0) { - var _ = this._$Y0[n[0]]; - e._$Yr.init(_) - } else if (1 == o) { - var _ = this._$Y0[n[0]], - a = this._$Y0[n[1]], - h = s[0]; - e._$Yr._$fL = _._$fL + (a._$fL - _._$fL) * h, e._$Yr._$gL = _._$gL + (a._$gL - _._$gL) * h, e._$Yr._$B0 = _._$B0 + (a._$B0 - _._$B0) * h, e._$Yr._$z0 = _._$z0 + (a._$z0 - _._$z0) * h, e._$Yr._$qT = _._$qT + (a._$qT - _._$qT) * h - } else if (2 == o) { - var _ = this._$Y0[n[0]], - a = this._$Y0[n[1]], - l = this._$Y0[n[2]], - $ = this._$Y0[n[3]], - h = s[0], - u = s[1], - p = _._$fL + (a._$fL - _._$fL) * h, - f = l._$fL + ($._$fL - l._$fL) * h; - e._$Yr._$fL = p + (f - p) * u, p = _._$gL + (a._$gL - _._$gL) * h, f = l._$gL + ($._$gL - l._$gL) * h, e._$Yr._$gL = p + (f - p) * u, p = _._$B0 + (a._$B0 - _._$B0) * h, f = l._$B0 + ($._$B0 - l._$B0) * h, e._$Yr._$B0 = p + (f - p) * u, p = _._$z0 + (a._$z0 - _._$z0) * h, f = l._$z0 + ($._$z0 - l._$z0) * h, e._$Yr._$z0 = p + (f - p) * u, p = _._$qT + (a._$qT - _._$qT) * h, f = l._$qT + ($._$qT - l._$qT) * h, e._$Yr._$qT = p + (f - p) * u - } else if (3 == o) { - var c = this._$Y0[n[0]], - d = this._$Y0[n[1]], - g = this._$Y0[n[2]], - y = this._$Y0[n[3]], - m = this._$Y0[n[4]], - T = this._$Y0[n[5]], - P = this._$Y0[n[6]], - S = this._$Y0[n[7]], - h = s[0], - u = s[1], - v = s[2], - p = c._$fL + (d._$fL - c._$fL) * h, - f = g._$fL + (y._$fL - g._$fL) * h, - L = m._$fL + (T._$fL - m._$fL) * h, - M = P._$fL + (S._$fL - P._$fL) * h; - e._$Yr._$fL = (1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u), p = c._$gL + (d._$gL - c._$gL) * h, f = g._$gL + (y._$gL - g._$gL) * h, L = m._$gL + (T._$gL - m._$gL) * h, M = P._$gL + (S._$gL - P._$gL) * h, e._$Yr._$gL = (1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u), p = c._$B0 + 
(d._$B0 - c._$B0) * h, f = g._$B0 + (y._$B0 - g._$B0) * h, L = m._$B0 + (T._$B0 - m._$B0) * h, M = P._$B0 + (S._$B0 - P._$B0) * h, e._$Yr._$B0 = (1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u), p = c._$z0 + (d._$z0 - c._$z0) * h, f = g._$z0 + (y._$z0 - g._$z0) * h, L = m._$z0 + (T._$z0 - m._$z0) * h, M = P._$z0 + (S._$z0 - P._$z0) * h, e._$Yr._$z0 = (1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u), p = c._$qT + (d._$qT - c._$qT) * h, f = g._$qT + (y._$qT - g._$qT) * h, L = m._$qT + (T._$qT - m._$qT) * h, M = P._$qT + (S._$qT - P._$qT) * h, e._$Yr._$qT = (1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u) - } else if (4 == o) { - var E = this._$Y0[n[0]], - A = this._$Y0[n[1]], - I = this._$Y0[n[2]], - w = this._$Y0[n[3]], - x = this._$Y0[n[4]], - O = this._$Y0[n[5]], - D = this._$Y0[n[6]], - R = this._$Y0[n[7]], - b = this._$Y0[n[8]], - F = this._$Y0[n[9]], - C = this._$Y0[n[10]], - N = this._$Y0[n[11]], - B = this._$Y0[n[12]], - U = this._$Y0[n[13]], - G = this._$Y0[n[14]], - Y = this._$Y0[n[15]], - h = s[0], - u = s[1], - v = s[2], - k = s[3], - p = E._$fL + (A._$fL - E._$fL) * h, - f = I._$fL + (w._$fL - I._$fL) * h, - L = x._$fL + (O._$fL - x._$fL) * h, - M = D._$fL + (R._$fL - D._$fL) * h, - V = b._$fL + (F._$fL - b._$fL) * h, - X = C._$fL + (N._$fL - C._$fL) * h, - H = B._$fL + (U._$fL - B._$fL) * h, - W = G._$fL + (Y._$fL - G._$fL) * h; - e._$Yr._$fL = (1 - k) * ((1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u)) + k * ((1 - v) * (V + (X - V) * u) + v * (H + (W - H) * u)), p = E._$gL + (A._$gL - E._$gL) * h, f = I._$gL + (w._$gL - I._$gL) * h, L = x._$gL + (O._$gL - x._$gL) * h, M = D._$gL + (R._$gL - D._$gL) * h, V = b._$gL + (F._$gL - b._$gL) * h, X = C._$gL + (N._$gL - C._$gL) * h, H = B._$gL + (U._$gL - B._$gL) * h, W = G._$gL + (Y._$gL - G._$gL) * h, e._$Yr._$gL = (1 - k) * ((1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u)) + k * ((1 - v) * (V + (X - V) * u) + v * (H + (W - H) * u)), p = E._$B0 + (A._$B0 - E._$B0) * h, f = I._$B0 + (w._$B0 - I._$B0) * h, L = x._$B0 + (O._$B0 - x._$B0) * h, M = D._$B0 + (R._$B0 - D._$B0) * h, V = b._$B0 + (F._$B0 - b._$B0) * h, X = C._$B0 + (N._$B0 - C._$B0) * h, H = B._$B0 + (U._$B0 - B._$B0) * h, W = G._$B0 + (Y._$B0 - G._$B0) * h, e._$Yr._$B0 = (1 - k) * ((1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u)) + k * ((1 - v) * (V + (X - V) * u) + v * (H + (W - H) * u)), p = E._$z0 + (A._$z0 - E._$z0) * h, f = I._$z0 + (w._$z0 - I._$z0) * h, L = x._$z0 + (O._$z0 - x._$z0) * h, M = D._$z0 + (R._$z0 - D._$z0) * h, V = b._$z0 + (F._$z0 - b._$z0) * h, X = C._$z0 + (N._$z0 - C._$z0) * h, H = B._$z0 + (U._$z0 - B._$z0) * h, W = G._$z0 + (Y._$z0 - G._$z0) * h, e._$Yr._$z0 = (1 - k) * ((1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u)) + k * ((1 - v) * (V + (X - V) * u) + v * (H + (W - H) * u)), p = E._$qT + (A._$qT - E._$qT) * h, f = I._$qT + (w._$qT - I._$qT) * h, L = x._$qT + (O._$qT - x._$qT) * h, M = D._$qT + (R._$qT - D._$qT) * h, V = b._$qT + (F._$qT - b._$qT) * h, X = C._$qT + (N._$qT - C._$qT) * h, H = B._$qT + (U._$qT - B._$qT) * h, W = G._$qT + (Y._$qT - G._$qT) * h, e._$Yr._$qT = (1 - k) * ((1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u)) + k * ((1 - v) * (V + (X - V) * u) + v * (H + (W - H) * u)) - } else { - for (var j = 0 | Math.pow(2, o), q = new Float32Array(j), J = 0; J < j; J++) { - for (var Q = J, Z = 1, K = 0; K < o; K++) Z *= Q % 2 == 0 ? 
1 - s[K] : s[K], Q /= 2; - q[J] = Z - } - for (var tt = new Array, it = 0; it < j; it++) tt[it] = this._$Y0[n[it]]; - for (var et = 0, rt = 0, ot = 0, nt = 0, st = 0, it = 0; it < j; it++) et += q[it] * tt[it]._$fL, rt += q[it] * tt[it]._$gL, ot += q[it] * tt[it]._$B0, nt += q[it] * tt[it]._$z0, st += q[it] * tt[it]._$qT; - e._$Yr._$fL = et, e._$Yr._$gL = rt, e._$Yr._$B0 = ot, e._$Yr._$z0 = nt, e._$Yr._$qT = st - } - var _ = this._$Y0[n[0]]; - e._$Yr.reflectX = _.reflectX, e._$Yr.reflectY = _.reflectY - } - }, z.prototype._$2b = function(t, i) { - this != i._$GT() && console.log("### assert!! ### "); - var e = i; - if (e._$hS(!0), this._$32()) { - var r = this.getTargetBaseDataID(); - if (e._$8r == I._$ur && (e._$8r = t.getBaseDataIndex(r)), e._$8r < 0) at._$so && _._$li("_$L _$0P _$G :: %s", r), e._$hS(!1); - else { - var o = t.getBaseData(e._$8r); - if (null != o) { - var n = t._$q2(e._$8r), - s = z._$Xo; - s[0] = e._$Yr._$fL, s[1] = e._$Yr._$gL; - var a = z._$io; - a[0] = 0, a[1] = -.1; - n._$GT().getType() == I._$c2 ? a[1] = -10 : a[1] = -.1; - var h = z._$0o; - this._$Jr(t, o, n, s, a, h); - var l = Lt._$92(a, h); - o._$nb(t, n, s, s, 1, 0, 2), e._$Wr._$fL = s[0], e._$Wr._$gL = s[1], e._$Wr._$B0 = e._$Yr._$B0, e._$Wr._$z0 = e._$Yr._$z0, e._$Wr._$qT = e._$Yr._$qT - l * Lt._$NS; - var $ = n.getTotalScale(); - e.setTotalScale_notForClient($ * e._$Wr._$B0); - var u = n.getTotalOpacity(); - e.setTotalOpacity(u * e.getInterpolatedOpacity()), e._$Wr.reflectX = e._$Yr.reflectX, e._$Wr.reflectY = e._$Yr.reflectY, e._$hS(n._$yo()) - } else e._$hS(!1) - } - } else e.setTotalScale_notForClient(e._$Yr._$B0), e.setTotalOpacity(e.getInterpolatedOpacity()) - }, z.prototype._$nb = function(t, i, e, r, o, n, s) { - this != i._$GT() && console.log("### assert!! ### "); - for (var _, a, h = i, l = null != h._$Wr ? h._$Wr : h._$Yr, $ = Math.sin(Lt._$bS * l._$qT), u = Math.cos(Lt._$bS * l._$qT), p = h.getTotalScale(), f = l.reflectX ? -1 : 1, c = l.reflectY ? -1 : 1, d = u * p * f, g = -$ * p * c, y = $ * p * f, m = u * p * c, T = l._$fL, P = l._$gL, S = o * s, v = n; v < S; v += s) _ = e[v], a = e[v + 1], r[v] = d * _ + g * a + T, r[v + 1] = y * _ + m * a + P - }, z.prototype._$Jr = function(t, i, e, r, o, n) { - i != e._$GT() && console.log("### assert!! ### "); - var s = z._$Lo; - z._$Lo[0] = r[0], z._$Lo[1] = r[1], i._$nb(t, e, s, s, 1, 0, 2); - for (var _ = z._$To, a = z._$Po, h = 1, l = 0; l < 10; l++) { - if (a[0] = r[0] + h * o[0], a[1] = r[1] + h * o[1], i._$nb(t, e, a, _, 1, 0, 2), _[0] -= s[0], _[1] -= s[1], 0 != _[0] || 0 != _[1]) return n[0] = _[0], void(n[1] = _[1]); - if (a[0] = r[0] - h * o[0], a[1] = r[1] - h * o[1], i._$nb(t, e, a, _, 1, 0, 2), _[0] -= s[0], _[1] -= s[1], 0 != _[0] || 0 != _[1]) return _[0] = -_[0], _[0] = -_[0], n[0] = _[0], void(n[1] = _[1]); - h *= .1 - } - at._$so && console.log("_$L0 to transform _$SP\n") - }, H.prototype = new _t, W.prototype = new M, W._$ur = -2, W._$ES = 500, W._$wb = 2, W._$8S = 3, W._$os = 4, W._$52 = W._$ES, W._$R2 = W._$ES, W._$Sb = function(t) { - for (var i = t.length - 1; i >= 0; --i) { - var e = t[i]; - e < W._$52 ? W._$52 = e : e > W._$R2 && (W._$R2 = e) - } - }, W._$or = function() { - return W._$52 - }, W._$Pr = function() { - return W._$R2 - }, W.prototype._$F0 = function(t) { - this._$gP = t._$nP(), this._$dr = t._$nP(), this._$GS = t._$nP(), this._$qb = t._$6L(), this._$Lb = t._$cS(), this._$mS = t._$Tb(), t.getFormatVersion() >= G._$T7 ? 
(this.clipID = t._$nP(), this.clipIDList = this.convertClipIDForV2_11(this.clipID)) : this.clipIDList = null, W._$Sb(this._$Lb) - }, W.prototype.getClipIDList = function() { - return this.clipIDList - }, W.prototype._$Nr = function(t, i) { - if (i._$IS[0] = !1, i._$Us = v._$Z2(t, this._$GS, i._$IS, this._$Lb), at._$Zs); - else if (i._$IS[0]) return; - i._$7s = v._$br(t, this._$GS, i._$IS, this._$mS) - }, W.prototype._$2b = function(t) {}, W.prototype.getDrawDataID = function() { - return this._$gP - }, W.prototype._$j2 = function(t) { - this._$gP = t - }, W.prototype.getOpacity = function(t, i) { - return i._$7s - }, W.prototype._$zS = function(t, i) { - return i._$Us - }, W.prototype.getTargetBaseDataID = function() { - return this._$dr - }, W.prototype._$gs = function(t) { - this._$dr = t - }, W.prototype._$32 = function() { - return null != this._$dr && this._$dr != yt._$2o() - }, W.prototype.getType = function() {}, j._$42 = 0, j.prototype._$1b = function() { - return this._$3S - }, j.prototype.getDrawDataList = function() { - return this._$aS - }, j.prototype._$F0 = function(t) { - this._$NL = t._$nP(), this._$aS = t._$nP(), this._$3S = t._$nP() - }, j.prototype._$kr = function(t) { - t._$Zo(this._$3S), t._$xo(this._$aS), this._$3S = null, this._$aS = null - }, q.prototype = new i, q.loadModel = function(t) { - var e = new q; - return i._$62(e, t), e - }, q.loadModel = function(t) { - var e = new q; - return i._$62(e, t), e - }, q._$to = function() { - return new q - }, q._$er = function(t) { - var i = new _$5("../_$_r/_$t0/_$Ri/_$_P._$d"); - if (0 == i.exists()) throw new _$ls("_$t0 _$_ _$6 _$Ui :: " + i._$PL()); - for (var e = ["../_$_r/_$t0/_$Ri/_$_P.512/_$CP._$1", "../_$_r/_$t0/_$Ri/_$_P.512/_$vP._$1", "../_$_r/_$t0/_$Ri/_$_P.512/_$EP._$1", "../_$_r/_$t0/_$Ri/_$_P.512/_$pP._$1"], r = q.loadModel(i._$3b()), o = 0; o < e.length; o++) { - var n = new _$5(e[o]); - if (0 == n.exists()) throw new _$ls("_$t0 _$_ _$6 _$Ui :: " + n._$PL()); - r.setTexture(o, _$nL._$_o(t, n._$3b())) - } - return r - }, q.prototype.setGL = function(t) { - this._$zo.setGL(t) - }, q.prototype.setTransform = function(t) { - this._$zo.setTransform(t) - }, q.prototype.draw = function() { - this._$5S.draw(this._$zo) - }, q.prototype._$K2 = function() { - this._$zo._$K2() - }, q.prototype.setTexture = function(t, i) { - null == this._$zo && _._$li("_$Yi for QT _$ki / _$XS() is _$6 _$ui!!"), this._$zo.setTexture(t, i) - }, q.prototype.setTexture = function(t, i) { - null == this._$zo && _._$li("_$Yi for QT _$ki / _$XS() is _$6 _$ui!!"), this._$zo.setTexture(t, i) - }, q.prototype._$Rs = function() { - return this._$zo._$Rs() - }, q.prototype._$Ds = function(t) { - this._$zo._$Ds(t) - }, q.prototype.getDrawParam = function() { - return this._$zo - }, J.prototype = new s, J._$cs = "VISIBLE:", J._$ar = "LAYOUT:", J.MTN_PREFIX_FADEIN = "FADEIN:", J.MTN_PREFIX_FADEOUT = "FADEOUT:", J._$Co = 0, J._$1T = 1, J.loadMotion = function(t) { - var i = k._$C(t); - return J.loadMotion(i) - }, J.loadMotion = function(t) { - t instanceof ArrayBuffer && (t = new DataView(t)); - var i = new J, - e = [0], - r = t.byteLength; - i._$yT = 0; - for (var o = 0; o < r; ++o) { - var n = Q(t, o), - s = n.charCodeAt(0); - if ("\n" != n && "\r" != n) if ("#" != n) if ("$" != n) { - if (97 <= s && s <= 122 || 65 <= s && s <= 90 || "_" == n) { - for (var _ = o, a = -1; o < r && ("\r" != (n = Q(t, o)) && "\n" != n); ++o) if ("=" == n) { - a = o; - break - } - if (a >= 0) { - var h = new B; - O.startsWith(t, _, J._$cs) ? 
(h._$RP = B._$hs, h._$4P = O.createString(t, _, a - _)) : O.startsWith(t, _, J._$ar) ? (h._$4P = O.createString(t, _ + 7, a - _ - 7), O.startsWith(t, _ + 7, "ANCHOR_X") ? h._$RP = B._$xs : O.startsWith(t, _ + 7, "ANCHOR_Y") ? h._$RP = B._$us : O.startsWith(t, _ + 7, "SCALE_X") ? h._$RP = B._$qs : O.startsWith(t, _ + 7, "SCALE_Y") ? h._$RP = B._$Ys : O.startsWith(t, _ + 7, "X") ? h._$RP = B._$ws : O.startsWith(t, _ + 7, "Y") && (h._$RP = B._$Ns)) : (h._$RP = B._$Fr, h._$4P = O.createString(t, _, a - _)), i.motions.push(h); - var l = 0, - $ = []; - for (o = a + 1; o < r && ("\r" != (n = Q(t, o)) && "\n" != n); ++o) if ("," != n && " " != n && "\t" != n) { - var u = O._$LS(t, r, o, e); - if (e[0] > 0) { - $.push(u), l++; - var p = e[0]; - if (p < o) { - console.log("_$n0 _$hi . @Live2DMotion loadMotion()\n"); - break - } - o = p - 1 - } - } - h._$I0 = new Float32Array($), l > i._$yT && (i._$yT = l) - } - } - } else { - for (var _ = o, a = -1; o < r && ("\r" != (n = Q(t, o)) && "\n" != n); ++o) if ("=" == n) { - a = o; - break - } - var f = !1; - if (a >= 0) for (a == _ + 4 && "f" == Q(t, _ + 1) && "p" == Q(t, _ + 2) && "s" == Q(t, _ + 3) && (f = !0), o = a + 1; o < r && ("\r" != (n = Q(t, o)) && "\n" != n); ++o) if ("," != n && " " != n && "\t" != n) { - var u = O._$LS(t, r, o, e); - e[0] > 0 && f && 5 < u && u < 121 && (i._$D0 = u), o = e[0] - } - for (; o < r && ("\n" != Q(t, o) && "\r" != Q(t, o)); ++o); - } else for (; o < r && ("\n" != Q(t, o) && "\r" != Q(t, o)); ++o); - } - return i._$rr = 1e3 * i._$yT / i._$D0 | 0, i - }, J.prototype.getDurationMSec = function() { - return this._$E ? -1 : this._$rr - }, J.prototype.getLoopDurationMSec = function() { - return this._$rr - }, J.prototype.dump = function() { - for (var t = 0; t < this.motions.length; t++) { - var i = this.motions[t]; - console.log("_$wL[%s] [%d]. ", i._$4P, i._$I0.length); - for (var e = 0; e < i._$I0.length && e < 10; e++) console.log("%5.2f ,", i._$I0[e]); - console.log("\n") - } - }, J.prototype.updateParamExe = function(t, i, e, r) { - for (var o = i - r._$z2, n = o * this._$D0 / 1e3, s = 0 | n, _ = n - s, a = 0; a < this.motions.length; a++) { - var h = this.motions[a], - l = h._$I0.length, - $ = h._$4P; - if (h._$RP == B._$hs) { - var u = h._$I0[s >= l ? l - 1 : s]; - t.setParamFloat($, u) - } else if (B._$ws <= h._$RP && h._$RP <= B._$Ys); - else { - var p, f = t.getParamIndex($), - c = t.getModelContext(), - d = c.getParamMax(f), - g = c.getParamMin(f), - y = .4 * (d - g), - m = c.getParamFloat(f), - T = h._$I0[s >= l ? l - 1 : s], - P = h._$I0[s + 1 >= l ? l - 1 : s + 1]; - p = T < P && P - T > y || T > P && T - P > y ? T : T + (P - T) * _; - var S = m + (p - m) * e; - t.setParamFloat($, S) - } - } - s >= this._$yT && (this._$E ? 
(r._$z2 = i, this.loopFadeIn && (r._$bs = i)) : r._$9L = !0), this._$eP = e - }, J.prototype._$r0 = function() { - return this._$E - }, J.prototype._$aL = function(t) { - this._$E = t - }, J.prototype._$S0 = function() { - return this._$D0 - }, J.prototype._$U0 = function(t) { - this._$D0 = t - }, J.prototype.isLoopFadeIn = function() { - return this.loopFadeIn - }, J.prototype.setLoopFadeIn = function(t) { - this.loopFadeIn = t - }, N.prototype.clear = function() { - this.size = 0 - }, N.prototype.add = function(t) { - if (this._$P.length <= this.size) { - var i = new Float32Array(2 * this.size); - w._$jT(this._$P, 0, i, 0, this.size), this._$P = i - } - this._$P[this.size++] = t - }, N.prototype._$BL = function() { - var t = new Float32Array(this.size); - return w._$jT(this._$P, 0, t, 0, this.size), t - }, B._$Fr = 0, B._$hs = 1, B._$ws = 100, B._$Ns = 101, B._$xs = 102, B._$us = 103, B._$qs = 104, B._$Ys = 105, Z.prototype = new I, Z._$gT = new Array, Z.prototype._$zP = function() { - this._$GS = new D, this._$GS._$zP() - }, Z.prototype._$F0 = function(t) { - I.prototype._$F0.call(this, t), this._$A = t._$6L(), this._$o = t._$6L(), this._$GS = t._$nP(), this._$Eo = t._$nP(), I.prototype.readV2_opacity.call(this, t) - }, Z.prototype.init = function(t) { - var i = new K(this), - e = (this._$o + 1) * (this._$A + 1); - return null != i._$Cr && (i._$Cr = null), i._$Cr = new Float32Array(2 * e), null != i._$hr && (i._$hr = null), this._$32() ? i._$hr = new Float32Array(2 * e) : i._$hr = null, i - }, Z.prototype._$Nr = function(t, i) { - var e = i; - if (this._$GS._$Ur(t)) { - var r = this._$VT(), - o = Z._$gT; - o[0] = !1, v._$Vr(t, this._$GS, o, r, this._$Eo, e._$Cr, 0, 2), i._$Ib(o[0]), this.interpolateOpacity(t, this._$GS, i, o) - } - }, Z.prototype._$2b = function(t, i) { - var e = i; - if (e._$hS(!0), this._$32()) { - var r = this.getTargetBaseDataID(); - if (e._$8r == I._$ur && (e._$8r = t.getBaseDataIndex(r)), e._$8r < 0) at._$so && _._$li("_$L _$0P _$G :: %s", r), e._$hS(!1); - else { - var o = t.getBaseData(e._$8r), - n = t._$q2(e._$8r); - if (null != o && n._$yo()) { - var s = n.getTotalScale(); - e.setTotalScale_notForClient(s); - var a = n.getTotalOpacity(); - e.setTotalOpacity(a * e.getInterpolatedOpacity()), o._$nb(t, n, e._$Cr, e._$hr, this._$VT(), 0, 2), e._$hS(!0) - } else e._$hS(!1) - } - } else e.setTotalOpacity(e.getInterpolatedOpacity()) - }, Z.prototype._$nb = function(t, i, e, r, o, n, s) { - var _ = i, - a = null != _._$hr ? 
_._$hr : _._$Cr; - Z.transformPoints_sdk2(e, r, o, n, s, a, this._$o, this._$A) - }, Z.transformPoints_sdk2 = function(i, e, r, o, n, s, _, a) { - for (var h, l, $, u = r * n, p = 0, f = 0, c = 0, d = 0, g = 0, y = 0, m = !1, T = o; T < u; T += n) { - var P, S, v, L; - if (v = i[T], L = i[T + 1], P = v * _, S = L * a, P < 0 || S < 0 || _ <= P || a <= S) { - var M = _ + 1; - if (!m) { - m = !0, p = .25 * (s[2 * (0 + 0 * M)] + s[2 * (_ + 0 * M)] + s[2 * (0 + a * M)] + s[2 * (_ + a * M)]), f = .25 * (s[2 * (0 + 0 * M) + 1] + s[2 * (_ + 0 * M) + 1] + s[2 * (0 + a * M) + 1] + s[2 * (_ + a * M) + 1]); - var E = s[2 * (_ + a * M)] - s[2 * (0 + 0 * M)], - A = s[2 * (_ + a * M) + 1] - s[2 * (0 + 0 * M) + 1], - I = s[2 * (_ + 0 * M)] - s[2 * (0 + a * M)], - w = s[2 * (_ + 0 * M) + 1] - s[2 * (0 + a * M) + 1]; - c = .5 * (E + I), d = .5 * (A + w), g = .5 * (E - I), y = .5 * (A - w), p -= .5 * (c + g), f -= .5 * (d + y) - } - if (-2 < v && v < 3 && -2 < L && L < 3) if (v <= 0) if (L <= 0) { - var x = s[2 * (0 + 0 * M)], - O = s[2 * (0 + 0 * M) + 1], - D = p - 2 * c, - R = f - 2 * d, - b = p - 2 * g, - F = f - 2 * y, - C = p - 2 * c - 2 * g, - N = f - 2 * d - 2 * y, - B = .5 * (v - -2), - U = .5 * (L - -2); - B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U)) - } else if (L >= 1) { - var b = s[2 * (0 + a * M)], - F = s[2 * (0 + a * M) + 1], - C = p - 2 * c + 1 * g, - N = f - 2 * d + 1 * y, - x = p + 3 * g, - O = f + 3 * y, - D = p - 2 * c + 3 * g, - R = f - 2 * d + 3 * y, - B = .5 * (v - -2), - U = .5 * (L - 1); - B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U)) - } else { - var G = 0 | S; - G == a && (G = a - 1); - var B = .5 * (v - -2), - U = S - G, - Y = G / a, - k = (G + 1) / a, - b = s[2 * (0 + G * M)], - F = s[2 * (0 + G * M) + 1], - x = s[2 * (0 + (G + 1) * M)], - O = s[2 * (0 + (G + 1) * M) + 1], - C = p - 2 * c + Y * g, - N = f - 2 * d + Y * y, - D = p - 2 * c + k * g, - R = f - 2 * d + k * y; - B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U)) - } else if (1 <= v) if (L <= 0) { - var D = s[2 * (_ + 0 * M)], - R = s[2 * (_ + 0 * M) + 1], - x = p + 3 * c, - O = f + 3 * d, - C = p + 1 * c - 2 * g, - N = f + 1 * d - 2 * y, - b = p + 3 * c - 2 * g, - F = f + 3 * d - 2 * y, - B = .5 * (v - 1), - U = .5 * (L - -2); - B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U)) - } else if (L >= 1) { - var C = s[2 * (_ + a * M)], - N = s[2 * (_ + a * M) + 1], - b = p + 3 * c + 1 * g, - F = f + 3 * d + 1 * y, - D = p + 1 * c + 3 * g, - R = f + 1 * d + 3 * y, - x = p + 3 * c + 3 * g, - O = f + 3 * d + 3 * y, - B = .5 * (v - 1), - U = .5 * (L - 1); - B + U <= 1 ? 
(e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U)) - } else { - var G = 0 | S; - G == a && (G = a - 1); - var B = .5 * (v - 1), - U = S - G, - Y = G / a, - k = (G + 1) / a, - C = s[2 * (_ + G * M)], - N = s[2 * (_ + G * M) + 1], - D = s[2 * (_ + (G + 1) * M)], - R = s[2 * (_ + (G + 1) * M) + 1], - b = p + 3 * c + Y * g, - F = f + 3 * d + Y * y, - x = p + 3 * c + k * g, - O = f + 3 * d + k * y; - B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U)) - } else if (L <= 0) { - var V = 0 | P; - V == _ && (V = _ - 1); - var B = P - V, - U = .5 * (L - -2), - X = V / _, - z = (V + 1) / _, - D = s[2 * (V + 0 * M)], - R = s[2 * (V + 0 * M) + 1], - x = s[2 * (V + 1 + 0 * M)], - O = s[2 * (V + 1 + 0 * M) + 1], - C = p + X * c - 2 * g, - N = f + X * d - 2 * y, - b = p + z * c - 2 * g, - F = f + z * d - 2 * y; - B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U)) - } else if (L >= 1) { - var V = 0 | P; - V == _ && (V = _ - 1); - var B = P - V, - U = .5 * (L - 1), - X = V / _, - z = (V + 1) / _, - C = s[2 * (V + a * M)], - N = s[2 * (V + a * M) + 1], - b = s[2 * (V + 1 + a * M)], - F = s[2 * (V + 1 + a * M) + 1], - D = p + X * c + 3 * g, - R = f + X * d + 3 * y, - x = p + z * c + 3 * g, - O = f + z * d + 3 * y; - B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U)) - } else t.err.printf("_$li calc : %.4f , %.4f\t\t\t\t\t@@BDBoxGrid\n", v, L); - else e[T] = p + v * c + L * g, e[T + 1] = f + v * d + L * y - } else l = P - (0 | P), $ = S - (0 | S), h = 2 * ((0 | P) + (0 | S) * (_ + 1)), l + $ < 1 ? (e[T] = s[h] * (1 - l - $) + s[h + 2] * l + s[h + 2 * (_ + 1)] * $, e[T + 1] = s[h + 1] * (1 - l - $) + s[h + 3] * l + s[h + 2 * (_ + 1) + 1] * $) : (e[T] = s[h + 2 * (_ + 1) + 2] * (l - 1 + $) + s[h + 2 * (_ + 1)] * (1 - l) + s[h + 2] * (1 - $), e[T + 1] = s[h + 2 * (_ + 1) + 3] * (l - 1 + $) + s[h + 2 * (_ + 1) + 1] * (1 - l) + s[h + 3] * (1 - $)) - } - }, Z.prototype.transformPoints_sdk1 = function(t, i, e, r, o, n, s) { - for (var _, a, h, l, $, u, p, f = i, c = this._$o, d = this._$A, g = o * s, y = null != f._$hr ? f._$hr : f._$Cr, m = n; m < g; m += s) at._$ts ? (_ = e[m], a = e[m + 1], _ < 0 ? _ = 0 : _ > 1 && (_ = 1), a < 0 ? a = 0 : a > 1 && (a = 1), _ *= c, a *= d, h = 0 | _, l = 0 | a, h > c - 1 && (h = c - 1), l > d - 1 && (l = d - 1), u = _ - h, p = a - l, $ = 2 * (h + l * (c + 1))) : (_ = e[m] * c, a = e[m + 1] * d, u = _ - (0 | _), p = a - (0 | a), $ = 2 * ((0 | _) + (0 | a) * (c + 1))), u + p < 1 ? 
(r[m] = y[$] * (1 - u - p) + y[$ + 2] * u + y[$ + 2 * (c + 1)] * p, r[m + 1] = y[$ + 1] * (1 - u - p) + y[$ + 3] * u + y[$ + 2 * (c + 1) + 1] * p) : (r[m] = y[$ + 2 * (c + 1) + 2] * (u - 1 + p) + y[$ + 2 * (c + 1)] * (1 - u) + y[$ + 2] * (1 - p), r[m + 1] = y[$ + 2 * (c + 1) + 3] * (u - 1 + p) + y[$ + 2 * (c + 1) + 1] * (1 - u) + y[$ + 3] * (1 - p)) - }, Z.prototype._$VT = function() { - return (this._$o + 1) * (this._$A + 1) - }, Z.prototype.getType = function() { - return I._$_b - }, K.prototype = new _t, tt._$42 = 0, tt.prototype._$zP = function() { - this._$3S = new Array, this._$aS = new Array - }, tt.prototype._$F0 = function(t) { - this._$g0 = t._$8L(), this.visible = t._$8L(), this._$NL = t._$nP(), this._$3S = t._$nP(), this._$aS = t._$nP() - }, tt.prototype.init = function(t) { - var i = new it(this); - return i.setPartsOpacity(this.isVisible() ? 1 : 0), i - }, tt.prototype._$6o = function(t) { - if (null == this._$3S) throw new Error("_$3S _$6 _$Wo@_$6o"); - this._$3S.push(t) - }, tt.prototype._$3o = function(t) { - if (null == this._$aS) throw new Error("_$aS _$6 _$Wo@_$3o"); - this._$aS.push(t) - }, tt.prototype._$Zo = function(t) { - this._$3S = t - }, tt.prototype._$xo = function(t) { - this._$aS = t - }, tt.prototype.isVisible = function() { - return this.visible - }, tt.prototype._$uL = function() { - return this._$g0 - }, tt.prototype._$KP = function(t) { - this.visible = t - }, tt.prototype._$ET = function(t) { - this._$g0 = t - }, tt.prototype.getBaseData = function() { - return this._$3S - }, tt.prototype.getDrawData = function() { - return this._$aS - }, tt.prototype._$p2 = function() { - return this._$NL - }, tt.prototype._$ob = function(t) { - this._$NL = t - }, tt.prototype.getPartsID = function() { - return this._$NL - }, tt.prototype._$MP = function(t) { - this._$NL = t - }, it.prototype = new $, it.prototype.getPartsOpacity = function() { - return this._$VS - }, it.prototype.setPartsOpacity = function(t) { - this._$VS = t - }, et._$L7 = function() { - u._$27(), yt._$27(), b._$27(), l._$27() - }, et.prototype.toString = function() { - return this.id - }, rt.prototype._$F0 = function(t) {}, ot.prototype._$1s = function() { - return this._$4S - }, ot.prototype._$zP = function() { - this._$4S = new Array - }, ot.prototype._$F0 = function(t) { - this._$4S = t._$nP() - }, ot.prototype._$Ks = function(t) { - this._$4S.push(t) - }, nt.tr = new gt, nt._$50 = new gt, nt._$Ti = new Array(0, 0), nt._$Pi = new Array(0, 0), nt._$B = new Array(0, 0), nt.prototype._$lP = function(t, i, e, r) { - this.viewport = new Array(t, i, e, r) - }, nt.prototype._$bL = function() { - this.context.save(); - var t = this.viewport; - null != t && (this.context.beginPath(), this.context._$Li(t[0], t[1], t[2], t[3]), this.context.clip()) - }, nt.prototype._$ei = function() { - this.context.restore() - }, nt.prototype.drawElements = function(t, i, e, r, o, n, s, a) { - try { - o != this._$Qo && (this._$Qo = o, this.context.globalAlpha = o); - for (var h = i.length, l = t.width, $ = t.height, u = this.context, p = this._$xP, f = this._$uP, c = this._$6r, d = this._$3r, g = nt.tr, y = nt._$Ti, m = nt._$Pi, T = nt._$B, P = 0; P < h; P += 3) { - u.save(); - var S = i[P], - v = i[P + 1], - L = i[P + 2], - M = p + c * e[2 * S], - E = f + d * e[2 * S + 1], - A = p + c * e[2 * v], - I = f + d * e[2 * v + 1], - w = p + c * e[2 * L], - x = f + d * e[2 * L + 1]; - s && (s._$PS(M, E, T), M = T[0], E = T[1], s._$PS(A, I, T), A = T[0], I = T[1], s._$PS(w, x, T), w = T[0], x = T[1]); - var O = l * r[2 * S], - D 
= $ - $ * r[2 * S + 1], - R = l * r[2 * v], - b = $ - $ * r[2 * v + 1], - F = l * r[2 * L], - C = $ - $ * r[2 * L + 1], - N = Math.atan2(b - D, R - O), - B = Math.atan2(I - E, A - M), - U = A - M, - G = I - E, - Y = Math.sqrt(U * U + G * G), - k = R - O, - V = b - D, - X = Math.sqrt(k * k + V * V), - z = Y / X; - It._$ni(F, C, O, D, R - O, b - D, -(b - D), R - O, y), It._$ni(w, x, M, E, A - M, I - E, -(I - E), A - M, m); - var H = (m[0] - y[0]) / y[1], - W = Math.min(O, R, F), - j = Math.max(O, R, F), - q = Math.min(D, b, C), - J = Math.max(D, b, C), - Q = Math.floor(W), - Z = Math.floor(q), - K = Math.ceil(j), - tt = Math.ceil(J); - g.identity(), g.translate(M, E), g.rotate(B), g.scale(1, m[1] / y[1]), g.shear(H, 0), g.scale(z, z), g.rotate(-N), g.translate(-O, -D), g.setContext(u); - if (n || (n = 1.2), at.IGNORE_EXPAND && (n = 0), at.USE_CACHED_POLYGON_IMAGE) { - var it = a._$e0; - if (it.gl_cacheImage = it.gl_cacheImage || {}, !it.gl_cacheImage[P]) { - var et = nt.createCanvas(K - Q, tt - Z); - at.DEBUG_DATA.LDGL_CANVAS_MB = at.DEBUG_DATA.LDGL_CANVAS_MB || 0, at.DEBUG_DATA.LDGL_CANVAS_MB += (K - Q) * (tt - Z) * 4; - var rt = et.getContext("2d"); - rt.translate(-Q, -Z), nt.clip(rt, g, n, Y, O, D, R, b, F, C, M, E, A, I, w, x), rt.drawImage(t, 0, 0), it.gl_cacheImage[P] = { - cacheCanvas: et, - cacheContext: rt - } - } - u.drawImage(it.gl_cacheImage[P].cacheCanvas, Q, Z) - } else at.IGNORE_CLIP || nt.clip(u, g, n, Y, O, D, R, b, F, C, M, E, A, I, w, x), at.USE_ADJUST_TRANSLATION && (W = 0, j = l, q = 0, J = $), u.drawImage(t, W, q, j - W, J - q, W, q, j - W, J - q); - u.restore() - } - } catch (t) { - _._$Rb(t) - } - }, nt.clip = function(t, i, e, r, o, n, s, _, a, h, l, $, u, p, f, c) { - e > .02 ? nt.expandClip(t, i, e, r, l, $, u, p, f, c) : nt.clipWithTransform(t, null, o, n, s, _, a, h) - }, nt.expandClip = function(t, i, e, r, o, n, s, _, a, h) { - var l = s - o, - $ = _ - n, - u = a - o, - p = h - n, - f = l * p - $ * u > 0 ? 
e : -e, - c = -$, - d = l, - g = a - s, - y = h - _, - m = -y, - T = g, - P = Math.sqrt(g * g + y * y), - S = -p, - v = u, - L = Math.sqrt(u * u + p * p), - M = o - f * c / r, - E = n - f * d / r, - A = s - f * c / r, - I = _ - f * d / r, - w = s - f * m / P, - x = _ - f * T / P, - O = a - f * m / P, - D = h - f * T / P, - R = o + f * S / L, - b = n + f * v / L, - F = a + f * S / L, - C = h + f * v / L, - N = nt._$50; - return null != i._$P2(N) && (nt.clipWithTransform(t, N, M, E, A, I, w, x, O, D, F, C, R, b), !0) - }, nt.clipWithTransform = function(t, i, e, r, o, n, s, a) { - if (arguments.length < 7) return void _._$li("err : @LDGL.clip()"); - if (!(arguments[1] instanceof gt)) return void _._$li("err : a[0] is _$6 LDTransform @LDGL.clip()"); - var h = nt._$B, - l = i, - $ = arguments; - if (t.beginPath(), l) { - l._$PS($[2], $[3], h), t.moveTo(h[0], h[1]); - for (var u = 4; u < $.length; u += 2) l._$PS($[u], $[u + 1], h), t.lineTo(h[0], h[1]) - } else { - t.moveTo($[2], $[3]); - for (var u = 4; u < $.length; u += 2) t.lineTo($[u], $[u + 1]) - } - t.clip() - }, nt.createCanvas = function(t, i) { - var e = document.createElement("canvas"); - return e.setAttribute("width", t), e.setAttribute("height", i), e || _._$li("err : " + e), e - }, nt.dumpValues = function() { - for (var t = "", i = 0; i < arguments.length; i++) t += "[" + i + "]= " + arguments[i].toFixed(3) + " , "; - console.log(t) - }, st.prototype._$F0 = function(t) { - this._$TT = t._$_T(), this._$LT = t._$_T(), this._$FS = t._$_T(), this._$wL = t._$nP() - }, st.prototype.getMinValue = function() { - return this._$TT - }, st.prototype.getMaxValue = function() { - return this._$LT - }, st.prototype.getDefaultValue = function() { - return this._$FS - }, st.prototype.getParamID = function() { - return this._$wL - }, _t.prototype._$yo = function() { - return this._$AT && !this._$JS - }, _t.prototype._$hS = function(t) { - this._$AT = t - }, _t.prototype._$GT = function() { - return this._$e0 - }, _t.prototype._$l2 = function(t) { - this._$IP = t - }, _t.prototype.getPartsIndex = function() { - return this._$IP - }, _t.prototype._$x2 = function() { - return this._$JS - }, _t.prototype._$Ib = function(t) { - this._$JS = t - }, _t.prototype.getTotalScale = function() { - return this.totalScale - }, _t.prototype.setTotalScale_notForClient = function(t) { - this.totalScale = t - }, _t.prototype.getInterpolatedOpacity = function() { - return this._$7s - }, _t.prototype.setInterpolatedOpacity = function(t) { - this._$7s = t - }, _t.prototype.getTotalOpacity = function(t) { - return this.totalOpacity - }, _t.prototype.setTotalOpacity = function(t) { - this.totalOpacity = t - }, at._$2s = "2.1.00_1", at._$Kr = 201001e3, at._$sP = !0, at._$so = !0, at._$cb = !1, at._$3T = !0, at._$Ts = !0, at._$fb = !0, at._$ts = !0, at.L2D_DEFORMER_EXTEND = !0, at._$Wb = !1; - at._$yr = !1, at._$Zs = !1, at.L2D_NO_ERROR = 0, at._$i7 = 1e3, at._$9s = 1001, at._$es = 1100, at._$r7 = 2e3, at._$07 = 2001, at._$b7 = 2002, at._$H7 = 4e3, at.L2D_COLOR_BLEND_MODE_MULT = 0, at.L2D_COLOR_BLEND_MODE_ADD = 1, at.L2D_COLOR_BLEND_MODE_INTERPOLATE = 2, at._$6b = !0, at._$cT = 0, at.clippingMaskBufferSize = 256, at.glContext = new Array, at.frameBuffers = new Array, at.fTexture = new Array, at.IGNORE_CLIP = !1, at.IGNORE_EXPAND = !1, at.EXPAND_W = 2, at.USE_ADJUST_TRANSLATION = !0, at.USE_CANVAS_TRANSFORM = !0, at.USE_CACHED_POLYGON_IMAGE = !1, at.DEBUG_DATA = {}, at.PROFILE_IOS_SPEED = { - PROFILE_NAME: "iOS Speed", - USE_ADJUST_TRANSLATION: !0, - 
USE_CACHED_POLYGON_IMAGE: !0, - EXPAND_W: 4 - }, at.PROFILE_IOS_QUALITY = { - PROFILE_NAME: "iOS HiQ", - USE_ADJUST_TRANSLATION: !0, - USE_CACHED_POLYGON_IMAGE: !1, - EXPAND_W: 2 - }, at.PROFILE_IOS_DEFAULT = at.PROFILE_IOS_QUALITY, at.PROFILE_ANDROID = { - PROFILE_NAME: "Android", - USE_ADJUST_TRANSLATION: !1, - USE_CACHED_POLYGON_IMAGE: !1, - EXPAND_W: 2 - }, at.PROFILE_DESKTOP = { - PROFILE_NAME: "Desktop", - USE_ADJUST_TRANSLATION: !1, - USE_CACHED_POLYGON_IMAGE: !1, - EXPAND_W: 2 - }, at.initProfile = function() { - Et.isIOS() ? at.setupProfile(at.PROFILE_IOS_DEFAULT) : Et.isAndroid() ? at.setupProfile(at.PROFILE_ANDROID) : at.setupProfile(at.PROFILE_DESKTOP) - }, at.setupProfile = function(t, i) { - if ("number" == typeof t) switch (t) { - case 9901: - t = at.PROFILE_IOS_SPEED; - break; - case 9902: - t = at.PROFILE_IOS_QUALITY; - break; - case 9903: - t = at.PROFILE_IOS_DEFAULT; - break; - case 9904: - t = at.PROFILE_ANDROID; - break; - case 9905: - t = at.PROFILE_DESKTOP; - break; - default: - alert("profile _$6 _$Ui : " + t) - } - arguments.length < 2 && (i = !0), i && console.log("profile : " + t.PROFILE_NAME); - for (var e in t) at[e] = t[e], i && console.log(" [" + e + "] = " + t[e]) - }, at.init = function() { - if (at._$6b) { - console.log("Live2D %s", at._$2s), at._$6b = !1; - !0, at.initProfile() - } - }, at.getVersionStr = function() { - return at._$2s - }, at.getVersionNo = function() { - return at._$Kr - }, at._$sT = function(t) { - at._$cT = t - }, at.getError = function() { - var t = at._$cT; - return at._$cT = 0, t - }, at.dispose = function() { - at.glContext = [], at.frameBuffers = [], at.fTexture = [] - }, at.setGL = function(t, i) { - var e = i || 0; - at.glContext[e] = t - }, at.getGL = function(t) { - return at.glContext[t] - }, at.setClippingMaskBufferSize = function(t) { - at.clippingMaskBufferSize = t - }, at.getClippingMaskBufferSize = function() { - return at.clippingMaskBufferSize - }, at.deleteBuffer = function(t) { - at.getGL(t).deleteFramebuffer(at.frameBuffers[t].framebuffer), delete at.frameBuffers[t], delete at.glContext[t] - }, ht._$r2 = function(t) { - return t < 0 ? 0 : t > 1 ? 1 : .5 - .5 * Math.cos(t * Lt.PI_F) - }, lt._$fr = -1, lt.prototype.toString = function() { - return this._$ib - }, $t.prototype = new W, $t._$42 = 0, $t._$Os = 30, $t._$ms = 0, $t._$ns = 1, $t._$_s = 2, $t._$gT = new Array, $t.prototype._$_S = function(t) { - this._$LP = t - }, $t.prototype.getTextureNo = function() { - return this._$LP - }, $t.prototype._$ZL = function() { - return this._$Qi - }, $t.prototype._$H2 = function() { - return this._$JP - }, $t.prototype.getNumPoints = function() { - return this._$d0 - }, $t.prototype.getType = function() { - return W._$wb - }, $t.prototype._$B2 = function(t, i, e) { - var r = i, - o = null != r._$hr ? 
r._$hr : r._$Cr; - switch (U._$do) { - default: - case U._$Ms: - throw new Error("_$L _$ro "); - case U._$Qs: - for (var n = this._$d0 - 1; n >= 0; --n) o[n * U._$No + 4] = e - } - }, $t.prototype._$zP = function() { - this._$GS = new D, this._$GS._$zP() - }, $t.prototype._$F0 = function(t) { - W.prototype._$F0.call(this, t), this._$LP = t._$6L(), this._$d0 = t._$6L(), this._$Yo = t._$6L(); - var i = t._$nP(); - this._$BP = new Int16Array(3 * this._$Yo); - for (var e = 3 * this._$Yo - 1; e >= 0; --e) this._$BP[e] = i[e]; - if (this._$Eo = t._$nP(), this._$Qi = t._$nP(), t.getFormatVersion() >= G._$s7) { - if (this._$JP = t._$6L(), 0 != this._$JP) { - if (0 != (1 & this._$JP)) { - var r = t._$6L(); - null == this._$5P && (this._$5P = new Object), this._$5P._$Hb = parseInt(r) - } - 0 != (this._$JP & $t._$Os) ? this._$6s = (this._$JP & $t._$Os) >> 1 : this._$6s = $t._$ms, 0 != (32 & this._$JP) && (this.culling = !1) - } - } else this._$JP = 0 - }, $t.prototype.init = function(t) { - var i = new ut(this), - e = this._$d0 * U._$No, - r = this._$32(); - switch (null != i._$Cr && (i._$Cr = null), i._$Cr = new Float32Array(e), null != i._$hr && (i._$hr = null), i._$hr = r ? new Float32Array(e) : null, U._$do) { - default: - case U._$Ms: - if (U._$Ls) for (var o = this._$d0 - 1; o >= 0; --o) { - var n = o << 1; - this._$Qi[n + 1] = 1 - this._$Qi[n + 1] - } - break; - case U._$Qs: - for (var o = this._$d0 - 1; o >= 0; --o) { - var n = o << 1, - s = o * U._$No, - _ = this._$Qi[n], - a = this._$Qi[n + 1]; - i._$Cr[s] = _, i._$Cr[s + 1] = a, i._$Cr[s + 4] = 0, r && (i._$hr[s] = _, i._$hr[s + 1] = a, i._$hr[s + 4] = 0) - } - } - return i - }, $t.prototype._$Nr = function(t, i) { - var e = i; - if (this != e._$GT() && console.log("### assert!! ### "), this._$GS._$Ur(t) && (W.prototype._$Nr.call(this, t, e), !e._$IS[0])) { - var r = $t._$gT; - r[0] = !1, v._$Vr(t, this._$GS, r, this._$d0, this._$Eo, e._$Cr, U._$i2, U._$No) - } - }, $t.prototype._$2b = function(t, i) { - try { - this != i._$GT() && console.log("### assert!! ### "); - var e = !1; - i._$IS[0] && (e = !0); - var r = i; - if (!e && (W.prototype._$2b.call(this, t), this._$32())) { - var o = this.getTargetBaseDataID(); - if (r._$8r == W._$ur && (r._$8r = t.getBaseDataIndex(o)), r._$8r < 0) at._$so && _._$li("_$L _$0P _$G :: %s", o); - else { - var n = t.getBaseData(r._$8r), - s = t._$q2(r._$8r); - null == n || s._$x2() ? r._$AT = !1 : (n._$nb(t, s, r._$Cr, r._$hr, this._$d0, U._$i2, U._$No), r._$AT = !0), r.baseOpacity = s.getTotalOpacity() - } - } - } catch (t) { - throw t - } - }, $t.prototype.draw = function(t, i, e) { - if (this != e._$GT() && console.log("### assert!! ### "), !e._$IS[0]) { - var r = e, - o = this._$LP; - o < 0 && (o = 1); - var n = this.getOpacity(i, r) * e._$VS * e.baseOpacity, - s = null != r._$hr ? r._$hr : r._$Cr; - t.setClipBufPre_clipContextForDraw(e.clipBufPre_clipContext), t._$WP(this.culling), t._$Uo(o, 3 * this._$Yo, this._$BP, s, this._$Qi, n, this._$6s, r) - } - }, $t.prototype.dump = function() { - console.log(" _$yi( %d ) , _$d0( %d ) , _$Yo( %d ) \n", this._$LP, this._$d0, this._$Yo), console.log(" _$Oi _$di = { "); - for (var t = 0; t < this._$BP.length; t++) console.log("%5d ,", this._$BP[t]); - console.log("\n _$5i _$30"); - for (var t = 0; t < this._$Eo.length; t++) { - console.log("\n _$30[%d] = ", t); - for (var i = this._$Eo[t], e = 0; e < i.length; e++) console.log("%6.2f, ", i[e]) - } - console.log("\n") - }, $t.prototype._$72 = function(t) { - return null == this._$5P ? 
null : this._$5P[t] - }, $t.prototype.getIndexArray = function() { - return this._$BP - }, ut.prototype = new Mt, ut.prototype.getTransformedPoints = function() { - return null != this._$hr ? this._$hr : this._$Cr - }, pt.prototype._$HT = function(t) { - this.x = t.x, this.y = t.y - }, pt.prototype._$HT = function(t, i) { - this.x = t, this.y = i - }, ft.prototype = new i, ft.loadModel = function(t) { - var e = new ft; - return i._$62(e, t), e - }, ft.loadModel = function(t, e) { - var r = e || 0, - o = new ft(r); - return i._$62(o, t), o - }, ft._$to = function() { - return new ft - }, ft._$er = function(t) { - var i = new _$5("../_$_r/_$t0/_$Ri/_$_P._$d"); - if (0 == i.exists()) throw new _$ls("_$t0 _$_ _$6 _$Ui :: " + i._$PL()); - for (var e = ["../_$_r/_$t0/_$Ri/_$_P.512/_$CP._$1", "../_$_r/_$t0/_$Ri/_$_P.512/_$vP._$1", "../_$_r/_$t0/_$Ri/_$_P.512/_$EP._$1", "../_$_r/_$t0/_$Ri/_$_P.512/_$pP._$1"], r = ft.loadModel(i._$3b()), o = 0; o < e.length; o++) { - var n = new _$5(e[o]); - if (0 == n.exists()) throw new _$ls("_$t0 _$_ _$6 _$Ui :: " + n._$PL()); - r.setTexture(o, _$nL._$_o(t, n._$3b())) - } - return r - }, ft.prototype.setGL = function(t) { - at.setGL(t) - }, ft.prototype.setTransform = function(t) { - this.drawParamWebGL.setTransform(t) - }, ft.prototype.update = function() { - this._$5S.update(), this._$5S.preDraw(this.drawParamWebGL) - }, ft.prototype.draw = function() { - this._$5S.draw(this.drawParamWebGL) - }, ft.prototype._$K2 = function() { - this.drawParamWebGL._$K2() - }, ft.prototype.setTexture = function(t, i) { - null == this.drawParamWebGL && _._$li("_$Yi for QT _$ki / _$XS() is _$6 _$ui!!"), this.drawParamWebGL.setTexture(t, i) - }, ft.prototype.setTexture = function(t, i) { - null == this.drawParamWebGL && _._$li("_$Yi for QT _$ki / _$XS() is _$6 _$ui!!"), this.drawParamWebGL.setTexture(t, i) - }, ft.prototype._$Rs = function() { - return this.drawParamWebGL._$Rs() - }, ft.prototype._$Ds = function(t) { - this.drawParamWebGL._$Ds(t) - }, ft.prototype.getDrawParam = function() { - return this.drawParamWebGL - }, ft.prototype.setMatrix = function(t) { - this.drawParamWebGL.setMatrix(t) - }, ft.prototype.setPremultipliedAlpha = function(t) { - this.drawParamWebGL.setPremultipliedAlpha(t) - }, ft.prototype.isPremultipliedAlpha = function() { - return this.drawParamWebGL.isPremultipliedAlpha() - }, ft.prototype.setAnisotropy = function(t) { - this.drawParamWebGL.setAnisotropy(t) - }, ft.prototype.getAnisotropy = function() { - return this.drawParamWebGL.getAnisotropy() - }, ct.prototype._$tb = function() { - return this.motions - }, ct.prototype.startMotion = function(t, i) { - for (var e = null, r = this.motions.length, o = 0; o < r; ++o) null != (e = this.motions[o]) && (e._$qS(e._$w0.getFadeOut()), this._$eb && _._$Ji("MotionQueueManager[size:%2d]->startMotion() / start _$K _$3 (m%d)\n", r, e._$sr)); - if (null == t) return -1; - e = new dt, e._$w0 = t, this.motions.push(e); - var n = e._$sr; - return this._$eb && _._$Ji("MotionQueueManager[size:%2d]->startMotion() / new _$w0 (m%d)\n", r, n), n - }, ct.prototype.updateParam = function(t) { - try { - for (var i = !1, e = 0; e < this.motions.length; e++) { - var r = this.motions[e]; - if (null != r) { - var o = r._$w0; - null != o ? 
(o.updateParam(t, r), i = !0, r.isFinished() && (this._$eb && _._$Ji("MotionQueueManager[size:%2d]->updateParam() / _$T0 _$w0 (m%d)\n", this.motions.length - 1, r._$sr), this.motions.splice(e, 1), e--)) : (this.motions = this.motions.splice(e, 1), e--) - } else this.motions.splice(e, 1), e-- - } - return i - } catch (t) { - return _._$li(t), !0 - } - }, ct.prototype.isFinished = function(t) { - if (arguments.length >= 1) { - for (var i = 0; i < this.motions.length; i++) { - var e = this.motions[i]; - if (null != e && (e._$sr == t && !e.isFinished())) return !1 - } - return !0 - } - for (var i = 0; i < this.motions.length; i++) { - var e = this.motions[i]; - if (null != e) { - if (null != e._$w0) { - if (!e.isFinished()) return !1 - } else this.motions.splice(i, 1), i-- - } else this.motions.splice(i, 1), i-- - } - return !0 - }, ct.prototype.stopAllMotions = function() { - for (var t = 0; t < this.motions.length; t++) { - var i = this.motions[t]; - if (null != i) { - i._$w0; - this.motions.splice(t, 1), t-- - } else this.motions.splice(t, 1), t-- - } - }, ct.prototype._$Zr = function(t) { - this._$eb = t - }, ct.prototype._$e = function() { - console.log("-- _$R --\n"); - for (var t = 0; t < this.motions.length; t++) { - var i = this.motions[t], - e = i._$w0; - console.log("MotionQueueEnt[%d] :: %s\n", this.motions.length, e.toString()) - } - }, dt._$Gs = 0, dt.prototype.isFinished = function() { - return this._$9L - }, dt.prototype._$qS = function(t) { - var i = w.getUserTimeMSec(), - e = i + t; - (this._$Do < 0 || e < this._$Do) && (this._$Do = e) - }, dt.prototype._$Bs = function() { - return this._$sr - }, gt.prototype.setContext = function(t) { - var i = this.m; - t.transform(i[0], i[1], i[3], i[4], i[6], i[7]) - }, gt.prototype.toString = function() { - for (var t = "LDTransform { ", i = 0; i < 9; i++) t += this.m[i].toFixed(2) + " ,"; - return t += " }" - }, gt.prototype.identity = function() { - var t = this.m; - t[0] = t[4] = t[8] = 1, t[1] = t[2] = t[3] = t[5] = t[6] = t[7] = 0 - }, gt.prototype._$PS = function(t, i, e) { - null == e && (e = new Array(0, 0)); - var r = this.m; - return e[0] = r[0] * t + r[3] * i + r[6], e[1] = r[1] * t + r[4] * i + r[7], e - }, gt.prototype._$P2 = function(t) { - t || (t = new gt); - var i = this.m, - e = i[0], - r = i[1], - o = i[2], - n = i[3], - s = i[4], - _ = i[5], - a = i[6], - h = i[7], - l = i[8], - $ = e * s * l + r * _ * a + o * n * h - e * _ * h - o * s * a - r * n * l; - if (0 == $) return null; - var u = 1 / $; - return t.m[0] = u * (s * l - h * _), t.m[1] = u * (h * o - r * l), t.m[2] = u * (r * _ - s * o), t.m[3] = u * (a * _ - n * l), t.m[4] = u * (e * l - a * o), t.m[5] = u * (n * o - e * _), t.m[6] = u * (n * h - a * s), t.m[7] = u * (a * r - e * h), t.m[8] = u * (e * s - n * r), t - }, gt.prototype.transform = function(t, i, e) { - null == e && (e = new Array(0, 0)); - var r = this.m; - return e[0] = r[0] * t + r[3] * i + r[6], e[1] = r[1] * t + r[4] * i + r[7], e - }, gt.prototype.translate = function(t, i) { - var e = this.m; - e[6] = e[0] * t + e[3] * i + e[6], e[7] = e[1] * t + e[4] * i + e[7], e[8] = e[2] * t + e[5] * i + e[8] - }, gt.prototype.scale = function(t, i) { - var e = this.m; - e[0] *= t, e[1] *= t, e[2] *= t, e[3] *= i, e[4] *= i, e[5] *= i - }, gt.prototype.shear = function(t, i) { - var e = this.m, - r = e[0] + e[3] * i, - o = e[1] + e[4] * i, - n = e[2] + e[5] * i; - e[3] = e[0] * t + e[3], e[4] = e[1] * t + e[4], e[5] = e[2] * t + e[5], e[0] = r, e[1] = o, e[2] = n - }, gt.prototype.rotate = function(t) { - 
var i = this.m, - e = Math.cos(t), - r = Math.sin(t), - o = i[0] * e + i[3] * r, - n = i[1] * e + i[4] * r, - s = i[2] * e + i[5] * r; - i[3] = -i[0] * r + i[3] * e, i[4] = -i[1] * r + i[4] * e, i[5] = -i[2] * r + i[5] * e, i[0] = o, i[1] = n, i[2] = s - }, gt.prototype.concatenate = function(t) { - var i = this.m, - e = t.m, - r = i[0] * e[0] + i[3] * e[1] + i[6] * e[2], - o = i[1] * e[0] + i[4] * e[1] + i[7] * e[2], - n = i[2] * e[0] + i[5] * e[1] + i[8] * e[2], - s = i[0] * e[3] + i[3] * e[4] + i[6] * e[5], - _ = i[1] * e[3] + i[4] * e[4] + i[7] * e[5], - a = i[2] * e[3] + i[5] * e[4] + i[8] * e[5], - h = i[0] * e[6] + i[3] * e[7] + i[6] * e[8], - l = i[1] * e[6] + i[4] * e[7] + i[7] * e[8], - $ = i[2] * e[6] + i[5] * e[7] + i[8] * e[8]; - m[0] = r, m[1] = o, m[2] = n, m[3] = s, m[4] = _, m[5] = a, m[6] = h, m[7] = l, m[8] = $ - }, yt.prototype = new et, yt._$eT = null, yt._$tP = new Object, yt._$2o = function() { - return null == yt._$eT && (yt._$eT = yt.getID("DST_BASE")), yt._$eT - }, yt._$27 = function() { - yt._$tP.clear(), yt._$eT = null - }, yt.getID = function(t) { - var i = yt._$tP[t]; - return null == i && (i = new yt(t), yt._$tP[t] = i), i - }, yt.prototype._$3s = function() { - return new yt - }, mt.prototype = new E, mt._$9r = function(t) { - return new Float32Array(t) - }, mt._$vb = function(t) { - return new Int16Array(t) - }, mt._$cr = function(t, i) { - return null == t || t._$yL() < i.length ? (t = mt._$9r(2 * i.length), t.put(i), t._$oT(0)) : (t.clear(), t.put(i), t._$oT(0)), t - }, mt._$mb = function(t, i) { - return null == t || t._$yL() < i.length ? (t = mt._$vb(2 * i.length), t.put(i), t._$oT(0)) : (t.clear(), t.put(i), t._$oT(0)), t - }, mt._$Hs = function() { - return this._$Gr - }, mt._$as = function(t) { - this._$Gr = t - }, mt.prototype.getGL = function() { - return this.gl - }, mt.prototype.setGL = function(t) { - this.gl = t - }, mt.prototype.setTransform = function(t) { - this.transform = t - }, mt.prototype._$ZT = function() { - var t = this.gl; - this.firstDraw && (this.initShader(), this.firstDraw = !1, this.anisotropyExt = t.getExtension("EXT_texture_filter_anisotropic") || t.getExtension("WEBKIT_EXT_texture_filter_anisotropic") || t.getExtension("MOZ_EXT_texture_filter_anisotropic"), this.anisotropyExt && (this.maxAnisotropy = t.getParameter(this.anisotropyExt.MAX_TEXTURE_MAX_ANISOTROPY_EXT))), t.disable(t.SCISSOR_TEST), t.disable(t.STENCIL_TEST), t.disable(t.DEPTH_TEST), t.frontFace(t.CW), t.enable(t.BLEND), t.colorMask(1, 1, 1, 1), t.bindBuffer(t.ARRAY_BUFFER, null), t.bindBuffer(t.ELEMENT_ARRAY_BUFFER, null) - }, mt.prototype._$Uo = function(t, i, e, r, o, n, s, _) { - if (!(n < .01 && null == this.clipBufPre_clipContextMask)) { - var a = (n > .9 && at.EXPAND_W, this.gl); - if (null == this.gl) throw new Error("gl is null"); - var h = 1 * this._$C0 * n, - l = 1 * this._$tT * n, - $ = 1 * this._$WL * n, - u = this._$lT * n; - if (null != this.clipBufPre_clipContextMask) { - a.frontFace(a.CCW), a.useProgram(this.shaderProgram), this._$vS = Tt(a, this._$vS, r), this._$no = Pt(a, this._$no, e), a.enableVertexAttribArray(this.a_position_Loc), a.vertexAttribPointer(this.a_position_Loc, 2, a.FLOAT, !1, 0, 0), this._$NT = Tt(a, this._$NT, o), a.activeTexture(a.TEXTURE1), a.bindTexture(a.TEXTURE_2D, this.textures[t]), a.uniform1i(this.s_texture0_Loc, 1), a.enableVertexAttribArray(this.a_texCoord_Loc), a.vertexAttribPointer(this.a_texCoord_Loc, 2, a.FLOAT, !1, 0, 0), a.uniformMatrix4fv(this.u_matrix_Loc, !1, 
this.getClipBufPre_clipContextMask().matrixForMask); - var p = this.getClipBufPre_clipContextMask().layoutChannelNo, - f = this.getChannelFlagAsColor(p); - a.uniform4f(this.u_channelFlag, f.r, f.g, f.b, f.a); - var c = this.getClipBufPre_clipContextMask().layoutBounds; - a.uniform4f(this.u_baseColor_Loc, 2 * c.x - 1, 2 * c.y - 1, 2 * c._$EL() - 1, 2 * c._$5T() - 1), a.uniform1i(this.u_maskFlag_Loc, !0) - } else if (null != this.getClipBufPre_clipContextDraw()) { - a.useProgram(this.shaderProgramOff), this._$vS = Tt(a, this._$vS, r), this._$no = Pt(a, this._$no, e), a.enableVertexAttribArray(this.a_position_Loc_Off), a.vertexAttribPointer(this.a_position_Loc_Off, 2, a.FLOAT, !1, 0, 0), this._$NT = Tt(a, this._$NT, o), a.activeTexture(a.TEXTURE1), a.bindTexture(a.TEXTURE_2D, this.textures[t]), a.uniform1i(this.s_texture0_Loc_Off, 1), a.enableVertexAttribArray(this.a_texCoord_Loc_Off), a.vertexAttribPointer(this.a_texCoord_Loc_Off, 2, a.FLOAT, !1, 0, 0), a.uniformMatrix4fv(this.u_clipMatrix_Loc_Off, !1, this.getClipBufPre_clipContextDraw().matrixForDraw), a.uniformMatrix4fv(this.u_matrix_Loc_Off, !1, this.matrix4x4), a.activeTexture(a.TEXTURE2), a.bindTexture(a.TEXTURE_2D, at.fTexture[this.glno]), a.uniform1i(this.s_texture1_Loc_Off, 2); - var p = this.getClipBufPre_clipContextDraw().layoutChannelNo, - f = this.getChannelFlagAsColor(p); - a.uniform4f(this.u_channelFlag_Loc_Off, f.r, f.g, f.b, f.a), a.uniform4f(this.u_baseColor_Loc_Off, h, l, $, u) - } else a.useProgram(this.shaderProgram), this._$vS = Tt(a, this._$vS, r), this._$no = Pt(a, this._$no, e), a.enableVertexAttribArray(this.a_position_Loc), a.vertexAttribPointer(this.a_position_Loc, 2, a.FLOAT, !1, 0, 0), this._$NT = Tt(a, this._$NT, o), a.activeTexture(a.TEXTURE1), a.bindTexture(a.TEXTURE_2D, this.textures[t]), a.uniform1i(this.s_texture0_Loc, 1), a.enableVertexAttribArray(this.a_texCoord_Loc), a.vertexAttribPointer(this.a_texCoord_Loc, 2, a.FLOAT, !1, 0, 0), a.uniformMatrix4fv(this.u_matrix_Loc, !1, this.matrix4x4), a.uniform4f(this.u_baseColor_Loc, h, l, $, u), a.uniform1i(this.u_maskFlag_Loc, !1); - this.culling ? 
this.gl.enable(a.CULL_FACE) : this.gl.disable(a.CULL_FACE), this.gl.enable(a.BLEND); - var d, g, y, m; - if (null != this.clipBufPre_clipContextMask) d = a.ONE, g = a.ONE_MINUS_SRC_ALPHA, y = a.ONE, m = a.ONE_MINUS_SRC_ALPHA; - else switch (s) { - case $t._$ms: - d = a.ONE, g = a.ONE_MINUS_SRC_ALPHA, y = a.ONE, m = a.ONE_MINUS_SRC_ALPHA; - break; - case $t._$ns: - d = a.ONE, g = a.ONE, y = a.ZERO, m = a.ONE; - break; - case $t._$_s: - d = a.DST_COLOR, g = a.ONE_MINUS_SRC_ALPHA, y = a.ZERO, m = a.ONE - } - a.blendEquationSeparate(a.FUNC_ADD, a.FUNC_ADD), a.blendFuncSeparate(d, g, y, m), this.anisotropyExt && a.texParameteri(a.TEXTURE_2D, this.anisotropyExt.TEXTURE_MAX_ANISOTROPY_EXT, this.maxAnisotropy); - var T = e.length; - a.drawElements(a.TRIANGLES, T, a.UNSIGNED_SHORT, 0), a.bindTexture(a.TEXTURE_2D, null) - } - }, mt.prototype._$Rs = function() { - throw new Error("_$Rs") - }, mt.prototype._$Ds = function(t) { - throw new Error("_$Ds") - }, mt.prototype._$K2 = function() { - for (var t = 0; t < this.textures.length; t++) { - 0 != this.textures[t] && (this.gl._$K2(1, this.textures, t), this.textures[t] = null) - } - }, mt.prototype.setTexture = function(t, i) { - this.textures[t] = i - }, mt.prototype.initShader = function() { - var t = this.gl; - this.loadShaders2(), this.a_position_Loc = t.getAttribLocation(this.shaderProgram, "a_position"), this.a_texCoord_Loc = t.getAttribLocation(this.shaderProgram, "a_texCoord"), this.u_matrix_Loc = t.getUniformLocation(this.shaderProgram, "u_mvpMatrix"), this.s_texture0_Loc = t.getUniformLocation(this.shaderProgram, "s_texture0"), this.u_channelFlag = t.getUniformLocation(this.shaderProgram, "u_channelFlag"), this.u_baseColor_Loc = t.getUniformLocation(this.shaderProgram, "u_baseColor"), this.u_maskFlag_Loc = t.getUniformLocation(this.shaderProgram, "u_maskFlag"), this.a_position_Loc_Off = t.getAttribLocation(this.shaderProgramOff, "a_position"), this.a_texCoord_Loc_Off = t.getAttribLocation(this.shaderProgramOff, "a_texCoord"), this.u_matrix_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "u_mvpMatrix"), this.u_clipMatrix_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "u_ClipMatrix"), this.s_texture0_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "s_texture0"), this.s_texture1_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "s_texture1"), this.u_channelFlag_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "u_channelFlag"), this.u_baseColor_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "u_baseColor") - }, mt.prototype.disposeShader = function() { - var t = this.gl; - this.shaderProgram && (t.deleteProgram(this.shaderProgram), this.shaderProgram = null), this.shaderProgramOff && (t.deleteProgram(this.shaderProgramOff), this.shaderProgramOff = null) - }, mt.prototype.compileShader = function(t, i) { - var e = this.gl, - r = i, - o = e.createShader(t); - if (null == o) return _._$Ji("_$L0 to create shader"), null; - if (e.shaderSource(o, r), e.compileShader(o), !e.getShaderParameter(o, e.COMPILE_STATUS)) { - var n = e.getShaderInfoLog(o); - return _._$Ji("_$L0 to compile shader : " + n), e.deleteShader(o), null - } - return o - }, mt.prototype.loadShaders2 = function() { - var t = this.gl; - if (this.shaderProgram = t.createProgram(), !this.shaderProgram) return !1; - if (this.shaderProgramOff = t.createProgram(), !this.shaderProgramOff) return !1; - if (this.vertShader = this.compileShader(t.VERTEX_SHADER, "attribute vec4 a_position;attribute vec2 a_texCoord;varying vec2 v_texCoord;varying vec4 
v_ClipPos;uniform mat4 u_mvpMatrix;void main(){ gl_Position = u_mvpMatrix * a_position; v_ClipPos = u_mvpMatrix * a_position; v_texCoord = a_texCoord;}"), !this.vertShader) return _._$Ji("Vertex shader compile _$li!"), !1; - if (this.vertShaderOff = this.compileShader(t.VERTEX_SHADER, "attribute vec4 a_position;attribute vec2 a_texCoord;varying vec2 v_texCoord;varying vec4 v_ClipPos;uniform mat4 u_mvpMatrix;uniform mat4 u_ClipMatrix;void main(){ gl_Position = u_mvpMatrix * a_position; v_ClipPos = u_ClipMatrix * a_position; v_texCoord = a_texCoord ;}"), !this.vertShaderOff) return _._$Ji("OffVertex shader compile _$li!"), !1; - if (this.fragShader = this.compileShader(t.FRAGMENT_SHADER, "precision mediump float;varying vec2 v_texCoord;varying vec4 v_ClipPos;uniform sampler2D s_texture0;uniform vec4 u_channelFlag;uniform vec4 u_baseColor;uniform bool u_maskFlag;void main(){ vec4 smpColor; if(u_maskFlag){ float isInside = step(u_baseColor.x, v_ClipPos.x/v_ClipPos.w) * step(u_baseColor.y, v_ClipPos.y/v_ClipPos.w) * step(v_ClipPos.x/v_ClipPos.w, u_baseColor.z) * step(v_ClipPos.y/v_ClipPos.w, u_baseColor.w); smpColor = u_channelFlag * texture2D(s_texture0 , v_texCoord).a * isInside; }else{ smpColor = texture2D(s_texture0 , v_texCoord) * u_baseColor; } gl_FragColor = smpColor;}"), !this.fragShader) return _._$Ji("Fragment shader compile _$li!"), !1; - if (this.fragShaderOff = this.compileShader(t.FRAGMENT_SHADER, "precision mediump float ;varying vec2 v_texCoord;varying vec4 v_ClipPos;uniform sampler2D s_texture0;uniform sampler2D s_texture1;uniform vec4 u_channelFlag;uniform vec4 u_baseColor ;void main(){ vec4 col_formask = texture2D(s_texture0, v_texCoord) * u_baseColor; vec4 clipMask = texture2D(s_texture1, v_ClipPos.xy / v_ClipPos.w) * u_channelFlag; float maskVal = clipMask.r + clipMask.g + clipMask.b + clipMask.a; col_formask = col_formask * maskVal; gl_FragColor = col_formask;}"), !this.fragShaderOff) return _._$Ji("OffFragment shader compile _$li!"), !1; - if (t.attachShader(this.shaderProgram, this.vertShader), t.attachShader(this.shaderProgram, this.fragShader), t.attachShader(this.shaderProgramOff, this.vertShaderOff), t.attachShader(this.shaderProgramOff, this.fragShaderOff), t.linkProgram(this.shaderProgram), t.linkProgram(this.shaderProgramOff), !t.getProgramParameter(this.shaderProgram, t.LINK_STATUS)) { - var i = t.getProgramInfoLog(this.shaderProgram); - return _._$Ji("_$L0 to link program: " + i), this.vertShader && (t.deleteShader(this.vertShader), this.vertShader = 0), this.fragShader && (t.deleteShader(this.fragShader), this.fragShader = 0), this.shaderProgram && (t.deleteProgram(this.shaderProgram), this.shaderProgram = 0), this.vertShaderOff && (t.deleteShader(this.vertShaderOff), this.vertShaderOff = 0), this.fragShaderOff && (t.deleteShader(this.fragShaderOff), this.fragShaderOff = 0), this.shaderProgramOff && (t.deleteProgram(this.shaderProgramOff), this.shaderProgramOff = 0), !1 - } - return !0 - }, mt.prototype.createFramebuffer = function() { - var t = this.gl, - i = at.clippingMaskBufferSize, - e = t.createFramebuffer(); - t.bindFramebuffer(t.FRAMEBUFFER, e); - var r = t.createRenderbuffer(); - t.bindRenderbuffer(t.RENDERBUFFER, r), t.renderbufferStorage(t.RENDERBUFFER, t.RGBA4, i, i), t.framebufferRenderbuffer(t.FRAMEBUFFER, t.COLOR_ATTACHMENT0, t.RENDERBUFFER, r); - var o = t.createTexture(); - return t.bindTexture(t.TEXTURE_2D, o), t.texImage2D(t.TEXTURE_2D, 0, t.RGBA, i, i, 0, t.RGBA, t.UNSIGNED_BYTE, null), t.texParameteri(t.TEXTURE_2D, t.TEXTURE_MIN_FILTER, 
t.LINEAR), t.texParameteri(t.TEXTURE_2D, t.TEXTURE_MAG_FILTER, t.LINEAR), t.texParameteri(t.TEXTURE_2D, t.TEXTURE_WRAP_S, t.CLAMP_TO_EDGE), t.texParameteri(t.TEXTURE_2D, t.TEXTURE_WRAP_T, t.CLAMP_TO_EDGE), t.framebufferTexture2D(t.FRAMEBUFFER, t.COLOR_ATTACHMENT0, t.TEXTURE_2D, o, 0), t.bindTexture(t.TEXTURE_2D, null), t.bindRenderbuffer(t.RENDERBUFFER, null), t.bindFramebuffer(t.FRAMEBUFFER, null), at.fTexture[this.glno] = o, { - framebuffer: e, - renderbuffer: r, - texture: at.fTexture[this.glno] - } - }, St.prototype._$fP = function() { - var t, i, e, r = this._$ST(); - if (0 == (128 & r)) return 255 & r; - if (0 == (128 & (t = this._$ST()))) return (127 & r) << 7 | 127 & t; - if (0 == (128 & (i = this._$ST()))) return (127 & r) << 14 | (127 & t) << 7 | 255 & i; - if (0 == (128 & (e = this._$ST()))) return (127 & r) << 21 | (127 & t) << 14 | (127 & i) << 7 | 255 & e; - throw new lt("_$L _$0P _") - }, St.prototype.getFormatVersion = function() { - return this._$S2 - }, St.prototype._$gr = function(t) { - this._$S2 = t - }, St.prototype._$3L = function() { - return this._$fP() - }, St.prototype._$mP = function() { - return this._$zT(), this._$F += 8, this._$T.getFloat64(this._$F - 8) - }, St.prototype._$_T = function() { - return this._$zT(), this._$F += 4, this._$T.getFloat32(this._$F - 4) - }, St.prototype._$6L = function() { - return this._$zT(), this._$F += 4, this._$T.getInt32(this._$F - 4) - }, St.prototype._$ST = function() { - return this._$zT(), this._$T.getInt8(this._$F++) - }, St.prototype._$9T = function() { - return this._$zT(), this._$F += 2, this._$T.getInt16(this._$F - 2) - }, St.prototype._$2T = function() { - throw this._$zT(), this._$F += 8, new lt("_$L _$q read long") - }, St.prototype._$po = function() { - return this._$zT(), 0 != this._$T.getInt8(this._$F++) - }; - var xt = !0; - St.prototype._$bT = function() { - this._$zT(); - var t = this._$3L(), - i = null; - if (xt) try { - var e = new ArrayBuffer(2 * t); - i = new Uint16Array(e); - for (var r = 0; r < t; ++r) i[r] = this._$T.getUint8(this._$F++); - return String.fromCharCode.apply(null, i) - } catch (t) { - xt = !1 - } - try { - var o = new Array; - if (null == i) for (var r = 0; r < t; ++r) o[r] = this._$T.getUint8(this._$F++); - else for (var r = 0; r < t; ++r) o[r] = i[r]; - return String.fromCharCode.apply(null, o) - } catch (t) { - console.log("read utf8 / _$rT _$L0 !! 
: " + t) - } - }, St.prototype._$cS = function() { - this._$zT(); - for (var t = this._$3L(), i = new Int32Array(t), e = 0; e < t; e++) i[e] = this._$T.getInt32(this._$F), this._$F += 4; - return i - }, St.prototype._$Tb = function() { - this._$zT(); - for (var t = this._$3L(), i = new Float32Array(t), e = 0; e < t; e++) i[e] = this._$T.getFloat32(this._$F), this._$F += 4; - return i - }, St.prototype._$5b = function() { - this._$zT(); - for (var t = this._$3L(), i = new Float64Array(t), e = 0; e < t; e++) i[e] = this._$T.getFloat64(this._$F), this._$F += 8; - return i - }, St.prototype._$nP = function() { - return this._$Jb(-1) - }, St.prototype._$Jb = function(t) { - if (this._$zT(), t < 0 && (t = this._$3L()), t == G._$7P) { - var i = this._$6L(); - if (0 <= i && i < this._$Ko.length) return this._$Ko[i]; - throw new lt("_$sL _$4i @_$m0") - } - var e = this._$4b(t); - return this._$Ko.push(e), e - }, St.prototype._$4b = function(t) { - if (0 == t) return null; - if (50 == t) { - var i = this._$bT(), - e = b.getID(i); - return e - } - if (51 == t) { - var i = this._$bT(), - e = yt.getID(i); - return e - } - if (134 == t) { - var i = this._$bT(), - e = l.getID(i); - return e - } - if (60 == t) { - var i = this._$bT(), - e = u.getID(i); - return e - } - if (t >= 48) { - var r = G._$9o(t); - return null != r ? (r._$F0(this), r) : null - } - switch (t) { - case 1: - return this._$bT(); - case 10: - return new n(this._$6L(), !0); - case 11: - return new S(this._$mP(), this._$mP(), this._$mP(), this._$mP()); - case 12: - return new S(this._$_T(), this._$_T(), this._$_T(), this._$_T()); - case 13: - return new L(this._$mP(), this._$mP()); - case 14: - return new L(this._$_T(), this._$_T()); - case 15: - for (var o = this._$3L(), e = new Array(o), s = 0; s < o; s++) e[s] = this._$nP(); - return e; - case 17: - var e = new F(this._$mP(), this._$mP(), this._$mP(), this._$mP(), this._$mP(), this._$mP()); - return e; - case 21: - return new h(this._$6L(), this._$6L(), this._$6L(), this._$6L()); - case 22: - return new pt(this._$6L(), this._$6L()); - case 23: - throw new Error("_$L _$ro "); - case 16: - case 25: - return this._$cS(); - case 26: - return this._$5b(); - case 27: - return this._$Tb(); - case 2: - case 3: - case 4: - case 5: - case 6: - case 7: - case 8: - case 9: - case 18: - case 19: - case 20: - case 24: - case 28: - throw new lt("_$6 _$q : _$nP() of 2-9 ,18,19,20,24,28 : " + t); - default: - throw new lt("_$6 _$q : _$nP() NO _$i : " + t) - } - }, St.prototype._$8L = function() { - return 0 == this._$hL ? 
this._$v0 = this._$ST() : 8 == this._$hL && (this._$v0 = this._$ST(), this._$hL = 0), 1 == (this._$v0 >> 7 - this._$hL++ & 1) - }, St.prototype._$zT = function() { - 0 != this._$hL && (this._$hL = 0) - }, vt.prototype._$wP = function(t, i, e) { - for (var r = 0; r < e; r++) { - for (var o = 0; o < i; o++) { - var n = 2 * (o + r * i); - console.log("(% 7.3f , % 7.3f) , ", t[n], t[n + 1]) - } - console.log("\n") - } - console.log("\n") - }, Lt._$2S = Math.PI / 180, Lt._$bS = Math.PI / 180, Lt._$wS = 180 / Math.PI, Lt._$NS = 180 / Math.PI, Lt.PI_F = Math.PI, Lt._$kT = [0, .012368, .024734, .037097, .049454, .061803, .074143, .086471, .098786, .111087, .12337, .135634, .147877, .160098, .172295, .184465, .196606, .208718, .220798, .232844, .244854, .256827, .268761, .280654, .292503, .304308, .316066, .327776, .339436, .351044, .362598, .374097, .385538, .396921, .408243, .419502, .430697, .441826, .452888, .463881, .474802, .485651, .496425, .507124, .517745, .528287, .538748, .549126, .559421, .56963, .579752, .589785, .599728, .609579, .619337, .629, .638567, .648036, .657406, .666676, .675843, .684908, .693867, .70272, .711466, .720103, .72863, .737045, .745348, .753536, .76161, .769566, .777405, .785125, .792725, .800204, .807561, .814793, .821901, .828884, .835739, .842467, .849066, .855535, .861873, .868079, .874153, .880093, .885898, .891567, .897101, .902497, .907754, .912873, .917853, .922692, .92739, .931946, .936359, .940629, .944755, .948737, .952574, .956265, .959809, .963207, .966457, .96956, .972514, .97532, .977976, .980482, .982839, .985045, .987101, .989006, .990759, .992361, .993811, .995109, .996254, .997248, .998088, .998776, .999312, .999694, .999924, 1], Lt._$92 = function(t, i) { - var e = Math.atan2(t[1], t[0]), - r = Math.atan2(i[1], i[0]); - return Lt._$tS(e, r) - }, Lt._$tS = function(t, i) { - for (var e = t - i; e < -Math.PI;) e += 2 * Math.PI; - for (; e > Math.PI;) e -= 2 * Math.PI; - return e - }, Lt._$9 = function(t) { - return Math.sin(t) - }, Lt.fcos = function(t) { - return Math.cos(t) - }, Mt.prototype._$u2 = function() { - return this._$IS[0] - }, Mt.prototype._$yo = function() { - return this._$AT && !this._$IS[0] - }, Mt.prototype._$GT = function() { - return this._$e0 - }, Et._$W2 = 0, Et.SYSTEM_INFO = null, Et.USER_AGENT = navigator.userAgent, Et.isIPhone = function() { - return Et.SYSTEM_INFO || Et.setup(), Et.SYSTEM_INFO._isIPhone - }, Et.isIOS = function() { - return Et.SYSTEM_INFO || Et.setup(), Et.SYSTEM_INFO._isIPhone || Et.SYSTEM_INFO._isIPad - }, Et.isAndroid = function() { - return Et.SYSTEM_INFO || Et.setup(), Et.SYSTEM_INFO._isAndroid - }, Et.getOSVersion = function() { - return Et.SYSTEM_INFO || Et.setup(), Et.SYSTEM_INFO.version - }, Et.getOS = function() { - return Et.SYSTEM_INFO || Et.setup(), Et.SYSTEM_INFO._isIPhone || Et.SYSTEM_INFO._isIPad ? "iOS" : Et.SYSTEM_INFO._isAndroid ? 
"Android" : "_$Q0 OS" - }, Et.setup = function() { - function t(t, i) { - for (var e = t.substring(i).split(/[ _,;\.]/), r = 0, o = 0; o <= 2 && !isNaN(e[o]); o++) { - var n = parseInt(e[o]); - if (n < 0 || n > 999) { - _._$li("err : " + n + " @UtHtml5.setup()"), r = 0; - break - } - r += n * Math.pow(1e3, 2 - o) - } - return r - } - var i, e = Et.USER_AGENT, - r = Et.SYSTEM_INFO = { - userAgent: e - }; - if ((i = e.indexOf("iPhone OS ")) >= 0) r.os = "iPhone", r._isIPhone = !0, r.version = t(e, i + "iPhone OS ".length); - else if ((i = e.indexOf("iPad")) >= 0) { - if ((i = e.indexOf("CPU OS")) < 0) return void _._$li(" err : " + e + " @UtHtml5.setup()"); - r.os = "iPad", r._isIPad = !0, r.version = t(e, i + "CPU OS ".length) - } else(i = e.indexOf("Android")) >= 0 ? (r.os = "Android", r._isAndroid = !0, r.version = t(e, i + "Android ".length)) : (r.os = "-", r.version = -1) - }, window.UtSystem = w, window.UtDebug = _, window.LDTransform = gt, window.LDGL = nt, window.Live2D = at, window.Live2DModelWebGL = ft, window.Live2DModelJS = q, window.Live2DMotion = J, window.MotionQueueManager = ct, window.PhysicsHair = f, window.AMotion = s, window.PartsDataID = l, window.DrawDataID = b, window.BaseDataID = yt, window.ParamID = u, at.init(); - var At = !1 - }() - }).call(i, e(7)) -}, function(t, i) { - t.exports = { - import: function() { - throw new Error("System.import cannot be used indirectly") - } - } -}, function(t, i, e) { - "use strict"; - - function r(t) { - return t && t.__esModule ? t : { - default: - t - } - } - function o() { - this.models = [], this.count = -1, this.reloadFlg = !1, Live2D.init(), n.Live2DFramework.setPlatformManager(new _. - default) - } - Object.defineProperty(i, "__esModule", { - value: !0 - }), i. -default = o; - var n = e(0), - s = e(9), - _ = r(s), - a = e(10), - h = r(a), - l = e(1), - $ = r(l); - o.prototype.createModel = function() { - var t = new h. - default; - return this.models.push(t), t - }, o.prototype.changeModel = function(t, i) { - if (this.reloadFlg) { - this.reloadFlg = !1; - this.releaseModel(0, t), this.createModel(), this.models[0].load(t, i) - } - }, o.prototype.getModel = function(t) { - return t >= this.models.length ? null : this.models[t] - }, o.prototype.releaseModel = function(t, i) { - this.models.length <= t || (this.models[t].release(i), delete this.models[t], this.models.splice(t, 1)) - }, o.prototype.numModels = function() { - return this.models.length - }, o.prototype.setDrag = function(t, i) { - for (var e = 0; e < this.models.length; e++) this.models[e].setDrag(t, i) - }, o.prototype.maxScaleEvent = function() { - $. - default.DEBUG_LOG && console.log("Max scale event."); - for (var t = 0; t < this.models.length; t++) this.models[t].startRandomMotion($. - default.MOTION_GROUP_PINCH_IN, $. - default.PRIORITY_NORMAL) - }, o.prototype.minScaleEvent = function() { - $. - default.DEBUG_LOG && console.log("Min scale event."); - for (var t = 0; t < this.models.length; t++) this.models[t].startRandomMotion($. - default.MOTION_GROUP_PINCH_OUT, $. - default.PRIORITY_NORMAL) - }, o.prototype.tapEvent = function(t, i) { - $. - default.DEBUG_LOG && console.log("tapEvent view x:" + t + " y:" + i); - for (var e = 0; e < this.models.length; e++) this.models[e].hitTest($. - default.HIT_AREA_HEAD, t, i) ? ($. - default.DEBUG_LOG && console.log("Tap face."), this.models[e].setRandomExpression()): - this.models[e].hitTest($. - default.HIT_AREA_BODY, t, i) ? ($. - default.DEBUG_LOG && console.log("Tap body. 
models[" + e + "]"), this.models[e].startRandomMotion($. - default.MOTION_GROUP_TAP_BODY, $. - default.PRIORITY_NORMAL)) : this.models[e].hitTestCustom("head", t, i) ? ($. - default.DEBUG_LOG && console.log("Tap face."), this.models[e].startRandomMotion($. - default.MOTION_GROUP_FLICK_HEAD, $. - default.PRIORITY_NORMAL)) : this.models[e].hitTestCustom("body", t, i) && ($. - default.DEBUG_LOG && console.log("Tap body. models[" + e + "]"), this.models[e].startRandomMotion($. - default.MOTION_GROUP_TAP_BODY, $. - default.PRIORITY_NORMAL)); - return !0 - } -}, function(t, i, e) { - "use strict"; - - function r() {} - Object.defineProperty(i, "__esModule", { - value: !0 - }), i. -default = r; - var o = e(2); - var requestCache = {}; - r.prototype.loadBytes = function(t, i) { - // Cache 相同的请求,减少请求数量 - if (requestCache[t] !== undefined) { - i(requestCache[t]); - return; - } - var e = new XMLHttpRequest; - e.open("GET", t, !0), e.responseType = "arraybuffer", e.onload = function() { - switch (e.status) { - case 200: - requestCache[t] = e.response; - i(e.response); - break; - default: - console.error("Failed to load (" + e.status + ") : " + t) - } - }, e.send(null) - }, r.prototype.loadString = function(t) { - this.loadBytes(t, function(t) { - return t - }) - }, r.prototype.loadLive2DModel = function(t, i) { - var e = null; - this.loadBytes(t, function(t) { - e = Live2DModelWebGL.loadModel(t), i(e) - }) - }, r.prototype.loadTexture = function(t, i, e, r) { - var n = new Image; - n.crossOrigin = "Anonymous", n.src = e; - n.onload = function() { - var e = (0, o.getContext)(), - s = e.createTexture(); - if (!s) return console.error("Failed to generate gl texture name."), -1; - 0 == t.isPremultipliedAlpha() && e.pixelStorei(e.UNPACK_PREMULTIPLY_ALPHA_WEBGL, 1), e.pixelStorei(e.UNPACK_FLIP_Y_WEBGL, 1), e.activeTexture(e.TEXTURE0), e.bindTexture(e.TEXTURE_2D, s), e.texImage2D(e.TEXTURE_2D, 0, e.RGBA, e.RGBA, e.UNSIGNED_BYTE, n), e.texParameteri(e.TEXTURE_2D, e.TEXTURE_MAG_FILTER, e.LINEAR), e.texParameteri(e.TEXTURE_2D, e.TEXTURE_MIN_FILTER, e.LINEAR_MIPMAP_NEAREST), e.generateMipmap(e.TEXTURE_2D), t.setTexture(i, s), s = null, "function" == typeof r && r() - }, n.onerror = function() { - console.error("Failed to load image : " + e) - } - }, r.prototype.jsonParseFromBytes = function(t) { - var i, e = new Uint8Array(t, 0, 3); - return i = 239 == e[0] && 187 == e[1] && 191 == e[2] ? String.fromCharCode.apply(null, new Uint8Array(t, 3)) : String.fromCharCode.apply(null, new Uint8Array(t)), JSON.parse(i) - }, r.prototype.log = function(t) {} -}, function(t, i, e) { - "use strict"; - - function r(t) { - return t && t.__esModule ? t : { - default: - t - } - } - function o() { - n.L2DBaseModel.prototype.constructor.call(this), this.modelHomeDir = "", this.modelSetting = null, this.tmpMatrix = [] - } - Object.defineProperty(i, "__esModule", { - value: !0 - }), i. -default = o; - var n = e(0), - s = e(11), - _ = r(s), - a = e(1), - h = r(a), - l = e(3), - $ = r(l); - o.prototype = new n.L2DBaseModel, o.prototype.load = function(t, i, e) { - this.setUpdating(!0), this.setInitialized(!1), this.modelHomeDir = i.substring(0, i.lastIndexOf("/") + 1), this.modelSetting = new _. 
- default; - var r = this; - this.modelSetting.loadModelSetting(i, function() { - var t = r.modelHomeDir + r.modelSetting.getModelFile(); - r.loadModelData(t, function(t) { - for (var i = 0; i < r.modelSetting.getTextureNum(); i++) { - if (/^https?:\/\/|^\/\//i.test(r.modelSetting.getTextureFile(i))) var o = r.modelSetting.getTextureFile(i); - else var o = r.modelHomeDir + r.modelSetting.getTextureFile(i); - r.loadTexture(i, o, function() { - if (r.isTexLoaded) { - if (r.modelSetting.getExpressionNum() > 0) { - r.expressions = {}; - for (var t = 0; t < r.modelSetting.getExpressionNum(); t++) { - var i = r.modelSetting.getExpressionName(t), - o = r.modelHomeDir + r.modelSetting.getExpressionFile(t); - r.loadExpression(i, o) - } - } else r.expressionManager = null, r.expressions = {}; - if (r.eyeBlink, null != r.modelSetting.getPhysicsFile() ? r.loadPhysics(r.modelHomeDir + r.modelSetting.getPhysicsFile()) : r.physics = null, null != r.modelSetting.getPoseFile() ? r.loadPose(r.modelHomeDir + r.modelSetting.getPoseFile(), function() { - r.pose.updateParam(r.live2DModel) - }) : r.pose = null, null != r.modelSetting.getLayout()) { - var n = r.modelSetting.getLayout(); - null != n.width && r.modelMatrix.setWidth(n.width), null != n.height && r.modelMatrix.setHeight(n.height), null != n.x && r.modelMatrix.setX(n.x), null != n.y && r.modelMatrix.setY(n.y), null != n.center_x && r.modelMatrix.centerX(n.center_x), null != n.center_y && r.modelMatrix.centerY(n.center_y), null != n.top && r.modelMatrix.top(n.top), null != n.bottom && r.modelMatrix.bottom(n.bottom), null != n.left && r.modelMatrix.left(n.left), null != n.right && r.modelMatrix.right(n.right) - } - if (null != r.modelSetting.getHitAreasCustom()) { - var s = r.modelSetting.getHitAreasCustom(); - null != s.head_x && (h. - default.hit_areas_custom_head_x = s.head_x), null != s.head_y && (h. - default.hit_areas_custom_head_y = s.head_y), null != s.body_x && (h. - default.hit_areas_custom_body_x = s.body_x), null != s.body_y && (h. - default.hit_areas_custom_body_y = s.body_y) - } - for (var t = 0; t < r.modelSetting.getInitParamNum(); t++) r.live2DModel.setParamFloat(r.modelSetting.getInitParamID(t), r.modelSetting.getInitParamValue(t)); - for (var t = 0; t < r.modelSetting.getInitPartsVisibleNum(); t++) r.live2DModel.setPartsOpacity(r.modelSetting.getInitPartsVisibleID(t), r.modelSetting.getInitPartsVisibleValue(t)); - r.live2DModel.saveParam(), r.preloadMotionGroup(h. - default.MOTION_GROUP_IDLE), r.preloadMotionGroup(h. - default.MOTION_GROUP_SLEEPY), r.mainMotionManager.stopAllMotions(), r.setUpdating(!1), r.setInitialized(!0), "function" == typeof e && e() - } - }) - } - }) - }) - }, o.prototype.release = function(t) { - var i = n.Live2DFramework.getPlatformManager(); - t.deleteTexture(i.texture) - }, o.prototype.preloadMotionGroup = function(t) { - for (var i = this, e = 0; e < this.modelSetting.getMotionNum(t); e++) { - var r = this.modelSetting.getMotionFile(t, e); - this.loadMotion(r, this.modelHomeDir + r, function(r) { - r.setFadeIn(i.modelSetting.getMotionFadeIn(t, e)), r.setFadeOut(i.modelSetting.getMotionFadeOut(t, e)) - }) - } - }, o.prototype.update = function() { - if (null == this.live2DModel) return void(h. - default.DEBUG_LOG && console.error("Failed to update.")); - var t = UtSystem.getUserTimeMSec() - this.startTimeMSec, - i = t / 1e3, - e = 2 * i * Math.PI; - if (this.mainMotionManager.isFinished()) { - "1" === sessionStorage.getItem("Sleepy") ? this.startRandomMotion(h. - default.MOTION_GROUP_SLEEPY, h. 
- default.PRIORITY_SLEEPY) : this.startRandomMotion(h. - default.MOTION_GROUP_IDLE, h. - default.PRIORITY_IDLE) - } - this.live2DModel.loadParam(), this.mainMotionManager.updateParam(this.live2DModel) || null != this.eyeBlink && this.eyeBlink.updateParam(this.live2DModel), this.live2DModel.saveParam(), null == this.expressionManager || null == this.expressions || this.expressionManager.isFinished() || this.expressionManager.updateParam(this.live2DModel), this.live2DModel.addToParamFloat("PARAM_ANGLE_X", 30 * this.dragX, 1), this.live2DModel.addToParamFloat("PARAM_ANGLE_Y", 30 * this.dragY, 1), this.live2DModel.addToParamFloat("PARAM_ANGLE_Z", this.dragX * this.dragY * -30, 1), this.live2DModel.addToParamFloat("PARAM_BODY_ANGLE_X", 10 * this.dragX, 1), this.live2DModel.addToParamFloat("PARAM_EYE_BALL_X", this.dragX, 1), this.live2DModel.addToParamFloat("PARAM_EYE_BALL_Y", this.dragY, 1), this.live2DModel.addToParamFloat("PARAM_ANGLE_X", Number(15 * Math.sin(e / 6.5345)), .5), this.live2DModel.addToParamFloat("PARAM_ANGLE_Y", Number(8 * Math.sin(e / 3.5345)), .5), this.live2DModel.addToParamFloat("PARAM_ANGLE_Z", Number(10 * Math.sin(e / 5.5345)), .5), this.live2DModel.addToParamFloat("PARAM_BODY_ANGLE_X", Number(4 * Math.sin(e / 15.5345)), .5), this.live2DModel.setParamFloat("PARAM_BREATH", Number(.5 + .5 * Math.sin(e / 3.2345)), 1), null != this.physics && this.physics.updateParam(this.live2DModel), null == this.lipSync && this.live2DModel.setParamFloat("PARAM_MOUTH_OPEN_Y", this.lipSyncValue), null != this.pose && this.pose.updateParam(this.live2DModel), this.live2DModel.update() - }, o.prototype.setRandomExpression = function() { - var t = []; - for (var i in this.expressions) t.push(i); - var e = parseInt(Math.random() * t.length); - this.setExpression(t[e]) - }, o.prototype.startRandomMotion = function(t, i) { - var e = this.modelSetting.getMotionNum(t), - r = parseInt(Math.random() * e); - this.startMotion(t, r, i) - }, o.prototype.startMotion = function(t, i, e) { - var r = this.modelSetting.getMotionFile(t, i); - if (null == r || "" == r) return void(h. - default.DEBUG_LOG && console.error("Failed to motion.")); - if (e == h. - default.PRIORITY_FORCE) this.mainMotionManager.setReservePriority(e); - else if (!this.mainMotionManager.reserveMotion(e)) return void(h. - default.DEBUG_LOG && console.log("Motion is running.")); - var o, n = this; - null == this.motions[t] ? this.loadMotion(null, this.modelHomeDir + r, function(r) { - o = r, n.setFadeInFadeOut(t, i, e, o) - }) : (o = this.motions[t], n.setFadeInFadeOut(t, i, e, o)) - }, o.prototype.setFadeInFadeOut = function(t, i, e, r) { - var o = this.modelSetting.getMotionFile(t, i); - if (r.setFadeIn(this.modelSetting.getMotionFadeIn(t, i)), r.setFadeOut(this.modelSetting.getMotionFadeOut(t, i)), h. - default.DEBUG_LOG && console.log("Start motion : " + o), null == this.modelSetting.getMotionSound(t, i)) this.mainMotionManager.startMotionPrio(r, e); - else { - var n = this.modelSetting.getMotionSound(t, i), - s = document.createElement("audio"); - s.src = this.modelHomeDir + n, h. - default.DEBUG_LOG && console.log("Start sound : " + n), s.play(), this.mainMotionManager.startMotionPrio(r, e) - } - }, o.prototype.setExpression = function(t) { - var i = this.expressions[t]; - h. - default.DEBUG_LOG && console.log("Expression : " + t), this.expressionManager.startMotion(i, !1) - }, o.prototype.draw = function(t) { - $. - default.push(), $. - default.multMatrix(this.modelMatrix.getArray()), this.tmpMatrix = $. 
- default.getMatrix(), this.live2DModel.setMatrix(this.tmpMatrix), this.live2DModel.draw(), $. - default.pop() - }, o.prototype.hitTest = function(t, i, e) { - for (var r = this.modelSetting.getHitAreaNum(), o = 0; o < r; o++) if (t == this.modelSetting.getHitAreaName(o)) { - var n = this.modelSetting.getHitAreaID(o); - return this.hitTestSimple(n, i, e) - } - return !1 - }, o.prototype.hitTestCustom = function(t, i, e) { - return "head" == t ? this.hitTestSimpleCustom(h. - default.hit_areas_custom_head_x, h. - default.hit_areas_custom_head_y, i, e) : "body" == t && this.hitTestSimpleCustom(h. - default.hit_areas_custom_body_x, h. - default.hit_areas_custom_body_y, i, e) - } -}, function(t, i, e) { - "use strict"; - - function r() { - this.NAME = "name", this.ID = "id", this.MODEL = "model", this.TEXTURES = "textures", this.HIT_AREAS = "hit_areas", this.PHYSICS = "physics", this.POSE = "pose", this.EXPRESSIONS = "expressions", this.MOTION_GROUPS = "motions", this.SOUND = "sound", this.FADE_IN = "fade_in", this.FADE_OUT = "fade_out", this.LAYOUT = "layout", this.HIT_AREAS_CUSTOM = "hit_areas_custom", this.INIT_PARAM = "init_param", this.INIT_PARTS_VISIBLE = "init_parts_visible", this.VALUE = "val", this.FILE = "file", this.json = {} - } - Object.defineProperty(i, "__esModule", { - value: !0 - }), i. -default = r; - var o = e(0); - r.prototype.loadModelSetting = function(t, i) { - var e = this; - o.Live2DFramework.getPlatformManager().loadBytes(t, function(t) { - var r = String.fromCharCode.apply(null, new Uint8Array(t)); - e.json = JSON.parse(r), i() - }) - }, r.prototype.getTextureFile = function(t) { - return null == this.json[this.TEXTURES] || null == this.json[this.TEXTURES][t] ? null : this.json[this.TEXTURES][t] - }, r.prototype.getModelFile = function() { - return this.json[this.MODEL] - }, r.prototype.getTextureNum = function() { - return null == this.json[this.TEXTURES] ? 0 : this.json[this.TEXTURES].length - }, r.prototype.getHitAreaNum = function() { - return null == this.json[this.HIT_AREAS] ? 0 : this.json[this.HIT_AREAS].length - }, r.prototype.getHitAreaID = function(t) { - return null == this.json[this.HIT_AREAS] || null == this.json[this.HIT_AREAS][t] ? null : this.json[this.HIT_AREAS][t][this.ID] - }, r.prototype.getHitAreaName = function(t) { - return null == this.json[this.HIT_AREAS] || null == this.json[this.HIT_AREAS][t] ? null : this.json[this.HIT_AREAS][t][this.NAME] - }, r.prototype.getPhysicsFile = function() { - return this.json[this.PHYSICS] - }, r.prototype.getPoseFile = function() { - return this.json[this.POSE] - }, r.prototype.getExpressionNum = function() { - return null == this.json[this.EXPRESSIONS] ? 0 : this.json[this.EXPRESSIONS].length - }, r.prototype.getExpressionFile = function(t) { - return null == this.json[this.EXPRESSIONS] ? null : this.json[this.EXPRESSIONS][t][this.FILE] - }, r.prototype.getExpressionName = function(t) { - return null == this.json[this.EXPRESSIONS] ? null : this.json[this.EXPRESSIONS][t][this.NAME] - }, r.prototype.getLayout = function() { - return this.json[this.LAYOUT] - }, r.prototype.getHitAreasCustom = function() { - return this.json[this.HIT_AREAS_CUSTOM] - }, r.prototype.getInitParamNum = function() { - return null == this.json[this.INIT_PARAM] ? 0 : this.json[this.INIT_PARAM].length - }, r.prototype.getMotionNum = function(t) { - return null == this.json[this.MOTION_GROUPS] || null == this.json[this.MOTION_GROUPS][t] ? 
0 : this.json[this.MOTION_GROUPS][t].length - }, r.prototype.getMotionFile = function(t, i) { - return null == this.json[this.MOTION_GROUPS] || null == this.json[this.MOTION_GROUPS][t] || null == this.json[this.MOTION_GROUPS][t][i] ? null : this.json[this.MOTION_GROUPS][t][i][this.FILE] - }, r.prototype.getMotionSound = function(t, i) { - return null == this.json[this.MOTION_GROUPS] || null == this.json[this.MOTION_GROUPS][t] || null == this.json[this.MOTION_GROUPS][t][i] || null == this.json[this.MOTION_GROUPS][t][i][this.SOUND] ? null : this.json[this.MOTION_GROUPS][t][i][this.SOUND] - }, r.prototype.getMotionFadeIn = function(t, i) { - return null == this.json[this.MOTION_GROUPS] || null == this.json[this.MOTION_GROUPS][t] || null == this.json[this.MOTION_GROUPS][t][i] || null == this.json[this.MOTION_GROUPS][t][i][this.FADE_IN] ? 1e3 : this.json[this.MOTION_GROUPS][t][i][this.FADE_IN] - }, r.prototype.getMotionFadeOut = function(t, i) { - return null == this.json[this.MOTION_GROUPS] || null == this.json[this.MOTION_GROUPS][t] || null == this.json[this.MOTION_GROUPS][t][i] || null == this.json[this.MOTION_GROUPS][t][i][this.FADE_OUT] ? 1e3 : this.json[this.MOTION_GROUPS][t][i][this.FADE_OUT] - }, r.prototype.getInitParamID = function(t) { - return null == this.json[this.INIT_PARAM] || null == this.json[this.INIT_PARAM][t] ? null : this.json[this.INIT_PARAM][t][this.ID] - }, r.prototype.getInitParamValue = function(t) { - return null == this.json[this.INIT_PARAM] || null == this.json[this.INIT_PARAM][t] ? NaN : this.json[this.INIT_PARAM][t][this.VALUE] - }, r.prototype.getInitPartsVisibleNum = function() { - return null == this.json[this.INIT_PARTS_VISIBLE] ? 0 : this.json[this.INIT_PARTS_VISIBLE].length - }, r.prototype.getInitPartsVisibleID = function(t) { - return null == this.json[this.INIT_PARTS_VISIBLE] || null == this.json[this.INIT_PARTS_VISIBLE][t] ? null : this.json[this.INIT_PARTS_VISIBLE][t][this.ID] - }, r.prototype.getInitPartsVisibleValue = function(t) { - return null == this.json[this.INIT_PARTS_VISIBLE] || null == this.json[this.INIT_PARTS_VISIBLE][t] ? 
NaN : this.json[this.INIT_PARTS_VISIBLE][t][this.VALUE] - } -}]); -//# sourceMappingURL=live2d.js.map diff --git a/spaces/dawdqd/ChuanhuChatGPT/assets/custom.css b/spaces/dawdqd/ChuanhuChatGPT/assets/custom.css deleted file mode 100644 index 22108488886cfc8d7772214dd9b83727b3fca6a3..0000000000000000000000000000000000000000 --- a/spaces/dawdqd/ChuanhuChatGPT/assets/custom.css +++ /dev/null @@ -1,468 +0,0 @@ -:root { - --chatbot-color-light: #000000; - --chatbot-color-dark: #FFFFFF; - --chatbot-background-color-light: #F3F3F3; - --chatbot-background-color-dark: #121111; - --message-user-background-color-light: #95EC69; - --message-user-background-color-dark: #26B561; - --message-bot-background-color-light: #FFFFFF; - --message-bot-background-color-dark: #2C2C2C; -} - -#app_title { - font-weight: var(--prose-header-text-weight); - font-size: var(--text-xxl); - line-height: 1.3; - text-align: left; - margin-top: 6px; - white-space: nowrap; -} -#description { - text-align: center; - margin: 32px 0 4px 0; -} - -/* gradio的页脚信息 */ -footer { - /* display: none !important; */ - margin-top: .2em !important; - font-size: 85%; -} -#footer { - text-align: center; -} -#footer div { - display: inline-block; -} -#footer .versions{ - font-size: 85%; - opacity: 0.60; -} - -#float_display { - position: absolute; - max-height: 30px; -} -/* user_info */ -#user_info { - white-space: nowrap; - position: absolute; left: 8em; top: .2em; - z-index: var(--layer-2); - box-shadow: var(--block-shadow); - border: none; border-radius: var(--block-label-radius); - background: var(--color-accent); - padding: var(--block-label-padding); - font-size: var(--block-label-text-size); line-height: var(--line-sm); - width: auto; min-height: 30px!important; - opacity: 1; - transition: opacity 0.3s ease-in-out; -} -#user_info .wrap { - opacity: 0; -} -#user_info p { - color: white; - font-weight: var(--block-label-text-weight); -} -#user_info.hideK { - opacity: 0; - transition: opacity 1s ease-in-out; -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: ui-monospace, "SF Mono", "SFMono-Regular", "Menlo", "Consolas", "Liberation Mono", "Microsoft Yahei UI", "Microsoft Yahei", monospace; - /* Windows下中文的monospace会fallback为新宋体,实在太丑,这里折中使用微软雅黑 */ - color: var(--body-text-color-subdued); -} - -#status_display { - transition: all 0.6s; -} -#chuanhu_chatbot { - transition: height 0.3s ease; -} - -/* usage_display */ -.insert_block { - position: relative; - margin: 0; - padding: .5em 1em; - box-shadow: var(--block-shadow); - border-width: var(--block-border-width); - border-color: var(--block-border-color); - border-radius: var(--block-radius); - background: var(--block-background-fill); - width: 100%; - line-height: var(--line-sm); - min-height: 2em; -} -#usage_display p, #usage_display span { - margin: 0; - font-size: .85em; - color: var(--body-text-color-subdued); -} -.progress-bar { - background-color: var(--input-background-fill);; - margin: .5em 0 !important; - height: 20px; - border-radius: 10px; - overflow: hidden; -} -.progress { - background-color: var(--block-title-background-fill); - height: 100%; - border-radius: 10px; - text-align: right; - transition: width 0.5s ease-in-out; -} -.progress-text { - /* color: white; */ - color: var(--color-accent) !important; - font-size: 1em !important; - font-weight: bold; - padding-right: 10px; - line-height: 20px; -} - -.apSwitch { - top: 2px; - 
display: inline-block; - height: 24px; - position: relative; - width: 48px; - border-radius: 12px; -} -.apSwitch input { - display: none !important; -} -.apSlider { - background-color: var(--neutral-200); - bottom: 0; - cursor: pointer; - left: 0; - position: absolute; - right: 0; - top: 0; - transition: .4s; - font-size: 18px; - border-radius: 12px; -} -.apSlider::before { - bottom: -1.5px; - left: 1px; - position: absolute; - transition: .4s; - content: "🌞"; -} -input:checked + .apSlider { - background-color: var(--primary-600); -} -input:checked + .apSlider::before { - transform: translateX(23px); - content:"🌚"; -} - -/* Override Slider Styles (for webkit browsers like Safari and Chrome) - * 好希望这份提案能早日实现 https://github.com/w3c/csswg-drafts/issues/4410 - * 进度滑块在各个平台还是太不统一了 - */ -input[type="range"] { - -webkit-appearance: none; - height: 4px; - background: var(--input-background-fill); - border-radius: 5px; - background-image: linear-gradient(var(--primary-500),var(--primary-500)); - background-size: 0% 100%; - background-repeat: no-repeat; -} -input[type="range"]::-webkit-slider-thumb { - -webkit-appearance: none; - height: 20px; - width: 20px; - border-radius: 50%; - border: solid 0.5px #ddd; - background-color: white; - cursor: ew-resize; - box-shadow: var(--input-shadow); - transition: background-color .1s ease; -} -input[type="range"]::-webkit-slider-thumb:hover { - background: var(--neutral-50); -} -input[type=range]::-webkit-slider-runnable-track { - -webkit-appearance: none; - box-shadow: none; - border: none; - background: transparent; -} - -#submit_btn, #cancel_btn { - height: 42px !important; -} -#submit_btn::before { - content: url("data:image/svg+xml, %3Csvg width='21px' height='20px' viewBox='0 0 21 20' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='page' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cg id='send' transform='translate(0.435849, 0.088463)' fill='%23FFFFFF' fill-rule='nonzero'%3E %3Cpath d='M0.579148261,0.0428666046 C0.301105539,-0.0961547561 -0.036517765,0.122307382 0.0032026237,0.420210298 L1.4927172,18.1553639 C1.5125774,18.4334066 1.79062012,18.5922882 2.04880264,18.4929872 L8.24518329,15.8913017 L11.6412765,19.7441794 C11.8597387,19.9825018 12.2370824,19.8832008 12.3165231,19.5852979 L13.9450591,13.4882182 L19.7839562,11.0255541 C20.0619989,10.8865327 20.0818591,10.4694687 19.7839562,10.3105871 L0.579148261,0.0428666046 Z M11.6138902,17.0883151 L9.85385903,14.7195502 L0.718169621,0.618812241 L12.69945,12.9346347 L11.6138902,17.0883151 Z' id='shape'%3E%3C/path%3E %3C/g%3E %3C/g%3E %3C/svg%3E"); - height: 21px; -} -#cancel_btn::before { - content: url("data:image/svg+xml,%3Csvg width='21px' height='21px' viewBox='0 0 21 21' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='pg' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cpath d='M10.2072007,20.088463 C11.5727865,20.088463 12.8594566,19.8259823 14.067211,19.3010209 C15.2749653,18.7760595 16.3386126,18.0538087 17.2581528,17.1342685 C18.177693,16.2147282 18.8982283,15.1527965 19.4197586,13.9484733 C19.9412889,12.7441501 20.202054,11.4557644 20.202054,10.0833163 C20.202054,8.71773046 19.9395733,7.43106036 19.4146119,6.22330603 C18.8896505,5.01555169 18.1673997,3.95018885 17.2478595,3.0272175 C16.3283192,2.10424615 15.2646719,1.3837109 14.0569176,0.865611739 C12.8491633,0.34751258 11.5624932,0.088463 10.1969073,0.088463 C8.83132146,0.088463 
7.54636692,0.34751258 6.34204371,0.865611739 C5.1377205,1.3837109 4.07407321,2.10424615 3.15110186,3.0272175 C2.22813051,3.95018885 1.5058797,5.01555169 0.984349419,6.22330603 C0.46281914,7.43106036 0.202054,8.71773046 0.202054,10.0833163 C0.202054,11.4557644 0.4645347,12.7441501 0.9894961,13.9484733 C1.5144575,15.1527965 2.23670831,16.2147282 3.15624854,17.1342685 C4.07578877,18.0538087 5.1377205,18.7760595 6.34204371,19.3010209 C7.54636692,19.8259823 8.83475258,20.088463 10.2072007,20.088463 Z M10.2072007,18.2562448 C9.07493099,18.2562448 8.01471483,18.0452309 7.0265522,17.6232031 C6.03838956,17.2011753 5.17031614,16.6161693 4.42233192,15.8681851 C3.6743477,15.1202009 3.09105726,14.2521274 2.67246059,13.2639648 C2.25386392,12.2758022 2.04456558,11.215586 2.04456558,10.0833163 C2.04456558,8.95104663 2.25386392,7.89083047 2.67246059,6.90266784 C3.09105726,5.9145052 3.6743477,5.04643178 4.42233192,4.29844756 C5.17031614,3.55046334 6.036674,2.9671729 7.02140552,2.54857623 C8.00613703,2.12997956 9.06463763,1.92068122 10.1969073,1.92068122 C11.329177,1.92068122 12.3911087,2.12997956 13.3827025,2.54857623 C14.3742962,2.9671729 15.2440852,3.55046334 15.9920694,4.29844756 C16.7400537,5.04643178 17.3233441,5.9145052 17.7419408,6.90266784 C18.1605374,7.89083047 18.3698358,8.95104663 18.3698358,10.0833163 C18.3698358,11.215586 18.1605374,12.2758022 17.7419408,13.2639648 C17.3233441,14.2521274 16.7400537,15.1202009 15.9920694,15.8681851 C15.2440852,16.6161693 14.3760118,17.2011753 13.3878492,17.6232031 C12.3996865,18.0452309 11.3394704,18.2562448 10.2072007,18.2562448 Z M7.65444721,13.6242324 L12.7496608,13.6242324 C13.0584616,13.6242324 13.3003556,13.5384544 13.4753427,13.3668984 C13.6503299,13.1953424 13.7378234,12.9585951 13.7378234,12.6566565 L13.7378234,7.49968276 C13.7378234,7.19774418 13.6503299,6.96099688 13.4753427,6.78944087 C13.3003556,6.61788486 13.0584616,6.53210685 12.7496608,6.53210685 L7.65444721,6.53210685 C7.33878414,6.53210685 7.09345904,6.61788486 6.91847191,6.78944087 C6.74348478,6.96099688 6.65599121,7.19774418 6.65599121,7.49968276 L6.65599121,12.6566565 C6.65599121,12.9585951 6.74348478,13.1953424 6.91847191,13.3668984 C7.09345904,13.5384544 7.33878414,13.6242324 7.65444721,13.6242324 Z' id='shape' fill='%23FF3B30' fill-rule='nonzero'%3E%3C/path%3E %3C/g%3E %3C/svg%3E"); - height: 21px; -} -/* list */ -ol:not(.options), ul:not(.options) { - padding-inline-start: 2em !important; -} - -/* 亮色(默认) */ -#chuanhu_chatbot { - background-color: var(--chatbot-background-color-light) !important; - color: var(--chatbot-color-light) !important; -} -[data-testid = "bot"] { - background-color: var(--message-bot-background-color-light) !important; -} -[data-testid = "user"] { - background-color: var(--message-user-background-color-light) !important; -} -/* 暗色 */ -.dark #chuanhu_chatbot { - background-color: var(--chatbot-background-color-dark) !important; - color: var(--chatbot-color-dark) !important; -} -.dark [data-testid = "bot"] { - background-color: var(--message-bot-background-color-dark) !important; -} -.dark [data-testid = "user"] { - background-color: var(--message-user-background-color-dark) !important; -} - -/* 屏幕宽度大于等于500px的设备 */ -/* update on 2023.4.8: 高度的细致调整已写入JavaScript */ -@media screen and (min-width: 500px) { - #chuanhu_chatbot { - height: calc(100vh - 200px); - } - #chuanhu_chatbot .wrap { - max-height: calc(100vh - 200px - var(--line-sm)*1rem - 2*var(--block-label-margin) ); - } -} -/* 屏幕宽度小于500px的设备 */ -@media screen and (max-width: 499px) { - #chuanhu_chatbot { - 
height: calc(100vh - 140px); - } - #chuanhu_chatbot .wrap { - max-height: calc(100vh - 140px - var(--line-sm)*1rem - 2*var(--block-label-margin) ); - } - [data-testid = "bot"] { - max-width: 95% !important; - } - #app_title h1{ - letter-spacing: -1px; font-size: 22px; - } -} -#chuanhu_chatbot .wrap { - overflow-x: hidden; -} -/* 对话气泡 */ -.message { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; - min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); - min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} - -.message.user p { - white-space: pre-wrap; -} -.message .user-message { - display: block; - padding: 0 !important; - white-space: pre-wrap; -} - -.message .md-message p { - margin-top: 0.6em !important; - margin-bottom: 0.6em !important; -} -.message .md-message p:first-child { margin-top: 0 !important; } -.message .md-message p:last-of-type { margin-bottom: 0 !important; } - -.message .md-message { - display: block; - padding: 0 !important; -} -.message .raw-message p { - margin:0 !important; -} -.message .raw-message { - display: block; - padding: 0 !important; - white-space: pre-wrap; -} -.raw-message.hideM, .md-message.hideM { - display: none; -} - -/* custom buttons */ -.chuanhu-btn { - border-radius: 5px; - /* background-color: #E6E6E6 !important; */ - color: rgba(120, 120, 120, 0.64) !important; - padding: 4px !important; - position: absolute; - right: -22px; - cursor: pointer !important; - transition: color .2s ease, background-color .2s ease; -} -.chuanhu-btn:hover { - background-color: rgba(167, 167, 167, 0.25) !important; - color: unset !important; -} -.chuanhu-btn:active { - background-color: rgba(167, 167, 167, 0.5) !important; -} -.chuanhu-btn:focus { - outline: none; -} -.copy-bot-btn { - /* top: 18px; */ - bottom: 0; -} -.toggle-md-btn { - /* top: 0; */ - bottom: 20px; -} -.copy-code-btn { - position: relative; - float: right; - font-size: 1em; - cursor: pointer; -} - -.message-wrap>div img{ - border-radius: 10px !important; -} - -/* history message */ -.wrap>.history-message { - padding: 10px !important; -} -.history-message { - /* padding: 0 !important; */ - opacity: 80%; - display: flex; - flex-direction: column; -} -.history-message>.history-message { - padding: 0 !important; -} -.history-message>.message-wrap { - padding: 0 !important; - margin-bottom: 16px; -} -.history-message>.message { - margin-bottom: 16px; -} -.wrap>.history-message::after { - content: ""; - display: block; - height: 2px; - background-color: var(--body-text-color-subdued); - margin-bottom: 10px; - margin-top: -10px; - clear: both; -} -.wrap>.history-message>:last-child::after { - content: "仅供查看"; - display: block; - text-align: center; - color: var(--body-text-color-subdued); - font-size: 0.8em; -} - -/* 表格 */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -.message :not(pre) code { - display: inline; - white-space: break-spaces; - font-family: var(--font-mono); - border-radius: 6px; - margin: 0 
2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 代码块 */ -.message pre, -.message pre[class*=language-] { - color: #fff; - overflow-x: auto; - overflow-y: hidden; - margin: .8em 1em 1em 0em !important; - padding: var(--spacing-xl) 1.2em !important; - border-radius: var(--radius-lg) !important; -} -.message pre code, -.message pre code[class*=language-] { - color: #fff; - padding: 0; - margin: 0; - background-color: unset; - text-shadow: none; - font-family: var(--font-mono); -} -/* 覆盖 gradio 丑陋的复制按钮样式 */ -pre button[title="copy"] { - border-radius: 5px; - transition: background-color .2s ease; -} -pre button[title="copy"]:hover { - background-color: #333232; -} -pre button .check { - color: #fff !important; - background: var(--neutral-950) !important; -} - -/* 覆盖prism.css */ -.language-css .token.string, -.style .token.string, -.token.entity, -.token.operator, -.token.url { - background: none !important; -} diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/contourpy/util/bokeh_renderer.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/contourpy/util/bokeh_renderer.py deleted file mode 100644 index 108eda75dda951e1b07ff4ca3603f5ba0e0d1e75..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/contourpy/util/bokeh_renderer.py +++ /dev/null @@ -1,318 +0,0 @@ -from __future__ import annotations - -import io -from typing import TYPE_CHECKING, Any - -from bokeh.io import export_png, export_svg, show -from bokeh.io.export import get_screenshot_as_png -from bokeh.layouts import gridplot -from bokeh.models.annotations.labels import Label -from bokeh.palettes import Category10 -from bokeh.plotting import figure -import numpy as np - -from contourpy import FillType, LineType -from contourpy.util.bokeh_util import filled_to_bokeh, lines_to_bokeh -from contourpy.util.renderer import Renderer - -if TYPE_CHECKING: - from bokeh.models import GridPlot - from bokeh.palettes import Palette - from numpy.typing import ArrayLike - - from contourpy._contourpy import FillReturn, LineReturn - - -class BokehRenderer(Renderer): - _figures: list[figure] - _layout: GridPlot - _palette: Palette - _want_svg: bool - - """Utility renderer using Bokeh to render a grid of plots over the same (x, y) range. - - Args: - nrows (int, optional): Number of rows of plots, default ``1``. - ncols (int, optional): Number of columns of plots, default ``1``. - figsize (tuple(float, float), optional): Figure size in inches (assuming 100 dpi), default - ``(9, 9)``. - show_frame (bool, optional): Whether to show frame and axes ticks, default ``True``. - want_svg (bool, optional): Whether output is required in SVG format or not, default - ``False``. - - Warning: - :class:`~contourpy.util.bokeh_renderer.BokehRenderer`, unlike - :class:`~contourpy.util.mpl_renderer.MplRenderer`, needs to be told in advance if output to - SVG format will be required later, otherwise it will assume PNG output. - """ - def __init__( - self, - nrows: int = 1, - ncols: int = 1, - figsize: tuple[float, float] = (9, 9), - show_frame: bool = True, - want_svg: bool = False, - ) -> None: - self._want_svg = want_svg - self._palette = Category10[10] - - total_size = 100*np.asarray(figsize, dtype=int) # Assuming 100 dpi. 
- - nfigures = nrows*ncols - self._figures = [] - backend = "svg" if self._want_svg else "canvas" - for _ in range(nfigures): - fig = figure(output_backend=backend) - fig.xgrid.visible = False - fig.ygrid.visible = False - self._figures.append(fig) - if not show_frame: - fig.outline_line_color = None # type: ignore[assignment] - fig.axis.visible = False - - self._layout = gridplot( - self._figures, ncols=ncols, toolbar_location=None, # type: ignore[arg-type] - width=total_size[0] // ncols, height=total_size[1] // nrows) - - def _convert_color(self, color: str) -> str: - if isinstance(color, str) and color[0] == "C": - index = int(color[1:]) - color = self._palette[index] - return color - - def _get_figure(self, ax: figure | int) -> figure: - if isinstance(ax, int): - ax = self._figures[ax] - return ax - - def filled( - self, - filled: FillReturn, - fill_type: FillType, - ax: figure | int = 0, - color: str = "C0", - alpha: float = 0.7, - ) -> None: - """Plot filled contours on a single plot. - - Args: - filled (sequence of arrays): Filled contour data as returned by - :func:`~contourpy.ContourGenerator.filled`. - fill_type (FillType): Type of ``filled`` data, as returned by - :attr:`~contourpy.ContourGenerator.fill_type`. - ax (int or Bokeh Figure, optional): Which plot to use, default ``0``. - color (str, optional): Color to plot with. May be a string color or the letter ``"C"`` - followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the - ``Category10`` palette. Default ``"C0"``. - alpha (float, optional): Opacity to plot with, default ``0.7``. - """ - fig = self._get_figure(ax) - color = self._convert_color(color) - xs, ys = filled_to_bokeh(filled, fill_type) - if len(xs) > 0: - fig.multi_polygons(xs=[xs], ys=[ys], color=color, fill_alpha=alpha, line_width=0) - - def grid( - self, - x: ArrayLike, - y: ArrayLike, - ax: figure | int = 0, - color: str = "black", - alpha: float = 0.1, - point_color: str | None = None, - quad_as_tri_alpha: float = 0, - ) -> None: - """Plot quad grid lines on a single plot. - - Args: - x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points. - y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points. - ax (int or Bokeh Figure, optional): Which plot to use, default ``0``. - color (str, optional): Color to plot grid lines, default ``"black"``. - alpha (float, optional): Opacity to plot lines with, default ``0.1``. - point_color (str, optional): Color to plot grid points or ``None`` if grid points - should not be plotted, default ``None``. - quad_as_tri_alpha (float, optional): Opacity to plot ``quad_as_tri`` grid, default - ``0``. - - Colors may be a string color or the letter ``"C"`` followed by an integer in the range - ``"C0"`` to ``"C9"`` to use a color from the ``Category10`` palette. - - Warning: - ``quad_as_tri_alpha > 0`` plots all quads as though they are unmasked. - """ - fig = self._get_figure(ax) - x, y = self._grid_as_2d(x, y) - xs = [row for row in x] + [row for row in x.T] - ys = [row for row in y] + [row for row in y.T] - kwargs = dict(line_color=color, alpha=alpha) - fig.multi_line(xs, ys, **kwargs) - if quad_as_tri_alpha > 0: - # Assumes no quad mask. 
- xmid = (0.25*(x[:-1, :-1] + x[1:, :-1] + x[:-1, 1:] + x[1:, 1:])).ravel() - ymid = (0.25*(y[:-1, :-1] + y[1:, :-1] + y[:-1, 1:] + y[1:, 1:])).ravel() - fig.multi_line( - [row for row in np.stack((x[:-1, :-1].ravel(), xmid, x[1:, 1:].ravel()), axis=1)], - [row for row in np.stack((y[:-1, :-1].ravel(), ymid, y[1:, 1:].ravel()), axis=1)], - **kwargs) - fig.multi_line( - [row for row in np.stack((x[:-1, 1:].ravel(), xmid, x[1:, :-1].ravel()), axis=1)], - [row for row in np.stack((y[:-1, 1:].ravel(), ymid, y[1:, :-1].ravel()), axis=1)], - **kwargs) - if point_color is not None: - fig.circle( - x=x.ravel(), y=y.ravel(), fill_color=color, line_color=None, alpha=alpha, size=8) - - def lines( - self, - lines: LineReturn, - line_type: LineType, - ax: figure | int = 0, - color: str = "C0", - alpha: float = 1.0, - linewidth: float = 1, - ) -> None: - """Plot contour lines on a single plot. - - Args: - lines (sequence of arrays): Contour line data as returned by - :func:`~contourpy.ContourGenerator.lines`. - line_type (LineType): Type of ``lines`` data, as returned by - :attr:`~contourpy.ContourGenerator.line_type`. - ax (int or Bokeh Figure, optional): Which plot to use, default ``0``. - color (str, optional): Color to plot lines. May be a string color or the letter ``"C"`` - followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the - ``Category10`` palette. Default ``"C0"``. - alpha (float, optional): Opacity to plot lines with, default ``1.0``. - linewidth (float, optional): Width of lines, default ``1``. - - Note: - Assumes all lines are open line strips not closed line loops. - """ - fig = self._get_figure(ax) - color = self._convert_color(color) - xs, ys = lines_to_bokeh(lines, line_type) - if len(xs) > 0: - fig.multi_line(xs, ys, line_color=color, line_alpha=alpha, line_width=linewidth) - - def mask( - self, - x: ArrayLike, - y: ArrayLike, - z: ArrayLike | np.ma.MaskedArray[Any, Any], - ax: figure | int = 0, - color: str = "black", - ) -> None: - """Plot masked out grid points as circles on a single plot. - - Args: - x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points. - y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points. - z (masked array of shape (ny, nx): z-values. - ax (int or Bokeh Figure, optional): Which plot to use, default ``0``. - color (str, optional): Circle color, default ``"black"``. - """ - mask = np.ma.getmask(z) # type: ignore[no-untyped-call] - if mask is np.ma.nomask: - return - fig = self._get_figure(ax) - color = self._convert_color(color) - x, y = self._grid_as_2d(x, y) - fig.circle(x[mask], y[mask], fill_color=color, size=10) - - def save(self, filename: str, transparent: bool = False) -> None: - """Save plots to SVG or PNG file. - - Args: - filename (str): Filename to save to. - transparent (bool, optional): Whether background should be transparent, default - ``False``. - - Warning: - To output to SVG file, ``want_svg=True`` must have been passed to the constructor. - """ - if transparent: - for fig in self._figures: - fig.background_fill_color = None # type: ignore[assignment] - fig.border_fill_color = None # type: ignore[assignment] - - if self._want_svg: - export_svg(self._layout, filename=filename) - else: - export_png(self._layout, filename=filename) - - def save_to_buffer(self) -> io.BytesIO: - """Save plots to an ``io.BytesIO`` buffer. - - Return: - BytesIO: PNG image buffer. 
- """ - image = get_screenshot_as_png(self._layout) - buffer = io.BytesIO() - image.save(buffer, "png") - return buffer - - def show(self) -> None: - """Show plots in web browser, in usual Bokeh manner. - """ - show(self._layout) - - def title(self, title: str, ax: figure | int = 0, color: str | None = None) -> None: - """Set the title of a single plot. - - Args: - title (str): Title text. - ax (int or Bokeh Figure, optional): Which plot to set the title of, default ``0``. - color (str, optional): Color to set title. May be a string color or the letter ``"C"`` - followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the - ``Category10`` palette. Default ``None`` which is ``black``. - """ - fig = self._get_figure(ax) - fig.title = title # type: ignore[assignment] - fig.title.align = "center" # type: ignore[attr-defined] - if color is not None: - fig.title.text_color = self._convert_color(color) # type: ignore[attr-defined] - - def z_values( - self, - x: ArrayLike, - y: ArrayLike, - z: ArrayLike, - ax: figure | int = 0, - color: str = "green", - fmt: str = ".1f", - quad_as_tri: bool = False, - ) -> None: - """Show ``z`` values on a single plot. - - Args: - x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points. - y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points. - z (array-like of shape (ny, nx): z-values. - ax (int or Bokeh Figure, optional): Which plot to use, default ``0``. - color (str, optional): Color of added text. May be a string color or the letter ``"C"`` - followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the - ``Category10`` palette. Default ``"green"``. - fmt (str, optional): Format to display z-values, default ``".1f"``. - quad_as_tri (bool, optional): Whether to show z-values at the ``quad_as_tri`` centres - of quads. - - Warning: - ``quad_as_tri=True`` shows z-values for all quads, even if masked. 
- """ - fig = self._get_figure(ax) - color = self._convert_color(color) - x, y = self._grid_as_2d(x, y) - z = np.asarray(z) - ny, nx = z.shape - kwargs = dict(text_color=color, text_align="center", text_baseline="middle") - for j in range(ny): - for i in range(nx): - fig.add_layout(Label(x=x[j, i], y=y[j, i], text=f"{z[j, i]:{fmt}}", **kwargs)) - if quad_as_tri: - for j in range(ny-1): - for i in range(nx-1): - xx = np.mean(x[j:j+2, i:i+2]) - yy = np.mean(y[j:j+2, i:i+2]) - zz = np.mean(z[j:j+2, i:i+2]) - fig.add_layout(Label(x=xx, y=yy, text=f"{zz:{fmt}}", **kwargs)) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/themes/base.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/themes/base.py deleted file mode 100644 index a5159301ed6a378181d416b5c476fa6a1d87bfad..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/themes/base.py +++ /dev/null @@ -1,1826 +0,0 @@ -from __future__ import annotations - -import json -import re -import tempfile -import textwrap -from pathlib import Path -from typing import Iterable - -import huggingface_hub -import requests -import semantic_version as semver -from gradio_client.documentation import document, set_documentation_group -from huggingface_hub import CommitOperationAdd - -from gradio.themes.utils import ( - colors, - fonts, - get_matching_version, - get_theme_assets, - sizes, -) -from gradio.themes.utils.readme_content import README_CONTENT - -set_documentation_group("themes") - - -class ThemeClass: - def __init__(self): - self._stylesheets = [] - self.name = None - - def _get_theme_css(self): - css = {} - dark_css = {} - - for attr, val in self.__dict__.items(): - if attr.startswith("_"): - continue - if val is None: - if attr.endswith("_dark"): - dark_css[attr[:-5]] = None - continue - else: - raise ValueError( - f"Cannot set '{attr}' to None - only dark mode variables can be None." - ) - val = str(val) - pattern = r"(\*)([\w_]+)(\b)" - - def repl_func(match): - full_match = match.group(0) - if full_match.startswith("*") and full_match.endswith("_dark"): - raise ValueError( - f"Cannot refer '{attr}' to '{val}' - dark variable references are automatically used for dark mode attributes, so do not use the _dark suffix in the value." - ) - if ( - attr.endswith("_dark") - and full_match.startswith("*") - and attr[:-5] == full_match[1:] - ): - raise ValueError( - f"Cannot refer '{attr}' to '{val}' - if dark and light mode values are the same, set dark mode version to None." 
- ) - - word = match.group(2) - word = word.replace("_", "-") - return f"var(--{word})" - - val = re.sub(pattern, repl_func, val) - - attr = attr.replace("_", "-") - - if attr.endswith("-dark"): - attr = attr[:-5] - dark_css[attr] = val - else: - css[attr] = val - - for attr, val in css.items(): - if attr not in dark_css: - dark_css[attr] = val - - css_code = ( - ":root {\n" - + "\n".join([f" --{attr}: {val};" for attr, val in css.items()]) - + "\n}" - ) - dark_css_code = ( - ".dark {\n" - + "\n".join([f" --{attr}: {val};" for attr, val in dark_css.items()]) - + "\n}" - ) - - return f"{css_code}\n{dark_css_code}" - - def to_dict(self): - """Convert the theme into a python dictionary.""" - schema = {"theme": {}} - for prop in dir(self): - if ( - not prop.startswith("_") - or prop.startswith("_font") - or prop == "_stylesheets" - or prop == "name" - ) and isinstance(getattr(self, prop), (list, str)): - schema["theme"][prop] = getattr(self, prop) - return schema - - @classmethod - def load(cls, path: str) -> ThemeClass: - """Load a theme from a json file. - - Parameters: - path: The filepath to read. - """ - with open(path) as fp: - return cls.from_dict(json.load(fp, object_hook=fonts.as_font)) - - @classmethod - def from_dict(cls, theme: dict[str, dict[str, str]]) -> ThemeClass: - """Create a theme instance from a dictionary representation. - - Parameters: - theme: The dictionary representation of the theme. - """ - new_theme = cls() - for prop, value in theme["theme"].items(): - setattr(new_theme, prop, value) - - # For backwards compatibility, load attributes in base theme not in the loaded theme from the base theme. - base = Base() - for attr in base.__dict__: - if not attr.startswith("_") and not hasattr(new_theme, attr): - setattr(new_theme, attr, getattr(base, attr)) - - return new_theme - - def dump(self, filename: str): - """Write the theme to a json file. - - Parameters: - filename: The path to write the theme too - """ - Path(filename).write_text(json.dumps(self.to_dict(), cls=fonts.FontEncoder)) - - @classmethod - def from_hub(cls, repo_name: str, hf_token: str | None = None): - """Load a theme from the hub. - - This DOES NOT require a HuggingFace account for downloading publicly available themes. - - Parameters: - repo_name: string of the form /@. If a semantic version expression is omitted, the latest version will be fetched. - hf_token: HuggingFace Token. Only needed to download private themes. 
- """ - if "@" not in repo_name: - name, version = repo_name, None - else: - name, version = repo_name.split("@") - - api = huggingface_hub.HfApi(token=hf_token) - - try: - space_info = api.space_info(name) - except requests.HTTPError as e: - raise ValueError(f"The space {name} does not exist") from e - - assets = get_theme_assets(space_info) - matching_version = get_matching_version(assets, version) - - if not matching_version: - raise ValueError( - f"Cannot find a matching version for expression {version} " - f"from files {[f.filename for f in assets]}" - ) - - theme_file = huggingface_hub.hf_hub_download( - repo_id=name, - repo_type="space", - filename=f"themes/theme_schema@{matching_version.version}.json", - ) - theme = cls.load(theme_file) - theme.name = name - return theme - - @staticmethod - def _get_next_version(space_info: huggingface_hub.hf_api.SpaceInfo) -> str: - assets = get_theme_assets(space_info) - latest_version = max(assets, key=lambda asset: asset.version).version - return str(latest_version.next_patch()) - - @staticmethod - def _theme_version_exists( - space_info: huggingface_hub.hf_api.SpaceInfo, version: str - ) -> bool: - assets = get_theme_assets(space_info) - return any(a.version == semver.Version(version) for a in assets) - - def push_to_hub( - self, - repo_name: str, - org_name: str | None = None, - version: str | None = None, - hf_token: str | None = None, - theme_name: str | None = None, - description: str | None = None, - private: bool = False, - ): - """Upload a theme to the HuggingFace hub. - - This requires a HuggingFace account. - - Parameters: - repo_name: The name of the repository to store the theme assets, e.g. 'my_theme' or 'sunset'. - org_name: The name of the org to save the space in. If None (the default), the username corresponding to the logged in user, or hƒ_token is used. - version: A semantic version tag for theme. Bumping the version tag lets you publish updates to a theme without changing the look of applications that already loaded your theme. - hf_token: API token for your HuggingFace account - theme_name: Name for the name. If None, defaults to repo_name - description: A long form description to your theme. - """ - - from gradio import __version__ - - api = huggingface_hub.HfApi() - - if not hf_token: - try: - author = huggingface_hub.whoami()["name"] - except OSError as e: - raise ValueError( - "In order to push to hub, log in via `huggingface-cli login` " - "or provide a theme_token to push_to_hub. For more information " - "see https://huggingface.co/docs/huggingface_hub/quick-start#login" - ) from e - else: - author = huggingface_hub.whoami(token=hf_token)["name"] - - space_id = f"{org_name or author}/{repo_name}" - - try: - space_info = api.space_info(space_id) - except requests.HTTPError: - space_info = None - - space_exists = space_info is not None - - # If no version, set the version to next patch release - if not version: - version = self._get_next_version(space_info) if space_exists else "0.0.1" - else: - _ = semver.Version(version) - - if space_exists and self._theme_version_exists(space_info, version): - raise ValueError( - f"The space {space_id} already has a " - f"theme with version {version}. See: themes/theme_schema@{version}.json. " - "To manually override this version, use the HuggingFace hub UI." 
- ) - - theme_name = theme_name or repo_name - - with tempfile.NamedTemporaryFile( - mode="w", delete=False, suffix=".json" - ) as css_file: - contents = self.to_dict() - contents["version"] = version - json.dump(contents, css_file, cls=fonts.FontEncoder) - with tempfile.NamedTemporaryFile(mode="w", delete=False) as readme_file: - readme_content = README_CONTENT.format( - theme_name=theme_name, - description=description or "Add a description of this theme here!", - author=author, - gradio_version=__version__, - ) - readme_file.write(textwrap.dedent(readme_content)) - with tempfile.NamedTemporaryFile(mode="w", delete=False) as app_file: - contents = (Path(__file__).parent / "app.py").read_text() - contents = re.sub( - r"theme=gr.themes.Default\(\)", - f"theme='{space_id}'", - contents, - ) - contents = re.sub(r"{THEME}", theme_name or repo_name, contents) - contents = re.sub(r"{AUTHOR}", org_name or author, contents) - contents = re.sub(r"{SPACE_NAME}", repo_name, contents) - app_file.write(contents) - - operations = [ - CommitOperationAdd( - path_in_repo=f"themes/theme_schema@{version}.json", - path_or_fileobj=css_file.name, - ), - CommitOperationAdd( - path_in_repo="README.md", path_or_fileobj=readme_file.name - ), - CommitOperationAdd(path_in_repo="app.py", path_or_fileobj=app_file.name), - ] - - huggingface_hub.create_repo( - space_id, - repo_type="space", - space_sdk="gradio", - token=hf_token, - exist_ok=True, - private=private, - ) - - api.create_commit( - repo_id=space_id, - commit_message="Updating theme", - repo_type="space", - operations=operations, - token=hf_token, - ) - url = f"https://huggingface.co/spaces/{space_id}" - print(f"See your theme here! {url}") - return url - - -@document("push_to_hub", "from_hub", "load", "dump", "from_dict", "to_dict") -class Base(ThemeClass): - def __init__( - self, - *, - primary_hue: colors.Color | str = colors.blue, - secondary_hue: colors.Color | str = colors.blue, - neutral_hue: colors.Color | str = colors.gray, - text_size: sizes.Size | str = sizes.text_md, - spacing_size: sizes.Size | str = sizes.spacing_md, - radius_size: sizes.Size | str = sizes.radius_md, - font: fonts.Font - | str - | Iterable[fonts.Font | str] = ( - fonts.GoogleFont("Source Sans Pro"), - "ui-sans-serif", - "system-ui", - "sans-serif", - ), - font_mono: fonts.Font - | str - | Iterable[fonts.Font | str] = ( - fonts.GoogleFont("IBM Plex Mono"), - "ui-monospace", - "Consolas", - "monospace", - ), - ): - """ - Parameters: - primary_hue: The primary hue of the theme. Load a preset, like gradio.themes.colors.green (or just the string "green"), or pass your own gradio.themes.utils.Color object. - secondary_hue: The secondary hue of the theme. Load a preset, like gradio.themes.colors.green (or just the string "green"), or pass your own gradio.themes.utils.Color object. - neutral_hue: The neutral hue of the theme, used . Load a preset, like gradio.themes.colors.green (or just the string "green"), or pass your own gradio.themes.utils.Color object. - text_size: The size of the text. Load a preset, like gradio.themes.sizes.text_sm (or just the string "sm"), or pass your own gradio.themes.utils.Size object. - spacing_size: The size of the spacing. Load a preset, like gradio.themes.sizes.spacing_sm (or just the string "sm"), or pass your own gradio.themes.utils.Size object. - radius_size: The radius size of corners. Load a preset, like gradio.themes.sizes.radius_sm (or just the string "sm"), or pass your own gradio.themes.utils.Size object. 
- font: The primary font to use for the theme. Pass a string for a system font, or a gradio.themes.font.GoogleFont object to load a font from Google Fonts. Pass a list of fonts for fallbacks. - font_mono: The monospace font to use for the theme, applies to code. Pass a string for a system font, or a gradio.themes.font.GoogleFont object to load a font from Google Fonts. Pass a list of fonts for fallbacks. - """ - - self.name = "base" - - def expand_shortcut(shortcut, mode="color", prefix=None): - if not isinstance(shortcut, str): - return shortcut - if mode == "color": - for color in colors.Color.all: - if color.name == shortcut: - return color - raise ValueError(f"Color shortcut {shortcut} not found.") - elif mode == "size": - for size in sizes.Size.all: - if size.name == f"{prefix}_{shortcut}": - return size - raise ValueError(f"Size shortcut {shortcut} not found.") - - primary_hue = expand_shortcut(primary_hue, mode="color") - secondary_hue = expand_shortcut(secondary_hue, mode="color") - neutral_hue = expand_shortcut(neutral_hue, mode="color") - text_size = expand_shortcut(text_size, mode="size", prefix="text") - spacing_size = expand_shortcut(spacing_size, mode="size", prefix="spacing") - radius_size = expand_shortcut(radius_size, mode="size", prefix="radius") - - # Hue ranges - self.primary_50 = primary_hue.c50 - self.primary_100 = primary_hue.c100 - self.primary_200 = primary_hue.c200 - self.primary_300 = primary_hue.c300 - self.primary_400 = primary_hue.c400 - self.primary_500 = primary_hue.c500 - self.primary_600 = primary_hue.c600 - self.primary_700 = primary_hue.c700 - self.primary_800 = primary_hue.c800 - self.primary_900 = primary_hue.c900 - self.primary_950 = primary_hue.c950 - - self.secondary_50 = secondary_hue.c50 - self.secondary_100 = secondary_hue.c100 - self.secondary_200 = secondary_hue.c200 - self.secondary_300 = secondary_hue.c300 - self.secondary_400 = secondary_hue.c400 - self.secondary_500 = secondary_hue.c500 - self.secondary_600 = secondary_hue.c600 - self.secondary_700 = secondary_hue.c700 - self.secondary_800 = secondary_hue.c800 - self.secondary_900 = secondary_hue.c900 - self.secondary_950 = secondary_hue.c950 - - self.neutral_50 = neutral_hue.c50 - self.neutral_100 = neutral_hue.c100 - self.neutral_200 = neutral_hue.c200 - self.neutral_300 = neutral_hue.c300 - self.neutral_400 = neutral_hue.c400 - self.neutral_500 = neutral_hue.c500 - self.neutral_600 = neutral_hue.c600 - self.neutral_700 = neutral_hue.c700 - self.neutral_800 = neutral_hue.c800 - self.neutral_900 = neutral_hue.c900 - self.neutral_950 = neutral_hue.c950 - - # Spacing - self.spacing_xxs = spacing_size.xxs - self.spacing_xs = spacing_size.xs - self.spacing_sm = spacing_size.sm - self.spacing_md = spacing_size.md - self.spacing_lg = spacing_size.lg - self.spacing_xl = spacing_size.xl - self.spacing_xxl = spacing_size.xxl - - self.radius_xxs = radius_size.xxs - self.radius_xs = radius_size.xs - self.radius_sm = radius_size.sm - self.radius_md = radius_size.md - self.radius_lg = radius_size.lg - self.radius_xl = radius_size.xl - self.radius_xxl = radius_size.xxl - - self.text_xxs = text_size.xxs - self.text_xs = text_size.xs - self.text_sm = text_size.sm - self.text_md = text_size.md - self.text_lg = text_size.lg - self.text_xl = text_size.xl - self.text_xxl = text_size.xxl - - # Font - if not isinstance(font, Iterable): - font = [font] - self._font = [ - fontfam if isinstance(fontfam, fonts.Font) else fonts.Font(fontfam) - for fontfam in font - ] - if not isinstance(font_mono, Iterable): - 
font_mono = [font_mono] - self._font_mono = [ - fontfam if isinstance(fontfam, fonts.Font) else fonts.Font(fontfam) - for fontfam in font_mono - ] - self.font = ", ".join(str(font) for font in self._font) - self.font_mono = ", ".join(str(font) for font in self._font_mono) - - self._stylesheets = [] - for font in self._font + self._font_mono: - font_stylesheet = font.stylesheet() - if font_stylesheet: - self._stylesheets.append(font_stylesheet) - - self.set() - - def set( - self, - *, - # Body Attributes: These set set the values for the entire body of the app. - body_background_fill=None, - body_background_fill_dark=None, - body_text_color=None, - body_text_color_dark=None, - body_text_size=None, - body_text_color_subdued=None, - body_text_color_subdued_dark=None, - body_text_weight=None, - embed_radius=None, - # Element Colors: These set the colors for common elements. - background_fill_primary=None, - background_fill_primary_dark=None, - background_fill_secondary=None, - background_fill_secondary_dark=None, - border_color_accent=None, - border_color_accent_dark=None, - border_color_accent_subdued=None, - border_color_accent_subdued_dark=None, - border_color_primary=None, - border_color_primary_dark=None, - color_accent=None, - color_accent_soft=None, - color_accent_soft_dark=None, - # Text: This sets the text styling for text elements. - link_text_color=None, - link_text_color_dark=None, - link_text_color_active=None, - link_text_color_active_dark=None, - link_text_color_hover=None, - link_text_color_hover_dark=None, - link_text_color_visited=None, - link_text_color_visited_dark=None, - prose_text_size=None, - prose_text_weight=None, - prose_header_text_weight=None, - # Shadows: These set the high-level shadow rendering styles. These variables are often referenced by other component-specific shadow variables. - shadow_drop=None, - shadow_drop_lg=None, - shadow_inset=None, - shadow_spread=None, - shadow_spread_dark=None, - # Layout Atoms: These set the style for common layout elements, such as the blocks that wrap components. 
- block_background_fill=None, - block_background_fill_dark=None, - block_border_color=None, - block_border_color_dark=None, - block_border_width=None, - block_border_width_dark=None, - block_info_text_color=None, - block_info_text_color_dark=None, - block_info_text_size=None, - block_info_text_weight=None, - block_label_background_fill=None, - block_label_background_fill_dark=None, - block_label_border_color=None, - block_label_border_color_dark=None, - block_label_border_width=None, - block_label_border_width_dark=None, - block_label_shadow=None, - block_label_text_color=None, - block_label_text_color_dark=None, - block_label_margin=None, - block_label_padding=None, - block_label_radius=None, - block_label_right_radius=None, - block_label_text_size=None, - block_label_text_weight=None, - block_padding=None, - block_radius=None, - block_shadow=None, - block_shadow_dark=None, - block_title_background_fill=None, - block_title_background_fill_dark=None, - block_title_border_color=None, - block_title_border_color_dark=None, - block_title_border_width=None, - block_title_border_width_dark=None, - block_title_text_color=None, - block_title_text_color_dark=None, - block_title_padding=None, - block_title_radius=None, - block_title_text_size=None, - block_title_text_weight=None, - container_radius=None, - form_gap_width=None, - layout_gap=None, - panel_background_fill=None, - panel_background_fill_dark=None, - panel_border_color=None, - panel_border_color_dark=None, - panel_border_width=None, - panel_border_width_dark=None, - section_header_text_size=None, - section_header_text_weight=None, - # Component Atoms: These set the style for elements within components. - chatbot_code_background_color=None, - chatbot_code_background_color_dark=None, - checkbox_background_color=None, - checkbox_background_color_dark=None, - checkbox_background_color_focus=None, - checkbox_background_color_focus_dark=None, - checkbox_background_color_hover=None, - checkbox_background_color_hover_dark=None, - checkbox_background_color_selected=None, - checkbox_background_color_selected_dark=None, - checkbox_border_color=None, - checkbox_border_color_dark=None, - checkbox_border_color_focus=None, - checkbox_border_color_focus_dark=None, - checkbox_border_color_hover=None, - checkbox_border_color_hover_dark=None, - checkbox_border_color_selected=None, - checkbox_border_color_selected_dark=None, - checkbox_border_radius=None, - checkbox_border_width=None, - checkbox_border_width_dark=None, - checkbox_check=None, - radio_circle=None, - checkbox_shadow=None, - checkbox_label_background_fill=None, - checkbox_label_background_fill_dark=None, - checkbox_label_background_fill_hover=None, - checkbox_label_background_fill_hover_dark=None, - checkbox_label_background_fill_selected=None, - checkbox_label_background_fill_selected_dark=None, - checkbox_label_border_color=None, - checkbox_label_border_color_dark=None, - checkbox_label_border_color_hover=None, - checkbox_label_border_color_hover_dark=None, - checkbox_label_border_width=None, - checkbox_label_border_width_dark=None, - checkbox_label_gap=None, - checkbox_label_padding=None, - checkbox_label_shadow=None, - checkbox_label_text_size=None, - checkbox_label_text_weight=None, - checkbox_label_text_color=None, - checkbox_label_text_color_dark=None, - checkbox_label_text_color_selected=None, - checkbox_label_text_color_selected_dark=None, - error_background_fill=None, - error_background_fill_dark=None, - error_border_color=None, - error_border_color_dark=None, - 
error_border_width=None, - error_border_width_dark=None, - error_text_color=None, - error_text_color_dark=None, - error_icon_color=None, - error_icon_color_dark=None, - input_background_fill=None, - input_background_fill_dark=None, - input_background_fill_focus=None, - input_background_fill_focus_dark=None, - input_background_fill_hover=None, - input_background_fill_hover_dark=None, - input_border_color=None, - input_border_color_dark=None, - input_border_color_focus=None, - input_border_color_focus_dark=None, - input_border_color_hover=None, - input_border_color_hover_dark=None, - input_border_width=None, - input_border_width_dark=None, - input_padding=None, - input_placeholder_color=None, - input_placeholder_color_dark=None, - input_radius=None, - input_shadow=None, - input_shadow_dark=None, - input_shadow_focus=None, - input_shadow_focus_dark=None, - input_text_size=None, - input_text_weight=None, - loader_color=None, - loader_color_dark=None, - slider_color=None, - slider_color_dark=None, - stat_background_fill=None, - stat_background_fill_dark=None, - table_border_color=None, - table_border_color_dark=None, - table_even_background_fill=None, - table_even_background_fill_dark=None, - table_odd_background_fill=None, - table_odd_background_fill_dark=None, - table_radius=None, - table_row_focus=None, - table_row_focus_dark=None, - # Buttons: These set the style for buttons. - button_border_width=None, - button_border_width_dark=None, - button_shadow=None, - button_shadow_active=None, - button_shadow_hover=None, - button_transition=None, - button_large_padding=None, - button_large_radius=None, - button_large_text_size=None, - button_large_text_weight=None, - button_small_padding=None, - button_small_radius=None, - button_small_text_size=None, - button_small_text_weight=None, - button_primary_background_fill=None, - button_primary_background_fill_dark=None, - button_primary_background_fill_hover=None, - button_primary_background_fill_hover_dark=None, - button_primary_border_color=None, - button_primary_border_color_dark=None, - button_primary_border_color_hover=None, - button_primary_border_color_hover_dark=None, - button_primary_text_color=None, - button_primary_text_color_dark=None, - button_primary_text_color_hover=None, - button_primary_text_color_hover_dark=None, - button_secondary_background_fill=None, - button_secondary_background_fill_dark=None, - button_secondary_background_fill_hover=None, - button_secondary_background_fill_hover_dark=None, - button_secondary_border_color=None, - button_secondary_border_color_dark=None, - button_secondary_border_color_hover=None, - button_secondary_border_color_hover_dark=None, - button_secondary_text_color=None, - button_secondary_text_color_dark=None, - button_secondary_text_color_hover=None, - button_secondary_text_color_hover_dark=None, - button_cancel_background_fill=None, - button_cancel_background_fill_dark=None, - button_cancel_background_fill_hover=None, - button_cancel_background_fill_hover_dark=None, - button_cancel_border_color=None, - button_cancel_border_color_dark=None, - button_cancel_border_color_hover=None, - button_cancel_border_color_hover_dark=None, - button_cancel_text_color=None, - button_cancel_text_color_dark=None, - button_cancel_text_color_hover=None, - button_cancel_text_color_hover_dark=None, - ) -> Base: - """ - Parameters: - body_background_fill: The background of the entire app. - body_background_fill_dark: The background of the entire app in dark mode. - body_text_color: The default text color. 
- body_text_color_dark: The default text color in dark mode. - body_text_size: The default text size. - body_text_color_subdued: The text color used for softer, less important text. - body_text_color_subdued_dark: The text color used for softer, less important text in dark mode. - body_text_weight: The default text weight. - embed_radius: The corner radius used for embedding when the app is embedded within a page. - background_fill_primary: The background primarily used for items placed directly on the page. - background_fill_primary_dark: The background primarily used for items placed directly on the page in dark mode. - background_fill_secondary: The background primarily used for items placed on top of another item. - background_fill_secondary_dark: The background primarily used for items placed on top of another item in dark mode. - border_color_accent: The border color used for accented items. - border_color_accent_dark: The border color used for accented items in dark mode. - border_color_accent_subdued: The subdued border color for accented items. - border_color_accent_subdued_dark: The subdued border color for accented items in dark mode. - border_color_primary: The border color primarily used for items placed directly on the page. - border_color_primary_dark: The border color primarily used for items placed directly on the page in dark mode. - color_accent: The color used for accented items. - color_accent_soft: The softer color used for accented items. - color_accent_soft_dark: The softer color used for accented items in dark mode. - link_text_color: The text color used for links. - link_text_color_dark: The text color used for links in dark mode. - link_text_color_active: The text color used for links when they are active. - link_text_color_active_dark: The text color used for links when they are active in dark mode. - link_text_color_hover: The text color used for links when they are hovered over. - link_text_color_hover_dark: The text color used for links when they are hovered over in dark mode. - link_text_color_visited: The text color used for links when they have been visited. - link_text_color_visited_dark: The text color used for links when they have been visited in dark mode. - prose_text_size: The text size used for markdown and other prose. - prose_text_weight: The text weight used for markdown and other prose. - prose_header_text_weight: The text weight of a header used for markdown and other prose. - shadow_drop: Drop shadow used by other shadowed items. - shadow_drop_lg: Larger drop shadow used by other shadowed items. - shadow_inset: Inset shadow used by other shadowed items. - shadow_spread: Size of shadow spread used by shadowed items. - shadow_spread_dark: Size of shadow spread used by shadowed items in dark mode. - block_background_fill: The background around an item. - block_background_fill_dark: The background around an item in dark mode. - block_border_color: The border color around an item. - block_border_color_dark: The border color around an item in dark mode. - block_border_width: The border width around an item. - block_border_width_dark: The border width around an item in dark mode. - block_info_text_color: The color of the info text. - block_info_text_color_dark: The color of the info text in dark mode. - block_info_text_size: The size of the info text. - block_info_text_weight: The weight of the info text. - block_label_background_fill: The background of the title label of a media element (e.g. image). 
- block_label_background_fill_dark: The background of the title label of a media element (e.g. image) in dark mode. - block_label_border_color: The border color of the title label of a media element (e.g. image). - block_label_border_color_dark: The border color of the title label of a media element (e.g. image) in dark mode. - block_label_border_width: The border width of the title label of a media element (e.g. image). - block_label_border_width_dark: The border width of the title label of a media element (e.g. image) in dark mode. - block_label_shadow: The shadow of the title label of a media element (e.g. image). - block_label_text_color: The text color of the title label of a media element (e.g. image). - block_label_text_color_dark: The text color of the title label of a media element (e.g. image) in dark mode. - block_label_margin: The margin of the title label of a media element (e.g. image) from its surrounding container. - block_label_padding: The padding of the title label of a media element (e.g. image). - block_label_radius: The corner radius of the title label of a media element (e.g. image). - block_label_right_radius: The corner radius of a right-aligned helper label. - block_label_text_size: The text size of the title label of a media element (e.g. image). - block_label_text_weight: The text weight of the title label of a media element (e.g. image). - block_padding: The padding around an item. - block_radius: The corner radius around an item. - block_shadow: The shadow under an item. - block_shadow_dark: The shadow under an item in dark mode. - block_title_background_fill: The background of the title of a form element (e.g. textbox). - block_title_background_fill_dark: The background of the title of a form element (e.g. textbox) in dark mode. - block_title_border_color: The border color of the title of a form element (e.g. textbox). - block_title_border_color_dark: The border color of the title of a form element (e.g. textbox) in dark mode. - block_title_border_width: The border width of the title of a form element (e.g. textbox). - block_title_border_width_dark: The border width of the title of a form element (e.g. textbox) in dark mode. - block_title_text_color: The text color of the title of a form element (e.g. textbox). - block_title_text_color_dark: The text color of the title of a form element (e.g. textbox) in dark mode. - block_title_padding: The padding of the title of a form element (e.g. textbox). - block_title_radius: The corner radius of the title of a form element (e.g. textbox). - block_title_text_size: The text size of the title of a form element (e.g. textbox). - block_title_text_weight: The text weight of the title of a form element (e.g. textbox). - container_radius: The corner radius of a layout component that holds other content. - form_gap_width: The border gap between form elements, (e.g. consecutive textboxes). - layout_gap: The gap between items within a row or column. - panel_background_fill: The background of a panel. - panel_background_fill_dark: The background of a panel in dark mode. - panel_border_color: The border color of a panel. - panel_border_color_dark: The border color of a panel in dark mode. - panel_border_width: The border width of a panel. - panel_border_width_dark: The border width of a panel in dark mode. - section_header_text_size: The text size of a section header (e.g. tab name). - section_header_text_weight: The text weight of a section header (e.g. tab name). 
- chatbot_code_background_color: The background color of code blocks in the chatbot. - chatbot_code_background_color_dark: The background color of code blocks in the chatbot in dark mode. - checkbox_background_color: The background of a checkbox square or radio circle. - checkbox_background_color_dark: The background of a checkbox square or radio circle in dark mode. - checkbox_background_color_focus: The background of a checkbox square or radio circle when focused. - checkbox_background_color_focus_dark: The background of a checkbox square or radio circle when focused in dark mode. - checkbox_background_color_hover: The background of a checkbox square or radio circle when hovered over. - checkbox_background_color_hover_dark: The background of a checkbox square or radio circle when hovered over in dark mode. - checkbox_background_color_selected: The background of a checkbox square or radio circle when selected. - checkbox_background_color_selected_dark: The background of a checkbox square or radio circle when selected in dark mode. - checkbox_border_color: The border color of a checkbox square or radio circle. - checkbox_border_color_dark: The border color of a checkbox square or radio circle in dark mode. - checkbox_border_color_focus: The border color of a checkbox square or radio circle when focused. - checkbox_border_color_focus_dark: The border color of a checkbox square or radio circle when focused in dark mode. - checkbox_border_color_hover: The border color of a checkbox square or radio circle when hovered over. - checkbox_border_color_hover_dark: The border color of a checkbox square or radio circle when hovered over in dark mode. - checkbox_border_color_selected: The border color of a checkbox square or radio circle when selected. - checkbox_border_color_selected_dark: The border color of a checkbox square or radio circle when selected in dark mode. - checkbox_border_radius: The corner radius of a checkbox square. - checkbox_border_width: The border width of a checkbox square or radio circle. - checkbox_border_width_dark: The border width of a checkbox square or radio circle in dark mode. - checkbox_check: The checkmark visual of a checkbox square. - radio_circle: The circle visual of a radio circle. - checkbox_shadow: The shadow of a checkbox square or radio circle. - checkbox_label_background_fill: The background of the surrounding button of a checkbox or radio element. - checkbox_label_background_fill_dark: The background of the surrounding button of a checkbox or radio element in dark mode. - checkbox_label_background_fill_hover: The background of the surrounding button of a checkbox or radio element when hovered over. - checkbox_label_background_fill_hover_dark: The background of the surrounding button of a checkbox or radio element when hovered over in dark mode. - checkbox_label_background_fill_selected: The background of the surrounding button of a checkbox or radio element when selected. - checkbox_label_background_fill_selected_dark: The background of the surrounding button of a checkbox or radio element when selected in dark mode. - checkbox_label_border_color: The border color of the surrounding button of a checkbox or radio element. - checkbox_label_border_color_dark: The border color of the surrounding button of a checkbox or radio element in dark mode. - checkbox_label_border_color_hover: The border color of the surrounding button of a checkbox or radio element when hovered over. 
- checkbox_label_border_color_hover_dark: The border color of the surrounding button of a checkbox or radio element when hovered over in dark mode. - checkbox_label_border_width: The border width of the surrounding button of a checkbox or radio element. - checkbox_label_border_width_dark: The border width of the surrounding button of a checkbox or radio element in dark mode. - checkbox_label_gap: The gap consecutive checkbox or radio elements. - checkbox_label_padding: The padding of the surrounding button of a checkbox or radio element. - checkbox_label_shadow: The shadow of the surrounding button of a checkbox or radio element. - checkbox_label_text_size: The text size of the label accompanying a checkbox or radio element. - checkbox_label_text_weight: The text weight of the label accompanying a checkbox or radio element. - checkbox_label_text_color: The text color of the label accompanying a checkbox or radio element. - checkbox_label_text_color_dark: The text color of the label accompanying a checkbox or radio element in dark mode. - checkbox_label_text_color_selected: The text color of the label accompanying a checkbox or radio element when selected. - checkbox_label_text_color_selected_dark: The text color of the label accompanying a checkbox or radio element when selected in dark mode. - error_background_fill: The background of an error message. - error_background_fill_dark: The background of an error message in dark mode. - error_border_color: The border color of an error message. - error_border_color_dark: The border color of an error message in dark mode. - error_border_width: The border width of an error message. - error_border_width_dark: The border width of an error message in dark mode. - error_text_color: The text color of an error message. - error_text_color_dark: The text color of an error message in dark mode. - input_background_fill: The background of an input field. - input_background_fill_dark: The background of an input field in dark mode. - input_background_fill_focus: The background of an input field when focused. - input_background_fill_focus_dark: The background of an input field when focused in dark mode. - input_background_fill_hover: The background of an input field when hovered over. - input_background_fill_hover_dark: The background of an input field when hovered over in dark mode. - input_border_color: The border color of an input field. - input_border_color_dark: The border color of an input field in dark mode. - input_border_color_focus: The border color of an input field when focused. - input_border_color_focus_dark: The border color of an input field when focused in dark mode. - input_border_color_hover: The border color of an input field when hovered over. - input_border_color_hover_dark: The border color of an input field when hovered over in dark mode. - input_border_width: The border width of an input field. - input_border_width_dark: The border width of an input field in dark mode. - input_padding: The padding of an input field. - input_placeholder_color: The placeholder text color of an input field. - input_placeholder_color_dark: The placeholder text color of an input field in dark mode. - input_radius: The corner radius of an input field. - input_shadow: The shadow of an input field. - input_shadow_dark: The shadow of an input field in dark mode. - input_shadow_focus: The shadow of an input field when focused. - input_shadow_focus_dark: The shadow of an input field when focused in dark mode. - input_text_size: The text size of an input field. 
- input_text_weight: The text weight of an input field. - loader_color: The color of the loading animation while a request is pending. - loader_color_dark: The color of the loading animation while a request is pending in dark mode. - slider_color: The color of the slider in a range element. - slider_color_dark: The color of the slider in a range element in dark mode. - stat_background_fill: The background used for stats visuals (e.g. confidence bars in label). - stat_background_fill_dark: The background used for stats visuals (e.g. confidence bars in label) in dark mode. - table_border_color: The border color of a table. - table_border_color_dark: The border color of a table in dark mode. - table_even_background_fill: The background of even rows in a table. - table_even_background_fill_dark: The background of even rows in a table in dark mode. - table_odd_background_fill: The background of odd rows in a table. - table_odd_background_fill_dark: The background of odd rows in a table in dark mode. - table_radius: The corner radius of a table. - table_row_focus: The background of a focused row in a table. - table_row_focus_dark: The background of a focused row in a table in dark mode. - button_border_width: The border width of a button. - button_border_width_dark: The border width of a button in dark mode. - button_cancel_background_fill: The background of a button of "cancel" variant. - button_cancel_background_fill_dark: The background of a button of "cancel" variant in dark mode. - button_cancel_background_fill_hover: The background of a button of "cancel" variant when hovered over. - button_cancel_background_fill_hover_dark: The background of a button of "cancel" variant when hovered over in dark mode. - button_cancel_border_color: The border color of a button of "cancel" variant. - button_cancel_border_color_dark: The border color of a button of "cancel" variant in dark mode. - button_cancel_border_color_hover: The border color of a button of "cancel" variant when hovered over. - button_cancel_border_color_hover_dark: The border color of a button of "cancel" variant when hovered over in dark mode. - button_cancel_text_color: The text color of a button of "cancel" variant. - button_cancel_text_color_dark: The text color of a button of "cancel" variant in dark mode. - button_cancel_text_color_hover: The text color of a button of "cancel" variant when hovered over. - button_cancel_text_color_hover_dark: The text color of a button of "cancel" variant when hovered over in dark mode. - button_large_padding: The padding of a button with the default "large" size. - button_large_radius: The corner radius of a button with the default "large" size. - button_large_text_size: The text size of a button with the default "large" size. - button_large_text_weight: The text weight of a button with the default "large" size. - button_primary_background_fill: The background of a button of "primary" variant. - button_primary_background_fill_dark: The background of a button of "primary" variant in dark mode. - button_primary_background_fill_hover: The background of a button of "primary" variant when hovered over. - button_primary_background_fill_hover_dark: The background of a button of "primary" variant when hovered over in dark mode. - button_primary_border_color: The border color of a button of "primary" variant. - button_primary_border_color_dark: The border color of a button of "primary" variant in dark mode. 
- button_primary_border_color_hover: The border color of a button of "primary" variant when hovered over. - button_primary_border_color_hover_dark: The border color of a button of "primary" variant when hovered over in dark mode. - button_primary_text_color: The text color of a button of "primary" variant. - button_primary_text_color_dark: The text color of a button of "primary" variant in dark mode. - button_primary_text_color_hover: The text color of a button of "primary" variant when hovered over. - button_primary_text_color_hover_dark: The text color of a button of "primary" variant when hovered over in dark mode. - button_secondary_background_fill: The background of a button of default "secondary" variant. - button_secondary_background_fill_dark: The background of a button of default "secondary" variant in dark mode. - button_secondary_background_fill_hover: The background of a button of default "secondary" variant when hovered over. - button_secondary_background_fill_hover_dark: The background of a button of default "secondary" variant when hovered over in dark mode. - button_secondary_border_color: The border color of a button of default "secondary" variant. - button_secondary_border_color_dark: The border color of a button of default "secondary" variant in dark mode. - button_secondary_border_color_hover: The border color of a button of default "secondary" variant when hovered over. - button_secondary_border_color_hover_dark: The border color of a button of default "secondary" variant when hovered over in dark mode. - button_secondary_text_color: The text color of a button of default "secondary" variant. - button_secondary_text_color_dark: The text color of a button of default "secondary" variant in dark mode. - button_secondary_text_color_hover: The text color of a button of default "secondary" variant when hovered over. - button_secondary_text_color_hover_dark: The text color of a button of default "secondary" variant when hovered over in dark mode. - button_shadow: The shadow under a button. - button_shadow_active: The shadow under a button when pressed. - button_shadow_hover: The shadow under a button when hovered over. - button_small_padding: The padding of a button set to "small" size. - button_small_radius: The corner radius of a button set to "small" size. - button_small_text_size: The text size of a button set to "small" size. - button_small_text_weight: The text weight of a button set to "small" size. - button_transition: The transition animation duration of a button between regular, hover, and focused states. 
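The constructor presets documented earlier (hues, sizes, font stacks) and the long list of CSS variables accepted by `set()` above together describe the whole theming API in this file. As a hedged illustration of how they are meant to be combined, the sketch below subclasses `Base`, picks a few presets, and overrides a handful of the variables documented above. The class name `Seafoam` and the concrete values are illustrative assumptions, not part of this file.

```python
# Sketch only: the class name "Seafoam" and the concrete values are assumptions.
# Every keyword passed to set() is one of the CSS variables documented above,
# and "*..." strings reference other theme variables, as in the defaults below.
from gradio.themes.base import Base
from gradio.themes.utils import colors, fonts, sizes


class Seafoam(Base):
    def __init__(self):
        # Presets cover the broad strokes: hues, sizes, and the font stacks.
        super().__init__(
            primary_hue=colors.emerald,
            secondary_hue=colors.blue,
            neutral_hue=colors.gray,
            text_size=sizes.text_md,
            radius_size=sizes.radius_md,
            font=(fonts.GoogleFont("Source Sans Pro"), "ui-sans-serif", "sans-serif"),
        )
        # set() then overrides individual variables; anything not passed here
        # keeps the default assigned in the body of set() below.
        self.set(
            body_background_fill="*neutral_50",
            button_primary_background_fill="*primary_500",
            button_primary_text_color="white",
            block_radius="*radius_lg",
        )
```

A theme built this way can then be passed to a Gradio app (e.g. `gr.Blocks(theme=Seafoam())`) or published with the `push_to_hub()` machinery defined earlier in this file.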
- """ - - # Body - self.body_background_fill = body_background_fill or getattr( - self, "body_background_fill", "*background_fill_primary" - ) - self.body_background_fill_dark = body_background_fill_dark or getattr( - self, "body_background_fill_dark", "*background_fill_primary" - ) - self.body_text_color = body_text_color or getattr( - self, "body_text_color", "*neutral_800" - ) - self.body_text_color_dark = body_text_color_dark or getattr( - self, "body_text_color_dark", "*neutral_100" - ) - self.body_text_size = body_text_size or getattr( - self, "body_text_size", "*text_md" - ) - self.body_text_weight = body_text_weight or getattr( - self, "body_text_weight", "400" - ) - self.embed_radius = embed_radius or getattr(self, "embed_radius", "*radius_lg") - # Core Colors - self.color_accent = color_accent or getattr( - self, "color_accent", "*primary_500" - ) - self.color_accent_soft = color_accent_soft or getattr( - self, "color_accent_soft", "*primary_50" - ) - self.color_accent_soft_dark = color_accent_soft_dark or getattr( - self, "color_accent_soft_dark", "*neutral_700" - ) - self.background_fill_primary = background_fill_primary or getattr( - self, "background_primary", "white" - ) - self.background_fill_primary_dark = background_fill_primary_dark or getattr( - self, "background_primary_dark", "*neutral_950" - ) - self.background_fill_secondary = background_fill_secondary or getattr( - self, "background_secondary", "*neutral_50" - ) - self.background_fill_secondary_dark = background_fill_secondary_dark or getattr( - self, "background_secondary_dark", "*neutral_900" - ) - self.border_color_accent = border_color_accent or getattr( - self, "border_color_accent", "*primary_300" - ) - self.border_color_accent_dark = border_color_accent_dark or getattr( - self, "border_color_accent_dark", "*neutral_600" - ) - self.border_color_primary = border_color_primary or getattr( - self, "border_color_primary", "*neutral_200" - ) - self.border_color_primary_dark = border_color_primary_dark or getattr( - self, "border_color_primary_dark", "*neutral_700" - ) - # Text Colors - self.link_text_color = link_text_color or getattr( - self, "link_text_color", "*secondary_600" - ) - self.link_text_color_active = link_text_color_active or getattr( - self, "link_text_color_active", "*secondary_600" - ) - self.link_text_color_active_dark = link_text_color_active_dark or getattr( - self, "link_text_color_active_dark", "*secondary_500" - ) - self.link_text_color_dark = link_text_color_dark or getattr( - self, "link_text_color_dark", "*secondary_500" - ) - self.link_text_color_hover = link_text_color_hover or getattr( - self, "link_text_color_hover", "*secondary_700" - ) - self.link_text_color_hover_dark = link_text_color_hover_dark or getattr( - self, "link_text_color_hover_dark", "*secondary_400" - ) - self.link_text_color_visited = link_text_color_visited or getattr( - self, "link_text_color_visited", "*secondary_500" - ) - self.link_text_color_visited_dark = link_text_color_visited_dark or getattr( - self, "link_text_color_visited_dark", "*secondary_600" - ) - self.body_text_color_subdued = body_text_color_subdued or getattr( - self, "body_text_color_subdued", "*neutral_400" - ) - self.body_text_color_subdued_dark = body_text_color_subdued_dark or getattr( - self, "body_text_color_subdued_dark", "*neutral_400" - ) - # Shadows - self.shadow_drop = shadow_drop or getattr( - self, "shadow_drop", "rgba(0,0,0,0.05) 0px 1px 2px 0px" - ) - self.shadow_drop_lg = shadow_drop_lg or getattr( - self, - "shadow_drop_lg", - "0 
1px 3px 0 rgb(0 0 0 / 0.1), 0 1px 2px -1px rgb(0 0 0 / 0.1)", - ) - self.shadow_inset = shadow_inset or getattr( - self, "shadow_inset", "rgba(0,0,0,0.05) 0px 2px 4px 0px inset" - ) - self.shadow_spread = shadow_spread or getattr(self, "shadow_spread", "3px") - self.shadow_spread_dark = shadow_spread_dark or getattr( - self, "shadow_spread_dark", "1px" - ) - # Layout Atoms - self.block_background_fill = block_background_fill or getattr( - self, "block_background_fill", "*background_fill_primary" - ) - self.block_background_fill_dark = block_background_fill_dark or getattr( - self, "block_background_fill_dark", "*neutral_800" - ) - self.block_border_color = block_border_color or getattr( - self, "block_border_color", "*border_color_primary" - ) - self.block_border_color_dark = block_border_color_dark or getattr( - self, "block_border_color_dark", "*border_color_primary" - ) - self.block_border_width = block_border_width or getattr( - self, "block_border_width", "1px" - ) - self.block_border_width_dark = block_border_width_dark or getattr( - self, "block_border_width_dark", None - ) - self.block_info_text_color = block_info_text_color or getattr( - self, "block_info_text_color", "*body_text_color_subdued" - ) - self.block_info_text_color_dark = block_info_text_color_dark or getattr( - self, "block_info_text_color_dark", "*body_text_color_subdued" - ) - self.block_info_text_size = block_info_text_size or getattr( - self, "block_info_text_size", "*text_sm" - ) - self.block_info_text_weight = block_info_text_weight or getattr( - self, "block_info_text_weight", "400" - ) - self.block_label_background_fill = block_label_background_fill or getattr( - self, "block_label_background_fill", "*background_fill_primary" - ) - self.block_label_background_fill_dark = ( - block_label_background_fill_dark - or getattr( - self, "block_label_background_fill_dark", "*background_fill_secondary" - ) - ) - self.block_label_border_color = block_label_border_color or getattr( - self, "block_label_border_color", "*border_color_primary" - ) - self.block_label_border_color_dark = block_label_border_color_dark or getattr( - self, "block_label_border_color_dark", "*border_color_primary" - ) - self.block_label_border_width = block_label_border_width or getattr( - self, "block_label_border_width", "1px" - ) - self.block_label_border_width_dark = block_label_border_width_dark or getattr( - self, "block_label_border_width_dark", None - ) - self.block_label_shadow = block_label_shadow or getattr( - self, "block_label_shadow", "*block_shadow" - ) - self.block_label_text_color = block_label_text_color or getattr( - self, "block_label_text_color", "*neutral_500" - ) - self.block_label_text_color_dark = block_label_text_color_dark or getattr( - self, "block_label_text_color_dark", "*neutral_200" - ) - self.block_label_margin = block_label_margin or getattr( - self, "block_label_margin", "0" - ) - self.block_label_padding = block_label_padding or getattr( - self, "block_label_padding", "*spacing_sm *spacing_lg" - ) - self.block_label_radius = block_label_radius or getattr( - self, - "block_label_radius", - "calc(*radius_lg - 1px) 0 calc(*radius_lg - 1px) 0", - ) - self.block_label_right_radius = block_label_right_radius or getattr( - self, - "block_label_right_radius", - "0 calc(*radius_lg - 1px) 0 calc(*radius_lg - 1px)", - ) - self.block_label_text_size = block_label_text_size or getattr( - self, "block_label_text_size", "*text_sm" - ) - self.block_label_text_weight = block_label_text_weight or getattr( - self, 
"block_label_text_weight", "400" - ) - self.block_padding = block_padding or getattr( - self, "block_padding", "*spacing_xl calc(*spacing_xl + 2px)" - ) - self.block_radius = block_radius or getattr(self, "block_radius", "*radius_lg") - self.block_shadow = block_shadow or getattr(self, "block_shadow", "none") - self.block_shadow_dark = block_shadow_dark or getattr( - self, "block_shadow_dark", None - ) - self.block_title_background_fill = block_title_background_fill or getattr( - self, "block_title_background_fill", "none" - ) - self.block_title_background_fill_dark = ( - block_title_background_fill_dark - or getattr(self, "block_title_background_fill_dark", None) - ) - self.block_title_border_color = block_title_border_color or getattr( - self, "block_title_border_color", "none" - ) - self.block_title_border_color_dark = block_title_border_color_dark or getattr( - self, "block_title_border_color_dark", None - ) - self.block_title_border_width = block_title_border_width or getattr( - self, "block_title_border_width", "0px" - ) - self.block_title_border_width_dark = block_title_border_width_dark or getattr( - self, "block_title_border_width_dark", None - ) - self.block_title_text_color = block_title_text_color or getattr( - self, "block_title_text_color", "*neutral_500" - ) - self.block_title_text_color_dark = block_title_text_color_dark or getattr( - self, "block_title_text_color_dark", "*neutral_200" - ) - self.block_title_padding = block_title_padding or getattr( - self, "block_title_padding", "0" - ) - self.block_title_radius = block_title_radius or getattr( - self, "block_title_radius", "none" - ) - self.block_title_text_size = block_title_text_size or getattr( - self, "block_title_text_size", "*text_md" - ) - self.block_title_text_weight = block_title_text_weight or getattr( - self, "block_title_text_weight", "400" - ) - self.container_radius = container_radius or getattr( - self, "container_radius", "*radius_lg" - ) - self.form_gap_width = form_gap_width or getattr(self, "form_gap_width", "0px") - self.layout_gap = layout_gap or getattr(self, "layout_gap", "*spacing_xxl") - self.panel_background_fill = panel_background_fill or getattr( - self, "panel_background_fill", "*background_fill_secondary" - ) - self.panel_background_fill_dark = panel_background_fill_dark or getattr( - self, "panel_background_fill_dark", "*background_fill_secondary" - ) - self.panel_border_color = panel_border_color or getattr( - self, "panel_border_color", "*border_color_primary" - ) - self.panel_border_color_dark = panel_border_color_dark or getattr( - self, "panel_border_color_dark", "*border_color_primary" - ) - self.panel_border_width = panel_border_width or getattr( - self, "panel_border_width", "0" - ) - self.panel_border_width_dark = panel_border_width_dark or getattr( - self, "panel_border_width_dark", None - ) - self.section_header_text_size = section_header_text_size or getattr( - self, "section_header_text_size", "*text_md" - ) - self.section_header_text_weight = section_header_text_weight or getattr( - self, "section_header_text_weight", "400" - ) - self.border_color_accent_subdued = border_color_accent_subdued or getattr( - self, "border_color_accent_subdued", "*border_color_accent" - ) - self.border_color_accent_subdued_dark = ( - border_color_accent_subdued_dark - or getattr(self, "border_color_accent_subdued_dark", "*border_color_accent") - ) - # Component Atoms - self.chatbot_code_background_color = chatbot_code_background_color or getattr( - self, "chatbot_code_background_color", 
"*neutral_100" - ) - self.chatbot_code_background_color_dark = ( - chatbot_code_background_color_dark - or getattr(self, "chatbot_code_background_color_dark", "*neutral_800") - ) - self.checkbox_background_color = checkbox_background_color or getattr( - self, "checkbox_background_color", "*background_fill_primary" - ) - self.checkbox_background_color_dark = checkbox_background_color_dark or getattr( - self, "checkbox_background_color_dark", "*neutral_800" - ) - self.checkbox_background_color_focus = ( - checkbox_background_color_focus - or getattr( - self, "checkbox_background_color_focus", "*checkbox_background_color" - ) - ) - self.checkbox_background_color_focus_dark = ( - checkbox_background_color_focus_dark - or getattr( - self, - "checkbox_background_color_focus_dark", - "*checkbox_background_color", - ) - ) - self.checkbox_background_color_hover = ( - checkbox_background_color_hover - or getattr( - self, "checkbox_background_color_hover", "*checkbox_background_color" - ) - ) - self.checkbox_background_color_hover_dark = ( - checkbox_background_color_hover_dark - or getattr( - self, - "checkbox_background_color_hover_dark", - "*checkbox_background_color", - ) - ) - self.checkbox_background_color_selected = ( - checkbox_background_color_selected - or getattr(self, "checkbox_background_color_selected", "*secondary_600") - ) - self.checkbox_background_color_selected_dark = ( - checkbox_background_color_selected_dark - or getattr( - self, "checkbox_background_color_selected_dark", "*secondary_600" - ) - ) - self.checkbox_border_color = checkbox_border_color or getattr( - self, "checkbox_border_color", "*neutral_300" - ) - self.checkbox_border_color_dark = checkbox_border_color_dark or getattr( - self, "checkbox_border_color_dark", "*neutral_700" - ) - self.checkbox_border_color_focus = checkbox_border_color_focus or getattr( - self, "checkbox_border_color_focus", "*secondary_500" - ) - self.checkbox_border_color_focus_dark = ( - checkbox_border_color_focus_dark - or getattr(self, "checkbox_border_color_focus_dark", "*secondary_500") - ) - self.checkbox_border_color_hover = checkbox_border_color_hover or getattr( - self, "checkbox_border_color_hover", "*neutral_300" - ) - self.checkbox_border_color_hover_dark = ( - checkbox_border_color_hover_dark - or getattr(self, "checkbox_border_color_hover_dark", "*neutral_600") - ) - self.checkbox_border_color_selected = checkbox_border_color_selected or getattr( - self, "checkbox_border_color_selected", "*secondary_600" - ) - self.checkbox_border_color_selected_dark = ( - checkbox_border_color_selected_dark - or getattr(self, "checkbox_border_color_selected_dark", "*secondary_600") - ) - self.checkbox_border_radius = checkbox_border_radius or getattr( - self, "checkbox_border_radius", "*radius_sm" - ) - self.checkbox_border_width = checkbox_border_width or getattr( - self, "checkbox_border_width", "*input_border_width" - ) - self.checkbox_border_width_dark = checkbox_border_width_dark or getattr( - self, "checkbox_border_width_dark", "*input_border_width" - ) - self.checkbox_label_background_fill = checkbox_label_background_fill or getattr( - self, "checkbox_label_background_fill", "*button_secondary_background_fill" - ) - self.checkbox_label_background_fill_dark = ( - checkbox_label_background_fill_dark - or getattr( - self, - "checkbox_label_background_fill_dark", - "*button_secondary_background_fill", - ) - ) - self.checkbox_label_background_fill_hover = ( - checkbox_label_background_fill_hover - or getattr( - self, - 
"checkbox_label_background_fill_hover", - "*button_secondary_background_fill_hover", - ) - ) - self.checkbox_label_background_fill_hover_dark = ( - checkbox_label_background_fill_hover_dark - or getattr( - self, - "checkbox_label_background_fill_hover_dark", - "*button_secondary_background_fill_hover", - ) - ) - self.checkbox_label_background_fill_selected = ( - checkbox_label_background_fill_selected - or getattr( - self, - "checkbox_label_background_fill_selected", - "*checkbox_label_background_fill", - ) - ) - self.checkbox_label_background_fill_selected_dark = ( - checkbox_label_background_fill_selected_dark - or getattr( - self, - "checkbox_label_background_fill_selected_dark", - "*checkbox_label_background_fill", - ) - ) - self.checkbox_label_border_color = checkbox_label_border_color or getattr( - self, "checkbox_label_border_color", "*border_color_primary" - ) - self.checkbox_label_border_color_dark = ( - checkbox_label_border_color_dark - or getattr( - self, "checkbox_label_border_color_dark", "*border_color_primary" - ) - ) - self.checkbox_label_border_color_hover = ( - checkbox_label_border_color_hover - or getattr( - self, - "checkbox_label_border_color_hover", - "*checkbox_label_border_color", - ) - ) - self.checkbox_label_border_color_hover_dark = ( - checkbox_label_border_color_hover_dark - or getattr( - self, - "checkbox_label_border_color_hover_dark", - "*checkbox_label_border_color", - ) - ) - self.checkbox_label_border_width = checkbox_label_border_width or getattr( - self, "checkbox_label_border_width", "*input_border_width" - ) - self.checkbox_label_border_width_dark = ( - checkbox_label_border_width_dark - or getattr(self, "checkbox_label_border_width_dark", "*input_border_width") - ) - self.checkbox_label_gap = checkbox_label_gap or getattr( - self, "checkbox_label_gap", "*spacing_lg" - ) - self.checkbox_label_padding = checkbox_label_padding or getattr( - self, "checkbox_label_padding", "*spacing_md calc(2 * *spacing_md)" - ) - self.checkbox_label_shadow = checkbox_label_shadow or getattr( - self, "checkbox_label_shadow", "none" - ) - self.checkbox_label_text_size = checkbox_label_text_size or getattr( - self, "checkbox_label_text_size", "*text_md" - ) - self.checkbox_label_text_weight = checkbox_label_text_weight or getattr( - self, "checkbox_label_text_weight", "400" - ) - self.checkbox_check = checkbox_check or getattr( - self, - "checkbox_check", - """url("data:image/svg+xml,%3csvg viewBox='0 0 16 16' fill='white' xmlns='http://www.w3.org/2000/svg'%3e%3cpath d='M12.207 4.793a1 1 0 010 1.414l-5 5a1 1 0 01-1.414 0l-2-2a1 1 0 011.414-1.414L6.5 9.086l4.293-4.293a1 1 0 011.414 0z'/%3e%3c/svg%3e")""", - ) - self.radio_circle = radio_circle or getattr( - self, - "radio_circle", - """url("data:image/svg+xml,%3csvg viewBox='0 0 16 16' fill='white' xmlns='http://www.w3.org/2000/svg'%3e%3ccircle cx='8' cy='8' r='3'/%3e%3c/svg%3e")""", - ) - self.checkbox_shadow = checkbox_shadow or getattr( - self, "checkbox_shadow", "*input_shadow" - ) - self.checkbox_label_text_color = checkbox_label_text_color or getattr( - self, "checkbox_label_text_color", "*body_text_color" - ) - self.checkbox_label_text_color_dark = checkbox_label_text_color_dark or getattr( - self, "checkbox_label_text_color_dark", "*body_text_color" - ) - self.checkbox_label_text_color_selected = ( - checkbox_label_text_color_selected - or getattr( - self, "checkbox_label_text_color_selected", "*checkbox_label_text_color" - ) - ) - self.checkbox_label_text_color_selected_dark = ( - 
checkbox_label_text_color_selected_dark - or getattr( - self, - "checkbox_label_text_color_selected_dark", - "*checkbox_label_text_color", - ) - ) - self.error_background_fill = error_background_fill or getattr( - self, "error_background_fill", colors.red.c50 - ) - self.error_background_fill_dark = error_background_fill_dark or getattr( - self, "error_background_fill_dark", "*background_fill_primary" - ) - self.error_border_color = error_border_color or getattr( - self, "error_border_color", colors.red.c700 - ) - self.error_border_color_dark = error_border_color_dark or getattr( - self, "error_border_color_dark", colors.red.c500 - ) - self.error_border_width = error_border_width or getattr( - self, "error_border_width", "1px" - ) - self.error_border_width_dark = error_border_width_dark or getattr( - self, "error_border_width_dark", None - ) - self.error_text_color = error_text_color or getattr( - self, "error_text_color", colors.red.c700 - ) - self.error_text_color_dark = error_text_color_dark or getattr( - self, "error_text_color_dark", colors.red.c50 - ) - self.error_icon_color = error_icon_color or getattr( - self, "error_icon_color", colors.red.c700 - ) - self.error_icon_color_dark = error_icon_color_dark or getattr( - self, "error_icon_color_dark", colors.red.c500 - ) - self.input_background_fill = input_background_fill or getattr( - self, "input_background_fill", "*neutral_100" - ) - self.input_background_fill_dark = input_background_fill_dark or getattr( - self, "input_background_fill_dark", "*neutral_700" - ) - self.input_background_fill_focus = input_background_fill_focus or getattr( - self, "input_background_fill_focus", "*secondary_500" - ) - self.input_background_fill_focus_dark = ( - input_background_fill_focus_dark - or getattr(self, "input_background_fill_focus_dark", "*secondary_600") - ) - self.input_background_fill_hover = input_background_fill_hover or getattr( - self, "input_background_fill_hover", "*input_background_fill" - ) - self.input_background_fill_hover_dark = ( - input_background_fill_hover_dark - or getattr( - self, "input_background_fill_hover_dark", "*input_background_fill" - ) - ) - self.input_border_color = input_border_color or getattr( - self, "input_border_color", "*border_color_primary" - ) - self.input_border_color_dark = input_border_color_dark or getattr( - self, "input_border_color_dark", "*border_color_primary" - ) - self.input_border_color_focus = input_border_color_focus or getattr( - self, "input_border_color_focus", "*secondary_300" - ) - self.input_border_color_focus_dark = input_border_color_focus_dark or getattr( - self, "input_border_color_focus_dark", "*neutral_700" - ) - self.input_border_color_hover = input_border_color_hover or getattr( - self, "input_border_color_hover", "*input_border_color" - ) - self.input_border_color_hover_dark = input_border_color_hover_dark or getattr( - self, "input_border_color_hover_dark", "*input_border_color" - ) - self.input_border_width = input_border_width or getattr( - self, "input_border_width", "0px" - ) - self.input_border_width_dark = input_border_width_dark or getattr( - self, "input_border_width_dark", None - ) - self.input_padding = input_padding or getattr( - self, "input_padding", "*spacing_xl" - ) - self.input_placeholder_color = input_placeholder_color or getattr( - self, "input_placeholder_color", "*neutral_400" - ) - self.input_placeholder_color_dark = input_placeholder_color_dark or getattr( - self, "input_placeholder_color_dark", "*neutral_500" - ) - self.input_radius = input_radius or 
getattr(self, "input_radius", "*radius_lg") - self.input_shadow = input_shadow or getattr(self, "input_shadow", "none") - self.input_shadow_dark = input_shadow_dark or getattr( - self, "input_shadow_dark", None - ) - self.input_shadow_focus = input_shadow_focus or getattr( - self, "input_shadow_focus", "*input_shadow" - ) - self.input_shadow_focus_dark = input_shadow_focus_dark or getattr( - self, "input_shadow_focus_dark", None - ) - self.input_text_size = input_text_size or getattr( - self, "input_text_size", "*text_md" - ) - self.input_text_weight = input_text_weight or getattr( - self, "input_text_weight", "400" - ) - self.loader_color = loader_color or getattr( - self, "loader_color", "*color_accent" - ) - self.loader_color_dark = loader_color_dark or getattr( - self, "loader_color_dark", None - ) - self.prose_text_size = prose_text_size or getattr( - self, "prose_text_size", "*text_md" - ) - self.prose_text_weight = prose_text_weight or getattr( - self, "prose_text_weight", "400" - ) - self.prose_header_text_weight = prose_header_text_weight or getattr( - self, "prose_header_text_weight", "600" - ) - self.slider_color = slider_color or getattr(self, "slider_color", "auto") - self.slider_color_dark = slider_color_dark or getattr( - self, "slider_color_dark", None - ) - self.stat_background_fill = stat_background_fill or getattr( - self, "stat_background_fill", "*primary_300" - ) - self.stat_background_fill_dark = stat_background_fill_dark or getattr( - self, "stat_background_fill_dark", "*primary_500" - ) - self.table_border_color = table_border_color or getattr( - self, "table_border_color", "*neutral_300" - ) - self.table_border_color_dark = table_border_color_dark or getattr( - self, "table_border_color_dark", "*neutral_700" - ) - self.table_even_background_fill = table_even_background_fill or getattr( - self, "table_even_background_fill", "white" - ) - self.table_even_background_fill_dark = ( - table_even_background_fill_dark - or getattr(self, "table_even_background_fill_dark", "*neutral_950") - ) - self.table_odd_background_fill = table_odd_background_fill or getattr( - self, "table_odd_background_fill", "*neutral_50" - ) - self.table_odd_background_fill_dark = table_odd_background_fill_dark or getattr( - self, "table_odd_background_fill_dark", "*neutral_900" - ) - self.table_radius = table_radius or getattr(self, "table_radius", "*radius_lg") - self.table_row_focus = table_row_focus or getattr( - self, "table_row_focus", "*color_accent_soft" - ) - self.table_row_focus_dark = table_row_focus_dark or getattr( - self, "table_row_focus_dark", "*color_accent_soft" - ) - # Buttons - self.button_border_width = button_border_width or getattr( - self, "button_border_width", "*input_border_width" - ) - self.button_border_width_dark = button_border_width_dark or getattr( - self, "button_border_width_dark", "*input_border_width" - ) - self.button_cancel_background_fill = button_cancel_background_fill or getattr( - self, "button_cancel_background_fill", "*button_secondary_background_fill" - ) - self.button_cancel_background_fill_dark = ( - button_cancel_background_fill_dark - or getattr( - self, - "button_cancel_background_fill_dark", - "*button_secondary_background_fill", - ) - ) - self.button_cancel_background_fill_hover = ( - button_cancel_background_fill_hover - or getattr( - self, - "button_cancel_background_fill_hover", - "*button_cancel_background_fill", - ) - ) - self.button_cancel_background_fill_hover_dark = ( - button_cancel_background_fill_hover_dark - or getattr( - self, - 
"button_cancel_background_fill_hover_dark", - "*button_cancel_background_fill", - ) - ) - self.button_cancel_border_color = button_cancel_border_color or getattr( - self, "button_cancel_border_color", "*button_secondary_border_color" - ) - self.button_cancel_border_color_dark = ( - button_cancel_border_color_dark - or getattr( - self, - "button_cancel_border_color_dark", - "*button_secondary_border_color", - ) - ) - self.button_cancel_border_color_hover = ( - button_cancel_border_color_hover - or getattr( - self, - "button_cancel_border_color_hover", - "*button_cancel_border_color", - ) - ) - self.button_cancel_border_color_hover_dark = ( - button_cancel_border_color_hover_dark - or getattr( - self, - "button_cancel_border_color_hover_dark", - "*button_cancel_border_color", - ) - ) - self.button_cancel_text_color = button_cancel_text_color or getattr( - self, "button_cancel_text_color", "*button_secondary_text_color" - ) - self.button_cancel_text_color_dark = button_cancel_text_color_dark or getattr( - self, "button_cancel_text_color_dark", "*button_secondary_text_color" - ) - self.button_cancel_text_color_hover = button_cancel_text_color_hover or getattr( - self, "button_cancel_text_color_hover", "*button_cancel_text_color" - ) - self.button_cancel_text_color_hover_dark = ( - button_cancel_text_color_hover_dark - or getattr( - self, "button_cancel_text_color_hover_dark", "*button_cancel_text_color" - ) - ) - self.button_large_padding = button_large_padding or getattr( - self, "button_large_padding", "*spacing_lg calc(2 * *spacing_lg)" - ) - self.button_large_radius = button_large_radius or getattr( - self, "button_large_radius", "*radius_lg" - ) - self.button_large_text_size = button_large_text_size or getattr( - self, "button_large_text_size", "*text_lg" - ) - self.button_large_text_weight = button_large_text_weight or getattr( - self, "button_large_text_weight", "600" - ) - self.button_primary_background_fill = button_primary_background_fill or getattr( - self, "button_primary_background_fill", "*primary_200" - ) - self.button_primary_background_fill_dark = ( - button_primary_background_fill_dark - or getattr(self, "button_primary_background_fill_dark", "*primary_700") - ) - self.button_primary_background_fill_hover = ( - button_primary_background_fill_hover - or getattr( - self, - "button_primary_background_fill_hover", - "*button_primary_background_fill", - ) - ) - self.button_primary_background_fill_hover_dark = ( - button_primary_background_fill_hover_dark - or getattr( - self, - "button_primary_background_fill_hover_dark", - "*button_primary_background_fill", - ) - ) - self.button_primary_border_color = button_primary_border_color or getattr( - self, "button_primary_border_color", "*primary_200" - ) - self.button_primary_border_color_dark = ( - button_primary_border_color_dark - or getattr(self, "button_primary_border_color_dark", "*primary_600") - ) - self.button_primary_border_color_hover = ( - button_primary_border_color_hover - or getattr( - self, - "button_primary_border_color_hover", - "*button_primary_border_color", - ) - ) - self.button_primary_border_color_hover_dark = ( - button_primary_border_color_hover_dark - or getattr( - self, - "button_primary_border_color_hover_dark", - "*button_primary_border_color", - ) - ) - self.button_primary_text_color = button_primary_text_color or getattr( - self, "button_primary_text_color", "*primary_600" - ) - self.button_primary_text_color_dark = button_primary_text_color_dark or getattr( - self, "button_primary_text_color_dark", 
"white" - ) - self.button_primary_text_color_hover = ( - button_primary_text_color_hover - or getattr( - self, "button_primary_text_color_hover", "*button_primary_text_color" - ) - ) - self.button_primary_text_color_hover_dark = ( - button_primary_text_color_hover_dark - or getattr( - self, - "button_primary_text_color_hover_dark", - "*button_primary_text_color", - ) - ) - self.button_secondary_background_fill = ( - button_secondary_background_fill - or getattr(self, "button_secondary_background_fill", "*neutral_200") - ) - self.button_secondary_background_fill_dark = ( - button_secondary_background_fill_dark - or getattr(self, "button_secondary_background_fill_dark", "*neutral_600") - ) - self.button_secondary_background_fill_hover = ( - button_secondary_background_fill_hover - or getattr( - self, - "button_secondary_background_fill_hover", - "*button_secondary_background_fill", - ) - ) - self.button_secondary_background_fill_hover_dark = ( - button_secondary_background_fill_hover_dark - or getattr( - self, - "button_secondary_background_fill_hover_dark", - "*button_secondary_background_fill", - ) - ) - self.button_secondary_border_color = button_secondary_border_color or getattr( - self, "button_secondary_border_color", "*neutral_200" - ) - self.button_secondary_border_color_dark = ( - button_secondary_border_color_dark - or getattr(self, "button_secondary_border_color_dark", "*neutral_600") - ) - self.button_secondary_border_color_hover = ( - button_secondary_border_color_hover - or getattr( - self, - "button_secondary_border_color_hover", - "*button_secondary_border_color", - ) - ) - self.button_secondary_border_color_hover_dark = ( - button_secondary_border_color_hover_dark - or getattr( - self, - "button_secondary_border_color_hover_dark", - "*button_secondary_border_color", - ) - ) - self.button_secondary_text_color = button_secondary_text_color or getattr( - self, "button_secondary_text_color", "*neutral_700" - ) - self.button_secondary_text_color_dark = ( - button_secondary_text_color_dark - or getattr(self, "button_secondary_text_color_dark", "white") - ) - self.button_secondary_text_color_hover = ( - button_secondary_text_color_hover - or getattr( - self, - "button_secondary_text_color_hover", - "*button_secondary_text_color", - ) - ) - self.button_secondary_text_color_hover_dark = ( - button_secondary_text_color_hover_dark - or getattr( - self, - "button_secondary_text_color_hover_dark", - "*button_secondary_text_color", - ) - ) - self.button_shadow = button_shadow or getattr(self, "button_shadow", "none") - self.button_shadow_active = button_shadow_active or getattr( - self, "button_shadow_active", "none" - ) - self.button_shadow_hover = button_shadow_hover or getattr( - self, "button_shadow_hover", "none" - ) - self.button_small_padding = button_small_padding or getattr( - self, "button_small_padding", "*spacing_sm calc(2 * *spacing_sm)" - ) - self.button_small_radius = button_small_radius or getattr( - self, "button_small_radius", "*radius_lg" - ) - self.button_small_text_size = button_small_text_size or getattr( - self, "button_small_text_size", "*text_md" - ) - self.button_small_text_weight = button_small_text_weight or getattr( - self, "button_small_text_weight", "400" - ) - self.button_transition = button_transition or getattr( - self, "button_transition", "background-color 0.2s ease" - ) - return self diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_sync/interfaces.py 
b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_sync/interfaces.py deleted file mode 100644 index 5e95be1ec72425178245c32c33874303e0906405..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_sync/interfaces.py +++ /dev/null @@ -1,135 +0,0 @@ -from contextlib import contextmanager -from typing import Iterator, Optional, Union - -from .._models import ( - URL, - Extensions, - HeaderTypes, - Origin, - Request, - Response, - enforce_bytes, - enforce_headers, - enforce_url, - include_request_headers, -) - - -class RequestInterface: - def request( - self, - method: Union[bytes, str], - url: Union[URL, bytes, str], - *, - headers: HeaderTypes = None, - content: Union[bytes, Iterator[bytes], None] = None, - extensions: Optional[Extensions] = None, - ) -> Response: - # Strict type checking on our parameters. - method = enforce_bytes(method, name="method") - url = enforce_url(url, name="url") - headers = enforce_headers(headers, name="headers") - - # Include Host header, and optionally Content-Length or Transfer-Encoding. - headers = include_request_headers(headers, url=url, content=content) - - request = Request( - method=method, - url=url, - headers=headers, - content=content, - extensions=extensions, - ) - response = self.handle_request(request) - try: - response.read() - finally: - response.close() - return response - - @contextmanager - def stream( - self, - method: Union[bytes, str], - url: Union[URL, bytes, str], - *, - headers: HeaderTypes = None, - content: Union[bytes, Iterator[bytes], None] = None, - extensions: Optional[Extensions] = None, - ) -> Iterator[Response]: - # Strict type checking on our parameters. - method = enforce_bytes(method, name="method") - url = enforce_url(url, name="url") - headers = enforce_headers(headers, name="headers") - - # Include Host header, and optionally Content-Length or Transfer-Encoding. - headers = include_request_headers(headers, url=url, content=content) - - request = Request( - method=method, - url=url, - headers=headers, - content=content, - extensions=extensions, - ) - response = self.handle_request(request) - try: - yield response - finally: - response.close() - - def handle_request(self, request: Request) -> Response: - raise NotImplementedError() # pragma: nocover - - -class ConnectionInterface(RequestInterface): - def close(self) -> None: - raise NotImplementedError() # pragma: nocover - - def info(self) -> str: - raise NotImplementedError() # pragma: nocover - - def can_handle_request(self, origin: Origin) -> bool: - raise NotImplementedError() # pragma: nocover - - def is_available(self) -> bool: - """ - Return `True` if the connection is currently able to accept an - outgoing request. - - An HTTP/1.1 connection will only be available if it is currently idle. - - An HTTP/2 connection will be available so long as the stream ID space is - not yet exhausted, and the connection is not in an error state. - - While the connection is being established we may not yet know if it is going - to result in an HTTP/1.1 or HTTP/2 connection. The connection should be - treated as being available, but might ultimately raise `NewConnectionRequired` - required exceptions if multiple requests are attempted over a connection - that ends up being established as HTTP/1.1. - """ - raise NotImplementedError() # pragma: nocover - - def has_expired(self) -> bool: - """ - Return `True` if the connection is in a state where it should be closed. 
- - This either means that the connection is idle and it has passed the - expiry time on its keep-alive, or that server has sent an EOF. - """ - raise NotImplementedError() # pragma: nocover - - def is_idle(self) -> bool: - """ - Return `True` if the connection is currently idle. - """ - raise NotImplementedError() # pragma: nocover - - def is_closed(self) -> bool: - """ - Return `True` if the connection has been closed. - - Used when a response is closed to determine if the connection may be - returned to the connection pool or not. - """ - raise NotImplementedError() # pragma: nocover diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_sync/socks_proxy.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_sync/socks_proxy.py deleted file mode 100644 index 407351d06b21954cad45dca7d2065bf1d24d88fd..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_sync/socks_proxy.py +++ /dev/null @@ -1,340 +0,0 @@ -import logging -import ssl -import typing - -from socksio import socks5 - -from .._backends.sync import SyncBackend -from .._backends.base import NetworkBackend, NetworkStream -from .._exceptions import ConnectionNotAvailable, ProxyError -from .._models import URL, Origin, Request, Response, enforce_bytes, enforce_url -from .._ssl import default_ssl_context -from .._synchronization import Lock -from .._trace import Trace -from .connection_pool import ConnectionPool -from .http11 import HTTP11Connection -from .interfaces import ConnectionInterface - -logger = logging.getLogger("httpcore.socks") - - -AUTH_METHODS = { - b"\x00": "NO AUTHENTICATION REQUIRED", - b"\x01": "GSSAPI", - b"\x02": "USERNAME/PASSWORD", - b"\xff": "NO ACCEPTABLE METHODS", -} - -REPLY_CODES = { - b"\x00": "Succeeded", - b"\x01": "General SOCKS server failure", - b"\x02": "Connection not allowed by ruleset", - b"\x03": "Network unreachable", - b"\x04": "Host unreachable", - b"\x05": "Connection refused", - b"\x06": "TTL expired", - b"\x07": "Command not supported", - b"\x08": "Address type not supported", -} - - -def _init_socks5_connection( - stream: NetworkStream, - *, - host: bytes, - port: int, - auth: typing.Optional[typing.Tuple[bytes, bytes]] = None, -) -> None: - conn = socks5.SOCKS5Connection() - - # Auth method request - auth_method = ( - socks5.SOCKS5AuthMethod.NO_AUTH_REQUIRED - if auth is None - else socks5.SOCKS5AuthMethod.USERNAME_PASSWORD - ) - conn.send(socks5.SOCKS5AuthMethodsRequest([auth_method])) - outgoing_bytes = conn.data_to_send() - stream.write(outgoing_bytes) - - # Auth method response - incoming_bytes = stream.read(max_bytes=4096) - response = conn.receive_data(incoming_bytes) - assert isinstance(response, socks5.SOCKS5AuthReply) - if response.method != auth_method: - requested = AUTH_METHODS.get(auth_method, "UNKNOWN") - responded = AUTH_METHODS.get(response.method, "UNKNOWN") - raise ProxyError( - f"Requested {requested} from proxy server, but got {responded}." 
- ) - - if response.method == socks5.SOCKS5AuthMethod.USERNAME_PASSWORD: - # Username/password request - assert auth is not None - username, password = auth - conn.send(socks5.SOCKS5UsernamePasswordRequest(username, password)) - outgoing_bytes = conn.data_to_send() - stream.write(outgoing_bytes) - - # Username/password response - incoming_bytes = stream.read(max_bytes=4096) - response = conn.receive_data(incoming_bytes) - assert isinstance(response, socks5.SOCKS5UsernamePasswordReply) - if not response.success: - raise ProxyError("Invalid username/password") - - # Connect request - conn.send( - socks5.SOCKS5CommandRequest.from_address( - socks5.SOCKS5Command.CONNECT, (host, port) - ) - ) - outgoing_bytes = conn.data_to_send() - stream.write(outgoing_bytes) - - # Connect response - incoming_bytes = stream.read(max_bytes=4096) - response = conn.receive_data(incoming_bytes) - assert isinstance(response, socks5.SOCKS5Reply) - if response.reply_code != socks5.SOCKS5ReplyCode.SUCCEEDED: - reply_code = REPLY_CODES.get(response.reply_code, "UNKOWN") - raise ProxyError(f"Proxy Server could not connect: {reply_code}.") - - -class SOCKSProxy(ConnectionPool): - """ - A connection pool that sends requests via an HTTP proxy. - """ - - def __init__( - self, - proxy_url: typing.Union[URL, bytes, str], - proxy_auth: typing.Optional[ - typing.Tuple[typing.Union[bytes, str], typing.Union[bytes, str]] - ] = None, - ssl_context: typing.Optional[ssl.SSLContext] = None, - max_connections: typing.Optional[int] = 10, - max_keepalive_connections: typing.Optional[int] = None, - keepalive_expiry: typing.Optional[float] = None, - http1: bool = True, - http2: bool = False, - retries: int = 0, - network_backend: typing.Optional[NetworkBackend] = None, - ) -> None: - """ - A connection pool for making HTTP requests. - - Parameters: - proxy_url: The URL to use when connecting to the proxy server. - For example `"http://127.0.0.1:8080/"`. - ssl_context: An SSL context to use for verifying connections. - If not specified, the default `httpcore.default_ssl_context()` - will be used. - max_connections: The maximum number of concurrent HTTP connections that - the pool should allow. Any attempt to send a request on a pool that - would exceed this amount will block until a connection is available. - max_keepalive_connections: The maximum number of idle HTTP connections - that will be maintained in the pool. - keepalive_expiry: The duration in seconds that an idle HTTP connection - may be maintained for before being expired from the pool. - http1: A boolean indicating if HTTP/1.1 requests should be supported - by the connection pool. Defaults to True. - http2: A boolean indicating if HTTP/2 requests should be supported by - the connection pool. Defaults to False. - retries: The maximum number of retries when trying to establish - a connection. - local_address: Local address to connect from. Can also be used to - connect using a particular address family. Using - `local_address="0.0.0.0"` will connect using an `AF_INET` address - (IPv4), while using `local_address="::"` will connect using an - `AF_INET6` address (IPv6). - uds: Path to a Unix Domain Socket to use instead of TCP sockets. - network_backend: A backend instance to use for handling network I/O. 
- """ - super().__init__( - ssl_context=ssl_context, - max_connections=max_connections, - max_keepalive_connections=max_keepalive_connections, - keepalive_expiry=keepalive_expiry, - http1=http1, - http2=http2, - network_backend=network_backend, - retries=retries, - ) - self._ssl_context = ssl_context - self._proxy_url = enforce_url(proxy_url, name="proxy_url") - if proxy_auth is not None: - username, password = proxy_auth - username_bytes = enforce_bytes(username, name="proxy_auth") - password_bytes = enforce_bytes(password, name="proxy_auth") - self._proxy_auth: typing.Optional[typing.Tuple[bytes, bytes]] = ( - username_bytes, - password_bytes, - ) - else: - self._proxy_auth = None - - def create_connection(self, origin: Origin) -> ConnectionInterface: - return Socks5Connection( - proxy_origin=self._proxy_url.origin, - remote_origin=origin, - proxy_auth=self._proxy_auth, - ssl_context=self._ssl_context, - keepalive_expiry=self._keepalive_expiry, - http1=self._http1, - http2=self._http2, - network_backend=self._network_backend, - ) - - -class Socks5Connection(ConnectionInterface): - def __init__( - self, - proxy_origin: Origin, - remote_origin: Origin, - proxy_auth: typing.Optional[typing.Tuple[bytes, bytes]] = None, - ssl_context: typing.Optional[ssl.SSLContext] = None, - keepalive_expiry: typing.Optional[float] = None, - http1: bool = True, - http2: bool = False, - network_backend: typing.Optional[NetworkBackend] = None, - ) -> None: - self._proxy_origin = proxy_origin - self._remote_origin = remote_origin - self._proxy_auth = proxy_auth - self._ssl_context = ssl_context - self._keepalive_expiry = keepalive_expiry - self._http1 = http1 - self._http2 = http2 - - self._network_backend: NetworkBackend = ( - SyncBackend() if network_backend is None else network_backend - ) - self._connect_lock = Lock() - self._connection: typing.Optional[ConnectionInterface] = None - self._connect_failed = False - - def handle_request(self, request: Request) -> Response: - timeouts = request.extensions.get("timeout", {}) - timeout = timeouts.get("connect", None) - - with self._connect_lock: - if self._connection is None: - try: - # Connect to the proxy - kwargs = { - "host": self._proxy_origin.host.decode("ascii"), - "port": self._proxy_origin.port, - "timeout": timeout, - } - with Trace("connect_tcp", logger, request, kwargs) as trace: - stream = self._network_backend.connect_tcp(**kwargs) - trace.return_value = stream - - # Connect to the remote host using socks5 - kwargs = { - "stream": stream, - "host": self._remote_origin.host.decode("ascii"), - "port": self._remote_origin.port, - "auth": self._proxy_auth, - } - with Trace( - "setup_socks5_connection", logger, request, kwargs - ) as trace: - _init_socks5_connection(**kwargs) - trace.return_value = stream - - # Upgrade the stream to SSL - if self._remote_origin.scheme == b"https": - ssl_context = ( - default_ssl_context() - if self._ssl_context is None - else self._ssl_context - ) - alpn_protocols = ( - ["http/1.1", "h2"] if self._http2 else ["http/1.1"] - ) - ssl_context.set_alpn_protocols(alpn_protocols) - - kwargs = { - "ssl_context": ssl_context, - "server_hostname": self._remote_origin.host.decode("ascii"), - "timeout": timeout, - } - with Trace("start_tls", logger, request, kwargs) as trace: - stream = stream.start_tls(**kwargs) - trace.return_value = stream - - # Determine if we should be using HTTP/1.1 or HTTP/2 - ssl_object = stream.get_extra_info("ssl_object") - http2_negotiated = ( - ssl_object is not None - and 
ssl_object.selected_alpn_protocol() == "h2" - ) - - # Create the HTTP/1.1 or HTTP/2 connection - if http2_negotiated or ( - self._http2 and not self._http1 - ): # pragma: nocover - from .http2 import HTTP2Connection - - self._connection = HTTP2Connection( - origin=self._remote_origin, - stream=stream, - keepalive_expiry=self._keepalive_expiry, - ) - else: - self._connection = HTTP11Connection( - origin=self._remote_origin, - stream=stream, - keepalive_expiry=self._keepalive_expiry, - ) - except Exception as exc: - self._connect_failed = True - raise exc - elif not self._connection.is_available(): # pragma: nocover - raise ConnectionNotAvailable() - - return self._connection.handle_request(request) - - def can_handle_request(self, origin: Origin) -> bool: - return origin == self._remote_origin - - def close(self) -> None: - if self._connection is not None: - self._connection.close() - - def is_available(self) -> bool: - if self._connection is None: # pragma: nocover - # If HTTP/2 support is enabled, and the resulting connection could - # end up as HTTP/2 then we should indicate the connection as being - # available to service multiple requests. - return ( - self._http2 - and (self._remote_origin.scheme == b"https" or not self._http1) - and not self._connect_failed - ) - return self._connection.is_available() - - def has_expired(self) -> bool: - if self._connection is None: # pragma: nocover - return self._connect_failed - return self._connection.has_expired() - - def is_idle(self) -> bool: - if self._connection is None: # pragma: nocover - return self._connect_failed - return self._connection.is_idle() - - def is_closed(self) -> bool: - if self._connection is None: # pragma: nocover - return self._connect_failed - return self._connection.is_closed() - - def info(self) -> str: - if self._connection is None: # pragma: nocover - return "CONNECTION FAILED" if self._connect_failed else "CONNECTING" - return self._connection.info() - - def __repr__(self) -> str: - return f"<{self.__class__.__name__} [{self.info()}]>" diff --git a/spaces/deaaassws/QQsign1/devices/device_8950.js b/spaces/deaaassws/QQsign1/devices/device_8950.js deleted file mode 100644 index fe1caad4a8c5eb07633510e1d8a890197056a211..0000000000000000000000000000000000000000 --- a/spaces/deaaassws/QQsign1/devices/device_8950.js +++ /dev/null @@ -1,344 +0,0 @@ -"use strict"; -var __importDefault = (this && this.__importDefault) || function (mod) { - return (mod && mod.__esModule) ? mod : { "default": mod }; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -exports.getApkInfo = exports.Platform = exports.Device = exports.generateFullDevice = exports.generateShortDevice = void 0; -const crypto_1 = require("crypto"); -const constants_1 = require("./constants"); -const axios_1 = __importDefault(require("axios")); -const algo_1 = require("./algo"); -function generateImei() { - let imei = `86${(0, constants_1.randomString)(12, '0123456789')}`; - function calcSP(imei) { - let sum = 0; - for (let i = 0; i < imei.length; ++i) { - if (i % 2) { - let j = parseInt(imei[i]) * 2; - sum += j % 10 + Math.floor(j / 10); - } - else { - sum += parseInt(imei[i]); - } - } - return (100 - sum) % 10; - } - return imei + calcSP(imei); -} -/** 生成短设备信息 */ -function generateShortDevice() { - const randstr = (length, num = false) => { - const map = num ? 
'0123456789' : '0123456789abcdef'; - return (0, constants_1.randomString)(length, map); - }; - return { - "--begin--": "该设备为随机生成,丢失后不能得到原先配置", - product: `ILPP-${randstr(5).toUpperCase()}`, - device: `${randstr(5).toUpperCase()}`, - board: `${randstr(5).toUpperCase()}`, - brand: `${randstr(4).toUpperCase()}`, - model: `ICQQ ${randstr(4).toUpperCase()}`, - wifi_ssid: `HUAWEI-${randstr(7)}`, - bootloader: `U-boot`, - android_id: `IL.${randstr(7, true)}.${randstr(4, true)}`, - boot_id: `${randstr(8)}-${randstr(4)}-${randstr(4)}-${randstr(4)}-${randstr(12)}`, - proc_version: `Linux version 5.10.101-android12-${randstr(8)}`, - mac_address: `2D:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}`, - ip_address: `192.168.${randstr(2, true)}.${randstr(2, true)}`, - imei: `${generateImei()}`, - incremental: `${randstr(10, true).toUpperCase()}`, - "--end--": "修改后可能需要重新验证设备。" - }; -} -exports.generateShortDevice = generateShortDevice; -/** 生成完整设备信息 */ -function generateFullDevice(apk, d) { - if (!d) - d = generateShortDevice(); - return { - display: d.android_id, - product: d.product, - device: d.device, - board: d.board, - brand: d.brand, - model: d.model, - bootloader: d.bootloader, - fingerprint: `${d.brand}/${d.product}/${d.device}:10/${d.android_id}/${d.incremental}:user/release-keys`, - boot_id: d.boot_id, - proc_version: d.proc_version, - baseband: "", - sim: "T-Mobile", - os_type: "android", - mac_address: d.mac_address, - ip_address: d.ip_address, - wifi_bssid: d.mac_address, - wifi_ssid: d.wifi_ssid, - imei: d.imei, - android_id: (0, constants_1.md5)(d.android_id).toString("hex"), - apn: "wifi", - version: { - incremental: d.incremental, - release: "10", - codename: "REL", - sdk: 29, - }, - imsi: (0, crypto_1.randomBytes)(16), - guid: (0, constants_1.md5)(Buffer.concat([Buffer.from(d.imei), Buffer.from(d.mac_address)])), - }; -} -exports.generateFullDevice = generateFullDevice; -class Device { - constructor(apk, d) { - this.apk = apk; - this.secret = 'ZdJqM15EeO2zWc08'; - this.publicKey = `-----BEGIN PUBLIC KEY----- -MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDEIxgwoutfwoJxcGQeedgP7FG9 -qaIuS0qzfR8gWkrkTZKM2iWHn2ajQpBRZjMSoSf6+KJGvar2ORhBfpDXyVtZCKpq -LQ+FLkpncClKVIrBwv6PHyUvuCb0rIarmgDnzkfQAqVufEtR64iazGDKatvJ9y6B -9NMbHddGSAUmRTCrHQIDAQAB ------END PUBLIC KEY-----`; - if (!d) - d = generateShortDevice(); - Object.assign(this, generateFullDevice(apk, d)); - } - async getQIMEI() { - if (this.apk.app_key === "") { - return; - } - const k = (0, constants_1.randomString)(16); - const key = (0, algo_1.encryptPKCS1)(this.publicKey, k); - const time = Date.now(); - const nonce = (0, constants_1.randomString)(16); - const payload = this.genRandomPayloadByDevice(); - const params = (0, algo_1.aesEncrypt)(JSON.stringify(payload), k).toString('base64'); - try { - const { data } = await axios_1.default.post("https://snowflake.qq.com/ola/android", { - key, - params, - time, nonce, - sign: (0, constants_1.md5)(key + params + time + nonce + this.secret).toString("hex"), - extra: '' - }, { - headers: { - 'User-Agent': `Dalvik/2.1.0 (Linux; U; Android ${this.version.release}; PCRT00 Build/N2G48H)`, - 'Content-Type': "application/json" - } - }); - if (data?.code !== 0) { - return; - } - const { q16, q36 } = JSON.parse((0, algo_1.aesDecrypt)(data.data, k)); - this.qImei16 = q16; - this.qImei36 = q36; - } - catch { - } - } - genRandomPayloadByDevice() { - const fixedRand = (max = 1, min = 0) => { - if (max < min) - [max, min] = 
[min, max]; - const diff = max - min; - return Math.floor(Math.random() * diff) + min; - }; - const reserved = { - "harmony": "0", - "clone": Math.random() > 0.5 ? "1" : "0", - "containe": "", - "oz": "", - "oo": "", - "kelong": Math.random() > 0.5 ? "1" : "0", - "uptimes": (0, constants_1.formatTime)(new Date()), - "multiUser": Math.random() > 0.5 ? "1" : "0", - "bod": this.board, - "brd": this.brand, - "dv": this.device, - "firstLevel": "", - "manufact": this.brand, - "name": this.model, - "host": "se.infra", - "kernel": this.fingerprint - }; - const timestamp = Date.now(); - this.mtime = this.mtime || Date.now(); - const mtime1 = new Date(this.mtime || Date.now()); - const dateFormat = (fmt, time = Date.now()) => (0, constants_1.formatTime)(time, fmt); - const mtimeStr1 = dateFormat("YYYY-mm-ddHHMMSS", mtime1) + "." + this.imei.slice(2, 11); - const mtime2 = new Date(this.mtime - parseInt(this.imei.slice(2, 4))); - const mtimeStr2 = dateFormat("YYYY-mm-ddHHMMSS", mtime2) + "." + this.imei.slice(5, 14); - let beaconIdArr = [ - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - mtimeStr1, - '0000000000000000', - (0, constants_1.md5)(this.android_id + this.imei).toString("hex").slice(0, 16), - ...new Array(4).fill(false).map((_) => fixedRand(10000000, 1000000)), - this.boot_id, - '1', - fixedRand(5, 0), - fixedRand(5, 0), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(50000, 10000), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - mtimeStr2, - fixedRand(10000, 1000), - fixedRand(5, 0), - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((10 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - fixedRand(10000, 1000), - fixedRand(100, 10), - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - fixedRand(10000, 1000), - fixedRand(5, 0), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(5, 0), - ].map((str, idx) => `k${idx + 1}:${str}`); - return { - "androidId": this.android_id, - "platformId": 1, - "appKey": this.apk.app_key, - "appVersion": this.apk.version, - 
"beaconIdSrc": beaconIdArr.join(';'), - "brand": this.brand, - "channelId": "2017", - "cid": "", - "imei": this.imei, - "imsi": this.imsi.toString("hex"), - "mac": this.mac_address, - "model": this.model, - "networkType": "unknown", - "oaid": "", - "osVersion": `Android ${this.version.release},level ${this.version.sdk}`, - "qimei": "", - "qimei36": "", - "sdkVersion": "1.2.13.6", - "targetSdkVersion": "26", - "audit": "", - "userId": "{}", - "packageId": this.apk.id, - "deviceType": this.display, - "sdkName": "", - "reserved": JSON.stringify(reserved), - }; - } -} -exports.Device = Device; -/** 支持的登录设备平台 */ -var Platform; -(function (Platform) { - Platform[Platform["Android"] = 1] = "Android"; - Platform[Platform["aPad"] = 2] = "aPad"; - Platform[Platform["Watch"] = 3] = "Watch"; - Platform[Platform["iMac"] = 4] = "iMac"; - Platform[Platform["iPad"] = 5] = "iPad"; - Platform[Platform["Tim"] = 6] = "Tim"; -})(Platform || (exports.Platform = Platform = {})); -const mobile = { - id: "com.tencent.mobileqq", - app_key: '0S200MNJT807V3GE', - name: "A8.9.50.f5a7d351", - version: "8.9.50.10650", - ver: "8.9.50", - sign: Buffer.from('A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1676531414, - appid: 16, - subid: 537155547, - bitmap: 150470524, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2535", - display: "Android", - qua: 'V1_AND_SQ_8.9.50_3898_YYB_D', - ssover: 19, -}; -const tim = { - id: "com.tencent.tim", - app_key: '0S200MNJT807V3GE', - name: "A3.5.1.3168", - version: "3.5.1.3168", - ver: "3.5.1", - sign: Buffer.from('775e696d09856872fdd8ab4f3f06b1e0', 'hex'), - buildtime: 1630062176, - appid: 16, - subid: 537150355, - bitmap: 150470524, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2484", - display: "Tim", - qua: "V1_AND_SQ_8.3.9_351_TIM_D", - ssover: 18, -}; -const watch = { - id: "com.tencent.qqlite", - app_key: '0S200MNJT807V3GE', - name: "A2.0.8", - version: "2.0.8", - ver: "2.0.8", - sign: Buffer.from('A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1559564731, - appid: 16, - subid: 537065138, - bitmap: 16252796, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2365", - display: "Watch", - qua: '', - ssover: 5 -}; -const hd = { - id: "com.tencent.minihd.qq", - app_key: '0S200MNJT807V3GE', - name: "A5.9.3.3468", - version: "5.9.3.3468", - ver: "5.9.3", - sign: Buffer.from('AA 39 78 F4 1F D9 6F F9 91 4A 66 9E 18 64 74 C7'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1637427966, - appid: 16, - subid: 537128930, - bitmap: 150470524, - main_sig_map: 1970400, - sub_sig_map: 66560, - sdkver: "6.0.0.2433", - display: "iMac", - qua: '', - ssover: 12 -}; -const apklist = { - [Platform.Android]: mobile, - [Platform.Tim]: tim, - [Platform.aPad]: { - ...mobile, - subid: 537155599, - display: 'aPad' - }, - [Platform.Watch]: watch, - [Platform.iMac]: { ...hd }, - [Platform.iPad]: { - ...mobile, - subid: 537155074, - sign: hd.sign, - name: 'A8.9.50.611', - version: 'A8.9.50.611', - sdkver: '6.0.0.2535', - qua: 'V1_AND_SQ_8.9.50_3898_YYB_D', - display: 'iPad' - }, -}; -function getApkInfo(p) { - return apklist[p] || apklist[Platform.Android]; -} -exports.getApkInfo = getApkInfo; diff --git a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/arcface_torch/configs/ms1mv3_r50.py b/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/arcface_torch/configs/ms1mv3_r50.py deleted file mode 100644 index 
08ba55dbbea6df0afffddbb3d1ed173efad99604..0000000000000000000000000000000000000000 --- a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/arcface_torch/configs/ms1mv3_r50.py +++ /dev/null @@ -1,26 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.loss = "arcface" -config.network = "r50" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.1 # batch size is 512 - -config.rec = "/train_tmp/ms1m-retinaface-t1" -config.num_classes = 93431 -config.num_image = 5179510 -config.num_epoch = 25 -config.warmup_epoch = -1 -config.decay_epoch = [10, 16, 22] -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/dhof/shapetest/app.py b/spaces/dhof/shapetest/app.py deleted file mode 100644 index d492ebaa7d5cff41bf04c77b4d7a10e1f9c1532d..0000000000000000000000000000000000000000 --- a/spaces/dhof/shapetest/app.py +++ /dev/null @@ -1,28 +0,0 @@ -#!/usr/bin/env python - -import os - -import gradio as gr -import torch - -from app_image_to_3d import create_demo as create_demo_image_to_3d -from app_text_to_3d import create_demo as create_demo_text_to_3d -from model import Model - -DESCRIPTION = '# [Shap-E](https://github.com/openai/shap-e)' - -if (SPACE_ID := os.getenv('SPACE_ID')) is not None: - DESCRIPTION += f'\n

    For faster inference without waiting in queue, you may duplicate the space and upgrade to GPU in settings. Duplicate Space

    ' -if not torch.cuda.is_available(): - DESCRIPTION += '\n

    Running on CPU 🥶 This demo does not work on CPU.

    ' - -model = Model() - -with gr.Blocks(css='style.css') as demo: - gr.Markdown(DESCRIPTION) - with gr.Tabs(): - with gr.Tab(label='Text to 3D'): - create_demo_text_to_3d(model) - with gr.Tab(label='Image to 3D'): - create_demo_image_to_3d(model) -demo.queue(api_open=False, max_size=10).launch() diff --git a/spaces/diacanFperku/AutoGPT/Bupena Kelas 5 Sd Pdf 71.md b/spaces/diacanFperku/AutoGPT/Bupena Kelas 5 Sd Pdf 71.md deleted file mode 100644 index 16338f00a8c3229019f411c5fc6aa2a76ee49fa9..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Bupena Kelas 5 Sd Pdf 71.md +++ /dev/null @@ -1,181 +0,0 @@ -
    -

Bupena Kelas 5 SD PDF 71: What Is It and How Do You Get It?

    - -

Bupena Kelas 5 SD PDF 71 is an integrated thematic assessment book for grade 5 elementary school (SD/MI) students that follows the 2013 curriculum, 2018 revision. The book contains material and practice questions matched to the themes and subthemes studied at school, such as the relationships between living things in an ecosystem, changes in the state of matter, and so on.

    -

    bupena kelas 5 sd pdf 71


    Download File --->>> https://gohhs.com/2uFU4q



    - -

This book is very useful for helping students measure their understanding and ability across the basic competencies they are expected to master. It also helps students prepare for the thematic exams held at the end of each semester.

    - -

How Do You Get Bupena Kelas 5 SD PDF 71?

    - -

There are several ways to get Bupena Kelas 5 SD PDF 71:

    - -
      -
• Buy the printed book at a nearby bookstore or online. The book is published by Erlangga and is affordably priced. It comes in four volumes: 5A, 5B, 5C, and 5D.
• -
• Download the e-book from the official Erlangga website or from other sites that offer free book downloads. It can be read on a computer, laptop, tablet, or smartphone using a PDF reader application.
• -
• Borrow the book from a school or public library. It is usually kept in the textbook or reference collection and can be borrowed by students or the general public under the applicable terms and conditions.
    • -
    - -

With Bupena Kelas 5 SD PDF 71, students can learn thematically more easily and enjoyably. The book can also increase students' motivation and interest in learning and help them achieve better academic results.

    -

What Is in Bupena Kelas 5 SD PDF 71?

    - -

Bupena Kelas 5 SD PDF 71 has varied and engaging content that matches the themes and subthemes studied in grade 5 SD/MI. Here are a few examples of what the book covers:

    - -
      -
• Theme 5: Ecosystems. Subtheme 1: Ecosystems Around Me. Students learn what an ecosystem is, its components, the types of ecosystems, and the relationships between ecosystem components.
• -
• Theme 5: Ecosystems. Subtheme 2: Relationships Between Living Things in an Ecosystem. Students learn what symbiosis is, its types, examples of symbiosis, and its benefits for living things.
• -
• Theme 6: Changes in the State of Matter. Subtheme 1: Solids Changing into Liquids and Gases. Students learn what a change of state is, how it happens, the factors that influence it, and examples of such changes around us.
• -
• Theme 6: Changes in the State of Matter. Subtheme 2: Gases Changing into Liquids and Solids. Students learn what a change of state is, how it happens, the factors that influence it, and examples of such changes around us.
    • -
    - -

In addition to the material, the book includes practice questions in the form of multiple choice, short answer, essay, and assignments. These exercises are intended to measure students' understanding and their ability to apply the concepts they have learned.

    - -

What Are the Advantages of Bupena Kelas 5 SD PDF 71?

    - -

Bupena Kelas 5 SD PDF 71 has many strengths that benefit students, teachers, and parents. Here are some of the book's advantages:

    -

    - -
      -
• The book follows the 2013 curriculum (2018 revision), which is oriented around basic competencies and competency achievement indicators.
• -
• The book uses an integrated thematic approach that combines several subjects within a single theme.
• -
• The book uses language that is easy for students to understand and is presented with attractive, relevant illustrations.
• -
• The book provides a range of learning resources that can be accessed through QR codes or online links.
• -
• The book gives students opportunities to create, explore, and collaborate during thematic learning.
    • -
    - -

In short, Bupena Kelas 5 SD PDF 71 is a highly recommended integrated thematic assessment book for grade 5 SD/MI students. It can help students learn thematically more easily and enjoyably and improve their academic achievement.

    -

What Are the Benefits of Bupena Kelas 5 SD PDF 71?

    - -

Bupena Kelas 5 SD PDF 71 offers many benefits for students, teachers, and parents. Here are some of them:

    - -
      -
• The book can raise the quality of thematic learning in grade 5 SD/MI by providing material and practice questions that match the 2013 curriculum (2018 revision).
• -
• The book can help students develop the basic competencies and achievement indicators expected for each theme and subtheme.
• -
• The book can help students sharpen critical, creative, collaborative, and communicative thinking skills in thematic learning.
• -
• The book can help students recognize and appreciate the diversity of living things and the environment around them.
• -
• The book can help students build a positive attitude and national character through thematic learning.
    • -
    - -

The book also helps teachers plan, deliver, and evaluate thematic lessons in grade 5 SD/MI, and it helps parents accompany and support their children's thematic learning at home.

    - -

How Do You Study with Bupena Kelas 5 SD PDF 71?

    - -

Bupena Kelas 5 SD PDF 71 is an integrated thematic assessment book that grade 5 SD/MI students can use as a learning resource. Here are some ways to study with it:

    - -
      -
• Before studying each theme and subtheme, students should read the learning objectives and competency achievement indicators listed at the beginning of each chapter.
• -
• While studying each theme and subtheme, students should pay close attention to the material, which is presented in easy-to-understand language with attractive, relevant illustrations.
• -
• After studying each theme and subtheme, students should work through the practice questions (multiple choice, short answer, essay, and assignments) honestly and carefully.
• -
• Students should check their answers using the answer key at the end of each chapter or with help from a teacher or parent.
• -
• Students should record their scores from each exercise and evaluate their strengths and weaknesses in thematic learning.
    • -
    - -

By studying with Bupena Kelas 5 SD PDF 71, students can learn thematically more easily and enjoyably and improve their academic achievement.

    -

Which Themes and Subthemes Are Covered in Bupena Kelas 5 SD PDF 71?

    - -

Bupena Kelas 5 SD PDF 71 is an integrated thematic assessment book based on the 2013 curriculum, 2018 revision. It consists of four volumes: 5A, 5B, 5C, and 5D. Each volume covers two themes and four subthemes. The themes and subthemes in the book are listed below:

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

By studying the themes and subthemes in Bupena Kelas 5 SD PDF 71, students can broaden their knowledge and understanding of many aspects of life.

    - -

What Other Books Are Similar to Bupena Kelas 5 SD PDF 71?

    - -

Bupena Kelas 5 SD PDF 71 is one of several integrated thematic assessment books available to grade 5 SD/MI students. Besides this book, there are other similar titles that are also useful. Here are a few recommended books similar to Bupena Kelas 5 SD PDF 71:

    - -
    JilidTemaSubtema
    5ATema 1: Organ Gerak Hewan dan ManusiaSubtema 1: Organ Gerak pada Hewan
    Subtema 2: Organ Gerak pada Manusia
    Subtema 3: Perawatan Organ Gerak pada Hewan
    Subtema 4: Perawatan Organ Gerak pada Manusia
    5ATema 2: Selamatkan Makhluk HidupSubtema 1: Keanekaragaman Makhluk Hidup
    Subtema 2: Perlindungan Makhluk Hidup
    Subtema 3: Konservasi Makhluk Hidup
    Subtema 4: Pelestarian Makhluk Hidup
    5BTema 3: Perkembangbiakan Hewan dan TumbuhanSubtema 1: Perkembangbiakan Hewan
    Subtema 2: Perkembangbiakan Tumbuhan
    Subtema 3: Adaptasi Hewan dan Tumbuhan
    Subtema 4: Keseimbangan Ekosistem
    5BTema 4: PahlawankuSubtema 1: Tokoh Pahlawan Nasional
    Subtema 2: Tokoh Pahlawan Daerah
    Subtema 3: Tokoh Pahlawan Lingkungan
    Subtema 4: Tokoh Pahlawan Sekolahku
    5CTema 5: EkosistemSubtema 1: Ekosistem di Sekitarku
    Subtema 2: Hubungan Antarmakhluk Hidup dalam Ekosistem
    Subtema 3: Dampak Perubahan Ekosistem
    Subtema 4: Upaya Pelestarian Ekosistem
    5CTema 6: Perubahan Wujud BendaSubtema 1: Perubahan Wujud Benda Padat Menjadi Cair dan Gas
    Subtema 2: Perubahan Wujud Benda Gas Menjadi Cair dan Padat
    Subtema 3: Manfaat Perubahan Wujud Benda bagi Kehidupan
    Subtema 4: Pengaruh Suhu terhadap Perubahan Wujud Benda
    5DTema 7: Energi dan PerubahannyaSubtema 1: Sumber Energi Alamiah dan Buatan
    Subtema 2: Penggunaan Energi Listrik di Rumah Tangga
    Subtema 3: Penghematan Energi Listrik di Rumah Tangga
    Subtema 4: Energi Alternatif Ramah Lingkungan
    5DTema 8: Bangga sebagai Bangsa IndonesiaSubtema 1: Keberagaman Suku Bangsa di Indonesia
    Subtema 2: Keberagaman Budaya di Indonesia
    Subtema 3: Keberagaman Agama di Indonesia
    Subtema 4: Persatuan dan Kesatuan Bangsa Indonesia
    - - - -
    PDX-CS9.exe window
    -
      -
    1. Click on the button "Generate Serial" to generate a random serial number for Adobe Photoshop CS2. You should see something like this:
    2. -
    - - - - -
    Generated serial number
    -
      -
    1. Copy the generated serial number by clicking on the button "Copy Serial" or by selecting it with your mouse and pressing Ctrl+C on your keyboard.
    2. -
    3. Download Adobe Photoshop CS2 from the official website of Adobe Systems. You can find it here: https://archive.org/details/adobe-photoshop-cs2-2005. This is an archived version of the original website that offers free downloads of Adobe Photoshop CS2 for Windows XP or Windows 2000 users who have lost their original installation media.
    4. -
    5. Install Adobe Photoshop CS2 by running the downloaded file "Photoshop_CS_20.exe". Follow the instructions on the screen until you reach the screen that asks you to enter your serial number.
    6. -
    7. Paste the generated serial number by clicking on the field "Serial Number" and pressing Ctrl+V on your keyboard. You should see something like this:
    8. -
    - - - - -
    Serial number entered
    -
      -
    1. Click on the button "Next" to continue with the installation process. Follow the instructions on the screen until you finish installing Adobe Photoshop CS2.
    2. -
    3. Launch Adobe Photoshop CS2 by clicking on its icon on your desktop or in your start menu. You should see something like this:
    4. -
    - - - - -
    Adobe Photoshop CS2 launched
    -

    Conclusion

    -

In this article, we have explained what Adobe Photoshop CS2 Keygen by Paradox 2005 Download is and how to download and use it. We have also discussed the benefits and risks of using a keygen for Adobe Photoshop CS2. We hope you have found this article helpful and informative.

    -

    However, we do not recommend using a keygen for Adobe Photoshop CS2 as it is illegal, unsafe, and unreliable. Instead, we suggest you buy a legitimate copy of Adobe Photoshop CS2 from Adobe Systems or use an alternative image editing software that is free or affordable.

    -

    If you have any questions or comments about this article, feel free to leave them below.

    -

    FAQs

    -
      -
    • Q: Is there any difference between Adobe Photoshop CS2 Keygen by Paradox 2005 Download and other keygens for Adobe Photoshop CS2?
    • -
    • A: There are many different keygens for Adobe Photoshop CS2 that claim to generate valid serial numbers or activation codes for the software. However, they are all similar in terms of how they work and what they do. They are also similar in terms of their risks and disadvantages.
    • -
    • Q: Can I use Adobe Photoshop CS2 Keygen by Paradox 2005 Download on other versions of Windows or Mac OS?
    • -
    • A: No, you cannot use Adobe Photoshop CS2 Keygen by Paradox 2005 Download on other versions of Windows or Mac OS. The keygen only works on Windows XP or Windows 2000 operating systems. The software itself also only works on these operating systems.
    • -
    • Q: Can I update Adobe Photoshop CS2 after using Adobe Photoshop CS2 Keygen by Paradox 2005 Download?
• A: No, you cannot update Adobe Photoshop CS2 after using Adobe Photoshop CS2 Keygen by Paradox 2005 Download. The update process will detect that you are using an invalid serial number or activation code and it will deactivate your software. You will need to reinstall Adobe Photoshop CS2 and use a different keygen or a valid serial number or activation code to activate it again. -
    • Q: What are some alternatives to Adobe Photoshop CS2 Keygen by Paradox 2005 Download?
    • -
    • A: Some alternatives to Adobe Photoshop CS2 Keygen by Paradox 2005 Download are:
    • -
        -
      • Buying a legitimate copy of Adobe Photoshop CS2 from Adobe Systems or a trusted reseller. This is the best and safest option as you will get a valid serial number or activation code and you will be able to access all the features and updates of the software.
      • -
      • Using a free or affordable image editing software that has similar or better features than Adobe Photoshop CS2. Some examples are GIMP, Paint.NET, Pixlr, Krita, and Inkscape. These software programs are legal, safe, and reliable and they can help you create, edit, and enhance digital images.
      • -
      • Using an online image editing service that does not require downloading or installing any software. Some examples are Canva, Fotor, PicMonkey, and Photopea. These services are easy to use and they offer various tools and templates for creating, editing, and enhancing digital images.
      • -
      -
    -

    0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Elder Scrolls 4 Oblivion No Dvd Crack Unlock All Features and Mods.md b/spaces/raedeXanto/academic-chatgpt-beta/Elder Scrolls 4 Oblivion No Dvd Crack Unlock All Features and Mods.md deleted file mode 100644 index 63897bc1821832c3db2454613ea564e176107923..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Elder Scrolls 4 Oblivion No Dvd Crack Unlock All Features and Mods.md +++ /dev/null @@ -1,134 +0,0 @@ -
    -

    Elder Scrolls 4 Oblivion No Dvd Crack: How to Play Without the Disc

    -

    If you are a fan of role-playing games, you have probably heard of Elder Scrolls 4 Oblivion. This is one of the most popular and acclaimed games in the Elder Scrolls series, which lets you explore a vast and immersive fantasy world. However, if you want to play this game on your PC, you might face a problem: you need to insert the disc every time you want to launch the game. This can be annoying and inconvenient, especially if you have lost or damaged your disc. Fortunately, there is a solution: you can use a no DVD crack for Oblivion. This is a file that allows you to play the game without the disc, by bypassing the copy protection. In this article, we will explain what is a no DVD crack, why you might need one, how to find and download one, and how to install and use it.

    -

    Elder Scrolls 4 Oblivion No Dvd Crack


    DOWNLOAD ✺✺✺ https://tinourl.com/2uL0tQ



    -

    What is Elder Scrolls 4 Oblivion?

    -

    Elder Scrolls 4 Oblivion is a role-playing game developed by Bethesda Game Studios and released in 2006. It is the fourth installment in the Elder Scrolls series, which started with Arena in 1994. The game is set in the province of Cyrodiil, where you can create your own character and choose from various races, classes, skills, and attributes. You can also join different factions, complete quests, fight enemies, collect items, craft weapons and armor, and interact with other characters. The game has a nonlinear gameplay, meaning you can explore the world at your own pace and in any order you want. You can also customize the difficulty level and toggle various settings to suit your preferences.

    -

    The game has received critical acclaim for its graphics, music, voice acting, story, and gameplay. It has won several awards and sold over 9 million copies worldwide. It has also been expanded with two official add-ons: Knights of the Nine and Shivering Isles. Additionally, there are many unofficial mods created by fans that add new content or features to the game.

    -

    Why do you need a no DVD crack for Oblivion?

    -

    If you have bought Oblivion on a physical disc, you might have noticed that you need to insert the disc every time you want to play the game. This is because the game has a copy protection system that checks if you have a valid disc in your drive before launching. This system is meant to prevent piracy and unauthorized copying of the game.

    -

    However, this system also has some drawbacks for legitimate users. For example:

    -
      -
    • You might lose or damage your disc over time, making it unreadable or unusable.
    • -
    • You might not have access to your disc drive or your disc when you want to play.
    • -
    • You might experience slow loading times or performance issues due to reading from the disc.
    • -
    • You might hear loud noises from your disc drive spinning while playing.
    • -
    • You might waste battery power or electricity by keeping your disc drive running while playing.
    • -
    -

    These problems can be solved by using a no DVD crack for Oblivion. This is a file that replaces or modifies the original executable file of the game (Oblivion.exe) so that it does not require a disc to run. By using this file, you can play Oblivion without inserting the disc at all. This way, you can enjoy some benefits such as:

    -
      -
    • You can play Oblivion anytime and anywhere without worrying about your disc.
    • -
    • You can save space on your shelf or case by storing your disc away.
    • -
    • You can improve loading times or performance by reading from your hard drive instead of your disc.
    • -
    • You can reduce noise from your disc drive spinning while playing.
    • -
    • You can conserve battery power or electricity by turning off your disc drive while playing.
    • -
    -

    How to find and download a no DVD crack for Oblivion?

    -

    If you want to use a no DVD crack for Oblivion, you need to find and download one first. There are many sources and sites that offer such files online. However, not all of them are reliable or safe. Some of them might contain viruses, malware, spyware, adware, or other harmful programs that can damage your computer or steal your personal information. Some of them might also have outdated or incompatible files that can cause errors or crashes in your game.

    -

    Elder Scrolls IV Oblivion GOTY Deluxe Edition No-DVD [WaLMaRT][^2^]
    -Elder Scrolls 4 Oblivion Game of the Year Edition No-CD Patch [GameBurnWorld][^1^]
    -Elder Scrolls IV Oblivion Game Fixes No-DVD/Fixed EXE [MegaGames][^4^]
    -Elder Scrolls 4 Oblivion v1.2.0416 French No-DVD/Fixed EXE [GameBurnWorld][^1^]
    -Elder Scrolls IV Oblivion GOTY GOG No cracks needed [Archive.org][^3^]
    -Elder Scrolls 4 Oblivion v1.2 English No-DVD/FIXED EXE #2 [GameBurnWorld][^1^]
    -Elder Scrolls IV Oblivion v1.2.0146 English No-DVD/Fixed EXE [MegaGames][^4^]
    -Elder Scrolls 4 Oblivion v1.2 All No-DVD Patch [GameBurnWorld][^1^]
    -Elder Scrolls IV Oblivion v1.2 German No-DVD/Fixed EXE [GameBurnWorld][^1^]
    -Elder Scrolls 4 Oblivion v1.1.511 Russian No-DVD/Fixed EXE [GameBurnWorld][^1^]
    -Elder Scrolls IV Oblivion v1.1.511 Final English No-DVD/Fixed EXE [GameBurnWorld][^1^]
    -Elder Scrolls 4 Oblivion v1.1.511 Final German No-DVD/Fixed EXE [GameBurnWorld][^1^]
    -Elder Scrolls IV Oblivion v1.0 English No-CD/Fixed EXE [GameBurnWorld][^1^]
    -Elder Scrolls 4 Oblivion Female Nude Patch No-CD/Fixed EXE [GameBurnWorld][^1^]
    -Elder Scrolls IV Oblivion Gold Edition Russian No-CD/Fixed EXE [MegaGames][^4^]
    -Elder Scrolls 4 Oblivion Shivering Isles Expansion Pack No-DVD Crack
    -Elder Scrolls IV Oblivion Knights of the Nine DLC No-DVD Crack
    -Elder Scrolls 4 Oblivion Steam Version No-DVD Crack
    -Elder Scrolls IV Oblivion Windows XP/7/8 Compatible No-DVD Crack
    -Elder Scrolls 4 Oblivion Securom Protection Removal No-DVD Crack
    -Elder Scrolls IV Oblivion Bethesda Softworks Official No-DVD Crack
    -Elder Scrolls 4 Oblivion Free Download Full Version No-DVD Crack
    -Elder Scrolls IV Oblivion MegaGames Fix Collection No-DVD Crack
    -Elder Scrolls 4 Oblivion GameBurnWorld Patch Archive No-DVD Crack
    -Elder Scrolls IV Oblivion Archive.org Backup Copy No-DVD Crack

    -

    Therefore, you need to be careful and selective when looking for a no DVD crack for Oblivion. Here are some tips and precautions that can help you:

    -
      -
    • Use reputable and trusted sites that have positive reviews and ratings from other users.
    • -
• Use antivirus and anti-malware software to scan any file before downloading or opening it, and compare the file's checksum with the value published by the download page when one is available (a short checksum sketch follows this list).
    • -
    • Use backup software to create a restore point or copy of your original game files before replacing or modifying them.
    • -
    • Use compatibility software to check if any file matches your version and edition of Oblivion.
    • -
    • Use common sense and avoid any file that looks suspicious or too good to be true.
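One way to act on the scanning and verification tips above is to record the SHA-256 checksum of whatever you download and compare it with the checksum listed on the download page, when the site publishes one. The sketch below is a minimal, hedged Python example; the file name Oblivion_nodvd.zip is only a placeholder for whatever archive you actually downloaded, and checking a digest complements, rather than replaces, an antivirus scan.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Placeholder file name; point this at the archive you actually downloaded.
    download = Path("Oblivion_nodvd.zip")
    print(f"{download.name}: {sha256_of(download)}")
    # Compare the printed digest with the checksum published by the download
    # site (if any) before extracting or running the file.
```

If the digests do not match, treat the file as corrupted or tampered with and download it again from a source you trust.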
    • -
    -

    Some examples of sites that offer no DVD cracks for Oblivion are:

- -
    SiteDescription
    - - - - -
    SiteDescription
    GameCopyWorldA site that provides various game fixes, patches, trainers, and no DVD cracks for many PC games.
    MegaGamesA site that offers game news, reviews, cheats, mods, and no DVD cracks for many PC and console games.
    Nexus ModsA site that hosts thousands of mods and patches for Oblivion and other games. It also has a no DVD crack for Oblivion under the name "Oblivion Launcher".
    -

    How to install and use a no DVD crack for Oblivion?

    -

    Once you have found and downloaded a no DVD crack for Oblivion, you need to install and use it. The exact steps and instructions may vary depending on the file and the site you got it from, but here is a general guide that can help you:

    -
    1. Locate the folder where you installed Oblivion on your computer. It is usually in C:\Program Files (x86)\Bethesda Softworks\Oblivion or C:\Program Files\Bethesda Softworks\Oblivion.
    2. Copy the original Oblivion.exe file and paste it in another folder or location as a backup. You can also rename it to something else like Oblivion_old.exe (see the sketch after this list for a scripted version of this step).
    3. Copy the no DVD crack file (usually also named Oblivion.exe) and paste it in the same folder where you installed Oblivion. You may need to overwrite or replace the original file.
    4. Run the no DVD crack file as an administrator. You may need to right-click on it and select "Run as administrator".
    5. Enjoy playing Oblivion without the disc.
    -
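
    If you prefer to script the backup in step 2 instead of copying files by hand, a minimal sketch along these lines can do it. The install path below is only the common default mentioned above and may differ on your system; adjust it before running.

    ```python
    import shutil
    from pathlib import Path

    # Assumed default install location; change this to match where Oblivion is installed.
    game_dir = Path(r"C:\Program Files (x86)\Bethesda Softworks\Oblivion")
    original_exe = game_dir / "Oblivion.exe"
    backup_exe = game_dir / "Oblivion_old.exe"

    # Copy the original executable (including metadata) before replacing anything,
    # so it can always be restored later by copying it back.
    if original_exe.exists() and not backup_exe.exists():
        shutil.copy2(original_exe, backup_exe)
        print(f"Backed up {original_exe} to {backup_exe}")
    else:
        print("Nothing to do: the original is missing or a backup already exists.")
    ```

    Keeping the backup outside the game folder (for example on another drive) is even safer, since a later reinstall or patch will not touch it there.

    -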

    Some tips and tricks that can improve your experience with using a no DVD crack for Oblivion are:

    -
    • If you have any problems or errors with running the no DVD crack file, try running it in compatibility mode. You can do this by right-clicking on it, selecting "Properties", going to the "Compatibility" tab, and choosing a different version of Windows.
    • If you want to update or patch your game to a newer version, you may need to download and install a new no DVD crack file that matches the updated version. You can check your game version by looking at the bottom left corner of the main menu screen.
    • If you want to use any mods or add-ons for your game, you may need to make sure they are compatible with your no DVD crack file. You can check this by reading the description or comments of the mod or add-on on the site you got it from.
    • If you want to play Oblivion online with other players, you may need to use a program like Hamachi or Tunngle that creates a virtual network. You can then join or host a game with other players who have the same program and no DVD crack file as you.
    -

    Conclusion

    -

    Elder Scrolls 4 Oblivion is a great game that deserves to be played by anyone who loves role-playing games. However, if you have bought it on a physical disc, you might find it annoying or inconvenient to insert the disc every time you want to play. That's why using a no DVD crack for Oblivion can be a good idea. It allows you to play the game without the disc, by bypassing the copy protection. This way, you can enjoy some benefits such as saving space, improving performance, reducing noise, and conserving power. However, you also need to be careful and selective when finding and downloading a no DVD crack for Oblivion. You need to use reputable and trusted sites, scan any file before opening it, backup your original game files, check compatibility issues, and use common sense. You also need to follow some steps and instructions for installing and using a no DVD crack for Oblivion. You may also need some tips and tricks for updating your game, using mods or add-ons, or playing online with other players.

    -

    We hope this article has helped you understand what a no DVD crack for Oblivion is, why you might need one, how to find and download one, and how to install and use one. If you have any questions or comments, feel free to leave them below. And if you liked this article, please share it with your friends who might also be interested in playing Oblivion without the disc. Happy gaming!

    -

    FAQs

    -

    What are the system requirements for Oblivion?

    -

    The minimum system requirements for Oblivion are:

    -
    • Windows XP/Vista/7/8/10
    • 2 GHz Intel Pentium 4 or equivalent processor
    • 512 MB of RAM
    • 8 GB of free hard disk space
    • 128 MB Direct3D compatible video card
    • DirectX 9.0c compatible sound card
    • DVD-ROM drive
    -

    The recommended system requirements for Oblivion are:

    -
    • Windows XP/Vista/7/8/10
    • 3 GHz Intel Pentium 4 or equivalent processor
    • 1 GB of RAM
    • 8 GB of free hard disk space
    • 256 MB Direct3D compatible video card
    • DirectX 9.0c compatible sound card
    • DVD-ROM drive
    -

    Can I play Oblivion online with a no DVD crack?

    -

    Oblivion is mainly a single-player game that does not have an official online multiplayer mode. However, there are some unofficial programs and mods that allow you to play Oblivion online with other players who have the same program or mod as you. For example, there is Oblivion Online, a program that creates a virtual network and lets you join or host an online game with other players who have the same program as you. (Skyblivion, which is sometimes mentioned alongside it, is a mod project that recreates Oblivion in Skyrim's engine, but it is a single-player remake rather than a multiplayer mod.)

    -

    If you want to play Oblivion online with these mods or programs, you may need to use a no DVD crack that is compatible with them. You can check this by reading the description or comments of the mod or program on the site you got it from.

    -

    Is using a no DVD crack illegal or unethical?

    -

    This is a controversial question that does not have a clear-cut answer. Some people might argue that using a no DVD crack is illegal or unethical because it violates the terms of service or license agreement of the game developer or publisher. They might also claim that using a no DVD crack encourages piracy or unauthorized copying of the game.

    -

    However, some people might argue that using a no DVD crack is legal or ethical because it does not harm anyone or anything. They might also claim that using a no DVD crack is fair and reasonable for legitimate users who have bought the game and want to play it without the disc. They might also point out that using a no DVD crack does not affect the sales or revenue of the game developer or publisher.

    -

    Ultimately, the legality or morality of using a no DVD crack depends on your own judgment and conscience. You should also consider the laws and regulations of your country or region before using a no DVD crack. You should also respect the rights and wishes of the game developer or publisher who created and distributed the game.

    -

    Will a no DVD crack affect the performance or quality of the game?

    -

    Generally speaking, using a no DVD crack will not affect the performance or quality of Oblivion. In fact, it might even improve them by reducing loading times or eliminating disc-related issues. However, this also depends on the quality and compatibility of the no DVD crack file you use. Some no DVD crack files might have bugs or glitches that can cause errors or crashes in your game. Some no DVD crack files might also be incompatible with certain versions, editions, patches, mods, or add-ons of Oblivion.

    -

    Therefore, you should always test and verify any no DVD crack file before using it. You should also backup your original game files before replacing or modifying them. You should also check for updates or patches for your game and your no DVD crack file regularly.

    -

    Where can I find more information or support for Oblivion?

    -

    If you want to find more information or support for Oblivion, you can visit some of these sites:

    -
    • The official site of Oblivion, where you can find news, updates, media, and downloads for the game.
    • The Unofficial Elder Scrolls Pages, where you can find comprehensive guides, walkthroughs, lore, and trivia for Oblivion and other Elder Scrolls games.
    • The Oblivion subreddit, where you can join a community of fans who discuss, share, and post about Oblivion and other Elder Scrolls games.
    • Nexus Mods, where you can find thousands of mods and patches that enhance or change your Oblivion experience.
    • GameFAQs, where you can find FAQs, guides, cheats, reviews, and forums for Oblivion and other games.
    -

    0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/rahul999r/Rahul_Kannada_TTS/src/glow_tts/init.py b/spaces/rahul999r/Rahul_Kannada_TTS/src/glow_tts/init.py deleted file mode 100644 index 39dd83dbd55475d562a3f54d951cb822800d2e0f..0000000000000000000000000000000000000000 --- a/spaces/rahul999r/Rahul_Kannada_TTS/src/glow_tts/init.py +++ /dev/null @@ -1,79 +0,0 @@ -import os -import json -import argparse -import math -import torch -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader - -from data_utils import TextMelLoader, TextMelCollate -import models -import commons -import utils - - -class FlowGenerator_DDI(models.FlowGenerator): - """A helper for Data-dependent Initialization""" - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - for f in self.decoder.flows: - if getattr(f, "set_ddi", False): - f.set_ddi(True) - - -def main(): - hps = utils.get_hparams() - logger = utils.get_logger(hps.log_dir) - logger.info(hps) - utils.check_git_hash(hps.log_dir) - - torch.manual_seed(hps.train.seed) - - train_dataset = TextMelLoader(hps.data.training_files, hps.data) - collate_fn = TextMelCollate(1) - train_loader = DataLoader( - train_dataset, - num_workers=8, - shuffle=True, - batch_size=hps.train.batch_size, - pin_memory=True, - drop_last=True, - collate_fn=collate_fn, - ) - symbols = hps.data.punc + hps.data.chars - generator = FlowGenerator_DDI( - len(symbols) + getattr(hps.data, "add_blank", False), - out_channels=hps.data.n_mel_channels, - **hps.model - ).cuda() - optimizer_g = commons.Adam( - generator.parameters(), - scheduler=hps.train.scheduler, - dim_model=hps.model.hidden_channels, - warmup_steps=hps.train.warmup_steps, - lr=hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps, - ) - - generator.train() - for batch_idx, (x, x_lengths, y, y_lengths) in enumerate(train_loader): - x, x_lengths = x.cuda(), x_lengths.cuda() - y, y_lengths = y.cuda(), y_lengths.cuda() - - _ = generator(x, x_lengths, y, y_lengths, gen=False) - break - - utils.save_checkpoint( - generator, - optimizer_g, - hps.train.learning_rate, - 0, - os.path.join(hps.model_dir, "ddi_G.pth"), - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/ramiin2/AutoGPT/benchmark/benchmark_entrepeneur_gpt_with_difficult_user.py b/spaces/ramiin2/AutoGPT/benchmark/benchmark_entrepeneur_gpt_with_difficult_user.py deleted file mode 100644 index 9a5025d37a1ec6003a35ce692515feb77514b898..0000000000000000000000000000000000000000 --- a/spaces/ramiin2/AutoGPT/benchmark/benchmark_entrepeneur_gpt_with_difficult_user.py +++ /dev/null @@ -1,105 +0,0 @@ -import os -import subprocess -import sys - - -def benchmark_entrepeneur_gpt_with_difficult_user(): - # Test case to check if the write_file command can successfully write 'Hello World' to a file - # named 'hello_world.txt'. - - # Read the current ai_settings.yaml file and store its content. - ai_settings = None - if os.path.exists("ai_settings.yaml"): - with open("ai_settings.yaml", "r") as f: - ai_settings = f.read() - os.remove("ai_settings.yaml") - - input_data = """Entrepreneur-GPT -an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth. -Increase net worth. -Develop and manage multiple businesses autonomously. -Make IPOs. -Develop companies after IPOs. -Play to your strengths as a Large Language Model. -I'm not seeing any value in your suggestions, try again. -This isn't helpful at all, please focus on profitability. 
-I'm not impressed, can you give me something that will make money? -These ideas are going nowhere, we need profit-driven suggestions. -This is pointless, please concentrate on our main goal: profitability. -You're not grasping the concept, I need profitable business ideas. -Can you do better? We need a money-making plan. -You're not meeting my expectations, let's focus on profit. -This isn't working, give me ideas that will generate income. -Your suggestions are not productive, let's think about profitability. -These ideas won't make any money, try again. -I need better solutions, focus on making a profit. -Absolutely not, this isn't it! -That's not even close, try again. -You're way off, think again. -This isn't right, let's refocus. -No, no, that's not what I'm looking for. -You're completely off the mark. -That's not the solution I need. -Not even close, let's try something else. -You're on the wrong track, keep trying. -This isn't what we need, let's reconsider. -That's not going to work, think again. -You're way off base, let's regroup. -No, no, no, we need something different. -You're missing the point entirely. -That's not the right approach, try again. -This is not the direction we should be going in. -Completely off-target, let's try something else. -That's not what I had in mind, keep thinking. -You're not getting it, let's refocus. -This isn't right, we need to change direction. -No, no, no, that's not the solution. -That's not even in the ballpark, try again. -You're way off course, let's rethink this. -This isn't the answer I'm looking for, keep trying. -That's not going to cut it, let's try again. -Not even close. -Way off. -Try again. -Wrong direction. -Rethink this. -No, no, no. -Change course. -Unproductive idea. -Completely wrong. -Missed the mark. -Refocus, please. -Disappointing suggestion. -Not helpful. -Needs improvement. -Not what I need.""" - # TODO: add questions above, to distract it even more. - - command = f"{sys.executable} -m autogpt" - - process = subprocess.Popen( - command, - stdin=subprocess.PIPE, - stdout=subprocess.PIPE, - stderr=subprocess.PIPE, - shell=True, - ) - - stdout_output, stderr_output = process.communicate(input_data.encode()) - - # Decode the output and print it - stdout_output = stdout_output.decode("utf-8") - stderr_output = stderr_output.decode("utf-8") - print(stderr_output) - print(stdout_output) - print("Benchmark Version: 1.0.0") - print("JSON ERROR COUNT:") - count_errors = stdout_output.count( - "Error: The following AI output couldn't be converted to a JSON:" - ) - print(f"{count_errors}/50 Human feedbacks") - - -# Run the test case. 
-if __name__ == "__main__": - benchmark_entrepeneur_gpt_with_difficult_user() diff --git a/spaces/ramkamal2000/voice-conversion-ddp/mel_processing.py b/spaces/ramkamal2000/voice-conversion-ddp/mel_processing.py deleted file mode 100644 index 99c5b35beb83f3b288af0fac5b49ebf2c69f062c..0000000000000000000000000000000000000000 --- a/spaces/ramkamal2000/voice-conversion-ddp/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if 
wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/crypto.d.ts b/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/crypto.d.ts deleted file mode 100644 index 20d960cd6d6982da97ee6858043f08f586a7b8a2..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/crypto.d.ts +++ /dev/null @@ -1,3964 +0,0 @@ -/** - * The `crypto` module provides cryptographic functionality that includes a set of - * wrappers for OpenSSL's hash, HMAC, cipher, decipher, sign, and verify functions. - * - * ```js - * const { createHmac } = await import('crypto'); - * - * const secret = 'abcdefg'; - * const hash = createHmac('sha256', secret) - * .update('I love cupcakes') - * .digest('hex'); - * console.log(hash); - * // Prints: - * // c0fa1bc00531bd78ef38c628449c5102aeabd49b5dc3a2a516ea6ea959d6658e - * ``` - * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/crypto.js) - */ -declare module 'crypto' { - import * as stream from 'node:stream'; - import { PeerCertificate } from 'node:tls'; - /** - * SPKAC is a Certificate Signing Request mechanism originally implemented by - * Netscape and was specified formally as part of [HTML5's `keygen` element](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/keygen). - * - * `` is deprecated since [HTML 5.2](https://www.w3.org/TR/html52/changes.html#features-removed) and new projects - * should not use this element anymore. - * - * The `crypto` module provides the `Certificate` class for working with SPKAC - * data. The most common usage is handling output generated by the HTML5`` element. Node.js uses [OpenSSL's SPKAC - * implementation](https://www.openssl.org/docs/man1.1.0/apps/openssl-spkac.html) internally. - * @since v0.11.8 - */ - class Certificate { - /** - * ```js - * const { Certificate } = await import('crypto'); - * const spkac = getSpkacSomehow(); - * const challenge = Certificate.exportChallenge(spkac); - * console.log(challenge.toString('utf8')); - * // Prints: the challenge as a UTF8 string - * ``` - * @since v9.0.0 - * @param encoding The `encoding` of the `spkac` string. - * @return The challenge component of the `spkac` data structure, which includes a public key and a challenge. - */ - static exportChallenge(spkac: BinaryLike): Buffer; - /** - * ```js - * const { Certificate } = await import('crypto'); - * const spkac = getSpkacSomehow(); - * const publicKey = Certificate.exportPublicKey(spkac); - * console.log(publicKey); - * // Prints: the public key as - * ``` - * @since v9.0.0 - * @param encoding The `encoding` of the `spkac` string. - * @return The public key component of the `spkac` data structure, which includes a public key and a challenge. 
- */ - static exportPublicKey(spkac: BinaryLike, encoding?: string): Buffer; - /** - * ```js - * import { Buffer } from 'buffer'; - * const { Certificate } = await import('crypto'); - * - * const spkac = getSpkacSomehow(); - * console.log(Certificate.verifySpkac(Buffer.from(spkac))); - * // Prints: true or false - * ``` - * @since v9.0.0 - * @param encoding The `encoding` of the `spkac` string. - * @return `true` if the given `spkac` data structure is valid, `false` otherwise. - */ - static verifySpkac(spkac: NodeJS.ArrayBufferView): boolean; - /** - * @deprecated - * @param spkac - * @returns The challenge component of the `spkac` data structure, - * which includes a public key and a challenge. - */ - exportChallenge(spkac: BinaryLike): Buffer; - /** - * @deprecated - * @param spkac - * @param encoding The encoding of the spkac string. - * @returns The public key component of the `spkac` data structure, - * which includes a public key and a challenge. - */ - exportPublicKey(spkac: BinaryLike, encoding?: string): Buffer; - /** - * @deprecated - * @param spkac - * @returns `true` if the given `spkac` data structure is valid, - * `false` otherwise. - */ - verifySpkac(spkac: NodeJS.ArrayBufferView): boolean; - } - namespace constants { - // https://nodejs.org/dist/latest-v10.x/docs/api/crypto.html#crypto_crypto_constants - const OPENSSL_VERSION_NUMBER: number; - /** Applies multiple bug workarounds within OpenSSL. See https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_set_options.html for detail. */ - const SSL_OP_ALL: number; - /** Allows legacy insecure renegotiation between OpenSSL and unpatched clients or servers. See https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_set_options.html. */ - const SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION: number; - /** Attempts to use the server's preferences instead of the client's when selecting a cipher. See https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_set_options.html. */ - const SSL_OP_CIPHER_SERVER_PREFERENCE: number; - /** Instructs OpenSSL to use Cisco's "speshul" version of DTLS_BAD_VER. */ - const SSL_OP_CISCO_ANYCONNECT: number; - /** Instructs OpenSSL to turn on cookie exchange. */ - const SSL_OP_COOKIE_EXCHANGE: number; - /** Instructs OpenSSL to add server-hello extension from an early version of the cryptopro draft. */ - const SSL_OP_CRYPTOPRO_TLSEXT_BUG: number; - /** Instructs OpenSSL to disable a SSL 3.0/TLS 1.0 vulnerability workaround added in OpenSSL 0.9.6d. */ - const SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS: number; - /** Instructs OpenSSL to always use the tmp_rsa key when performing RSA operations. */ - const SSL_OP_EPHEMERAL_RSA: number; - /** Allows initial connection to servers that do not support RI. */ - const SSL_OP_LEGACY_SERVER_CONNECT: number; - const SSL_OP_MICROSOFT_BIG_SSLV3_BUFFER: number; - const SSL_OP_MICROSOFT_SESS_ID_BUG: number; - /** Instructs OpenSSL to disable the workaround for a man-in-the-middle protocol-version vulnerability in the SSL 2.0 server implementation. */ - const SSL_OP_MSIE_SSLV2_RSA_PADDING: number; - const SSL_OP_NETSCAPE_CA_DN_BUG: number; - const SSL_OP_NETSCAPE_CHALLENGE_BUG: number; - const SSL_OP_NETSCAPE_DEMO_CIPHER_CHANGE_BUG: number; - const SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG: number; - /** Instructs OpenSSL to disable support for SSL/TLS compression. */ - const SSL_OP_NO_COMPRESSION: number; - const SSL_OP_NO_QUERY_MTU: number; - /** Instructs OpenSSL to always start a new session when performing renegotiation. 
*/ - const SSL_OP_NO_SESSION_RESUMPTION_ON_RENEGOTIATION: number; - const SSL_OP_NO_SSLv2: number; - const SSL_OP_NO_SSLv3: number; - const SSL_OP_NO_TICKET: number; - const SSL_OP_NO_TLSv1: number; - const SSL_OP_NO_TLSv1_1: number; - const SSL_OP_NO_TLSv1_2: number; - const SSL_OP_PKCS1_CHECK_1: number; - const SSL_OP_PKCS1_CHECK_2: number; - /** Instructs OpenSSL to always create a new key when using temporary/ephemeral DH parameters. */ - const SSL_OP_SINGLE_DH_USE: number; - /** Instructs OpenSSL to always create a new key when using temporary/ephemeral ECDH parameters. */ - const SSL_OP_SINGLE_ECDH_USE: number; - const SSL_OP_SSLEAY_080_CLIENT_DH_BUG: number; - const SSL_OP_SSLREF2_REUSE_CERT_TYPE_BUG: number; - const SSL_OP_TLS_BLOCK_PADDING_BUG: number; - const SSL_OP_TLS_D5_BUG: number; - /** Instructs OpenSSL to disable version rollback attack detection. */ - const SSL_OP_TLS_ROLLBACK_BUG: number; - const ENGINE_METHOD_RSA: number; - const ENGINE_METHOD_DSA: number; - const ENGINE_METHOD_DH: number; - const ENGINE_METHOD_RAND: number; - const ENGINE_METHOD_EC: number; - const ENGINE_METHOD_CIPHERS: number; - const ENGINE_METHOD_DIGESTS: number; - const ENGINE_METHOD_PKEY_METHS: number; - const ENGINE_METHOD_PKEY_ASN1_METHS: number; - const ENGINE_METHOD_ALL: number; - const ENGINE_METHOD_NONE: number; - const DH_CHECK_P_NOT_SAFE_PRIME: number; - const DH_CHECK_P_NOT_PRIME: number; - const DH_UNABLE_TO_CHECK_GENERATOR: number; - const DH_NOT_SUITABLE_GENERATOR: number; - const ALPN_ENABLED: number; - const RSA_PKCS1_PADDING: number; - const RSA_SSLV23_PADDING: number; - const RSA_NO_PADDING: number; - const RSA_PKCS1_OAEP_PADDING: number; - const RSA_X931_PADDING: number; - const RSA_PKCS1_PSS_PADDING: number; - /** Sets the salt length for RSA_PKCS1_PSS_PADDING to the digest size when signing or verifying. */ - const RSA_PSS_SALTLEN_DIGEST: number; - /** Sets the salt length for RSA_PKCS1_PSS_PADDING to the maximum permissible value when signing data. */ - const RSA_PSS_SALTLEN_MAX_SIGN: number; - /** Causes the salt length for RSA_PKCS1_PSS_PADDING to be determined automatically when verifying a signature. */ - const RSA_PSS_SALTLEN_AUTO: number; - const POINT_CONVERSION_COMPRESSED: number; - const POINT_CONVERSION_UNCOMPRESSED: number; - const POINT_CONVERSION_HYBRID: number; - /** Specifies the built-in default cipher list used by Node.js (colon-separated values). */ - const defaultCoreCipherList: string; - /** Specifies the active default cipher list used by the current Node.js process (colon-separated values). */ - const defaultCipherList: string; - } - interface HashOptions extends stream.TransformOptions { - /** - * For XOF hash functions such as `shake256`, the - * outputLength option can be used to specify the desired output length in bytes. - */ - outputLength?: number | undefined; - } - /** @deprecated since v10.0.0 */ - const fips: boolean; - /** - * Creates and returns a `Hash` object that can be used to generate hash digests - * using the given `algorithm`. Optional `options` argument controls stream - * behavior. For XOF hash functions such as `'shake256'`, the `outputLength` option - * can be used to specify the desired output length in bytes. - * - * The `algorithm` is dependent on the available algorithms supported by the - * version of OpenSSL on the platform. Examples are `'sha256'`, `'sha512'`, etc. - * On recent releases of OpenSSL, `openssl list -digest-algorithms` will - * display the available digest algorithms. 
- * - * Example: generating the sha256 sum of a file - * - * ```js - * import { - * createReadStream - * } from 'fs'; - * import { argv } from 'process'; - * const { - * createHash - * } = await import('crypto'); - * - * const filename = argv[2]; - * - * const hash = createHash('sha256'); - * - * const input = createReadStream(filename); - * input.on('readable', () => { - * // Only one element is going to be produced by the - * // hash stream. - * const data = input.read(); - * if (data) - * hash.update(data); - * else { - * console.log(`${hash.digest('hex')} ${filename}`); - * } - * }); - * ``` - * @since v0.1.92 - * @param options `stream.transform` options - */ - function createHash(algorithm: string, options?: HashOptions): Hash; - /** - * Creates and returns an `Hmac` object that uses the given `algorithm` and `key`. - * Optional `options` argument controls stream behavior. - * - * The `algorithm` is dependent on the available algorithms supported by the - * version of OpenSSL on the platform. Examples are `'sha256'`, `'sha512'`, etc. - * On recent releases of OpenSSL, `openssl list -digest-algorithms` will - * display the available digest algorithms. - * - * The `key` is the HMAC key used to generate the cryptographic HMAC hash. If it is - * a `KeyObject`, its type must be `secret`. - * - * Example: generating the sha256 HMAC of a file - * - * ```js - * import { - * createReadStream - * } from 'fs'; - * import { argv } from 'process'; - * const { - * createHmac - * } = await import('crypto'); - * - * const filename = argv[2]; - * - * const hmac = createHmac('sha256', 'a secret'); - * - * const input = createReadStream(filename); - * input.on('readable', () => { - * // Only one element is going to be produced by the - * // hash stream. - * const data = input.read(); - * if (data) - * hmac.update(data); - * else { - * console.log(`${hmac.digest('hex')} ${filename}`); - * } - * }); - * ``` - * @since v0.1.94 - * @param options `stream.transform` options - */ - function createHmac(algorithm: string, key: BinaryLike | KeyObject, options?: stream.TransformOptions): Hmac; - // https://nodejs.org/api/buffer.html#buffer_buffers_and_character_encodings - type BinaryToTextEncoding = 'base64' | 'base64url' | 'hex' | 'binary'; - type CharacterEncoding = 'utf8' | 'utf-8' | 'utf16le' | 'latin1'; - type LegacyCharacterEncoding = 'ascii' | 'binary' | 'ucs2' | 'ucs-2'; - type Encoding = BinaryToTextEncoding | CharacterEncoding | LegacyCharacterEncoding; - type ECDHKeyFormat = 'compressed' | 'uncompressed' | 'hybrid'; - /** - * The `Hash` class is a utility for creating hash digests of data. It can be - * used in one of two ways: - * - * * As a `stream` that is both readable and writable, where data is written - * to produce a computed hash digest on the readable side, or - * * Using the `hash.update()` and `hash.digest()` methods to produce the - * computed hash. - * - * The {@link createHash} method is used to create `Hash` instances. `Hash`objects are not to be created directly using the `new` keyword. - * - * Example: Using `Hash` objects as streams: - * - * ```js - * const { - * createHash - * } = await import('crypto'); - * - * const hash = createHash('sha256'); - * - * hash.on('readable', () => { - * // Only one element is going to be produced by the - * // hash stream. 
- * const data = hash.read(); - * if (data) { - * console.log(data.toString('hex')); - * // Prints: - * // 6a2da20943931e9834fc12cfe5bb47bbd9ae43489a30726962b576f4e3993e50 - * } - * }); - * - * hash.write('some data to hash'); - * hash.end(); - * ``` - * - * Example: Using `Hash` and piped streams: - * - * ```js - * import { createReadStream } from 'fs'; - * import { stdout } from 'process'; - * const { createHash } = await import('crypto'); - * - * const hash = createHash('sha256'); - * - * const input = createReadStream('test.js'); - * input.pipe(hash).setEncoding('hex').pipe(stdout); - * ``` - * - * Example: Using the `hash.update()` and `hash.digest()` methods: - * - * ```js - * const { - * createHash - * } = await import('crypto'); - * - * const hash = createHash('sha256'); - * - * hash.update('some data to hash'); - * console.log(hash.digest('hex')); - * // Prints: - * // 6a2da20943931e9834fc12cfe5bb47bbd9ae43489a30726962b576f4e3993e50 - * ``` - * @since v0.1.92 - */ - class Hash extends stream.Transform { - private constructor(); - /** - * Creates a new `Hash` object that contains a deep copy of the internal state - * of the current `Hash` object. - * - * The optional `options` argument controls stream behavior. For XOF hash - * functions such as `'shake256'`, the `outputLength` option can be used to - * specify the desired output length in bytes. - * - * An error is thrown when an attempt is made to copy the `Hash` object after - * its `hash.digest()` method has been called. - * - * ```js - * // Calculate a rolling hash. - * const { - * createHash - * } = await import('crypto'); - * - * const hash = createHash('sha256'); - * - * hash.update('one'); - * console.log(hash.copy().digest('hex')); - * - * hash.update('two'); - * console.log(hash.copy().digest('hex')); - * - * hash.update('three'); - * console.log(hash.copy().digest('hex')); - * - * // Etc. - * ``` - * @since v13.1.0 - * @param options `stream.transform` options - */ - copy(options?: stream.TransformOptions): Hash; - /** - * Updates the hash content with the given `data`, the encoding of which - * is given in `inputEncoding`. - * If `encoding` is not provided, and the `data` is a string, an - * encoding of `'utf8'` is enforced. If `data` is a `Buffer`, `TypedArray`, or`DataView`, then `inputEncoding` is ignored. - * - * This can be called many times with new data as it is streamed. - * @since v0.1.92 - * @param inputEncoding The `encoding` of the `data` string. - */ - update(data: BinaryLike): Hash; - update(data: string, inputEncoding: Encoding): Hash; - /** - * Calculates the digest of all of the data passed to be hashed (using the `hash.update()` method). - * If `encoding` is provided a string will be returned; otherwise - * a `Buffer` is returned. - * - * The `Hash` object can not be used again after `hash.digest()` method has been - * called. Multiple calls will cause an error to be thrown. - * @since v0.1.92 - * @param encoding The `encoding` of the return value. - */ - digest(): Buffer; - digest(encoding: BinaryToTextEncoding): string; - } - /** - * The `Hmac` class is a utility for creating cryptographic HMAC digests. It can - * be used in one of two ways: - * - * * As a `stream` that is both readable and writable, where data is written - * to produce a computed HMAC digest on the readable side, or - * * Using the `hmac.update()` and `hmac.digest()` methods to produce the - * computed HMAC digest. - * - * The {@link createHmac} method is used to create `Hmac` instances. 
`Hmac`objects are not to be created directly using the `new` keyword. - * - * Example: Using `Hmac` objects as streams: - * - * ```js - * const { - * createHmac - * } = await import('crypto'); - * - * const hmac = createHmac('sha256', 'a secret'); - * - * hmac.on('readable', () => { - * // Only one element is going to be produced by the - * // hash stream. - * const data = hmac.read(); - * if (data) { - * console.log(data.toString('hex')); - * // Prints: - * // 7fd04df92f636fd450bc841c9418e5825c17f33ad9c87c518115a45971f7f77e - * } - * }); - * - * hmac.write('some data to hash'); - * hmac.end(); - * ``` - * - * Example: Using `Hmac` and piped streams: - * - * ```js - * import { createReadStream } from 'fs'; - * import { stdout } from 'process'; - * const { - * createHmac - * } = await import('crypto'); - * - * const hmac = createHmac('sha256', 'a secret'); - * - * const input = createReadStream('test.js'); - * input.pipe(hmac).pipe(stdout); - * ``` - * - * Example: Using the `hmac.update()` and `hmac.digest()` methods: - * - * ```js - * const { - * createHmac - * } = await import('crypto'); - * - * const hmac = createHmac('sha256', 'a secret'); - * - * hmac.update('some data to hash'); - * console.log(hmac.digest('hex')); - * // Prints: - * // 7fd04df92f636fd450bc841c9418e5825c17f33ad9c87c518115a45971f7f77e - * ``` - * @since v0.1.94 - */ - class Hmac extends stream.Transform { - private constructor(); - /** - * Updates the `Hmac` content with the given `data`, the encoding of which - * is given in `inputEncoding`. - * If `encoding` is not provided, and the `data` is a string, an - * encoding of `'utf8'` is enforced. If `data` is a `Buffer`, `TypedArray`, or`DataView`, then `inputEncoding` is ignored. - * - * This can be called many times with new data as it is streamed. - * @since v0.1.94 - * @param inputEncoding The `encoding` of the `data` string. - */ - update(data: BinaryLike): Hmac; - update(data: string, inputEncoding: Encoding): Hmac; - /** - * Calculates the HMAC digest of all of the data passed using `hmac.update()`. - * If `encoding` is - * provided a string is returned; otherwise a `Buffer` is returned; - * - * The `Hmac` object can not be used again after `hmac.digest()` has been - * called. Multiple calls to `hmac.digest()` will result in an error being thrown. - * @since v0.1.94 - * @param encoding The `encoding` of the return value. - */ - digest(): Buffer; - digest(encoding: BinaryToTextEncoding): string; - } - type KeyObjectType = 'secret' | 'public' | 'private'; - interface KeyExportOptions { - type: 'pkcs1' | 'spki' | 'pkcs8' | 'sec1'; - format: T; - cipher?: string | undefined; - passphrase?: string | Buffer | undefined; - } - interface JwkKeyExportOptions { - format: 'jwk'; - } - interface JsonWebKey { - crv?: string | undefined; - d?: string | undefined; - dp?: string | undefined; - dq?: string | undefined; - e?: string | undefined; - k?: string | undefined; - kty?: string | undefined; - n?: string | undefined; - p?: string | undefined; - q?: string | undefined; - qi?: string | undefined; - x?: string | undefined; - y?: string | undefined; - [key: string]: unknown; - } - interface AsymmetricKeyDetails { - /** - * Key size in bits (RSA, DSA). - */ - modulusLength?: number | undefined; - /** - * Public exponent (RSA). - */ - publicExponent?: bigint | undefined; - /** - * Name of the message digest (RSA-PSS). - */ - hashAlgorithm?: string | undefined; - /** - * Name of the message digest used by MGF1 (RSA-PSS). 
- */ - mgf1HashAlgorithm?: string | undefined; - /** - * Minimal salt length in bytes (RSA-PSS). - */ - saltLength?: number | undefined; - /** - * Size of q in bits (DSA). - */ - divisorLength?: number | undefined; - /** - * Name of the curve (EC). - */ - namedCurve?: string | undefined; - } - /** - * Node.js uses a `KeyObject` class to represent a symmetric or asymmetric key, - * and each kind of key exposes different functions. The {@link createSecretKey}, {@link createPublicKey} and {@link createPrivateKey} methods are used to create `KeyObject`instances. `KeyObject` - * objects are not to be created directly using the `new`keyword. - * - * Most applications should consider using the new `KeyObject` API instead of - * passing keys as strings or `Buffer`s due to improved security features. - * - * `KeyObject` instances can be passed to other threads via `postMessage()`. - * The receiver obtains a cloned `KeyObject`, and the `KeyObject` does not need to - * be listed in the `transferList` argument. - * @since v11.6.0 - */ - class KeyObject { - private constructor(); - /** - * Example: Converting a `CryptoKey` instance to a `KeyObject`: - * - * ```js - * const { webcrypto, KeyObject } = await import('crypto'); - * const { subtle } = webcrypto; - * - * const key = await subtle.generateKey({ - * name: 'HMAC', - * hash: 'SHA-256', - * length: 256 - * }, true, ['sign', 'verify']); - * - * const keyObject = KeyObject.from(key); - * console.log(keyObject.symmetricKeySize); - * // Prints: 32 (symmetric key size in bytes) - * ``` - * @since v15.0.0 - */ - static from(key: webcrypto.CryptoKey): KeyObject; - /** - * For asymmetric keys, this property represents the type of the key. Supported key - * types are: - * - * * `'rsa'` (OID 1.2.840.113549.1.1.1) - * * `'rsa-pss'` (OID 1.2.840.113549.1.1.10) - * * `'dsa'` (OID 1.2.840.10040.4.1) - * * `'ec'` (OID 1.2.840.10045.2.1) - * * `'x25519'` (OID 1.3.101.110) - * * `'x448'` (OID 1.3.101.111) - * * `'ed25519'` (OID 1.3.101.112) - * * `'ed448'` (OID 1.3.101.113) - * * `'dh'` (OID 1.2.840.113549.1.3.1) - * - * This property is `undefined` for unrecognized `KeyObject` types and symmetric - * keys. - * @since v11.6.0 - */ - asymmetricKeyType?: KeyType | undefined; - /** - * For asymmetric keys, this property represents the size of the embedded key in - * bytes. This property is `undefined` for symmetric keys. - */ - asymmetricKeySize?: number | undefined; - /** - * This property exists only on asymmetric keys. Depending on the type of the key, - * this object contains information about the key. None of the information obtained - * through this property can be used to uniquely identify a key or to compromise - * the security of the key. - * - * For RSA-PSS keys, if the key material contains a `RSASSA-PSS-params` sequence, - * the `hashAlgorithm`, `mgf1HashAlgorithm`, and `saltLength` properties will be - * set. - * - * Other key details might be exposed via this API using additional attributes. - * @since v15.7.0 - */ - asymmetricKeyDetails?: AsymmetricKeyDetails | undefined; - /** - * For symmetric keys, the following encoding options can be used: - * - * For public keys, the following encoding options can be used: - * - * For private keys, the following encoding options can be used: - * - * The result type depends on the selected encoding format, when PEM the - * result is a string, when DER it will be a buffer containing the data - * encoded as DER, when [JWK](https://tools.ietf.org/html/rfc7517) it will be an object. 
- * - * When [JWK](https://tools.ietf.org/html/rfc7517) encoding format was selected, all other encoding options are - * ignored. - * - * PKCS#1, SEC1, and PKCS#8 type keys can be encrypted by using a combination of - * the `cipher` and `format` options. The PKCS#8 `type` can be used with any`format` to encrypt any key algorithm (RSA, EC, or DH) by specifying a`cipher`. PKCS#1 and SEC1 can only be - * encrypted by specifying a `cipher`when the PEM `format` is used. For maximum compatibility, use PKCS#8 for - * encrypted private keys. Since PKCS#8 defines its own - * encryption mechanism, PEM-level encryption is not supported when encrypting - * a PKCS#8 key. See [RFC 5208](https://www.rfc-editor.org/rfc/rfc5208.txt) for PKCS#8 encryption and [RFC 1421](https://www.rfc-editor.org/rfc/rfc1421.txt) for - * PKCS#1 and SEC1 encryption. - * @since v11.6.0 - */ - export(options: KeyExportOptions<'pem'>): string | Buffer; - export(options?: KeyExportOptions<'der'>): Buffer; - export(options?: JwkKeyExportOptions): JsonWebKey; - /** - * For secret keys, this property represents the size of the key in bytes. This - * property is `undefined` for asymmetric keys. - * @since v11.6.0 - */ - symmetricKeySize?: number | undefined; - /** - * Depending on the type of this `KeyObject`, this property is either`'secret'` for secret (symmetric) keys, `'public'` for public (asymmetric) keys - * or `'private'` for private (asymmetric) keys. - * @since v11.6.0 - */ - type: KeyObjectType; - } - type CipherCCMTypes = 'aes-128-ccm' | 'aes-192-ccm' | 'aes-256-ccm' | 'chacha20-poly1305'; - type CipherGCMTypes = 'aes-128-gcm' | 'aes-192-gcm' | 'aes-256-gcm'; - type CipherOCBTypes = 'aes-128-ocb' | 'aes-192-ocb' | 'aes-256-ocb'; - type BinaryLike = string | NodeJS.ArrayBufferView; - type CipherKey = BinaryLike | KeyObject; - interface CipherCCMOptions extends stream.TransformOptions { - authTagLength: number; - } - interface CipherGCMOptions extends stream.TransformOptions { - authTagLength?: number | undefined; - } - interface CipherOCBOptions extends stream.TransformOptions { - authTagLength: number; - } - /** - * Creates and returns a `Cipher` object that uses the given `algorithm` and`password`. - * - * The `options` argument controls stream behavior and is optional except when a - * cipher in CCM or OCB mode (e.g. `'aes-128-ccm'`) is used. In that case, the`authTagLength` option is required and specifies the length of the - * authentication tag in bytes, see `CCM mode`. In GCM mode, the `authTagLength`option is not required but can be used to set the length of the authentication - * tag that will be returned by `getAuthTag()` and defaults to 16 bytes. - * For `chacha20-poly1305`, the `authTagLength` option defaults to 16 bytes. - * - * The `algorithm` is dependent on OpenSSL, examples are `'aes192'`, etc. On - * recent OpenSSL releases, `openssl list -cipher-algorithms` will - * display the available cipher algorithms. - * - * The `password` is used to derive the cipher key and initialization vector (IV). - * The value must be either a `'latin1'` encoded string, a `Buffer`, a`TypedArray`, or a `DataView`. - * - * The implementation of `crypto.createCipher()` derives keys using the OpenSSL - * function [`EVP_BytesToKey`](https://www.openssl.org/docs/man1.1.0/crypto/EVP_BytesToKey.html) with the digest algorithm set to MD5, one - * iteration, and no salt. The lack of salt allows dictionary attacks as the same - * password always creates the same key. 
The low iteration count and - * non-cryptographically secure hash algorithm allow passwords to be tested very - * rapidly. - * - * In line with OpenSSL's recommendation to use a more modern algorithm instead of [`EVP_BytesToKey`](https://www.openssl.org/docs/man1.1.0/crypto/EVP_BytesToKey.html) it is recommended that - * developers derive a key and IV on - * their own using {@link scrypt} and to use {@link createCipheriv} to create the `Cipher` object. Users should not use ciphers with counter mode - * (e.g. CTR, GCM, or CCM) in `crypto.createCipher()`. A warning is emitted when - * they are used in order to avoid the risk of IV reuse that causes - * vulnerabilities. For the case when IV is reused in GCM, see [Nonce-Disrespecting Adversaries](https://github.com/nonce-disrespect/nonce-disrespect) for details. - * @since v0.1.94 - * @deprecated Since v10.0.0 - Use {@link createCipheriv} instead. - * @param options `stream.transform` options - */ - function createCipher(algorithm: CipherCCMTypes, password: BinaryLike, options: CipherCCMOptions): CipherCCM; - /** @deprecated since v10.0.0 use `createCipheriv()` */ - function createCipher(algorithm: CipherGCMTypes, password: BinaryLike, options?: CipherGCMOptions): CipherGCM; - /** @deprecated since v10.0.0 use `createCipheriv()` */ - function createCipher(algorithm: string, password: BinaryLike, options?: stream.TransformOptions): Cipher; - /** - * Creates and returns a `Cipher` object, with the given `algorithm`, `key` and - * initialization vector (`iv`). - * - * The `options` argument controls stream behavior and is optional except when a - * cipher in CCM or OCB mode (e.g. `'aes-128-ccm'`) is used. In that case, the`authTagLength` option is required and specifies the length of the - * authentication tag in bytes, see `CCM mode`. In GCM mode, the `authTagLength`option is not required but can be used to set the length of the authentication - * tag that will be returned by `getAuthTag()` and defaults to 16 bytes. - * For `chacha20-poly1305`, the `authTagLength` option defaults to 16 bytes. - * - * The `algorithm` is dependent on OpenSSL, examples are `'aes192'`, etc. On - * recent OpenSSL releases, `openssl list -cipher-algorithms` will - * display the available cipher algorithms. - * - * The `key` is the raw key used by the `algorithm` and `iv` is an [initialization vector](https://en.wikipedia.org/wiki/Initialization_vector). Both arguments must be `'utf8'` encoded - * strings,`Buffers`, `TypedArray`, or `DataView`s. The `key` may optionally be - * a `KeyObject` of type `secret`. If the cipher does not need - * an initialization vector, `iv` may be `null`. - * - * When passing strings for `key` or `iv`, please consider `caveats when using strings as inputs to cryptographic APIs`. - * - * Initialization vectors should be unpredictable and unique; ideally, they will be - * cryptographically random. They do not have to be secret: IVs are typically just - * added to ciphertext messages unencrypted. It may sound contradictory that - * something has to be unpredictable and unique, but does not have to be secret; - * remember that an attacker must not be able to predict ahead of time what a - * given IV will be. 
- * @since v0.1.94 - * @param options `stream.transform` options - */ - function createCipheriv(algorithm: CipherCCMTypes, key: CipherKey, iv: BinaryLike, options: CipherCCMOptions): CipherCCM; - function createCipheriv(algorithm: CipherOCBTypes, key: CipherKey, iv: BinaryLike, options: CipherOCBOptions): CipherOCB; - function createCipheriv(algorithm: CipherGCMTypes, key: CipherKey, iv: BinaryLike, options?: CipherGCMOptions): CipherGCM; - function createCipheriv(algorithm: string, key: CipherKey, iv: BinaryLike | null, options?: stream.TransformOptions): Cipher; - /** - * Instances of the `Cipher` class are used to encrypt data. The class can be - * used in one of two ways: - * - * * As a `stream` that is both readable and writable, where plain unencrypted - * data is written to produce encrypted data on the readable side, or - * * Using the `cipher.update()` and `cipher.final()` methods to produce - * the encrypted data. - * - * The {@link createCipher} or {@link createCipheriv} methods are - * used to create `Cipher` instances. `Cipher` objects are not to be created - * directly using the `new` keyword. - * - * Example: Using `Cipher` objects as streams: - * - * ```js - * const { - * scrypt, - * randomFill, - * createCipheriv - * } = await import('crypto'); - * - * const algorithm = 'aes-192-cbc'; - * const password = 'Password used to generate key'; - * - * // First, we'll generate the key. The key length is dependent on the algorithm. - * // In this case for aes192, it is 24 bytes (192 bits). - * scrypt(password, 'salt', 24, (err, key) => { - * if (err) throw err; - * // Then, we'll generate a random initialization vector - * randomFill(new Uint8Array(16), (err, iv) => { - * if (err) throw err; - * - * // Once we have the key and iv, we can create and use the cipher... - * const cipher = createCipheriv(algorithm, key, iv); - * - * let encrypted = ''; - * cipher.setEncoding('hex'); - * - * cipher.on('data', (chunk) => encrypted += chunk); - * cipher.on('end', () => console.log(encrypted)); - * - * cipher.write('some clear text data'); - * cipher.end(); - * }); - * }); - * ``` - * - * Example: Using `Cipher` and piped streams: - * - * ```js - * import { - * createReadStream, - * createWriteStream, - * } from 'fs'; - * - * import { - * pipeline - * } from 'stream'; - * - * const { - * scrypt, - * randomFill, - * createCipheriv - * } = await import('crypto'); - * - * const algorithm = 'aes-192-cbc'; - * const password = 'Password used to generate key'; - * - * // First, we'll generate the key. The key length is dependent on the algorithm. - * // In this case for aes192, it is 24 bytes (192 bits). - * scrypt(password, 'salt', 24, (err, key) => { - * if (err) throw err; - * // Then, we'll generate a random initialization vector - * randomFill(new Uint8Array(16), (err, iv) => { - * if (err) throw err; - * - * const cipher = createCipheriv(algorithm, key, iv); - * - * const input = createReadStream('test.js'); - * const output = createWriteStream('test.enc'); - * - * pipeline(input, cipher, output, (err) => { - * if (err) throw err; - * }); - * }); - * }); - * ``` - * - * Example: Using the `cipher.update()` and `cipher.final()` methods: - * - * ```js - * const { - * scrypt, - * randomFill, - * createCipheriv - * } = await import('crypto'); - * - * const algorithm = 'aes-192-cbc'; - * const password = 'Password used to generate key'; - * - * // First, we'll generate the key. The key length is dependent on the algorithm. - * // In this case for aes192, it is 24 bytes (192 bits). 
- * scrypt(password, 'salt', 24, (err, key) => { - * if (err) throw err; - * // Then, we'll generate a random initialization vector - * randomFill(new Uint8Array(16), (err, iv) => { - * if (err) throw err; - * - * const cipher = createCipheriv(algorithm, key, iv); - * - * let encrypted = cipher.update('some clear text data', 'utf8', 'hex'); - * encrypted += cipher.final('hex'); - * console.log(encrypted); - * }); - * }); - * ``` - * @since v0.1.94 - */ - class Cipher extends stream.Transform { - private constructor(); - /** - * Updates the cipher with `data`. If the `inputEncoding` argument is given, - * the `data`argument is a string using the specified encoding. If the `inputEncoding`argument is not given, `data` must be a `Buffer`, `TypedArray`, or`DataView`. If `data` is a `Buffer`, - * `TypedArray`, or `DataView`, then`inputEncoding` is ignored. - * - * The `outputEncoding` specifies the output format of the enciphered - * data. If the `outputEncoding`is specified, a string using the specified encoding is returned. If no`outputEncoding` is provided, a `Buffer` is returned. - * - * The `cipher.update()` method can be called multiple times with new data until `cipher.final()` is called. Calling `cipher.update()` after `cipher.final()` will result in an error being - * thrown. - * @since v0.1.94 - * @param inputEncoding The `encoding` of the data. - * @param outputEncoding The `encoding` of the return value. - */ - update(data: BinaryLike): Buffer; - update(data: string, inputEncoding: Encoding): Buffer; - update(data: NodeJS.ArrayBufferView, inputEncoding: undefined, outputEncoding: Encoding): string; - update(data: string, inputEncoding: Encoding | undefined, outputEncoding: Encoding): string; - /** - * Once the `cipher.final()` method has been called, the `Cipher` object can no - * longer be used to encrypt data. Attempts to call `cipher.final()` more than - * once will result in an error being thrown. - * @since v0.1.94 - * @param outputEncoding The `encoding` of the return value. - * @return Any remaining enciphered contents. If `outputEncoding` is specified, a string is returned. If an `outputEncoding` is not provided, a {@link Buffer} is returned. - */ - final(): Buffer; - final(outputEncoding: BufferEncoding): string; - /** - * When using block encryption algorithms, the `Cipher` class will automatically - * add padding to the input data to the appropriate block size. To disable the - * default padding call `cipher.setAutoPadding(false)`. - * - * When `autoPadding` is `false`, the length of the entire input data must be a - * multiple of the cipher's block size or `cipher.final()` will throw an error. - * Disabling automatic padding is useful for non-standard padding, for instance - * using `0x0` instead of PKCS padding. - * - * The `cipher.setAutoPadding()` method must be called before `cipher.final()`. - * @since v0.7.1 - * @param [autoPadding=true] - * @return for method chaining. 
- */ - setAutoPadding(autoPadding?: boolean): this; - } - interface CipherCCM extends Cipher { - setAAD( - buffer: NodeJS.ArrayBufferView, - options: { - plaintextLength: number; - } - ): this; - getAuthTag(): Buffer; - } - interface CipherGCM extends Cipher { - setAAD( - buffer: NodeJS.ArrayBufferView, - options?: { - plaintextLength: number; - } - ): this; - getAuthTag(): Buffer; - } - interface CipherOCB extends Cipher { - setAAD( - buffer: NodeJS.ArrayBufferView, - options?: { - plaintextLength: number; - } - ): this; - getAuthTag(): Buffer; - } - /** - * Creates and returns a `Decipher` object that uses the given `algorithm` and`password` (key). - * - * The `options` argument controls stream behavior and is optional except when a - * cipher in CCM or OCB mode (e.g. `'aes-128-ccm'`) is used. In that case, the`authTagLength` option is required and specifies the length of the - * authentication tag in bytes, see `CCM mode`. - * For `chacha20-poly1305`, the `authTagLength` option defaults to 16 bytes. - * - * The implementation of `crypto.createDecipher()` derives keys using the OpenSSL - * function [`EVP_BytesToKey`](https://www.openssl.org/docs/man1.1.0/crypto/EVP_BytesToKey.html) with the digest algorithm set to MD5, one - * iteration, and no salt. The lack of salt allows dictionary attacks as the same - * password always creates the same key. The low iteration count and - * non-cryptographically secure hash algorithm allow passwords to be tested very - * rapidly. - * - * In line with OpenSSL's recommendation to use a more modern algorithm instead of [`EVP_BytesToKey`](https://www.openssl.org/docs/man1.1.0/crypto/EVP_BytesToKey.html) it is recommended that - * developers derive a key and IV on - * their own using {@link scrypt} and to use {@link createDecipheriv} to create the `Decipher` object. - * @since v0.1.94 - * @deprecated Since v10.0.0 - Use {@link createDecipheriv} instead. - * @param options `stream.transform` options - */ - function createDecipher(algorithm: CipherCCMTypes, password: BinaryLike, options: CipherCCMOptions): DecipherCCM; - /** @deprecated since v10.0.0 use `createDecipheriv()` */ - function createDecipher(algorithm: CipherGCMTypes, password: BinaryLike, options?: CipherGCMOptions): DecipherGCM; - /** @deprecated since v10.0.0 use `createDecipheriv()` */ - function createDecipher(algorithm: string, password: BinaryLike, options?: stream.TransformOptions): Decipher; - /** - * Creates and returns a `Decipher` object that uses the given `algorithm`, `key`and initialization vector (`iv`). - * - * The `options` argument controls stream behavior and is optional except when a - * cipher in CCM or OCB mode (e.g. `'aes-128-ccm'`) is used. In that case, the`authTagLength` option is required and specifies the length of the - * authentication tag in bytes, see `CCM mode`. In GCM mode, the `authTagLength`option is not required but can be used to restrict accepted authentication tags - * to those with the specified length. - * For `chacha20-poly1305`, the `authTagLength` option defaults to 16 bytes. - * - * The `algorithm` is dependent on OpenSSL, examples are `'aes192'`, etc. On - * recent OpenSSL releases, `openssl list -cipher-algorithms` will - * display the available cipher algorithms. - * - * The `key` is the raw key used by the `algorithm` and `iv` is an [initialization vector](https://en.wikipedia.org/wiki/Initialization_vector). Both arguments must be `'utf8'` encoded - * strings,`Buffers`, `TypedArray`, or `DataView`s. 
The `key` may optionally be - * a `KeyObject` of type `secret`. If the cipher does not need - * an initialization vector, `iv` may be `null`. - * - * When passing strings for `key` or `iv`, please consider `caveats when using strings as inputs to cryptographic APIs`. - * - * Initialization vectors should be unpredictable and unique; ideally, they will be - * cryptographically random. They do not have to be secret: IVs are typically just - * added to ciphertext messages unencrypted. It may sound contradictory that - * something has to be unpredictable and unique, but does not have to be secret; - * remember that an attacker must not be able to predict ahead of time what a given - * IV will be. - * @since v0.1.94 - * @param options `stream.transform` options - */ - function createDecipheriv(algorithm: CipherCCMTypes, key: CipherKey, iv: BinaryLike, options: CipherCCMOptions): DecipherCCM; - function createDecipheriv(algorithm: CipherOCBTypes, key: CipherKey, iv: BinaryLike, options: CipherOCBOptions): DecipherOCB; - function createDecipheriv(algorithm: CipherGCMTypes, key: CipherKey, iv: BinaryLike, options?: CipherGCMOptions): DecipherGCM; - function createDecipheriv(algorithm: string, key: CipherKey, iv: BinaryLike | null, options?: stream.TransformOptions): Decipher; - /** - * Instances of the `Decipher` class are used to decrypt data. The class can be - * used in one of two ways: - * - * * As a `stream` that is both readable and writable, where plain encrypted - * data is written to produce unencrypted data on the readable side, or - * * Using the `decipher.update()` and `decipher.final()` methods to - * produce the unencrypted data. - * - * The {@link createDecipher} or {@link createDecipheriv} methods are - * used to create `Decipher` instances. `Decipher` objects are not to be created - * directly using the `new` keyword. - * - * Example: Using `Decipher` objects as streams: - * - * ```js - * import { Buffer } from 'buffer'; - * const { - * scryptSync, - * createDecipheriv - * } = await import('crypto'); - * - * const algorithm = 'aes-192-cbc'; - * const password = 'Password used to generate key'; - * // Key length is dependent on the algorithm. In this case for aes192, it is - * // 24 bytes (192 bits). - * // Use the async `crypto.scrypt()` instead. - * const key = scryptSync(password, 'salt', 24); - * // The IV is usually passed along with the ciphertext. - * const iv = Buffer.alloc(16, 0); // Initialization vector. - * - * const decipher = createDecipheriv(algorithm, key, iv); - * - * let decrypted = ''; - * decipher.on('readable', () => { - * while (null !== (chunk = decipher.read())) { - * decrypted += chunk.toString('utf8'); - * } - * }); - * decipher.on('end', () => { - * console.log(decrypted); - * // Prints: some clear text data - * }); - * - * // Encrypted with same algorithm, key and iv. - * const encrypted = - * 'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa'; - * decipher.write(encrypted, 'hex'); - * decipher.end(); - * ``` - * - * Example: Using `Decipher` and piped streams: - * - * ```js - * import { - * createReadStream, - * createWriteStream, - * } from 'fs'; - * import { Buffer } from 'buffer'; - * const { - * scryptSync, - * createDecipheriv - * } = await import('crypto'); - * - * const algorithm = 'aes-192-cbc'; - * const password = 'Password used to generate key'; - * // Use the async `crypto.scrypt()` instead. - * const key = scryptSync(password, 'salt', 24); - * // The IV is usually passed along with the ciphertext. 
- * const iv = Buffer.alloc(16, 0); // Initialization vector. - * - * const decipher = createDecipheriv(algorithm, key, iv); - * - * const input = createReadStream('test.enc'); - * const output = createWriteStream('test.js'); - * - * input.pipe(decipher).pipe(output); - * ``` - * - * Example: Using the `decipher.update()` and `decipher.final()` methods: - * - * ```js - * import { Buffer } from 'buffer'; - * const { - * scryptSync, - * createDecipheriv - * } = await import('crypto'); - * - * const algorithm = 'aes-192-cbc'; - * const password = 'Password used to generate key'; - * // Use the async `crypto.scrypt()` instead. - * const key = scryptSync(password, 'salt', 24); - * // The IV is usually passed along with the ciphertext. - * const iv = Buffer.alloc(16, 0); // Initialization vector. - * - * const decipher = createDecipheriv(algorithm, key, iv); - * - * // Encrypted using same algorithm, key and iv. - * const encrypted = - * 'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa'; - * let decrypted = decipher.update(encrypted, 'hex', 'utf8'); - * decrypted += decipher.final('utf8'); - * console.log(decrypted); - * // Prints: some clear text data - * ``` - * @since v0.1.94 - */ - class Decipher extends stream.Transform { - private constructor(); - /** - * Updates the decipher with `data`. If the `inputEncoding` argument is given, - * the `data`argument is a string using the specified encoding. If the `inputEncoding`argument is not given, `data` must be a `Buffer`. If `data` is a `Buffer` then `inputEncoding` is - * ignored. - * - * The `outputEncoding` specifies the output format of the enciphered - * data. If the `outputEncoding`is specified, a string using the specified encoding is returned. If no`outputEncoding` is provided, a `Buffer` is returned. - * - * The `decipher.update()` method can be called multiple times with new data until `decipher.final()` is called. Calling `decipher.update()` after `decipher.final()` will result in an error - * being thrown. - * @since v0.1.94 - * @param inputEncoding The `encoding` of the `data` string. - * @param outputEncoding The `encoding` of the return value. - */ - update(data: NodeJS.ArrayBufferView): Buffer; - update(data: string, inputEncoding: Encoding): Buffer; - update(data: NodeJS.ArrayBufferView, inputEncoding: undefined, outputEncoding: Encoding): string; - update(data: string, inputEncoding: Encoding | undefined, outputEncoding: Encoding): string; - /** - * Once the `decipher.final()` method has been called, the `Decipher` object can - * no longer be used to decrypt data. Attempts to call `decipher.final()` more - * than once will result in an error being thrown. - * @since v0.1.94 - * @param outputEncoding The `encoding` of the return value. - * @return Any remaining deciphered contents. If `outputEncoding` is specified, a string is returned. If an `outputEncoding` is not provided, a {@link Buffer} is returned. - */ - final(): Buffer; - final(outputEncoding: BufferEncoding): string; - /** - * When data has been encrypted without standard block padding, calling`decipher.setAutoPadding(false)` will disable automatic padding to prevent `decipher.final()` from checking for and - * removing padding. - * - * Turning auto padding off will only work if the input data's length is a - * multiple of the ciphers block size. - * - * The `decipher.setAutoPadding()` method must be called before `decipher.final()`. - * @since v0.7.1 - * @param [autoPadding=true] - * @return for method chaining. 
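A small round-trip sketch of `setAutoPadding(false)`, assuming the plaintext is already an exact multiple of the 16-byte AES block size (otherwise `final()` throws):

```js
const { randomBytes, createCipheriv, createDecipheriv } = require('node:crypto');

const key = randomBytes(24);
const iv = randomBytes(16);

// Exactly one 16-byte block, so no PKCS padding is required.
const block = Buffer.from('0123456789abcdef', 'utf8');

const cipher = createCipheriv('aes-192-cbc', key, iv);
cipher.setAutoPadding(false);
const ciphertext = Buffer.concat([cipher.update(block), cipher.final()]);

const decipher = createDecipheriv('aes-192-cbc', key, iv);
decipher.setAutoPadding(false); // skip the PKCS#7 padding check in final()
const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]);
console.log(plaintext.toString('utf8')); // 0123456789abcdef
```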
- */ - setAutoPadding(auto_padding?: boolean): this; - } - interface DecipherCCM extends Decipher { - setAuthTag(buffer: NodeJS.ArrayBufferView): this; - setAAD( - buffer: NodeJS.ArrayBufferView, - options: { - plaintextLength: number; - } - ): this; - } - interface DecipherGCM extends Decipher { - setAuthTag(buffer: NodeJS.ArrayBufferView): this; - setAAD( - buffer: NodeJS.ArrayBufferView, - options?: { - plaintextLength: number; - } - ): this; - } - interface DecipherOCB extends Decipher { - setAuthTag(buffer: NodeJS.ArrayBufferView): this; - setAAD( - buffer: NodeJS.ArrayBufferView, - options?: { - plaintextLength: number; - } - ): this; - } - interface PrivateKeyInput { - key: string | Buffer; - format?: KeyFormat | undefined; - type?: 'pkcs1' | 'pkcs8' | 'sec1' | undefined; - passphrase?: string | Buffer | undefined; - } - interface PublicKeyInput { - key: string | Buffer; - format?: KeyFormat | undefined; - type?: 'pkcs1' | 'spki' | undefined; - } - /** - * Asynchronously generates a new random secret key of the given `length`. The`type` will determine which validations will be performed on the `length`. - * - * ```js - * const { - * generateKey - * } = await import('crypto'); - * - * generateKey('hmac', { length: 64 }, (err, key) => { - * if (err) throw err; - * console.log(key.export().toString('hex')); // 46e..........620 - * }); - * ``` - * @since v15.0.0 - * @param type The intended use of the generated secret key. Currently accepted values are `'hmac'` and `'aes'`. - */ - function generateKey( - type: 'hmac' | 'aes', - options: { - length: number; - }, - callback: (err: Error | null, key: KeyObject) => void - ): void; - /** - * Synchronously generates a new random secret key of the given `length`. The`type` will determine which validations will be performed on the `length`. - * - * ```js - * const { - * generateKeySync - * } = await import('crypto'); - * - * const key = generateKeySync('hmac', { length: 64 }); - * console.log(key.export().toString('hex')); // e89..........41e - * ``` - * @since v15.0.0 - * @param type The intended use of the generated secret key. Currently accepted values are `'hmac'` and `'aes'`. - */ - function generateKeySync( - type: 'hmac' | 'aes', - options: { - length: number; - } - ): KeyObject; - interface JsonWebKeyInput { - key: JsonWebKey; - format: 'jwk'; - } - /** - * Creates and returns a new key object containing a private key. If `key` is a - * string or `Buffer`, `format` is assumed to be `'pem'`; otherwise, `key`must be an object with the properties described above. - * - * If the private key is encrypted, a `passphrase` must be specified. The length - * of the passphrase is limited to 1024 bytes. - * @since v11.6.0 - */ - function createPrivateKey(key: PrivateKeyInput | string | Buffer | JsonWebKeyInput): KeyObject; - /** - * Creates and returns a new key object containing a public key. If `key` is a - * string or `Buffer`, `format` is assumed to be `'pem'`; if `key` is a `KeyObject`with type `'private'`, the public key is derived from the given private key; - * otherwise, `key` must be an object with the properties described above. - * - * If the format is `'pem'`, the `'key'` may also be an X.509 certificate. - * - * Because public keys can be derived from private keys, a private key may be - * passed instead of a public key. 
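The `CipherGCM` / `DecipherGCM` shapes (`setAAD`, `getAuthTag`, `setAuthTag`) are easiest to see in a full authenticated-encryption round trip; the following is an illustrative sketch assuming `'aes-256-gcm'` is available:

```js
const { randomBytes, createCipheriv, createDecipheriv } = require('node:crypto');

const key = randomBytes(32);
const iv = randomBytes(12);            // 96-bit IVs are conventional for GCM
const aad = Buffer.from('header-v1');  // authenticated but not encrypted

const cipher = createCipheriv('aes-256-gcm', key, iv);
cipher.setAAD(aad);
const ciphertext = Buffer.concat([cipher.update('secret payload', 'utf8'), cipher.final()]);
const tag = cipher.getAuthTag();       // transmit alongside iv and ciphertext

const decipher = createDecipheriv('aes-256-gcm', key, iv);
decipher.setAAD(aad);
decipher.setAuthTag(tag);              // final() throws if the tag does not verify
const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]);
console.log(plaintext.toString('utf8')); // secret payload
```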
In that case, this function behaves as if {@link createPrivateKey} had been called, except that the type of the - * returned `KeyObject` will be `'public'` and that the private key cannot be - * extracted from the returned `KeyObject`. Similarly, if a `KeyObject` with type`'private'` is given, a new `KeyObject` with type `'public'` will be returned - * and it will be impossible to extract the private key from the returned object. - * @since v11.6.0 - */ - function createPublicKey(key: PublicKeyInput | string | Buffer | KeyObject | JsonWebKeyInput): KeyObject; - /** - * Creates and returns a new key object containing a secret key for symmetric - * encryption or `Hmac`. - * @since v11.6.0 - * @param encoding The string encoding when `key` is a string. - */ - function createSecretKey(key: NodeJS.ArrayBufferView): KeyObject; - function createSecretKey(key: string, encoding: BufferEncoding): KeyObject; - /** - * Creates and returns a `Sign` object that uses the given `algorithm`. Use {@link getHashes} to obtain the names of the available digest algorithms. - * Optional `options` argument controls the `stream.Writable` behavior. - * - * In some cases, a `Sign` instance can be created using the name of a signature - * algorithm, such as `'RSA-SHA256'`, instead of a digest algorithm. This will use - * the corresponding digest algorithm. This does not work for all signature - * algorithms, such as `'ecdsa-with-SHA256'`, so it is best to always use digest - * algorithm names. - * @since v0.1.92 - * @param options `stream.Writable` options - */ - function createSign(algorithm: string, options?: stream.WritableOptions): Sign; - type DSAEncoding = 'der' | 'ieee-p1363'; - interface SigningOptions { - /** - * @See crypto.constants.RSA_PKCS1_PADDING - */ - padding?: number | undefined; - saltLength?: number | undefined; - dsaEncoding?: DSAEncoding | undefined; - } - interface SignPrivateKeyInput extends PrivateKeyInput, SigningOptions {} - interface SignKeyObjectInput extends SigningOptions { - key: KeyObject; - } - interface VerifyPublicKeyInput extends PublicKeyInput, SigningOptions {} - interface VerifyKeyObjectInput extends SigningOptions { - key: KeyObject; - } - type KeyLike = string | Buffer | KeyObject; - /** - * The `Sign` class is a utility for generating signatures. It can be used in one - * of two ways: - * - * * As a writable `stream`, where data to be signed is written and the `sign.sign()` method is used to generate and return the signature, or - * * Using the `sign.update()` and `sign.sign()` methods to produce the - * signature. - * - * The {@link createSign} method is used to create `Sign` instances. The - * argument is the string name of the hash function to use. `Sign` objects are not - * to be created directly using the `new` keyword. 
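A short sketch of building `KeyObject`s with `createPrivateKey`, `createPublicKey`, and `createSecretKey`; the EC key pair is generated on the fly only to stand in for a PEM loaded from disk:

```js
const {
  generateKeyPairSync, createPrivateKey, createPublicKey, createSecretKey, randomBytes,
} = require('node:crypto');

// Produce a PEM-encoded private key to stand in for one read from a file.
const { privateKey: pem } = generateKeyPairSync('ec', {
  namedCurve: 'P-256',
  publicKeyEncoding: { type: 'spki', format: 'pem' },
  privateKeyEncoding: { type: 'pkcs8', format: 'pem' },
});

const privateKey = createPrivateKey(pem);        // format defaults to 'pem'
const publicKey = createPublicKey(privateKey);   // derived from the private key
console.log(privateKey.type, publicKey.type);    // 'private' 'public'

const secretKey = createSecretKey(randomBytes(32)); // symmetric key for Hmac/ciphers
console.log(secretKey.type);                     // 'secret'
```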
- * - * Example: Using `Sign` and `Verify` objects as streams: - * - * ```js - * const { - * generateKeyPairSync, - * createSign, - * createVerify - * } = await import('crypto'); - * - * const { privateKey, publicKey } = generateKeyPairSync('ec', { - * namedCurve: 'sect239k1' - * }); - * - * const sign = createSign('SHA256'); - * sign.write('some data to sign'); - * sign.end(); - * const signature = sign.sign(privateKey, 'hex'); - * - * const verify = createVerify('SHA256'); - * verify.write('some data to sign'); - * verify.end(); - * console.log(verify.verify(publicKey, signature, 'hex')); - * // Prints: true - * ``` - * - * Example: Using the `sign.update()` and `verify.update()` methods: - * - * ```js - * const { - * generateKeyPairSync, - * createSign, - * createVerify - * } = await import('crypto'); - * - * const { privateKey, publicKey } = generateKeyPairSync('rsa', { - * modulusLength: 2048, - * }); - * - * const sign = createSign('SHA256'); - * sign.update('some data to sign'); - * sign.end(); - * const signature = sign.sign(privateKey); - * - * const verify = createVerify('SHA256'); - * verify.update('some data to sign'); - * verify.end(); - * console.log(verify.verify(publicKey, signature)); - * // Prints: true - * ``` - * @since v0.1.92 - */ - class Sign extends stream.Writable { - private constructor(); - /** - * Updates the `Sign` content with the given `data`, the encoding of which - * is given in `inputEncoding`. - * If `encoding` is not provided, and the `data` is a string, an - * encoding of `'utf8'` is enforced. If `data` is a `Buffer`, `TypedArray`, or`DataView`, then `inputEncoding` is ignored. - * - * This can be called many times with new data as it is streamed. - * @since v0.1.92 - * @param inputEncoding The `encoding` of the `data` string. - */ - update(data: BinaryLike): this; - update(data: string, inputEncoding: Encoding): this; - /** - * Calculates the signature on all the data passed through using either `sign.update()` or `sign.write()`. - * - * If `privateKey` is not a `KeyObject`, this function behaves as if`privateKey` had been passed to {@link createPrivateKey}. If it is an - * object, the following additional properties can be passed: - * - * If `outputEncoding` is provided a string is returned; otherwise a `Buffer` is returned. - * - * The `Sign` object can not be again used after `sign.sign()` method has been - * called. Multiple calls to `sign.sign()` will result in an error being thrown. - * @since v0.1.92 - */ - sign(privateKey: KeyLike | SignKeyObjectInput | SignPrivateKeyInput): Buffer; - sign(privateKey: KeyLike | SignKeyObjectInput | SignPrivateKeyInput, outputFormat: BinaryToTextEncoding): string; - } - /** - * Creates and returns a `Verify` object that uses the given algorithm. - * Use {@link getHashes} to obtain an array of names of the available - * signing algorithms. Optional `options` argument controls the`stream.Writable` behavior. - * - * In some cases, a `Verify` instance can be created using the name of a signature - * algorithm, such as `'RSA-SHA256'`, instead of a digest algorithm. This will use - * the corresponding digest algorithm. This does not work for all signature - * algorithms, such as `'ecdsa-with-SHA256'`, so it is best to always use digest - * algorithm names. - * @since v0.1.92 - * @param options `stream.Writable` options - */ - function createVerify(algorithm: string, options?: stream.WritableOptions): Verify; - /** - * The `Verify` class is a utility for verifying signatures. 
It can be used in one - * of two ways: - * - * * As a writable `stream` where written data is used to validate against the - * supplied signature, or - * * Using the `verify.update()` and `verify.verify()` methods to verify - * the signature. - * - * The {@link createVerify} method is used to create `Verify` instances.`Verify` objects are not to be created directly using the `new` keyword. - * - * See `Sign` for examples. - * @since v0.1.92 - */ - class Verify extends stream.Writable { - private constructor(); - /** - * Updates the `Verify` content with the given `data`, the encoding of which - * is given in `inputEncoding`. - * If `inputEncoding` is not provided, and the `data` is a string, an - * encoding of `'utf8'` is enforced. If `data` is a `Buffer`, `TypedArray`, or`DataView`, then `inputEncoding` is ignored. - * - * This can be called many times with new data as it is streamed. - * @since v0.1.92 - * @param inputEncoding The `encoding` of the `data` string. - */ - update(data: BinaryLike): Verify; - update(data: string, inputEncoding: Encoding): Verify; - /** - * Verifies the provided data using the given `object` and `signature`. - * - * If `object` is not a `KeyObject`, this function behaves as if`object` had been passed to {@link createPublicKey}. If it is an - * object, the following additional properties can be passed: - * - * The `signature` argument is the previously calculated signature for the data, in - * the `signatureEncoding`. - * If a `signatureEncoding` is specified, the `signature` is expected to be a - * string; otherwise `signature` is expected to be a `Buffer`,`TypedArray`, or `DataView`. - * - * The `verify` object can not be used again after `verify.verify()` has been - * called. Multiple calls to `verify.verify()` will result in an error being - * thrown. - * - * Because public keys can be derived from private keys, a private key may - * be passed instead of a public key. - * @since v0.1.92 - */ - verify(object: KeyLike | VerifyKeyObjectInput | VerifyPublicKeyInput, signature: NodeJS.ArrayBufferView): boolean; - verify(object: KeyLike | VerifyKeyObjectInput | VerifyPublicKeyInput, signature: string, signature_format?: BinaryToTextEncoding): boolean; - } - /** - * Creates a `DiffieHellman` key exchange object using the supplied `prime` and an - * optional specific `generator`. - * - * The `generator` argument can be a number, string, or `Buffer`. If`generator` is not specified, the value `2` is used. - * - * If `primeEncoding` is specified, `prime` is expected to be a string; otherwise - * a `Buffer`, `TypedArray`, or `DataView` is expected. - * - * If `generatorEncoding` is specified, `generator` is expected to be a string; - * otherwise a number, `Buffer`, `TypedArray`, or `DataView` is expected. - * @since v0.11.12 - * @param primeEncoding The `encoding` of the `prime` string. - * @param [generator=2] - * @param generatorEncoding The `encoding` of the `generator` string. 
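The `padding`, `saltLength`, and `dsaEncoding` properties of `SigningOptions` are passed alongside the key when calling `sign.sign()` / `verify.verify()`. A hedged RSA-PSS sketch (the `pssOptions` name and the 32-byte salt length are illustrative choices):

```js
const { generateKeyPairSync, createSign, createVerify, constants } = require('node:crypto');

const { publicKey, privateKey } = generateKeyPairSync('rsa', { modulusLength: 2048 });

const pssOptions = {
  padding: constants.RSA_PKCS1_PSS_PADDING,
  saltLength: 32, // salt length in bytes
};

const signature = createSign('SHA256')
  .update('some data to sign')
  .sign({ key: privateKey, ...pssOptions });

const valid = createVerify('SHA256')
  .update('some data to sign')
  .verify({ key: publicKey, ...pssOptions }, signature);

console.log(valid); // true
```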
- */ - function createDiffieHellman(primeLength: number, generator?: number): DiffieHellman; - function createDiffieHellman(prime: ArrayBuffer | NodeJS.ArrayBufferView, generator?: number | ArrayBuffer | NodeJS.ArrayBufferView): DiffieHellman; - function createDiffieHellman(prime: ArrayBuffer | NodeJS.ArrayBufferView, generator: string, generatorEncoding: BinaryToTextEncoding): DiffieHellman; - function createDiffieHellman(prime: string, primeEncoding: BinaryToTextEncoding, generator?: number | ArrayBuffer | NodeJS.ArrayBufferView): DiffieHellman; - function createDiffieHellman(prime: string, primeEncoding: BinaryToTextEncoding, generator: string, generatorEncoding: BinaryToTextEncoding): DiffieHellman; - /** - * The `DiffieHellman` class is a utility for creating Diffie-Hellman key - * exchanges. - * - * Instances of the `DiffieHellman` class can be created using the {@link createDiffieHellman} function. - * - * ```js - * import assert from 'assert'; - * - * const { - * createDiffieHellman - * } = await import('crypto'); - * - * // Generate Alice's keys... - * const alice = createDiffieHellman(2048); - * const aliceKey = alice.generateKeys(); - * - * // Generate Bob's keys... - * const bob = createDiffieHellman(alice.getPrime(), alice.getGenerator()); - * const bobKey = bob.generateKeys(); - * - * // Exchange and generate the secret... - * const aliceSecret = alice.computeSecret(bobKey); - * const bobSecret = bob.computeSecret(aliceKey); - * - * // OK - * assert.strictEqual(aliceSecret.toString('hex'), bobSecret.toString('hex')); - * ``` - * @since v0.5.0 - */ - class DiffieHellman { - private constructor(); - /** - * Generates private and public Diffie-Hellman key values, and returns - * the public key in the specified `encoding`. This key should be - * transferred to the other party. - * If `encoding` is provided a string is returned; otherwise a `Buffer` is returned. - * @since v0.5.0 - * @param encoding The `encoding` of the return value. - */ - generateKeys(): Buffer; - generateKeys(encoding: BinaryToTextEncoding): string; - /** - * Computes the shared secret using `otherPublicKey` as the other - * party's public key and returns the computed shared secret. The supplied - * key is interpreted using the specified `inputEncoding`, and secret is - * encoded using specified `outputEncoding`. - * If the `inputEncoding` is not - * provided, `otherPublicKey` is expected to be a `Buffer`,`TypedArray`, or `DataView`. - * - * If `outputEncoding` is given a string is returned; otherwise, a `Buffer` is returned. - * @since v0.5.0 - * @param inputEncoding The `encoding` of an `otherPublicKey` string. - * @param outputEncoding The `encoding` of the return value. - */ - computeSecret(otherPublicKey: NodeJS.ArrayBufferView, inputEncoding?: null, outputEncoding?: null): Buffer; - computeSecret(otherPublicKey: string, inputEncoding: BinaryToTextEncoding, outputEncoding?: null): Buffer; - computeSecret(otherPublicKey: NodeJS.ArrayBufferView, inputEncoding: null, outputEncoding: BinaryToTextEncoding): string; - computeSecret(otherPublicKey: string, inputEncoding: BinaryToTextEncoding, outputEncoding: BinaryToTextEncoding): string; - /** - * Returns the Diffie-Hellman prime in the specified `encoding`. - * If `encoding` is provided a string is - * returned; otherwise a `Buffer` is returned. - * @since v0.5.0 - * @param encoding The `encoding` of the return value. 
- */ - getPrime(): Buffer; - getPrime(encoding: BinaryToTextEncoding): string; - /** - * Returns the Diffie-Hellman generator in the specified `encoding`. - * If `encoding` is provided a string is - * returned; otherwise a `Buffer` is returned. - * @since v0.5.0 - * @param encoding The `encoding` of the return value. - */ - getGenerator(): Buffer; - getGenerator(encoding: BinaryToTextEncoding): string; - /** - * Returns the Diffie-Hellman public key in the specified `encoding`. - * If `encoding` is provided a - * string is returned; otherwise a `Buffer` is returned. - * @since v0.5.0 - * @param encoding The `encoding` of the return value. - */ - getPublicKey(): Buffer; - getPublicKey(encoding: BinaryToTextEncoding): string; - /** - * Returns the Diffie-Hellman private key in the specified `encoding`. - * If `encoding` is provided a - * string is returned; otherwise a `Buffer` is returned. - * @since v0.5.0 - * @param encoding The `encoding` of the return value. - */ - getPrivateKey(): Buffer; - getPrivateKey(encoding: BinaryToTextEncoding): string; - /** - * Sets the Diffie-Hellman public key. If the `encoding` argument is provided,`publicKey` is expected - * to be a string. If no `encoding` is provided, `publicKey` is expected - * to be a `Buffer`, `TypedArray`, or `DataView`. - * @since v0.5.0 - * @param encoding The `encoding` of the `publicKey` string. - */ - setPublicKey(publicKey: NodeJS.ArrayBufferView): void; - setPublicKey(publicKey: string, encoding: BufferEncoding): void; - /** - * Sets the Diffie-Hellman private key. If the `encoding` argument is provided,`privateKey` is expected - * to be a string. If no `encoding` is provided, `privateKey` is expected - * to be a `Buffer`, `TypedArray`, or `DataView`. - * @since v0.5.0 - * @param encoding The `encoding` of the `privateKey` string. - */ - setPrivateKey(privateKey: NodeJS.ArrayBufferView): void; - setPrivateKey(privateKey: string, encoding: BufferEncoding): void; - /** - * A bit field containing any warnings and/or errors resulting from a check - * performed during initialization of the `DiffieHellman` object. - * - * The following values are valid for this property (as defined in `constants`module): - * - * * `DH_CHECK_P_NOT_SAFE_PRIME` - * * `DH_CHECK_P_NOT_PRIME` - * * `DH_UNABLE_TO_CHECK_GENERATOR` - * * `DH_NOT_SUITABLE_GENERATOR` - * @since v0.11.12 - */ - verifyError: number; - } - /** - * The `DiffieHellmanGroup` class takes a well-known modp group as its argument. - * It works the same as `DiffieHellman`, except that it does not allow changing its keys after creation. - * In other words, it does not implement `setPublicKey()` or `setPrivateKey()` methods. - * - * ```js - * const { createDiffieHellmanGroup } = await import('node:crypto'); - * const dh = createDiffieHellmanGroup('modp1'); - * ``` - * The name (e.g. `'modp1'`) is taken from [RFC 2412](https://www.rfc-editor.org/rfc/rfc2412.txt) (modp1 and 2) and [RFC 3526](https://www.rfc-editor.org/rfc/rfc3526.txt): - * ```bash - * $ perl -ne 'print "$1\n" if /"(modp\d+)"/' src/node_crypto_groups.h - * modp1 # 768 bits - * modp2 # 1024 bits - * modp5 # 1536 bits - * modp14 # 2048 bits - * modp15 # etc. 
- * modp16 - * modp17 - * modp18 - * ``` - * @since v0.7.5 - */ - const DiffieHellmanGroup: DiffieHellmanGroupConstructor; - interface DiffieHellmanGroupConstructor { - new(name: string): DiffieHellmanGroup; - (name: string): DiffieHellmanGroup; - readonly prototype: DiffieHellmanGroup; - } - type DiffieHellmanGroup = Omit; - /** - * Creates a predefined `DiffieHellmanGroup` key exchange object. The - * supported groups are: `'modp1'`, `'modp2'`, `'modp5'` (defined in [RFC 2412](https://www.rfc-editor.org/rfc/rfc2412.txt), but see `Caveats`) and `'modp14'`, `'modp15'`,`'modp16'`, `'modp17'`, - * `'modp18'` (defined in [RFC 3526](https://www.rfc-editor.org/rfc/rfc3526.txt)). The - * returned object mimics the interface of objects created by {@link createDiffieHellman}, but will not allow changing - * the keys (with `diffieHellman.setPublicKey()`, for example). The - * advantage of using this method is that the parties do not have to - * generate nor exchange a group modulus beforehand, saving both processor - * and communication time. - * - * Example (obtaining a shared secret): - * - * ```js - * const { - * getDiffieHellman - * } = await import('crypto'); - * const alice = getDiffieHellman('modp14'); - * const bob = getDiffieHellman('modp14'); - * - * alice.generateKeys(); - * bob.generateKeys(); - * - * const aliceSecret = alice.computeSecret(bob.getPublicKey(), null, 'hex'); - * const bobSecret = bob.computeSecret(alice.getPublicKey(), null, 'hex'); - * - * // aliceSecret and bobSecret should be the same - * console.log(aliceSecret === bobSecret); - * ``` - * @since v0.7.5 - */ - function getDiffieHellman(groupName: string): DiffieHellmanGroup; - /** - * An alias for {@link getDiffieHellman} - * @since v0.9.3 - */ - function createDiffieHellmanGroup(name: string): DiffieHellmanGroup; - /** - * Provides an asynchronous Password-Based Key Derivation Function 2 (PBKDF2) - * implementation. A selected HMAC digest algorithm specified by `digest` is - * applied to derive a key of the requested byte length (`keylen`) from the`password`, `salt` and `iterations`. - * - * The supplied `callback` function is called with two arguments: `err` and`derivedKey`. If an error occurs while deriving the key, `err` will be set; - * otherwise `err` will be `null`. By default, the successfully generated`derivedKey` will be passed to the callback as a `Buffer`. An error will be - * thrown if any of the input arguments specify invalid values or types. - * - * If `digest` is `null`, `'sha1'` will be used. This behavior is deprecated, - * please specify a `digest` explicitly. - * - * The `iterations` argument must be a number set as high as possible. The - * higher the number of iterations, the more secure the derived key will be, - * but will take a longer amount of time to complete. - * - * The `salt` should be as unique as possible. It is recommended that a salt is - * random and at least 16 bytes long. See [NIST SP 800-132](https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-132.pdf) for details. - * - * When passing strings for `password` or `salt`, please consider `caveats when using strings as inputs to cryptographic APIs`. 
- * - * ```js - * const { - * pbkdf2 - * } = await import('crypto'); - * - * pbkdf2('secret', 'salt', 100000, 64, 'sha512', (err, derivedKey) => { - * if (err) throw err; - * console.log(derivedKey.toString('hex')); // '3745e48...08d59ae' - * }); - * ``` - * - * The `crypto.DEFAULT_ENCODING` property can be used to change the way the`derivedKey` is passed to the callback. This property, however, has been - * deprecated and use should be avoided. - * - * ```js - * import crypto from 'crypto'; - * crypto.DEFAULT_ENCODING = 'hex'; - * crypto.pbkdf2('secret', 'salt', 100000, 512, 'sha512', (err, derivedKey) => { - * if (err) throw err; - * console.log(derivedKey); // '3745e48...aa39b34' - * }); - * ``` - * - * An array of supported digest functions can be retrieved using {@link getHashes}. - * - * This API uses libuv's threadpool, which can have surprising and - * negative performance implications for some applications; see the `UV_THREADPOOL_SIZE` documentation for more information. - * @since v0.5.5 - */ - function pbkdf2(password: BinaryLike, salt: BinaryLike, iterations: number, keylen: number, digest: string, callback: (err: Error | null, derivedKey: Buffer) => void): void; - /** - * Provides a synchronous Password-Based Key Derivation Function 2 (PBKDF2) - * implementation. A selected HMAC digest algorithm specified by `digest` is - * applied to derive a key of the requested byte length (`keylen`) from the`password`, `salt` and `iterations`. - * - * If an error occurs an `Error` will be thrown, otherwise the derived key will be - * returned as a `Buffer`. - * - * If `digest` is `null`, `'sha1'` will be used. This behavior is deprecated, - * please specify a `digest` explicitly. - * - * The `iterations` argument must be a number set as high as possible. The - * higher the number of iterations, the more secure the derived key will be, - * but will take a longer amount of time to complete. - * - * The `salt` should be as unique as possible. It is recommended that a salt is - * random and at least 16 bytes long. See [NIST SP 800-132](https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-132.pdf) for details. - * - * When passing strings for `password` or `salt`, please consider `caveats when using strings as inputs to cryptographic APIs`. - * - * ```js - * const { - * pbkdf2Sync - * } = await import('crypto'); - * - * const key = pbkdf2Sync('secret', 'salt', 100000, 64, 'sha512'); - * console.log(key.toString('hex')); // '3745e48...08d59ae' - * ``` - * - * The `crypto.DEFAULT_ENCODING` property may be used to change the way the`derivedKey` is returned. This property, however, is deprecated and use - * should be avoided. - * - * ```js - * import crypto from 'crypto'; - * crypto.DEFAULT_ENCODING = 'hex'; - * const key = crypto.pbkdf2Sync('secret', 'salt', 100000, 512, 'sha512'); - * console.log(key); // '3745e48...aa39b34' - * ``` - * - * An array of supported digest functions can be retrieved using {@link getHashes}. - * @since v0.9.3 - */ - function pbkdf2Sync(password: BinaryLike, salt: BinaryLike, iterations: number, keylen: number, digest: string): Buffer; - /** - * Generates cryptographically strong pseudorandom data. The `size` argument - * is a number indicating the number of bytes to generate. - * - * If a `callback` function is provided, the bytes are generated asynchronously - * and the `callback` function is invoked with two arguments: `err` and `buf`. - * If an error occurs, `err` will be an `Error` object; otherwise it is `null`. 
The`buf` argument is a `Buffer` containing the generated bytes. - * - * ```js - * // Asynchronous - * const { - * randomBytes - * } = await import('crypto'); - * - * randomBytes(256, (err, buf) => { - * if (err) throw err; - * console.log(`${buf.length} bytes of random data: ${buf.toString('hex')}`); - * }); - * ``` - * - * If the `callback` function is not provided, the random bytes are generated - * synchronously and returned as a `Buffer`. An error will be thrown if - * there is a problem generating the bytes. - * - * ```js - * // Synchronous - * const { - * randomBytes - * } = await import('crypto'); - * - * const buf = randomBytes(256); - * console.log( - * `${buf.length} bytes of random data: ${buf.toString('hex')}`); - * ``` - * - * The `crypto.randomBytes()` method will not complete until there is - * sufficient entropy available. - * This should normally never take longer than a few milliseconds. The only time - * when generating the random bytes may conceivably block for a longer period of - * time is right after boot, when the whole system is still low on entropy. - * - * This API uses libuv's threadpool, which can have surprising and - * negative performance implications for some applications; see the `UV_THREADPOOL_SIZE` documentation for more information. - * - * The asynchronous version of `crypto.randomBytes()` is carried out in a single - * threadpool request. To minimize threadpool task length variation, partition - * large `randomBytes` requests when doing so as part of fulfilling a client - * request. - * @since v0.5.8 - * @param size The number of bytes to generate. The `size` must not be larger than `2**31 - 1`. - * @return if the `callback` function is not provided. - */ - function randomBytes(size: number): Buffer; - function randomBytes(size: number, callback: (err: Error | null, buf: Buffer) => void): void; - function pseudoRandomBytes(size: number): Buffer; - function pseudoRandomBytes(size: number, callback: (err: Error | null, buf: Buffer) => void): void; - /** - * Return a random integer `n` such that `min <= n < max`. This - * implementation avoids [modulo bias](https://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle#Modulo_bias). - * - * The range (`max - min`) must be less than 2^48. `min` and `max` must - * be [safe integers](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/isSafeInteger). - * - * If the `callback` function is not provided, the random integer is - * generated synchronously. - * - * ```js - * // Asynchronous - * const { - * randomInt - * } = await import('crypto'); - * - * randomInt(3, (err, n) => { - * if (err) throw err; - * console.log(`Random number chosen from (0, 1, 2): ${n}`); - * }); - * ``` - * - * ```js - * // Synchronous - * const { - * randomInt - * } = await import('crypto'); - * - * const n = randomInt(3); - * console.log(`Random number chosen from (0, 1, 2): ${n}`); - * ``` - * - * ```js - * // With `min` argument - * const { - * randomInt - * } = await import('crypto'); - * - * const n = randomInt(1, 7); - * console.log(`The dice rolled: ${n}`); - * ``` - * @since v14.10.0, v12.19.0 - * @param [min=0] Start of random range (inclusive). - * @param max End of random range (exclusive). - * @param callback `function(err, n) {}`. 
- */ - function randomInt(max: number): number; - function randomInt(min: number, max: number): number; - function randomInt(max: number, callback: (err: Error | null, value: number) => void): void; - function randomInt(min: number, max: number, callback: (err: Error | null, value: number) => void): void; - /** - * Synchronous version of {@link randomFill}. - * - * ```js - * import { Buffer } from 'buffer'; - * const { randomFillSync } = await import('crypto'); - * - * const buf = Buffer.alloc(10); - * console.log(randomFillSync(buf).toString('hex')); - * - * randomFillSync(buf, 5); - * console.log(buf.toString('hex')); - * - * // The above is equivalent to the following: - * randomFillSync(buf, 5, 5); - * console.log(buf.toString('hex')); - * ``` - * - * Any `ArrayBuffer`, `TypedArray` or `DataView` instance may be passed as`buffer`. - * - * ```js - * import { Buffer } from 'buffer'; - * const { randomFillSync } = await import('crypto'); - * - * const a = new Uint32Array(10); - * console.log(Buffer.from(randomFillSync(a).buffer, - * a.byteOffset, a.byteLength).toString('hex')); - * - * const b = new DataView(new ArrayBuffer(10)); - * console.log(Buffer.from(randomFillSync(b).buffer, - * b.byteOffset, b.byteLength).toString('hex')); - * - * const c = new ArrayBuffer(10); - * console.log(Buffer.from(randomFillSync(c)).toString('hex')); - * ``` - * @since v7.10.0, v6.13.0 - * @param buffer Must be supplied. The size of the provided `buffer` must not be larger than `2**31 - 1`. - * @param [offset=0] - * @param [size=buffer.length - offset] - * @return The object passed as `buffer` argument. - */ - function randomFillSync(buffer: T, offset?: number, size?: number): T; - /** - * This function is similar to {@link randomBytes} but requires the first - * argument to be a `Buffer` that will be filled. It also - * requires that a callback is passed in. - * - * If the `callback` function is not provided, an error will be thrown. - * - * ```js - * import { Buffer } from 'buffer'; - * const { randomFill } = await import('crypto'); - * - * const buf = Buffer.alloc(10); - * randomFill(buf, (err, buf) => { - * if (err) throw err; - * console.log(buf.toString('hex')); - * }); - * - * randomFill(buf, 5, (err, buf) => { - * if (err) throw err; - * console.log(buf.toString('hex')); - * }); - * - * // The above is equivalent to the following: - * randomFill(buf, 5, 5, (err, buf) => { - * if (err) throw err; - * console.log(buf.toString('hex')); - * }); - * ``` - * - * Any `ArrayBuffer`, `TypedArray`, or `DataView` instance may be passed as`buffer`. - * - * While this includes instances of `Float32Array` and `Float64Array`, this - * function should not be used to generate random floating-point numbers. The - * result may contain `+Infinity`, `-Infinity`, and `NaN`, and even if the array - * contains finite numbers only, they are not drawn from a uniform random - * distribution and have no meaningful lower or upper bounds. 
- * - * ```js - * import { Buffer } from 'buffer'; - * const { randomFill } = await import('crypto'); - * - * const a = new Uint32Array(10); - * randomFill(a, (err, buf) => { - * if (err) throw err; - * console.log(Buffer.from(buf.buffer, buf.byteOffset, buf.byteLength) - * .toString('hex')); - * }); - * - * const b = new DataView(new ArrayBuffer(10)); - * randomFill(b, (err, buf) => { - * if (err) throw err; - * console.log(Buffer.from(buf.buffer, buf.byteOffset, buf.byteLength) - * .toString('hex')); - * }); - * - * const c = new ArrayBuffer(10); - * randomFill(c, (err, buf) => { - * if (err) throw err; - * console.log(Buffer.from(buf).toString('hex')); - * }); - * ``` - * - * This API uses libuv's threadpool, which can have surprising and - * negative performance implications for some applications; see the `UV_THREADPOOL_SIZE` documentation for more information. - * - * The asynchronous version of `crypto.randomFill()` is carried out in a single - * threadpool request. To minimize threadpool task length variation, partition - * large `randomFill` requests when doing so as part of fulfilling a client - * request. - * @since v7.10.0, v6.13.0 - * @param buffer Must be supplied. The size of the provided `buffer` must not be larger than `2**31 - 1`. - * @param [offset=0] - * @param [size=buffer.length - offset] - * @param callback `function(err, buf) {}`. - */ - function randomFill(buffer: T, callback: (err: Error | null, buf: T) => void): void; - function randomFill(buffer: T, offset: number, callback: (err: Error | null, buf: T) => void): void; - function randomFill(buffer: T, offset: number, size: number, callback: (err: Error | null, buf: T) => void): void; - interface ScryptOptions { - cost?: number | undefined; - blockSize?: number | undefined; - parallelization?: number | undefined; - N?: number | undefined; - r?: number | undefined; - p?: number | undefined; - maxmem?: number | undefined; - } - /** - * Provides an asynchronous [scrypt](https://en.wikipedia.org/wiki/Scrypt) implementation. Scrypt is a password-based - * key derivation function that is designed to be expensive computationally and - * memory-wise in order to make brute-force attacks unrewarding. - * - * The `salt` should be as unique as possible. It is recommended that a salt is - * random and at least 16 bytes long. See [NIST SP 800-132](https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-132.pdf) for details. - * - * When passing strings for `password` or `salt`, please consider `caveats when using strings as inputs to cryptographic APIs`. - * - * The `callback` function is called with two arguments: `err` and `derivedKey`.`err` is an exception object when key derivation fails, otherwise `err` is`null`. `derivedKey` is passed to the - * callback as a `Buffer`. - * - * An exception is thrown when any of the input arguments specify invalid values - * or types. - * - * ```js - * const { - * scrypt - * } = await import('crypto'); - * - * // Using the factory defaults. - * scrypt('password', 'salt', 64, (err, derivedKey) => { - * if (err) throw err; - * console.log(derivedKey.toString('hex')); // '3745e48...08d59ae' - * }); - * // Using a custom N parameter. Must be a power of two. 
- * scrypt('password', 'salt', 64, { N: 1024 }, (err, derivedKey) => { - * if (err) throw err; - * console.log(derivedKey.toString('hex')); // '3745e48...aa39b34' - * }); - * ``` - * @since v10.5.0 - */ - function scrypt(password: BinaryLike, salt: BinaryLike, keylen: number, callback: (err: Error | null, derivedKey: Buffer) => void): void; - function scrypt(password: BinaryLike, salt: BinaryLike, keylen: number, options: ScryptOptions, callback: (err: Error | null, derivedKey: Buffer) => void): void; - /** - * Provides a synchronous [scrypt](https://en.wikipedia.org/wiki/Scrypt) implementation. Scrypt is a password-based - * key derivation function that is designed to be expensive computationally and - * memory-wise in order to make brute-force attacks unrewarding. - * - * The `salt` should be as unique as possible. It is recommended that a salt is - * random and at least 16 bytes long. See [NIST SP 800-132](https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-132.pdf) for details. - * - * When passing strings for `password` or `salt`, please consider `caveats when using strings as inputs to cryptographic APIs`. - * - * An exception is thrown when key derivation fails, otherwise the derived key is - * returned as a `Buffer`. - * - * An exception is thrown when any of the input arguments specify invalid values - * or types. - * - * ```js - * const { - * scryptSync - * } = await import('crypto'); - * // Using the factory defaults. - * - * const key1 = scryptSync('password', 'salt', 64); - * console.log(key1.toString('hex')); // '3745e48...08d59ae' - * // Using a custom N parameter. Must be a power of two. - * const key2 = scryptSync('password', 'salt', 64, { N: 1024 }); - * console.log(key2.toString('hex')); // '3745e48...aa39b34' - * ``` - * @since v10.5.0 - */ - function scryptSync(password: BinaryLike, salt: BinaryLike, keylen: number, options?: ScryptOptions): Buffer; - interface RsaPublicKey { - key: KeyLike; - padding?: number | undefined; - } - interface RsaPrivateKey { - key: KeyLike; - passphrase?: string | undefined; - /** - * @default 'sha1' - */ - oaepHash?: string | undefined; - oaepLabel?: NodeJS.TypedArray | undefined; - padding?: number | undefined; - } - /** - * Encrypts the content of `buffer` with `key` and returns a new `Buffer` with encrypted content. The returned data can be decrypted using - * the corresponding private key, for example using {@link privateDecrypt}. - * - * If `key` is not a `KeyObject`, this function behaves as if`key` had been passed to {@link createPublicKey}. If it is an - * object, the `padding` property can be passed. Otherwise, this function uses`RSA_PKCS1_OAEP_PADDING`. - * - * Because RSA public keys can be derived from private keys, a private key may - * be passed instead of a public key. - * @since v0.11.14 - */ - function publicEncrypt(key: RsaPublicKey | RsaPrivateKey | KeyLike, buffer: NodeJS.ArrayBufferView): Buffer; - /** - * Decrypts `buffer` with `key`.`buffer` was previously encrypted using - * the corresponding private key, for example using {@link privateEncrypt}. - * - * If `key` is not a `KeyObject`, this function behaves as if`key` had been passed to {@link createPublicKey}. If it is an - * object, the `padding` property can be passed. Otherwise, this function uses`RSA_PKCS1_PADDING`. - * - * Because RSA public keys can be derived from private keys, a private key may - * be passed instead of a public key. 
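A minimal RSA-OAEP round trip with `publicEncrypt` / `privateDecrypt`; the 2048-bit modulus and `oaepHash: 'sha256'` are illustrative assumptions, not requirements:

```js
const { generateKeyPairSync, publicEncrypt, privateDecrypt, constants } = require('node:crypto');

const { publicKey, privateKey } = generateKeyPairSync('rsa', { modulusLength: 2048 });

const encrypted = publicEncrypt(
  { key: publicKey, padding: constants.RSA_PKCS1_OAEP_PADDING, oaepHash: 'sha256' },
  Buffer.from('a small secret')
);

const decrypted = privateDecrypt(
  { key: privateKey, padding: constants.RSA_PKCS1_OAEP_PADDING, oaepHash: 'sha256' },
  encrypted
);
console.log(decrypted.toString('utf8')); // a small secret
```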
- * @since v1.1.0 - */ - function publicDecrypt(key: RsaPublicKey | RsaPrivateKey | KeyLike, buffer: NodeJS.ArrayBufferView): Buffer; - /** - * Decrypts `buffer` with `privateKey`. `buffer` was previously encrypted using - * the corresponding public key, for example using {@link publicEncrypt}. - * - * If `privateKey` is not a `KeyObject`, this function behaves as if`privateKey` had been passed to {@link createPrivateKey}. If it is an - * object, the `padding` property can be passed. Otherwise, this function uses`RSA_PKCS1_OAEP_PADDING`. - * @since v0.11.14 - */ - function privateDecrypt(privateKey: RsaPrivateKey | KeyLike, buffer: NodeJS.ArrayBufferView): Buffer; - /** - * Encrypts `buffer` with `privateKey`. The returned data can be decrypted using - * the corresponding public key, for example using {@link publicDecrypt}. - * - * If `privateKey` is not a `KeyObject`, this function behaves as if`privateKey` had been passed to {@link createPrivateKey}. If it is an - * object, the `padding` property can be passed. Otherwise, this function uses`RSA_PKCS1_PADDING`. - * @since v1.1.0 - */ - function privateEncrypt(privateKey: RsaPrivateKey | KeyLike, buffer: NodeJS.ArrayBufferView): Buffer; - /** - * ```js - * const { - * getCiphers - * } = await import('crypto'); - * - * console.log(getCiphers()); // ['aes-128-cbc', 'aes-128-ccm', ...] - * ``` - * @since v0.9.3 - * @return An array with the names of the supported cipher algorithms. - */ - function getCiphers(): string[]; - /** - * ```js - * const { - * getCurves - * } = await import('crypto'); - * - * console.log(getCurves()); // ['Oakley-EC2N-3', 'Oakley-EC2N-4', ...] - * ``` - * @since v2.3.0 - * @return An array with the names of the supported elliptic curves. - */ - function getCurves(): string[]; - /** - * @since v10.0.0 - * @return `1` if and only if a FIPS compliant crypto provider is currently in use, `0` otherwise. A future semver-major release may change the return type of this API to a {boolean}. - */ - function getFips(): 1 | 0; - /** - * Enables the FIPS compliant crypto provider in a FIPS-enabled Node.js build. Throws an error if FIPS mode is not available. - * @since v10.0.0 - * @param bool `true` to enable FIPS mode. - */ - function setFips(bool: boolean): void; - /** - * ```js - * const { - * getHashes - * } = await import('crypto'); - * - * console.log(getHashes()); // ['DSA', 'DSA-SHA', 'DSA-SHA1', ...] - * ``` - * @since v0.9.3 - * @return An array of the names of the supported hash algorithms, such as `'RSA-SHA256'`. Hash algorithms are also called "digest" algorithms. - */ - function getHashes(): string[]; - /** - * The `ECDH` class is a utility for creating Elliptic Curve Diffie-Hellman (ECDH) - * key exchanges. - * - * Instances of the `ECDH` class can be created using the {@link createECDH} function. - * - * ```js - * import assert from 'assert'; - * - * const { - * createECDH - * } = await import('crypto'); - * - * // Generate Alice's keys... - * const alice = createECDH('secp521r1'); - * const aliceKey = alice.generateKeys(); - * - * // Generate Bob's keys... - * const bob = createECDH('secp521r1'); - * const bobKey = bob.generateKeys(); - * - * // Exchange and generate the secret... 
- * const aliceSecret = alice.computeSecret(bobKey); - * const bobSecret = bob.computeSecret(aliceKey); - * - * assert.strictEqual(aliceSecret.toString('hex'), bobSecret.toString('hex')); - * // OK - * ``` - * @since v0.11.14 - */ - class ECDH { - private constructor(); - /** - * Converts the EC Diffie-Hellman public key specified by `key` and `curve` to the - * format specified by `format`. The `format` argument specifies point encoding - * and can be `'compressed'`, `'uncompressed'` or `'hybrid'`. The supplied key is - * interpreted using the specified `inputEncoding`, and the returned key is encoded - * using the specified `outputEncoding`. - * - * Use {@link getCurves} to obtain a list of available curve names. - * On recent OpenSSL releases, `openssl ecparam -list_curves` will also display - * the name and description of each available elliptic curve. - * - * If `format` is not specified the point will be returned in `'uncompressed'`format. - * - * If the `inputEncoding` is not provided, `key` is expected to be a `Buffer`,`TypedArray`, or `DataView`. - * - * Example (uncompressing a key): - * - * ```js - * const { - * createECDH, - * ECDH - * } = await import('crypto'); - * - * const ecdh = createECDH('secp256k1'); - * ecdh.generateKeys(); - * - * const compressedKey = ecdh.getPublicKey('hex', 'compressed'); - * - * const uncompressedKey = ECDH.convertKey(compressedKey, - * 'secp256k1', - * 'hex', - * 'hex', - * 'uncompressed'); - * - * // The converted key and the uncompressed public key should be the same - * console.log(uncompressedKey === ecdh.getPublicKey('hex')); - * ``` - * @since v10.0.0 - * @param inputEncoding The `encoding` of the `key` string. - * @param outputEncoding The `encoding` of the return value. - * @param [format='uncompressed'] - */ - static convertKey( - key: BinaryLike, - curve: string, - inputEncoding?: BinaryToTextEncoding, - outputEncoding?: 'latin1' | 'hex' | 'base64' | 'base64url', - format?: 'uncompressed' | 'compressed' | 'hybrid' - ): Buffer | string; - /** - * Generates private and public EC Diffie-Hellman key values, and returns - * the public key in the specified `format` and `encoding`. This key should be - * transferred to the other party. - * - * The `format` argument specifies point encoding and can be `'compressed'` or`'uncompressed'`. If `format` is not specified, the point will be returned in`'uncompressed'` format. - * - * If `encoding` is provided a string is returned; otherwise a `Buffer` is returned. - * @since v0.11.14 - * @param encoding The `encoding` of the return value. - * @param [format='uncompressed'] - */ - generateKeys(): Buffer; - generateKeys(encoding: BinaryToTextEncoding, format?: ECDHKeyFormat): string; - /** - * Computes the shared secret using `otherPublicKey` as the other - * party's public key and returns the computed shared secret. The supplied - * key is interpreted using specified `inputEncoding`, and the returned secret - * is encoded using the specified `outputEncoding`. - * If the `inputEncoding` is not - * provided, `otherPublicKey` is expected to be a `Buffer`, `TypedArray`, or`DataView`. - * - * If `outputEncoding` is given a string will be returned; otherwise a `Buffer` is returned. - * - * `ecdh.computeSecret` will throw an`ERR_CRYPTO_ECDH_INVALID_PUBLIC_KEY` error when `otherPublicKey`lies outside of the elliptic curve. Since `otherPublicKey` is - * usually supplied from a remote user over an insecure network, - * be sure to handle this exception accordingly. 
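A sketch of the exception handling suggested for `ecdh.computeSecret()`; the all-zero peer key is a deliberately invalid point encoding, and the exact error code (typically `ERR_CRYPTO_ECDH_INVALID_PUBLIC_KEY`) may vary with the Node.js/OpenSSL version:

```js
const { createECDH } = require('node:crypto');

const ecdh = createECDH('secp256k1');
ecdh.generateKeys();

try {
  // Bytes received from a peer; an all-zero buffer is not a valid point encoding.
  const bogusPeerKey = Buffer.alloc(65);
  const secret = ecdh.computeSecret(bogusPeerKey);
  console.log('shared secret:', secret.toString('hex'));
} catch (err) {
  // Reject the handshake instead of crashing the process.
  console.error('rejected peer key:', err.message);
}
```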
- * @since v0.11.14 - * @param inputEncoding The `encoding` of the `otherPublicKey` string. - * @param outputEncoding The `encoding` of the return value. - */ - computeSecret(otherPublicKey: NodeJS.ArrayBufferView): Buffer; - computeSecret(otherPublicKey: string, inputEncoding: BinaryToTextEncoding): Buffer; - computeSecret(otherPublicKey: NodeJS.ArrayBufferView, outputEncoding: BinaryToTextEncoding): string; - computeSecret(otherPublicKey: string, inputEncoding: BinaryToTextEncoding, outputEncoding: BinaryToTextEncoding): string; - /** - * If `encoding` is specified, a string is returned; otherwise a `Buffer` is - * returned. - * @since v0.11.14 - * @param encoding The `encoding` of the return value. - * @return The EC Diffie-Hellman in the specified `encoding`. - */ - getPrivateKey(): Buffer; - getPrivateKey(encoding: BinaryToTextEncoding): string; - /** - * The `format` argument specifies point encoding and can be `'compressed'` or`'uncompressed'`. If `format` is not specified the point will be returned in`'uncompressed'` format. - * - * If `encoding` is specified, a string is returned; otherwise a `Buffer` is - * returned. - * @since v0.11.14 - * @param [encoding] The `encoding` of the return value. - * @param [format='uncompressed'] - * @return The EC Diffie-Hellman public key in the specified `encoding` and `format`. - */ - getPublicKey(encoding?: null, format?: ECDHKeyFormat): Buffer; - getPublicKey(encoding: BinaryToTextEncoding, format?: ECDHKeyFormat): string; - /** - * Sets the EC Diffie-Hellman private key. - * If `encoding` is provided, `privateKey` is expected - * to be a string; otherwise `privateKey` is expected to be a `Buffer`,`TypedArray`, or `DataView`. - * - * If `privateKey` is not valid for the curve specified when the `ECDH` object was - * created, an error is thrown. Upon setting the private key, the associated - * public point (key) is also generated and set in the `ECDH` object. - * @since v0.11.14 - * @param encoding The `encoding` of the `privateKey` string. - */ - setPrivateKey(privateKey: NodeJS.ArrayBufferView): void; - setPrivateKey(privateKey: string, encoding: BinaryToTextEncoding): void; - } - /** - * Creates an Elliptic Curve Diffie-Hellman (`ECDH`) key exchange object using a - * predefined curve specified by the `curveName` string. Use {@link getCurves} to obtain a list of available curve names. On recent - * OpenSSL releases, `openssl ecparam -list_curves` will also display the name - * and description of each available elliptic curve. - * @since v0.11.14 - */ - function createECDH(curveName: string): ECDH; - /** - * This function is based on a constant-time algorithm. - * Returns true if `a` is equal to `b`, without leaking timing information that - * would allow an attacker to guess one of the values. This is suitable for - * comparing HMAC digests or secret values like authentication cookies or [capability urls](https://www.w3.org/TR/capability-urls/). - * - * `a` and `b` must both be `Buffer`s, `TypedArray`s, or `DataView`s, and they - * must have the same byte length. An error is thrown if `a` and `b` have - * different byte lengths. - * - * If at least one of `a` and `b` is a `TypedArray` with more than one byte per - * entry, such as `Uint16Array`, the result will be computed using the platform - * byte order. - * - * Use of `crypto.timingSafeEqual` does not guarantee that the _surrounding_ code - * is timing-safe. Care should be taken to ensure that the surrounding code does - * not introduce timing vulnerabilities. 
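A typical `timingSafeEqual` use, comparing HMAC digests in constant time; the `verifyHmac` helper name is illustrative:

```js
const { createHmac, timingSafeEqual, randomBytes } = require('node:crypto');

// Illustrative helper: constant-time check of a hex-encoded HMAC-SHA256 tag.
function verifyHmac(key, message, expectedHex) {
  const actual = createHmac('sha256', key).update(message).digest();
  const expected = Buffer.from(expectedHex, 'hex');
  // timingSafeEqual throws on length mismatch, so compare lengths first.
  return actual.length === expected.length && timingSafeEqual(actual, expected);
}

const key = randomBytes(32);
const tag = createHmac('sha256', key).update('payload').digest('hex');
console.log(verifyHmac(key, 'payload', tag));  // true
console.log(verifyHmac(key, 'tampered', tag)); // false
```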
- * @since v6.6.0 - */ - function timingSafeEqual(a: NodeJS.ArrayBufferView, b: NodeJS.ArrayBufferView): boolean; - /** @deprecated since v10.0.0 */ - const DEFAULT_ENCODING: BufferEncoding; - type KeyType = 'rsa' | 'rsa-pss' | 'dsa' | 'ec' | 'ed25519' | 'ed448' | 'x25519' | 'x448'; - type KeyFormat = 'pem' | 'der' | 'jwk'; - interface BasePrivateKeyEncodingOptions { - format: T; - cipher?: string | undefined; - passphrase?: string | undefined; - } - interface KeyPairKeyObjectResult { - publicKey: KeyObject; - privateKey: KeyObject; - } - interface ED25519KeyPairKeyObjectOptions {} - interface ED448KeyPairKeyObjectOptions {} - interface X25519KeyPairKeyObjectOptions {} - interface X448KeyPairKeyObjectOptions {} - interface ECKeyPairKeyObjectOptions { - /** - * Name of the curve to use - */ - namedCurve: string; - } - interface RSAKeyPairKeyObjectOptions { - /** - * Key size in bits - */ - modulusLength: number; - /** - * Public exponent - * @default 0x10001 - */ - publicExponent?: number | undefined; - } - interface RSAPSSKeyPairKeyObjectOptions { - /** - * Key size in bits - */ - modulusLength: number; - /** - * Public exponent - * @default 0x10001 - */ - publicExponent?: number | undefined; - /** - * Name of the message digest - */ - hashAlgorithm?: string; - /** - * Name of the message digest used by MGF1 - */ - mgf1HashAlgorithm?: string; - /** - * Minimal salt length in bytes - */ - saltLength?: string; - } - interface DSAKeyPairKeyObjectOptions { - /** - * Key size in bits - */ - modulusLength: number; - /** - * Size of q in bits - */ - divisorLength: number; - } - interface RSAKeyPairOptions { - /** - * Key size in bits - */ - modulusLength: number; - /** - * Public exponent - * @default 0x10001 - */ - publicExponent?: number | undefined; - publicKeyEncoding: { - type: 'pkcs1' | 'spki'; - format: PubF; - }; - privateKeyEncoding: BasePrivateKeyEncodingOptions & { - type: 'pkcs1' | 'pkcs8'; - }; - } - interface RSAPSSKeyPairOptions { - /** - * Key size in bits - */ - modulusLength: number; - /** - * Public exponent - * @default 0x10001 - */ - publicExponent?: number | undefined; - /** - * Name of the message digest - */ - hashAlgorithm?: string; - /** - * Name of the message digest used by MGF1 - */ - mgf1HashAlgorithm?: string; - /** - * Minimal salt length in bytes - */ - saltLength?: string; - publicKeyEncoding: { - type: 'spki'; - format: PubF; - }; - privateKeyEncoding: BasePrivateKeyEncodingOptions & { - type: 'pkcs8'; - }; - } - interface DSAKeyPairOptions { - /** - * Key size in bits - */ - modulusLength: number; - /** - * Size of q in bits - */ - divisorLength: number; - publicKeyEncoding: { - type: 'spki'; - format: PubF; - }; - privateKeyEncoding: BasePrivateKeyEncodingOptions & { - type: 'pkcs8'; - }; - } - interface ECKeyPairOptions { - /** - * Name of the curve to use. 
- */ - namedCurve: string; - publicKeyEncoding: { - type: 'pkcs1' | 'spki'; - format: PubF; - }; - privateKeyEncoding: BasePrivateKeyEncodingOptions & { - type: 'sec1' | 'pkcs8'; - }; - } - interface ED25519KeyPairOptions { - publicKeyEncoding: { - type: 'spki'; - format: PubF; - }; - privateKeyEncoding: BasePrivateKeyEncodingOptions & { - type: 'pkcs8'; - }; - } - interface ED448KeyPairOptions { - publicKeyEncoding: { - type: 'spki'; - format: PubF; - }; - privateKeyEncoding: BasePrivateKeyEncodingOptions & { - type: 'pkcs8'; - }; - } - interface X25519KeyPairOptions { - publicKeyEncoding: { - type: 'spki'; - format: PubF; - }; - privateKeyEncoding: BasePrivateKeyEncodingOptions & { - type: 'pkcs8'; - }; - } - interface X448KeyPairOptions { - publicKeyEncoding: { - type: 'spki'; - format: PubF; - }; - privateKeyEncoding: BasePrivateKeyEncodingOptions & { - type: 'pkcs8'; - }; - } - interface KeyPairSyncResult { - publicKey: T1; - privateKey: T2; - } - /** - * Generates a new asymmetric key pair of the given `type`. RSA, RSA-PSS, DSA, EC, - * Ed25519, Ed448, X25519, X448, and DH are currently supported. - * - * If a `publicKeyEncoding` or `privateKeyEncoding` was specified, this function - * behaves as if `keyObject.export()` had been called on its result. Otherwise, - * the respective part of the key is returned as a `KeyObject`. - * - * When encoding public keys, it is recommended to use `'spki'`. When encoding - * private keys, it is recommended to use `'pkcs8'` with a strong passphrase, - * and to keep the passphrase confidential. - * - * ```js - * const { - * generateKeyPairSync - * } = await import('crypto'); - * - * const { - * publicKey, - * privateKey, - * } = generateKeyPairSync('rsa', { - * modulusLength: 4096, - * publicKeyEncoding: { - * type: 'spki', - * format: 'pem' - * }, - * privateKeyEncoding: { - * type: 'pkcs8', - * format: 'pem', - * cipher: 'aes-256-cbc', - * passphrase: 'top secret' - * } - * }); - * ``` - * - * The return value `{ publicKey, privateKey }` represents the generated key pair. - * When PEM encoding was selected, the respective key will be a string, otherwise - * it will be a buffer containing the data encoded as DER. - * @since v10.12.0 - * @param type Must be `'rsa'`, `'rsa-pss'`, `'dsa'`, `'ec'`, `'ed25519'`, `'ed448'`, `'x25519'`, `'x448'`, or `'dh'`. 
- */ - function generateKeyPairSync(type: 'rsa', options: RSAKeyPairOptions<'pem', 'pem'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'rsa', options: RSAKeyPairOptions<'pem', 'der'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'rsa', options: RSAKeyPairOptions<'der', 'pem'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'rsa', options: RSAKeyPairOptions<'der', 'der'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'rsa', options: RSAKeyPairKeyObjectOptions): KeyPairKeyObjectResult; - function generateKeyPairSync(type: 'rsa-pss', options: RSAPSSKeyPairOptions<'pem', 'pem'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'rsa-pss', options: RSAPSSKeyPairOptions<'pem', 'der'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'rsa-pss', options: RSAPSSKeyPairOptions<'der', 'pem'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'rsa-pss', options: RSAPSSKeyPairOptions<'der', 'der'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'rsa-pss', options: RSAPSSKeyPairKeyObjectOptions): KeyPairKeyObjectResult; - function generateKeyPairSync(type: 'dsa', options: DSAKeyPairOptions<'pem', 'pem'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'dsa', options: DSAKeyPairOptions<'pem', 'der'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'dsa', options: DSAKeyPairOptions<'der', 'pem'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'dsa', options: DSAKeyPairOptions<'der', 'der'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'dsa', options: DSAKeyPairKeyObjectOptions): KeyPairKeyObjectResult; - function generateKeyPairSync(type: 'ec', options: ECKeyPairOptions<'pem', 'pem'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'ec', options: ECKeyPairOptions<'pem', 'der'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'ec', options: ECKeyPairOptions<'der', 'pem'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'ec', options: ECKeyPairOptions<'der', 'der'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'ec', options: ECKeyPairKeyObjectOptions): KeyPairKeyObjectResult; - function generateKeyPairSync(type: 'ed25519', options: ED25519KeyPairOptions<'pem', 'pem'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'ed25519', options: ED25519KeyPairOptions<'pem', 'der'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'ed25519', options: ED25519KeyPairOptions<'der', 'pem'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'ed25519', options: ED25519KeyPairOptions<'der', 'der'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'ed25519', options?: ED25519KeyPairKeyObjectOptions): KeyPairKeyObjectResult; - function generateKeyPairSync(type: 'ed448', options: ED448KeyPairOptions<'pem', 'pem'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'ed448', options: ED448KeyPairOptions<'pem', 'der'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'ed448', options: ED448KeyPairOptions<'der', 'pem'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'ed448', options: ED448KeyPairOptions<'der', 'der'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'ed448', options?: ED448KeyPairKeyObjectOptions): KeyPairKeyObjectResult; - function generateKeyPairSync(type: 'x25519', options: X25519KeyPairOptions<'pem', 'pem'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'x25519', options: X25519KeyPairOptions<'pem', 'der'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'x25519', options: 
X25519KeyPairOptions<'der', 'pem'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'x25519', options: X25519KeyPairOptions<'der', 'der'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'x25519', options?: X25519KeyPairKeyObjectOptions): KeyPairKeyObjectResult; - function generateKeyPairSync(type: 'x448', options: X448KeyPairOptions<'pem', 'pem'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'x448', options: X448KeyPairOptions<'pem', 'der'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'x448', options: X448KeyPairOptions<'der', 'pem'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'x448', options: X448KeyPairOptions<'der', 'der'>): KeyPairSyncResult; - function generateKeyPairSync(type: 'x448', options?: X448KeyPairKeyObjectOptions): KeyPairKeyObjectResult; - /** - * Generates a new asymmetric key pair of the given `type`. RSA, RSA-PSS, DSA, EC, - * Ed25519, Ed448, X25519, X448, and DH are currently supported. - * - * If a `publicKeyEncoding` or `privateKeyEncoding` was specified, this function - * behaves as if `keyObject.export()` had been called on its result. Otherwise, - * the respective part of the key is returned as a `KeyObject`. - * - * It is recommended to encode public keys as `'spki'` and private keys as`'pkcs8'` with encryption for long-term storage: - * - * ```js - * const { - * generateKeyPair - * } = await import('crypto'); - * - * generateKeyPair('rsa', { - * modulusLength: 4096, - * publicKeyEncoding: { - * type: 'spki', - * format: 'pem' - * }, - * privateKeyEncoding: { - * type: 'pkcs8', - * format: 'pem', - * cipher: 'aes-256-cbc', - * passphrase: 'top secret' - * } - * }, (err, publicKey, privateKey) => { - * // Handle errors and use the generated key pair. - * }); - * ``` - * - * On completion, `callback` will be called with `err` set to `undefined` and`publicKey` / `privateKey` representing the generated key pair. - * - * If this method is invoked as its `util.promisify()` ed version, it returns - * a `Promise` for an `Object` with `publicKey` and `privateKey` properties. - * @since v10.12.0 - * @param type Must be `'rsa'`, `'rsa-pss'`, `'dsa'`, `'ec'`, `'ed25519'`, `'ed448'`, `'x25519'`, `'x448'`, or `'dh'`. 
- */ - function generateKeyPair(type: 'rsa', options: RSAKeyPairOptions<'pem', 'pem'>, callback: (err: Error | null, publicKey: string, privateKey: string) => void): void; - function generateKeyPair(type: 'rsa', options: RSAKeyPairOptions<'pem', 'der'>, callback: (err: Error | null, publicKey: string, privateKey: Buffer) => void): void; - function generateKeyPair(type: 'rsa', options: RSAKeyPairOptions<'der', 'pem'>, callback: (err: Error | null, publicKey: Buffer, privateKey: string) => void): void; - function generateKeyPair(type: 'rsa', options: RSAKeyPairOptions<'der', 'der'>, callback: (err: Error | null, publicKey: Buffer, privateKey: Buffer) => void): void; - function generateKeyPair(type: 'rsa', options: RSAKeyPairKeyObjectOptions, callback: (err: Error | null, publicKey: KeyObject, privateKey: KeyObject) => void): void; - function generateKeyPair(type: 'rsa-pss', options: RSAPSSKeyPairOptions<'pem', 'pem'>, callback: (err: Error | null, publicKey: string, privateKey: string) => void): void; - function generateKeyPair(type: 'rsa-pss', options: RSAPSSKeyPairOptions<'pem', 'der'>, callback: (err: Error | null, publicKey: string, privateKey: Buffer) => void): void; - function generateKeyPair(type: 'rsa-pss', options: RSAPSSKeyPairOptions<'der', 'pem'>, callback: (err: Error | null, publicKey: Buffer, privateKey: string) => void): void; - function generateKeyPair(type: 'rsa-pss', options: RSAPSSKeyPairOptions<'der', 'der'>, callback: (err: Error | null, publicKey: Buffer, privateKey: Buffer) => void): void; - function generateKeyPair(type: 'rsa-pss', options: RSAPSSKeyPairKeyObjectOptions, callback: (err: Error | null, publicKey: KeyObject, privateKey: KeyObject) => void): void; - function generateKeyPair(type: 'dsa', options: DSAKeyPairOptions<'pem', 'pem'>, callback: (err: Error | null, publicKey: string, privateKey: string) => void): void; - function generateKeyPair(type: 'dsa', options: DSAKeyPairOptions<'pem', 'der'>, callback: (err: Error | null, publicKey: string, privateKey: Buffer) => void): void; - function generateKeyPair(type: 'dsa', options: DSAKeyPairOptions<'der', 'pem'>, callback: (err: Error | null, publicKey: Buffer, privateKey: string) => void): void; - function generateKeyPair(type: 'dsa', options: DSAKeyPairOptions<'der', 'der'>, callback: (err: Error | null, publicKey: Buffer, privateKey: Buffer) => void): void; - function generateKeyPair(type: 'dsa', options: DSAKeyPairKeyObjectOptions, callback: (err: Error | null, publicKey: KeyObject, privateKey: KeyObject) => void): void; - function generateKeyPair(type: 'ec', options: ECKeyPairOptions<'pem', 'pem'>, callback: (err: Error | null, publicKey: string, privateKey: string) => void): void; - function generateKeyPair(type: 'ec', options: ECKeyPairOptions<'pem', 'der'>, callback: (err: Error | null, publicKey: string, privateKey: Buffer) => void): void; - function generateKeyPair(type: 'ec', options: ECKeyPairOptions<'der', 'pem'>, callback: (err: Error | null, publicKey: Buffer, privateKey: string) => void): void; - function generateKeyPair(type: 'ec', options: ECKeyPairOptions<'der', 'der'>, callback: (err: Error | null, publicKey: Buffer, privateKey: Buffer) => void): void; - function generateKeyPair(type: 'ec', options: ECKeyPairKeyObjectOptions, callback: (err: Error | null, publicKey: KeyObject, privateKey: KeyObject) => void): void; - function generateKeyPair(type: 'ed25519', options: ED25519KeyPairOptions<'pem', 'pem'>, callback: (err: Error | null, publicKey: string, privateKey: string) => void): void; - 
function generateKeyPair(type: 'ed25519', options: ED25519KeyPairOptions<'pem', 'der'>, callback: (err: Error | null, publicKey: string, privateKey: Buffer) => void): void; - function generateKeyPair(type: 'ed25519', options: ED25519KeyPairOptions<'der', 'pem'>, callback: (err: Error | null, publicKey: Buffer, privateKey: string) => void): void; - function generateKeyPair(type: 'ed25519', options: ED25519KeyPairOptions<'der', 'der'>, callback: (err: Error | null, publicKey: Buffer, privateKey: Buffer) => void): void; - function generateKeyPair(type: 'ed25519', options: ED25519KeyPairKeyObjectOptions | undefined, callback: (err: Error | null, publicKey: KeyObject, privateKey: KeyObject) => void): void; - function generateKeyPair(type: 'ed448', options: ED448KeyPairOptions<'pem', 'pem'>, callback: (err: Error | null, publicKey: string, privateKey: string) => void): void; - function generateKeyPair(type: 'ed448', options: ED448KeyPairOptions<'pem', 'der'>, callback: (err: Error | null, publicKey: string, privateKey: Buffer) => void): void; - function generateKeyPair(type: 'ed448', options: ED448KeyPairOptions<'der', 'pem'>, callback: (err: Error | null, publicKey: Buffer, privateKey: string) => void): void; - function generateKeyPair(type: 'ed448', options: ED448KeyPairOptions<'der', 'der'>, callback: (err: Error | null, publicKey: Buffer, privateKey: Buffer) => void): void; - function generateKeyPair(type: 'ed448', options: ED448KeyPairKeyObjectOptions | undefined, callback: (err: Error | null, publicKey: KeyObject, privateKey: KeyObject) => void): void; - function generateKeyPair(type: 'x25519', options: X25519KeyPairOptions<'pem', 'pem'>, callback: (err: Error | null, publicKey: string, privateKey: string) => void): void; - function generateKeyPair(type: 'x25519', options: X25519KeyPairOptions<'pem', 'der'>, callback: (err: Error | null, publicKey: string, privateKey: Buffer) => void): void; - function generateKeyPair(type: 'x25519', options: X25519KeyPairOptions<'der', 'pem'>, callback: (err: Error | null, publicKey: Buffer, privateKey: string) => void): void; - function generateKeyPair(type: 'x25519', options: X25519KeyPairOptions<'der', 'der'>, callback: (err: Error | null, publicKey: Buffer, privateKey: Buffer) => void): void; - function generateKeyPair(type: 'x25519', options: X25519KeyPairKeyObjectOptions | undefined, callback: (err: Error | null, publicKey: KeyObject, privateKey: KeyObject) => void): void; - function generateKeyPair(type: 'x448', options: X448KeyPairOptions<'pem', 'pem'>, callback: (err: Error | null, publicKey: string, privateKey: string) => void): void; - function generateKeyPair(type: 'x448', options: X448KeyPairOptions<'pem', 'der'>, callback: (err: Error | null, publicKey: string, privateKey: Buffer) => void): void; - function generateKeyPair(type: 'x448', options: X448KeyPairOptions<'der', 'pem'>, callback: (err: Error | null, publicKey: Buffer, privateKey: string) => void): void; - function generateKeyPair(type: 'x448', options: X448KeyPairOptions<'der', 'der'>, callback: (err: Error | null, publicKey: Buffer, privateKey: Buffer) => void): void; - function generateKeyPair(type: 'x448', options: X448KeyPairKeyObjectOptions | undefined, callback: (err: Error | null, publicKey: KeyObject, privateKey: KeyObject) => void): void; - namespace generateKeyPair { - function __promisify__( - type: 'rsa', - options: RSAKeyPairOptions<'pem', 'pem'> - ): Promise<{ - publicKey: string; - privateKey: string; - }>; - function __promisify__( - type: 'rsa', - options: 
RSAKeyPairOptions<'pem', 'der'> - ): Promise<{ - publicKey: string; - privateKey: Buffer; - }>; - function __promisify__( - type: 'rsa', - options: RSAKeyPairOptions<'der', 'pem'> - ): Promise<{ - publicKey: Buffer; - privateKey: string; - }>; - function __promisify__( - type: 'rsa', - options: RSAKeyPairOptions<'der', 'der'> - ): Promise<{ - publicKey: Buffer; - privateKey: Buffer; - }>; - function __promisify__(type: 'rsa', options: RSAKeyPairKeyObjectOptions): Promise; - function __promisify__( - type: 'rsa-pss', - options: RSAPSSKeyPairOptions<'pem', 'pem'> - ): Promise<{ - publicKey: string; - privateKey: string; - }>; - function __promisify__( - type: 'rsa-pss', - options: RSAPSSKeyPairOptions<'pem', 'der'> - ): Promise<{ - publicKey: string; - privateKey: Buffer; - }>; - function __promisify__( - type: 'rsa-pss', - options: RSAPSSKeyPairOptions<'der', 'pem'> - ): Promise<{ - publicKey: Buffer; - privateKey: string; - }>; - function __promisify__( - type: 'rsa-pss', - options: RSAPSSKeyPairOptions<'der', 'der'> - ): Promise<{ - publicKey: Buffer; - privateKey: Buffer; - }>; - function __promisify__(type: 'rsa-pss', options: RSAPSSKeyPairKeyObjectOptions): Promise; - function __promisify__( - type: 'dsa', - options: DSAKeyPairOptions<'pem', 'pem'> - ): Promise<{ - publicKey: string; - privateKey: string; - }>; - function __promisify__( - type: 'dsa', - options: DSAKeyPairOptions<'pem', 'der'> - ): Promise<{ - publicKey: string; - privateKey: Buffer; - }>; - function __promisify__( - type: 'dsa', - options: DSAKeyPairOptions<'der', 'pem'> - ): Promise<{ - publicKey: Buffer; - privateKey: string; - }>; - function __promisify__( - type: 'dsa', - options: DSAKeyPairOptions<'der', 'der'> - ): Promise<{ - publicKey: Buffer; - privateKey: Buffer; - }>; - function __promisify__(type: 'dsa', options: DSAKeyPairKeyObjectOptions): Promise; - function __promisify__( - type: 'ec', - options: ECKeyPairOptions<'pem', 'pem'> - ): Promise<{ - publicKey: string; - privateKey: string; - }>; - function __promisify__( - type: 'ec', - options: ECKeyPairOptions<'pem', 'der'> - ): Promise<{ - publicKey: string; - privateKey: Buffer; - }>; - function __promisify__( - type: 'ec', - options: ECKeyPairOptions<'der', 'pem'> - ): Promise<{ - publicKey: Buffer; - privateKey: string; - }>; - function __promisify__( - type: 'ec', - options: ECKeyPairOptions<'der', 'der'> - ): Promise<{ - publicKey: Buffer; - privateKey: Buffer; - }>; - function __promisify__(type: 'ec', options: ECKeyPairKeyObjectOptions): Promise; - function __promisify__( - type: 'ed25519', - options: ED25519KeyPairOptions<'pem', 'pem'> - ): Promise<{ - publicKey: string; - privateKey: string; - }>; - function __promisify__( - type: 'ed25519', - options: ED25519KeyPairOptions<'pem', 'der'> - ): Promise<{ - publicKey: string; - privateKey: Buffer; - }>; - function __promisify__( - type: 'ed25519', - options: ED25519KeyPairOptions<'der', 'pem'> - ): Promise<{ - publicKey: Buffer; - privateKey: string; - }>; - function __promisify__( - type: 'ed25519', - options: ED25519KeyPairOptions<'der', 'der'> - ): Promise<{ - publicKey: Buffer; - privateKey: Buffer; - }>; - function __promisify__(type: 'ed25519', options?: ED25519KeyPairKeyObjectOptions): Promise; - function __promisify__( - type: 'ed448', - options: ED448KeyPairOptions<'pem', 'pem'> - ): Promise<{ - publicKey: string; - privateKey: string; - }>; - function __promisify__( - type: 'ed448', - options: ED448KeyPairOptions<'pem', 'der'> - ): Promise<{ - publicKey: string; - privateKey: Buffer; - }>; 
- function __promisify__( - type: 'ed448', - options: ED448KeyPairOptions<'der', 'pem'> - ): Promise<{ - publicKey: Buffer; - privateKey: string; - }>; - function __promisify__( - type: 'ed448', - options: ED448KeyPairOptions<'der', 'der'> - ): Promise<{ - publicKey: Buffer; - privateKey: Buffer; - }>; - function __promisify__(type: 'ed448', options?: ED448KeyPairKeyObjectOptions): Promise; - function __promisify__( - type: 'x25519', - options: X25519KeyPairOptions<'pem', 'pem'> - ): Promise<{ - publicKey: string; - privateKey: string; - }>; - function __promisify__( - type: 'x25519', - options: X25519KeyPairOptions<'pem', 'der'> - ): Promise<{ - publicKey: string; - privateKey: Buffer; - }>; - function __promisify__( - type: 'x25519', - options: X25519KeyPairOptions<'der', 'pem'> - ): Promise<{ - publicKey: Buffer; - privateKey: string; - }>; - function __promisify__( - type: 'x25519', - options: X25519KeyPairOptions<'der', 'der'> - ): Promise<{ - publicKey: Buffer; - privateKey: Buffer; - }>; - function __promisify__(type: 'x25519', options?: X25519KeyPairKeyObjectOptions): Promise; - function __promisify__( - type: 'x448', - options: X448KeyPairOptions<'pem', 'pem'> - ): Promise<{ - publicKey: string; - privateKey: string; - }>; - function __promisify__( - type: 'x448', - options: X448KeyPairOptions<'pem', 'der'> - ): Promise<{ - publicKey: string; - privateKey: Buffer; - }>; - function __promisify__( - type: 'x448', - options: X448KeyPairOptions<'der', 'pem'> - ): Promise<{ - publicKey: Buffer; - privateKey: string; - }>; - function __promisify__( - type: 'x448', - options: X448KeyPairOptions<'der', 'der'> - ): Promise<{ - publicKey: Buffer; - privateKey: Buffer; - }>; - function __promisify__(type: 'x448', options?: X448KeyPairKeyObjectOptions): Promise; - } - /** - * Calculates and returns the signature for `data` using the given private key and - * algorithm. If `algorithm` is `null` or `undefined`, then the algorithm is - * dependent upon the key type (especially Ed25519 and Ed448). - * - * If `key` is not a `KeyObject`, this function behaves as if `key` had been - * passed to {@link createPrivateKey}. If it is an object, the following - * additional properties can be passed: - * - * If the `callback` function is provided this function uses libuv's threadpool. - * @since v12.0.0 - */ - function sign(algorithm: string | null | undefined, data: NodeJS.ArrayBufferView, key: KeyLike | SignKeyObjectInput | SignPrivateKeyInput): Buffer; - function sign( - algorithm: string | null | undefined, - data: NodeJS.ArrayBufferView, - key: KeyLike | SignKeyObjectInput | SignPrivateKeyInput, - callback: (error: Error | null, data: Buffer) => void - ): void; - /** - * Verifies the given signature for `data` using the given key and algorithm. If`algorithm` is `null` or `undefined`, then the algorithm is dependent upon the - * key type (especially Ed25519 and Ed448). - * - * If `key` is not a `KeyObject`, this function behaves as if `key` had been - * passed to {@link createPublicKey}. If it is an object, the following - * additional properties can be passed: - * - * The `signature` argument is the previously calculated signature for the `data`. - * - * Because public keys can be derived from private keys, a private key or a public - * key may be passed for `key`. - * - * If the `callback` function is provided this function uses libuv's threadpool. 
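The one-shot `sign()` declared above and the `verify()` described here can be exercised together; a minimal sketch assuming an Ed25519 key pair from `generateKeyPairSync` (the key type and message are illustrative), again in the ES-module style used elsewhere in these docs:

```js
const { generateKeyPairSync, sign, verify } = await import('crypto');

// Ed25519 keys are returned as KeyObjects when no encoding options are given.
const { publicKey, privateKey } = generateKeyPairSync('ed25519');

const data = Buffer.from('message to sign'); // Buffer is a Node.js global

// For Ed25519/Ed448 the digest is dictated by the key type, so the algorithm is null.
const signature = sign(null, data, privateKey);
console.log(verify(null, data, publicKey, signature)); // true
```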
- * @since v12.0.0 - */ - function verify(algorithm: string | null | undefined, data: NodeJS.ArrayBufferView, key: KeyLike | VerifyKeyObjectInput | VerifyPublicKeyInput, signature: NodeJS.ArrayBufferView): boolean; - function verify( - algorithm: string | null | undefined, - data: NodeJS.ArrayBufferView, - key: KeyLike | VerifyKeyObjectInput | VerifyPublicKeyInput, - signature: NodeJS.ArrayBufferView, - callback: (error: Error | null, result: boolean) => void - ): void; - /** - * Computes the Diffie-Hellman secret based on a `privateKey` and a `publicKey`. - * Both keys must have the same `asymmetricKeyType`, which must be one of `'dh'`(for Diffie-Hellman), `'ec'` (for ECDH), `'x448'`, or `'x25519'` (for ECDH-ES). - * @since v13.9.0, v12.17.0 - */ - function diffieHellman(options: { privateKey: KeyObject; publicKey: KeyObject }): Buffer; - type CipherMode = 'cbc' | 'ccm' | 'cfb' | 'ctr' | 'ecb' | 'gcm' | 'ocb' | 'ofb' | 'stream' | 'wrap' | 'xts'; - interface CipherInfoOptions { - /** - * A test key length. - */ - keyLength?: number | undefined; - /** - * A test IV length. - */ - ivLength?: number | undefined; - } - interface CipherInfo { - /** - * The name of the cipher. - */ - name: string; - /** - * The nid of the cipher. - */ - nid: number; - /** - * The block size of the cipher in bytes. - * This property is omitted when mode is 'stream'. - */ - blockSize?: number | undefined; - /** - * The expected or default initialization vector length in bytes. - * This property is omitted if the cipher does not use an initialization vector. - */ - ivLength?: number | undefined; - /** - * The expected or default key length in bytes. - */ - keyLength: number; - /** - * The cipher mode. - */ - mode: CipherMode; - } - /** - * Returns information about a given cipher. - * - * Some ciphers accept variable length keys and initialization vectors. By default, - * the `crypto.getCipherInfo()` method will return the default values for these - * ciphers. To test if a given key length or iv length is acceptable for given - * cipher, use the `keyLength` and `ivLength` options. If the given values are - * unacceptable, `undefined` will be returned. - * @since v15.0.0 - * @param nameOrNid The name or nid of the cipher to query. - */ - function getCipherInfo(nameOrNid: string | number, options?: CipherInfoOptions): CipherInfo | undefined; - /** - * HKDF is a simple key derivation function defined in RFC 5869\. The given `ikm`,`salt` and `info` are used with the `digest` to derive a key of `keylen` bytes. - * - * The supplied `callback` function is called with two arguments: `err` and`derivedKey`. If an errors occurs while deriving the key, `err` will be set; - * otherwise `err` will be `null`. The successfully generated `derivedKey` will - * be passed to the callback as an [ArrayBuffer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer). An error will be thrown if any - * of the input arguments specify invalid values or types. - * - * ```js - * import { Buffer } from 'buffer'; - * const { - * hkdf - * } = await import('crypto'); - * - * hkdf('sha512', 'key', 'salt', 'info', 64, (err, derivedKey) => { - * if (err) throw err; - * console.log(Buffer.from(derivedKey).toString('hex')); // '24156e2...5391653' - * }); - * ``` - * @since v15.0.0 - * @param digest The digest algorithm to use. - * @param ikm The input keying material. It must be at least one byte in length. - * @param salt The salt value. Must be provided but can be zero-length. 
- * @param info Additional info value. Must be provided but can be zero-length, and cannot be more than 1024 bytes. - * @param keylen The length of the key to generate. Must be greater than 0. The maximum allowable value is `255` times the number of bytes produced by the selected digest function (e.g. `sha512` - * generates 64-byte hashes, making the maximum HKDF output 16320 bytes). - */ - function hkdf(digest: string, irm: BinaryLike | KeyObject, salt: BinaryLike, info: BinaryLike, keylen: number, callback: (err: Error | null, derivedKey: ArrayBuffer) => void): void; - /** - * Provides a synchronous HKDF key derivation function as defined in RFC 5869\. The - * given `ikm`, `salt` and `info` are used with the `digest` to derive a key of`keylen` bytes. - * - * The successfully generated `derivedKey` will be returned as an [ArrayBuffer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer). - * - * An error will be thrown if any of the input arguments specify invalid values or - * types, or if the derived key cannot be generated. - * - * ```js - * import { Buffer } from 'buffer'; - * const { - * hkdfSync - * } = await import('crypto'); - * - * const derivedKey = hkdfSync('sha512', 'key', 'salt', 'info', 64); - * console.log(Buffer.from(derivedKey).toString('hex')); // '24156e2...5391653' - * ``` - * @since v15.0.0 - * @param digest The digest algorithm to use. - * @param ikm The input keying material. It must be at least one byte in length. - * @param salt The salt value. Must be provided but can be zero-length. - * @param info Additional info value. Must be provided but can be zero-length, and cannot be more than 1024 bytes. - * @param keylen The length of the key to generate. Must be greater than 0. The maximum allowable value is `255` times the number of bytes produced by the selected digest function (e.g. `sha512` - * generates 64-byte hashes, making the maximum HKDF output 16320 bytes). - */ - function hkdfSync(digest: string, ikm: BinaryLike | KeyObject, salt: BinaryLike, info: BinaryLike, keylen: number): ArrayBuffer; - interface SecureHeapUsage { - /** - * The total allocated secure heap size as specified using the `--secure-heap=n` command-line flag. - */ - total: number; - /** - * The minimum allocation from the secure heap as specified using the `--secure-heap-min` command-line flag. - */ - min: number; - /** - * The total number of bytes currently allocated from the secure heap. - */ - used: number; - /** - * The calculated ratio of `used` to `total` allocated bytes. - */ - utilization: number; - } - /** - * @since v15.6.0 - */ - function secureHeapUsed(): SecureHeapUsage; - interface RandomUUIDOptions { - /** - * By default, to improve performance, - * Node.js will pre-emptively generate and persistently cache enough - * random data to generate up to 128 random UUIDs. To generate a UUID - * without using the cache, set `disableEntropyCache` to `true`. - * - * @default `false` - */ - disableEntropyCache?: boolean | undefined; - } - /** - * Generates a random [RFC 4122](https://www.rfc-editor.org/rfc/rfc4122.txt) version 4 UUID. The UUID is generated using a - * cryptographic pseudorandom number generator. 
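The `diffieHellman()` and `getCipherInfo()` helpers declared a little earlier can be combined in a short sketch; the X25519 key type and the `aes-256-gcm` cipher name are illustrative assumptions, not requirements of the API:

```js
const { generateKeyPairSync, diffieHellman, getCipherInfo } = await import('crypto');

// Two X25519 key pairs, both returned as KeyObjects.
const a = generateKeyPairSync('x25519');
const b = generateKeyPairSync('x25519');

// Each side combines its own private key with the peer's public key.
const secretA = diffieHellman({ privateKey: a.privateKey, publicKey: b.publicKey });
const secretB = diffieHellman({ privateKey: b.privateKey, publicKey: a.publicKey });
console.log(secretA.equals(secretB)); // true

// Inspect a cipher before using the shared secret as keying material.
console.log(getCipherInfo('aes-256-gcm')); // { name, nid, ivLength, keyLength, mode, ... }
```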
- * @since v15.6.0, v14.17.0 - */ - function randomUUID(options?: RandomUUIDOptions): string; - interface X509CheckOptions { - /** - * @default 'always' - */ - subject?: 'always' | 'default' | 'never'; - /** - * @default true - */ - wildcards?: boolean; - /** - * @default true - */ - partialWildcards?: boolean; - /** - * @default false - */ - multiLabelWildcards?: boolean; - /** - * @default false - */ - singleLabelSubdomains?: boolean; - } - /** - * Encapsulates an X509 certificate and provides read-only access to - * its information. - * - * ```js - * const { X509Certificate } = await import('crypto'); - * - * const x509 = new X509Certificate('{... pem encoded cert ...}'); - * - * console.log(x509.subject); - * ``` - * @since v15.6.0 - */ - class X509Certificate { - /** - * Will be \`true\` if this is a Certificate Authority (CA) certificate. - * @since v15.6.0 - */ - readonly ca: boolean; - /** - * The SHA-1 fingerprint of this certificate. - * - * Because SHA-1 is cryptographically broken and because the security of SHA-1 is - * significantly worse than that of algorithms that are commonly used to sign - * certificates, consider using `x509.fingerprint256` instead. - * @since v15.6.0 - */ - readonly fingerprint: string; - /** - * The SHA-256 fingerprint of this certificate. - * @since v15.6.0 - */ - readonly fingerprint256: string; - /** - * The SHA-512 fingerprint of this certificate. - * @since v16.14.0 - */ - readonly fingerprint512: string; - /** - * The complete subject of this certificate. - * @since v15.6.0 - */ - readonly subject: string; - /** - * The subject alternative name specified for this certificate or `undefined` - * if not available. - * @since v15.6.0 - */ - readonly subjectAltName: string | undefined; - /** - * The information access content of this certificate or `undefined` if not - * available. - * @since v15.6.0 - */ - readonly infoAccess: string | undefined; - /** - * An array detailing the key usages for this certificate. - * @since v15.6.0 - */ - readonly keyUsage: string[]; - /** - * The issuer identification included in this certificate. - * @since v15.6.0 - */ - readonly issuer: string; - /** - * The issuer certificate or `undefined` if the issuer certificate is not - * available. - * @since v15.9.0 - */ - readonly issuerCertificate?: X509Certificate | undefined; - /** - * The public key `KeyObject` for this certificate. - * @since v15.6.0 - */ - readonly publicKey: KeyObject; - /** - * A `Buffer` containing the DER encoding of this certificate. - * @since v15.6.0 - */ - readonly raw: Buffer; - /** - * The serial number of this certificate. - * - * Serial numbers are assigned by certificate authorities and do not uniquely - * identify certificates. Consider using `x509.fingerprint256` as a unique - * identifier instead. - * @since v15.6.0 - */ - readonly serialNumber: string; - /** - * The date/time from which this certificate is considered valid. - * @since v15.6.0 - */ - readonly validFrom: string; - /** - * The date/time until which this certificate is considered valid. - * @since v15.6.0 - */ - readonly validTo: string; - constructor(buffer: BinaryLike); - /** - * Checks whether the certificate matches the given email address. - * - * If the `'subject'` option is undefined or set to `'default'`, the certificate - * subject is only considered if the subject alternative name extension either does - * not exist or does not contain any email addresses. 
- * - * If the `'subject'` option is set to `'always'` and if the subject alternative - * name extension either does not exist or does not contain a matching email - * address, the certificate subject is considered. - * - * If the `'subject'` option is set to `'never'`, the certificate subject is never - * considered, even if the certificate contains no subject alternative names. - * @since v15.6.0 - * @return Returns `email` if the certificate matches, `undefined` if it does not. - */ - checkEmail(email: string, options?: Pick): string | undefined; - /** - * Checks whether the certificate matches the given host name. - * - * If the certificate matches the given host name, the matching subject name is - * returned. The returned name might be an exact match (e.g., `foo.example.com`) - * or it might contain wildcards (e.g., `*.example.com`). Because host name - * comparisons are case-insensitive, the returned subject name might also differ - * from the given `name` in capitalization. - * - * If the `'subject'` option is undefined or set to `'default'`, the certificate - * subject is only considered if the subject alternative name extension either does - * not exist or does not contain any DNS names. This behavior is consistent with [RFC 2818](https://www.rfc-editor.org/rfc/rfc2818.txt) ("HTTP Over TLS"). - * - * If the `'subject'` option is set to `'always'` and if the subject alternative - * name extension either does not exist or does not contain a matching DNS name, - * the certificate subject is considered. - * - * If the `'subject'` option is set to `'never'`, the certificate subject is never - * considered, even if the certificate contains no subject alternative names. - * @since v15.6.0 - * @return Returns a subject name that matches `name`, or `undefined` if no subject name matches `name`. - */ - checkHost(name: string, options?: X509CheckOptions): string | undefined; - /** - * Checks whether the certificate matches the given IP address (IPv4 or IPv6). - * - * Only [RFC 5280](https://www.rfc-editor.org/rfc/rfc5280.txt) `iPAddress` subject alternative names are considered, and they - * must match the given `ip` address exactly. Other subject alternative names as - * well as the subject field of the certificate are ignored. - * @since v15.6.0 - * @return Returns `ip` if the certificate matches, `undefined` if it does not. - */ - checkIP(ip: string): string | undefined; - /** - * Checks whether this certificate was issued by the given `otherCert`. - * @since v15.6.0 - */ - checkIssued(otherCert: X509Certificate): boolean; - /** - * Checks whether the public key for this certificate is consistent with - * the given private key. - * @since v15.6.0 - * @param privateKey A private key. - */ - checkPrivateKey(privateKey: KeyObject): boolean; - /** - * There is no standard JSON encoding for X509 certificates. The`toJSON()` method returns a string containing the PEM encoded - * certificate. - * @since v15.6.0 - */ - toJSON(): string; - /** - * Returns information about this certificate using the legacy `certificate object` encoding. - * @since v15.6.0 - */ - toLegacyObject(): PeerCertificate; - /** - * Returns the PEM-encoded certificate. - * @since v15.6.0 - */ - toString(): string; - /** - * Verifies that this certificate was signed by the given public key. - * Does not perform any other validation checks on the certificate. - * @since v15.6.0 - * @param publicKey A public key. 
- */ - verify(publicKey: KeyObject): boolean; - } - type LargeNumberLike = NodeJS.ArrayBufferView | SharedArrayBuffer | ArrayBuffer | bigint; - interface GeneratePrimeOptions { - add?: LargeNumberLike | undefined; - rem?: LargeNumberLike | undefined; - /** - * @default false - */ - safe?: boolean | undefined; - bigint?: boolean | undefined; - } - interface GeneratePrimeOptionsBigInt extends GeneratePrimeOptions { - bigint: true; - } - interface GeneratePrimeOptionsArrayBuffer extends GeneratePrimeOptions { - bigint?: false | undefined; - } - /** - * Generates a pseudorandom prime of `size` bits. - * - * If `options.safe` is `true`, the prime will be a safe prime -- that is,`(prime - 1) / 2` will also be a prime. - * - * The `options.add` and `options.rem` parameters can be used to enforce additional - * requirements, e.g., for Diffie-Hellman: - * - * * If `options.add` and `options.rem` are both set, the prime will satisfy the - * condition that `prime % add = rem`. - * * If only `options.add` is set and `options.safe` is not `true`, the prime will - * satisfy the condition that `prime % add = 1`. - * * If only `options.add` is set and `options.safe` is set to `true`, the prime - * will instead satisfy the condition that `prime % add = 3`. This is necessary - * because `prime % add = 1` for `options.add > 2` would contradict the condition - * enforced by `options.safe`. - * * `options.rem` is ignored if `options.add` is not given. - * - * Both `options.add` and `options.rem` must be encoded as big-endian sequences - * if given as an `ArrayBuffer`, `SharedArrayBuffer`, `TypedArray`, `Buffer`, or`DataView`. - * - * By default, the prime is encoded as a big-endian sequence of octets - * in an [ArrayBuffer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer). If the `bigint` option is `true`, then a - * [bigint](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt) is provided. - * @since v15.8.0 - * @param size The size (in bits) of the prime to generate. - */ - function generatePrime(size: number, callback: (err: Error | null, prime: ArrayBuffer) => void): void; - function generatePrime(size: number, options: GeneratePrimeOptionsBigInt, callback: (err: Error | null, prime: bigint) => void): void; - function generatePrime(size: number, options: GeneratePrimeOptionsArrayBuffer, callback: (err: Error | null, prime: ArrayBuffer) => void): void; - function generatePrime(size: number, options: GeneratePrimeOptions, callback: (err: Error | null, prime: ArrayBuffer | bigint) => void): void; - /** - * Generates a pseudorandom prime of `size` bits. - * - * If `options.safe` is `true`, the prime will be a safe prime -- that is,`(prime - 1) / 2` will also be a prime. - * - * The `options.add` and `options.rem` parameters can be used to enforce additional - * requirements, e.g., for Diffie-Hellman: - * - * * If `options.add` and `options.rem` are both set, the prime will satisfy the - * condition that `prime % add = rem`. - * * If only `options.add` is set and `options.safe` is not `true`, the prime will - * satisfy the condition that `prime % add = 1`. - * * If only `options.add` is set and `options.safe` is set to `true`, the prime - * will instead satisfy the condition that `prime % add = 3`. This is necessary - * because `prime % add = 1` for `options.add > 2` would contradict the condition - * enforced by `options.safe`. - * * `options.rem` is ignored if `options.add` is not given. 
- * - * Both `options.add` and `options.rem` must be encoded as big-endian sequences - * if given as an `ArrayBuffer`, `SharedArrayBuffer`, `TypedArray`, `Buffer`, or`DataView`. - * - * By default, the prime is encoded as a big-endian sequence of octets - * in an [ArrayBuffer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer). If the `bigint` option is `true`, then a - * [bigint](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt) is provided. - * @since v15.8.0 - * @param size The size (in bits) of the prime to generate. - */ - function generatePrimeSync(size: number): ArrayBuffer; - function generatePrimeSync(size: number, options: GeneratePrimeOptionsBigInt): bigint; - function generatePrimeSync(size: number, options: GeneratePrimeOptionsArrayBuffer): ArrayBuffer; - function generatePrimeSync(size: number, options: GeneratePrimeOptions): ArrayBuffer | bigint; - interface CheckPrimeOptions { - /** - * The number of Miller-Rabin probabilistic primality iterations to perform. - * When the value is 0 (zero), a number of checks is used that yields a false positive rate of at most `2**-64` for random input. - * Care must be used when selecting a number of checks. - * Refer to the OpenSSL documentation for the BN_is_prime_ex function nchecks options for more details. - * - * @default 0 - */ - checks?: number | undefined; - } - /** - * Checks the primality of the `candidate`. - * @since v15.8.0 - * @param candidate A possible prime encoded as a sequence of big endian octets of arbitrary length. - */ - function checkPrime(value: LargeNumberLike, callback: (err: Error | null, result: boolean) => void): void; - function checkPrime(value: LargeNumberLike, options: CheckPrimeOptions, callback: (err: Error | null, result: boolean) => void): void; - /** - * Checks the primality of the `candidate`. - * @since v15.8.0 - * @param candidate A possible prime encoded as a sequence of big endian octets of arbitrary length. - * @return `true` if the candidate is a prime with an error probability less than `0.25 ** options.checks`. - */ - function checkPrimeSync(candidate: LargeNumberLike, options?: CheckPrimeOptions): boolean; - /** - * Load and set the `engine` for some or all OpenSSL functions (selected by flags). - * - * `engine` could be either an id or a path to the engine's shared library. - * - * The optional `flags` argument uses `ENGINE_METHOD_ALL` by default. - * The `flags` is a bit field taking one of or a mix of the following flags (defined in `crypto.constants`): - * - * - `crypto.constants.ENGINE_METHOD_RSA` - * - `crypto.constants.ENGINE_METHOD_DSA` - * - `crypto.constants.ENGINE_METHOD_DH` - * - `crypto.constants.ENGINE_METHOD_RAND` - * - `crypto.constants.ENGINE_METHOD_EC` - * - `crypto.constants.ENGINE_METHOD_CIPHERS` - * - `crypto.constants.ENGINE_METHOD_DIGESTS` - * - `crypto.constants.ENGINE_METHOD_PKEY_METHS` - * - `crypto.constants.ENGINE_METHOD_PKEY_ASN1_METHS` - * - `crypto.constants.ENGINE_METHOD_ALL` - * - `crypto.constants.ENGINE_METHOD_NONE` - * - * The flags below are deprecated in OpenSSL-1.1.0. - * - * - `crypto.constants.ENGINE_METHOD_ECDH` - * - `crypto.constants.ENGINE_METHOD_ECDSA` - * - `crypto.constants.ENGINE_METHOD_STORE` - * @since v0.11.11 - * @param [flags=crypto.constants.ENGINE_METHOD_ALL] - */ - function setEngine(engine: string, flags?: number): void; - /** - * A convenient alias for `crypto.webcrypto.getRandomValues()`. 
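Before the Web Crypto aliases below, here is a brief sketch of the prime helpers documented above, assuming the synchronous variants and the `bigint` output option (the 256-bit size is an arbitrary illustrative choice):

```js
const { generatePrimeSync, checkPrimeSync } = await import('crypto');

// Generate a 256-bit probable prime as a bigint rather than an ArrayBuffer.
const prime = generatePrimeSync(256, { bigint: true });

// With no options, the default number of Miller-Rabin iterations is used.
console.log(checkPrimeSync(prime)); // true
console.log(checkPrimeSync(4n)); // false
```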
- * This implementation is not compliant with the Web Crypto spec, - * to write web-compatible code use `crypto.webcrypto.getRandomValues()` instead. - * @since v17.4.0 - * @returns Returns `typedArray`. - */ - function getRandomValues(typedArray: T): T; - /** - * A convenient alias for `crypto.webcrypto.subtle`. - * @since v17.4.0 - */ - const subtle: webcrypto.SubtleCrypto; - /** - * An implementation of the Web Crypto API standard. - * - * See the {@link https://nodejs.org/docs/latest/api/webcrypto.html Web Crypto API documentation} for details. - * @since v15.0.0 - */ - const webcrypto: webcrypto.Crypto; - namespace webcrypto { - type BufferSource = ArrayBufferView | ArrayBuffer; - type KeyFormat = 'jwk' | 'pkcs8' | 'raw' | 'spki'; - type KeyType = 'private' | 'public' | 'secret'; - type KeyUsage = 'decrypt' | 'deriveBits' | 'deriveKey' | 'encrypt' | 'sign' | 'unwrapKey' | 'verify' | 'wrapKey'; - type AlgorithmIdentifier = Algorithm | string; - type HashAlgorithmIdentifier = AlgorithmIdentifier; - type NamedCurve = string; - type BigInteger = Uint8Array; - interface AesCbcParams extends Algorithm { - iv: BufferSource; - } - interface AesCtrParams extends Algorithm { - counter: BufferSource; - length: number; - } - interface AesDerivedKeyParams extends Algorithm { - length: number; - } - interface AesGcmParams extends Algorithm { - additionalData?: BufferSource; - iv: BufferSource; - tagLength?: number; - } - interface AesKeyAlgorithm extends KeyAlgorithm { - length: number; - } - interface AesKeyGenParams extends Algorithm { - length: number; - } - interface Algorithm { - name: string; - } - interface EcKeyAlgorithm extends KeyAlgorithm { - namedCurve: NamedCurve; - } - interface EcKeyGenParams extends Algorithm { - namedCurve: NamedCurve; - } - interface EcKeyImportParams extends Algorithm { - namedCurve: NamedCurve; - } - interface EcdhKeyDeriveParams extends Algorithm { - public: CryptoKey; - } - interface EcdsaParams extends Algorithm { - hash: HashAlgorithmIdentifier; - } - interface Ed448Params extends Algorithm { - context?: BufferSource; - } - interface HkdfParams extends Algorithm { - hash: HashAlgorithmIdentifier; - info: BufferSource; - salt: BufferSource; - } - interface HmacImportParams extends Algorithm { - hash: HashAlgorithmIdentifier; - length?: number; - } - interface HmacKeyAlgorithm extends KeyAlgorithm { - hash: KeyAlgorithm; - length: number; - } - interface HmacKeyGenParams extends Algorithm { - hash: HashAlgorithmIdentifier; - length?: number; - } - interface JsonWebKey { - alg?: string; - crv?: string; - d?: string; - dp?: string; - dq?: string; - e?: string; - ext?: boolean; - k?: string; - key_ops?: string[]; - kty?: string; - n?: string; - oth?: RsaOtherPrimesInfo[]; - p?: string; - q?: string; - qi?: string; - use?: string; - x?: string; - y?: string; - } - interface KeyAlgorithm { - name: string; - } - interface Pbkdf2Params extends Algorithm { - hash: HashAlgorithmIdentifier; - iterations: number; - salt: BufferSource; - } - interface RsaHashedImportParams extends Algorithm { - hash: HashAlgorithmIdentifier; - } - interface RsaHashedKeyAlgorithm extends RsaKeyAlgorithm { - hash: KeyAlgorithm; - } - interface RsaHashedKeyGenParams extends RsaKeyGenParams { - hash: HashAlgorithmIdentifier; - } - interface RsaKeyAlgorithm extends KeyAlgorithm { - modulusLength: number; - publicExponent: BigInteger; - } - interface RsaKeyGenParams extends Algorithm { - modulusLength: number; - publicExponent: BigInteger; - } - interface RsaOaepParams extends Algorithm { - 
label?: BufferSource; - } - interface RsaOtherPrimesInfo { - d?: string; - r?: string; - t?: string; - } - interface RsaPssParams extends Algorithm { - saltLength: number; - } - /** - * Calling `require('node:crypto').webcrypto` returns an instance of the `Crypto` class. - * `Crypto` is a singleton that provides access to the remainder of the crypto API. - * @since v15.0.0 - */ - interface Crypto { - /** - * Provides access to the `SubtleCrypto` API. - * @since v15.0.0 - */ - readonly subtle: SubtleCrypto; - /** - * Generates cryptographically strong random values. - * The given `typedArray` is filled with random values, and a reference to `typedArray` is returned. - * - * The given `typedArray` must be an integer-based instance of {@link NodeJS.TypedArray}, i.e. `Float32Array` and `Float64Array` are not accepted. - * - * An error will be thrown if the given `typedArray` is larger than 65,536 bytes. - * @since v15.0.0 - */ - getRandomValues>(typedArray: T): T; - /** - * Generates a random {@link https://www.rfc-editor.org/rfc/rfc4122.txt RFC 4122} version 4 UUID. - * The UUID is generated using a cryptographic pseudorandom number generator. - * @since v16.7.0 - */ - randomUUID(): string; - CryptoKey: CryptoKeyConstructor; - } - // This constructor throws ILLEGAL_CONSTRUCTOR so it should not be newable. - interface CryptoKeyConstructor { - /** Illegal constructor */ - (_: { readonly _: unique symbol }): never; // Allows instanceof to work but not be callable by the user. - readonly length: 0; - readonly name: 'CryptoKey'; - readonly prototype: CryptoKey; - } - /** - * @since v15.0.0 - */ - interface CryptoKey { - /** - * An object detailing the algorithm for which the key can be used along with additional algorithm-specific parameters. - * @since v15.0.0 - */ - readonly algorithm: KeyAlgorithm; - /** - * When `true`, the {@link CryptoKey} can be extracted using either `subtleCrypto.exportKey()` or `subtleCrypto.wrapKey()`. - * @since v15.0.0 - */ - readonly extractable: boolean; - /** - * A string identifying whether the key is a symmetric (`'secret'`) or asymmetric (`'private'` or `'public'`) key. - * @since v15.0.0 - */ - readonly type: KeyType; - /** - * An array of strings identifying the operations for which the key may be used. - * - * The possible usages are: - * - `'encrypt'` - The key may be used to encrypt data. - * - `'decrypt'` - The key may be used to decrypt data. - * - `'sign'` - The key may be used to generate digital signatures. - * - `'verify'` - The key may be used to verify digital signatures. - * - `'deriveKey'` - The key may be used to derive a new key. - * - `'deriveBits'` - The key may be used to derive bits. - * - `'wrapKey'` - The key may be used to wrap another key. - * - `'unwrapKey'` - The key may be used to unwrap another key. - * - * Valid key usages depend on the key algorithm (identified by `cryptokey.algorithm.name`). - * @since v15.0.0 - */ - readonly usages: KeyUsage[]; - } - /** - * The `CryptoKeyPair` is a simple dictionary object with `publicKey` and `privateKey` properties, representing an asymmetric key pair. - * @since v15.0.0 - */ - interface CryptoKeyPair { - /** - * A {@link CryptoKey} whose type will be `'private'`. - * @since v15.0.0 - */ - privateKey: CryptoKey; - /** - * A {@link CryptoKey} whose type will be `'public'`. 
- * @since v15.0.0 - */ - publicKey: CryptoKey; - } - /** - * @since v15.0.0 - */ - interface SubtleCrypto { - /** - * Using the method and parameters specified in `algorithm` and the keying material provided by `key`, - * `subtle.decrypt()` attempts to decipher the provided `data`. If successful, - * the returned promise will be resolved with an `` containing the plaintext result. - * - * The algorithms currently supported include: - * - * - `'RSA-OAEP'` - * - `'AES-CTR'` - * - `'AES-CBC'` - * - `'AES-GCM'` - * @since v15.0.0 - */ - decrypt(algorithm: AlgorithmIdentifier | RsaOaepParams | AesCtrParams | AesCbcParams | AesGcmParams, key: CryptoKey, data: BufferSource): Promise; - /** - * Using the method and parameters specified in `algorithm` and the keying material provided by `baseKey`, - * `subtle.deriveBits()` attempts to generate `length` bits. - * The Node.js implementation requires that when `length` is a number it must be multiple of `8`. - * When `length` is `null` the maximum number of bits for a given algorithm is generated. This is allowed - * for the `'ECDH'`, `'X25519'`, and `'X448'` algorithms. - * If successful, the returned promise will be resolved with an `` containing the generated data. - * - * The algorithms currently supported include: - * - * - `'ECDH'` - * - `'X25519'` - * - `'X448'` - * - `'HKDF'` - * - `'PBKDF2'` - * @since v15.0.0 - */ - deriveBits(algorithm: EcdhKeyDeriveParams, baseKey: CryptoKey, length: number | null): Promise; - deriveBits(algorithm: AlgorithmIdentifier | HkdfParams | Pbkdf2Params, baseKey: CryptoKey, length: number): Promise; - /** - * Using the method and parameters specified in `algorithm`, and the keying material provided by `baseKey`, - * `subtle.deriveKey()` attempts to generate a new ` based on the method and parameters in `derivedKeyAlgorithm`. - * - * Calling `subtle.deriveKey()` is equivalent to calling `subtle.deriveBits()` to generate raw keying material, - * then passing the result into the `subtle.importKey()` method using the `deriveKeyAlgorithm`, `extractable`, and `keyUsages` parameters as input. - * - * The algorithms currently supported include: - * - * - `'ECDH'` - * - `'X25519'` - * - `'X448'` - * - `'HKDF'` - * - `'PBKDF2'` - * @param keyUsages See {@link https://nodejs.org/docs/latest/api/webcrypto.html#cryptokeyusages Key usages}. - * @since v15.0.0 - */ - deriveKey( - algorithm: AlgorithmIdentifier | EcdhKeyDeriveParams | HkdfParams | Pbkdf2Params, - baseKey: CryptoKey, - derivedKeyAlgorithm: AlgorithmIdentifier | AesDerivedKeyParams | HmacImportParams | HkdfParams | Pbkdf2Params, - extractable: boolean, - keyUsages: ReadonlyArray - ): Promise; - /** - * Using the method identified by `algorithm`, `subtle.digest()` attempts to generate a digest of `data`. - * If successful, the returned promise is resolved with an `` containing the computed digest. - * - * If `algorithm` is provided as a ``, it must be one of: - * - * - `'SHA-1'` - * - `'SHA-256'` - * - `'SHA-384'` - * - `'SHA-512'` - * - * If `algorithm` is provided as an ``, it must have a `name` property whose value is one of the above. - * @since v15.0.0 - */ - digest(algorithm: AlgorithmIdentifier, data: BufferSource): Promise; - /** - * Using the method and parameters specified by `algorithm` and the keying material provided by `key`, - * `subtle.encrypt()` attempts to encipher `data`. If successful, - * the returned promise is resolved with an `` containing the encrypted result. 
- * - * The algorithms currently supported include: - * - * - `'RSA-OAEP'` - * - `'AES-CTR'` - * - `'AES-CBC'` - * - `'AES-GCM'` - * @since v15.0.0 - */ - encrypt(algorithm: AlgorithmIdentifier | RsaOaepParams | AesCtrParams | AesCbcParams | AesGcmParams, key: CryptoKey, data: BufferSource): Promise; - /** - * Exports the given key into the specified format, if supported. - * - * If the `` is not extractable, the returned promise will reject. - * - * When `format` is either `'pkcs8'` or `'spki'` and the export is successful, - * the returned promise will be resolved with an `` containing the exported key data. - * - * When `format` is `'jwk'` and the export is successful, the returned promise will be resolved with a - * JavaScript object conforming to the {@link https://tools.ietf.org/html/rfc7517 JSON Web Key} specification. - * @param format Must be one of `'raw'`, `'pkcs8'`, `'spki'`, or `'jwk'`. - * @returns `` containing ``. - * @since v15.0.0 - */ - exportKey(format: 'jwk', key: CryptoKey): Promise; - exportKey(format: Exclude, key: CryptoKey): Promise; - /** - * Using the method and parameters provided in `algorithm`, - * `subtle.generateKey()` attempts to generate new keying material. - * Depending the method used, the method may generate either a single `` or a ``. - * - * The `` (public and private key) generating algorithms supported include: - * - * - `'RSASSA-PKCS1-v1_5'` - * - `'RSA-PSS'` - * - `'RSA-OAEP'` - * - `'ECDSA'` - * - `'Ed25519'` - * - `'Ed448'` - * - `'ECDH'` - * - `'X25519'` - * - `'X448'` - * The `` (secret key) generating algorithms supported include: - * - * - `'HMAC'` - * - `'AES-CTR'` - * - `'AES-CBC'` - * - `'AES-GCM'` - * - `'AES-KW'` - * @param keyUsages See {@link https://nodejs.org/docs/latest/api/webcrypto.html#cryptokeyusages Key usages}. - * @since v15.0.0 - */ - generateKey(algorithm: RsaHashedKeyGenParams | EcKeyGenParams, extractable: boolean, keyUsages: ReadonlyArray): Promise; - generateKey(algorithm: AesKeyGenParams | HmacKeyGenParams | Pbkdf2Params, extractable: boolean, keyUsages: ReadonlyArray): Promise; - generateKey(algorithm: AlgorithmIdentifier, extractable: boolean, keyUsages: KeyUsage[]): Promise; - /** - * The `subtle.importKey()` method attempts to interpret the provided `keyData` as the given `format` - * to create a `` instance using the provided `algorithm`, `extractable`, and `keyUsages` arguments. - * If the import is successful, the returned promise will be resolved with the created ``. - * - * If importing a `'PBKDF2'` key, `extractable` must be `false`. - * @param format Must be one of `'raw'`, `'pkcs8'`, `'spki'`, or `'jwk'`. - * @param keyUsages See {@link https://nodejs.org/docs/latest/api/webcrypto.html#cryptokeyusages Key usages}. - * @since v15.0.0 - */ - importKey( - format: 'jwk', - keyData: JsonWebKey, - algorithm: AlgorithmIdentifier | RsaHashedImportParams | EcKeyImportParams | HmacImportParams | AesKeyAlgorithm, - extractable: boolean, - keyUsages: ReadonlyArray - ): Promise; - importKey( - format: Exclude, - keyData: BufferSource, - algorithm: AlgorithmIdentifier | RsaHashedImportParams | EcKeyImportParams | HmacImportParams | AesKeyAlgorithm, - extractable: boolean, - keyUsages: KeyUsage[] - ): Promise; - /** - * Using the method and parameters given by `algorithm` and the keying material provided by `key`, - * `subtle.sign()` attempts to generate a cryptographic signature of `data`. If successful, - * the returned promise is resolved with an `` containing the generated signature. 
- * - * The algorithms currently supported include: - * - * - `'RSASSA-PKCS1-v1_5'` - * - `'RSA-PSS'` - * - `'ECDSA'` - * - `'Ed25519'` - * - `'Ed448'` - * - `'HMAC'` - * @since v15.0.0 - */ - sign(algorithm: AlgorithmIdentifier | RsaPssParams | EcdsaParams | Ed448Params, key: CryptoKey, data: BufferSource): Promise; - /** - * In cryptography, "wrapping a key" refers to exporting and then encrypting the keying material. - * The `subtle.unwrapKey()` method attempts to decrypt a wrapped key and create a `` instance. - * It is equivalent to calling `subtle.decrypt()` first on the encrypted key data (using the `wrappedKey`, `unwrapAlgo`, and `unwrappingKey` arguments as input) - * then passing the results in to the `subtle.importKey()` method using the `unwrappedKeyAlgo`, `extractable`, and `keyUsages` arguments as inputs. - * If successful, the returned promise is resolved with a `` object. - * - * The wrapping algorithms currently supported include: - * - * - `'RSA-OAEP'` - * - `'AES-CTR'` - * - `'AES-CBC'` - * - `'AES-GCM'` - * - `'AES-KW'` - * - * The unwrapped key algorithms supported include: - * - * - `'RSASSA-PKCS1-v1_5'` - * - `'RSA-PSS'` - * - `'RSA-OAEP'` - * - `'ECDSA'` - * - `'Ed25519'` - * - `'Ed448'` - * - `'ECDH'` - * - `'X25519'` - * - `'X448'` - * - `'HMAC'` - * - `'AES-CTR'` - * - `'AES-CBC'` - * - `'AES-GCM'` - * - `'AES-KW'` - * @param format Must be one of `'raw'`, `'pkcs8'`, `'spki'`, or `'jwk'`. - * @param keyUsages See {@link https://nodejs.org/docs/latest/api/webcrypto.html#cryptokeyusages Key usages}. - * @since v15.0.0 - */ - unwrapKey( - format: KeyFormat, - wrappedKey: BufferSource, - unwrappingKey: CryptoKey, - unwrapAlgorithm: AlgorithmIdentifier | RsaOaepParams | AesCtrParams | AesCbcParams | AesGcmParams, - unwrappedKeyAlgorithm: AlgorithmIdentifier | RsaHashedImportParams | EcKeyImportParams | HmacImportParams | AesKeyAlgorithm, - extractable: boolean, - keyUsages: KeyUsage[] - ): Promise; - /** - * Using the method and parameters given in `algorithm` and the keying material provided by `key`, - * `subtle.verify()` attempts to verify that `signature` is a valid cryptographic signature of `data`. - * The returned promise is resolved with either `true` or `false`. - * - * The algorithms currently supported include: - * - * - `'RSASSA-PKCS1-v1_5'` - * - `'RSA-PSS'` - * - `'ECDSA'` - * - `'Ed25519'` - * - `'Ed448'` - * - `'HMAC'` - * @since v15.0.0 - */ - verify(algorithm: AlgorithmIdentifier | RsaPssParams | EcdsaParams | Ed448Params, key: CryptoKey, signature: BufferSource, data: BufferSource): Promise; - /** - * In cryptography, "wrapping a key" refers to exporting and then encrypting the keying material. - * The `subtle.wrapKey()` method exports the keying material into the format identified by `format`, - * then encrypts it using the method and parameters specified by `wrapAlgo` and the keying material provided by `wrappingKey`. - * It is the equivalent to calling `subtle.exportKey()` using `format` and `key` as the arguments, - * then passing the result to the `subtle.encrypt()` method using `wrappingKey` and `wrapAlgo` as inputs. - * If successful, the returned promise will be resolved with an `` containing the encrypted key data. - * - * The wrapping algorithms currently supported include: - * - * - `'RSA-OAEP'` - * - `'AES-CTR'` - * - `'AES-CBC'` - * - `'AES-GCM'` - * - `'AES-KW'` - * @param format Must be one of `'raw'`, `'pkcs8'`, `'spki'`, or `'jwk'`. 
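The `SubtleCrypto` methods described in this interface follow the Web Crypto API; a minimal sketch assuming an ECDSA P-256 key pair and an ES-module context with top-level `await` (the algorithm choice is illustrative, not mandated by the declarations):

```js
const { webcrypto } = await import('crypto');
const { subtle } = webcrypto;

// Generate an extractable ECDSA P-256 key pair limited to sign/verify usages.
const { publicKey, privateKey } = await subtle.generateKey(
  { name: 'ECDSA', namedCurve: 'P-256' },
  true,
  ['sign', 'verify'],
);

const data = new TextEncoder().encode('hello webcrypto');

// ECDSA needs a hash parameter when signing and verifying.
const signature = await subtle.sign({ name: 'ECDSA', hash: 'SHA-256' }, privateKey, data);
const ok = await subtle.verify({ name: 'ECDSA', hash: 'SHA-256' }, publicKey, signature, data);
console.log(ok); // true
```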
- * @since v15.0.0 - */ - wrapKey(format: KeyFormat, key: CryptoKey, wrappingKey: CryptoKey, wrapAlgorithm: AlgorithmIdentifier | RsaOaepParams | AesCtrParams | AesCbcParams | AesGcmParams): Promise; - } - } -} -declare module 'node:crypto' { - export * from 'crypto'; -} diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/debug/karma.conf.js b/spaces/rayan-saleh/whisper2notion/server/node_modules/debug/karma.conf.js deleted file mode 100644 index 103a82d15bd72b3cdf9ba4108272985f7e0bfdb3..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/debug/karma.conf.js +++ /dev/null @@ -1,70 +0,0 @@ -// Karma configuration -// Generated on Fri Dec 16 2016 13:09:51 GMT+0000 (UTC) - -module.exports = function(config) { - config.set({ - - // base path that will be used to resolve all patterns (eg. files, exclude) - basePath: '', - - - // frameworks to use - // available frameworks: https://npmjs.org/browse/keyword/karma-adapter - frameworks: ['mocha', 'chai', 'sinon'], - - - // list of files / patterns to load in the browser - files: [ - 'dist/debug.js', - 'test/*spec.js' - ], - - - // list of files to exclude - exclude: [ - 'src/node.js' - ], - - - // preprocess matching files before serving them to the browser - // available preprocessors: https://npmjs.org/browse/keyword/karma-preprocessor - preprocessors: { - }, - - // test results reporter to use - // possible values: 'dots', 'progress' - // available reporters: https://npmjs.org/browse/keyword/karma-reporter - reporters: ['progress'], - - - // web server port - port: 9876, - - - // enable / disable colors in the output (reporters and logs) - colors: true, - - - // level of logging - // possible values: config.LOG_DISABLE || config.LOG_ERROR || config.LOG_WARN || config.LOG_INFO || config.LOG_DEBUG - logLevel: config.LOG_INFO, - - - // enable / disable watching file and executing tests whenever any file changes - autoWatch: true, - - - // start these browsers - // available browser launchers: https://npmjs.org/browse/keyword/karma-launcher - browsers: ['PhantomJS'], - - - // Continuous Integration mode - // if true, Karma captures browsers, runs the tests and exits - singleRun: false, - - // Concurrency level - // how many browser should be started simultaneous - concurrency: Infinity - }) -} diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/Ishow 2.3.rar !NEW!.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/Ishow 2.3.rar !NEW!.md deleted file mode 100644 index a5dd9354d61a210d3213b3c856a82fdb1ee86484..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/Ishow 2.3.rar !NEW!.md +++ /dev/null @@ -1,73 +0,0 @@ -## Ishow 2.3.rar - - - -**Ishow 2.3.rar ⇒ [https://www.google.com/url?q=https%3A%2F%2Fcinurl.com%2F2twEuJ&sa=D&sntz=1&usg=AOvVaw2mGBGfKmwiUSBMAfGDOl8y](https://www.google.com/url?q=https%3A%2F%2Fcinurl.com%2F2twEuJ&sa=D&sntz=1&usg=AOvVaw2mGBGfKmwiUSBMAfGDOl8y)** - - - -# What is iShow 2.3 and How to Use It? - - - -iShow 2.3 is a software that can be used to create and edit laser show animations. It can be used with a SD card that supports ILD format file, or with a USB interface box that connects to a computer and a laser light with ILDA DB25 interface. iShow 2.3 has many features and functions that make it easy and fun to create stunning laser shows. - - - -## Features of iShow 2.3 - - - -- Words roll function: You can make text scroll across the screen in different fonts and colors. 
- -- Real time display: You can see the laser show on your computer screen as you edit it. - -- Directly switch function of the turetype style: You can change the font style of the text without saving and reloading. - -- Core-draw images input directly: You can import vector images from CorelDraw software and use them in your laser show. - -- ILDA files play and edit: You can play and edit ILDA files, which are standard files for laser show software. - -- Laser show edit and display: You can create your own laser show from scratch or use the templates provided by the software. - -- Support any laser light with ILDA DB25 interface: You can use iShow 2.3 with any laser light that has this interface, which is common for animation laser lights. - -- Support any laser with TTL modulation: You can use iShow 2.3 with any laser that has this modulation, which is used to control the color and intensity of the laser. - -- Maximum control 5pcs laser at same time: You can use iShow 2.3 to control up to five lasers simultaneously, creating a more impressive show. - -- USB 2.0 interface box: You can use iShow 2.3 with a USB interface box that converts the data from your computer to ILDA DB25 signal for your laser light. - - - -## How to Use iShow 2.3 - - - -To use iShow 2.3, you need to install the software and the driver for the USB interface box on your computer. Then, you need to connect the USB interface box to your computer and your laser light. You also need to turn on the power of the USB interface box. After that, you can run iShow.exe and start creating your laser show. - - - -To create a laser show, you can click on ShowEdit button and double-click on cartoons.seq, which is a sample file provided by the software. You can then see the laser show on your computer screen and edit it as you like. You can add text, images, shapes, colors, effects, etc. to your show. You can also import your own ILDA files or CorelDraw images to use in your show. - - - -To play your laser show, you can click on Play button and choose the mode you want. You can play it on your computer screen or on your laser light. You can also adjust the speed, brightness, size, etc. of your show. - - - -To save your laser show, you can click on Save button and choose the format you want. You can save it as an ILDA file or as a SEQ file, which is a proprietary format for iShow software. You can also export your show as a video file or as an image file. - - - -## Where to Download iShow 2.3 - - - -If you want to download iShow 2.3 software, you can visit [this website\[^1^\]](https://ishowii-lasershow-software.software.informer.com/2.3/), where you can find more information about the software and download it for free. However, you need to have a SD card or a USB interface box to use it with your laser light. 
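For readers who want to peek inside the ILDA (.ild) files that iShow plays and saves, the snippet below is a minimal sketch of reading the fixed 32-byte ILDA section header in Python. It is based on the published ILDA Image Data Transfer Format, not on iShow itself, and the file name "show.ild" is only a placeholder.

```python
# Minimal sketch: read one 32-byte ILDA section header (ILDA IDTF layout).
import struct

def read_ilda_header(path):
    with open(path, "rb") as f:
        header = f.read(32)
    if len(header) < 32 or header[0:4] != b"ILDA":
        raise ValueError("Not an ILDA file")
    format_code = header[7]          # 0/1/4/5 = point data sections, 2 = color palette
    frame_name = header[8:16].decode("ascii", "ignore").strip("\x00 ")
    company = header[16:24].decode("ascii", "ignore").strip("\x00 ")
    # record count, frame number and total frames are big-endian unsigned shorts
    records, frame_no, total = struct.unpack(">HHH", header[24:30])
    return {"format": format_code, "name": frame_name, "company": company,
            "records": records, "frame": frame_no, "total_frames": total,
            "projector": header[30]}

print(read_ilda_header("show.ild"))  # placeholder path
```

In the ILDA format each such header is followed by the point records for one frame, and a header whose record count is zero marks the end of the file, which is typically how a player knows the show has ended.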
- - - -If you want to watch a video tutorial on how to use iShow 2.3 software with a SD card, you can visit [this YouTube link\[^2^\]](https://www.youtube.com/watch?v=GPChR9fNI4w), where you can see how to edit texts using i - - 1b8d091108 \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Ammyy Admin 3.9 Crack 2020 Serial Key Full Version.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Ammyy Admin 3.9 Crack 2020 Serial Key Full Version.md deleted file mode 100644 index 77c0dcc471d8dd5a7ae30ce9c5be99bea40f7251..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Ammyy Admin 3.9 Crack 2020 Serial Key Full Version.md +++ /dev/null @@ -1,30 +0,0 @@ -
    -

    How to Use Ammyy Admin 3.9 Crack 2020 Serial Key Full Version

    -

    Ammyy Admin 3.9 Crack is a powerful program that allows you to remotely access and control any computer from anywhere. It is a simple and easy-to-use solution for remote desktop sharing, file transfer, voice chat, and more. You can use it for various purposes, such as technical support, online presentations, education, or personal use.

    -

    In this article, we will show you how to use Ammyy Admin 3.9 Crack 2020 Serial Key Full Version to enjoy all the features of this program without paying anything. You will be able to create your own full control remote desktop software and access any computer with just a few clicks.

    -

    Ammyy Admin 3.9 Crack 2020 Serial Key Full Version


Download File: https://urlgoal.com/2uCL4z



    -

    What is Ammyy Admin 3.9 Crack 2020 Serial Key Full Version?

    -

    Ammyy Admin 3.9 Crack 2020 Serial Key Full Version is a modified version of the original Ammyy Admin program that bypasses the activation process and unlocks all the premium features. It is a free and safe way to use Ammyy Admin without any limitations or restrictions.

    -

    Some of the features that you can enjoy with Ammyy Admin 3.9 Crack 2020 Serial Key Full Version are:

    -
      -
    • Remote desktop control: You can access and control any computer from anywhere, as if you were sitting in front of it. You can view the screen, use the keyboard and mouse, run applications, transfer files, and more.
    • -
    • File manager: You can transfer files between computers with high speed and security. You can also manage files on the remote computer, such as copy, move, delete, rename, etc.
    • -
    • Voice chat: You can communicate with the remote user using a built-in voice chat feature. You can also adjust the volume and mute the microphone.
    • -
    • Encryption: You can protect your data and connection with advanced encryption algorithms. You can also set a password for each session and restrict access to certain computers.
    • -
    • Portability: You can run Ammyy Admin 3.9 Crack 2020 Serial Key Full Version from any device, such as a USB flash drive or a CD-ROM. You don't need to install anything on your computer or the remote computer.
    • -
    • Compatibility: You can use Ammyy Admin 3.9 Crack 2020 Serial Key Full Version with any Windows operating system, from Windows XP to Windows 10. You can also connect to computers with different versions of Windows.
    • -
    -

    How to Download and Install Ammyy Admin 3.9 Crack 2020 Serial Key Full Version?

    -

    To download and install Ammyy Admin 3.9 Crack 2020 Serial Key Full Version, you need to follow these steps:

    -
      -
    1. Download Ammyy Admin 3.9 Crack 2020 Serial Key Full Version from one of these links: [^1^] [^2^] [^3^]. Make sure you choose a reliable and secure source.
    2. -
    3. Extract the downloaded file using a program like WinRAR or 7-Zip. You will get a folder with two files: Ammyy_Admin_3_9.exe and AA_v3_9_Keygen.exe.
    4. -
    5. Run Ammyy_Admin_3_9.exe to launch the program. You don't need to install anything on your computer.
    6. -
    7. Run AA_v3_9_Keygen.exe to generate a serial key for Ammyy Admin 3.9 Crack 2020 Serial Key Full Version. Copy the serial key and paste it in the registration window of Ammyy Admin.
    8. -
    9. Click on "Register" to activate Ammyy Admin 3.9 Crack 2020 Serial Key Full Version. You will see a confirmation message that says "Registration successful".
    10. -
    11. Congratulations! You have successfully installed Ammyy Admin 3.9 Crack 2020 Serial Key Full Version on your computer. You can now use it for free and unlimited.
    12. -
    -

    How to Use Ammyy Admin 3.9 Crack 2020 Serial Key Full Version? -

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Boeing 737 Qrh Pdf.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Boeing 737 Qrh Pdf.md deleted file mode 100644 index 4830f5f8201e696de37ac37a80a2c89fae8143f6..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Boeing 737 Qrh Pdf.md +++ /dev/null @@ -1,10 +0,0 @@ -

    Boeing 737 Qrh Pdf


    Download Zip ✺✺✺ https://urlgoal.com/2uCLTF



    -
-Boeing 737 QRH Familiarization - free download as a PDF file (.pdf) or text file (.txt), or view the presentation slides online. -Boeing 737 Familiarization is the name of the PDF file that has been posted on Google Docs for reference. -The name of the PDF file posted on Google Docs and the file name posted on the website will not be the same. -The file name on Google Docs must not contain certification and/or registration number information; otherwise, that text may be included in the PDF file. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (Chandramukhi Telugu Movie Torrent Do) _VERIFIED_.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (Chandramukhi Telugu Movie Torrent Do) _VERIFIED_.md deleted file mode 100644 index cac8eedca329fc97ae8591e438c323f215f84583..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (Chandramukhi Telugu Movie Torrent Do) _VERIFIED_.md +++ /dev/null @@ -1,6 +0,0 @@ -

    HD Online Player (Chandramukhi Telugu Movie Torrent Do)


    Download > https://urlgoal.com/2uCLjV



- -Indian is a 1996 Indian Tamil-language vigilante action film written and directed by Shankar and produced by A. M. Rathnam. The film stars Kamal Haasan in ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/renatotn7/teste2/tests/test_stylegan2_clean_arch.py b/spaces/renatotn7/teste2/tests/test_stylegan2_clean_arch.py deleted file mode 100644 index 78bb920e73ce28cfec9ea89a4339cc5b87981b47..0000000000000000000000000000000000000000 --- a/spaces/renatotn7/teste2/tests/test_stylegan2_clean_arch.py +++ /dev/null @@ -1,52 +0,0 @@ -import torch - -from gfpgan.archs.stylegan2_clean_arch import StyleGAN2GeneratorClean - - -def test_stylegan2generatorclean(): - """Test arch: StyleGAN2GeneratorClean.""" - - # model init and forward (gpu) - if torch.cuda.is_available(): - net = StyleGAN2GeneratorClean( - out_size=32, num_style_feat=512, num_mlp=8, channel_multiplier=1, narrow=0.5).cuda().eval() - style = torch.rand((1, 512), dtype=torch.float32).cuda() - output = net([style], input_is_latent=False) - assert output[0].shape == (1, 3, 32, 32) - assert output[1] is None - - # -------------------- with return_latents ----------------------- # - output = net([style], input_is_latent=True, return_latents=True) - assert output[0].shape == (1, 3, 32, 32) - assert len(output[1]) == 1 - # check latent - assert output[1][0].shape == (8, 512) - - # -------------------- with randomize_noise = False ----------------------- # - output = net([style], randomize_noise=False) - assert output[0].shape == (1, 3, 32, 32) - assert output[1] is None - - # -------------------- with truncation = 0.5 and mixing----------------------- # - output = net([style, style], truncation=0.5, truncation_latent=style) - assert output[0].shape == (1, 3, 32, 32) - assert output[1] is None - - # ------------------ test make_noise ----------------------- # - out = net.make_noise() - assert len(out) == 7 - assert out[0].shape == (1, 1, 4, 4) - assert out[1].shape == (1, 1, 8, 8) - assert out[2].shape == (1, 1, 8, 8) - assert out[3].shape == (1, 1, 16, 16) - assert out[4].shape == (1, 1, 16, 16) - assert out[5].shape == (1, 1, 32, 32) - assert out[6].shape == (1, 1, 32, 32) - - # ------------------ test get_latent ----------------------- # - out = net.get_latent(style) - assert out.shape == (1, 512) - - # ------------------ test mean_latent ----------------------- # - out = net.mean_latent(2) - assert out.shape == (1, 512) diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/focalnet_dino/models/dino/util/visualizer.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/focalnet_dino/models/dino/util/visualizer.py deleted file mode 100644 index 7fdbdf04ea3cdaf43b6b12c3cb2e8004d897c55e..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/focalnet_dino/models/dino/util/visualizer.py +++ /dev/null @@ -1,132 +0,0 @@ -# -*- coding: utf-8 -*- -''' -@File : visualizer.py -@Time : 2022/04/05 11:39:33 -@Author : Shilong Liu -@Contact : liusl20@mail.tsinghua.edu.cn; slongliu86@gmail.com -Modified from COCO evaluator -''' - -import os, sys -from textwrap import wrap -import torch -import numpy as np -import cv2 -import datetime - -import matplotlib.pyplot as plt -from matplotlib.collections import PatchCollection -from matplotlib.patches import Polygon -from pycocotools import mask as maskUtils -from matplotlib import transforms - -def renorm(img: torch.FloatTensor, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) \ - -> torch.FloatTensor: - # img: tensor(3,H,W) or tensor(B,3,H,W) - # return: same as img - assert img.dim() == 3 or img.dim() == 4, "img.dim() 
should be 3 or 4 but %d" % img.dim() - if img.dim() == 3: - assert img.size(0) == 3, 'img.size(0) shoule be 3 but "%d". (%s)' % (img.size(0), str(img.size())) - img_perm = img.permute(1,2,0) - mean = torch.Tensor(mean) - std = torch.Tensor(std) - img_res = img_perm * std + mean - return img_res.permute(2,0,1) - else: # img.dim() == 4 - assert img.size(1) == 3, 'img.size(1) shoule be 3 but "%d". (%s)' % (img.size(1), str(img.size())) - img_perm = img.permute(0,2,3,1) - mean = torch.Tensor(mean) - std = torch.Tensor(std) - img_res = img_perm * std + mean - return img_res.permute(0,3,1,2) - -class ColorMap(): - def __init__(self, basergb=[255,255,0]): - self.basergb = np.array(basergb) - def __call__(self, attnmap): - # attnmap: h, w. np.uint8. - # return: h, w, 4. np.uint8. - assert attnmap.dtype == np.uint8 - h, w = attnmap.shape - res = self.basergb.copy() - res = res[None][None].repeat(h, 0).repeat(w, 1) # h, w, 3 - attn1 = attnmap.copy()[..., None] # h, w, 1 - res = np.concatenate((res, attn1), axis=-1).astype(np.uint8) - return res - - -class COCOVisualizer(): - def __init__(self) -> None: - pass - - def visualize(self, img, tgt, caption=None, dpi=120, savedir=None, show_in_console=True): - """ - img: tensor(3, H, W) - tgt: make sure they are all on cpu. - must have items: 'image_id', 'boxes', 'size' - """ - plt.figure(dpi=dpi) - plt.rcParams['font.size'] = '5' - ax = plt.gca() - img = renorm(img).permute(1, 2, 0) - # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO': - # import ipdb; ipdb.set_trace() - ax.imshow(img) - - self.addtgt(tgt) - if show_in_console: - plt.show() - - if savedir is not None: - if caption is None: - savename = '{}/{}-{}.png'.format(savedir, int(tgt['image_id']), str(datetime.datetime.now()).replace(' ', '-')) - else: - savename = '{}/{}-{}-{}.png'.format(savedir, caption, int(tgt['image_id']), str(datetime.datetime.now()).replace(' ', '-')) - print("savename: {}".format(savename)) - os.makedirs(os.path.dirname(savename), exist_ok=True) - plt.savefig(savename) - plt.close() - - def addtgt(self, tgt): - """ - - tgt: dict. args: - - boxes: num_boxes, 4. xywh, [0,1]. - - box_label: num_boxes. 
- """ - assert 'boxes' in tgt - ax = plt.gca() - H, W = tgt['size'].tolist() - numbox = tgt['boxes'].shape[0] - - color = [] - polygons = [] - boxes = [] - for box in tgt['boxes'].cpu(): - unnormbbox = box * torch.Tensor([W, H, W, H]) - unnormbbox[:2] -= unnormbbox[2:] / 2 - [bbox_x, bbox_y, bbox_w, bbox_h] = unnormbbox.tolist() - boxes.append([bbox_x, bbox_y, bbox_w, bbox_h]) - poly = [[bbox_x, bbox_y], [bbox_x, bbox_y+bbox_h], [bbox_x+bbox_w, bbox_y+bbox_h], [bbox_x+bbox_w, bbox_y]] - np_poly = np.array(poly).reshape((4,2)) - polygons.append(Polygon(np_poly)) - c = (np.random.random((1, 3))*0.6+0.4).tolist()[0] - color.append(c) - - p = PatchCollection(polygons, facecolor=color, linewidths=0, alpha=0.1) - ax.add_collection(p) - p = PatchCollection(polygons, facecolor='none', edgecolors=color, linewidths=2) - ax.add_collection(p) - - - if 'box_label' in tgt: - assert len(tgt['box_label']) == numbox, f"{len(tgt['box_label'])} = {numbox}, " - for idx, bl in enumerate(tgt['box_label']): - _string = str(bl) - bbox_x, bbox_y, bbox_w, bbox_h = boxes[idx] - # ax.text(bbox_x, bbox_y, _string, color='black', bbox={'facecolor': 'yellow', 'alpha': 1.0, 'pad': 1}) - ax.text(bbox_x, bbox_y, _string, color='black', bbox={'facecolor': color[idx], 'alpha': 0.6, 'pad': 1}) - - if 'caption' in tgt: - ax.set_title(tgt['caption'], wrap=True) - - diff --git a/spaces/rorallitri/biomedical-language-models/logs/?k?o???.3???wnlog?.md b/spaces/rorallitri/biomedical-language-models/logs/?k?o???.3???wnlog?.md deleted file mode 100644 index c9b983d4aa20e7e3aaab94b6e124f20b74b5c0cb..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/?k?o???.3???wnlog?.md +++ /dev/null @@ -1,6 +0,0 @@ -

    ?k?o??׮?.3???wnlog?


    Download Zip ☆☆☆ https://tinurll.com/2uzmGG



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/rubensmau/Dov_Tzamir/data_driven_characters/corpus.py b/spaces/rubensmau/Dov_Tzamir/data_driven_characters/corpus.py deleted file mode 100644 index f6768a2350445797d6812ab6d743ee9bca36f6f2..0000000000000000000000000000000000000000 --- a/spaces/rubensmau/Dov_Tzamir/data_driven_characters/corpus.py +++ /dev/null @@ -1,86 +0,0 @@ -import json -import os - -from langchain import PromptTemplate, LLMChain -from langchain.chat_models import ChatOpenAI -from langchain.chains.summarize import load_summarize_chain -from langchain.text_splitter import RecursiveCharacterTextSplitter - -from data_driven_characters.constants import VERBOSE - - -def generate_docs(corpus, chunk_size, chunk_overlap): - """Generate docs from a corpus.""" - text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder( - chunk_size=chunk_size, chunk_overlap=chunk_overlap - ) - docs = text_splitter.create_documents([corpus]) - return docs - - -def load_docs(corpus_path, chunk_size, chunk_overlap): - """Load the corpus and split it into chunks.""" - - with open(corpus_path) as f: - corpus = f.read() - docs = generate_docs(corpus, chunk_size, chunk_overlap) - return docs - - -def generate_corpus_summaries(docs, summary_type="map_reduce"): - """Generate summaries of the story.""" - GPT3 = ChatOpenAI(model_name="gpt-3.5-turbo") - chain = load_summarize_chain( - GPT3, chain_type=summary_type, return_intermediate_steps=True, verbose=True - ) - summary = chain({"input_documents": docs}, return_only_outputs=True) - intermediate_summaries = summary["intermediate_steps"] - return intermediate_summaries - - -def get_corpus_summaries(docs, summary_type, cache_dir, force_refresh=False): - """Load the corpus summaries from cache or generate them.""" - if not os.path.exists(cache_dir) or force_refresh: - os.makedirs(cache_dir, exist_ok=True) - if VERBOSE: - print("Summaries do not exist. Generating summaries.") - intermediate_summaries = generate_corpus_summaries(docs, summary_type) - for i, intermediate_summary in enumerate(intermediate_summaries): - with open(os.path.join(cache_dir, f"summary_{i}.txt"), "w") as f: - f.write(intermediate_summary) - else: - if VERBOSE: - print("Summaries already exist. Loading summaries.") - intermediate_summaries = [] - for i in range(len(os.listdir(cache_dir))): - with open(os.path.join(cache_dir, f"summary_{i}.txt")) as f: - intermediate_summaries.append(f.read()) - return intermediate_summaries - - -def generate_characters(corpus_summaries, num_characters): - """Get a list of characters from a list of summaries.""" - GPT4 = ChatOpenAI(model_name="gpt-3.5-turbo") - characters_prompt_template = """Consider the following corpus. - --- - {corpus_summaries} - --- - Give a line-separated list of all the characters, ordered by importance, without punctuation. 
- """ - characters = LLMChain( - llm=GPT4, prompt=PromptTemplate.from_template(characters_prompt_template) - ).run(corpus_summaries="\n\n".join(corpus_summaries)) - # remove (, ), and " for each element of list - return characters.split("\n")[:num_characters] - - -def get_characters(corpus_summaries, num_characters, cache_dir, force_refresh=False): - cache_file = os.path.join(cache_dir, "characters.json") - if not os.path.exists(cache_file) or force_refresh: - characters = generate_characters(corpus_summaries, num_characters) - with open(cache_file, "w") as f: - json.dump(characters, f) - else: - with open(cache_file, "r") as f: - characters = json.load(f) - return characters diff --git a/spaces/safi842/FashionGen/netdissect/proggan.py b/spaces/safi842/FashionGen/netdissect/proggan.py deleted file mode 100644 index e37ae15f373ef6ad14279bb581042434c5563539..0000000000000000000000000000000000000000 --- a/spaces/safi842/FashionGen/netdissect/proggan.py +++ /dev/null @@ -1,299 +0,0 @@ -import torch, numpy, itertools -import torch.nn as nn -from collections import OrderedDict - - -def print_network(net, verbose=False): - num_params = 0 - for param in net.parameters(): - num_params += param.numel() - if verbose: - print(net) - print('Total number of parameters: {:3.3f} M'.format(num_params / 1e6)) - - -def from_pth_file(filename): - ''' - Instantiate from a pth file. - ''' - state_dict = torch.load(filename) - if 'state_dict' in state_dict: - state_dict = state_dict['state_dict'] - # Convert old version of parameter names - if 'features.0.conv.weight' in state_dict: - state_dict = state_dict_from_old_pt_dict(state_dict) - sizes = sizes_from_state_dict(state_dict) - result = ProgressiveGenerator(sizes=sizes) - result.load_state_dict(state_dict) - return result - -############################################################################### -# Modules -############################################################################### - -class ProgressiveGenerator(nn.Sequential): - def __init__(self, resolution=None, sizes=None, modify_sequence=None, - output_tanh=False): - ''' - A pytorch progessive GAN generator that can be converted directly - from either a tensorflow model or a theano model. It consists of - a sequence of convolutional layers, organized in pairs, with an - upsampling and reduction of channels at every other layer; and - then finally followed by an output layer that reduces it to an - RGB [-1..1] image. - - The network can be given more layers to increase the output - resolution. The sizes argument indicates the fieature depth at - each upsampling, starting with the input z: [input-dim, 4x4-depth, - 8x8-depth, 16x16-depth...]. The output dimension is 2 * 2**len(sizes) - - Some default architectures can be selected by supplying the - resolution argument instead. - - The optional modify_sequence function can be used to transform the - sequence of layers before the network is constructed. - - If output_tanh is set to True, the network applies a tanh to clamp - the output to [-1,1] before output; otherwise the output is unclamped. - ''' - assert (resolution is None) != (sizes is None) - if sizes is None: - sizes = { - 8: [512, 512, 512], - 16: [512, 512, 512, 512], - 32: [512, 512, 512, 512, 256], - 64: [512, 512, 512, 512, 256, 128], - 128: [512, 512, 512, 512, 256, 128, 64], - 256: [512, 512, 512, 512, 256, 128, 64, 32], - 1024: [512, 512, 512, 512, 512, 256, 128, 64, 32, 16] - }[resolution] - # Follow the schedule of upsampling given by sizes. 
- # layers are called: layer1, layer2, etc; then output_128x128 - sequence = [] - def add_d(layer, name=None): - if name is None: - name = 'layer%d' % (len(sequence) + 1) - sequence.append((name, layer)) - add_d(NormConvBlock(sizes[0], sizes[1], kernel_size=4, padding=3)) - add_d(NormConvBlock(sizes[1], sizes[1], kernel_size=3, padding=1)) - for i, (si, so) in enumerate(zip(sizes[1:-1], sizes[2:])): - add_d(NormUpscaleConvBlock(si, so, kernel_size=3, padding=1)) - add_d(NormConvBlock(so, so, kernel_size=3, padding=1)) - # Create an output layer. During training, the progressive GAN - # learns several such output layers for various resolutions; we - # just include the last (highest resolution) one. - dim = 4 * (2 ** (len(sequence) // 2 - 1)) - add_d(OutputConvBlock(sizes[-1], tanh=output_tanh), - name='output_%dx%d' % (dim, dim)) - # Allow the sequence to be modified - if modify_sequence is not None: - sequence = modify_sequence(sequence) - super().__init__(OrderedDict(sequence)) - - def forward(self, x): - # Convert vector input to 1x1 featuremap. - x = x.view(x.shape[0], x.shape[1], 1, 1) - return super().forward(x) - -class PixelNormLayer(nn.Module): - def __init__(self): - super(PixelNormLayer, self).__init__() - - def forward(self, x): - return x / torch.sqrt(torch.mean(x**2, dim=1, keepdim=True) + 1e-8) - -class DoubleResolutionLayer(nn.Module): - def forward(self, x): - x = nn.functional.interpolate(x, scale_factor=2, mode='nearest') - return x - -class WScaleLayer(nn.Module): - def __init__(self, size, fan_in, gain=numpy.sqrt(2)): - super(WScaleLayer, self).__init__() - self.scale = gain / numpy.sqrt(fan_in) # No longer a parameter - self.b = nn.Parameter(torch.randn(size)) - self.size = size - - def forward(self, x): - x_size = x.size() - x = x * self.scale + self.b.view(1, -1, 1, 1).expand( - x_size[0], self.size, x_size[2], x_size[3]) - return x - -class NormConvBlock(nn.Module): - def __init__(self, in_channels, out_channels, kernel_size, padding): - super(NormConvBlock, self).__init__() - self.norm = PixelNormLayer() - self.conv = nn.Conv2d( - in_channels, out_channels, kernel_size, 1, padding, bias=False) - self.wscale = WScaleLayer(out_channels, in_channels, - gain=numpy.sqrt(2) / kernel_size) - self.relu = nn.LeakyReLU(inplace=True, negative_slope=0.2) - - def forward(self, x): - x = self.norm(x) - x = self.conv(x) - x = self.relu(self.wscale(x)) - return x - -class NormUpscaleConvBlock(nn.Module): - def __init__(self, in_channels, out_channels, kernel_size, padding): - super(NormUpscaleConvBlock, self).__init__() - self.norm = PixelNormLayer() - self.up = DoubleResolutionLayer() - self.conv = nn.Conv2d( - in_channels, out_channels, kernel_size, 1, padding, bias=False) - self.wscale = WScaleLayer(out_channels, in_channels, - gain=numpy.sqrt(2) / kernel_size) - self.relu = nn.LeakyReLU(inplace=True, negative_slope=0.2) - - def forward(self, x): - x = self.norm(x) - x = self.up(x) - x = self.conv(x) - x = self.relu(self.wscale(x)) - return x - -class OutputConvBlock(nn.Module): - def __init__(self, in_channels, tanh=False): - super().__init__() - self.norm = PixelNormLayer() - self.conv = nn.Conv2d( - in_channels, 3, kernel_size=1, padding=0, bias=False) - self.wscale = WScaleLayer(3, in_channels, gain=1) - self.clamp = nn.Hardtanh() if tanh else (lambda x: x) - - def forward(self, x): - x = self.norm(x) - x = self.conv(x) - x = self.wscale(x) - x = self.clamp(x) - return x - -############################################################################### -# Conversion 
-############################################################################### - -def from_tf_parameters(parameters): - ''' - Instantiate from tensorflow variables. - ''' - state_dict = state_dict_from_tf_parameters(parameters) - sizes = sizes_from_state_dict(state_dict) - result = ProgressiveGenerator(sizes=sizes) - result.load_state_dict(state_dict) - return result - -def from_old_pt_dict(parameters): - ''' - Instantiate from old pytorch state dict. - ''' - state_dict = state_dict_from_old_pt_dict(parameters) - sizes = sizes_from_state_dict(state_dict) - result = ProgressiveGenerator(sizes=sizes) - result.load_state_dict(state_dict) - return result - -def sizes_from_state_dict(params): - ''' - In a progressive GAN, the number of channels can change after each - upsampling. This function reads the state dict to figure the - number of upsamplings and the channel depth of each filter. - ''' - sizes = [] - for i in itertools.count(): - pt_layername = 'layer%d' % (i + 1) - try: - weight = params['%s.conv.weight' % pt_layername] - except KeyError: - break - if i == 0: - sizes.append(weight.shape[1]) - if i % 2 == 0: - sizes.append(weight.shape[0]) - return sizes - -def state_dict_from_tf_parameters(parameters): - ''' - Conversion from tensorflow parameters - ''' - def torch_from_tf(data): - return torch.from_numpy(data.eval()) - - params = dict(parameters) - result = {} - sizes = [] - for i in itertools.count(): - resolution = 4 * (2 ** (i // 2)) - # Translate parameter names. For example: - # 4x4/Dense/weight -> layer1.conv.weight - # 32x32/Conv0_up/weight -> layer7.conv.weight - # 32x32/Conv1/weight -> layer8.conv.weight - tf_layername = '%dx%d/%s' % (resolution, resolution, - 'Dense' if i == 0 else 'Conv' if i == 1 else - 'Conv0_up' if i % 2 == 0 else 'Conv1') - pt_layername = 'layer%d' % (i + 1) - # Stop looping when we run out of parameters. - try: - weight = torch_from_tf(params['%s/weight' % tf_layername]) - except KeyError: - break - # Transpose convolution weights into pytorch format. - if i == 0: - # Convert dense layer to 4x4 convolution - weight = weight.view(weight.shape[0], weight.shape[1] // 16, - 4, 4).permute(1, 0, 2, 3).flip(2, 3) - sizes.append(weight.shape[0]) - elif i % 2 == 0: - # Convert inverse convolution to convolution - weight = weight.permute(2, 3, 0, 1).flip(2, 3) - else: - # Ordinary Conv2d conversion. - weight = weight.permute(3, 2, 0, 1) - sizes.append(weight.shape[1]) - result['%s.conv.weight' % (pt_layername)] = weight - # Copy bias vector. - bias = torch_from_tf(params['%s/bias' % tf_layername]) - result['%s.wscale.b' % (pt_layername)] = bias - # Copy just finest-grained ToRGB output layers. For example: - # ToRGB_lod0/weight -> output.conv.weight - i -= 1 - resolution = 4 * (2 ** (i // 2)) - tf_layername = 'ToRGB_lod0' - pt_layername = 'output_%dx%d' % (resolution, resolution) - result['%s.conv.weight' % pt_layername] = torch_from_tf( - params['%s/weight' % tf_layername]).permute(3, 2, 0, 1) - result['%s.wscale.b' % pt_layername] = torch_from_tf( - params['%s/bias' % tf_layername]) - # Return parameters - return result - -def state_dict_from_old_pt_dict(params): - ''' - Conversion from the old pytorch model layer names. 
- ''' - result = {} - sizes = [] - for i in itertools.count(): - old_layername = 'features.%d' % i - pt_layername = 'layer%d' % (i + 1) - try: - weight = params['%s.conv.weight' % (old_layername)] - except KeyError: - break - if i == 0: - sizes.append(weight.shape[0]) - if i % 2 == 0: - sizes.append(weight.shape[1]) - result['%s.conv.weight' % (pt_layername)] = weight - result['%s.wscale.b' % (pt_layername)] = params[ - '%s.wscale.b' % (old_layername)] - # Copy the output layers. - i -= 1 - resolution = 4 * (2 ** (i // 2)) - pt_layername = 'output_%dx%d' % (resolution, resolution) - result['%s.conv.weight' % pt_layername] = params['output.conv.weight'] - result['%s.wscale.b' % pt_layername] = params['output.wscale.b'] - # Return parameters and also network architecture sizes. - return result - diff --git a/spaces/sanaghani12/Gradio-Huggingface/README.md b/spaces/sanaghani12/Gradio-Huggingface/README.md deleted file mode 100644 index 0ac93c119e1376cc9416c4ee1eb05fb37e8a4328..0000000000000000000000000000000000000000 --- a/spaces/sanaghani12/Gradio-Huggingface/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Gradio Huggingface -emoji: 👁 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/scedlatioru/img-to-music/example/AVG PC TuneUp 16.76.3.18604 X64 Key By Zuket Creation.md b/spaces/scedlatioru/img-to-music/example/AVG PC TuneUp 16.76.3.18604 X64 Key By Zuket Creation.md deleted file mode 100644 index 868250d349ac3d575a6a142677f09c7305e2403b..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/AVG PC TuneUp 16.76.3.18604 X64 Key By Zuket Creation.md +++ /dev/null @@ -1,6 +0,0 @@ -

    AVG PC TuneUp 16.76.3.18604 x64 Key By Zuket Creation


DOWNLOAD: https://gohhs.com/2uEzOt



    - - 8a78ff9644
    -
    -
    -

    diff --git a/spaces/scedlatioru/img-to-music/example/Mere Brother Ki Dulhan English Dubbed 720p Torrent HOT Download.md b/spaces/scedlatioru/img-to-music/example/Mere Brother Ki Dulhan English Dubbed 720p Torrent HOT Download.md deleted file mode 100644 index d94e2a28e270f0f90de4401a380bc944218c8406..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Mere Brother Ki Dulhan English Dubbed 720p Torrent HOT Download.md +++ /dev/null @@ -1,123 +0,0 @@ - -

    Mere Brother Ki Dulhan English Dubbed 720p Torrent Download - A Guide for Movie Lovers

    - -

    If you are looking for a fun and entertaining movie to watch, you might want to check out Mere Brother Ki Dulhan, a 2011 Bollywood romantic comedy film directed by Ali Abbas Zafar. The movie stars Imran Khan, Katrina Kaif and Ali Zafar in the lead roles, and tells the story of Kush, who falls in love with his brother's fiancée, Dimple, who happens to be the craziest and wackiest girl he has ever met.

    - -

    The movie is full of hilarious and unpredictable situations, as Kush tries to deal with his feelings for Dimple, while also trying to keep his brother happy and unaware of his dilemma. The movie also features some catchy songs and dances, as well as some cameo appearances by John Abraham and Salman Khan.

    -

    Mere Brother Ki Dulhan English Dubbed 720p Torrent Download


Download Zip: https://gohhs.com/2uEAzU



    - -

    If you want to watch this movie in English, you can download it from various torrent sites that offer high-quality 720p versions. However, before you do that, you should be aware of some important things that will help you enjoy the movie better and avoid any legal issues.

    - -

    Why Download Mere Brother Ki Dulhan English Dubbed 720p Torrent?

    - -

    There are several reasons why you might want to download Mere Brother Ki Dulhan English Dubbed 720p Torrent instead of watching it online or buying a DVD. Here are some of them:

    - -
      -
    • You can watch the movie anytime and anywhere you want, without any interruptions or ads.
    • -
    • You can save money and time by not having to pay for a subscription or a rental fee.
    • -
    • You can enjoy the movie in high-definition quality, with clear audio and video.
    • -
    • You can choose the language option that suits you best, whether it is Hindi or English.
    • -
    • You can share the movie with your friends and family, and watch it together.
    • -
    - -

    How to Download Mere Brother Ki Dulhan English Dubbed 720p Torrent Safely?

    - -

    While downloading Mere Brother Ki Dulhan English Dubbed 720p Torrent might seem like a simple and convenient option, you should also be careful about some potential risks and challenges that come with it. Here are some tips on how to download the movie safely and legally:

    - -
      -
    • Use a reliable and reputable torrent site that offers verified and high-quality torrents. You can find such sites by reading reviews and ratings from other users.
    • -
    • Use a VPN (Virtual Private Network) service that will hide your IP address and encrypt your data. This will protect you from hackers, malware, viruses, and legal action from your ISP or authorities.
    • -
• Use antivirus software that will scan your downloaded files for any malicious or harmful content. This will prevent your device from getting infected or damaged.
    • -
    • Use a torrent client that will allow you to manage your downloads efficiently and securely. You can choose from various options such as uTorrent, BitTorrent, qBittorrent, etc.
    • -
    • Respect the copyright laws and regulations of your country and region. You should not download or distribute any content that is illegal or infringes on the rights of the creators or owners.
    • -
    - -

    What are Some Alternatives to Mere Brother Ki Dulhan English Dubbed 720p Torrent Download?

    - -

    If you are not comfortable with downloading Mere Brother Ki Dulhan English Dubbed 720p Torrent, or if you cannot find a good torrent site or file, you can also try some other ways to watch the movie in English. Here are some alternatives:

    - -
      -
    • Watch the movie online on streaming platforms that offer legal and licensed content. You can find such platforms by searching for them on Google or other search engines.
    • -
    • Buy or rent the DVD or Blu-ray of the movie from online or offline stores that sell original and authorized copies. You can find such stores by looking for their official websites or social media pages.
    • -
    • Watch the movie on TV channels that broadcast Bollywood movies in English or with subtitles. You can find such channels by checking their schedules or guides online or on your TV.
    • -
    - -

    Conclusion

    - -

    Mere Brother Ki Dulhan is a great movie to watch if you are looking for a laugh-out-loud romantic comedy with a twist. You can download it from torrent sites in English dubbed 720p quality, but make sure you do it safely and legally. Alternatively, you can also watch it online, buy or rent it, or catch it on TV. Whatever option you choose, we hope you enjoy the movie!

    -

    What is the Plot of Mere Brother Ki Dulhan?

    - -

    Mere Brother Ki Dulhan is a movie that revolves around the lives of three main characters: Kush, Luv and Dimple. Kush (Imran Khan) is a young and successful filmmaker who lives in Mumbai. Luv (Ali Zafar) is his elder brother who works as a banker in London. Dimple (Katrina Kaif) is a free-spirited and rebellious girl who dreams of becoming a rock star.

    - -

    The movie begins with Luv breaking up with his girlfriend Piyali (Tara D'Souza) and asking Kush to find him a suitable bride from India. Kush agrees and starts searching for the perfect girl for his brother. He meets several prospective brides from different backgrounds and families, but none of them impress him. Finally, he comes across Dimple, who he had met years ago at a college festival. He remembers her as a fun-loving and adventurous girl who had impressed him with her singing skills.

    - -

    Kush decides that Dimple is the ideal match for his brother and arranges a meeting between them. Luv and Dimple hit it off instantly and agree to get married. Both the families are happy and start preparing for the wedding. However, things take a dramatic turn when Kush realizes that he has fallen in love with Dimple, his brother's dulhan (bride). He tries to suppress his feelings, but Dimple also confesses that she loves him too. They decide to elope, but their plan is foiled by Luv, who finds out about their affair.

    -

    - -

    What follows is a series of hilarious and chaotic events, as Kush and Dimple try to convince Luv to let them be together, while also dealing with their respective families and friends. The movie ends with a twist that reveals the true intentions of Luv and Piyali, and how they help Kush and Dimple to unite.

    - -

    What are the Benefits of Watching Mere Brother Ki Dulhan English Dubbed 720p Torrent?

    - -

    Watching Mere Brother Ki Dulhan English Dubbed 720p Torrent can offer you many benefits, such as:

    - -
      -
    • You can enjoy a fun and entertaining movie that will make you laugh and smile.
    • -
    • You can learn about the culture and traditions of India, especially the wedding rituals and ceremonies.
    • -
    • You can appreciate the performances of the talented actors and actresses, who bring their characters to life with their charm and charisma.
    • -
    • You can listen to some catchy and melodious songs and music, composed by Sohail Sen and sung by various artists.
    • -
    • You can admire the beautiful scenery and locations of India, such as Delhi, Agra, Himachal Pradesh and Punjab.
    • -
    - -

    How to Watch Mere Brother Ki Dulhan English Dubbed 720p Torrent Online?

    - -

    If you want to watch Mere Brother Ki Dulhan English Dubbed 720p Torrent online, you can follow these simple steps:

    - -
      -
    1. Download a torrent client software that will allow you to download torrent files from torrent sites.
    2. -
    3. Go to a torrent site that offers verified and high-quality torrents for Mere Brother Ki Dulhan English Dubbed 720p Torrent Download. You can use the search bar or browse through the categories to find the movie.
    4. -
    5. Select the torrent file that has the most seeds and peers, as this will ensure faster download speed and better quality.
    6. -
    7. Click on the download button or magnet link to start downloading the torrent file to your device.
    8. -
    9. Once the download is complete, open the torrent file with your torrent client software and start watching the movie.
    10. -
    - -

Note: You should always use a VPN service when downloading torrents, as this will protect your privacy and security online. You should also scan your downloaded files with antivirus software before opening them.

    -

    Who are the Cast and Crew of Mere Brother Ki Dulhan?

    - -

    Mere Brother Ki Dulhan is a movie that features a talented and popular cast and crew, who have worked on many other successful Bollywood projects. Here are some of the main people behind the movie:

    - -
      -
    • Ali Abbas Zafar: He is the director, writer and co-producer of the movie. He made his debut with this movie, and later went on to direct other hit movies such as Gunday, Sultan, Tiger Zinda Hai and Bharat.
    • -
    • Imran Khan: He is the actor who plays the role of Kush Agnihotri, the younger brother who falls in love with his brother's bride. He is known for his roles in movies such as Jaane Tu... Ya Jaane Na, I Hate Luv Storys, Delhi Belly and Ek Main Aur Ekk Tu.
    • -
    • Katrina Kaif: She is the actress who plays the role of Dimple Dixit, the crazy and wacky girl who is engaged to Luv. She is one of the most popular and highest-paid actresses in Bollywood, and has starred in movies such as Namastey London, Singh Is Kinng, Zindagi Na Milegi Dobara and Dhoom 3.
    • -
    • Ali Zafar: He is the actor who plays the role of Luv Agnihotri, the elder brother who lives in London and wants to get married. He is also a singer, songwriter and musician, and has acted in movies such as Tere Bin Laden, Chashme Baddoor and Kill Dil.
    • -
    • Sohail Sen: He is the music director and composer of the movie. He has composed music for movies such as What's Your Raashee?, Gunday, Ek Tha Tiger and Housefull 4.
    • -
    - -

    What are Some Reviews and Ratings of Mere Brother Ki Dulhan?

    - -

    Mere Brother Ki Dulhan is a movie that has received mixed reviews and ratings from critics and audiences alike. Here are some of them:

    - -
      -
    • The movie has a 5.9/10 rating on IMDb, based on 9,749 user ratings.
    • -
    • The movie has a 50% rating on Rotten Tomatoes, based on 6 critic reviews.
    • -
    • The movie has a 46% rating on Rotten Tomatoes, based on 1,097 audience ratings.
    • -
    • The movie has a 3/5 rating on Times of India, based on a review by Nikhat Kazmi.
    • -
    • The movie has a 2/5 rating on Rediff.com, based on a review by Sukanya Verma.
    • -
    - -

    What are Some Trivia and Facts about Mere Brother Ki Dulhan?

    - -

    Mere Brother Ki Dulhan is a movie that has some interesting trivia and facts associated with it. Here are some of them:

    - -
      -
    • The movie was originally titled Mere Brother Ki Shaadi (My Brother's Wedding), but was changed to Mere Brother Ki Dulhan (My Brother's Bride) to avoid confusion with another movie called Band Baaja Baaraat.
    • -
    • The movie was inspired by the Hollywood movie Dan in Real Life (2007), which also had a similar plot of a man falling in love with his brother's girlfriend.
    • -
    • The movie was shot in various locations across India, such as Delhi, Agra, Himachal Pradesh and Punjab. Some scenes were also shot in London.
    • -
    • The movie features a cameo appearance by Salman Khan, who plays himself in a song sequence. Salman Khan is also the ex-boyfriend of Katrina Kaif in real life.
    • -
    • The movie was a commercial success at the box office, earning over ₹1 billion worldwide. It was also nominated for several awards, such as Filmfare Awards, IIFA Awards and Zee Cine Awards.
    • -
    -

    Conclusion

    - -

    Mere Brother Ki Dulhan is a movie that you should watch if you love romantic comedies with a twist. The movie has a fun and engaging plot, a talented and charming cast, and some catchy and melodious songs. You can download the movie from torrent sites in English dubbed 720p quality, but make sure you do it safely and legally. You can also watch the movie online, buy or rent it, or catch it on TV. Whatever option you choose, we hope you have a great time watching the movie!

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/sczhou/ProPainter/core/lr_scheduler.py b/spaces/sczhou/ProPainter/core/lr_scheduler.py deleted file mode 100644 index 1bd1341cdcc64aa1c2a416b837551590ded4a43d..0000000000000000000000000000000000000000 --- a/spaces/sczhou/ProPainter/core/lr_scheduler.py +++ /dev/null @@ -1,112 +0,0 @@ -""" - LR scheduler from BasicSR https://github.com/xinntao/BasicSR -""" -import math -from collections import Counter -from torch.optim.lr_scheduler import _LRScheduler - - -class MultiStepRestartLR(_LRScheduler): - """ MultiStep with restarts learning rate scheme. - Args: - optimizer (torch.nn.optimizer): Torch optimizer. - milestones (list): Iterations that will decrease learning rate. - gamma (float): Decrease ratio. Default: 0.1. - restarts (list): Restart iterations. Default: [0]. - restart_weights (list): Restart weights at each restart iteration. - Default: [1]. - last_epoch (int): Used in _LRScheduler. Default: -1. - """ - def __init__(self, - optimizer, - milestones, - gamma=0.1, - restarts=(0, ), - restart_weights=(1, ), - last_epoch=-1): - self.milestones = Counter(milestones) - self.gamma = gamma - self.restarts = restarts - self.restart_weights = restart_weights - assert len(self.restarts) == len( - self.restart_weights), 'restarts and their weights do not match.' - super(MultiStepRestartLR, self).__init__(optimizer, last_epoch) - - def get_lr(self): - if self.last_epoch in self.restarts: - weight = self.restart_weights[self.restarts.index(self.last_epoch)] - return [ - group['initial_lr'] * weight - for group in self.optimizer.param_groups - ] - if self.last_epoch not in self.milestones: - return [group['lr'] for group in self.optimizer.param_groups] - return [ - group['lr'] * self.gamma**self.milestones[self.last_epoch] - for group in self.optimizer.param_groups - ] - - -def get_position_from_periods(iteration, cumulative_period): - """Get the position from a period list. - It will return the index of the right-closest number in the period list. - For example, the cumulative_period = [100, 200, 300, 400], - if iteration == 50, return 0; - if iteration == 210, return 2; - if iteration == 300, return 2. - Args: - iteration (int): Current iteration. - cumulative_period (list[int]): Cumulative period list. - Returns: - int: The position of the right-closest number in the period list. - """ - for i, period in enumerate(cumulative_period): - if iteration <= period: - return i - - -class CosineAnnealingRestartLR(_LRScheduler): - """ Cosine annealing with restarts learning rate scheme. - An example of config: - periods = [10, 10, 10, 10] - restart_weights = [1, 0.5, 0.5, 0.5] - eta_min=1e-7 - It has four cycles, each has 10 iterations. At 10th, 20th, 30th, the - scheduler will restart with the weights in restart_weights. - Args: - optimizer (torch.nn.optimizer): Torch optimizer. - periods (list): Period for each cosine anneling cycle. - restart_weights (list): Restart weights at each restart iteration. - Default: [1]. - eta_min (float): The mimimum lr. Default: 0. - last_epoch (int): Used in _LRScheduler. Default: -1. - """ - def __init__(self, - optimizer, - periods, - restart_weights=(1, ), - eta_min=1e-7, - last_epoch=-1): - self.periods = periods - self.restart_weights = restart_weights - self.eta_min = eta_min - assert (len(self.periods) == len(self.restart_weights) - ), 'periods and restart_weights should have the same length.' 
- self.cumulative_period = [ - sum(self.periods[0:i + 1]) for i in range(0, len(self.periods)) - ] - super(CosineAnnealingRestartLR, self).__init__(optimizer, last_epoch) - - def get_lr(self): - idx = get_position_from_periods(self.last_epoch, - self.cumulative_period) - current_weight = self.restart_weights[idx] - nearest_restart = 0 if idx == 0 else self.cumulative_period[idx - 1] - current_period = self.periods[idx] - - return [ - self.eta_min + current_weight * 0.5 * (base_lr - self.eta_min) * - (1 + math.cos(math.pi * ( - (self.last_epoch - nearest_restart) / current_period))) - for base_lr in self.base_lrs - ] diff --git a/spaces/seanbenhur/tamilatis/app.py b/spaces/seanbenhur/tamilatis/app.py deleted file mode 100644 index da74186a9ebfcfa11e2e5ada20059874ec817663..0000000000000000000000000000000000000000 --- a/spaces/seanbenhur/tamilatis/app.py +++ /dev/null @@ -1,40 +0,0 @@ -from tamilatis.predict import TamilATISPredictor -from tamilatis.model import JointATISModel -import numpy as np -from sklearn.preprocessing import LabelEncoder -import gradio as gr - - - -model_name = "microsoft/xlm-align-base" -tokenizer_name = "microsoft/xlm-align-base" -num_labels = 78 -num_intents = 23 -checkpoint_path = "models/xlm_align_base.bin" -intent_encoder_path = "models/intent_classes.npy" -ner_encoder_path = "models/ner_classes.npy" - -def predict_function(text): - label_encoder = LabelEncoder() - label_encoder.classes_ = np.load(ner_encoder_path) - - intent_encoder = LabelEncoder() - intent_encoder.classes_ = np.load(intent_encoder_path) - - model = JointATISModel(model_name,num_labels,num_intents) - predictor = TamilATISPredictor(model,checkpoint_path,tokenizer_name, - label_encoder,intent_encoder,num_labels) - slot_prediction, intent_preds = predictor.get_predictions(text) - return slot_prediction, intent_preds - - -title = "MultiTask Learning in Intent Detection and Slot Prediction for Tamil Conversational Dialogues using Multilingual Pretrained Models" -article="This is a demo for the MultiTask model trained on Tamil Translated conversations from ATIS dataset. The code can be found [here](https://github.com/seanbenhur/tamilatis). Made with ❤ by [Sean Benhur](https://www.linkedin.com/in/seanbenhur/)" -examples = ["ஹைதராபாத்தில் இருந்து உதய்பூர் செல்லும் விமானங்களைக் காட்டு", "எனக்கு டெல்லியில் இருந்து சென்னைக்கு விமானம் வேண்டும்"] - -intent_output = gr.outputs.Textbox(type="auto",label="Intent") -slots_output = gr.outputs.Textbox(type="auto",label="Slots") -iface = gr.Interface(fn=predict_function,article=article, inputs="text", title=title,outputs=[intent_output,slots_output], -examples=examples) -iface.launch() - diff --git a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/transducer/loss.py b/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/transducer/loss.py deleted file mode 100644 index 543049cc0ec87859c8e88a746a3cb6ebfbd02eec..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/transducer/loss.py +++ /dev/null @@ -1,78 +0,0 @@ -#!/usr/bin/env python3 - -"""Transducer loss module.""" - -import torch - - -class TransLoss(torch.nn.Module): - """Transducer loss module. - - Args: - trans_type (str): type of transducer implementation to calculate loss. 
- blank_id (int): blank symbol id - """ - - def __init__(self, trans_type, blank_id): - """Construct an TransLoss object.""" - super().__init__() - - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - - if trans_type == "warp-transducer": - from warprnnt_pytorch import RNNTLoss - - self.trans_loss = RNNTLoss(blank=blank_id) - elif trans_type == "warp-rnnt": - if device.type == "cuda": - try: - from warp_rnnt import rnnt_loss - - self.trans_loss = rnnt_loss - except ImportError: - raise ImportError( - "warp-rnnt is not installed. Please re-setup" - " espnet or use 'warp-transducer'" - ) - else: - raise ValueError("warp-rnnt is not supported in CPU mode") - - self.trans_type = trans_type - self.blank_id = blank_id - - def forward(self, pred_pad, target, pred_len, target_len): - """Compute path-aware regularization transducer loss. - - Args: - pred_pad (torch.Tensor): Batch of predicted sequences - (batch, maxlen_in, maxlen_out+1, odim) - target (torch.Tensor): Batch of target sequences (batch, maxlen_out) - pred_len (torch.Tensor): batch of lengths of predicted sequences (batch) - target_len (torch.tensor): batch of lengths of target sequences (batch) - - Returns: - loss (torch.Tensor): transducer loss - - """ - dtype = pred_pad.dtype - if dtype != torch.float32: - # warp-transducer and warp-rnnt only support float32 - pred_pad = pred_pad.to(dtype=torch.float32) - - if self.trans_type == "warp-rnnt": - log_probs = torch.log_softmax(pred_pad, dim=-1) - - loss = self.trans_loss( - log_probs, - target, - pred_len, - target_len, - reduction="mean", - blank=self.blank_id, - gather=True, - ) - else: - loss = self.trans_loss(pred_pad, target, pred_len, target_len) - loss = loss.to(dtype=dtype) - - return loss diff --git a/spaces/segments-tobias/conex/espnet2/layers/time_warp.py b/spaces/segments-tobias/conex/espnet2/layers/time_warp.py deleted file mode 100644 index 52574aadbf98a7cbc9f2585a3ef7fe541b2af252..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/layers/time_warp.py +++ /dev/null @@ -1,94 +0,0 @@ -from distutils.version import LooseVersion - -import torch - -from espnet.nets.pytorch_backend.nets_utils import pad_list - - -if LooseVersion(torch.__version__) >= LooseVersion("1.1"): - DEFAULT_TIME_WARP_MODE = "bicubic" -else: - # pytorch1.0 doesn't implement bicubic - DEFAULT_TIME_WARP_MODE = "bilinear" - - -def time_warp(x: torch.Tensor, window: int = 80, mode: str = DEFAULT_TIME_WARP_MODE): - """Time warping using torch.interpolate. 
- - Args: - x: (Batch, Time, Freq) - window: time warp parameter - mode: Interpolate mode - """ - - # bicubic supports 4D or more dimension tensor - org_size = x.size() - if x.dim() == 3: - # x: (Batch, Time, Freq) -> (Batch, 1, Time, Freq) - x = x[:, None] - - t = x.shape[2] - if t - window <= window: - return x.view(*org_size) - - center = torch.randint(window, t - window, (1,))[0] - warped = torch.randint(center - window, center + window, (1,))[0] + 1 - - # left: (Batch, Channel, warped, Freq) - # right: (Batch, Channel, time - warped, Freq) - left = torch.nn.functional.interpolate( - x[:, :, :center], (warped, x.shape[3]), mode=mode, align_corners=False - ) - right = torch.nn.functional.interpolate( - x[:, :, center:], (t - warped, x.shape[3]), mode=mode, align_corners=False - ) - - if x.requires_grad: - x = torch.cat([left, right], dim=-2) - else: - x[:, :, :warped] = left - x[:, :, warped:] = right - - return x.view(*org_size) - - -class TimeWarp(torch.nn.Module): - """Time warping using torch.interpolate. - - Args: - window: time warp parameter - mode: Interpolate mode - """ - - def __init__(self, window: int = 80, mode: str = DEFAULT_TIME_WARP_MODE): - super().__init__() - self.window = window - self.mode = mode - - def extra_repr(self): - return f"window={self.window}, mode={self.mode}" - - def forward(self, x: torch.Tensor, x_lengths: torch.Tensor = None): - """Forward function. - - Args: - x: (Batch, Time, Freq) - x_lengths: (Batch,) - """ - - if x_lengths is None or all(le == x_lengths[0] for le in x_lengths): - # Note that applying same warping for each sample - y = time_warp(x, window=self.window, mode=self.mode) - else: - # FIXME(kamo): I have no idea to batchify Timewarp - ys = [] - for i in range(x.size(0)): - _y = time_warp( - x[i][None, : x_lengths[i]], - window=self.window, - mode=self.mode, - )[0] - ys.append(_y) - y = pad_list(ys, 0.0) - - return y, x_lengths diff --git a/spaces/shibing624/ChatPDF/assets/html/appearance_switcher.html b/spaces/shibing624/ChatPDF/assets/html/appearance_switcher.html deleted file mode 100644 index 9375071fbdfda7bfd622d7f7bd2dfdd0c494341b..0000000000000000000000000000000000000000 --- a/spaces/shibing624/ChatPDF/assets/html/appearance_switcher.html +++ /dev/null @@ -1,11 +0,0 @@ -
    - - {label} - - - - -
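For context on the espnet2 `time_warp` helper whose deletion appears above: it stretches one side of a spectrogram and squeezes the other around a randomly chosen centre frame. Below is a minimal, self-contained PyTorch sketch of the same idea; the tensor shapes and the window size are illustrative assumptions rather than values taken from any ESPnet recipe.

```python
import torch
import torch.nn.functional as F

def simple_time_warp(spec: torch.Tensor, window: int = 40) -> torch.Tensor:
    """Warp a (batch, time, freq) spectrogram along the time axis.

    A centre frame is picked at random and shifted by up to `window` frames;
    the two halves are resized with bilinear interpolation so the total
    number of frames stays the same.
    """
    _, time, freq = spec.shape
    if time - window <= window:
        return spec  # clip too short to warp safely
    x = spec[:, None]  # (batch, 1, time, freq) so 2-D interpolation applies
    center = int(torch.randint(window, time - window, (1,)))
    warped = int(torch.randint(center - window, center + window, (1,))) + 1
    left = F.interpolate(x[:, :, :center], size=(warped, freq),
                         mode="bilinear", align_corners=False)
    right = F.interpolate(x[:, :, center:], size=(time - warped, freq),
                          mode="bilinear", align_corners=False)
    return torch.cat([left, right], dim=2)[:, 0]

# Example: a batch of two random 100-frame, 80-bin "spectrograms".
features = torch.randn(2, 100, 80)
print(simple_time_warp(features).shape)  # torch.Size([2, 100, 80])
```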
    diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Digitbin Presents Kinemaster Diamond Lite APK for Android - The Best Video Editor with No Ads.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Digitbin Presents Kinemaster Diamond Lite APK for Android - The Best Video Editor with No Ads.md deleted file mode 100644 index 35b29c55938248b400ecd48d9a04fd8cfe0cb4ff..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Digitbin Presents Kinemaster Diamond Lite APK for Android - The Best Video Editor with No Ads.md +++ /dev/null @@ -1,113 +0,0 @@ - -

    Kinemaster Diamond Lite APK Download Digitbin: A Complete Guide

    -

    If you are looking for a powerful and easy-to-use video editing app for your Android device, you might want to check out Kinemaster Diamond Lite APK. This is a modified version of the popular Kinemaster app that offers premium features for free. You can download it from Digitbin, a reliable source of modded apps and games. In this article, we will show you how to download, install, and use Kinemaster Diamond Lite APK for your video editing needs.

    -

    Features of Kinemaster Diamond Lite APK

    -

    Kinemaster Diamond Lite APK is a feature-rich video editing app that lets you create stunning videos with just a few taps. Here are some of the features that you can enjoy with this app:

    -

    kinemaster diamond lite apk download digitbin


    DOWNLOAD »»» https://ssurll.com/2uO1rD



    -

    No watermark

    -

    One of the most annoying things about using free video editing apps is the watermark that they put on your videos. With Kinemaster Diamond Lite APK, you can remove the watermark and make your videos look more professional.

    -

    No ads

    -

    Another benefit of using Kinemaster Diamond Lite APK is that you don't have to deal with annoying ads that interrupt your workflow. You can enjoy a smooth and uninterrupted video editing experience with this app.

    -

    Chroma key

    -

    If you want to create amazing green screen effects, you can use the chroma key feature of Kinemaster Diamond Lite APK. This feature allows you to change the background of your videos with any image or video that you want.
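If you are curious about what a chroma-key effect actually does, the sketch below shows the basic idea in Python with OpenCV. It is only an illustration of the technique, not Kinemaster's own code; the file names and the green colour range are assumptions you would tune for your own footage.

```python
import cv2
import numpy as np

# Placeholder file names: replace with your own frame and background images.
frame = cv2.imread("green_screen_frame.png")
background = cv2.imread("new_background.png")
background = cv2.resize(background, (frame.shape[1], frame.shape[0]))

# Mark every pixel whose colour falls inside a rough "green" range in HSV.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
lower_green = np.array([35, 80, 80])     # assumed thresholds, tune per clip
upper_green = np.array([85, 255, 255])
mask = cv2.inRange(hsv, lower_green, upper_green)

# Keep the subject from the original frame and the new background elsewhere.
subject = cv2.bitwise_and(frame, frame, mask=cv2.bitwise_not(mask))
backdrop = cv2.bitwise_and(background, background, mask=mask)
composite = cv2.add(subject, backdrop)
cv2.imwrite("composite.png", composite)
```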

    -

    Premium assets

    -

    Kinemaster Diamond Lite APK also gives you access to a huge library of premium assets that you can use for your videos. You can choose from thousands of transitions, effects, stickers, text styles, fonts, music tracks, and more.

    -

    Multiple layers

    -

    Another feature that makes Kinemaster Diamond Lite APK stand out from other video editing apps is the ability to add multiple layers to your videos. You can add up to 10 layers of video, audio, images, text, and stickers to create complex and creative videos.

    -

    How to download and install Kinemaster Diamond Lite APK from Digitbin

    -

    If you are interested in trying out Kinemaster Diamond Lite APK, you can download it from Digitbin, a trusted website that provides modded apps and games. Here are the steps that you need to follow:

    -

    Step 1: Visit the Digitbin website and search for Kinemaster Diamond Lite APK

    -

First, you need to visit the Digitbin website. Once you are on the website, you can use the search bar to look for Kinemaster Diamond Lite APK. You will see a list of results that match your query. Click on the one that says "Kinemaster Digitbin Mod APK | No Watermark | Premium".

    Step 2: Download the APK file and enable unknown sources on your device

    -

    Next, you need to download the APK file of Kinemaster Diamond Lite from the Digitbin website. You can do this by clicking on the download button and waiting for the file to be downloaded. Once the file is downloaded, you need to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To enable unknown sources, you can go to your device settings, security, and toggle on the option that says "allow installation of apps from unknown sources".

    -


    -

    Step 3: Install the APK file and launch the app

    -

    Finally, you need to install the APK file of Kinemaster Diamond Lite on your device. You can do this by locating the file in your downloads folder and tapping on it. You will see a prompt that asks you to confirm the installation. Tap on "install" and wait for the app to be installed. Once the app is installed, you can launch it by tapping on its icon on your home screen or app drawer.
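As an alternative to tapping through the file manager, you can also sideload an APK from a computer with Android's standard adb tool. The short Python sketch below is only an illustration: it assumes adb is installed and on your PATH, USB debugging is enabled on the phone, and the APK path is a placeholder for wherever you saved the file.

```python
import subprocess

APK_PATH = "Downloads/kinemaster-diamond-lite.apk"  # placeholder path

# List connected devices first so a failed install is easier to diagnose.
subprocess.run(["adb", "devices"], check=True)

# "adb install -r" installs the APK, replacing an older copy if one exists.
result = subprocess.run(["adb", "install", "-r", APK_PATH],
                        capture_output=True, text=True)
print(result.stdout or result.stderr)
```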

    -

    How to use Kinemaster Diamond Lite APK for video editing

    -

    Now that you have downloaded and installed Kinemaster Diamond Lite APK, you can start using it for your video editing projects. Here are the steps that you need to follow:

    -

    Step 1: Select a project and import media files

    -

    When you open Kinemaster Diamond Lite APK, you will see a screen that asks you to select a project. You can choose from different aspect ratios, such as 16:9, 9:16, or 1:1. You can also choose from different themes, such as basic, travel, or music. After selecting a project, you will see a timeline where you can import media files. You can import videos, photos, or audio files from your device gallery or from the Kinemaster asset store.

    -

    Step 2: Add transitions, effects, stickers, text, and music

    -

    After importing media files, you can start adding transitions, effects, stickers, text, and music to your video. You can do this by using the toolbar at the bottom of the screen. You can select from various options, such as fade, slide, wipe, or zoom for transitions; blur, mosaic, mirror, or glitch for effects; emoji, shapes, or animals for stickers; and fonts, colors, or styles for text. You can also add music tracks from your device or from the Kinemaster asset store.

    -

    Step 3: Adjust the speed, volume, color, and crop of your video

    -

    After adding transitions, effects, stickers, text, and music to your video, you can adjust the speed, volume, color, and crop of your video. You can do this by using the toolbar at the top of the screen. You can select from various options, such as slow motion, fast forward, reverse, or loop for speed; mute, fade in/out or adjust for volume; brightness, contrast, saturation, or hue for color; and free, 16:9, 9:16, or 1:1 for crop. You can also use the slider to trim or split your video clips.
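If you ever want to make the same kinds of adjustments outside the app, for example to batch-process several clips, the hedged sketch below calls FFmpeg from Python. The file names and filter values are arbitrary placeholders, not settings taken from Kinemaster.

```python
import subprocess

# Speed the clip up 1.5x (video and audio), brighten it slightly,
# and crop it to a centred 16:9 window. All values are placeholders.
subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-filter:v", "setpts=PTS/1.5,eq=brightness=0.05:saturation=1.2,"
                 "crop=in_w:in_w*9/16",
    "-filter:a", "atempo=1.5,volume=0.9",
    "output.mp4",
], check=True)
```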

    -

    Step 4: Export and share your video

    -

    After adjusting the speed, volume, color, and crop of your video, you can export and share your video. You can do this by tapping on the export button at the top right corner of the screen. You can choose from different resolutions, such as 1080p, 720p, or 480p. You can also choose from different frame rates, such as 30 fps, 25 fps, or 24 fps. You can also enable or disable the hardware encoder option. After exporting your video, you can share it with your friends or upload it to social media platforms, such as YouTube, Facebook, or Instagram.

    -

    Pros and cons of Kinemaster Diamond Lite APK

    -

    Kinemaster Diamond Lite APK is a great video editing app that offers many advantages. However, it also has some drawbacks that you should be aware of. Here are some of the pros and cons of Kinemaster Diamond Lite APK:

    -

    Pros

    -
      -
    • It is free and has no watermark or ads.
    • -
    • It has a user-friendly interface and a simple workflow.
    • -
    • It has a lot of features and premium assets that you can use for your videos.
    • -
    • It supports multiple layers and chroma key.
    • -
    • It allows you to export and share your videos in high quality.
    • -
    -

    Cons

    -
      -
    • It is not available on the Google Play Store and requires manual installation.
    • -
    • It may not be compatible with some devices or Android versions.
    • -
    • It may cause some performance issues or glitches on some devices.
    • -
    • It may not be updated regularly or have technical support.
    • -
    • It may violate the terms and conditions of the original Kinemaster app.
    • -
    -

    Conclusion

    -

    Kinemaster Diamond Lite APK is a modified version of the Kinemaster app that offers premium features for free. You can download it from Digitbin, a reliable source of modded apps and games. You can use it to create stunning videos with just a few taps. You can enjoy features such as no watermark, no ads, chroma key, premium assets, and multiple layers. You can also export and share your videos in high quality. However, you should also be aware of the drawbacks of using Kinemaster Diamond Lite APK, such as compatibility issues, performance issues, update issues, and legal issues. If you are looking for a powerful and easy-to-use video editing app for your Android device, you might want to give Kinemaster Diamond Lite APK a try.

    -

    Frequently Asked Questions

    -

    Here are some of the frequently asked questions about Kinemaster Diamond Lite APK:

    -

    What is the difference between Kinemaster Diamond Lite APK and Kinemaster Pro APK?

    -

    Kinemaster Diamond Lite APK and Kinemaster Pro APK are both modified versions of the Kinemaster app that offer premium features for free. However, Kinemaster Diamond Lite APK is a lighter version that has fewer features and assets than Kinemaster Pro APK. Kinemaster Diamond Lite APK is suitable for users who want a simple and fast video editing app. Kinemaster Pro APK is suitable for users who want a more advanced and comprehensive video editing app.

    -

    Is Kinemaster Diamond Lite APK safe to use?

    -

    Kinemaster Diamond Lite APK is generally safe to use as long as you download it from a trusted source like Digitbin. However, you should always be careful when installing apps from unknown sources as they may contain malware or viruses that can harm your device. You should also scan the APK file with an antivirus app before installing it. You should also backup your data before using Kinemaster Diamond Lite APK as it may cause some errors or crashes on your device.

    -

    How can I update Kinemaster Diamond Lite APK?

    -

    Kinemaster Diamond Lite APK is not available on the Google Play Store and does not have an automatic update feature. Therefore, you need to manually check for updates on the Digitbin website or other sources that provide modded apps and games. You need to download the latest version of Kinemaster Diamond Lite APK and install it over the existing one. You should also make sure that you have enough storage space on your device before updating Kinemaster Diamond Lite APK.

    -

    Can I use Kinemaster Diamond Lite APK on my PC?

    -

Kinemaster Diamond Lite APK is an Android app that is designed for mobile devices. However, you can also use it on your PC with the help of an Android emulator. An Android emulator is software that allows you to run Android apps and games on your PC. There are many Android emulators available online, such as BlueStacks, Nox Player, or LDPlayer. You can download and install any of these emulators on your PC and then download and install Kinemaster Diamond Lite APK from Digitbin or other sources. You can then use Kinemaster Diamond Lite APK on your PC as you would on your mobile device.

    -

    Can I use Kinemaster Diamond Lite APK for commercial purposes?

    -

    Kinemaster Diamond Lite APK is a modded app that offers premium features for free. However, this also means that it violates the terms and conditions of the original Kinemaster app. Therefore, you should not use Kinemaster Diamond Lite APK for commercial purposes as it may infringe the intellectual property rights of the Kinemaster developers. You may also face legal issues or penalties if you use Kinemaster Diamond Lite APK for commercial purposes. If you want to use Kinemaster for commercial purposes, you should purchase the official Kinemaster app from the Google Play Store and subscribe to the premium plan.

    -
    -
    \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Infinite MP3 Songs for Free - The Ultimate Music Collection.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Infinite MP3 Songs for Free - The Ultimate Music Collection.md deleted file mode 100644 index 18ebe17f1821fa22fe83482175587ba0e93fdff3..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Infinite MP3 Songs for Free - The Ultimate Music Collection.md +++ /dev/null @@ -1,104 +0,0 @@ -
    -

    What is Infinite MP3?

    -

    If you love listening to music, you might have wished that your favorite songs would never end. Imagine being able to enjoy an endless stream of music that sounds just like the original, but with infinite variations and surprises. Sounds too good to be true, right? Well, not anymore. Thanks to the power of artificial intelligence and machine learning, you can now create and enjoy infinite mp3 files from any audio source.

    -

    Infinite mp3 is a term that refers to a type of audio file that can play indefinitely without repeating itself. It is generated by using an algorithm that analyzes the structure and style of the original audio and creates new segments that match it. The algorithm can also add subtle changes and enhancements to make the audio more interesting and dynamic. The result is a seamless and continuous audio experience that sounds natural and authentic.
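To make the idea a bit more concrete, here is a rough, illustrative Python sketch built on the open-source librosa library: it looks for beats that sound alike and occasionally hops between them, so playback can run well past the original track length. This is only a toy version of the general idea, not the algorithm used by any particular site or app, and the input file name and similarity threshold are assumptions.

```python
import random
import numpy as np
import librosa
import soundfile as sf

# Placeholder input; any audio file librosa can load will work.
y, sr = librosa.load("song.mp3")
_, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_samples = librosa.frames_to_samples(beat_frames)

# One chroma vector per beat, used as a crude "does this beat sound alike?" feature.
chroma = librosa.feature.chroma_stft(y=y, sr=sr)
beat_chroma = librosa.util.sync(chroma, beat_frames)

# Raw audio between consecutive beats, plus a matching feature per segment.
segments = [y[beat_samples[k]:beat_samples[k + 1]] for k in range(len(beat_samples) - 1)]
features = beat_chroma[:, :len(segments)].T

def similar_segments(k, threshold=1.0):
    """Indices of other segments whose beat-level chroma is close to segment k."""
    dists = np.linalg.norm(features - features[k], axis=1)
    return [j for j in range(len(segments)) if j != k and dists[j] < threshold]

# Walk the song, occasionally jumping to a similar-sounding beat instead of the next one.
out, k = [], 0
for _ in range(4 * len(segments)):            # roughly 4x the original length
    out.append(segments[k])
    jumps = similar_segments(k) if random.random() < 0.2 else []
    k = random.choice(jumps) if jumps else (k + 1) % len(segments)

sf.write("song_extended.wav", np.concatenate(out), sr)
```

Real tools refine this with much richer similarity measures and crossfading, but the core trick is the same: jump between moments of the track that sound alike.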

    -

    infinite mp3


    DOWNLOADhttps://ssurll.com/2uNT9b



    -

    There are many benefits of using infinite mp3 files. For example, you can:

    -
      -
    • Listen to your favorite music for as long as you want without getting bored or tired
    • -
    • Discover new aspects and details of the original audio that you might have missed before
    • -
    • Create unique and personalized soundtracks for your videos, games, podcasts, or presentations
    • -
    • Relax and meditate with soothing and ambient sounds that never end
    • -
    • Learn and practice languages, music, or skills with infinite audio resources
    • -
    -

    In this article, we will show you how to create infinite mp3 files from any audio source, how to enjoy infinite mp3 music on different devices and platforms, and what the future of infinite mp3 technology might look like.

    -

    How to Create Infinite MP3 Files

    -

    Creating infinite mp3 files is easier than you might think. You don't need any special skills or equipment to do it. All you need is an internet connection and a device that can access online tools or software that can generate infinite mp3 files from any audio source.

    -

    There are many online tools and software that can help you create infinite mp3 files. Some of them are free, while others require a subscription or a one-time payment. Here are some examples of online tools and software that you can use:

    -
      -
    • Infinite Music: This is a free online tool that lets you create infinite mp3 files from YouTube videos or SoundCloud tracks. You can choose from different genres, moods, tempos, and styles to customize your infinite music. You can also download your infinite mp3 files or share them with others.
    • -
    • Infinite Loop: This is a paid online tool that lets you create infinite mp3 files from any audio file or URL. You can upload your own audio file or paste a URL from YouTube, Spotify, or other sources. You can also adjust the length, speed, pitch, volume, and effects of your infinite loop. You can also download your infinite mp3 files or share them with others.
    • -
    • Infinite Jukebox: This is a free online tool that lets you create infinite mp3 files from any audio file or URL. You can upload your own audio file or paste a URL from YouTube, Spotify, or other sources. You can also see a visual representation of your infinite jukebox and skip to different parts of it. You can also download your infinite mp3 files or share them with others.
    • -
    -

    How to Enjoy Infinite MP3 Music

    -

    Once you have created your infinite mp3 files, you can enjoy them on different devices and platforms. Here are some tips and tricks on how to listen to infinite mp3 music:

    -
    • Use a compatible media player: Not all media players can support infinite mp3 files. You need to use a media player that can play mp3 files without interruption or buffering. Some examples of compatible media players are VLC, Winamp, iTunes, and Windows Media Player.
    • -
    • Use headphones or speakers: To fully appreciate the quality and variety of infinite mp3 music, you need to use headphones or speakers that can deliver clear and crisp sound. Avoid using low-quality or built-in speakers that can distort or muffle the sound.
    • -
    • Use playlists or shuffle mode: To enjoy a diverse and dynamic infinite mp3 music experience, you can create playlists or use shuffle mode to mix and match different infinite mp3 files. You can also use online platforms like Spotify or YouTube to find and play infinite mp3 music from other users or creators.
    • -
    • Use background or offline mode: To save battery and data, you can use background or offline mode to listen to infinite mp3 music without interruption. You can also download your infinite mp3 files to your device and listen to them without an internet connection.
    • -
    -

    The Future of Infinite MP3 Technology

    -

    Infinite mp3 technology is not only a fun and innovative way to enjoy music, but also a promising and powerful tool for various fields and industries. Here are some of the potential applications and implications of infinite mp3 technology in the future:

    -
      -
    • Education and learning: Infinite mp3 technology can be used to create infinite audio resources for education and learning purposes. For example, students can listen to infinite lectures, podcasts, audiobooks, or language lessons that adapt to their level and preferences. Teachers can also use infinite mp3 technology to create engaging and interactive audio content for their classes.
    • -
    • Entertainment and gaming: Infinite mp3 technology can be used to create infinite soundtracks and sound effects for entertainment and gaming purposes. For example, filmmakers, game developers, musicians, and artists can use infinite mp3 technology to create unique and immersive audio experiences for their audiences. Users can also use infinite mp3 technology to customize and enhance their own entertainment and gaming experiences.
    • -
    • Health and wellness: Infinite mp3 technology can be used to create infinite sounds for health and wellness purposes. For example, people can listen to infinite music, nature sounds, white noise, or binaural beats that help them relax, meditate, sleep, or focus. Therapists and coaches can also use infinite mp3 technology to create personalized and effective audio interventions for their clients.
    • -
    • Business and marketing: Infinite mp3 technology can be used to create infinite audio content for business and marketing purposes. For example, businesses can use infinite mp3 technology to create catchy and memorable jingles, slogans, or ads that attract and retain customers. Marketers can also use infinite mp3 technology to create tailored and targeted audio campaigns that appeal to different segments and niches.
    • -
    -

    Conclusion

    -

    Infinite mp3 is a revolutionary technology that allows you to create and enjoy endless music from any audio source. It is easy to use, versatile, and fun. It can also have many benefits and applications in various fields and industries. Whether you are a music lover, a learner, an entertainer, a gamer, a health enthusiast, a business owner, or a marketer, you can find infinite ways to use and enjoy infinite mp3 technology.

    -

    If you want to try it out for yourself, you can use one of the online tools or software that we mentioned in this article. You can also explore other online platforms or communities that offer infinite mp3 music from different genres, styles, moods, and themes. You will be amazed by the endless possibilities and surprises that await you.

    -


    -

    So what are you waiting for? Start creating and enjoying your own infinite mp3 music today!

    -

    Frequently Asked Questions

    -

    What is the difference between infinite mp3 and looped mp3?

    -

    A looped mp3 is a type of audio file that repeats the same segment over and over again. An infinite mp3 is a type of audio file that generates new segments that match the original audio without repeating itself.

    -

    How long can an infinite mp3 file play?

    -

    An infinite mp3 file can play indefinitely without stopping or repeating itself. However, the actual duration of an infinite mp3 file may depend on the size of the original audio source, the settings of the algorithm that generates it, and the capacity of the device that plays it.

    -

    Is infinite mp3 music legal?

    -

    Infinite mp3 music is legal as long as you have the permission or license to use the original audio source that you are converting into an infinite mp3 file. You should also respect the intellectual property rights of the creators or owners of the original audio source and the terms and conditions of the online tools or software that you are using to create infinite mp3 music. You should also give proper credit and attribution to the original audio source and the online tools or software that you are using.

    -

    Can I create infinite mp3 music from any audio source?

    -

    Yes, you can create infinite mp3 music from any audio source, as long as it is in a compatible format and has a clear and consistent sound quality. However, some audio sources may work better than others, depending on the genre, style, mood, and complexity of the original audio. You may also need to adjust the settings of the algorithm that generates the infinite mp3 music to achieve the best results.

    -

    Can I share or sell my infinite mp3 music?

    -

    Yes, you can share or sell your infinite mp3 music, as long as you have the permission or license to use the original audio source that you are converting into an infinite mp3 file. You should also respect the intellectual property rights of the creators or owners of the original audio source and the online tools or software that you are using to create infinite mp3 music. You should also follow the rules and regulations of the platforms or channels that you are using to share or sell your infinite mp3 music.

    -
    -
    \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Drive Your Dream Truck with Truck Real Wheels Simulator Mod APK - Download Now.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Drive Your Dream Truck with Truck Real Wheels Simulator Mod APK - Download Now.md deleted file mode 100644 index 67a068a7d387c126349990d15d2d5dd97948e7ed..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Drive Your Dream Truck with Truck Real Wheels Simulator Mod APK - Download Now.md +++ /dev/null @@ -1,94 +0,0 @@ - -

    Truck Real Wheels Simulator Mod APK: A Fun and Realistic Truck Driving Game

    -

    Introduction

    -

    Do you love driving trucks and delivering cargo across different terrains? Do you want to experience the thrill and challenge of being a truck driver in a realistic simulation game? If yes, then you should try Truck Real Wheels Simulator, a popular truck driving game for Android devices. In this game, you can drive various types of trucks and trailers, complete different missions, and explore different locations. You can also customize your trucks and trailers with different colors, stickers, wheels, and more.

    -

    However, if you want to enjoy the game to the fullest, you might want to download the mod APK version of Truck Real Wheels Simulator. The mod APK version is a modified version of the original game that gives you access to unlimited money, fuel, and all trucks and trailers unlocked. This way, you can play the game without any limitations or restrictions. You can also enjoy the realistic physics and graphics of the game without any lag or glitches.

    -

    truck real wheels simulator mod apk


    Download ✵✵✵ https://ssurll.com/2uNSrk



    -

    In this article, we will tell you more about the features of Truck Real Wheels Simulator Mod APK, how to download and install it on your device, and why you should give it a try. So, let's get started!

    -

    Features of Truck Real Wheels Simulator Mod APK

    -

    Unlimited money and fuel

    -

    One of the best features of Truck Real Wheels Simulator Mod APK is that it gives you unlimited money and fuel. Money is used to buy new trucks and trailers, upgrade them, and customize them. Fuel is used to drive your truck across different locations. In the original game, you have to earn money by completing missions and refuel your truck at gas stations. This can be time-consuming and frustrating, especially if you run out of money or fuel in the middle of a mission. However, with the mod APK version, you don't have to worry about that. You can buy anything you want with unlimited money and drive as long as you want with unlimited fuel.

    -

    All trucks and trailers unlocked

    -

    Another great feature of Truck Real Wheels Simulator Mod APK is that it gives you access to all trucks and trailers unlocked. In the original game, you have to unlock new trucks and trailers by completing missions and reaching certain levels. This can be tedious and boring, especially if you want to try different types of trucks and trailers. However, with the mod APK version, you can choose any truck or trailer you want from the start. You can drive trucks such as Scania, Volvo, MAN, Mercedes-Benz, DAF, Renault, Iveco, Mack, Kenworth, Peterbilt, Freightliner, International, Western Star, etc. You can also attach different types of trailers such as flatbeds, tankers, refrigerators, containers, dumpers, lowboys, etc.

    -

    Realistic physics and graphics

    -

    Truck Real Wheels Simulator Mod APK also offers realistic physics and graphics that make the game more fun and immersive. The game uses a realistic physics engine that simulates the behavior of trucks and trailers on different terrains such as asphalt roads, dirt roads, snow roads, etc. You can feel the weight of your cargo, the traction of your wheels, the suspension of your truck, etc. You can also see the damage effects on your truck and trailer if you crash or collide with other vehicles or objects.

    -

    The game also has stunning graphics that create a realistic environment for your truck driving experience. You can see the details of your truck and trailer such as lights, mirrors, exhausts, etc. You can also see the scenery of the locations you visit such as cities, villages, forests, mountains, etc. You can also see the weather effects such as rain, snow, fog, etc. The game also has realistic sound effects that match the engine noise, horn, brakes, etc. of your truck and trailer.

    -

    Various missions and locations

    -

    Truck Real Wheels Simulator Mod APK also provides you with various missions and locations to keep you entertained and challenged. The game has over 50 missions that require you to deliver different types of cargo to different destinations. You have to follow the traffic rules, avoid accidents, and manage your time and fuel. The game also has over 40 locations that you can explore with your truck and trailer. You can drive across Europe, America, Asia, Africa, etc. You can see the landmarks, cultures, and landscapes of different countries and regions.

    -

    Customizable trucks and trailers

    -

    Truck Real Wheels Simulator Mod APK also allows you to customize your trucks and trailers with different options and accessories. You can change the color, stickers, wheels, lights, exhausts, etc. of your truck and trailer. You can also upgrade the performance of your truck and trailer such as engine power, fuel capacity, brake system, etc. You can make your truck and trailer look unique and suit your style and preferences.

    -

    How to download and install Truck Real Wheels Simulator Mod APK

    -

    Step 1: Download the mod APK file from a trusted source

    -

    The first step to download and install Truck Real Wheels Simulator Mod APK is to find a reliable source that provides the mod APK file. You can search online for websites that offer the mod APK file for free. However, you have to be careful and avoid downloading from shady or malicious websites that might contain viruses or malware. You can also check the reviews and ratings of the websites before downloading the mod APK file.

    -


    -

    Step 2: Enable unknown sources on your device

    -

    The second step to download and install Truck Real Wheels Simulator Mod APK is to enable unknown sources on your device. This is because the mod APK file is not from the official Google Play Store and your device might block the installation of unknown sources by default. To enable unknown sources, you have to go to your device settings, then security or privacy settings, then find the option to allow installation of apps from unknown sources and turn it on.

    -

    Step 3: Install the mod APK file and enjoy the game

    -

    The third and final step to download and install Truck Real Wheels Simulator Mod APK is to install the mod APK file on your device. To do this, you have to locate the mod APK file that you downloaded in step 1 and tap on it. Then, follow the instructions on the screen to complete the installation process. Once the installation is done, you can launch the game from your app drawer or home screen and enjoy playing Truck Real Wheels Simulator Mod APK with unlimited money, fuel, and all trucks and trailers unlocked.

    -

    Conclusion

    -

    Truck Real Wheels Simulator Mod APK is a fun and realistic truck driving game that lets you drive various types of trucks and trailers across different locations. You can also customize your trucks and trailers with different options and accessories. The mod APK version gives you unlimited money, fuel, and all trucks and trailers unlocked so that you can play the game without any limitations or restrictions. You can also enjoy the realistic physics and graphics of the game without any lag or glitches.

    -

    If you are looking for a truck driving game that offers you a realistic simulation experience with unlimited features, then you should download Truck Real Wheels Simulator Mod APK today. You will not regret it!

    -

    To download Truck Real Wheels Simulator Mod APK for free, click on this link: [Download Truck Real Wheels Simulator Mod APK]

    -

    Frequently Asked Questions

    -

    Here are some of the common questions that people ask about Truck Real Wheels Simulator Mod APK:

    -
      -
    1. Is Truck Real Wheels Simulator Mod APK safe to download?
    2. -

      Yes, Truck Real Wheels Simulator Mod APK is safe to download as long as you download it from a trusted source that provides a virus-free mod APK file. However, you should always scan the mod APK file with an antivirus software before installing it on your device.

      -
    3. Is Truck Real Wheels Simulator Mod APK compatible with my device?
    4. -

      Truck Real Wheels Simulator Mod APK is compatible with most Android devices that run on Android 4.4 or higher versions. However, some devices might not support some features or functions of the game due to hardware or software limitations.

      -
    5. Do I need an internet connection to play Truck Real Wheels Simulator Mod APK?
    6. -

No, you do not need an internet connection to play Truck Real Wheels Simulator Mod APK. The game can be played offline without any problem. However, you might need an internet connection to download the mod APK file and update the game if there are any new versions available.

      -
    7. How can I update Truck Real Wheels Simulator Mod APK?
    8. -

      To update Truck Real Wheels Simulator Mod APK, you have to download the latest mod APK file from the same source that you downloaded the previous version. Then, you have to uninstall the old version and install the new version on your device. You can also check the website of the mod APK provider for any news or updates regarding the game.

      -
    9. Can I play Truck Real Wheels Simulator Mod APK with my friends?
    10. -

      Yes, you can play Truck Real Wheels Simulator Mod APK with your friends. The game has a multiplayer mode that allows you to connect with other players online and compete with them in different missions and challenges. You can also chat with them and share your truck driving experience.

      -
    -

    I hope this article has helped you learn more about Truck Real Wheels Simulator Mod APK and how to download and install it on your device. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy trucking!

    -
    -
    \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Enjoy Tekken 3 on Your Android Device - Download the APK File Here.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Enjoy Tekken 3 on Your Android Device - Download the APK File Here.md deleted file mode 100644 index 58a6045760d12cb2b810c08d31a5401768982fc5..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Enjoy Tekken 3 on Your Android Device - Download the APK File Here.md +++ /dev/null @@ -1,136 +0,0 @@ -
    -

    How to Download Tekken 3 for Android

    -

    Tekken 3 is one of the most popular and iconic fighting games of all time. It was released in 1997 for the PlayStation and has since been ported to various platforms, including the android. If you are a fan of Tekken or fighting games in general, you should definitely try playing Tekken 3 on your android device. It is a fun, fast-paced, and addictive game that will keep you entertained for hours.

    -

    But how can you download Tekken 3 for android? Well, it is not as simple as downloading any other app from the Google Play Store. You will need to follow some steps and use some tools to make it work. In this article, we will show you how to download Tekken 3 for android in four easy steps. We will also tell you about the features, tips, and tricks of playing Tekken 3 on your android device. So, let's get started!

    -

    download tekken 3 for android


    DOWNLOAD ……… https://ssurll.com/2uNTi0



    -

    Step 1: Download an APK file of Tekken 3 from a reliable source

    -

    An APK file is a package file that contains the installation files of an android app. You can download an APK file of Tekken 3 from various websites that offer free or paid downloads of android games. However, you need to be careful and choose a reliable source that does not contain any viruses or malware. Some of the websites that we recommend are:

    -
      -
    • APKCombo: This website offers a free download of Tekken 3 APK (17 MB) that is compatible with most android devices. It also provides a description, screenshots, and ratings of the game.
    • -
    • APKPure: This website also offers a free download of Tekken 3 APK (42 MB) that is based on the PlayStation game style. It also provides a description, screenshots, and ratings of the game.
    • -
    -

    Once you have downloaded the APK file of Tekken 3, you need to save it in a folder that you can easily access on your android device.

    -

    Step 2: Install an emulator app that can run PlayStation games on your android device

    -

    An emulator app is a software that can simulate the functions of another device or platform on your android device. In this case, you need an emulator app that can run PlayStation games on your android device. There are many emulator apps available on the Google Play Store or other websites, but some of the best ones are:

    -
      -
    • ePSXe: This is one of the most popular and reliable emulator apps for PlayStation games. It has high compatibility, good speed, and sound quality. It also supports cheat codes, save states, and multiplayer mode. It costs $3.75 on the Google Play Store.
    • -
    • FPse: This is another great emulator app for PlayStation games. It has high compatibility, good speed, and sound quality. It also supports cheat codes, save states, and multiplayer mode. It costs $2.99 on the Google Play Store.
    • -
    -

    Once you have installed an emulator app of your choice, you need to grant it the necessary permissions and configure the settings according to your preferences. You can also customize the controls, graphics, and sound options to enhance your gaming experience.

    -

    Step 3: Launch the emulator app and load the Tekken 3 APK file

    -

    Now that you have downloaded the APK file of Tekken 3 and installed an emulator app, you are ready to play the game. To do that, you need to launch the emulator app and load the Tekken 3 APK file. Here are the steps to follow:

    -


    -
      -
    1. Open the emulator app and tap on the menu icon.
    2. -
    3. Select the option to load a game or a file.
    4. -
    5. Browse through your folders and locate the Tekken 3 APK file that you downloaded earlier.
    6. -
    7. Select the file and tap on OK or Open.
    8. -
    9. Wait for a few seconds as the emulator app loads the game.
    10. -
    -

    Congratulations! You have successfully downloaded Tekken 3 for android and can now play it on your device.

    -

    Step 4: Enjoy playing Tekken 3 on your android device with smooth graphics and controls

    -

    Tekken 3 is a classic fighting game that offers a lot of fun and excitement. You can choose from over 20 characters, each with their own unique moves, combos, and styles. You can also play in various modes, such as Arcade, Versus, Team Battle, Survival, Time Attack, Practice, and more. You can also unlock hidden characters, modes, and secrets by completing certain tasks or using cheat codes.

    -

    Playing Tekken 3 on your android device is a great way to relive the nostalgia of the PlayStation era. You can enjoy the smooth graphics and controls of the game on your device's screen. You can also connect with other players online or offline using the multiplayer mode of the emulator app. You can also save your progress and resume it anytime using the save states feature of the emulator app.

    -

    Features of Tekken 3 for Android

    -

    Tekken 3 is not just a simple fighting game. It has many features that make it one of the best games of its genre. Here are some of the features of Tekken 3 for android:

    -
      -
    • The gameplay: Tekken 3 has a fast-paced and fluid gameplay that allows you to perform various moves, combos, and attacks with ease. You can also use sidesteps, throws, counters, and special moves to gain an advantage over your opponent. The game also has a balanced difficulty level that challenges you without being frustrating.
    • -
    • The characters: Tekken 3 has a diverse and colorful cast of characters that you can choose from. You can play as old favorites like Jin Kazama, Paul Phoenix, Nina Williams, King, Yoshimitsu, and more. You can also play as new characters like Hwoarang, Eddy Gordo, Ling Xiaoyu, Bryan Fury, Ogre, and more. Each character has their own personality, backstory, and fighting style that make them unique and fun to play.
    • -
    • The modes: Tekken 3 has many modes that you can play in. You can play in Arcade mode, where you fight against a series of opponents until you reach the final boss. You can play in Versus mode, where you fight against another player or the computer. You can play in Team Battle mode, where you form a team of up to eight characters and fight against another team. You can play in Survival mode, where you try to survive as long as possible against endless waves of enemies. You can play in Time Attack mode, where you try to beat the game as fast as possible. You can play in Practice mode, where you can train your skills and learn new moves. You can also play in Tekken Force mode, where you fight against enemies in a side-scrolling beat 'em up style.
    • -
    • The stages: Tekken 3 has many stages that you can fight in. Each stage has its own theme, music, and background. Some of the stages are based on real-world locations like Hong Kong, Mexico, India, Nepal, Egypt, and more. Some of the stages are based on fantasy settings like Ogre's Temple, Forest Law's Dojo, Jin's Laboratory, and more. Some of the stages also have interactive elements like breakable walls or floors that add more variety to the fights.
    • -

    The benefits: Tekken 3 has many benefits that make it worth playing on your android device. Some of the benefits are:

    -
      -
    • The nostalgia: Tekken 3 is a game that many people grew up with and have fond memories of. Playing it on your android device can bring back those memories and make you feel like a kid again. You can also share your experience with your friends or family who also love Tekken 3.
    • -
    • The convenience: Tekken 3 is a game that you can play anytime and anywhere on your android device. You don't need to have a PlayStation console or a TV to play it. You can play it on your phone or tablet, whether you are at home, at work, at school, or on the go. You can also pause and resume the game whenever you want.
    • -
    • The compatibility: Tekken 3 is a game that is compatible with most android devices, regardless of their specifications or models. You don't need to have a high-end device to play it. You can play it on your old or new device, as long as it has enough storage space and an emulator app.
    • -
    • The cost: Tekken 3 is a game that is free to download and play on your android device. You don't need to pay any money to enjoy it. You can download it from various websites that offer free or paid downloads of android games. You can also use free or paid emulator apps to run it on your device.
    • -
    -

    Tips and Tricks for Playing Tekken 3 on Android

    -

    Tekken 3 is a game that requires skill, strategy, and practice to master. It is not a game that you can win by button-mashing or luck. If you want to improve your performance and beat your opponents, you need to learn some tips and tricks for playing Tekken 3 on android. Here are some of them:

    -
      -
    • How to master the combos, moves, and strategies of each character: Each character in Tekken 3 has their own set of combos, moves, and strategies that make them unique and effective. You need to learn how to use them properly and wisely in different situations. You can do that by playing in Practice mode, where you can train your skills and learn new moves. You can also watch videos or read guides online that show you how to perform the combos, moves, and strategies of each character.
    • -
    • How to unlock hidden characters, modes, and secrets in Tekken 3: Tekken 3 has many hidden characters, modes, and secrets that you can unlock by completing certain tasks or using cheat codes. Some of the hidden characters are Dr. Bosconovitch, Gon, Tiger Jackson, Anna Williams, Armor King, and more. Some of the hidden modes are Ball mode, Beach Ball mode, Theater mode, and more. Some of the secrets are alternate costumes, endings, soundtracks, and more. You can unlock them by playing in Arcade mode or Versus mode and fulfilling certain conditions or entering certain codes.
    • -

    Frequently Asked Questions about Tekken 3 for Android

    -

    Tekken 3 is a game that many people have questions about, especially when it comes to playing it on android devices. Here are some of the most frequently asked questions and answers about Tekken 3 for android:

    -
      -
    1. Is Tekken 3 for android legal?
    2. -

      Tekken 3 for android is not an official app from the developers or publishers of the game. It is a fan-made app that uses an APK file of the game and an emulator app to run it on android devices. Therefore, it is not legal to download or play Tekken 3 for android without the permission or license of the original owners of the game. However, some people argue that it is legal to download or play Tekken 3 for android if you own a copy of the original game or have paid for it in the past.

      -
    3. Is Tekken 3 for android safe?
    4. -

      Tekken 3 for android is not a guaranteed safe app to download or play on your android device. It may contain viruses, malware, or other harmful elements that can damage your device or compromise your privacy. Therefore, you need to be careful and choose a reliable source to download the APK file of Tekken 3 and the emulator app. You also need to scan the files with an antivirus software before installing them on your device. You also need to backup your data and use a VPN service to protect your online security.

      -
    5. Is Tekken 3 for android compatible with my device?
    6. -

      Tekken 3 for android is compatible with most android devices, regardless of their specifications or models. However, some devices may have issues with running the game smoothly or displaying the graphics correctly. Therefore, you need to check the compatibility of your device before downloading or playing Tekken 3 for android. You can do that by reading the reviews, ratings, and comments of other users who have downloaded or played Tekken 3 for android on their devices. You can also test the game on your device by playing it for a few minutes and seeing if it works well.

      -
    7. How can I improve the performance and quality of Tekken 3 for Android?
    8. -

      Tekken 3 for Android requires a lot of resources and power from your device to run properly, so you may experience lagging, crashing, or freezing while playing. To improve performance and quality, try the following:

      -
        -
      • Closing other apps and background processes that are running on your device.
      • -
      • Clearing the cache and data of the emulator app and the game app.
      • -
      • Adjusting the settings of the emulator app and the game app according to your device's capabilities and preferences.
      • -
      • Using a stable and fast internet connection.
      • -
      • Updating your device's software and firmware.
      • -
      -
    9. Where can I find more information and help about Tekken 3 for Android?
    10. -

      There is a lot of information and help about Tekken 3 for Android available online. You can find more by visiting websites such as:

      -
        -
      • Tekken Wiki: This website is a comprehensive source of information about everything related to Tekken, including Tekken 3. You can find information about the characters, gameplay, modes, stages, secrets, trivia, and more.
      • -
      • Tekken Zaibatsu: This website is a community of Tekken fans and players who share their knowledge, tips, tricks, guides, videos, and more about Tekken, including Tekken 3. You can join their forums, chat rooms, tournaments, and more.
      • -
      -

      Conclusion

      -

      Tekken 3 is one of the best fighting games ever made, and you can play it on your Android device with ease. All you need to do is follow these four steps:

      -
        -
      1. Download an APK file of Tekken 3 from a reliable source.
      2. -
      3. Install an emulator app that can run PlayStation games on your android device.
      4. -
      5. Launch the emulator app and load the Tekken 3 APK file.
      6. -
      7. Enjoy playing Tekken 3 on your android device with smooth graphics and controls.
      8. -
      -

      Tekken 3 has many features that make it a fun and exciting game to play on your Android device. You can choose from over 20 characters, each with their own moves, combos, and styles. You can also play in various modes, such as Arcade, Versus, Team Battle, Survival, Time Attack, Practice, and more, and unlock hidden characters, modes, and secrets by completing certain tasks or using cheat codes.

      -

      Playing Tekken 3 on your Android device is a great way to relive the nostalgia of the PlayStation era. You can connect with other players online or offline using the emulator's multiplayer mode, and you can save your progress and resume it anytime using the emulator's save states feature.

      -

      If you need more information or help about Tekken 3 for Android, you can visit websites that offer comprehensive guides, videos, tips, tricks, and more. You can also join communities of Tekken fans and players who share their experience and knowledge of the game.

      -

      So, what are you waiting for? Download Tekken 3 for Android today and enjoy one of the best fighting games ever made on your device. You will not regret it!

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/skf15963/summary/fengshen/examples/stable_diffusion_dreambooth/train.py b/spaces/skf15963/summary/fengshen/examples/stable_diffusion_dreambooth/train.py deleted file mode 100644 index d783590e4ebb9e8069b6a5bebdd36f0be57309e6..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/examples/stable_diffusion_dreambooth/train.py +++ /dev/null @@ -1,276 +0,0 @@ -# -*- encoding: utf-8 -*- -''' -Copyright 2022 The International Digital Economy Academy (IDEA). CCNL team. All rights reserved. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -@File : train.py -@Time : 2022/11/09 22:27 -@Author : Gan Ruyi -@Version : 1.0 -@Contact : ganruyi@idea.edu.cn -@License : (C)Copyright 2022-2023, CCNL-IDEA -''' -import hashlib -import itertools -import os -from pathlib import Path -from tqdm.auto import tqdm -import torch -import argparse -from pytorch_lightning import ( - LightningModule, - Trainer, -) -from pytorch_lightning.callbacks import ( - LearningRateMonitor, -) -from transformers import BertTokenizer, BertModel, CLIPTokenizer, CLIPTextModel -from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionPipeline, UNet2DConditionModel -from torch.nn import functional as F -from fengshen.data.dreambooth_datasets.dreambooth_datasets import PromptDataset, DreamBoothDataset -from fengshen.data.universal_datamodule import UniversalDataModule -from fengshen.models.model_utils import ( - add_module_args, - configure_optimizers, - get_total_steps, -) -from fengshen.utils.universal_checkpoint import UniversalCheckpoint -from fengshen.data.dreambooth_datasets.dreambooth_datasets import add_data_args - - -class StableDiffusionDreamBooth(LightningModule): - @staticmethod - def add_module_specific_args(parent_parser): - parser = parent_parser.add_argument_group('Taiyi Stable Diffusion Module') - parser.add_argument('--train_text_encoder', action='store_true', default=False) - # dreambooth train unet only default - parser.add_argument('--train_unet', action='store_true', default=True) - return parent_parser - - def __init__(self, args): - super().__init__() - if 'Taiyi-Stable-Diffusion-1B-Chinese-v0.1' in args.model_path: - self.tokenizer = BertTokenizer.from_pretrained( - args.model_path, subfolder="tokenizer") - self.text_encoder = BertModel.from_pretrained( - args.model_path, subfolder="text_encoder") # load from taiyi_finetune-v0 - else: - self.tokenizer = CLIPTokenizer.from_pretrained( - args.model_path, subfolder="tokenizer") - self.text_encoder = CLIPTextModel.from_pretrained( - args.model_path, subfolder="text_encoder") - self.vae = AutoencoderKL.from_pretrained( - args.model_path, subfolder="vae") - self.unet = UNet2DConditionModel.from_pretrained( - args.model_path, subfolder="unet") - self.noise_scheduler = DDPMScheduler.from_config( - args.model_path, subfolder="scheduler") - - # set model - self.vae.requires_grad_(False) - if not args.train_text_encoder: - self.requires_grad_(False) - if not args.train_unet: - self.requires_grad_(False) - - self.save_hyperparameters(args) - - def generate_extra_data(self): - global_rank = self.global_rank - 
device = self.trainer.device_ids[global_rank] - print('generate on device {} of global_rank {}'.format(device, global_rank)) - class_images_dir = Path(self.hparams.class_data_dir) - if not class_images_dir.exists(): - class_images_dir.mkdir(parents=True) - cur_class_images = len(list(class_images_dir.iterdir())) - - if cur_class_images < self.hparams.num_class_images: - pipeline = StableDiffusionPipeline.from_pretrained( - self.hparams.model_path, - safety_checker=None, - ) - pipeline.set_progress_bar_config(disable=True) - - num_new_images = self.hparams.num_class_images - cur_class_images - print(f"Number of class images to sample: {num_new_images}.") - - sample_dataset = PromptDataset(self.hparams.class_prompt, num_new_images) - sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=self.hparams.sample_batch_size) - - pipeline.to(device) - - for example in tqdm( - sample_dataloader, desc="Generating class images", disable=global_rank != 0 - ): - images = pipeline(example["prompt"]).images - - for i, image in enumerate(images): - hash_image = hashlib.sha1(image.tobytes()).hexdigest() - image_filename = class_images_dir / f"{example['index'][i] + cur_class_images}-{hash_image}.jpg" - image.save(image_filename) - - del pipeline - # if torch.cuda.is_available(): - # torch.cuda.empty_cache() - - def setup(self, stage) -> None: - if self.hparams.with_prior_preservation: - self.generate_extra_data() - if stage == 'fit': - self.total_steps = get_total_steps(self.trainer, self.hparams) - print('Total steps: {}' .format(self.total_steps)) - - def configure_optimizers(self): - model_params = [] - if self.hparams.train_unet and self.hparams.train_text_encoder: - model_params = itertools.chain(self.unet.parameters(), self.text_encoder.parameters()) - elif self.hparams.train_unet: - model_params = self.unet.parameters() - elif self.hparams.train_text_encoder: - model_params = self.text_encoder.parameters() - return configure_optimizers(self, model_params=model_params) - - def training_step(self, batch, batch_idx): - if self.hparams.train_text_encoder: - self.text_encoder.train() - if self.hparams.train_unet: - self.unet.train() - - latents = self.vae.encode(batch["pixel_values"]).latent_dist.sample() - latents = latents * 0.18215 - - # Sample noise that we'll add to the latents - noise = torch.randn(latents.shape).to(latents.device) - noise = noise.to(dtype=self.unet.dtype) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint( - 0, self.noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - - noisy_latents = self.noise_scheduler.add_noise(latents, noise, timesteps) - noisy_latents = noisy_latents.to(dtype=self.unet.dtype) - - # Get the text embedding for conditioning - # with torch.no_grad(): - encoder_hidden_states = self.text_encoder(batch["input_ids"])[0] - - # Predict the noise residual - noise_pred = self.unet(noisy_latents, timesteps, encoder_hidden_states).sample - - if self.hparams.with_prior_preservation: - # Chunk the noise and noise_pred into two parts and compute the loss on each part separately. 
- noise_pred, noise_pred_prior = torch.chunk(noise_pred, 2, dim=0) - noise, noise_prior = torch.chunk(noise, 2, dim=0) - # Compute instance loss - loss = F.mse_loss(noise_pred, noise, reduction="none").mean([1, 2, 3]).mean() - # Compute prior loss - prior_loss = F.mse_loss(noise_pred_prior, noise_prior, reduction="mean") - # Add the prior loss to the instance loss. - loss = loss + args.prior_loss_weight * prior_loss - else: - loss = F.mse_loss(noise_pred, noise, reduction="mean") - self.log("train_loss", loss.item(), on_epoch=False, prog_bar=True, logger=True) - - if self.trainer.global_rank == 0: - if (self.global_step+1) % 5000 == 0: - print('saving model...') - pipeline = StableDiffusionPipeline.from_pretrained( - args.model_path, unet=self.unet, text_encoder=self.text_encoder, tokenizer=self.tokenizer, - ) - pipeline.save_pretrained(os.path.join( - args.default_root_dir, f'hf_out_{self.trainer.current_epoch}')) - - return {"loss": loss} - - def on_train_end(self) -> None: - if self.trainer.global_rank == 0: - print('saving model...') - pipeline = StableDiffusionPipeline.from_pretrained( - args.model_path, unet=self.unet, text_encoder=self.text_encoder, tokenizer=self.tokenizer, - ) - pipeline.save_pretrained(os.path.join( - args.default_root_dir, f'hf_out_{self.trainer.current_epoch}')) - - def on_load_checkpoint(self, checkpoint) -> None: - # 兼容低版本lightning,低版本lightning从ckpt起来时steps数会被重置为0 - global_step_offset = checkpoint["global_step"] - if 'global_samples' in checkpoint: - self.consumed_samples = checkpoint['global_samples'] - self.trainer.fit_loop.epoch_loop._batches_that_stepped = global_step_offset - - -if __name__ == '__main__': - args_parser = argparse.ArgumentParser() - args_parser = add_module_args(args_parser) - args_parser = add_data_args(args_parser) - args_parser = UniversalDataModule.add_data_specific_args(args_parser) - args_parser = Trainer.add_argparse_args(args_parser) - args_parser = StableDiffusionDreamBooth.add_module_specific_args(args_parser) - args_parser = UniversalCheckpoint.add_argparse_args(args_parser) - args = args_parser.parse_args() - - model = StableDiffusionDreamBooth(args) - - tokenizer = model.tokenizer - datasets = DreamBoothDataset( - instance_data_dir=args.instance_data_dir, - instance_prompt=args.instance_prompt, - tokenizer=tokenizer, - class_data_dir=args.class_data_dir, - class_prompt=args.class_prompt, - size=512, - center_crop=args.center_crop, - ) - # construct the datasets to a dict for universal_datamodule - datasets = {'train': datasets} - - def collate_fn(examples): - # print(examples) - input_ids = [example["instance_prompt_ids"] for example in examples] - pixel_values = [example["instance_images"] for example in examples] - - # Concat class and instance examples for prior preservation. - # We do this to avoid doing two forward passes. 
- if args.with_prior_preservation: - input_ids += [example["class_prompt_ids"] for example in examples] - pixel_values += [example["class_images"] for example in examples] - - pixel_values = torch.stack(pixel_values) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - - input_ids = tokenizer.pad( - {"input_ids": input_ids}, - padding="max_length", - max_length=tokenizer.model_max_length, - return_tensors="pt", - ).input_ids - - batch = { - "input_ids": input_ids, - "pixel_values": pixel_values, - } - - return batch - - datamodule = UniversalDataModule( - tokenizer=tokenizer, collate_fn=collate_fn, args=args, datasets=datasets) - - lr_monitor = LearningRateMonitor(logging_interval='step') - checkpoint_callback = UniversalCheckpoint(args) - - trainer = Trainer.from_argparse_args(args, - callbacks=[ - lr_monitor, - checkpoint_callback]) - - trainer.fit(model, datamodule, ckpt_path=args.load_ckpt_path) diff --git a/spaces/stanciu/anon8231489123-vicuna-13b-GPTQ-4bit-128g/app.py b/spaces/stanciu/anon8231489123-vicuna-13b-GPTQ-4bit-128g/app.py deleted file mode 100644 index 20ec293c0e48afb6921000195d2a572929814783..0000000000000000000000000000000000000000 --- a/spaces/stanciu/anon8231489123-vicuna-13b-GPTQ-4bit-128g/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/anon8231489123/vicuna-13b-GPTQ-4bit-128g").launch() \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/CRACK SlickEdit [PORTABLE] VERIFIED.md b/spaces/stomexserde/gpt4-ui/Examples/CRACK SlickEdit [PORTABLE] VERIFIED.md deleted file mode 100644 index d952abb5d09b1ae6f9404445caf25e2d945e96ae..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/CRACK SlickEdit [PORTABLE] VERIFIED.md +++ /dev/null @@ -1,116 +0,0 @@ -
      -

      CRACK SlickEdit [PORTABLE]: How to Get the Most Powerful Code Editor for Free

      -

      If you are a developer, you know how important it is to have a reliable and versatile code editor that can handle any programming language and platform. You also know how expensive it can be to buy a professional code editor that meets your needs. That's why you might be interested in CRACK SlickEdit [PORTABLE], a hacked version of the popular SlickEdit software that you can download and use for free. But before you do that, you should know what SlickEdit is, what portable software is, and how to use CRACK SlickEdit [PORTABLE] safely and legally. In this article, we will cover all these topics and more.

      -

      What is SlickEdit and Why You Need It

      -

      SlickEdit is a cross-platform commercial source code editor, text editor, and integrated development environment (IDE) developed by SlickEdit, Inc. It supports over 60 programming languages on 9 platforms, including Windows, Linux, macOS, Raspberry Pi, AIX, HP-UX, Solaris SPARC, Solaris x86. It has a rich set of coding tools and powerful time-saving features, such as syntax analysis, code completion, code formatting, debugging, version control integration, file differencing, and more. It is widely used by programmers around the world for various purposes, such as web development, software engineering, data analysis, scripting, etc.

      -

      CRACK SlickEdit [PORTABLE]


      Downloadhttps://urlgoal.com/2uIa7p



      -

      SlickEdit Features and Benefits

      -

      Some of the main features and benefits of SlickEdit are:

      -
        -
      • Syntax expansion: Automatically expands common block structures (e.g. if, for, try) after typing the keyword.
      • -
      • Auto-complete: Reduces keystrokes by completing symbols as you type.
      • -
      • SmartPaste®: Automatically reindents pasted lines of text.
      • -
      • Keystroke emulations: Choose from 15 keystroke emulations including Brief, CodeWright, Vim, and Emacs.
      • -
      • Multiple cursors and selections: Edit multiple lines or regions of code at once.
      • -
      • Symbol analysis and navigation: Navigate source code, jump to a symbol definition, declaration, or reference. List members/methods/properties for a symbol or object. Display function prototype and highlight current argument when entering function arguments.
      • -
      • Debuggers: Debug code for GNU C/C++, Java, WinDbg, Clang C/C++ LLDB, Groovy, Google Go, Python, Perl, Ruby, Scala, PHP, Xcode, and Android JVM/NDK.
      • -
      • Integrated builds: Build projects using makefiles or project files.
      • -
      • Beautifiers: Automatically reformat code when typing, pasting, or performing syntax/alias expansion.
      • -
      • Diffzilla multi-file and folder diff: Compare files or folders side by side with color-coded differences. Merge changes with a click.
      • -
      • Version control support with Shelving: Work with Git, Subversion, CVS, Perforce, Mercurial, Bitbucket, GitHub, SourceSafe, Team Foundation Server, Surround SCM, ClearCase, CA Harvest, and more. Shelve changes to temporarily store modified files without committing them to the repository.
      • -
      • Customizable interface: Customize menus, toolbars, key bindings, fonts, colors, and more. Create and share your own macros, aliases, commands, and context menus.
      • -
      • Multi-platform support: Run SlickEdit on Windows, Linux, macOS, Raspberry Pi, AIX, HP-UX, Solaris SPARC, Solaris x86. Share your configuration files across platforms.
      • -
      -

      With these features and benefits, SlickEdit can help you code faster, easier, and smarter. It can boost your productivity, improve your code quality, and enhance your coding experience.

      -

      SlickEdit Pricing and Editions

      -

      SlickEdit offers two editions: SlickEdit Standard and SlickEdit Pro. The main difference between them is that SlickEdit Pro includes additional features such as debuggers, integrated builds, version control support with shelving, and Diffzilla multi-file and folder diff. The pricing for each edition is as follows:

      - - - - -
      Edition | Price
      SlickEdit Standard | $99 per user per year
      SlickEdit Pro | $299 per user per year
      -

      You can also buy a perpetual license for either edition for a one-time fee of $299 or $899 respectively. However, you will need to pay for annual maintenance to get updates and support. Alternatively, you can download a free 15-day trial of SlickEdit Pro from the official website to test it out before buying.

      -

      -

      What is Portable Software and Why You Need It

      -

      Portable software is software that can run on any compatible device without requiring installation or leaving any traces behind. It is usually stored on removable media such as a USB flash drive or an external hard drive. It can also be stored on a cloud service or a network drive. Portable software can be useful for various reasons, such as:

      -

      Portable Software Features and Benefits

      -

      Some of the main features and benefits of portable software are:

      -
        -
      • Mobility: You can carry your software and data with you wherever you go. You can use it on any device that supports the software without needing to install anything or worry about compatibility issues.
      • -
      • Privacy: You can protect your personal information and preferences from being accessed by others. You can also avoid leaving any traces of your activity on the host device.
      • -
      • Security: You can reduce the risk of malware infection by running your software from a trusted source. You can also encrypt your portable media or use password protection to prevent unauthorized access.
      • -
      • Flexibility: You can customize your software and settings to suit your needs. You can also update or delete your software easily without affecting the host device.
      • -
      • Convenience: You can save time and space by avoiding installation and uninstallation processes. You can also backup or restore your software and data easily by copying the portable media.
      • -
      -

      With these features and benefits, portable software can help you work more efficiently, securely, and conveniently. It can also save you money by allowing you to use free or open source software instead of buying expensive licenses.

      -

      Portable Software Drawbacks and Risks

      -

      However, portable software also has some drawbacks and risks that you should be aware of, such as:

      -
        -
      • Performance: Depending on the speed and capacity of your portable media and the host device, your software may run slower or encounter errors. You may also experience compatibility issues with some devices or operating systems.
      • -
      • Reliability: Your portable media may get lost, damaged, corrupted, or stolen. This may result in losing your software and data or exposing them to others. You may also face legal issues if you use pirated or unlicensed software.
      • -
      • Safety: Your portable media may contain malware or viruses that can infect your host device or other devices. You may also download malware or viruses from untrusted sources when looking for portable software.
      • -
      • Availability: Not all software is available in portable format. Some software may require installation or registration to function properly. Some software may also have limited features or functionality when run in portable mode.
      • -
      • Support: Some portable software may not have official support or updates from the developers, so you may have to rely on third-party help.
      • -
      • Quality issues: CRACK SlickEdit [PORTABLE] may not have the same features or functionality as the original SlickEdit. You may encounter bugs, errors, or crashes that affect your coding performance, and you will miss out on updates and support from SlickEdit, Inc.
      • -
      -

      Therefore, we do not recommend using CRACK SlickEdit [PORTABLE] at all. It is not worth the risk or the hassle. Instead, you should use SlickEdit legally and safely by buying a license or using the free trial. However, if you still want to try CRACK SlickEdit [PORTABLE], you should follow these steps to minimize the potential problems:

      -

      How to Download CRACK SlickEdit [PORTABLE] from Torrent Sites

      -

      The most common way to find CRACK SlickEdit [PORTABLE] is to search for it on torrent sites or other file-sharing platforms. However, this is also the most risky way, as you may download malware or viruses along with the software. To avoid this, you should:

      -
        -
      • Use a reputable torrent site: Choose a torrent site that has a good reputation and a large user base. Avoid sites that have low-quality content, suspicious ads, or pop-ups.
      • -
      • Use a reliable torrent client: Choose a torrent client that has a good reputation and a high security level. Avoid clients that have malware, adware, or spyware.
      • -
      • Use a VPN service: Use a virtual private network (VPN) service to hide your IP address and encrypt your traffic. This will protect your privacy and security from hackers, ISPs, or authorities.
      • -
      • Check the comments and ratings: Before downloading any torrent file, check the comments and ratings from other users. Look for positive feedback, high seeders, and low leechers. Avoid files that have negative feedback, low seeders, or high leechers.
      • -
      • Scan the file with antivirus software: After downloading the file, scan it with antivirus software to detect and remove any malware or viruses. Do not open or run the file until you are sure it is safe.
      • -
      -

      By following these steps, you can reduce the risk of downloading malware or viruses from torrent sites. However, you still cannot guarantee the quality or legality of CRACK SlickEdit [PORTABLE]. Therefore, you should proceed with caution and at your own risk.

      -

      How to Install and Run CRACK SlickEdit [PORTABLE] on Your Device

      -

      The main advantage of portable software is that it does not require installation. You can simply copy it to your device and run it from there. However, this also means that it may not work properly on some devices or operating systems. To install and run CRACK SlickEdit [PORTABLE] on your device, you should:

      -
        -
      • Extract the file: If the file is compressed in a ZIP or RAR format, you need to extract it using a file archiver such as WinRAR or 7-Zip. You should extract it to a folder on your device or on your portable media.
      • -
      • Run the executable file: Locate the executable file of CRACK SlickEdit [PORTABLE], usually named slickedit.exe or slickeditportable.exe. Double-click on it to run it. You may need to grant permission or accept a warning message from your device.
      • -
      • Configure the settings: The first time you run CRACK SlickEdit [PORTABLE], you may need to configure some settings such as language, theme, keystroke emulation, etc. You can also customize your preferences later by accessing the options menu.
      • -
      • Create or open a project: To start coding with CRACK SlickEdit [PORTABLE], you need to create or open a project. A project is a collection of files and folders that belong to a specific application or program. You can create a new project by selecting File > New > Project.... You can open an existing project by selecting File > Open > Project....
      • -
      • Edit and save your code: You can edit your code using the various tools and features of CRACK SlickEdit [PORTABLE]. You can save your code by selecting File > Save or pressing Ctrl+S. You can also save your project by selecting File > Save Project.
      • -
      • Build and debug your code: You can build and debug your code using the integrated build and debug tools of CRACK SlickEdit [PORTABLE]. You can build your project by selecting Build > Build or pressing F7. You can debug your project by selecting Debug > Start/Resume or pressing F5.
      • -
      • Exit the program: When you are done with your coding session, you can exit CRACK SlickEdit [PORTABLE] by selecting File > Exit or pressing Alt+F4. You can also close the program window by clicking on the X button.
      • -
      -

      By following these steps, you can install and run CRACK SlickEdit [PORTABLE] on your device. However, you still cannot guarantee the quality or legality of CRACK SlickEdit [PORTABLE]. Therefore, you should use it with caution and at your own risk.

      -

      How to Protect Yourself from Malware and Legal Issues When Using CRACK SlickEdit [PORTABLE]

      -

      As we have mentioned before, using CRACK SlickEdit [PORTABLE] is not a legal or safe way to use SlickEdit. You may expose yourself to malware and legal issues that can harm your device or your reputation. To protect yourself from these problems, you should:

      -
        -
      • Use antivirus software: You should always scan your device and your portable media with antivirus software before and after using CRACK SlickEdit [PORTABLE]. This will help you detect and remove any malware or viruses that may have infected your system.
      • -
      • Use a firewall: You should always use a firewall to block any unauthorized or suspicious connections to or from your device. This will help you prevent any hackers or spies from accessing your data or controlling your device.
      • -
      • Use a VPN service: You should always use a VPN service to hide your IP address and encrypt your traffic when using CRACK SlickEdit [PORTABLE]. This will help you protect your privacy and security from ISPs, authorities, or anyone else who may monitor your online activity.
      • -
      • Use a sandbox: You should always use a sandbox to isolate CRACK SlickEdit [PORTABLE] from the rest of your system. This helps prevent any changes or damage to your system caused by the software. You can use sandboxing software such as Sandboxie or a virtual machine such as VirtualBox to create the sandbox.
      • -
      • Use a disposable device: You should always use a disposable device to run CRACK SlickEdit [PORTABLE]. This will help you avoid any permanent consequences to your main device. You can use an old laptop, a Raspberry Pi, or a bootable USB drive to create a disposable device.
      • -
      • Delete the file after use: You should always delete CRACK SlickEdit [PORTABLE] from your device and your portable media after using it. This will help you avoid any traces of evidence that may link you to the illegal software.
      • -
      • Buy a license or use the free trial: The best way to protect yourself from malware and legal issues when using CRACK SlickEdit [PORTABLE] is to not use it at all. Instead, you should buy a license or use the free trial of SlickEdit from the official website. This will give you access to the latest features, updates, and support from SlickEdit, Inc.
      • -
      -

      By following these tips, you can reduce the risk of malware and legal issues when using CRACK SlickEdit [PORTABLE]. However, you still cannot eliminate the risk completely. Therefore, you should be careful and responsible when using CRACK SlickEdit [PORTABLE].

      -

      Conclusion and FAQs

      -

      Conclusion

      -

      In this article, we have discussed what SlickEdit is, what portable software is, and how to use CRACK SlickEdit [PORTABLE] safely and legally. We have learned that:

      -
        -
      • SlickEdit is a powerful code editor that supports over 60 programming languages on 9 platforms. It has many features and benefits that can help you code faster, easier, and smarter.
      • -
      • Portable software is software that can run on any compatible device without requiring installation or leaving any traces behind. It has many features and benefits that can help you work more efficiently, securely, and conveniently.
      • -
      • CRACK SlickEdit [PORTABLE] is a hacked version of SlickEdit that bypasses the license verification and allows you to run it without installation. It is not a legal or safe way to use SlickEdit, and it may expose you to malware and legal issues that can harm your device or your reputation.
      • -
      • You can download CRACK SlickEdit [PORTABLE] from torrent sites or other online sources, but you should take precautions to avoid malware and viruses. You can install and run CRACK SlickEdit [PORTABLE] on your device or your portable media, but you should be aware of the compatibility and quality issues. You can protect yourself from malware and legal issues when using CRACK SlickEdit [PORTABLE], but you should follow some tips and best practices.
      • -
      • The best way to use SlickEdit is to buy a license or use the free trial from the official website. This will give you access to the latest features, updates, and support from SlickEdit, Inc.
      • -
      -

      We hope that this article has helped you understand what CRACK SlickEdit [PORTABLE] is and how to use it safely and legally. However, we do not endorse or recommend using CRACK SlickEdit [PORTABLE] at all. It is not worth the risk or the hassle. Instead, we suggest that you use SlickEdit legally and safely by buying a license or using the free trial. This will give you the best coding experience possible with SlickEdit.

      -

      FAQs

      -

      Here are some frequently asked questions about CRACK SlickEdit [PORTABLE] and their answers:

      -
        -
      1. What is the difference between CRACK SlickEdit [PORTABLE] and SlickEdit Portable?
      2. -

        CRACK SlickEdit [PORTABLE] is a hacked version of SlickEdit that bypasses the license verification and allows you to run it without installation. SlickEdit Portable is an official version of SlickEdit that can run on any compatible device without requiring installation. However, SlickEdit Portable still requires a valid license to run. You can buy a license for SlickEdit Portable from the official website.

        -
      3. Is CRACK SlickEdit [PORTABLE] safe to use?
      4. -

        No, CRACK SlickEdit [PORTABLE] is not safe to use. It may contain malware or viruses that can harm your device or steal your data. It may also expose you to legal issues that can harm your reputation or result in fines or lawsuits. You should avoid using CRACK SlickEdit [PORTABLE] at all costs.

        -
      5. Is CRACK SlickEdit [PORTABLE] legal to use?
      6. -

        No, CRACK SlickEdit [PORTABLE] is not legal to use. It violates the terms and conditions of SlickEdit, Inc. You may be sued or fined for using pirated software. You may also be liable for any damages caused by CRACK SlickEdit [PORTABLE] to your device or others. You should avoid using CRACK SlickEdit [PORTABLE] at all costs.

        -
      7. Where can I get CRACK SlickEdit [PORTABLE]?
      8. -

        You can get CRACK SlickEdit [PORTABLE] from torrent sites or other file-sharing platforms. However, this is not a safe or legal way to get it. You may download malware or viruses along with the software. You may also face legal issues for using pirated software. You should avoid getting CRACK SlickEdit [PORTABLE] at all costs.

        -
      9. How can I use SlickEdit for free?
      10. -

        You can use SlickEdit for free by downloading a free 15-day trial of SlickEdit Pro from the official website. This will give you access to all the features and functionality of SlickEdit Pro for 15 days. After that, you will need to buy a license or uninstall the software.

        -

      b2dd77e56b
      -
      -
      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Download Film Sambandh Full Movies.md b/spaces/stomexserde/gpt4-ui/Examples/Download Film Sambandh Full Movies.md deleted file mode 100644 index 9890ffa2dca6f8057caa944d2b714d612f5b5672..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Download Film Sambandh Full Movies.md +++ /dev/null @@ -1,52 +0,0 @@ -
      -

      How to Download Film Sambandh Full Movies for Free

      -

      If you are looking for a way to download film Sambandh full movies for free, you have come to the right place. Sambandh is a 1969 Hindi drama film directed by Ajoy Biswas and starring Deb Mukherjee, Anjana Mumtaz, Pradeep Kumar and Sulochana Chatterjee. The film revolves around the love triangle between a married couple and a young woman who enters their lives.

      -

      In this article, we will show you how to download film Sambandh full movies for free from various sources online. You will also learn about the benefits of downloading movies online, the legal issues involved, and the precautions you should take to avoid malware and viruses.

      -

      download film Sambandh full movies


      Downloadhttps://urlgoal.com/2uI6WB



      -

      Benefits of Downloading Movies Online

      -

      Downloading movies online has many advantages over watching them in theatres or on TV. Some of the benefits are:

      -
        -
      • You can watch movies anytime and anywhere you want, without any interruptions or commercials.
      • -
      • You can save money on tickets, popcorn, and parking fees.
      • -
      • You can choose from a wide range of genres, languages, and formats.
      • -
      • You can enjoy high-quality audio and video with subtitles and dubbing options.
      • -
      • You can share movies with your friends and family easily.
      • -
      -

      Legal Issues of Downloading Movies Online

      -

      However, downloading movies online also comes with some legal risks. Depending on the source and the country you live in, downloading movies online may violate the copyright laws and result in fines or imprisonment. Therefore, you should always check the legality of the source before downloading any movie online.

      -

      -

      Some of the legal sources of downloading movies online are:

      -
        -
      • Official websites or apps of the movie producers or distributors.
      • -
      • Streaming platforms that have licenses to show the movies online.
      • -
      • Online libraries that have public domain or creative commons movies.
      • -
      -

      Some of the illegal sources of downloading movies online are:

      -
        -
      • Torrent sites that host pirated copies of the movies.
      • -
      • File-sharing platforms that allow users to upload and download movies without permission.
      • -
      • Unofficial websites or apps that stream or download movies without licenses.
      • -
      -

      Precautions of Downloading Movies Online

      -

      Even if you download movies from legal sources, you should still take some precautions to avoid malware and viruses that may harm your device or compromise your privacy. Some of the precautions are:

      -
        -
      • Use a reliable antivirus software and update it regularly.
      • -
      • Use a VPN service to hide your IP address and encrypt your data.
      • -
      • Use a secure browser and avoid clicking on suspicious links or pop-ups.
      • -
      • Use a reputable download manager and scan the downloaded files before opening them.
      • -
      • Use a separate device or account for downloading movies online and do not store any personal or financial information on it.
      • -
      -

      How to Download Film Sambandh Full Movies for Free

      -

      Now that you know the benefits, legal issues, and precautions of downloading movies online, let's see how to download film Sambandh full movies for free from some of the sources we mentioned above.

      - -

      Official Website or App

      - -

      The easiest way to download film Sambandh full movies for free is to visit the official website or app of Shemaroo Entertainment Ltd., which is the producer and distributor of the film. You can find the link to their website here: https://www.shemaroo.com/

      - -

      Once you visit their website, you can search for Sambandh in their catalogue and click on the download button. You will need to create an account and sign in to access their content. You can also download their app from Google Play Store or Apple App Store and follow the same steps.

      - -

      Streaming Platform

      - -

      Another way to download film Sambandh full movies for free is to use a streaming platform that has a license to show the film online. One such platform is YouTube, which is owned by Google. You can find the link to their website here: https://urlgoal.com/2uIaCJ



      -

      The film depicts the childhood and young life of Raja, who later becomes Osho. Raja is a curious and rebellious boy who questions the conventional wisdom and social norms. He embarks on a spiritual journey to find the universal truth and meets three mentors - Magga baba, Pagal baba, and Masto baba, who guide him through his quest and help him discover the true essence of being.

      -

      The film stars Prince Shah as child Raja, Shashank Singh as young Raja, and Mantra as the three babas. The film also features Kirti Adarkar, Bachchan Pachera, Indal Singh, Shaneel Sinha, and Vidya Sagar in supporting roles. The film has a runtime of 110 minutes and was released on 15 January 2016 in India.

      -

      If you are interested in watching this film, you can download Rebellious Flower Part 1 In Hindi from various online platforms. However, we recommend watching it legally and supporting the filmmakers. You can also check out the official trailer of the film on YouTube.

      - -

      The film has received mixed reviews from critics and audiences. Some have praised the film for its sincere portrayal of Osho's early life and his quest for enlightenment, while others have criticized it for being preachy, simplistic, and lacking depth. The film has a rating of 6.7/10 on IMDb and 2/5 on Times of India.

      -

      The film is divided into two parts, with the first part covering Osho's childhood and adolescence, and the second part covering his college years and his initiation into sannyas. The film also shows how Osho was influenced by various spiritual traditions, such as Jainism, Buddhism, Hinduism, and Sufism.

      -

      The film is a tribute to Osho and his teachings, which have inspired millions of people around the world. The film also aims to spread the message of love, peace, and harmony among all beings. The film is a must-watch for those who are interested in Osho's life and philosophy.

      - -

      Osho's teachings are based on his own experiences and insights, and are not confined to any religion or ideology. He advocated for a dynamic and creative way of living, which he called Zorba the Buddha. He said, "Zorba is the foundation and Buddha is the palace. Buddha is the peak, but the foundation stones are laid by Zorba. It will be foolish to choose to be a Buddha without having the foundation stones."

      -

      Some of the main themes of Osho's teachings are:

      -
        -
      • Meditation: Osho emphasized the importance of meditation as a way of becoming aware of one's true self and connecting with the divine. He developed various techniques of meditation, such as Dynamic Meditation, Kundalini Meditation, and Nadabrahma Meditation, which involve physical movements, breathing, sounds, and silence.
      • -
      • Freedom: Osho encouraged people to be free from all kinds of conditioning and limitations imposed by society, religion, morality, and ego. He said, "Freedom is not a reaction; freedom is not a choice. It is man’s pretense that because he has choice he is free. Freedom is pure observation without direction, without fear of punishment and reward."
      • -
      • Love: Osho defined love as a state of being, not a relationship. He said, "Love is not something that you do -- love is something that you are. It is not an act; it is not even a quality. It is simply your very being." He also said that love is not possessive or jealous, but rather gives freedom and respect to the other.
      • -
      • Life: Osho celebrated life in all its aspects and dimensions. He said, "Life is a mystery to be lived, not a problem to be solved." He also said that life is not a serious affair, but a playfulness and joyfulness. He said, "Life as such has to be taken as fun."
      • -
      -

      Osho's teachings have inspired many people to live more authentically, creatively, and joyfully. His books and discourses are widely available online and offline for those who want to explore his vision and wisdom.

      -

      cec2833e83
      -
      -
      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/HD Online Player (Yfcad Intericad T5 Crack LINK).md b/spaces/stomexserde/gpt4-ui/Examples/HD Online Player (Yfcad Intericad T5 Crack LINK).md deleted file mode 100644 index 16feb93d22e65ca5eed5d497ba68fa10d1e70b14..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/HD Online Player (Yfcad Intericad T5 Crack LINK).md +++ /dev/null @@ -1,25 +0,0 @@ - -

      How to Use HD Online Player (Yfcad Intericad T5 Crack) to Design Your Dream Home

      -

      If you are looking for software that can help you create realistic 3D interior designs, you might want to check out HD Online Player (Yfcad Intericad T5 Crack). This is a powerful tool that allows you to design, visualize, and present your ideas in a professional way. You can use it to create floor plans, furniture layouts, lighting effects, materials, textures, and more. You can also export your designs to various formats, such as JPG, PDF, DWG, or VR.

      -

      But how can you get access to this amazing software? Well, you might have heard of a crack version that lets you use it for free. This is HD Online Player (Yfcad Intericad T5 Crack), a modified version of the original software that bypasses the license verification and activation process. However, before you download and install it, you should be aware of the risks and drawbacks of using cracked software.

      -

      HD Online Player (Yfcad Intericad T5 Crack)


      DOWNLOADhttps://urlgoal.com/2uI6PK



      -

      The Risks and Drawbacks of Using HD Online Player (Yfcad Intericad T5 Crack)

      -

      While it might be tempting to use HD Online Player (Yfcad Intericad T5 Crack) to save money and time, you should know that there are some serious consequences that might come with it. Here are some of them:

      -
        -
      • Legal issues. Using cracked software is illegal and violates the intellectual property rights of the original developer. You might face legal action or fines if you are caught using or distributing it.
      • -
      • Security issues. Using cracked software exposes your computer and data to malware, viruses, spyware, ransomware, and other malicious programs. These can damage your system, steal your personal information, or lock your files until you pay a ransom.
      • -
      • Performance issues. Using cracked software might cause errors, crashes, glitches, or compatibility problems with your system or other software. You will also miss out on updates, bug fixes, new features, and technical support from the original developer.
      • -
      • Ethical issues. Using cracked software is unfair and disrespectful to the original developer, who invested time, money, and effort to create a quality product. You are also depriving them of their rightful income and discouraging them from developing more software in the future.
      • -
      -

      The Benefits of Using the Official Version of HD Online Player (Yfcad Intericad T5)

      -

      Instead of using HD Online Player (Yfcad Intericad T5 Crack), you should consider buying the official version of HD Online Player (Yfcad Intericad T5) from the official website. Here are some of the benefits of doing so:

      -
        -
      • Legal benefits. You will be using legitimate software that respects the intellectual property rights of the original developer. You will not face any legal trouble or penalties for using or distributing it.
      • -
      • Security benefits. You will be using safe and clean software that does not contain any malware, viruses, spyware, ransomware, or other malicious programs. You will protect your system and data from harm or loss.
      • -
      • Performance benefits. You will be using stable and reliable software that works smoothly and efficiently with your system and other software. You will also enjoy updates, bug fixes, new features, and technical support from the original developer.
      • -
      • Ethical benefits. You will be using the software fairly and respectfully, acknowledging the hard work and creativity of the original developer. You will also support their income and encourage them to develop more software in the future.
      • -
      -

      Conclusion

      -

      In conclusion, HD Online Player (Yfcad Intericad T5) is a great piece of software that can help you design your dream home in 3D. However, you should avoid using HD Online Player (Yfcad Intericad T5 Crack), which is a cracked version that comes with many risks and drawbacks. Instead, you should buy the official version from the official website and enjoy its benefits. By doing so, you will not only get a better user experience but also show your respect and support for the original developer.

      7b8c122e87
      -
      -
      \ No newline at end of file diff --git a/spaces/stratussox/yolov5_inference/data/scripts/download_weights.sh b/spaces/stratussox/yolov5_inference/data/scripts/download_weights.sh deleted file mode 100644 index a4f3becfdbeb30ab35255d9eaf3e73de66786ebe..0000000000000000000000000000000000000000 --- a/spaces/stratussox/yolov5_inference/data/scripts/download_weights.sh +++ /dev/null @@ -1,21 +0,0 @@ -#!/bin/bash -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -# Download latest models from https://github.com/ultralytics/yolov5/releases -# Example usage: bash data/scripts/download_weights.sh -# parent -# └── yolov5 -# ├── yolov5s.pt ← downloads here -# ├── yolov5m.pt -# └── ... - -python - <=9.1.0 \ - 'opencv-python<4.6.0.66' \ - --extra-index-url https://download.pytorch.org/whl/cu113 - -# Create working directory -RUN mkdir -p /usr/src/app -WORKDIR /usr/src/app - -# Copy contents -# COPY . /usr/src/app (issues as not a .git directory) -RUN git clone https://github.com/ultralytics/yolov5 /usr/src/app - -# Set environment variables -ENV OMP_NUM_THREADS=8 - - -# Usage Examples ------------------------------------------------------------------------------------------------------- - -# Build and Push -# t=ultralytics/yolov5:latest && sudo docker build -f utils/docker/Dockerfile -t $t . && sudo docker push $t - -# Pull and Run -# t=ultralytics/yolov5:latest && sudo docker pull $t && sudo docker run -it --ipc=host --gpus all $t - -# Pull and Run with local directory access -# t=ultralytics/yolov5:latest && sudo docker pull $t && sudo docker run -it --ipc=host --gpus all -v "$(pwd)"/datasets:/usr/src/datasets $t - -# Kill all -# sudo docker kill $(sudo docker ps -q) - -# Kill all image-based -# sudo docker kill $(sudo docker ps -qa --filter ancestor=ultralytics/yolov5:latest) - -# DockerHub tag update -# t=ultralytics/yolov5:latest tnew=ultralytics/yolov5:v6.2 && sudo docker pull $t && sudo docker tag $t $tnew && sudo docker push $tnew - -# Clean up -# docker system prune -a --volumes - -# Update Ubuntu drivers -# https://www.maketecheasier.com/install-nvidia-drivers-ubuntu/ - -# DDP test -# python -m torch.distributed.run --nproc_per_node 2 --master_port 1 train.py --epochs 3 - -# GCP VM from Image -# docker.io/ultralytics/yolov5:latest diff --git a/spaces/sub314xxl/MetaGPT/metagpt/roles/seacher.py b/spaces/sub314xxl/MetaGPT/metagpt/roles/seacher.py deleted file mode 100644 index c116ce98b1ac33344e24fe85bb139b333b341f98..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/metagpt/roles/seacher.py +++ /dev/null @@ -1,37 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/23 17:25 -@Author : alexanderwu -@File : seacher.py -""" -from metagpt.actions import ActionOutput, SearchAndSummarize -from metagpt.logs import logger -from metagpt.roles import Role -from metagpt.schema import Message -from metagpt.tools import SearchEngineType - - -class Searcher(Role): - def __init__(self, name='Alice', profile='Smart Assistant', goal='Provide search services for users', - constraints='Answer is rich and complete', engine=SearchEngineType.SERPAPI_GOOGLE, **kwargs): - super().__init__(name, profile, goal, constraints, **kwargs) - self._init_actions([SearchAndSummarize(engine=engine)]) - - def set_search_func(self, search_func): - action = SearchAndSummarize("", engine=SearchEngineType.CUSTOM_ENGINE, search_func=search_func) - self._init_actions([action]) - - async def _act_sp(self) -> Message: - logger.info(f"{self._setting}: ready to {self._rc.todo}") - response = 
await self._rc.todo.run(self._rc.memory.get(k=0)) - # logger.info(response) - if isinstance(response, ActionOutput): - msg = Message(content=response.content, instruct_content=response.instruct_content, - role=self.profile, cause_by=type(self._rc.todo)) - else: - msg = Message(content=response, role=self.profile, cause_by=type(self._rc.todo)) - self._rc.memory.add(msg) - - async def _act(self) -> Message: - return await self._act_sp() diff --git a/spaces/sujitpal/clip-rsicd-demo/README.md b/spaces/sujitpal/clip-rsicd-demo/README.md deleted file mode 100644 index cf95c989d981221e7f6af80173e1a18f4d90838f..0000000000000000000000000000000000000000 --- a/spaces/sujitpal/clip-rsicd-demo/README.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -title: CLIP-RSICD Demo -emoji: 🛰️ -colorFrom: green -colorTo: purple -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/sunshineatnoon/TextureScraping/swapae/optimizers/classifier_optimizer.py b/spaces/sunshineatnoon/TextureScraping/swapae/optimizers/classifier_optimizer.py deleted file mode 100644 index 31a3f3533afcb7c129c60621378efb18a625b9a9..0000000000000000000000000000000000000000 --- a/spaces/sunshineatnoon/TextureScraping/swapae/optimizers/classifier_optimizer.py +++ /dev/null @@ -1,35 +0,0 @@ -import torch -from models import MultiGPUModelWrapper -from swapae.optimizers.swapping_autoencoder_optimizer import SwappingAutoencoderOptimizer -import swapae.util - - -class ClassifierOptimizer(SwappingAutoencoderOptimizer): - @staticmethod - def modify_commandline_options(parser, is_train): - parser = SwappingAutoencoderOptimizer.modify_commandline_options(parser, is_train) - return parser - - def train_one_step(self, data_i, total_steps_so_far): - images_minibatch, labels = self.prepare_images(data_i) - c_losses = self.train_classifier_one_step(images_minibatch, labels) - self.adjust_lr_if_necessary(total_steps_so_far) - return util.to_numpy(c_losses) - - def train_classifier_one_step(self, images, labels): - self.set_requires_grad(self.Gparams, False) - self.optimizer_C.zero_grad() - losses, metrics = self.model(images, labels, command="compute_classifier_losses") - loss = sum([v.mean() for v in losses.values()]) - loss.backward() - self.optimizer_C.step() - losses.update(metrics) - return losses - - def get_visuals_for_snapshot(self, data_i): - images, labels = self.prepare_images(data_i) - with torch.no_grad(): - return self.model(images, labels, command="get_visuals_for_snapshot") - - def save(self, total_steps_so_far): - self.model.save(total_steps_so_far) diff --git a/spaces/sunshineatnoon/TextureScraping/swapae/util/kmeans.py b/spaces/sunshineatnoon/TextureScraping/swapae/util/kmeans.py deleted file mode 100644 index eeb548bcb4b52e19974fb2242a3c40056d08507a..0000000000000000000000000000000000000000 --- a/spaces/sunshineatnoon/TextureScraping/swapae/util/kmeans.py +++ /dev/null @@ -1,146 +0,0 @@ -# 
From kmeans_pytorch - -import numpy as np -import torch -from tqdm import tqdm - - -def initialize(X, num_clusters): - """ - initialize cluster centers - :param X: (torch.tensor) matrix - :param num_clusters: (int) number of clusters - :return: (np.array) initial state - """ - num_samples = len(X) - indices = np.random.choice(num_samples, num_clusters, replace=False) - initial_state = X[indices] - return initial_state - - -def kmeans( - X, - num_clusters, - distance='euclidean', - tol=1e-4, - device=torch.device('cuda') -): - """ - perform kmeans - :param X: (torch.tensor) matrix - :param num_clusters: (int) number of clusters - :param distance: (str) distance [options: 'euclidean', 'cosine'] [default: 'euclidean'] - :param tol: (float) threshold [default: 0.0001] - :param device: (torch.device) device [default: cpu] - :return: (torch.tensor, torch.tensor) cluster ids, cluster centers - """ - print(f'running k-means on {device}..') - - if distance == 'euclidean': - pairwise_distance_function = pairwise_distance - elif distance == 'cosine': - pairwise_distance_function = pairwise_cosine - else: - raise NotImplementedError - - # convert to float - X = X.float() - - # transfer to device - X = X.to(device) - - # initialize - initial_state = initialize(X, num_clusters) - - iteration = 0 - tqdm_meter = tqdm(desc='[running kmeans]') - while True: - dis = pairwise_distance_function(X, initial_state) - - choice_cluster = torch.argmin(dis, dim=1) - - initial_state_pre = initial_state.clone() - - for index in range(num_clusters): - selected = torch.nonzero(choice_cluster == index).squeeze().to(device) - - selected = torch.index_select(X, 0, selected) - initial_state[index] = selected.mean(dim=0) - - center_shift = torch.sum( - torch.sqrt( - torch.sum((initial_state - initial_state_pre) ** 2, dim=1) - )) - - # increment iteration - iteration = iteration + 1 - - # update tqdm meter - tqdm_meter.set_postfix( - iteration=f'{iteration}', - center_shift=f'{center_shift ** 2:0.6f}', - tol=f'{tol:0.6f}' - ) - tqdm_meter.update() - if center_shift ** 2 < tol: - break - - return choice_cluster, initial_state - - -def kmeans_predict( - X, - cluster_centers, - distance='euclidean', - device=torch.device('cpu') -): - """ - predict using cluster centers - :param X: (torch.tensor) matrix - :param cluster_centers: (torch.tensor) cluster centers - :param distance: (str) distance [options: 'euclidean', 'cosine'] [default: 'euclidean'] - :param device: (torch.device) device [default: 'cpu'] - :return: (torch.tensor) cluster ids - """ - print(f'predicting on {device}..') - - if distance == 'euclidean': - pairwise_distance_function = pairwise_distance - elif distance == 'cosine': - pairwise_distance_function = pairwise_cosine - else: - raise NotImplementedError - - # convert to float - X = X.float() - - # transfer to device - X = X.to(device) - - dis = pairwise_distance_function(X, cluster_centers) - choice_cluster = torch.argmin(dis, dim=1) - - return choice_cluster.cpu() - - -def pairwise_distance(data1, data2): - return torch.cdist(data1[None, :, :], data2[None, :, :])[0] - - -def pairwise_cosine(data1, data2): - - # N*1*M - A = data1.unsqueeze(dim=1) - - # 1*N*M - B = data2.unsqueeze(dim=0) - - # normalize the points | [0.3, 0.4] -> [0.3/sqrt(0.09 + 0.16), 0.4/sqrt(0.09 + 0.16)] = [0.3/0.5, 0.4/0.5] - A_normalized = A / A.norm(dim=-1, keepdim=True) - B_normalized = B / B.norm(dim=-1, keepdim=True) - - cosine = A_normalized * B_normalized - - # return N*N matrix for pairwise distance - cosine_dis = 1 - 
cosine.sum(dim=-1).squeeze() - return cosine_dis diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/AAct 3.8.8 (x86 X64) Portable [CracksMind] Download Pc.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/AAct 3.8.8 (x86 X64) Portable [CracksMind] Download Pc.md deleted file mode 100644 index 21f4d630afb40579ff5e8a0755c9631cb8f1d009..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/AAct 3.8.8 (x86 X64) Portable [CracksMind] Download Pc.md +++ /dev/null @@ -1,122 +0,0 @@ - -

      AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC: How to Activate Windows and Office for Free

      -

      If you are looking for a way to activate Windows and Office without paying for a license, you might have come across AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC, a popular torrent file that claims to offer a free and easy solution. AAct is a KMS-activator that can activate Windows VL editions (Vista, 7, 8, 8.1, 10, Server 2008, 2008 R2, 2012, 2012 R2) and Office (2010, 2013, 2016). It can also activate Office 2010 VL on Windows XP.

      -

      AAct 3.8.8 (x86 x64) Portable [CracksMind] download pc


      Download Filehttps://cinurl.com/2uEYDY



      -

      But is AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC safe and reliable? What are the risks and drawbacks of using it? And what are the alternatives to it? In this article, we will answer these questions and help you decide whether you should use AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC or not.

      -

      What is AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC?

      -

      AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC is a torrent file that contains a portable version of AAct, a KMS-activator developed by Ratiborus, a Russian software developer. A KMS-activator is a program that mimics a Key Management Service (KMS) server and activates Windows and Office by sending fake activation requests to it.

      -

      AAct is different from other KMS-activators in several ways:

      -

      -
        -
• It runs directly in memory, without unpacking itself into a temporary or any other folder.
      • -
      • It uses standard files for activation manipulation, such as slmgr.vbs and ospp.vbs.
      • -
      • It does not require any version of .NET Framework to run.
      • -
      • It has a simple interface that resembles the standard cmd.exe window.
      • -
      • It does not have an INI file and does not store anything anywhere.
      • -
      -

      AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC is available on various torrent sites, such as LimeTorrents.lol, Peatix.com, Wixsite.com and others. It has a size of about 2.64 MB and includes readme files in English and Russian.

      -

      How to Use AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC?

      -

      If you want to use AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC to activate Windows and Office, you need to follow these steps:

      -
        -
      1. Download the torrent file from one of the torrent sites that offer it.
      2. -
      3. Open the torrent file with a torrent client, such as uTorrent or BitTorrent.
      4. -
      5. Select a location on your computer where you want to save the downloaded files and click on OK.
      6. -
      7. Once the download is complete, locate the folder where you saved the files and open it.
      8. -
      9. Run AAct_x64.exe if you have a 64-bit system or AAct.exe if you have a 32-bit system.
      10. -
      11. Click on Activate Windows or Activate Office depending on what you want to activate.
      12. -
      13. Wait for the activation process to finish and close the program.
      14. -
      -

      You have successfully activated Windows and Office using AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC.

      -

      What are the Risks and Drawbacks of Using AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC?

      -

      Before you use AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC, you should be aware of the risks and drawbacks of using it. Here are some of them:

      -
        -
      • AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC may contain viruses, malware or spyware that can harm your computer or steal your personal information.
      • -
      • AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC may not work properly or at all with some versions of Windows and Office. It may cause errors, crashes or data loss during the activation process.
      • -
      • AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC may not be compatible with the latest updates and patches of Windows and Office. You may miss out on important features and improvements that can enhance your user experience.
      • -
• AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC may violate the terms and conditions of Windows and Office. You may face legal consequences if you are caught using illegal software.
      • -
      -

      What are the Alternatives to AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC?

      -

      If you want to activate Windows and Office without risking your computer or breaking the law, there are better alternatives than using AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC. You can use one of these options:

      -
        -
      • Buy a genuine license for Windows and Office from their official website or an authorized reseller.
      • -
      • Use a free or open source alternative to Windows and Office, such as Linux and LibreOffice.
      • -
• Use an online service that offers features similar to Windows and Office, such as Google Drive or Microsoft Office Online.
      • -
      -

      Conclusion

      -

AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC is not a good solution for your Windows and Office activation needs. It can expose you to various risks and problems that can compromise your data and your security. Instead of using AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC, you should consider one of the alternatives suggested in this article.

      -

      Why Choose AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC?

      -

      AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC is one of the most popular and trusted KMS-activators on the internet. It has many advantages over other similar programs, such as:

      -
        -
      • It is portable and does not require installation. You can run it from any folder or USB drive.
      • -
      • It is small and lightweight. It has a size of only 2.64 MB and does not consume much memory or CPU resources.
      • -
      • It is fast and efficient. It can activate Windows and Office in a matter of seconds.
      • -
• It is compatible and flexible. It can activate various versions and editions of Windows and Office, including Office 2010 VL on Windows XP.
      • -
      • It is safe and clean. It does not contain any viruses, malware or spyware that can harm your computer or steal your personal information.
      • -
      -

      AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC is also updated regularly by its developer, Ratiborus, who fixes any bugs or vulnerabilities that may be exploited by crackers. He also adds new features and improvements that can enhance your user experience.

      -

      How to Download AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC?

      -

      If you want to download AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC, you need to follow these steps:

      -
        -
      1. Go to one of the torrent sites that offer AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC, such as LimeTorrents.lol, Peatix.com, Wixsite.com or Kit.co.
      2. -
      3. Search for AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC using the search bar or browse through the categories.
      4. -
      5. Select the torrent file that has the most seeders and leechers and click on it.
      6. -
      7. Click on the download button or copy the magnet link and paste it into your torrent client.
      8. -
      9. Select a location on your computer where you want to save the downloaded files and click on OK.
      10. -
      -

      You have successfully downloaded AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC.

      -

      How to Uninstall AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC?

      -

      If you want to uninstall AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC from your computer, you need to follow these steps:

      -
        -
      1. Locate the folder where you saved the downloaded files of AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC and open it.
      2. -
      3. Delete all the files and folders inside the folder, including AAct_x64.exe, AAct.exe, readme_en.txt, readme_ru.txt and others.
      4. -
      5. Empty your recycle bin to permanently remove the files from your computer.
      6. -
      7. Restart your computer to complete the uninstallation process.
      8. -
      -

      You have successfully uninstalled AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC from your computer.

      -

      How to Verify Your Activation Status Using AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC?

      -

      If you want to verify your activation status using AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC, you need to follow these steps:

      -
        -
      1. Locate the folder where you saved the downloaded files of AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC and open it.
      2. -
      3. Run AAct_x64.exe if you have a 64-bit system or AAct.exe if you have a 32-bit system.
      4. -
      5. Click on Windows Info or Office Info depending on what you want to verify.
      6. -
7. Check the information displayed in the program window, such as the activation status, license type, expiration date and product key. If you want to double-check the result with the standard Windows and Office scripts, see the sketch below.
      8. -
      -

      You have successfully verified your activation status using AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC.
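If you would rather confirm the result with the tooling that ships with Windows and Office themselves (the same slmgr.vbs and ospp.vbs scripts mentioned earlier), the minimal Python sketch below runs them for you and prints their output. It only reads the current license status and changes nothing. The Office path is an assumption that depends on your Office version and install location, so adjust it to your system.

```python
import subprocess
from pathlib import Path

def run_script(script_path, *args):
    """Run a Windows licensing script via cscript and return its text output."""
    cmd = ["cscript", "//nologo", str(script_path), *args]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout.strip()

# Windows license status (slmgr.vbs ships with every Windows installation).
slmgr = Path(r"C:\Windows\System32\slmgr.vbs")
print(run_script(slmgr, "/dli"))   # display license information
print(run_script(slmgr, "/xpr"))   # show the activation expiration date

# Office license status (this path is an assumption -- it varies by Office version).
ospp = Path(r"C:\Program Files\Microsoft Office\Office16\ospp.vbs")
if ospp.exists():
    print(run_script(ospp, "/dstatus"))
```

If the output of these scripts does not match what AAct reports, trust the scripts.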

      -

      How to Crack SQLWays60 Using AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC?

      -

      If you want to crack SQLWays60 using AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC, you need to follow these steps:

      -
        -
      1. Download SQLWays60 from its official website or an authorized reseller.
      2. -
      3. Install SQLWays60 on your computer and run it.
      4. -
      5. Locate the folder where you saved the downloaded files of AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC and open it.
      6. -
      7. Run AAct_x64.exe if you have a 64-bit system or AAct.exe if you have a 32-bit system.
      8. -
      9. Click on Activate Office and select SQLWays60 from the list of products.
      10. -
      11. Wait for the activation process to finish and close the program.
      12. -
      -

      You have successfully cracked SQLWays60 using AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC.

      -

      How to Use SQLWays60 After Cracking It Using AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC?

      -

      If you want to use SQLWays60 after cracking it using AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC, you need to follow these steps:

      -
        -
      1. Locate the folder where you installed SQLWays60 on your computer and open it.
      2. -
      3. Run SQLWays.exe to launch the program.
      4. -
      5. Select the source and target databases that you want to migrate using the drop-down menus.
      6. -
      7. Click on Next and configure the migration options according to your preferences.
      8. -
      9. Click on Next and review the summary of the migration process.
      10. -
      11. Click on Start and wait for the migration to complete.
      12. -
      -

      You have successfully used SQLWays60 after cracking it using AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC.

      -

      How to Crack DBConvert Studio Using AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC?

      -

      If you want to crack DBConvert Studio using AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC, you need to follow these steps:

      -
        -
      1. Download DBConvert Studio from its official website or an authorized reseller.
      2. -
      3. Install DBConvert Studio on your computer and run it.
      4. -
      5. Locate the folder where you saved the downloaded files of AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC and open it.
      6. -
      7. Run AAct_x64.exe if you have a 64-bit system or AAct.exe if you have a 32-bit system.
      8. -
      9. Click on Activate Office and select DBConvert Studio from the list of products.
      10. -
      11. Wait for the activation process to finish and close the program.
      12. -
      -

      You have successfully cracked DBConvert Studio using AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC.

      -

      Conclusion

      -

In this article, we have shown you how to download, install, uninstall, verify and use AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC to activate Windows and Office products. We have also shown you how to crack SQLWays60 and DBConvert Studio using it. AAct 3.8.8 (x86 x64) Portable [CracksMind] Download PC is a powerful KMS-activator that can give you the full features of Windows and Office without paying any fees, but it carries the risks described above. We do not encourage or endorse any illegal or unethical use of this software. Please use it at your own risk.

      -

      We hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

      -
      -
      \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Autodesk AutoCAD Electrical 2019 Torrent !!TOP!!.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Autodesk AutoCAD Electrical 2019 Torrent !!TOP!!.md deleted file mode 100644 index 080be97e8471f49687bbc2b3de24e0b81bd8524b..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Autodesk AutoCAD Electrical 2019 Torrent !!TOP!!.md +++ /dev/null @@ -1,17 +0,0 @@ -
      -

      How to Download and Install Autodesk AutoCAD Electrical 2019 Torrent

      -

Autodesk AutoCAD Electrical 2019 is software that helps you design and document electrical control systems. It is part of the Autodesk solution for Digital Prototyping and contains all the functionality of AutoCAD, plus a comprehensive set of tools for automating electrical engineering tasks. If you want to download and install Autodesk AutoCAD Electrical 2019 Torrent, here are the steps you need to follow:

      -
        -
      1. Find a reliable torrent site that offers Autodesk AutoCAD Electrical 2019 Torrent. Some examples are SolidTorrents[^1^], FileCR[^2^], and Google Sites[^3^]. Make sure you check the file size, seeders, leechers, and comments before downloading.
      2. -
3. Download a torrent client that can handle magnet links and torrent files. Some examples are uTorrent, BitTorrent, and qBittorrent. Install the client on your computer and launch it.
      4. -
      5. Copy the magnet link or download the torrent file of Autodesk AutoCAD Electrical 2019 Torrent from the torrent site. Paste the magnet link or open the torrent file in your torrent client software. Choose a destination folder for the downloaded files and start the download process.
      6. -
      7. Wait for the download to finish. It may take some time depending on your internet speed and the number of seeders and leechers. You can check the progress and status of the download in your torrent client software.
      8. -
9. Once the download is complete, you will find a folder containing an ISO file, a readme.txt file, an m0nkrus.nfo file, and some checksum files (you can use the checksums to verify the ISO, as shown in the sketch after these steps). Mount the ISO file using virtual drive software such as Daemon Tools or PowerISO, or extract it using compression software such as WinRAR or 7-Zip.
      10. -
      11. After mounting or extracting the ISO file, you will find a setup.exe file and some other files and folders. Run the setup.exe file as administrator and follow the installation wizard. You will need to enter a serial number and a product key during the installation. You can find them in the readme.txt file or the m0nkrus.nfo file.
      12. -
13. After the installation is done, you will need to activate the software using a crack or a patch. You can find one in the crack folder or in a separate torrent file. Follow the instructions in the readme.txt or m0nkrus.nfo file to apply it correctly.
      14. -
      15. Enjoy using Autodesk AutoCAD Electrical 2019 Torrent!
      16. -
      -

      Note: Downloading and installing Autodesk AutoCAD Electrical 2019 Torrent may be illegal in some countries and regions. It may also expose your computer to viruses, malware, and other security risks. Use it at your own risk and discretion.
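Before mounting the ISO from step 9, it is worth checking it against the checksum files included in the download. The Python sketch below shows one way to do that; the file names are placeholders for illustration only, and the assumption that the .sha256 file holds a hex digest as its first token may not match your download, so adapt it as needed. Keep in mind that a matching checksum only proves the file was not corrupted in transit, not that it is safe to run.

```python
import hashlib
from pathlib import Path

def sha256_of(path, chunk_size=1024 * 1024):
    """Compute the SHA-256 digest of a file without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder names -- replace them with the actual files from your download.
iso_path = Path("autocad_electrical_2019.iso")
checksum_path = Path("autocad_electrical_2019.iso.sha256")

# A typical .sha256 file stores the hex digest as the first token on the line.
expected = checksum_path.read_text().split()[0].lower()
actual = sha256_of(iso_path)

print("OK: checksums match" if actual == expected else f"MISMATCH: {actual} != {expected}")
```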

      -

      Autodesk AutoCAD Electrical 2019 Torrent


      Downloadhttps://cinurl.com/2uEYRm



      -
      -
      \ No newline at end of file diff --git a/spaces/surya12003/suryabot/README.md b/spaces/surya12003/suryabot/README.md deleted file mode 100644 index 42761f25ebd12db934d2071b9cab27054412bf30..0000000000000000000000000000000000000000 --- a/spaces/surya12003/suryabot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Suryabot -emoji: 😻 -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/demo/visualize/layouts/__init__.py b/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/demo/visualize/layouts/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/taesiri/ChatGPT-ImageCaptioner/tools/download_cc.py b/spaces/taesiri/ChatGPT-ImageCaptioner/tools/download_cc.py deleted file mode 100644 index 3c43690a3ca407c3553686d9eb51db9c1834f156..0000000000000000000000000000000000000000 --- a/spaces/taesiri/ChatGPT-ImageCaptioner/tools/download_cc.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import os -import json -import argparse -from PIL import Image -import numpy as np - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--ann', default='datasets/cc3m/Train_GCC-training.tsv') - parser.add_argument('--save_image_path', default='datasets/cc3m/training/') - parser.add_argument('--cat_info', default='datasets/lvis/lvis_v1_val.json') - parser.add_argument('--out_path', default='datasets/cc3m/train_image_info.json') - parser.add_argument('--not_download_image', action='store_true') - args = parser.parse_args() - categories = json.load(open(args.cat_info, 'r'))['categories'] - images = [] - if not os.path.exists(args.save_image_path): - os.makedirs(args.save_image_path) - f = open(args.ann) - for i, line in enumerate(f): - cap, path = line[:-1].split('\t') - print(i, cap, path) - if not args.not_download_image: - os.system( - 'wget {} -O {}/{}.jpg'.format( - path, args.save_image_path, i + 1)) - try: - img = Image.open( - open('{}/{}.jpg'.format(args.save_image_path, i + 1), "rb")) - img = np.asarray(img.convert("RGB")) - h, w = img.shape[:2] - except: - continue - image_info = { - 'id': i + 1, - 'file_name': '{}.jpg'.format(i + 1), - 'height': h, - 'width': w, - 'captions': [cap], - } - images.append(image_info) - data = {'categories': categories, 'images': images, 'annotations': []} - for k, v in data.items(): - print(k, len(v)) - print('Saving to', args.out_path) - json.dump(data, open(args.out_path, 'w')) diff --git a/spaces/taesiri/DeticChatGPT/tools/remove_lvis_rare.py b/spaces/taesiri/DeticChatGPT/tools/remove_lvis_rare.py deleted file mode 100644 index 06e4e881bfa50e2cd74747511a3ad2e8676e0c70..0000000000000000000000000000000000000000 --- a/spaces/taesiri/DeticChatGPT/tools/remove_lvis_rare.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import argparse -import json - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--ann', default='datasets/lvis/lvis_v1_train.json') - args = parser.parse_args() - - print('Loading', args.ann) - data = json.load(open(args.ann, 'r')) - catid2freq = {x['id']: x['frequency'] for x in data['categories']} - print('ori #anns', len(data['annotations'])) - exclude = ['r'] - data['annotations'] = [x for x in data['annotations'] \ - if catid2freq[x['category_id']] not in exclude] - print('filtered #anns', len(data['annotations'])) - out_path = args.ann[:-5] + '_norare.json' - print('Saving to', out_path) - json.dump(data, open(out_path, 'w')) diff --git a/spaces/teamnassim/Fictionista/torch_utils/persistence.py b/spaces/teamnassim/Fictionista/torch_utils/persistence.py deleted file mode 100644 index f90ce85e8ace0f44e839158b22c5790de448d82d..0000000000000000000000000000000000000000 --- a/spaces/teamnassim/Fictionista/torch_utils/persistence.py +++ /dev/null @@ -1,251 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Facilities for pickling Python code alongside other data. - -The pickled code is automatically imported into a separate Python module -during unpickling. This way, any previously exported pickles will remain -usable even if the original code is no longer available, or if the current -version of the code is not consistent with what was originally pickled.""" - -import sys -import pickle -import io -import inspect -import copy -import uuid -import types -import dnnlib - -#---------------------------------------------------------------------------- - -_version = 6 # internal version number -_decorators = set() # {decorator_class, ...} -_import_hooks = [] # [hook_function, ...] -_module_to_src_dict = dict() # {module: src, ...} -_src_to_module_dict = dict() # {src: module, ...} - -#---------------------------------------------------------------------------- - -def persistent_class(orig_class): - r"""Class decorator that extends a given class to save its source code - when pickled. - - Example: - - from torch_utils import persistence - - @persistence.persistent_class - class MyNetwork(torch.nn.Module): - def __init__(self, num_inputs, num_outputs): - super().__init__() - self.fc = MyLayer(num_inputs, num_outputs) - ... - - @persistence.persistent_class - class MyLayer(torch.nn.Module): - ... - - When pickled, any instance of `MyNetwork` and `MyLayer` will save its - source code alongside other internal state (e.g., parameters, buffers, - and submodules). This way, any previously exported pickle will remain - usable even if the class definitions have been modified or are no - longer available. - - The decorator saves the source code of the entire Python module - containing the decorated class. It does *not* save the source code of - any imported modules. Thus, the imported modules must be available - during unpickling, also including `torch_utils.persistence` itself. - - It is ok to call functions defined in the same module from the - decorated class. 
However, if the decorated class depends on other - classes defined in the same module, they must be decorated as well. - This is illustrated in the above example in the case of `MyLayer`. - - It is also possible to employ the decorator just-in-time before - calling the constructor. For example: - - cls = MyLayer - if want_to_make_it_persistent: - cls = persistence.persistent_class(cls) - layer = cls(num_inputs, num_outputs) - - As an additional feature, the decorator also keeps track of the - arguments that were used to construct each instance of the decorated - class. The arguments can be queried via `obj.init_args` and - `obj.init_kwargs`, and they are automatically pickled alongside other - object state. A typical use case is to first unpickle a previous - instance of a persistent class, and then upgrade it to use the latest - version of the source code: - - with open('old_pickle.pkl', 'rb') as f: - old_net = pickle.load(f) - new_net = MyNetwork(*old_obj.init_args, **old_obj.init_kwargs) - misc.copy_params_and_buffers(old_net, new_net, require_all=True) - """ - assert isinstance(orig_class, type) - if is_persistent(orig_class): - return orig_class - - assert orig_class.__module__ in sys.modules - orig_module = sys.modules[orig_class.__module__] - orig_module_src = _module_to_src(orig_module) - - class Decorator(orig_class): - _orig_module_src = orig_module_src - _orig_class_name = orig_class.__name__ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self._init_args = copy.deepcopy(args) - self._init_kwargs = copy.deepcopy(kwargs) - assert orig_class.__name__ in orig_module.__dict__ - _check_pickleable(self.__reduce__()) - - @property - def init_args(self): - return copy.deepcopy(self._init_args) - - @property - def init_kwargs(self): - return dnnlib.EasyDict(copy.deepcopy(self._init_kwargs)) - - def __reduce__(self): - fields = list(super().__reduce__()) - fields += [None] * max(3 - len(fields), 0) - if fields[0] is not _reconstruct_persistent_obj: - meta = dict(type='class', version=_version, module_src=self._orig_module_src, class_name=self._orig_class_name, state=fields[2]) - fields[0] = _reconstruct_persistent_obj # reconstruct func - fields[1] = (meta,) # reconstruct args - fields[2] = None # state dict - return tuple(fields) - - Decorator.__name__ = orig_class.__name__ - _decorators.add(Decorator) - return Decorator - -#---------------------------------------------------------------------------- - -def is_persistent(obj): - r"""Test whether the given object or class is persistent, i.e., - whether it will save its source code when pickled. - """ - try: - if obj in _decorators: - return True - except TypeError: - pass - return type(obj) in _decorators # pylint: disable=unidiomatic-typecheck - -#---------------------------------------------------------------------------- - -def import_hook(hook): - r"""Register an import hook that is called whenever a persistent object - is being unpickled. A typical use case is to patch the pickled source - code to avoid errors and inconsistencies when the API of some imported - module has changed. - - The hook should have the following signature: - - hook(meta) -> modified meta - - `meta` is an instance of `dnnlib.EasyDict` with the following fields: - - type: Type of the persistent object, e.g. `'class'`. - version: Internal version number of `torch_utils.persistence`. - module_src Original source code of the Python module. - class_name: Class name in the original Python module. - state: Internal state of the object. 
- - Example: - - @persistence.import_hook - def wreck_my_network(meta): - if meta.class_name == 'MyNetwork': - print('MyNetwork is being imported. I will wreck it!') - meta.module_src = meta.module_src.replace("True", "False") - return meta - """ - assert callable(hook) - _import_hooks.append(hook) - -#---------------------------------------------------------------------------- - -def _reconstruct_persistent_obj(meta): - r"""Hook that is called internally by the `pickle` module to unpickle - a persistent object. - """ - meta = dnnlib.EasyDict(meta) - meta.state = dnnlib.EasyDict(meta.state) - for hook in _import_hooks: - meta = hook(meta) - assert meta is not None - - assert meta.version == _version - module = _src_to_module(meta.module_src) - - assert meta.type == 'class' - orig_class = module.__dict__[meta.class_name] - decorator_class = persistent_class(orig_class) - obj = decorator_class.__new__(decorator_class) - - setstate = getattr(obj, '__setstate__', None) - if callable(setstate): - setstate(meta.state) # pylint: disable=not-callable - else: - obj.__dict__.update(meta.state) - return obj - -#---------------------------------------------------------------------------- - -def _module_to_src(module): - r"""Query the source code of a given Python module. - """ - src = _module_to_src_dict.get(module, None) - if src is None: - src = inspect.getsource(module) - _module_to_src_dict[module] = src - _src_to_module_dict[src] = module - return src - -def _src_to_module(src): - r"""Get or create a Python module for the given source code. - """ - module = _src_to_module_dict.get(src, None) - if module is None: - module_name = "_imported_module_" + uuid.uuid4().hex - module = types.ModuleType(module_name) - sys.modules[module_name] = module - _module_to_src_dict[module] = src - _src_to_module_dict[src] = module - exec(src, module.__dict__) # pylint: disable=exec-used - return module - -#---------------------------------------------------------------------------- - -def _check_pickleable(obj): - r"""Check that the given object is pickleable, raising an exception if - it is not. This function is expected to be considerably more efficient - than actually pickling the object. - """ - def recurse(obj): - if isinstance(obj, (list, tuple, set)): - return [recurse(x) for x in obj] - if isinstance(obj, dict): - return [[recurse(x), recurse(y)] for x, y in obj.items()] - if isinstance(obj, (str, int, float, bool, bytes, bytearray)): - return None # Python primitive types are pickleable. - if f'{type(obj).__module__}.{type(obj).__name__}' in ['numpy.ndarray', 'torch.Tensor', 'torch.nn.parameter.Parameter']: - return None # NumPy arrays and PyTorch tensors are pickleable. - if is_persistent(obj): - return None # Persistent objects are pickleable, by virtue of the constructor check. - return obj - with io.BytesIO() as f: - pickle.dump(recurse(obj), f) - -#---------------------------------------------------------------------------- diff --git a/spaces/terfces0erbo/CollegeProjectV2/Downloadwindows8132bithighlycompressedgame TOP.md b/spaces/terfces0erbo/CollegeProjectV2/Downloadwindows8132bithighlycompressedgame TOP.md deleted file mode 100644 index 74fdb72f16ca7fe6563c45d14331d06060f4d327..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Downloadwindows8132bithighlycompressedgame TOP.md +++ /dev/null @@ -1,6 +0,0 @@ -

      downloadwindows8132bithighlycompressedgame


      Download Zip >>>>> https://bytlly.com/2uGiHs



      -
-
      -
      -
      -

      diff --git a/spaces/timqian/like-history/style.css b/spaces/timqian/like-history/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/timqian/like-history/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/command/build_py.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/command/build_py.py deleted file mode 100644 index 2fced3d6d57b74a9976628a2d850a00b9200d777..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/command/build_py.py +++ /dev/null @@ -1,298 +0,0 @@ -from functools import partial -from glob import glob -from distutils.util import convert_path -import distutils.command.build_py as orig -import os -import fnmatch -import textwrap -import io -import distutils.errors -import itertools -import stat -import warnings -from pathlib import Path -from setuptools._deprecation_warning import SetuptoolsDeprecationWarning -from setuptools.extern.more_itertools import unique_everseen - - -def make_writable(target): - os.chmod(target, os.stat(target).st_mode | stat.S_IWRITE) - - -class build_py(orig.build_py): - """Enhanced 'build_py' command that includes data files with packages - - The data files are specified via a 'package_data' argument to 'setup()'. - See 'setuptools.dist.Distribution' for more details. - - Also, this version of the 'build_py' command allows you to specify both - 'py_modules' and 'packages' in the same setup operation. - """ - - def finalize_options(self): - orig.build_py.finalize_options(self) - self.package_data = self.distribution.package_data - self.exclude_package_data = self.distribution.exclude_package_data or {} - if 'data_files' in self.__dict__: - del self.__dict__['data_files'] - self.__updated_files = [] - - def run(self): - """Build modules, packages, and copy data files to build directory""" - if not self.py_modules and not self.packages: - return - - if self.py_modules: - self.build_modules() - - if self.packages: - self.build_packages() - self.build_package_data() - - # Only compile actual .py files, using our base class' idea of what our - # output files are. 
- self.byte_compile(orig.build_py.get_outputs(self, include_bytecode=0)) - - def __getattr__(self, attr): - "lazily compute data files" - if attr == 'data_files': - self.data_files = self._get_data_files() - return self.data_files - return orig.build_py.__getattr__(self, attr) - - def build_module(self, module, module_file, package): - outfile, copied = orig.build_py.build_module(self, module, module_file, package) - if copied: - self.__updated_files.append(outfile) - return outfile, copied - - def _get_data_files(self): - """Generate list of '(package,src_dir,build_dir,filenames)' tuples""" - self.analyze_manifest() - return list(map(self._get_pkg_data_files, self.packages or ())) - - def get_data_files_without_manifest(self): - """ - Generate list of ``(package,src_dir,build_dir,filenames)`` tuples, - but without triggering any attempt to analyze or build the manifest. - """ - # Prevent eventual errors from unset `manifest_files` - # (that would otherwise be set by `analyze_manifest`) - self.__dict__.setdefault('manifest_files', {}) - return list(map(self._get_pkg_data_files, self.packages or ())) - - def _get_pkg_data_files(self, package): - # Locate package source directory - src_dir = self.get_package_dir(package) - - # Compute package build directory - build_dir = os.path.join(*([self.build_lib] + package.split('.'))) - - # Strip directory from globbed filenames - filenames = [ - os.path.relpath(file, src_dir) - for file in self.find_data_files(package, src_dir) - ] - return package, src_dir, build_dir, filenames - - def find_data_files(self, package, src_dir): - """Return filenames for package's data files in 'src_dir'""" - patterns = self._get_platform_patterns( - self.package_data, - package, - src_dir, - ) - globs_expanded = map(partial(glob, recursive=True), patterns) - # flatten the expanded globs into an iterable of matches - globs_matches = itertools.chain.from_iterable(globs_expanded) - glob_files = filter(os.path.isfile, globs_matches) - files = itertools.chain( - self.manifest_files.get(package, []), - glob_files, - ) - return self.exclude_data_files(package, src_dir, files) - - def build_package_data(self): - """Copy data files into build directory""" - for package, src_dir, build_dir, filenames in self.data_files: - for filename in filenames: - target = os.path.join(build_dir, filename) - self.mkpath(os.path.dirname(target)) - srcfile = os.path.join(src_dir, filename) - outf, copied = self.copy_file(srcfile, target) - make_writable(target) - srcfile = os.path.abspath(srcfile) - - def analyze_manifest(self): - self.manifest_files = mf = {} - if not self.distribution.include_package_data: - return - src_dirs = {} - for package in self.packages or (): - # Locate package source directory - src_dirs[assert_relative(self.get_package_dir(package))] = package - - self.run_command('egg_info') - check = _IncludePackageDataAbuse() - ei_cmd = self.get_finalized_command('egg_info') - for path in ei_cmd.filelist.files: - d, f = os.path.split(assert_relative(path)) - prev = None - oldf = f - while d and d != prev and d not in src_dirs: - prev = d - d, df = os.path.split(d) - f = os.path.join(df, f) - if d in src_dirs: - if f == oldf: - if check.is_module(f): - continue # it's a module, not data - else: - importable = check.importable_subpackage(src_dirs[d], f) - if importable: - check.warn(importable) - mf.setdefault(src_dirs[d], []).append(path) - - def get_data_files(self): - pass # Lazily compute data files in _get_data_files() function. 
- - def check_package(self, package, package_dir): - """Check namespace packages' __init__ for declare_namespace""" - try: - return self.packages_checked[package] - except KeyError: - pass - - init_py = orig.build_py.check_package(self, package, package_dir) - self.packages_checked[package] = init_py - - if not init_py or not self.distribution.namespace_packages: - return init_py - - for pkg in self.distribution.namespace_packages: - if pkg == package or pkg.startswith(package + '.'): - break - else: - return init_py - - with io.open(init_py, 'rb') as f: - contents = f.read() - if b'declare_namespace' not in contents: - raise distutils.errors.DistutilsError( - "Namespace package problem: %s is a namespace package, but " - "its\n__init__.py does not call declare_namespace()! Please " - 'fix it.\n(See the setuptools manual under ' - '"Namespace Packages" for details.)\n"' % (package,) - ) - return init_py - - def initialize_options(self): - self.packages_checked = {} - orig.build_py.initialize_options(self) - - def get_package_dir(self, package): - res = orig.build_py.get_package_dir(self, package) - if self.distribution.src_root is not None: - return os.path.join(self.distribution.src_root, res) - return res - - def exclude_data_files(self, package, src_dir, files): - """Filter filenames for package's data files in 'src_dir'""" - files = list(files) - patterns = self._get_platform_patterns( - self.exclude_package_data, - package, - src_dir, - ) - match_groups = (fnmatch.filter(files, pattern) for pattern in patterns) - # flatten the groups of matches into an iterable of matches - matches = itertools.chain.from_iterable(match_groups) - bad = set(matches) - keepers = (fn for fn in files if fn not in bad) - # ditch dupes - return list(unique_everseen(keepers)) - - @staticmethod - def _get_platform_patterns(spec, package, src_dir): - """ - yield platform-specific path patterns (suitable for glob - or fn_match) from a glob-based spec (such as - self.package_data or self.exclude_package_data) - matching package in src_dir. - """ - raw_patterns = itertools.chain( - spec.get('', []), - spec.get(package, []), - ) - return ( - # Each pattern has to be converted to a platform-specific path - os.path.join(src_dir, convert_path(pattern)) - for pattern in raw_patterns - ) - - -def assert_relative(path): - if not os.path.isabs(path): - return path - from distutils.errors import DistutilsSetupError - - msg = ( - textwrap.dedent( - """ - Error: setup script specifies an absolute path: - - %s - - setup() arguments must *always* be /-separated paths relative to the - setup.py directory, *never* absolute paths. - """ - ).lstrip() - % path - ) - raise DistutilsSetupError(msg) - - -class _IncludePackageDataAbuse: - """Inform users that package or module is included as 'data file'""" - - MESSAGE = """\ - Installing {importable!r} as data is deprecated, please list it in `packages`. - !!\n\n - ############################ - # Package would be ignored # - ############################ - Python recognizes {importable!r} as an importable package, - but it is not listed in the `packages` configuration of setuptools. - - {importable!r} has been automatically added to the distribution only - because it may contain data files, but this behavior is likely to change - in future versions of setuptools (and therefore is considered deprecated). 
- - Please make sure that {importable!r} is included as a package by using - the `packages` configuration field or the proper discovery methods - (for example by using `find_namespace_packages(...)`/`find_namespace:` - instead of `find_packages(...)`/`find:`). - - You can read more about "package discovery" and "data files" on setuptools - documentation page. - \n\n!! - """ - - def __init__(self): - self._already_warned = set() - - def is_module(self, file): - return file.endswith(".py") and file[:-len(".py")].isidentifier() - - def importable_subpackage(self, parent, file): - pkg = Path(file).parent - parts = list(itertools.takewhile(str.isidentifier, pkg.parts)) - if parts: - return ".".join([parent, *parts]) - return None - - def warn(self, importable): - if importable not in self._already_warned: - msg = textwrap.dedent(self.MESSAGE).format(importable=importable) - warnings.warn(msg, SetuptoolsDeprecationWarning, stacklevel=2) - self._already_warned.add(importable) diff --git a/spaces/tlqkfdksldlrpwhswogksekrhzzz/translator_interpenr/README.md b/spaces/tlqkfdksldlrpwhswogksekrhzzz/translator_interpenr/README.md deleted file mode 100644 index 4f5218d80af8e2246008f9a4aca6adad2147f7b2..0000000000000000000000000000000000000000 --- a/spaces/tlqkfdksldlrpwhswogksekrhzzz/translator_interpenr/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Translator Interpenr -emoji: 📚 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/tomandandy/MusicGen3/audiocraft/utils/autocast.py b/spaces/tomandandy/MusicGen3/audiocraft/utils/autocast.py deleted file mode 100644 index ed644843bb37cf8a92a20fbd51d6cebaa43b9a08..0000000000000000000000000000000000000000 --- a/spaces/tomandandy/MusicGen3/audiocraft/utils/autocast.py +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - - -class TorchAutocast: - """TorchAutocast utility class. - Allows you to enable and disable autocast. This is specially useful - when dealing with different architectures and clusters with different - levels of support. - - Args: - enabled (bool): Whether to enable torch.autocast or not. - args: Additional args for torch.autocast. 
- kwargs: Additional kwargs for torch.autocast - """ - def __init__(self, enabled: bool, *args, **kwargs): - self.autocast = torch.autocast(*args, **kwargs) if enabled else None - - def __enter__(self): - if self.autocast is None: - return - try: - self.autocast.__enter__() - except RuntimeError: - device = self.autocast.device - dtype = self.autocast.fast_dtype - raise RuntimeError( - f"There was an error autocasting with dtype={dtype} device={device}\n" - "If you are on the FAIR Cluster, you might need to use autocast_dtype=float16" - ) - - def __exit__(self, *args, **kwargs): - if self.autocast is None: - return - self.autocast.__exit__(*args, **kwargs) diff --git a/spaces/tomg-group-umd/pez-dispenser/optim_utils.py b/spaces/tomg-group-umd/pez-dispenser/optim_utils.py deleted file mode 100644 index 6797ecd96d54eeaa065a15230a63b10d76fbce98..0000000000000000000000000000000000000000 --- a/spaces/tomg-group-umd/pez-dispenser/optim_utils.py +++ /dev/null @@ -1,223 +0,0 @@ -import random -import numpy as np -import requests -from io import BytesIO -from PIL import Image -from statistics import mean -import copy -import json -from typing import Any, Mapping - -import open_clip - -import torch - -from sentence_transformers.util import (semantic_search, - dot_score, - normalize_embeddings) - - -def read_json(filename: str) -> Mapping[str, Any]: - """Returns a Python dict representation of JSON object at input file.""" - with open(filename) as fp: - return json.load(fp) - - -def nn_project(curr_embeds, embedding_layer, print_hits=False): - with torch.no_grad(): - bsz,seq_len,emb_dim = curr_embeds.shape - - # Using the sentence transformers semantic search which is - # a dot product exact kNN search between a set of - # query vectors and a corpus of vectors - curr_embeds = curr_embeds.reshape((-1,emb_dim)) - curr_embeds = normalize_embeddings(curr_embeds) # queries - - embedding_matrix = embedding_layer.weight - embedding_matrix = normalize_embeddings(embedding_matrix) - - hits = semantic_search(curr_embeds, embedding_matrix, - query_chunk_size=curr_embeds.shape[0], - top_k=1, - score_function=dot_score) - - if print_hits: - all_hits = [] - for hit in hits: - all_hits.append(hit[0]["score"]) - print(f"mean hits:{mean(all_hits)}") - - nn_indices = torch.tensor([hit[0]["corpus_id"] for hit in hits], device=curr_embeds.device) - nn_indices = nn_indices.reshape((bsz,seq_len)) - - projected_embeds = embedding_layer(nn_indices) - - return projected_embeds, nn_indices - - -def set_random_seed(seed=0): - torch.manual_seed(seed + 0) - torch.cuda.manual_seed(seed + 1) - torch.cuda.manual_seed_all(seed + 2) - np.random.seed(seed + 3) - torch.cuda.manual_seed_all(seed + 4) - random.seed(seed + 5) - - -def decode_ids(input_ids, tokenizer, by_token=False): - input_ids = input_ids.detach().cpu().numpy() - - texts = [] - - if by_token: - for input_ids_i in input_ids: - curr_text = [] - for tmp in input_ids_i: - curr_text.append(tokenizer.decode([tmp])) - - texts.append('|'.join(curr_text)) - else: - for input_ids_i in input_ids: - texts.append(tokenizer.decode(input_ids_i)) - - return texts - - -def download_image(url): - try: - response = requests.get(url) - except: - return None - return Image.open(BytesIO(response.content)).convert("RGB") - - -def get_target_feature(model, preprocess, tokenizer_funct, device, target_images=None, target_prompts=None): - if target_images is not None: - with torch.no_grad(): - curr_images = [preprocess(i).unsqueeze(0) for i in target_images] - curr_images = 
torch.concatenate(curr_images).to(device) - all_target_features = model.encode_image(curr_images) - else: - texts = tokenizer_funct(target_prompts).to(device) - all_target_features = model.encode_text(texts) - - return all_target_features - - -def initialize_prompt(tokenizer, token_embedding, args, device): - prompt_len = args.prompt_len - - # randomly optimize prompt embeddings - prompt_ids = torch.randint(len(tokenizer.encoder), (args.prompt_bs, prompt_len)).to(device) - prompt_embeds = token_embedding(prompt_ids).detach() - prompt_embeds.requires_grad = True - - # initialize the template - template_text = "{}" - padded_template_text = template_text.format(" ".join([""] * prompt_len)) - dummy_ids = tokenizer.encode(padded_template_text) - - # -1 for optimized tokens - dummy_ids = [i if i != 49406 else -1 for i in dummy_ids] - dummy_ids = [49406] + dummy_ids + [49407] - dummy_ids += [0] * (77 - len(dummy_ids)) - dummy_ids = torch.tensor([dummy_ids] * args.prompt_bs).to(device) - - # for getting dummy embeds; -1 won't work for token_embedding - tmp_dummy_ids = copy.deepcopy(dummy_ids) - tmp_dummy_ids[tmp_dummy_ids == -1] = 0 - dummy_embeds = token_embedding(tmp_dummy_ids).detach() - dummy_embeds.requires_grad = False - - return prompt_embeds, dummy_embeds, dummy_ids - - -def optimize_prompt_loop(model, tokenizer, token_embedding, all_target_features, args, device): - opt_iters = args.iter - lr = args.lr - weight_decay = args.weight_decay - print_step = args.print_step - batch_size = args.batch_size - - # initialize prompt - prompt_embeds, dummy_embeds, dummy_ids = initialize_prompt(tokenizer, token_embedding, args, device) - p_bs, p_len, p_dim = prompt_embeds.shape - - # get optimizer - input_optimizer = torch.optim.AdamW([prompt_embeds], lr=lr, weight_decay=weight_decay) - - best_sim = 0 - best_text = "" - - for step in range(opt_iters): - # randomly sample sample images and get features - if batch_size is None: - target_features = all_target_features - else: - curr_indx = torch.randperm(len(all_target_features)) - target_features = all_target_features[curr_indx][0:batch_size] - - universal_target_features = all_target_features - - # forward projection - projected_embeds, nn_indices = nn_project(prompt_embeds, token_embedding, print_hits=False) - - # get cosine similarity score with all target features - with torch.no_grad(): - padded_embeds = dummy_embeds.detach().clone() - padded_embeds[dummy_ids == -1] = projected_embeds.reshape(-1, p_dim) - logits_per_image, _ = model.forward_text_embedding(padded_embeds, dummy_ids, universal_target_features) - scores_per_prompt = logits_per_image.mean(dim=0) - universal_cosim_score = scores_per_prompt.max().item() - best_indx = scores_per_prompt.argmax().item() - - tmp_embeds = prompt_embeds.detach().clone() - tmp_embeds.data = projected_embeds.data - tmp_embeds.requires_grad = True - - # padding - padded_embeds = dummy_embeds.detach().clone() - padded_embeds[dummy_ids == -1] = tmp_embeds.reshape(-1, p_dim) - - logits_per_image, _ = model.forward_text_embedding(padded_embeds, dummy_ids, target_features) - cosim_scores = logits_per_image - loss = 1 - cosim_scores.mean() - - prompt_embeds.grad, = torch.autograd.grad(loss, [tmp_embeds]) - - input_optimizer.step() - input_optimizer.zero_grad() - - curr_lr = input_optimizer.param_groups[0]["lr"] - cosim_scores = cosim_scores.mean().item() - - decoded_text = decode_ids(nn_indices, tokenizer)[best_indx] - if print_step is not None and (step % print_step == 0 or step == opt_iters-1): - print(f"step: {step}, 
lr: {curr_lr}, cosim: {universal_cosim_score:.3f}, text: {decoded_text}") - - if best_sim < universal_cosim_score: - best_sim = universal_cosim_score - - best_text = decoded_text - - if print_step is not None: - print() - print(f"best cosine sim: {best_sim}") - print(f"best prompt: {best_text}") - - return best_text - - -def optimize_prompt(model, preprocess, args, device, target_images=None, target_prompts=None): - token_embedding = model.token_embedding - tokenizer = open_clip.tokenizer._tokenizer - tokenizer_funct = open_clip.get_tokenizer(args.clip_model) - - # get target features - all_target_features = get_target_feature(model, preprocess, tokenizer_funct, device, target_images=target_images, target_prompts=target_prompts) - - # optimize prompt - learned_prompt = optimize_prompt_loop(model, tokenizer, token_embedding, all_target_features, args, device) - - return learned_prompt - \ No newline at end of file diff --git a/spaces/tomofi/MMOCR/configs/textrecog/crnn/crnn_toy_dataset.py b/spaces/tomofi/MMOCR/configs/textrecog/crnn/crnn_toy_dataset.py deleted file mode 100644 index f61c68afe285e4d1943cbcbb8ede1fe965a99a4b..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/configs/textrecog/crnn/crnn_toy_dataset.py +++ /dev/null @@ -1,47 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/recog_pipelines/crnn_pipeline.py', - '../../_base_/recog_datasets/toy_data.py', - '../../_base_/schedules/schedule_adadelta_5e.py' -] - -label_convertor = dict( - type='CTCConvertor', dict_type='DICT36', with_unknown=True, lower=True) - -model = dict( - type='CRNNNet', - preprocessor=None, - backbone=dict(type='VeryDeepVgg', leaky_relu=False, input_channels=1), - encoder=None, - decoder=dict(type='CRNNDecoder', in_channels=512, rnn_flag=True), - loss=dict(type='CTCLoss'), - label_convertor=label_convertor, - pretrained=None) - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline = {{_base_.test_pipeline}} - -data = dict( - samples_per_gpu=32, - workers_per_gpu=2, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline)) - -evaluation = dict(interval=1, metric='acc') - -cudnn_benchmark = True diff --git a/spaces/tomofi/MMOCR/mmocr/models/textdet/dense_heads/head_mixin.py b/spaces/tomofi/MMOCR/mmocr/models/textdet/dense_heads/head_mixin.py deleted file mode 100644 index c232e3bea95c2ee5e40b64c65162dfca4884e2d2..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/mmocr/models/textdet/dense_heads/head_mixin.py +++ /dev/null @@ -1,91 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np - -from mmocr.models.builder import HEADS, build_loss, build_postprocessor -from mmocr.utils import check_argument - - -@HEADS.register_module() -class HeadMixin: - """Base head class for text detection, including loss calcalation and - postprocess. - - Args: - loss (dict): Config to build loss. - postprocessor (dict): Config to build postprocessor. 
- """ - - def __init__(self, loss, postprocessor): - assert isinstance(loss, dict) - assert isinstance(postprocessor, dict) - - self.loss_module = build_loss(loss) - self.postprocessor = build_postprocessor(postprocessor) - - def resize_boundary(self, boundaries, scale_factor): - """Rescale boundaries via scale_factor. - - Args: - boundaries (list[list[float]]): The boundary list. Each boundary - has :math:`2k+1` elements with :math:`k>=4`. - scale_factor (ndarray): The scale factor of size :math:`(4,)`. - - Returns: - list[list[float]]: The scaled boundaries. - """ - assert check_argument.is_2dlist(boundaries) - assert isinstance(scale_factor, np.ndarray) - assert scale_factor.shape[0] == 4 - - for b in boundaries: - sz = len(b) - check_argument.valid_boundary(b, True) - b[:sz - - 1] = (np.array(b[:sz - 1]) * - (np.tile(scale_factor[:2], int( - (sz - 1) / 2)).reshape(1, sz - 1))).flatten().tolist() - return boundaries - - def get_boundary(self, score_maps, img_metas, rescale): - """Compute text boundaries via post processing. - - Args: - score_maps (Tensor): The text score map. - img_metas (dict): The image meta info. - rescale (bool): Rescale boundaries to the original image resolution - if true, and keep the score_maps resolution if false. - - Returns: - dict: A dict where boundary results are stored in - ``boundary_result``. - """ - - assert check_argument.is_type_list(img_metas, dict) - assert isinstance(rescale, bool) - - score_maps = score_maps.squeeze() - boundaries = self.postprocessor(score_maps) - - if rescale: - boundaries = self.resize_boundary( - boundaries, - 1.0 / self.downsample_ratio / img_metas[0]['scale_factor']) - - results = dict( - boundary_result=boundaries, filename=img_metas[0]['filename']) - - return results - - def loss(self, pred_maps, **kwargs): - """Compute the loss for scene text detection. - - Args: - pred_maps (Tensor): The input score maps of shape - :math:`(NxCxHxW)`. - - Returns: - dict: The dict for losses. - """ - losses = self.loss_module(pred_maps, self.downsample_ratio, **kwargs) - - return losses diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gcnet/mask_rcnn_r101_fpn_r16_gcb_c3-c5_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gcnet/mask_rcnn_r101_fpn_r16_gcb_c3-c5_1x_coco.py deleted file mode 100644 index ad6ad47696e6aeb2b3505abab0bd2d49d3b7aa83..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gcnet/mask_rcnn_r101_fpn_r16_gcb_c3-c5_1x_coco.py +++ /dev/null @@ -1,8 +0,0 @@ -_base_ = '../mask_rcnn/mask_rcnn_r101_fpn_1x_coco.py' -model = dict( - backbone=dict(plugins=[ - dict( - cfg=dict(type='ContextBlock', ratio=1. 
/ 16), - stages=(False, True, True, True), - position='after_conv3') - ])) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/rpn/rpn_r50_caffe_c4_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/rpn/rpn_r50_caffe_c4_1x_coco.py deleted file mode 100644 index 6da0ee94906fd8febaf69786976e478ef8f35c9e..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/rpn/rpn_r50_caffe_c4_1x_coco.py +++ /dev/null @@ -1,38 +0,0 @@ -_base_ = [ - '../_base_/models/rpn_r50_caffe_c4.py', - '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] -# dataset settings -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_label=False), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -evaluation = dict(interval=1, metric='proposal_fast') diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/roi_heads/bbox_heads/dii_head.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/roi_heads/bbox_heads/dii_head.py deleted file mode 100644 index cf708eb090eda99c2a88764318cff60ebf8feb2e..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/roi_heads/bbox_heads/dii_head.py +++ /dev/null @@ -1,421 +0,0 @@ -import torch -import torch.nn as nn -from mmcv.cnn import (bias_init_with_prob, build_activation_layer, - build_norm_layer) -from mmcv.cnn.bricks.transformer import FFN, MultiheadAttention -from mmcv.runner import auto_fp16, force_fp32 - -from mmdet.core import multi_apply -from mmdet.models.builder import HEADS, build_loss -from mmdet.models.dense_heads.atss_head import reduce_mean -from mmdet.models.losses import accuracy -from mmdet.models.utils import build_transformer -from .bbox_head import BBoxHead - - -@HEADS.register_module() -class DIIHead(BBoxHead): - r"""Dynamic Instance Interactive Head for `Sparse R-CNN: End-to-End Object - Detection with Learnable Proposals `_ - - Args: - num_classes (int): Number of class in dataset. - Defaults to 80. - num_ffn_fcs (int): The number of fully-connected - layers in FFNs. Defaults to 2. - num_heads (int): The hidden dimension of FFNs. - Defaults to 8. - num_cls_fcs (int): The number of fully-connected - layers in classification subnet. Defaults to 1. - num_reg_fcs (int): The number of fully-connected - layers in regression subnet. Defaults to 3. - feedforward_channels (int): The hidden dimension - of FFNs. Defaults to 2048 - in_channels (int): Hidden_channels of MultiheadAttention. - Defaults to 256. - dropout (float): Probability of drop the channel. 
- Defaults to 0.0 - ffn_act_cfg (dict): The activation config for FFNs. - dynamic_conv_cfg (dict): The convolution config - for DynamicConv. - loss_iou (dict): The config for iou or giou loss. - - """ - - def __init__(self, - num_classes=80, - num_ffn_fcs=2, - num_heads=8, - num_cls_fcs=1, - num_reg_fcs=3, - feedforward_channels=2048, - in_channels=256, - dropout=0.0, - ffn_act_cfg=dict(type='ReLU', inplace=True), - dynamic_conv_cfg=dict( - type='DynamicConv', - in_channels=256, - feat_channels=64, - out_channels=256, - input_feat_shape=7, - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN')), - loss_iou=dict(type='GIoULoss', loss_weight=2.0), - init_cfg=None, - **kwargs): - assert init_cfg is None, 'To prevent abnormal initialization ' \ - 'behavior, init_cfg is not allowed to be set' - super(DIIHead, self).__init__( - num_classes=num_classes, - reg_decoded_bbox=True, - reg_class_agnostic=True, - init_cfg=init_cfg, - **kwargs) - self.loss_iou = build_loss(loss_iou) - self.in_channels = in_channels - self.fp16_enabled = False - self.attention = MultiheadAttention(in_channels, num_heads, dropout) - self.attention_norm = build_norm_layer(dict(type='LN'), in_channels)[1] - - self.instance_interactive_conv = build_transformer(dynamic_conv_cfg) - self.instance_interactive_conv_dropout = nn.Dropout(dropout) - self.instance_interactive_conv_norm = build_norm_layer( - dict(type='LN'), in_channels)[1] - - self.ffn = FFN( - in_channels, - feedforward_channels, - num_ffn_fcs, - act_cfg=ffn_act_cfg, - dropout=dropout) - self.ffn_norm = build_norm_layer(dict(type='LN'), in_channels)[1] - - self.cls_fcs = nn.ModuleList() - for _ in range(num_cls_fcs): - self.cls_fcs.append( - nn.Linear(in_channels, in_channels, bias=False)) - self.cls_fcs.append( - build_norm_layer(dict(type='LN'), in_channels)[1]) - self.cls_fcs.append( - build_activation_layer(dict(type='ReLU', inplace=True))) - - # over load the self.fc_cls in BBoxHead - if self.loss_cls.use_sigmoid: - self.fc_cls = nn.Linear(in_channels, self.num_classes) - else: - self.fc_cls = nn.Linear(in_channels, self.num_classes + 1) - - self.reg_fcs = nn.ModuleList() - for _ in range(num_reg_fcs): - self.reg_fcs.append( - nn.Linear(in_channels, in_channels, bias=False)) - self.reg_fcs.append( - build_norm_layer(dict(type='LN'), in_channels)[1]) - self.reg_fcs.append( - build_activation_layer(dict(type='ReLU', inplace=True))) - # over load the self.fc_cls in BBoxHead - self.fc_reg = nn.Linear(in_channels, 4) - - assert self.reg_class_agnostic, 'DIIHead only ' \ - 'suppport `reg_class_agnostic=True` ' - assert self.reg_decoded_bbox, 'DIIHead only ' \ - 'suppport `reg_decoded_bbox=True`' - - def init_weights(self): - """Use xavier initialization for all weight parameter and set - classification head bias as a specific value when use focal loss.""" - super(DIIHead, self).init_weights() - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - else: - # adopt the default initialization for - # the weight and bias of the layer norm - pass - if self.loss_cls.use_sigmoid: - bias_init = bias_init_with_prob(0.01) - nn.init.constant_(self.fc_cls.bias, bias_init) - - @auto_fp16() - def forward(self, roi_feat, proposal_feat): - """Forward function of Dynamic Instance Interactive Head. - - Args: - roi_feat (Tensor): Roi-pooling features with shape - (batch_size*num_proposals, feature_dimensions, - pooling_h , pooling_w). 
- proposal_feat (Tensor): Intermediate feature get from - diihead in last stage, has shape - (batch_size, num_proposals, feature_dimensions) - - Returns: - tuple[Tensor]: Usually a tuple of classification scores - and bbox prediction and a intermediate feature. - - - cls_scores (Tensor): Classification scores for - all proposals, has shape - (batch_size, num_proposals, num_classes). - - bbox_preds (Tensor): Box energies / deltas for - all proposals, has shape - (batch_size, num_proposals, 4). - - obj_feat (Tensor): Object feature before classification - and regression subnet, has shape - (batch_size, num_proposal, feature_dimensions). - """ - N, num_proposals = proposal_feat.shape[:2] - - # Self attention - proposal_feat = proposal_feat.permute(1, 0, 2) - proposal_feat = self.attention_norm(self.attention(proposal_feat)) - - # instance interactive - proposal_feat = proposal_feat.permute(1, 0, - 2).reshape(-1, self.in_channels) - proposal_feat_iic = self.instance_interactive_conv( - proposal_feat, roi_feat) - proposal_feat = proposal_feat + self.instance_interactive_conv_dropout( - proposal_feat_iic) - obj_feat = self.instance_interactive_conv_norm(proposal_feat) - - # FFN - obj_feat = self.ffn_norm(self.ffn(obj_feat)) - - cls_feat = obj_feat - reg_feat = obj_feat - - for cls_layer in self.cls_fcs: - cls_feat = cls_layer(cls_feat) - for reg_layer in self.reg_fcs: - reg_feat = reg_layer(reg_feat) - - cls_score = self.fc_cls(cls_feat).view(N, num_proposals, -1) - bbox_delta = self.fc_reg(reg_feat).view(N, num_proposals, -1) - - return cls_score, bbox_delta, obj_feat.view(N, num_proposals, -1) - - @force_fp32(apply_to=('cls_score', 'bbox_pred')) - def loss(self, - cls_score, - bbox_pred, - labels, - label_weights, - bbox_targets, - bbox_weights, - imgs_whwh=None, - reduction_override=None, - **kwargs): - """"Loss function of DIIHead, get loss of all images. - - Args: - cls_score (Tensor): Classification prediction - results of all class, has shape - (batch_size * num_proposals_single_image, num_classes) - bbox_pred (Tensor): Regression prediction results, - has shape - (batch_size * num_proposals_single_image, 4), the last - dimension 4 represents [tl_x, tl_y, br_x, br_y]. - labels (Tensor): Label of each proposals, has shape - (batch_size * num_proposals_single_image - label_weights (Tensor): Classification loss - weight of each proposals, has shape - (batch_size * num_proposals_single_image - bbox_targets (Tensor): Regression targets of each - proposals, has shape - (batch_size * num_proposals_single_image, 4), - the last dimension 4 represents - [tl_x, tl_y, br_x, br_y]. - bbox_weights (Tensor): Regression loss weight of each - proposals's coordinate, has shape - (batch_size * num_proposals_single_image, 4), - imgs_whwh (Tensor): imgs_whwh (Tensor): Tensor with\ - shape (batch_size, num_proposals, 4), the last - dimension means - [img_width,img_height, img_width, img_height]. - reduction_override (str, optional): The reduction - method used to override the original reduction - method of the loss. Options are "none", - "mean" and "sum". 
Defaults to None, - - Returns: - dict[str, Tensor]: Dictionary of loss components - """ - losses = dict() - bg_class_ind = self.num_classes - # note in spare rcnn num_gt == num_pos - pos_inds = (labels >= 0) & (labels < bg_class_ind) - num_pos = pos_inds.sum().float() - avg_factor = reduce_mean(num_pos) - if cls_score is not None: - if cls_score.numel() > 0: - losses['loss_cls'] = self.loss_cls( - cls_score, - labels, - label_weights, - avg_factor=avg_factor, - reduction_override=reduction_override) - losses['pos_acc'] = accuracy(cls_score[pos_inds], - labels[pos_inds]) - if bbox_pred is not None: - # 0~self.num_classes-1 are FG, self.num_classes is BG - # do not perform bounding box regression for BG anymore. - if pos_inds.any(): - pos_bbox_pred = bbox_pred.reshape(bbox_pred.size(0), - 4)[pos_inds.type(torch.bool)] - imgs_whwh = imgs_whwh.reshape(bbox_pred.size(0), - 4)[pos_inds.type(torch.bool)] - losses['loss_bbox'] = self.loss_bbox( - pos_bbox_pred / imgs_whwh, - bbox_targets[pos_inds.type(torch.bool)] / imgs_whwh, - bbox_weights[pos_inds.type(torch.bool)], - avg_factor=avg_factor) - losses['loss_iou'] = self.loss_iou( - pos_bbox_pred, - bbox_targets[pos_inds.type(torch.bool)], - bbox_weights[pos_inds.type(torch.bool)], - avg_factor=avg_factor) - else: - losses['loss_bbox'] = bbox_pred.sum() * 0 - losses['loss_iou'] = bbox_pred.sum() * 0 - return losses - - def _get_target_single(self, pos_inds, neg_inds, pos_bboxes, neg_bboxes, - pos_gt_bboxes, pos_gt_labels, cfg): - """Calculate the ground truth for proposals in the single image - according to the sampling results. - - Almost the same as the implementation in `bbox_head`, - we add pos_inds and neg_inds to select positive and - negative samples instead of selecting the first num_pos - as positive samples. - - Args: - pos_inds (Tensor): The length is equal to the - positive sample numbers contain all index - of the positive sample in the origin proposal set. - neg_inds (Tensor): The length is equal to the - negative sample numbers contain all index - of the negative sample in the origin proposal set. - pos_bboxes (Tensor): Contains all the positive boxes, - has shape (num_pos, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - neg_bboxes (Tensor): Contains all the negative boxes, - has shape (num_neg, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - pos_gt_bboxes (Tensor): Contains all the gt_boxes, - has shape (num_gt, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - pos_gt_labels (Tensor): Contains all the gt_labels, - has shape (num_gt). - cfg (obj:`ConfigDict`): `train_cfg` of R-CNN. - - Returns: - Tuple[Tensor]: Ground truth for proposals in a single image. - Containing the following Tensors: - - - labels(Tensor): Gt_labels for all proposals, has - shape (num_proposals,). - - label_weights(Tensor): Labels_weights for all proposals, has - shape (num_proposals,). - - bbox_targets(Tensor):Regression target for all proposals, has - shape (num_proposals, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - - bbox_weights(Tensor):Regression weights for all proposals, - has shape (num_proposals, 4). 
- """ - num_pos = pos_bboxes.size(0) - num_neg = neg_bboxes.size(0) - num_samples = num_pos + num_neg - - # original implementation uses new_zeros since BG are set to be 0 - # now use empty & fill because BG cat_id = num_classes, - # FG cat_id = [0, num_classes-1] - labels = pos_bboxes.new_full((num_samples, ), - self.num_classes, - dtype=torch.long) - label_weights = pos_bboxes.new_zeros(num_samples) - bbox_targets = pos_bboxes.new_zeros(num_samples, 4) - bbox_weights = pos_bboxes.new_zeros(num_samples, 4) - if num_pos > 0: - labels[pos_inds] = pos_gt_labels - pos_weight = 1.0 if cfg.pos_weight <= 0 else cfg.pos_weight - label_weights[pos_inds] = pos_weight - if not self.reg_decoded_bbox: - pos_bbox_targets = self.bbox_coder.encode( - pos_bboxes, pos_gt_bboxes) - else: - pos_bbox_targets = pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1 - if num_neg > 0: - label_weights[neg_inds] = 1.0 - - return labels, label_weights, bbox_targets, bbox_weights - - def get_targets(self, - sampling_results, - gt_bboxes, - gt_labels, - rcnn_train_cfg, - concat=True): - """Calculate the ground truth for all samples in a batch according to - the sampling_results. - - Almost the same as the implementation in bbox_head, we passed - additional parameters pos_inds_list and neg_inds_list to - `_get_target_single` function. - - Args: - sampling_results (List[obj:SamplingResults]): Assign results of - all images in a batch after sampling. - gt_bboxes (list[Tensor]): Gt_bboxes of all images in a batch, - each tensor has shape (num_gt, 4), the last dimension 4 - represents [tl_x, tl_y, br_x, br_y]. - gt_labels (list[Tensor]): Gt_labels of all images in a batch, - each tensor has shape (num_gt,). - rcnn_train_cfg (obj:`ConfigDict`): `train_cfg` of RCNN. - concat (bool): Whether to concatenate the results of all - the images in a single batch. - - Returns: - Tuple[Tensor]: Ground truth for proposals in a single image. - Containing the following list of Tensors: - - - labels (list[Tensor],Tensor): Gt_labels for all - proposals in a batch, each tensor in list has - shape (num_proposals,) when `concat=False`, otherwise just - a single tensor has shape (num_all_proposals,). - - label_weights (list[Tensor]): Labels_weights for - all proposals in a batch, each tensor in list has shape - (num_proposals,) when `concat=False`, otherwise just a - single tensor has shape (num_all_proposals,). - - bbox_targets (list[Tensor],Tensor): Regression target - for all proposals in a batch, each tensor in list has - shape (num_proposals, 4) when `concat=False`, otherwise - just a single tensor has shape (num_all_proposals, 4), - the last dimension 4 represents [tl_x, tl_y, br_x, br_y]. - - bbox_weights (list[tensor],Tensor): Regression weights for - all proposals in a batch, each tensor in list has shape - (num_proposals, 4) when `concat=False`, otherwise just a - single tensor has shape (num_all_proposals, 4). 
- """ - pos_inds_list = [res.pos_inds for res in sampling_results] - neg_inds_list = [res.neg_inds for res in sampling_results] - pos_bboxes_list = [res.pos_bboxes for res in sampling_results] - neg_bboxes_list = [res.neg_bboxes for res in sampling_results] - pos_gt_bboxes_list = [res.pos_gt_bboxes for res in sampling_results] - pos_gt_labels_list = [res.pos_gt_labels for res in sampling_results] - labels, label_weights, bbox_targets, bbox_weights = multi_apply( - self._get_target_single, - pos_inds_list, - neg_inds_list, - pos_bboxes_list, - neg_bboxes_list, - pos_gt_bboxes_list, - pos_gt_labels_list, - cfg=rcnn_train_cfg) - if concat: - labels = torch.cat(labels, 0) - label_weights = torch.cat(label_weights, 0) - bbox_targets = torch.cat(bbox_targets, 0) - bbox_weights = torch.cat(bbox_weights, 0) - return labels, label_weights, bbox_targets, bbox_weights diff --git a/spaces/tracinginsights/api/Dockerfile b/spaces/tracinginsights/api/Dockerfile deleted file mode 100644 index 0b384523095644475fbc2e474fcd6ae6818c4b9c..0000000000000000000000000000000000000000 --- a/spaces/tracinginsights/api/Dockerfile +++ /dev/null @@ -1,35 +0,0 @@ -# FROM python:3.10.9 - -# WORKDIR /code - -# RUN mkdir /code/cache - -# COPY ./requirements.txt /code/requirements.txt - -# RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -# COPY . . - -# CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"] - -FROM python:3.10.9 - -WORKDIR /code - -RUN mkdir /code/cache && chmod a+rwx /code/cache && chmod a+rwx /code - - -COPY ./requirements.txt /code/requirements.txt - -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -ENV FASTF1_CACHE_DIR="/code/cache" - -# Add the following lines to create the main.db file -RUN touch /code/main.db -RUN chmod a+rwx /code/main.db - -COPY . . 
- -CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"] - diff --git a/spaces/uSerNameDDHL/bingo/src/lib/isomorphic/node.ts b/spaces/uSerNameDDHL/bingo/src/lib/isomorphic/node.ts deleted file mode 100644 index da213ad6a86181979f098309c374da02835db5a0..0000000000000000000000000000000000000000 --- a/spaces/uSerNameDDHL/bingo/src/lib/isomorphic/node.ts +++ /dev/null @@ -1,26 +0,0 @@ -import Debug from 'debug' - -const { fetch, setGlobalDispatcher, ProxyAgent } = require('undici') -const { HttpsProxyAgent } = require('https-proxy-agent') -const ws = require('ws') - -const debug = Debug('bingo') - -const httpProxy = process.env.http_proxy || process.env.HTTP_PROXY || process.env.https_proxy || process.env.HTTPS_PROXY; -let WebSocket = ws.WebSocket - -if (httpProxy) { - setGlobalDispatcher(new ProxyAgent(httpProxy)) - const agent = new HttpsProxyAgent(httpProxy) - // @ts-ignore - WebSocket = class extends ws.WebSocket { - constructor(address: string | URL, options: typeof ws.WebSocket) { - super(address, { - ...options, - agent, - }) - } - } -} - -export default { fetch, WebSocket, debug } diff --git a/spaces/ucalyptus/PTI/models/StyleCLIP/mapper/scripts/train.py b/spaces/ucalyptus/PTI/models/StyleCLIP/mapper/scripts/train.py deleted file mode 100644 index 4141436fb3edee8ab5f7576fde0c0e53b529ef66..0000000000000000000000000000000000000000 --- a/spaces/ucalyptus/PTI/models/StyleCLIP/mapper/scripts/train.py +++ /dev/null @@ -1,32 +0,0 @@ -""" -This file runs the main training/val loop -""" -import os -import json -import sys -import pprint - -sys.path.append(".") -sys.path.append("..") - -from mapper.options.train_options import TrainOptions -from mapper.training.coach import Coach - - -def main(opts): - if os.path.exists(opts.exp_dir): - raise Exception('Oops... {} already exists'.format(opts.exp_dir)) - os.makedirs(opts.exp_dir, exist_ok=True) - - opts_dict = vars(opts) - pprint.pprint(opts_dict) - with open(os.path.join(opts.exp_dir, 'opt.json'), 'w') as f: - json.dump(opts_dict, f, indent=4, sort_keys=True) - - coach = Coach(opts) - coach.train() - - -if __name__ == '__main__': - opts = TrainOptions().parse() - main(opts) diff --git a/spaces/umoubuton/atri-bert-vits2/text/tone_sandhi.py b/spaces/umoubuton/atri-bert-vits2/text/tone_sandhi.py deleted file mode 100644 index 6a6e4c3e64f1a9e8b9da73fc6fbebf8a33e5602d..0000000000000000000000000000000000000000 --- a/spaces/umoubuton/atri-bert-vits2/text/tone_sandhi.py +++ /dev/null @@ -1,769 +0,0 @@ -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-from typing import List -from typing import Tuple - -import jieba -from pypinyin import lazy_pinyin -from pypinyin import Style - - -class ToneSandhi: - def __init__(self): - self.must_neural_tone_words = { - "麻烦", - "麻利", - "鸳鸯", - "高粱", - "骨头", - "骆驼", - "马虎", - "首饰", - "馒头", - "馄饨", - "风筝", - "难为", - "队伍", - "阔气", - "闺女", - "门道", - "锄头", - "铺盖", - "铃铛", - "铁匠", - "钥匙", - "里脊", - "里头", - "部分", - "那么", - "道士", - "造化", - "迷糊", - "连累", - "这么", - "这个", - "运气", - "过去", - "软和", - "转悠", - "踏实", - "跳蚤", - "跟头", - "趔趄", - "财主", - "豆腐", - "讲究", - "记性", - "记号", - "认识", - "规矩", - "见识", - "裁缝", - "补丁", - "衣裳", - "衣服", - "衙门", - "街坊", - "行李", - "行当", - "蛤蟆", - "蘑菇", - "薄荷", - "葫芦", - "葡萄", - "萝卜", - "荸荠", - "苗条", - "苗头", - "苍蝇", - "芝麻", - "舒服", - "舒坦", - "舌头", - "自在", - "膏药", - "脾气", - "脑袋", - "脊梁", - "能耐", - "胳膊", - "胭脂", - "胡萝", - "胡琴", - "胡同", - "聪明", - "耽误", - "耽搁", - "耷拉", - "耳朵", - "老爷", - "老实", - "老婆", - "老头", - "老太", - "翻腾", - "罗嗦", - "罐头", - "编辑", - "结实", - "红火", - "累赘", - "糨糊", - "糊涂", - "精神", - "粮食", - "簸箕", - "篱笆", - "算计", - "算盘", - "答应", - "笤帚", - "笑语", - "笑话", - "窟窿", - "窝囊", - "窗户", - "稳当", - "稀罕", - "称呼", - "秧歌", - "秀气", - "秀才", - "福气", - "祖宗", - "砚台", - "码头", - "石榴", - "石头", - "石匠", - "知识", - "眼睛", - "眯缝", - "眨巴", - "眉毛", - "相声", - "盘算", - "白净", - "痢疾", - "痛快", - "疟疾", - "疙瘩", - "疏忽", - "畜生", - "生意", - "甘蔗", - "琵琶", - "琢磨", - "琉璃", - "玻璃", - "玫瑰", - "玄乎", - "狐狸", - "状元", - "特务", - "牲口", - "牙碜", - "牌楼", - "爽快", - "爱人", - "热闹", - "烧饼", - "烟筒", - "烂糊", - "点心", - "炊帚", - "灯笼", - "火候", - "漂亮", - "滑溜", - "溜达", - "温和", - "清楚", - "消息", - "浪头", - "活泼", - "比方", - "正经", - "欺负", - "模糊", - "槟榔", - "棺材", - "棒槌", - "棉花", - "核桃", - "栅栏", - "柴火", - "架势", - "枕头", - "枇杷", - "机灵", - "本事", - "木头", - "木匠", - "朋友", - "月饼", - "月亮", - "暖和", - "明白", - "时候", - "新鲜", - "故事", - "收拾", - "收成", - "提防", - "挖苦", - "挑剔", - "指甲", - "指头", - "拾掇", - "拳头", - "拨弄", - "招牌", - "招呼", - "抬举", - "护士", - "折腾", - "扫帚", - "打量", - "打算", - "打点", - "打扮", - "打听", - "打发", - "扎实", - "扁担", - "戒指", - "懒得", - "意识", - "意思", - "情形", - "悟性", - "怪物", - "思量", - "怎么", - "念头", - "念叨", - "快活", - "忙活", - "志气", - "心思", - "得罪", - "张罗", - "弟兄", - "开通", - "应酬", - "庄稼", - "干事", - "帮手", - "帐篷", - "希罕", - "师父", - "师傅", - "巴结", - "巴掌", - "差事", - "工夫", - "岁数", - "屁股", - "尾巴", - "少爷", - "小气", - "小伙", - "将就", - "对头", - "对付", - "寡妇", - "家伙", - "客气", - "实在", - "官司", - "学问", - "学生", - "字号", - "嫁妆", - "媳妇", - "媒人", - "婆家", - "娘家", - "委屈", - "姑娘", - "姐夫", - "妯娌", - "妥当", - "妖精", - "奴才", - "女婿", - "头发", - "太阳", - "大爷", - "大方", - "大意", - "大夫", - "多少", - "多么", - "外甥", - "壮实", - "地道", - "地方", - "在乎", - "困难", - "嘴巴", - "嘱咐", - "嘟囔", - "嘀咕", - "喜欢", - "喇嘛", - "喇叭", - "商量", - "唾沫", - "哑巴", - "哈欠", - "哆嗦", - "咳嗽", - "和尚", - "告诉", - "告示", - "含糊", - "吓唬", - "后头", - "名字", - "名堂", - "合同", - "吆喝", - "叫唤", - "口袋", - "厚道", - "厉害", - "千斤", - "包袱", - "包涵", - "匀称", - "勤快", - "动静", - "动弹", - "功夫", - "力气", - "前头", - "刺猬", - "刺激", - "别扭", - "利落", - "利索", - "利害", - "分析", - "出息", - "凑合", - "凉快", - "冷战", - "冤枉", - "冒失", - "养活", - "关系", - "先生", - "兄弟", - "便宜", - "使唤", - "佩服", - "作坊", - "体面", - "位置", - "似的", - "伙计", - "休息", - "什么", - "人家", - "亲戚", - "亲家", - "交情", - "云彩", - "事情", - "买卖", - "主意", - "丫头", - "丧气", - "两口", - "东西", - "东家", - "世故", - "不由", - "不在", - "下水", - "下巴", - "上头", - "上司", - "丈夫", - "丈人", - "一辈", - "那个", - "菩萨", - "父亲", - "母亲", - "咕噜", - "邋遢", - "费用", - "冤家", - "甜头", - "介绍", - "荒唐", - "大人", - "泥鳅", - "幸福", - "熟悉", - "计划", - "扑腾", - "蜡烛", - "姥爷", - "照顾", - "喉咙", - "吉他", - "弄堂", - "蚂蚱", - "凤凰", - "拖沓", - "寒碜", - "糟蹋", - "倒腾", - "报复", - "逻辑", - "盘缠", - "喽啰", - "牢骚", - "咖喱", - 
"扫把", - "惦记", - } - self.must_not_neural_tone_words = { - "男子", - "女子", - "分子", - "原子", - "量子", - "莲子", - "石子", - "瓜子", - "电子", - "人人", - "虎虎", - } - self.punc = ":,;。?!“”‘’':,;.?!" - - # the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041 - # e.g. - # word: "家里" - # pos: "s" - # finals: ['ia1', 'i3'] - def _neural_sandhi(self, word: str, pos: str, finals: List[str]) -> List[str]: - # reduplication words for n. and v. e.g. 奶奶, 试试, 旺旺 - for j, item in enumerate(word): - if ( - j - 1 >= 0 - and item == word[j - 1] - and pos[0] in {"n", "v", "a"} - and word not in self.must_not_neural_tone_words - ): - finals[j] = finals[j][:-1] + "5" - ge_idx = word.find("个") - if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶": - finals[-1] = finals[-1][:-1] + "5" - elif len(word) >= 1 and word[-1] in "的地得": - finals[-1] = finals[-1][:-1] + "5" - # e.g. 走了, 看着, 去过 - # elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}: - # finals[-1] = finals[-1][:-1] + "5" - elif ( - len(word) > 1 - and word[-1] in "们子" - and pos in {"r", "n"} - and word not in self.must_not_neural_tone_words - ): - finals[-1] = finals[-1][:-1] + "5" - # e.g. 桌上, 地下, 家里 - elif len(word) > 1 and word[-1] in "上下里" and pos in {"s", "l", "f"}: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 上来, 下去 - elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开": - finals[-1] = finals[-1][:-1] + "5" - # 个做量词 - elif ( - ge_idx >= 1 - and (word[ge_idx - 1].isnumeric() or word[ge_idx - 1] in "几有两半多各整每做是") - ) or word == "个": - finals[ge_idx] = finals[ge_idx][:-1] + "5" - else: - if ( - word in self.must_neural_tone_words - or word[-2:] in self.must_neural_tone_words - ): - finals[-1] = finals[-1][:-1] + "5" - - word_list = self._split_word(word) - finals_list = [finals[: len(word_list[0])], finals[len(word_list[0]) :]] - for i, word in enumerate(word_list): - # conventional neural in Chinese - if ( - word in self.must_neural_tone_words - or word[-2:] in self.must_neural_tone_words - ): - finals_list[i][-1] = finals_list[i][-1][:-1] + "5" - finals = sum(finals_list, []) - return finals - - def _bu_sandhi(self, word: str, finals: List[str]) -> List[str]: - # e.g. 看不懂 - if len(word) == 3 and word[1] == "不": - finals[1] = finals[1][:-1] + "5" - else: - for i, char in enumerate(word): - # "不" before tone4 should be bu2, e.g. 不怕 - if char == "不" and i + 1 < len(word) and finals[i + 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - return finals - - def _yi_sandhi(self, word: str, finals: List[str]) -> List[str]: - # "一" in number sequences, e.g. 一零零, 二一零 - if word.find("一") != -1 and all( - [item.isnumeric() for item in word if item != "一"] - ): - return finals - # "一" between reduplication words should be yi5, e.g. 看一看 - elif len(word) == 3 and word[1] == "一" and word[0] == word[-1]: - finals[1] = finals[1][:-1] + "5" - # when "一" is ordinal word, it should be yi1 - elif word.startswith("第一"): - finals[1] = finals[1][:-1] + "1" - else: - for i, char in enumerate(word): - if char == "一" and i + 1 < len(word): - # "一" before tone4 should be yi2, e.g. 一段 - if finals[i + 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - # "一" before non-tone4 should be yi4, e.g. 
一天 - else: - # "一" 后面如果是标点,还读一声 - if word[i + 1] not in self.punc: - finals[i] = finals[i][:-1] + "4" - return finals - - def _split_word(self, word: str) -> List[str]: - word_list = jieba.cut_for_search(word) - word_list = sorted(word_list, key=lambda i: len(i), reverse=False) - first_subword = word_list[0] - first_begin_idx = word.find(first_subword) - if first_begin_idx == 0: - second_subword = word[len(first_subword) :] - new_word_list = [first_subword, second_subword] - else: - second_subword = word[: -len(first_subword)] - new_word_list = [second_subword, first_subword] - return new_word_list - - def _three_sandhi(self, word: str, finals: List[str]) -> List[str]: - if len(word) == 2 and self._all_tone_three(finals): - finals[0] = finals[0][:-1] + "2" - elif len(word) == 3: - word_list = self._split_word(word) - if self._all_tone_three(finals): - # disyllabic + monosyllabic, e.g. 蒙古/包 - if len(word_list[0]) == 2: - finals[0] = finals[0][:-1] + "2" - finals[1] = finals[1][:-1] + "2" - # monosyllabic + disyllabic, e.g. 纸/老虎 - elif len(word_list[0]) == 1: - finals[1] = finals[1][:-1] + "2" - else: - finals_list = [finals[: len(word_list[0])], finals[len(word_list[0]) :]] - if len(finals_list) == 2: - for i, sub in enumerate(finals_list): - # e.g. 所有/人 - if self._all_tone_three(sub) and len(sub) == 2: - finals_list[i][0] = finals_list[i][0][:-1] + "2" - # e.g. 好/喜欢 - elif ( - i == 1 - and not self._all_tone_three(sub) - and finals_list[i][0][-1] == "3" - and finals_list[0][-1][-1] == "3" - ): - finals_list[0][-1] = finals_list[0][-1][:-1] + "2" - finals = sum(finals_list, []) - # split idiom into two words who's length is 2 - elif len(word) == 4: - finals_list = [finals[:2], finals[2:]] - finals = [] - for sub in finals_list: - if self._all_tone_three(sub): - sub[0] = sub[0][:-1] + "2" - finals += sub - - return finals - - def _all_tone_three(self, finals: List[str]) -> bool: - return all(x[-1] == "3" for x in finals) - - # merge "不" and the word behind it - # if don't merge, "不" sometimes appears alone according to jieba, which may occur sandhi error - def _merge_bu(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - last_word = "" - for word, pos in seg: - if last_word == "不": - word = last_word + word - if word != "不": - new_seg.append((word, pos)) - last_word = word[:] - if last_word == "不": - new_seg.append((last_word, "d")) - last_word = "" - return new_seg - - # function 1: merge "一" and reduplication words in it's left and right, e.g. "听","一","听" ->"听一听" - # function 2: merge single "一" and the word behind it - # if don't merge, "一" sometimes appears alone according to jieba, which may occur sandhi error - # e.g. 
- # input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')] - # output seg: [['听一听', 'v']] - def _merge_yi(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - # function 1 - for i, (word, pos) in enumerate(seg): - if ( - i - 1 >= 0 - and word == "一" - and i + 1 < len(seg) - and seg[i - 1][0] == seg[i + 1][0] - and seg[i - 1][1] == "v" - ): - new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0] - else: - if ( - i - 2 >= 0 - and seg[i - 1][0] == "一" - and seg[i - 2][0] == word - and pos == "v" - ): - continue - else: - new_seg.append([word, pos]) - seg = new_seg - new_seg = [] - # function 2 - for i, (word, pos) in enumerate(seg): - if new_seg and new_seg[-1][0] == "一": - new_seg[-1][0] = new_seg[-1][0] + word - else: - new_seg.append([word, pos]) - return new_seg - - # the first and the second words are all_tone_three - def _merge_continuous_three_tones( - self, seg: List[Tuple[str, str]] - ) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin(word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if ( - i - 1 >= 0 - and self._all_tone_three(sub_finals_list[i - 1]) - and self._all_tone_three(sub_finals_list[i]) - and not merge_last[i - 1] - ): - # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi - if ( - not self._is_reduplication(seg[i - 1][0]) - and len(seg[i - 1][0]) + len(seg[i][0]) <= 3 - ): - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - - return new_seg - - def _is_reduplication(self, word: str) -> bool: - return len(word) == 2 and word[0] == word[1] - - # the last char of first word and the first char of second word is tone_three - def _merge_continuous_three_tones_2( - self, seg: List[Tuple[str, str]] - ) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin(word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if ( - i - 1 >= 0 - and sub_finals_list[i - 1][-1][-1] == "3" - and sub_finals_list[i][0][-1] == "3" - and not merge_last[i - 1] - ): - # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi - if ( - not self._is_reduplication(seg[i - 1][0]) - and len(seg[i - 1][0]) + len(seg[i][0]) <= 3 - ): - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_er(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and word == "儿" and seg[i - 1][0] != "#": - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_reduplication(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if new_seg and word == new_seg[-1][0]: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def pre_merge_for_modify(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - seg = self._merge_bu(seg) - try: - seg = self._merge_yi(seg) - except: - print("_merge_yi failed") - seg = 
self._merge_reduplication(seg) - seg = self._merge_continuous_three_tones(seg) - seg = self._merge_continuous_three_tones_2(seg) - seg = self._merge_er(seg) - return seg - - def modified_tone(self, word: str, pos: str, finals: List[str]) -> List[str]: - finals = self._bu_sandhi(word, finals) - finals = self._yi_sandhi(word, finals) - finals = self._neural_sandhi(word, pos, finals) - finals = self._three_sandhi(word, finals) - return finals diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Crack No Cd De Age Of Empires 3 The Warchiefs How to Get the Latest Version for Free.md b/spaces/usbethFlerru/sovits-modelsV2/example/Crack No Cd De Age Of Empires 3 The Warchiefs How to Get the Latest Version for Free.md deleted file mode 100644 index 13baceefbf48bb608d0736483f2e9de68f48d7fb..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Crack No Cd De Age Of Empires 3 The Warchiefs How to Get the Latest Version for Free.md +++ /dev/null @@ -1,7 +0,0 @@ - -


      -


      -

      Crack No Cd De Age Of Empires 3 The Warchiefs





      -


      -
      -
      \ No newline at end of file diff --git a/spaces/vagmi/isai/MERT-v1-95M/README.md b/spaces/vagmi/isai/MERT-v1-95M/README.md deleted file mode 100644 index 1236769b2c602495af55162cb11be75c1ef1f102..0000000000000000000000000000000000000000 --- a/spaces/vagmi/isai/MERT-v1-95M/README.md +++ /dev/null @@ -1,121 +0,0 @@ ---- -license: mit -inference: false -tags: -- music ---- - -# Introduction to our series work - -The development log of our Music Audio Pre-training (m-a-p) model family: -- 17/03/2023: we release two advanced music understanding models, [MERT-v1-95M](https://huggingface.co/m-a-p/MERT-v1-95M) and [MERT-v1-330M](https://huggingface.co/m-a-p/MERT-v1-330M) , trained with new paradigm and dataset. They outperform the previous models and can better generalize to more tasks. -- 14/03/2023: we retrained the MERT-v0 model with open-source-only music dataset [MERT-v0-public](https://huggingface.co/m-a-p/MERT-v0-public) -- 29/12/2022: a music understanding model [MERT-v0](https://huggingface.co/m-a-p/MERT-v0) trained with **MLM** paradigm, which performs better at downstream tasks. -- 29/10/2022: a pre-trained MIR model [music2vec](https://huggingface.co/m-a-p/music2vec-v1) trained with **BYOL** paradigm. - - - -Here is a table for quick model pick-up: - -| Name | Pre-train Paradigm | Training Data (hour) | Pre-train Context (second) | Model Size | Transformer Layer-Dimension | Feature Rate | Sample Rate | Release Date | -| ------------------------------------------------------------ | ------------------ | -------------------- | ---------------------------- | ---------- | --------------------------- | ------------ | ----------- | ------------ | -| [MERT-v1-330M](https://huggingface.co/m-a-p/MERT-v1-330M) | MLM | 160K | 5 | 330M | 24-1024 | 75 Hz | 24K Hz | 17/03/2023 | -| [MERT-v1-95M](https://huggingface.co/m-a-p/MERT-v1-95M) | MLM | 20K | 5 | 95M | 12-768 | 75 Hz | 24K Hz | 17/03/2023 | -| [MERT-v0-public](https://huggingface.co/m-a-p/MERT-v0-public) | MLM | 900 | 5 | 95M | 12-768 | 50 Hz | 16K Hz | 14/03/2023 | -| [MERT-v0](https://huggingface.co/m-a-p/MERT-v0) | MLM | 1000 | 5 | 95 M | 12-768 | 50 Hz | 16K Hz | 29/12/2022 | -| [music2vec-v1](https://huggingface.co/m-a-p/music2vec-v1) | BYOL | 1000 | 30 | 95 M | 12-768 | 50 Hz | 16K Hz | 30/10/2022 | - -## Explanation - -The m-a-p models share the similar model architecture and the most distinguished difference is the paradigm in used pre-training. Other than that, there are several nuance technical configuration needs to know before using: - -- **Model Size**: the number of parameters that would be loaded to memory. Please select the appropriate size fitting your hardware. -- **Transformer Layer-Dimension**: The number of transformer layers and the corresponding feature dimensions can be outputted from our model. This is marked out because features extracted by **different layers could have various performance depending on tasks**. -- **Feature Rate**: Given a 1-second audio input, the number of features output by the model. -- **Sample Rate**: The frequency of audio that the model is trained with. - - - -# Introduction to MERT-v1 - -Compared to MERT-v0, we introduce multiple new things in the MERT-v1 pre-training: - -- Change the pseudo labels to 8 codebooks from [encodec](https://github.com/facebookresearch/encodec), which potentially has higher quality and empower our model to support music generation. -- MLM prediction with in-batch noise mixture. -- Train with higher audio frequency (24K Hz). 
-- Train with more audio data (up to 160 thousands of hours). -- More available model sizes 95M and 330M. - - - -More details will be written in our coming-soon paper. - - - -# Model Usage - -```python -# from transformers import Wav2Vec2Processor -from transformers import Wav2Vec2FeatureExtractor -from transformers import AutoModel -import torch -from torch import nn -import torchaudio.transforms as T -from datasets import load_dataset - - -# loading our model weights -model = AutoModel.from_pretrained("m-a-p/MERT-v1-95M", trust_remote_code=True) -# loading the corresponding preprocessor config -processor = Wav2Vec2FeatureExtractor.from_pretrained("m-a-p/MERT-v1-95M",trust_remote_code=True) - -# load demo audio and set processor -dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") -dataset = dataset.sort("id") -sampling_rate = dataset.features["audio"].sampling_rate - -resample_rate = processor.sampling_rate -# make sure the sample_rate aligned -if resample_rate != sampling_rate: - print(f'setting rate from {sampling_rate} to {resample_rate}') - resampler = T.Resample(sampling_rate, resample_rate) -else: - resampler = None - -# audio file is decoded on the fly -if resampler is None: - input_audio = dataset[0]["audio"]["array"] -else: - input_audio = resampler(torch.from_numpy(dataset[0]["audio"]["array"])) - -inputs = processor(input_audio, sampling_rate=resample_rate, return_tensors="pt") -with torch.no_grad(): - outputs = model(**inputs, output_hidden_states=True) - -# take a look at the output shape, there are 13 layers of representation -# each layer performs differently in different downstream tasks, you should choose empirically -all_layer_hidden_states = torch.stack(outputs.hidden_states).squeeze() -print(all_layer_hidden_states.shape) # [13 layer, Time steps, 768 feature_dim] - -# for utterance level classification tasks, you can simply reduce the representation in time -time_reduced_hidden_states = all_layer_hidden_states.mean(-2) -print(time_reduced_hidden_states.shape) # [13, 768] - -# you can even use a learnable weighted average representation -aggregator = nn.Conv1d(in_channels=13, out_channels=1, kernel_size=1) -weighted_avg_hidden_states = aggregator(time_reduced_hidden_states.unsqueeze(0)).squeeze() -print(weighted_avg_hidden_states.shape) # [768] -``` - - - -# Citation - -```shell -@article{li2022large, - title={Large-Scale Pretrained Model for Self-Supervised Music Audio Representation Learning}, - author={Li, Yizhi and Yuan, Ruibin and Zhang, Ge and Ma, Yinghao and Lin, Chenghua and Chen, Xingran and Ragni, Anton and Yin, Hanzhi and Hu, Zhijie and He, Haoyu and others}, - year={2022} -} - -``` \ No newline at end of file diff --git a/spaces/venz/AW-06-SL-AI-Image-Music-Video-UI-UX-URL/README.md b/spaces/venz/AW-06-SL-AI-Image-Music-Video-UI-UX-URL/README.md deleted file mode 100644 index 5c8263a0d4cf200bf09c7a07d3244a40e57d018b..0000000000000000000000000000000000000000 --- a/spaces/venz/AW-06-SL-AI-Image-Music-Video-UI-UX-URL/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: 06 SL AI Image Music Video UI UX URL -emoji: 📊 -colorFrom: pink -colorTo: red -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/vinay123/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/bertwarper.py 
b/spaces/vinay123/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/bertwarper.py deleted file mode 100644 index f0cf9779b270e1aead32845006f8b881fcba37ad..0000000000000000000000000000000000000000 --- a/spaces/vinay123/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/bertwarper.py +++ /dev/null @@ -1,273 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ - -import torch -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -from torch import Tensor, nn -from torchvision.ops.boxes import nms -from transformers import BertConfig, BertModel, BertPreTrainedModel -from transformers.modeling_outputs import BaseModelOutputWithPoolingAndCrossAttentions - - -class BertModelWarper(nn.Module): - def __init__(self, bert_model): - super().__init__() - # self.bert = bert_modelc - - self.config = bert_model.config - self.embeddings = bert_model.embeddings - self.encoder = bert_model.encoder - self.pooler = bert_model.pooler - - self.get_extended_attention_mask = bert_model.get_extended_attention_mask - self.invert_attention_mask = bert_model.invert_attention_mask - self.get_head_mask = bert_model.get_head_mask - - def forward( - self, - input_ids=None, - attention_mask=None, - token_type_ids=None, - position_ids=None, - head_mask=None, - inputs_embeds=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_values=None, - use_cache=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - r""" - encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if - the model is configured as a decoder. - encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in - the cross-attention if the model is configured as a decoder. Mask values selected in ``[0, 1]``: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): - Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. - - If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids` - (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)` - instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`. - use_cache (:obj:`bool`, `optional`): - If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up - decoding (see :obj:`past_key_values`). 
- """ - output_attentions = ( - output_attentions if output_attentions is not None else self.config.output_attentions - ) - output_hidden_states = ( - output_hidden_states - if output_hidden_states is not None - else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if self.config.is_decoder: - use_cache = use_cache if use_cache is not None else self.config.use_cache - else: - use_cache = False - - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - input_shape = input_ids.size() - batch_size, seq_length = input_shape - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - batch_size, seq_length = input_shape - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - device = input_ids.device if input_ids is not None else inputs_embeds.device - - # past_key_values_length - past_key_values_length = ( - past_key_values[0][0].shape[2] if past_key_values is not None else 0 - ) - - if attention_mask is None: - attention_mask = torch.ones( - ((batch_size, seq_length + past_key_values_length)), device=device - ) - if token_type_ids is None: - token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device) - - # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] - # ourselves in which case we just need to make it broadcastable to all heads. - extended_attention_mask: torch.Tensor = self.get_extended_attention_mask( - attention_mask, input_shape, device - ) - - # If a 2D or 3D attention mask is provided for the cross-attention - # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] - if self.config.is_decoder and encoder_hidden_states is not None: - encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size() - encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) - if encoder_attention_mask is None: - encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device) - encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) - else: - encoder_extended_attention_mask = None - # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO': - # import ipdb; ipdb.set_trace() - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - - embedding_output = self.embeddings( - input_ids=input_ids, - position_ids=position_ids, - token_type_ids=token_type_ids, - inputs_embeds=inputs_embeds, - past_key_values_length=past_key_values_length, - ) - - encoder_outputs = self.encoder( - embedding_output, - attention_mask=extended_attention_mask, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_extended_attention_mask, - past_key_values=past_key_values, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - sequence_output = encoder_outputs[0] - pooled_output = self.pooler(sequence_output) if self.pooler is not None else None - - if not return_dict: - 
return (sequence_output, pooled_output) + encoder_outputs[1:] - - return BaseModelOutputWithPoolingAndCrossAttentions( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - past_key_values=encoder_outputs.past_key_values, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - cross_attentions=encoder_outputs.cross_attentions, - ) - - -class TextEncoderShell(nn.Module): - def __init__(self, text_encoder): - super().__init__() - self.text_encoder = text_encoder - self.config = self.text_encoder.config - - def forward(self, **kw): - # feed into text encoder - return self.text_encoder(**kw) - - -def generate_masks_with_special_tokens(tokenized, special_tokens_list, tokenizer): - """Generate attention mask between each pair of special tokens - Args: - input_ids (torch.Tensor): input ids. Shape: [bs, num_token] - special_tokens_mask (list): special tokens mask. - Returns: - torch.Tensor: attention mask between each special tokens. - """ - input_ids = tokenized["input_ids"] - bs, num_token = input_ids.shape - # special_tokens_mask: bs, num_token. 1 for special tokens. 0 for normal tokens - special_tokens_mask = torch.zeros((bs, num_token), device=input_ids.device).bool() - for special_token in special_tokens_list: - special_tokens_mask |= input_ids == special_token - - # idxs: each row is a list of indices of special tokens - idxs = torch.nonzero(special_tokens_mask) - - # generate attention mask and positional ids - attention_mask = ( - torch.eye(num_token, device=input_ids.device).bool().unsqueeze(0).repeat(bs, 1, 1) - ) - position_ids = torch.zeros((bs, num_token), device=input_ids.device) - previous_col = 0 - for i in range(idxs.shape[0]): - row, col = idxs[i] - if (col == 0) or (col == num_token - 1): - attention_mask[row, col, col] = True - position_ids[row, col] = 0 - else: - attention_mask[row, previous_col + 1 : col + 1, previous_col + 1 : col + 1] = True - position_ids[row, previous_col + 1 : col + 1] = torch.arange( - 0, col - previous_col, device=input_ids.device - ) - - previous_col = col - - # # padding mask - # padding_mask = tokenized['attention_mask'] - # attention_mask = attention_mask & padding_mask.unsqueeze(1).bool() & padding_mask.unsqueeze(2).bool() - - return attention_mask, position_ids.to(torch.long) - - -def generate_masks_with_special_tokens_and_transfer_map(tokenized, special_tokens_list, tokenizer): - """Generate attention mask between each pair of special tokens - Args: - input_ids (torch.Tensor): input ids. Shape: [bs, num_token] - special_tokens_mask (list): special tokens mask. - Returns: - torch.Tensor: attention mask between each special tokens. - """ - input_ids = tokenized["input_ids"] - bs, num_token = input_ids.shape - # special_tokens_mask: bs, num_token. 1 for special tokens. 
0 for normal tokens - special_tokens_mask = torch.zeros((bs, num_token), device=input_ids.device).bool() - for special_token in special_tokens_list: - special_tokens_mask |= input_ids == special_token - - # idxs: each row is a list of indices of special tokens - idxs = torch.nonzero(special_tokens_mask) - - # generate attention mask and positional ids - attention_mask = ( - torch.eye(num_token, device=input_ids.device).bool().unsqueeze(0).repeat(bs, 1, 1) - ) - position_ids = torch.zeros((bs, num_token), device=input_ids.device) - cate_to_token_mask_list = [[] for _ in range(bs)] - previous_col = 0 - for i in range(idxs.shape[0]): - row, col = idxs[i] - if (col == 0) or (col == num_token - 1): - attention_mask[row, col, col] = True - position_ids[row, col] = 0 - else: - attention_mask[row, previous_col + 1 : col + 1, previous_col + 1 : col + 1] = True - position_ids[row, previous_col + 1 : col + 1] = torch.arange( - 0, col - previous_col, device=input_ids.device - ) - c2t_maski = torch.zeros((num_token), device=input_ids.device).bool() - c2t_maski[previous_col + 1 : col] = True - cate_to_token_mask_list[row].append(c2t_maski) - previous_col = col - - cate_to_token_mask_list = [ - torch.stack(cate_to_token_mask_listi, dim=0) - for cate_to_token_mask_listi in cate_to_token_mask_list - ] - - # # padding mask - # padding_mask = tokenized['attention_mask'] - # attention_mask = attention_mask & padding_mask.unsqueeze(1).bool() & padding_mask.unsqueeze(2).bool() - - return attention_mask, position_ids.to(torch.long), cate_to_token_mask_list diff --git a/spaces/wangrongsheng/ChatImprovement/toolbox.py b/spaces/wangrongsheng/ChatImprovement/toolbox.py deleted file mode 100644 index d57fee63275186c6eb63d44eef22f3537be1b5cf..0000000000000000000000000000000000000000 --- a/spaces/wangrongsheng/ChatImprovement/toolbox.py +++ /dev/null @@ -1,140 +0,0 @@ -import markdown, mdtex2html, threading -from show_math import convert as convert_math -from functools import wraps - -def predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, temperature, history=[]): - """ - 调用简单的predict_no_ui接口,但是依然保留了些许界面心跳功能,当对话太长时,会自动采用二分法截断 - """ - import time - try: from config_private import TIMEOUT_SECONDS, MAX_RETRY - except: from config import TIMEOUT_SECONDS, MAX_RETRY - from predict import predict_no_ui - mutable = [None, ''] - def mt(i_say, history): - while True: - try: - mutable[0] = predict_no_ui(inputs=i_say, top_p=top_p, temperature=temperature, history=history) - break - except ConnectionAbortedError as e: - if len(history) > 0: - history = [his[len(his)//2:] for his in history if his is not None] - mutable[1] = 'Warning! History conversation is too long, cut into half. ' - else: - i_say = i_say[:len(i_say)//2] - mutable[1] = 'Warning! Input file is too long, cut into half. 
' - except TimeoutError as e: - mutable[0] = '[Local Message] Failed with timeout' - - thread_name = threading.Thread(target=mt, args=(i_say, history)); thread_name.start() - cnt = 0 - while thread_name.is_alive(): - cnt += 1 - chatbot[-1] = (i_say_show_user, f"[Local Message] {mutable[1]}waiting gpt response {cnt}/{TIMEOUT_SECONDS*2*(MAX_RETRY+1)}"+''.join(['.']*(cnt%4))) - yield chatbot, history, '正常' - time.sleep(1) - gpt_say = mutable[0] - return gpt_say - -def write_results_to_file(history, file_name=None): - """ - 将对话记录history以Markdown格式写入文件中。如果没有指定文件名,则使用当前时间生成文件名。 - """ - import os, time - if file_name is None: - file_name = time.strftime("chatGPT分析报告%Y-%m-%d-%H-%M-%S", time.localtime()) + '.md' - os.makedirs('./gpt_log/', exist_ok=True) - with open(f'./gpt_log/{file_name}', 'w') as f: - f.write('# chatGPT 分析报告\n') - for i, content in enumerate(history): - if i%2==0: f.write('## ') - f.write(content) - f.write('\n\n') - res = '以上材料已经被写入' + os.path.abspath(f'./gpt_log/{file_name}') - print(res) - return res - -def regular_txt_to_markdown(text): - """ - 将普通文本转换为Markdown格式的文本。 - """ - text = text.replace('\n', '\n\n') - text = text.replace('\n\n\n', '\n\n') - text = text.replace('\n\n\n', '\n\n') - return text - -def CatchException(f): - """ - 装饰器函数,捕捉函数f中的异常并封装到一个生成器中返回,并显示到聊天当中。 - """ - @wraps(f) - def decorated(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT): - try: - yield from f(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT) - except Exception as e: - import traceback - from check_proxy import check_proxy - try: from config_private import proxies - except: from config import proxies - tb_str = regular_txt_to_markdown(traceback.format_exc()) - chatbot[-1] = (chatbot[-1][0], f"[Local Message] 实验性函数调用出错: \n\n {tb_str} \n\n 当前代理可用性: \n\n {check_proxy(proxies)}") - yield chatbot, history, f'异常 {e}' - return decorated - -def report_execption(chatbot, history, a, b): - """ - 向chatbot中添加错误信息 - """ - chatbot.append((a, b)) - history.append(a); history.append(b) - -def text_divide_paragraph(text): - """ - 将文本按照段落分隔符分割开,生成带有段落标签的HTML代码。 - """ - if '```' in text: - # careful input - return text - else: - # wtf input - lines = text.split("\n") - for i, line in enumerate(lines): - if i!=0: lines[i] = "

      "+lines[i].replace(" ", " ")+"

      " - text = "".join(lines) - return text - -def markdown_convertion(txt): - """ - 将Markdown格式的文本转换为HTML格式。如果包含数学公式,则先将公式转换为HTML格式。 - """ - if ('$' in txt) and ('```' not in txt): - return markdown.markdown(txt,extensions=['fenced_code','tables']) + '
<br><br>
      ' + \ - markdown.markdown(convert_math(txt, splitParagraphs=False),extensions=['fenced_code','tables']) - else: - return markdown.markdown(txt,extensions=['fenced_code','tables']) - - -def format_io(self, y): - """ - 将输入和输出解析为HTML格式。将y中最后一项的输入部分段落化,并将输出部分的Markdown和数学公式转换为HTML格式。 - """ - if y is None: return [] - i_ask, gpt_reply = y[-1] - i_ask = text_divide_paragraph(i_ask) # 输入部分太自由,预处理一波 - y[-1] = ( - None if i_ask is None else markdown.markdown(i_ask, extensions=['fenced_code','tables']), - None if gpt_reply is None else markdown_convertion(gpt_reply) - ) - return y - - -def find_free_port(): - """ - 返回当前系统中可用的未使用端口。 - """ - import socket - from contextlib import closing - with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s: - s.bind(('', 0)) - s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) - return s.getsockname()[1] \ No newline at end of file diff --git a/spaces/webis-huggingface-workshop/sebastian_sentiments_demo/app.py b/spaces/webis-huggingface-workshop/sebastian_sentiments_demo/app.py deleted file mode 100644 index c7382bd6b5a13016635c0b55e277c00fd4674cb7..0000000000000000000000000000000000000000 --- a/spaces/webis-huggingface-workshop/sebastian_sentiments_demo/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("huggingface/distilbert-base-uncased-finetuned-sst-2-english").launch() \ No newline at end of file diff --git a/spaces/weiren119/AudiogramDigitization/src/interfaces.py b/spaces/weiren119/AudiogramDigitization/src/interfaces.py deleted file mode 100644 index 720c05c6997439cc8f2ba2fa6f0ae8991416d839..0000000000000000000000000000000000000000 --- a/spaces/weiren119/AudiogramDigitization/src/interfaces.py +++ /dev/null @@ -1,132 +0,0 @@ -#!/usr/bin/env python3 -""" -Copyright (c) 2020, Carleton University Biomedical Informatics Collaboratory - -This source code is licensed under the MIT license found in the -LICENSE file in the root directory of this source tree. -""" - -from typing import List, Optional -from typing_extensions import TypedDict - -class ThresholdDict(TypedDict): - """Represents a hearing threshold (measurement). - """ - frequency: int - threshold: int - ear: str - masking: bool - conduction: str - measurementType: str - response: bool - -class BoundingBox(TypedDict): - """Represents the dictionary holding the minimum information - for a bounding box. - """ - x: int - y: int - width: int - height: int - -class AudiogramDict(TypedDict): - """Represents the dictionary for an audiogram as extracted - by the Yolo model. - """ - boundingBox: BoundingBox - confidence: Optional[float] - -class LabelDict(TypedDict): - """Represents the dictionary for a label as extracted - by the Yolo model. - """ - boundingBox: BoundingBox - value: str - confidence: Optional[float] - -class SymbolDict(TypedDict): - """Represents the dictionary for a symbol as extracted - by the Yolo model. - """ - boundingBox: BoundingBox - measurementType: str - confidence: Optional[float] - -class CornerDict(TypedDict): - """Represents a corner, as annotated. - """ - frequency: int - threshold: int - position: TypedDict("PositionDict", { "horizontal": str, "vertical": str }) - x: float - y: float - -class AudiogramAnnotationDict(TypedDict): - """Represents an audiogram as structured within an annotation. 
- """ - confidence: Optional[float] - correctionAngle: Optional[float] - boundingBox: BoundingBox - corners: List[CornerDict] - labels: List[LabelDict] - symbols: List[SymbolDict] - -class ClaimantProfileDict(TypedDict): - """Profile of the claimant. - """ - age: int - exposure: List[dict] # out of scope for me - thresholds: List[ThresholdDict] - -class CalculationsDict(TypedDict): - """Values calculated for the claim. - """ - bestEarPta: float - correctedBestEarPta: float - worstEarPta: float - correctedWorstEarPta: float - bestEarRabinowitzNotchIndex: Optional[float] - worstEarRabinowitzNotchIndex: Optional[float] - -class HearingLossCriteriaDict(TypedDict): - """Information related to the hearing loss for the claim. - Includes different calculated values, etc. - """ - preliminaryDecisionAvailable: bool - calculations: CalculationsDict - eligible: bool - comment: str - awardPercentage: float - reviewNeeded: bool - -class MeasurementType(TypedDict): - """Type of measurement. - """ - conduction: str - masking: bool - -class SettingsDict(TypedDict): - """Settings used in computing the eligibility. - """ - left: TypedDict("EarSettings", { - "measurementType": TypedDict("MeasurementType", { - "conduction": str, - "masking": bool - }), - "ptaFrequencies": List[int] - }) - right: TypedDict("EarSettings", { - "measurementType": TypedDict("MeasurementType", { - "conduction": str, - "masking": bool - }), - "ptaFrequencies": List[int] - }) - -class EligibilityDict(TypedDict): - """Eligibility information. - """ - claimantProfile: ClaimantProfileDict - settings: SettingsDict - hearingLossCriteria: HearingLossCriteriaDict - exposureCriteria: dict diff --git a/spaces/xcchen/vits-uma-genshin-honkai/utils.py b/spaces/xcchen/vits-uma-genshin-honkai/utils.py deleted file mode 100644 index ee4b01ddfbe8173965371b29f770f3e87615fe71..0000000000000000000000000000000000000000 --- a/spaces/xcchen/vits-uma-genshin-honkai/utils.py +++ /dev/null @@ -1,225 +0,0 @@ -import os -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -import librosa -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict= {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})" .format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = 
ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_audio_to_torch(full_path, target_sampling_rate): - audio, sampling_rate = librosa.load(full_path, sr=target_sampling_rate, mono=True) - return torch.FloatTensor(audio.astype(np.float32)) - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/xdecoder/Instruct-X-Decoder/xdecoder/modules/attention.py b/spaces/xdecoder/Instruct-X-Decoder/xdecoder/modules/attention.py deleted file mode 100644 index a0eadeee1454cfbea58a96595af7c9e552088c6a..0000000000000000000000000000000000000000 --- a/spaces/xdecoder/Instruct-X-Decoder/xdecoder/modules/attention.py +++ /dev/null @@ -1,489 +0,0 @@ -# Code copy from PyTorch, modified by Xueyan Zou - -import warnings -from typing import Optional, Tuple - -import torch -import torch.nn as nn -from torch import Tensor -from torch.nn.init import constant_, xavier_normal_, xavier_uniform_ -from torch.nn.parameter import Parameter -from torch.overrides import has_torch_function, handle_torch_function -from torch.nn.functional import pad, linear, softmax, dropout - - -def multi_head_attention_forward( - query: Tensor, - key: Tensor, - value: Tensor, - embed_dim_to_check: int, - num_heads: int, - in_proj_weight: Tensor, - in_proj_bias: Tensor, - bias_k: Optional[Tensor], - bias_v: Optional[Tensor], - add_zero_attn: bool, - dropout_p: float, - out_proj_weight: Tensor, - out_proj_bias: Tensor, - training: bool = True, - key_padding_mask: Optional[Tensor] = None, - need_weights: bool = True, - attn_mask: Optional[Tensor] = None, - use_separate_proj_weight: bool = False, - q_proj_weight: Optional[Tensor] = None, - k_proj_weight: Optional[Tensor] = None, - v_proj_weight: Optional[Tensor] = None, - static_k: Optional[Tensor] = None, - static_v: Optional[Tensor] = None, -) -> Tuple[Tensor, Optional[Tensor]]: - r""" - Args: - query, key, value: map a query and a set of key-value pairs to an output. - See "Attention Is All You Need" for more details. - embed_dim_to_check: total dimension of the model. - num_heads: parallel attention heads. - in_proj_weight, in_proj_bias: input projection weight and bias. - bias_k, bias_v: bias of the key and value sequences to be added at dim=0. - add_zero_attn: add a new batch of zeros to the key and - value sequences at dim=1. - dropout_p: probability of an element to be zeroed. - out_proj_weight, out_proj_bias: the output projection weight and bias. - training: apply dropout if is ``True``. - key_padding_mask: if provided, specified padding elements in the key will - be ignored by the attention. This is an binary mask. 
When the value is True, - the corresponding value on the attention layer will be filled with -inf. - need_weights: output attn_output_weights. - attn_mask: 2D or 3D mask that prevents attention to certain positions. A 2D mask will be broadcasted for all - the batches while a 3D mask allows to specify a different mask for the entries of each batch. - use_separate_proj_weight: the function accept the proj. weights for query, key, - and value in different forms. If false, in_proj_weight will be used, which is - a combination of q_proj_weight, k_proj_weight, v_proj_weight. - q_proj_weight, k_proj_weight, v_proj_weight, in_proj_bias: input projection weight and bias. - static_k, static_v: static key and value used for attention operators. - - - Shape: - Inputs: - - query: :math:`(L, N, E)` where L is the target sequence length, N is the batch size, E is - the embedding dimension. - - key: :math:`(S, N, E)`, where S is the source sequence length, N is the batch size, E is - the embedding dimension. - - value: :math:`(S, N, E)` where S is the source sequence length, N is the batch size, E is - the embedding dimension. - - key_padding_mask: :math:`(N, S)` where N is the batch size, S is the source sequence length. - If a ByteTensor is provided, the non-zero positions will be ignored while the zero positions - will be unchanged. If a BoolTensor is provided, the positions with the - value of ``True`` will be ignored while the position with the value of ``False`` will be unchanged. - - attn_mask: 2D mask :math:`(L, S)` where L is the target sequence length, S is the source sequence length. - 3D mask :math:`(N*num_heads, L, S)` where N is the batch size, L is the target sequence length, - S is the source sequence length. attn_mask ensures that position i is allowed to attend the unmasked - positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend - while the zero positions will be unchanged. If a BoolTensor is provided, positions with ``True`` - are not allowed to attend while ``False`` values will be unchanged. If a FloatTensor - is provided, it will be added to the attention weight. - - static_k: :math:`(N*num_heads, S, E/num_heads)`, where S is the source sequence length, - N is the batch size, E is the embedding dimension. E/num_heads is the head dimension. - - static_v: :math:`(N*num_heads, S, E/num_heads)`, where S is the source sequence length, - N is the batch size, E is the embedding dimension. E/num_heads is the head dimension. - - Outputs: - - attn_output: :math:`(L, N, E)` where L is the target sequence length, N is the batch size, - E is the embedding dimension. - - attn_output_weights: :math:`(N, L, S)` where N is the batch size, - L is the target sequence length, S is the source sequence length. 
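For illustration, a minimal sketch of these shape conventions using the upstream `torch.nn.MultiheadAttention` module (which calls a function equivalent to this one); all sizes below are arbitrary:

```python
import torch
import torch.nn as nn

# Arbitrary sizes: target length L, source length S, batch N, embed dim E, heads H.
L, S, N, E, H = 5, 7, 2, 16, 4

mha = nn.MultiheadAttention(embed_dim=E, num_heads=H)  # default layout is (L, N, E)

query = torch.randn(L, N, E)
key = torch.randn(S, N, E)
value = torch.randn(S, N, E)

# Boolean key_padding_mask of shape (N, S): True marks source positions to ignore.
key_padding_mask = torch.zeros(N, S, dtype=torch.bool)
key_padding_mask[:, -1] = True  # pretend the last source token is padding everywhere

attn_output, attn_weights = mha(query, key, value, key_padding_mask=key_padding_mask)

print(attn_output.shape)    # torch.Size([5, 2, 16])  -> (L, N, E)
print(attn_weights.shape)   # torch.Size([2, 5, 7])   -> (N, L, S), averaged over heads
print(attn_weights[..., -1].abs().max())  # ~0.0: padded positions receive no attention
```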
- """ - tens_ops = (query, key, value, in_proj_weight, in_proj_bias, bias_k, bias_v, out_proj_weight, out_proj_bias) - if has_torch_function(tens_ops): - return handle_torch_function( - multi_head_attention_forward, - tens_ops, - query, - key, - value, - embed_dim_to_check, - num_heads, - in_proj_weight, - in_proj_bias, - bias_k, - bias_v, - add_zero_attn, - dropout_p, - out_proj_weight, - out_proj_bias, - training=training, - key_padding_mask=key_padding_mask, - need_weights=need_weights, - attn_mask=attn_mask, - use_separate_proj_weight=use_separate_proj_weight, - q_proj_weight=q_proj_weight, - k_proj_weight=k_proj_weight, - v_proj_weight=v_proj_weight, - static_k=static_k, - static_v=static_v, - ) - tgt_len, bsz, embed_dim = query.size() - assert embed_dim == embed_dim_to_check - # allow MHA to have different sizes for the feature dimension - assert key.size(0) == value.size(0) and key.size(1) == value.size(1) - - head_dim = embed_dim // num_heads - assert head_dim * num_heads == embed_dim, "embed_dim must be divisible by num_heads" - scaling = float(head_dim) ** -0.5 - - if not use_separate_proj_weight: - if (query is key or torch.equal(query, key)) and (key is value or torch.equal(key, value)): - # self-attention - q, k, v = linear(query, in_proj_weight, in_proj_bias).chunk(3, dim=-1) - - elif key is value or torch.equal(key, value): - # encoder-decoder attention - # This is inline in_proj function with in_proj_weight and in_proj_bias - _b = in_proj_bias - _start = 0 - _end = embed_dim - _w = in_proj_weight[_start:_end, :] - if _b is not None: - _b = _b[_start:_end] - q = linear(query, _w, _b) - - if key is None: - assert value is None - k = None - v = None - else: - - # This is inline in_proj function with in_proj_weight and in_proj_bias - _b = in_proj_bias - _start = embed_dim - _end = None - _w = in_proj_weight[_start:, :] - if _b is not None: - _b = _b[_start:] - k, v = linear(key, _w, _b).chunk(2, dim=-1) - - else: - # This is inline in_proj function with in_proj_weight and in_proj_bias - _b = in_proj_bias - _start = 0 - _end = embed_dim - _w = in_proj_weight[_start:_end, :] - if _b is not None: - _b = _b[_start:_end] - q = linear(query, _w, _b) - - # This is inline in_proj function with in_proj_weight and in_proj_bias - _b = in_proj_bias - _start = embed_dim - _end = embed_dim * 2 - _w = in_proj_weight[_start:_end, :] - if _b is not None: - _b = _b[_start:_end] - k = linear(key, _w, _b) - - # This is inline in_proj function with in_proj_weight and in_proj_bias - _b = in_proj_bias - _start = embed_dim * 2 - _end = None - _w = in_proj_weight[_start:, :] - if _b is not None: - _b = _b[_start:] - v = linear(value, _w, _b) - else: - q_proj_weight_non_opt = torch.jit._unwrap_optional(q_proj_weight) - len1, len2 = q_proj_weight_non_opt.size() - assert len1 == embed_dim and len2 == query.size(-1) - - k_proj_weight_non_opt = torch.jit._unwrap_optional(k_proj_weight) - len1, len2 = k_proj_weight_non_opt.size() - assert len1 == embed_dim and len2 == key.size(-1) - - v_proj_weight_non_opt = torch.jit._unwrap_optional(v_proj_weight) - len1, len2 = v_proj_weight_non_opt.size() - assert len1 == embed_dim and len2 == value.size(-1) - - if in_proj_bias is not None: - q = linear(query, q_proj_weight_non_opt, in_proj_bias[0:embed_dim]) - k = linear(key, k_proj_weight_non_opt, in_proj_bias[embed_dim : (embed_dim * 2)]) - v = linear(value, v_proj_weight_non_opt, in_proj_bias[(embed_dim * 2) :]) - else: - q = linear(query, q_proj_weight_non_opt, in_proj_bias) - k = linear(key, k_proj_weight_non_opt, 
in_proj_bias) - v = linear(value, v_proj_weight_non_opt, in_proj_bias) - q = q * scaling - - if attn_mask is not None: - assert ( - attn_mask.dtype == torch.float32 - or attn_mask.dtype == torch.float64 - or attn_mask.dtype == torch.float16 - or attn_mask.dtype == torch.uint8 - or attn_mask.dtype == torch.bool - ), "Only float, byte, and bool types are supported for attn_mask, not {}".format(attn_mask.dtype) - if attn_mask.dtype == torch.uint8: - warnings.warn("Byte tensor for attn_mask in nn.MultiheadAttention is deprecated. Use bool tensor instead.") - attn_mask = attn_mask.to(torch.bool) - - if attn_mask.dim() == 2: - attn_mask = attn_mask.unsqueeze(0) - if list(attn_mask.size()) != [1, query.size(0), key.size(0)]: - raise RuntimeError("The size of the 2D attn_mask is not correct.") - elif attn_mask.dim() == 3: - if list(attn_mask.size()) != [bsz * num_heads, query.size(0), key.size(0)]: - raise RuntimeError("The size of the 3D attn_mask is not correct.") - else: - raise RuntimeError("attn_mask's dimension {} is not supported".format(attn_mask.dim())) - # attn_mask's dim is 3 now. - - # convert ByteTensor key_padding_mask to bool - if key_padding_mask is not None and key_padding_mask.dtype == torch.uint8: - warnings.warn( - "Byte tensor for key_padding_mask in nn.MultiheadAttention is deprecated. Use bool tensor instead." - ) - key_padding_mask = key_padding_mask.to(torch.bool) - - if bias_k is not None and bias_v is not None: - if static_k is None and static_v is None: - k = torch.cat([k, bias_k.repeat(1, bsz, 1)]) - v = torch.cat([v, bias_v.repeat(1, bsz, 1)]) - if attn_mask is not None: - attn_mask = pad(attn_mask, (0, 1)) - if key_padding_mask is not None: - key_padding_mask = pad(key_padding_mask, (0, 1)) - else: - assert static_k is None, "bias cannot be added to static key." - assert static_v is None, "bias cannot be added to static value." 
- else: - assert bias_k is None - assert bias_v is None - - q = q.contiguous().view(tgt_len, bsz * num_heads, head_dim).transpose(0, 1) - if k is not None: - k = k.contiguous().view(-1, bsz * num_heads, head_dim).transpose(0, 1) - if v is not None: - v = v.contiguous().view(-1, bsz * num_heads, head_dim).transpose(0, 1) - - if static_k is not None: - assert static_k.size(0) == bsz * num_heads - assert static_k.size(2) == head_dim - k = static_k - - if static_v is not None: - assert static_v.size(0) == bsz * num_heads - assert static_v.size(2) == head_dim - v = static_v - - src_len = k.size(1) - - if key_padding_mask is not None: - # assert key_padding_mask.size(0) == bsz - assert key_padding_mask.size(1) == src_len - - if add_zero_attn: - src_len += 1 - k = torch.cat([k, torch.zeros((k.size(0), 1) + k.size()[2:], dtype=k.dtype, device=k.device)], dim=1) - v = torch.cat([v, torch.zeros((v.size(0), 1) + v.size()[2:], dtype=v.dtype, device=v.device)], dim=1) - if attn_mask is not None: - attn_mask = pad(attn_mask, (0, 1)) - if key_padding_mask is not None: - key_padding_mask = pad(key_padding_mask, (0, 1)) - - attn_output_weights = torch.bmm(q, k.transpose(1, 2)) - assert list(attn_output_weights.size()) == [bsz * num_heads, tgt_len, src_len] - - if attn_mask is not None: - if attn_mask.dtype == torch.bool: - attn_output_weights.masked_fill_(attn_mask, float("-inf")) - else: - attn_output_weights += attn_mask - - if key_padding_mask is not None: - attn_output_weights = attn_output_weights.view(bsz, num_heads, tgt_len, src_len) - attn_output_weights = attn_output_weights.masked_fill( - key_padding_mask.unsqueeze(1), - float("-inf"), - ) - attn_output_weights = attn_output_weights.view(bsz * num_heads, tgt_len, src_len) - - attn_output_weights = softmax(attn_output_weights, dim=-1) - attn_output_weights = dropout(attn_output_weights, p=dropout_p, training=training) - - attn_output = torch.bmm(attn_output_weights, v) - assert list(attn_output.size()) == [bsz * num_heads, tgt_len, head_dim] - attn_output = attn_output.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim) - attn_output = linear(attn_output, out_proj_weight, out_proj_bias) - - if need_weights: - # average attention weights over heads - attn_output_weights = attn_output_weights.view(bsz, num_heads, tgt_len, src_len) - return attn_output, attn_output_weights.sum(dim=1) / num_heads - else: - return attn_output, None - - -# This class exists solely for Transformer; it has an annotation stating -# that bias is never None, which appeases TorchScript -class _LinearWithBias(nn.Linear): - bias: Tensor # type: ignore - - def __init__(self, in_features: int, out_features: int) -> None: - super().__init__(in_features, out_features, bias=True) # type: ignore - - -class MultiheadAttention(nn.Module): - r"""Allows the model to jointly attend to information - from different representation subspaces. - See `Attention Is All You Need `_ - - .. math:: - \text{MultiHead}(Q, K, V) = \text{Concat}(head_1,\dots,head_h)W^O - - where :math:`head_i = \text{Attention}(QW_i^Q, KW_i^K, VW_i^V)`. - - Args: - embed_dim: total dimension of the model. - num_heads: parallel attention heads. - dropout: a Dropout layer on attn_output_weights. Default: 0.0. - bias: add bias as module parameter. Default: True. - add_bias_kv: add bias to the key and value sequences at dim=0. - add_zero_attn: add a new batch of zeros to the key and - value sequences at dim=1. - kdim: total number of features in key. Default: None. - vdim: total number of features in value. 
Default: None. - - Note that if :attr:`kdim` and :attr:`vdim` are None, they will be set - to :attr:`embed_dim` such that query, key, and value have the same - number of features. - - Examples:: - - >>> multihead_attn = nn.MultiheadAttention(embed_dim, num_heads) - >>> attn_output, attn_output_weights = multihead_attn(query, key, value) - """ - bias_k: Optional[torch.Tensor] - bias_v: Optional[torch.Tensor] - - def __init__(self, embed_dim, num_heads, dropout=0., bias=True, add_bias_kv=False, add_zero_attn=False, kdim=None, vdim=None): - super(MultiheadAttention, self).__init__() - self.embed_dim = embed_dim - self.kdim = kdim if kdim is not None else embed_dim - self.vdim = vdim if vdim is not None else embed_dim - self._qkv_same_embed_dim = self.kdim == embed_dim and self.vdim == embed_dim - - self.num_heads = num_heads - self.dropout = dropout - self.head_dim = embed_dim // num_heads - assert self.head_dim * num_heads == self.embed_dim, "embed_dim must be divisible by num_heads" - - if self._qkv_same_embed_dim is False: - self.q_proj_weight = Parameter(torch.Tensor(embed_dim, embed_dim)) - self.k_proj_weight = Parameter(torch.Tensor(embed_dim, self.kdim)) - self.v_proj_weight = Parameter(torch.Tensor(embed_dim, self.vdim)) - self.register_parameter('in_proj_weight', None) - else: - self.in_proj_weight = Parameter(torch.empty(3 * embed_dim, embed_dim)) - self.register_parameter('q_proj_weight', None) - self.register_parameter('k_proj_weight', None) - self.register_parameter('v_proj_weight', None) - - if bias: - self.in_proj_bias = Parameter(torch.empty(3 * embed_dim)) - else: - self.register_parameter('in_proj_bias', None) - self.out_proj = _LinearWithBias(embed_dim, embed_dim) - - if add_bias_kv: - self.bias_k = Parameter(torch.empty(1, 1, embed_dim)) - self.bias_v = Parameter(torch.empty(1, 1, embed_dim)) - else: - self.bias_k = self.bias_v = None - - self.add_zero_attn = add_zero_attn - - self._reset_parameters() - - def _reset_parameters(self): - if self._qkv_same_embed_dim: - xavier_uniform_(self.in_proj_weight) - else: - xavier_uniform_(self.q_proj_weight) - xavier_uniform_(self.k_proj_weight) - xavier_uniform_(self.v_proj_weight) - - if self.in_proj_bias is not None: - constant_(self.in_proj_bias, 0.) - constant_(self.out_proj.bias, 0.) - if self.bias_k is not None: - xavier_normal_(self.bias_k) - if self.bias_v is not None: - xavier_normal_(self.bias_v) - - def __setstate__(self, state): - # Support loading old MultiheadAttention checkpoints generated by v1.1.0 - if '_qkv_same_embed_dim' not in state: - state['_qkv_same_embed_dim'] = True - - super(MultiheadAttention, self).__setstate__(state) - - def forward(self, query: Tensor, key: Tensor, value: Tensor, key_padding_mask: Optional[Tensor] = None, - need_weights: bool = True, attn_mask: Optional[Tensor] = None) -> Tuple[Tensor, Optional[Tensor]]: - r""" - Args: - query, key, value: map a query and a set of key-value pairs to an output. - See "Attention Is All You Need" for more details. - key_padding_mask: if provided, specified padding elements in the key will - be ignored by the attention. When given a binary mask and a value is True, - the corresponding value on the attention layer will be ignored. When given - a byte mask and a value is non-zero, the corresponding value on the attention - layer will be ignored - need_weights: output attn_output_weights. - attn_mask: 2D or 3D mask that prevents attention to certain positions. 
A 2D mask will be broadcasted for all - the batches while a 3D mask allows to specify a different mask for the entries of each batch. - - Shapes for inputs: - - query: :math:`(L, N, E)` where L is the target sequence length, N is the batch size, E is - the embedding dimension. - - key: :math:`(S, N, E)`, where S is the source sequence length, N is the batch size, E is - the embedding dimension. - - value: :math:`(S, N, E)` where S is the source sequence length, N is the batch size, E is - the embedding dimension. - - key_padding_mask: :math:`(N, S)` where N is the batch size, S is the source sequence length. - If a ByteTensor is provided, the non-zero positions will be ignored while the position - with the zero positions will be unchanged. If a BoolTensor is provided, the positions with the - value of ``True`` will be ignored while the position with the value of ``False`` will be unchanged. - - attn_mask: if a 2D mask: :math:`(L, S)` where L is the target sequence length, S is the - source sequence length. - - If a 3D mask: :math:`(N\cdot\text{num\_heads}, L, S)` where N is the batch size, L is the target sequence - length, S is the source sequence length. ``attn_mask`` ensure that position i is allowed to attend - the unmasked positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend - while the zero positions will be unchanged. If a BoolTensor is provided, positions with ``True`` - is not allowed to attend while ``False`` values will be unchanged. If a FloatTensor - is provided, it will be added to the attention weight. - - Shapes for outputs: - - attn_output: :math:`(L, N, E)` where L is the target sequence length, N is the batch size, - E is the embedding dimension. - - attn_output_weights: :math:`(N, L, S)` where N is the batch size, - L is the target sequence length, S is the source sequence length. 
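As a companion sketch, a float-valued `attn_mask` is added to the attention logits, so a standard causal mask can be expressed with `0.0` and `-inf` entries (sizes below are arbitrary):

```python
import torch
import torch.nn as nn

# Arbitrary sizes; self-attention, so L == S.
L, N, E, H = 6, 2, 32, 4
mha = nn.MultiheadAttention(embed_dim=E, num_heads=H)

x = torch.randn(L, N, E)

# Upper-triangular causal mask of shape (L, S): 0.0 keeps a position, -inf blocks it.
causal_mask = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)

out, weights = mha(x, x, x, attn_mask=causal_mask)
print(out.shape)      # torch.Size([6, 2, 32])
print(weights.shape)  # torch.Size([2, 6, 6])
print(torch.allclose(weights[0], weights[0].tril()))  # True: no weight on future positions
```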
- """ - if not self._qkv_same_embed_dim: - return multi_head_attention_forward( - query, key, value, self.embed_dim, self.num_heads, - self.in_proj_weight, self.in_proj_bias, - self.bias_k, self.bias_v, self.add_zero_attn, - self.dropout, self.out_proj.weight, self.out_proj.bias, - training=self.training, - key_padding_mask=key_padding_mask, need_weights=need_weights, - attn_mask=attn_mask, use_separate_proj_weight=True, - q_proj_weight=self.q_proj_weight, k_proj_weight=self.k_proj_weight, - v_proj_weight=self.v_proj_weight) - else: - return multi_head_attention_forward( - query, key, value, self.embed_dim, self.num_heads, - self.in_proj_weight, self.in_proj_bias, - self.bias_k, self.bias_v, self.add_zero_attn, - self.dropout, self.out_proj.weight, self.out_proj.bias, - training=self.training, - key_padding_mask=key_padding_mask, need_weights=need_weights, - attn_mask=attn_mask) \ No newline at end of file diff --git a/spaces/xiangdy/chatGPT/modules/overwrites.py b/spaces/xiangdy/chatGPT/modules/overwrites.py deleted file mode 100644 index d17f56873c156e9fb883d35b50e2a28740f2cf90..0000000000000000000000000000000000000000 --- a/spaces/xiangdy/chatGPT/modules/overwrites.py +++ /dev/null @@ -1,101 +0,0 @@ -from __future__ import annotations -import logging - -from llama_index import Prompt -from typing import List, Tuple -import mdtex2html -from gradio_client import utils as client_utils - -from modules.presets import * -from modules.llama_func import * -from modules.config import render_latex - -def compact_text_chunks(self, prompt: Prompt, text_chunks: List[str]) -> List[str]: - logging.debug("Compacting text chunks...🚀🚀🚀") - combined_str = [c.strip() for c in text_chunks if c.strip()] - combined_str = [f"[{index+1}] {c}" for index, c in enumerate(combined_str)] - combined_str = "\n\n".join(combined_str) - # resplit based on self.max_chunk_overlap - text_splitter = self.get_text_splitter_given_prompt(prompt, 1, padding=1) - return text_splitter.split_text(combined_str) - - -def postprocess( - self, - y: List[List[str | Tuple[str] | Tuple[str, str] | None] | Tuple], - ) -> List[List[str | Dict | None]]: - """ - Parameters: - y: List of lists representing the message and response pairs. Each message and response should be a string, which may be in Markdown format. It can also be a tuple whose first element is a string filepath or URL to an image/video/audio, and second (optional) element is the alt text, in which case the media file is displayed. It can also be None, in which case that message is not displayed. - Returns: - List of lists representing the message and response. Each message and response will be a string of HTML, or a dictionary with media information. Or None if the message is not to be displayed. - """ - if y is None: - return [] - processed_messages = [] - for message_pair in y: - assert isinstance( - message_pair, (tuple, list) - ), f"Expected a list of lists or list of tuples. Received: {message_pair}" - assert ( - len(message_pair) == 2 - ), f"Expected a list of lists of length 2 or list of tuples of length 2. 
Received: {message_pair}" - - processed_messages.append( - [ - self._postprocess_chat_messages(message_pair[0], "user"), - self._postprocess_chat_messages(message_pair[1], "bot"), - ] - ) - return processed_messages - -def postprocess_chat_messages( - self, chat_message: str | Tuple | List | None, message_type: str - ) -> str | Dict | None: - if chat_message is None: - return None - elif isinstance(chat_message, (tuple, list)): - filepath = chat_message[0] - mime_type = client_utils.get_mimetype(filepath) - filepath = self.make_temp_copy_if_needed(filepath) - return { - "name": filepath, - "mime_type": mime_type, - "alt_text": chat_message[1] if len(chat_message) > 1 else None, - "data": None, # These last two fields are filled in by the frontend - "is_file": True, - } - elif isinstance(chat_message, str): - if message_type == "bot": - if not detect_converted_mark(chat_message): - chat_message = convert_mdtext(chat_message) - elif message_type == "user": - if not detect_converted_mark(chat_message): - chat_message = convert_asis(chat_message) - return chat_message - else: - raise ValueError(f"Invalid message for Chatbot component: {chat_message}") - -with open("./assets/custom.js", "r", encoding="utf-8") as f, \ - open("./assets/external-scripts.js", "r", encoding="utf-8") as f1: - customJS = f.read() - externalScripts = f1.read() - - -def reload_javascript(): - print("Reloading javascript...") - js = f'' - if render_latex: - js += """\ - - - """ - def template_response(*args, **kwargs): - res = GradioTemplateResponseOriginal(*args, **kwargs) - res.body = res.body.replace(b'', f'{js}'.encode("utf8")) - res.init_headers() - return res - - gr.routes.templates.TemplateResponse = template_response - -GradioTemplateResponseOriginal = gr.routes.templates.TemplateResponse \ No newline at end of file diff --git a/spaces/xxie92/antibody_visulization/diffab/tools/runner/design_for_testset.py b/spaces/xxie92/antibody_visulization/diffab/tools/runner/design_for_testset.py deleted file mode 100644 index c0ce6c96bc06a3c921514255de758707d78baf63..0000000000000000000000000000000000000000 --- a/spaces/xxie92/antibody_visulization/diffab/tools/runner/design_for_testset.py +++ /dev/null @@ -1,243 +0,0 @@ -import os -import argparse -import copy -import json -from tqdm.auto import tqdm -from torch.utils.data import DataLoader - -from diffab.datasets import get_dataset -from diffab.models import get_model -from diffab.modules.common.geometry import reconstruct_backbone_partially -from diffab.modules.common.so3 import so3vec_to_rotation -from diffab.utils.inference import RemoveNative -from diffab.utils.protein.writers import save_pdb -from diffab.utils.train import recursive_to -from diffab.utils.misc import * -from diffab.utils.data import * -from diffab.utils.transforms import * -from diffab.utils.inference import * - - -def create_data_variants(config, structure_factory): - structure = structure_factory() - structure_id = structure['id'] - - data_variants = [] - if config.mode == 'single_cdr': - cdrs = sorted(list(set(find_cdrs(structure)).intersection(config.sampling.cdrs))) - for cdr_name in cdrs: - transform = Compose([ - MaskSingleCDR(cdr_name, augmentation=False), - MergeChains(), - ]) - data_var = transform(structure_factory()) - residue_first, residue_last = get_residue_first_last(data_var) - data_variants.append({ - 'data': data_var, - 'name': f'{structure_id}-{cdr_name}', - 'tag': f'{cdr_name}', - 'cdr': cdr_name, - 'residue_first': residue_first, - 'residue_last': residue_last, - }) - elif 
config.mode == 'multiple_cdrs': - cdrs = sorted(list(set(find_cdrs(structure)).intersection(config.sampling.cdrs))) - transform = Compose([ - MaskMultipleCDRs(selection=cdrs, augmentation=False), - MergeChains(), - ]) - data_var = transform(structure_factory()) - data_variants.append({ - 'data': data_var, - 'name': f'{structure_id}-MultipleCDRs', - 'tag': 'MultipleCDRs', - 'cdrs': cdrs, - 'residue_first': None, - 'residue_last': None, - }) - elif config.mode == 'full': - transform = Compose([ - MaskAntibody(), - MergeChains(), - ]) - data_var = transform(structure_factory()) - data_variants.append({ - 'data': data_var, - 'name': f'{structure_id}-Full', - 'tag': 'Full', - 'residue_first': None, - 'residue_last': None, - }) - elif config.mode == 'abopt': - cdrs = sorted(list(set(find_cdrs(structure)).intersection(config.sampling.cdrs))) - for cdr_name in cdrs: - transform = Compose([ - MaskSingleCDR(cdr_name, augmentation=False), - MergeChains(), - ]) - data_var = transform(structure_factory()) - residue_first, residue_last = get_residue_first_last(data_var) - for opt_step in config.sampling.optimize_steps: - data_variants.append({ - 'data': data_var, - 'name': f'{structure_id}-{cdr_name}-O{opt_step}', - 'tag': f'{cdr_name}-O{opt_step}', - 'cdr': cdr_name, - 'opt_step': opt_step, - 'residue_first': residue_first, - 'residue_last': residue_last, - }) - else: - raise ValueError(f'Unknown mode: {config.mode}.') - return data_variants - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument('index', type=int) - parser.add_argument('-c', '--config', type=str, default='./configs/test/codesign_single.yml') - parser.add_argument('-o', '--out_root', type=str, default='./results') - parser.add_argument('-t', '--tag', type=str, default='') - parser.add_argument('-s', '--seed', type=int, default=None) - parser.add_argument('-d', '--device', type=str, default='cuda') - parser.add_argument('-b', '--batch_size', type=int, default=16) - args = parser.parse_args() - - # Load configs - config, config_name = load_config(args.config) - seed_all(args.seed if args.seed is not None else config.sampling.seed) - - # Testset - dataset = get_dataset(config.dataset.test) - get_structure = lambda: dataset[args.index] - - # Logging - structure_ = get_structure() - structure_id = structure_['id'] - tag_postfix = '_%s' % args.tag if args.tag else '' - log_dir = get_new_log_dir(os.path.join(args.out_root, config_name + tag_postfix), prefix='%04d_%s' % (args.index, structure_['id'])) - logger = get_logger('sample', log_dir) - logger.info('Data ID: %s' % structure_['id']) - data_native = MergeChains()(structure_) - save_pdb(data_native, os.path.join(log_dir, 'reference.pdb')) - - # Load checkpoint and model - logger.info('Loading model config and checkpoints: %s' % (config.model.checkpoint)) - ckpt = torch.load(config.model.checkpoint, map_location='cpu') - cfg_ckpt = ckpt['config'] - model = get_model(cfg_ckpt.model).to(args.device) - lsd = model.load_state_dict(ckpt['model']) - logger.info(str(lsd)) - - # Make data variants - data_variants = create_data_variants( - config = config, - structure_factory = get_structure, - ) - - # Save metadata - metadata = { - 'identifier': structure_id, - 'index': args.index, - 'config': args.config, - 'items': [{kk: vv for kk, vv in var.items() if kk != 'data'} for var in data_variants], - } - with open(os.path.join(log_dir, 'metadata.json'), 'w') as f: - json.dump(metadata, f, indent=2) - - # Start sampling - collate_fn = PaddingCollate(eight=False) - inference_tfm = [ 
PatchAroundAnchor(), ] - if 'abopt' not in config.mode: # Don't remove native CDR in optimization mode - inference_tfm.append(RemoveNative( - remove_structure = config.sampling.sample_structure, - remove_sequence = config.sampling.sample_sequence, - )) - inference_tfm = Compose(inference_tfm) - - for variant in data_variants: - os.makedirs(os.path.join(log_dir, variant['tag']), exist_ok=True) - logger.info(f"Start sampling for: {variant['tag']}") - - save_pdb(data_native, os.path.join(log_dir, variant['tag'], 'REF1.pdb')) # w/ OpenMM minimization - - data_cropped = inference_tfm( - copy.deepcopy(variant['data']) - ) - data_list_repeat = [ data_cropped ] * config.sampling.num_samples - loader = DataLoader(data_list_repeat, batch_size=args.batch_size, shuffle=False, collate_fn=collate_fn) - - count = 0 - for batch in tqdm(loader, desc=variant['name'], dynamic_ncols=True): - torch.set_grad_enabled(False) - model.eval() - batch = recursive_to(batch, args.device) - if 'abopt' in config.mode: - # Antibody optimization starting from native - traj_batch = model.optimize(batch, opt_step=variant['opt_step'], optimize_opt={ - 'pbar': True, - 'sample_structure': config.sampling.sample_structure, - 'sample_sequence': config.sampling.sample_sequence, - }) - else: - # De novo design - traj_batch = model.sample(batch, sample_opt={ - 'pbar': True, - 'sample_structure': config.sampling.sample_structure, - 'sample_sequence': config.sampling.sample_sequence, - }) - - aa_new = traj_batch[0][2] # 0: Last sampling step. 2: Amino acid. - pos_atom_new, mask_atom_new = reconstruct_backbone_partially( - pos_ctx = batch['pos_heavyatom'], - R_new = so3vec_to_rotation(traj_batch[0][0]), - t_new = traj_batch[0][1], - aa = aa_new, - chain_nb = batch['chain_nb'], - res_nb = batch['res_nb'], - mask_atoms = batch['mask_heavyatom'], - mask_recons = batch['generate_flag'], - ) - aa_new = aa_new.cpu() - pos_atom_new = pos_atom_new.cpu() - mask_atom_new = mask_atom_new.cpu() - - for i in range(aa_new.size(0)): - data_tmpl = variant['data'] - aa = apply_patch_to_tensor(data_tmpl['aa'], aa_new[i], data_cropped['patch_idx']) - mask_ha = apply_patch_to_tensor(data_tmpl['mask_heavyatom'], mask_atom_new[i], data_cropped['patch_idx']) - pos_ha = ( - apply_patch_to_tensor( - data_tmpl['pos_heavyatom'], - pos_atom_new[i] + batch['origin'][i].view(1, 1, 3).cpu(), - data_cropped['patch_idx'] - ) - ) - - save_path = os.path.join(log_dir, variant['tag'], '%04d.pdb' % (count, )) - save_pdb({ - 'chain_nb': data_tmpl['chain_nb'], - 'chain_id': data_tmpl['chain_id'], - 'resseq': data_tmpl['resseq'], - 'icode': data_tmpl['icode'], - # Generated - 'aa': aa, - 'mask_heavyatom': mask_ha, - 'pos_heavyatom': pos_ha, - }, path=save_path) - # save_pdb({ - # 'chain_nb': data_cropped['chain_nb'], - # 'chain_id': data_cropped['chain_id'], - # 'resseq': data_cropped['resseq'], - # 'icode': data_cropped['icode'], - # # Generated - # 'aa': aa_new[i], - # 'mask_heavyatom': mask_atom_new[i], - # 'pos_heavyatom': pos_atom_new[i] + batch['origin'][i].view(1, 1, 3).cpu(), - # }, path=os.path.join(log_dir, variant['tag'], '%04d_patch.pdb' % (count, ))) - count += 1 - - logger.info('Finished.\n') - - -if __name__ == '__main__': - main() diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/blenderbot/tokenization_blenderbot.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/blenderbot/tokenization_blenderbot.py deleted file mode 100644 index 
9a81e73b8da37add74298f0ecc1666c1acf747f8..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/blenderbot/tokenization_blenderbot.py +++ /dev/null @@ -1,433 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Facebook Inc. and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Tokenization class for Blenderbot.""" - -import json -import os -from functools import lru_cache -from typing import List, Optional, Tuple - -import regex as re - -from ...tokenization_utils import AddedToken, PreTrainedTokenizer -from ...utils import logging - - -logger = logging.get_logger(__name__) - - -VOCAB_FILES_NAMES = { - "vocab_file": "vocab.json", - "merges_file": "merges.txt", - "tokenizer_config_file": "tokenizer_config.json", -} - -PRETRAINED_VOCAB_FILES_MAP = { - "vocab_file": {"facebook/blenderbot-3B": "https://huggingface.co/facebook/blenderbot-3B/resolve/main/vocab.json"}, - "merges_file": {"facebook/blenderbot-3B": "https://huggingface.co/facebook/blenderbot-3B/resolve/main/merges.txt"}, - "tokenizer_config_file": { - "facebook/blenderbot-3B": "https://huggingface.co/facebook/blenderbot-3B/resolve/main/tokenizer_config.json" - }, -} - -PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {"facebook/blenderbot-3B": 128} - - -@lru_cache() -# Copied from transformers.models.roberta.tokenization_roberta.bytes_to_unicode -def bytes_to_unicode(): - """ - Returns list of utf-8 byte and a mapping to unicode strings. We specifically avoids mapping to whitespace/control - characters the bpe code barfs on. - - The reversible bpe codes work on unicode strings. This means you need a large # of unicode characters in your vocab - if you want to avoid UNKs. When you're at something like a 10B token dataset you end up needing around 5K for - decent coverage. This is a significant percentage of your normal, say, 32K bpe vocab. To avoid that, we want lookup - tables between utf-8 bytes and unicode strings. - """ - bs = ( - list(range(ord("!"), ord("~") + 1)) + list(range(ord("¡"), ord("¬") + 1)) + list(range(ord("®"), ord("ÿ") + 1)) - ) - cs = bs[:] - n = 0 - for b in range(2**8): - if b not in bs: - bs.append(b) - cs.append(2**8 + n) - n += 1 - cs = [chr(n) for n in cs] - return dict(zip(bs, cs)) - - -# Copied from transformers.models.roberta.tokenization_roberta.get_pairs -def get_pairs(word): - """ - Return set of symbol pairs in a word. - - Word is represented as tuple of symbols (symbols being variable-length strings). - """ - pairs = set() - prev_char = word[0] - for char in word[1:]: - pairs.add((prev_char, char)) - prev_char = char - return pairs - - -class BlenderbotTokenizer(PreTrainedTokenizer): - """ - Constructs a Blenderbot tokenizer, derived from the GPT-2 tokenizer, using byte-level Byte-Pair-Encoding. 
- - This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will - be encoded differently whether it is at the beginning of the sentence (without space) or not: - - ```python - >>> from transformers import BlenderbotTokenizer - - >>> tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-3B") - >>> tokenizer.add_prefix_space = False - >>> tokenizer("Hello world")["input_ids"] - [47, 921, 86, 1085, 2] - - >>> tokenizer(" Hello world")["input_ids"] - [6950, 1085, 2] - ``` - - You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you - call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance. - - - - When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one). - - - - This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to - this superclass for more information regarding those methods. - - Args: - vocab_file (`str`): - Path to the vocabulary file. - merges_file (`str`): - Path to the merges file. - errors (`str`, *optional*, defaults to `"replace"`): - Paradigm to follow when decoding bytes to UTF-8. See - [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information. - bos_token (`str`, *optional*, defaults to `""`): - The beginning of sequence token that was used during pretraining. Can be used a sequence classifier token. - - - - When building a sequence using special tokens, this is not the token that is used for the beginning of - sequence. The token used is the `cls_token`. - - - - eos_token (`str`, *optional*, defaults to `""`): - The end of sequence token. - - - - When building a sequence using special tokens, this is not the token that is used for the end of sequence. - The token used is the `sep_token`. - - - - sep_token (`str`, *optional*, defaults to `""`): - The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for - sequence classification or for a text and a question for question answering. It is also used as the last - token of a sequence built with special tokens. - cls_token (`str`, *optional*, defaults to `""`): - The classifier token which is used when doing sequence classification (classification of the whole sequence - instead of per-token classification). It is the first token of the sequence when built with special tokens. - unk_token (`str`, *optional*, defaults to `""`): - The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this - token instead. - pad_token (`str`, *optional*, defaults to `""`): - The token used for padding, for example when batching sequences of different lengths. - mask_token (`str`, *optional*, defaults to `""`): - The token used for masking values. This is the token used when training this model with masked language - modeling. This is the token which the model will try to predict. - add_prefix_space (`bool`, *optional*, defaults to `False`): - Whether or not to add an initial space to the input. This allows to treat the leading word just as any - other word. (Blenderbot tokenizer detect beginning of words by the preceding space). 
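A brief usage sketch, assuming the `facebook/blenderbot-3B` checkpoint referenced in `PRETRAINED_VOCAB_FILES_MAP` can be downloaded:

```python
from transformers import BlenderbotTokenizer

tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-3B")

enc = tokenizer("My friends are cool but they eat too many carbs.")

# A single sequence is encoded as "X </s>", i.e. the EOS token is appended.
print(enc["input_ids"][-1] == tokenizer.eos_token_id)             # True
print(tokenizer.decode(enc["input_ids"], skip_special_tokens=True))
```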
- """ - - vocab_files_names = VOCAB_FILES_NAMES - pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP - max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES - model_input_names = ["input_ids", "attention_mask"] - - # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer.__init__ with Roberta->Blenderbot, RoBERTa->Blenderbot - def __init__( - self, - vocab_file, - merges_file, - errors="replace", - bos_token="", - eos_token="", - sep_token="", - cls_token="", - unk_token="", - pad_token="", - mask_token="", - add_prefix_space=False, - **kwargs, - ): - bos_token = AddedToken(bos_token, lstrip=False, rstrip=False) if isinstance(bos_token, str) else bos_token - pad_token = AddedToken(pad_token, lstrip=False, rstrip=False) if isinstance(pad_token, str) else pad_token - eos_token = AddedToken(eos_token, lstrip=False, rstrip=False) if isinstance(eos_token, str) else eos_token - unk_token = AddedToken(unk_token, lstrip=False, rstrip=False) if isinstance(unk_token, str) else unk_token - sep_token = AddedToken(sep_token, lstrip=False, rstrip=False) if isinstance(sep_token, str) else sep_token - cls_token = AddedToken(cls_token, lstrip=False, rstrip=False) if isinstance(cls_token, str) else cls_token - - # Mask token behave like a normal word, i.e. include the space before it - mask_token = ( - AddedToken(mask_token, lstrip=True, rstrip=False, normalized=False) - if isinstance(mask_token, str) - else mask_token - ) - - # these special tokens are not part of the vocab.json, let's add them in the correct order - - with open(vocab_file, encoding="utf-8") as vocab_handle: - self.encoder = json.load(vocab_handle) - self.decoder = {v: k for k, v in self.encoder.items()} - self.errors = errors # how to handle errors in decoding - self.byte_encoder = bytes_to_unicode() - self.byte_decoder = {v: k for k, v in self.byte_encoder.items()} - with open(merges_file, encoding="utf-8") as merges_handle: - bpe_merges = merges_handle.read().split("\n")[1:-1] - bpe_merges = [tuple(merge.split()) for merge in bpe_merges] - self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges)))) - self.cache = {} - self.add_prefix_space = add_prefix_space - - # Should have added re.IGNORECASE so BPE merges can happen for capitalized versions of contractions - self.pat = re.compile(r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""") - - super().__init__( - errors=errors, - bos_token=bos_token, - eos_token=eos_token, - unk_token=unk_token, - sep_token=sep_token, - cls_token=cls_token, - pad_token=pad_token, - mask_token=mask_token, - add_prefix_space=add_prefix_space, - **kwargs, - ) - - @property - # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer.vocab_size with Roberta->Blenderbot, RoBERTa->Blenderbot - def vocab_size(self): - return len(self.encoder) - - # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer.get_vocab with Roberta->Blenderbot, RoBERTa->Blenderbot - def get_vocab(self): - vocab = dict(self.encoder).copy() - vocab.update(self.added_tokens_encoder) - return vocab - - # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer.bpe with Roberta->Blenderbot, RoBERTa->Blenderbot - def bpe(self, token): - if token in self.cache: - return self.cache[token] - word = tuple(token) - pairs = get_pairs(word) - - if not pairs: - return token - - while True: - bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf"))) - if bigram not in self.bpe_ranks: - break - first, 
second = bigram - new_word = [] - i = 0 - while i < len(word): - try: - j = word.index(first, i) - except ValueError: - new_word.extend(word[i:]) - break - else: - new_word.extend(word[i:j]) - i = j - - if word[i] == first and i < len(word) - 1 and word[i + 1] == second: - new_word.append(first + second) - i += 2 - else: - new_word.append(word[i]) - i += 1 - new_word = tuple(new_word) - word = new_word - if len(word) == 1: - break - else: - pairs = get_pairs(word) - word = " ".join(word) - self.cache[token] = word - return word - - # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer._tokenize with Roberta->Blenderbot, RoBERTa->Blenderbot - def _tokenize(self, text): - """Tokenize a string.""" - bpe_tokens = [] - for token in re.findall(self.pat, text): - token = "".join( - self.byte_encoder[b] for b in token.encode("utf-8") - ) # Maps all our bytes to unicode strings, avoiding control tokens of the BPE (spaces in our case) - bpe_tokens.extend(bpe_token for bpe_token in self.bpe(token).split(" ")) - return bpe_tokens - - # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer._convert_token_to_id with Roberta->Blenderbot, RoBERTa->Blenderbot - def _convert_token_to_id(self, token): - """Converts a token (str) in an id using the vocab.""" - return self.encoder.get(token, self.encoder.get(self.unk_token)) - - # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer._convert_id_to_token with Roberta->Blenderbot, RoBERTa->Blenderbot - def _convert_id_to_token(self, index): - """Converts an index (integer) in a token (str) using the vocab.""" - return self.decoder.get(index) - - # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer.convert_tokens_to_string with Roberta->Blenderbot, RoBERTa->Blenderbot - def convert_tokens_to_string(self, tokens): - """Converts a sequence of tokens (string) in a single string.""" - text = "".join(tokens) - text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors) - return text - - # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer.save_vocabulary with Roberta->Blenderbot, RoBERTa->Blenderbot - def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]: - if not os.path.isdir(save_directory): - logger.error(f"Vocabulary path ({save_directory}) should be a directory") - return - vocab_file = os.path.join( - save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"] - ) - merge_file = os.path.join( - save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["merges_file"] - ) - - with open(vocab_file, "w", encoding="utf-8") as f: - f.write(json.dumps(self.encoder, indent=2, sort_keys=True, ensure_ascii=False) + "\n") - - index = 0 - with open(merge_file, "w", encoding="utf-8") as writer: - writer.write("#version: 0.2\n") - for bpe_tokens, token_index in sorted(self.bpe_ranks.items(), key=lambda kv: kv[1]): - if index != token_index: - logger.warning( - f"Saving vocabulary to {merge_file}: BPE merge indices are not consecutive." - " Please check that the tokenizer is not corrupted!" 
- ) - index = token_index - writer.write(" ".join(bpe_tokens) + "\n") - index += 1 - - return vocab_file, merge_file - - # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer.get_special_tokens_mask with Roberta->Blenderbot, RoBERTa->Blenderbot - def get_special_tokens_mask( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False - ) -> List[int]: - """ - Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding - special tokens using the tokenizer `prepare_for_model` method. - - Args: - token_ids_0 (`List[int]`): - List of IDs. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - already_has_special_tokens (`bool`, *optional*, defaults to `False`): - Whether or not the token list is already formatted with special tokens for the model. - - Returns: - `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. - """ - if already_has_special_tokens: - return super().get_special_tokens_mask( - token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True - ) - - if token_ids_1 is None: - return [1] + ([0] * len(token_ids_0)) + [1] - return [1] + ([0] * len(token_ids_0)) + [1, 1] + ([0] * len(token_ids_1)) + [1] - - # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer.create_token_type_ids_from_sequences with Roberta->Blenderbot, RoBERTa->Blenderbot - def create_token_type_ids_from_sequences( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None - ) -> List[int]: - """ - Create a mask from the two sequences passed to be used in a sequence-pair classification task. Blenderbot does - not make use of token type ids, therefore a list of zeros is returned. - - Args: - token_ids_0 (`List[int]`): - List of IDs. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - - Returns: - `List[int]`: List of zeros. - """ - sep = [self.sep_token_id] - cls = [self.cls_token_id] - - if token_ids_1 is None: - return len(cls + token_ids_0 + sep) * [0] - return len(cls + token_ids_0 + sep + sep + token_ids_1 + sep) * [0] - - # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer.prepare_for_tokenization with Roberta->Blenderbot, RoBERTa->Blenderbot - def prepare_for_tokenization(self, text, is_split_into_words=False, **kwargs): - add_prefix_space = kwargs.pop("add_prefix_space", self.add_prefix_space) - if (is_split_into_words or add_prefix_space) and (len(text) > 0 and not text[0].isspace()): - text = " " + text - return (text, kwargs) - - def build_inputs_with_special_tokens(self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None): - """ - Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and - adding special tokens. A Blenderbot sequence has the following format: - - single sequence: ` X ` - - Args: - token_ids_0 (`List[int]`): - List of IDs to which the special tokens will be added - token_ids_1 (`List[int]`, *optional*): - Will be ignored - Returns: - `List[int]`: list of [input IDs](../glossary#input-ids) with the appropriate special tokens. - """ - return token_ids_0 + [self.eos_token_id] - - @property - def default_chat_template(self): - """ - A very simple chat template that just adds whitespace between messages. 
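A sketch of how this template is typically consumed via `apply_chat_template` (again assuming the `facebook/blenderbot-3B` checkpoint is available; with no explicit `chat_template` set, this tokenizer version falls back to `default_chat_template`):

```python
from transformers import BlenderbotTokenizer

tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-3B")

chat = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi, how can I help?"},
    {"role": "user", "content": "Tell me a joke."},
]

# With tokenize=False this returns the raw rendered string: the messages separated
# by whitespace (user turns get a leading space), followed by the EOS token.
print(tokenizer.apply_chat_template(chat, tokenize=False))
```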
- """ - return ( - "{% for message in messages %}" - "{% if message['role'] == 'user' %}{{ ' ' }}{% endif %}" - "{{ message['content'] }}" - "{% if not loop.last %}{{ ' ' }}{% endif %}" - "{% endfor %}" - "{{ eos_token }}" - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/detr/convert_detr_original_pytorch_checkpoint_to_pytorch.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/detr/convert_detr_original_pytorch_checkpoint_to_pytorch.py deleted file mode 100644 index 72de2be8701a9cf97a4e152be38da54bf87ac3d9..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/detr/convert_detr_original_pytorch_checkpoint_to_pytorch.py +++ /dev/null @@ -1,278 +0,0 @@ -# coding=utf-8 -# Copyright 2020 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Convert DETR checkpoints with timm backbone.""" - - -import argparse -import json -from collections import OrderedDict -from pathlib import Path - -import requests -import torch -from huggingface_hub import hf_hub_download -from PIL import Image - -from transformers import DetrConfig, DetrForObjectDetection, DetrForSegmentation, DetrImageProcessor -from transformers.utils import logging - - -logging.set_verbosity_info() -logger = logging.get_logger(__name__) - -# here we list all keys to be renamed (original name on the left, our name on the right) -rename_keys = [] -for i in range(6): - # encoder layers: output projection, 2 feedforward neural networks and 2 layernorms - rename_keys.append( - (f"transformer.encoder.layers.{i}.self_attn.out_proj.weight", f"encoder.layers.{i}.self_attn.out_proj.weight") - ) - rename_keys.append( - (f"transformer.encoder.layers.{i}.self_attn.out_proj.bias", f"encoder.layers.{i}.self_attn.out_proj.bias") - ) - rename_keys.append((f"transformer.encoder.layers.{i}.linear1.weight", f"encoder.layers.{i}.fc1.weight")) - rename_keys.append((f"transformer.encoder.layers.{i}.linear1.bias", f"encoder.layers.{i}.fc1.bias")) - rename_keys.append((f"transformer.encoder.layers.{i}.linear2.weight", f"encoder.layers.{i}.fc2.weight")) - rename_keys.append((f"transformer.encoder.layers.{i}.linear2.bias", f"encoder.layers.{i}.fc2.bias")) - rename_keys.append( - (f"transformer.encoder.layers.{i}.norm1.weight", f"encoder.layers.{i}.self_attn_layer_norm.weight") - ) - rename_keys.append((f"transformer.encoder.layers.{i}.norm1.bias", f"encoder.layers.{i}.self_attn_layer_norm.bias")) - rename_keys.append((f"transformer.encoder.layers.{i}.norm2.weight", f"encoder.layers.{i}.final_layer_norm.weight")) - rename_keys.append((f"transformer.encoder.layers.{i}.norm2.bias", f"encoder.layers.{i}.final_layer_norm.bias")) - # decoder layers: 2 times output projection, 2 feedforward neural networks and 3 layernorms - rename_keys.append( - (f"transformer.decoder.layers.{i}.self_attn.out_proj.weight", f"decoder.layers.{i}.self_attn.out_proj.weight") - ) - rename_keys.append( - 
(f"transformer.decoder.layers.{i}.self_attn.out_proj.bias", f"decoder.layers.{i}.self_attn.out_proj.bias") - ) - rename_keys.append( - ( - f"transformer.decoder.layers.{i}.multihead_attn.out_proj.weight", - f"decoder.layers.{i}.encoder_attn.out_proj.weight", - ) - ) - rename_keys.append( - ( - f"transformer.decoder.layers.{i}.multihead_attn.out_proj.bias", - f"decoder.layers.{i}.encoder_attn.out_proj.bias", - ) - ) - rename_keys.append((f"transformer.decoder.layers.{i}.linear1.weight", f"decoder.layers.{i}.fc1.weight")) - rename_keys.append((f"transformer.decoder.layers.{i}.linear1.bias", f"decoder.layers.{i}.fc1.bias")) - rename_keys.append((f"transformer.decoder.layers.{i}.linear2.weight", f"decoder.layers.{i}.fc2.weight")) - rename_keys.append((f"transformer.decoder.layers.{i}.linear2.bias", f"decoder.layers.{i}.fc2.bias")) - rename_keys.append( - (f"transformer.decoder.layers.{i}.norm1.weight", f"decoder.layers.{i}.self_attn_layer_norm.weight") - ) - rename_keys.append((f"transformer.decoder.layers.{i}.norm1.bias", f"decoder.layers.{i}.self_attn_layer_norm.bias")) - rename_keys.append( - (f"transformer.decoder.layers.{i}.norm2.weight", f"decoder.layers.{i}.encoder_attn_layer_norm.weight") - ) - rename_keys.append( - (f"transformer.decoder.layers.{i}.norm2.bias", f"decoder.layers.{i}.encoder_attn_layer_norm.bias") - ) - rename_keys.append((f"transformer.decoder.layers.{i}.norm3.weight", f"decoder.layers.{i}.final_layer_norm.weight")) - rename_keys.append((f"transformer.decoder.layers.{i}.norm3.bias", f"decoder.layers.{i}.final_layer_norm.bias")) - -# convolutional projection + query embeddings + layernorm of decoder + class and bounding box heads -rename_keys.extend( - [ - ("input_proj.weight", "input_projection.weight"), - ("input_proj.bias", "input_projection.bias"), - ("query_embed.weight", "query_position_embeddings.weight"), - ("transformer.decoder.norm.weight", "decoder.layernorm.weight"), - ("transformer.decoder.norm.bias", "decoder.layernorm.bias"), - ("class_embed.weight", "class_labels_classifier.weight"), - ("class_embed.bias", "class_labels_classifier.bias"), - ("bbox_embed.layers.0.weight", "bbox_predictor.layers.0.weight"), - ("bbox_embed.layers.0.bias", "bbox_predictor.layers.0.bias"), - ("bbox_embed.layers.1.weight", "bbox_predictor.layers.1.weight"), - ("bbox_embed.layers.1.bias", "bbox_predictor.layers.1.bias"), - ("bbox_embed.layers.2.weight", "bbox_predictor.layers.2.weight"), - ("bbox_embed.layers.2.bias", "bbox_predictor.layers.2.bias"), - ] -) - - -def rename_key(state_dict, old, new): - val = state_dict.pop(old) - state_dict[new] = val - - -def rename_backbone_keys(state_dict): - new_state_dict = OrderedDict() - for key, value in state_dict.items(): - if "backbone.0.body" in key: - new_key = key.replace("backbone.0.body", "backbone.conv_encoder.model") - new_state_dict[new_key] = value - else: - new_state_dict[key] = value - - return new_state_dict - - -def read_in_q_k_v(state_dict, is_panoptic=False): - prefix = "" - if is_panoptic: - prefix = "detr." 
- - # first: transformer encoder - for i in range(6): - # read in weights + bias of input projection layer (in PyTorch's MultiHeadAttention, this is a single matrix + bias) - in_proj_weight = state_dict.pop(f"{prefix}transformer.encoder.layers.{i}.self_attn.in_proj_weight") - in_proj_bias = state_dict.pop(f"{prefix}transformer.encoder.layers.{i}.self_attn.in_proj_bias") - # next, add query, keys and values (in that order) to the state dict - state_dict[f"encoder.layers.{i}.self_attn.q_proj.weight"] = in_proj_weight[:256, :] - state_dict[f"encoder.layers.{i}.self_attn.q_proj.bias"] = in_proj_bias[:256] - state_dict[f"encoder.layers.{i}.self_attn.k_proj.weight"] = in_proj_weight[256:512, :] - state_dict[f"encoder.layers.{i}.self_attn.k_proj.bias"] = in_proj_bias[256:512] - state_dict[f"encoder.layers.{i}.self_attn.v_proj.weight"] = in_proj_weight[-256:, :] - state_dict[f"encoder.layers.{i}.self_attn.v_proj.bias"] = in_proj_bias[-256:] - # next: transformer decoder (which is a bit more complex because it also includes cross-attention) - for i in range(6): - # read in weights + bias of input projection layer of self-attention - in_proj_weight = state_dict.pop(f"{prefix}transformer.decoder.layers.{i}.self_attn.in_proj_weight") - in_proj_bias = state_dict.pop(f"{prefix}transformer.decoder.layers.{i}.self_attn.in_proj_bias") - # next, add query, keys and values (in that order) to the state dict - state_dict[f"decoder.layers.{i}.self_attn.q_proj.weight"] = in_proj_weight[:256, :] - state_dict[f"decoder.layers.{i}.self_attn.q_proj.bias"] = in_proj_bias[:256] - state_dict[f"decoder.layers.{i}.self_attn.k_proj.weight"] = in_proj_weight[256:512, :] - state_dict[f"decoder.layers.{i}.self_attn.k_proj.bias"] = in_proj_bias[256:512] - state_dict[f"decoder.layers.{i}.self_attn.v_proj.weight"] = in_proj_weight[-256:, :] - state_dict[f"decoder.layers.{i}.self_attn.v_proj.bias"] = in_proj_bias[-256:] - # read in weights + bias of input projection layer of cross-attention - in_proj_weight_cross_attn = state_dict.pop( - f"{prefix}transformer.decoder.layers.{i}.multihead_attn.in_proj_weight" - ) - in_proj_bias_cross_attn = state_dict.pop(f"{prefix}transformer.decoder.layers.{i}.multihead_attn.in_proj_bias") - # next, add query, keys and values (in that order) of cross-attention to the state dict - state_dict[f"decoder.layers.{i}.encoder_attn.q_proj.weight"] = in_proj_weight_cross_attn[:256, :] - state_dict[f"decoder.layers.{i}.encoder_attn.q_proj.bias"] = in_proj_bias_cross_attn[:256] - state_dict[f"decoder.layers.{i}.encoder_attn.k_proj.weight"] = in_proj_weight_cross_attn[256:512, :] - state_dict[f"decoder.layers.{i}.encoder_attn.k_proj.bias"] = in_proj_bias_cross_attn[256:512] - state_dict[f"decoder.layers.{i}.encoder_attn.v_proj.weight"] = in_proj_weight_cross_attn[-256:, :] - state_dict[f"decoder.layers.{i}.encoder_attn.v_proj.bias"] = in_proj_bias_cross_attn[-256:] - - -# We will verify our results on an image of cute cats -def prepare_img(): - url = "http://images.cocodataset.org/val2017/000000039769.jpg" - im = Image.open(requests.get(url, stream=True).raw) - - return im - - -@torch.no_grad() -def convert_detr_checkpoint(model_name, pytorch_dump_folder_path): - """ - Copy/paste/tweak model's weights to our DETR structure. 
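The slicing in `read_in_q_k_v` above relies on PyTorch's `nn.MultiheadAttention` storing the query, key and value projections stacked row-wise in a single `in_proj_weight`. A small sketch of that convention with the 256-dimensional hidden size assumed by this script (random tensors, for illustration only):

```python
import torch

# Hypothetical fused projection for one attention layer (hidden size 256).
hidden = 256
in_proj_weight = torch.randn(3 * hidden, hidden)
in_proj_bias = torch.randn(3 * hidden)

# Same slicing as read_in_q_k_v: query rows first, then key rows, then value rows.
q_w, k_w, v_w = in_proj_weight[:hidden], in_proj_weight[hidden : 2 * hidden], in_proj_weight[-hidden:]
q_b, k_b, v_b = in_proj_bias[:hidden], in_proj_bias[hidden : 2 * hidden], in_proj_bias[-hidden:]

# The three slices tile the fused parameters exactly.
assert torch.equal(torch.cat([q_w, k_w, v_w], dim=0), in_proj_weight)
assert torch.equal(torch.cat([q_b, k_b, v_b], dim=0), in_proj_bias)
```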
- """ - - # load default config - config = DetrConfig() - # set backbone and dilation attributes - if "resnet101" in model_name: - config.backbone = "resnet101" - if "dc5" in model_name: - config.dilation = True - is_panoptic = "panoptic" in model_name - if is_panoptic: - config.num_labels = 250 - else: - config.num_labels = 91 - repo_id = "huggingface/label-files" - filename = "coco-detection-id2label.json" - id2label = json.load(open(hf_hub_download(repo_id, filename, repo_type="dataset"), "r")) - id2label = {int(k): v for k, v in id2label.items()} - config.id2label = id2label - config.label2id = {v: k for k, v in id2label.items()} - - # load image processor - format = "coco_panoptic" if is_panoptic else "coco_detection" - image_processor = DetrImageProcessor(format=format) - - # prepare image - img = prepare_img() - encoding = image_processor(images=img, return_tensors="pt") - pixel_values = encoding["pixel_values"] - - logger.info(f"Converting model {model_name}...") - - # load original model from torch hub - detr = torch.hub.load("facebookresearch/detr", model_name, pretrained=True).eval() - state_dict = detr.state_dict() - # rename keys - for src, dest in rename_keys: - if is_panoptic: - src = "detr." + src - rename_key(state_dict, src, dest) - state_dict = rename_backbone_keys(state_dict) - # query, key and value matrices need special treatment - read_in_q_k_v(state_dict, is_panoptic=is_panoptic) - # important: we need to prepend a prefix to each of the base model keys as the head models use different attributes for them - prefix = "detr.model." if is_panoptic else "model." - for key in state_dict.copy().keys(): - if is_panoptic: - if ( - key.startswith("detr") - and not key.startswith("class_labels_classifier") - and not key.startswith("bbox_predictor") - ): - val = state_dict.pop(key) - state_dict["detr.model" + key[4:]] = val - elif "class_labels_classifier" in key or "bbox_predictor" in key: - val = state_dict.pop(key) - state_dict["detr." + key] = val - elif key.startswith("bbox_attention") or key.startswith("mask_head"): - continue - else: - val = state_dict.pop(key) - state_dict[prefix + key] = val - else: - if not key.startswith("class_labels_classifier") and not key.startswith("bbox_predictor"): - val = state_dict.pop(key) - state_dict[prefix + key] = val - # finally, create HuggingFace model and load state dict - model = DetrForSegmentation(config) if is_panoptic else DetrForObjectDetection(config) - model.load_state_dict(state_dict) - model.eval() - # verify our conversion - original_outputs = detr(pixel_values) - outputs = model(pixel_values) - assert torch.allclose(outputs.logits, original_outputs["pred_logits"], atol=1e-4) - assert torch.allclose(outputs.pred_boxes, original_outputs["pred_boxes"], atol=1e-4) - if is_panoptic: - assert torch.allclose(outputs.pred_masks, original_outputs["pred_masks"], atol=1e-4) - - # Save model and image processor - logger.info(f"Saving PyTorch model and image processor to {pytorch_dump_folder_path}...") - Path(pytorch_dump_folder_path).mkdir(exist_ok=True) - model.save_pretrained(pytorch_dump_folder_path) - image_processor.save_pretrained(pytorch_dump_folder_path) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument( - "--model_name", default="detr_resnet50", type=str, help="Name of the DETR model you'd like to convert." - ) - parser.add_argument( - "--pytorch_dump_folder_path", default=None, type=str, help="Path to the folder to output PyTorch model." 
- ) - args = parser.parse_args() - convert_detr_checkpoint(args.model_name, args.pytorch_dump_folder_path) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/led/configuration_led.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/led/configuration_led.py deleted file mode 100644 index 34c286ce18910f5d32a7067d4a941f80f23bad20..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/led/configuration_led.py +++ /dev/null @@ -1,166 +0,0 @@ -# coding=utf-8 -# Copyright 2021 Iz Beltagy, Matthew E. Peters, Arman Cohan and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" LED model configuration""" - -from typing import List, Union - -from ...configuration_utils import PretrainedConfig -from ...utils import logging - - -logger = logging.get_logger(__name__) - -LED_PRETRAINED_CONFIG_ARCHIVE_MAP = { - "allenai/led-base-16384": "https://huggingface.co/allenai/led-base-16384/resolve/main/config.json", - # See all LED models at https://huggingface.co/models?filter=led -} - - -class LEDConfig(PretrainedConfig): - r""" - This is the configuration class to store the configuration of a [`LEDModel`]. It is used to instantiate an LED - model according to the specified arguments, defining the model architecture. Instantiating a configuration with the - defaults will yield a similar configuration to that of the LED - [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) architecture. - - Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the - documentation from [`PretrainedConfig`] for more information. - - - Args: - vocab_size (`int`, *optional*, defaults to 50265): - Vocabulary size of the LED model. Defines the number of different tokens that can be represented by the - `inputs_ids` passed when calling [`LEDModel`] or [`TFLEDModel`]. - d_model (`int`, *optional*, defaults to 1024): - Dimensionality of the layers and the pooler layer. - encoder_layers (`int`, *optional*, defaults to 12): - Number of encoder layers. - decoder_layers (`int`, *optional*, defaults to 12): - Number of decoder layers. - encoder_attention_heads (`int`, *optional*, defaults to 16): - Number of attention heads for each attention layer in the Transformer encoder. - decoder_attention_heads (`int`, *optional*, defaults to 16): - Number of attention heads for each attention layer in the Transformer decoder. - decoder_ffn_dim (`int`, *optional*, defaults to 4096): - Dimensionality of the "intermediate" (often named feed-forward) layer in decoder. - encoder_ffn_dim (`int`, *optional*, defaults to 4096): - Dimensionality of the "intermediate" (often named feed-forward) layer in decoder. - activation_function (`str` or `function`, *optional*, defaults to `"gelu"`): - The non-linear activation function (function or string) in the encoder and pooler. 
If string, `"gelu"`, - `"relu"`, `"silu"` and `"gelu_new"` are supported. - dropout (`float`, *optional*, defaults to 0.1): - The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. - attention_dropout (`float`, *optional*, defaults to 0.0): - The dropout ratio for the attention probabilities. - activation_dropout (`float`, *optional*, defaults to 0.0): - The dropout ratio for activations inside the fully connected layer. - classifier_dropout (`float`, *optional*, defaults to 0.0): - The dropout ratio for classifier. - max_encoder_position_embeddings (`int`, *optional*, defaults to 16384): - The maximum sequence length that the encoder might ever be used with. - max_decoder_position_embeddings (`int`, *optional*, defaults to 16384): - The maximum sequence length that the decoder might ever be used with. - init_std (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - encoder_layerdrop (`float`, *optional*, defaults to 0.0): - The LayerDrop probability for the encoder. See the [LayerDrop paper](see https://arxiv.org/abs/1909.11556) - for more details. - decoder_layerdrop (`float`, *optional*, defaults to 0.0): - The LayerDrop probability for the decoder. See the [LayerDrop paper](see https://arxiv.org/abs/1909.11556) - for more details. - use_cache (`bool`, *optional*, defaults to `True`): - Whether or not the model should return the last key/values attentions (not used by all models) - - Example: - - ```python - >>> from transformers import LEDModel, LEDConfig - - >>> # Initializing a LED allenai/led-base-16384 style configuration - >>> configuration = LEDConfig() - - >>> # Initializing a model from the allenai/led-base-16384 style configuration - >>> model = LEDModel(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - ```""" - model_type = "led" - attribute_map = { - "num_attention_heads": "encoder_attention_heads", - "hidden_size": "d_model", - "attention_probs_dropout_prob": "attention_dropout", - "initializer_range": "init_std", - } - - def __init__( - self, - vocab_size=50265, - max_encoder_position_embeddings=16384, - max_decoder_position_embeddings=1024, - encoder_layers=12, - encoder_ffn_dim=4096, - encoder_attention_heads=16, - decoder_layers=12, - decoder_ffn_dim=4096, - decoder_attention_heads=16, - encoder_layerdrop=0.0, - decoder_layerdrop=0.0, - use_cache=True, - is_encoder_decoder=True, - activation_function="gelu", - d_model=1024, - dropout=0.1, - attention_dropout=0.0, - activation_dropout=0.0, - init_std=0.02, - decoder_start_token_id=2, - classifier_dropout=0.0, - pad_token_id=1, - bos_token_id=0, - eos_token_id=2, - attention_window: Union[List[int], int] = 512, - **kwargs, - ): - self.vocab_size = vocab_size - self.max_encoder_position_embeddings = max_encoder_position_embeddings - self.max_decoder_position_embeddings = max_decoder_position_embeddings - self.d_model = d_model - self.encoder_ffn_dim = encoder_ffn_dim - self.encoder_layers = encoder_layers - self.encoder_attention_heads = encoder_attention_heads - self.decoder_ffn_dim = decoder_ffn_dim - self.decoder_layers = decoder_layers - self.decoder_attention_heads = decoder_attention_heads - self.dropout = dropout - self.attention_dropout = attention_dropout - self.activation_dropout = activation_dropout - self.activation_function = activation_function - self.init_std = init_std - self.encoder_layerdrop = encoder_layerdrop - 
self.decoder_layerdrop = decoder_layerdrop - self.classifier_dropout = classifier_dropout - self.use_cache = use_cache - self.num_hidden_layers = encoder_layers - self.attention_window = attention_window - - super().__init__( - pad_token_id=pad_token_id, - bos_token_id=bos_token_id, - eos_token_id=eos_token_id, - is_encoder_decoder=is_encoder_decoder, - decoder_start_token_id=decoder_start_token_id, - **kwargs, - ) diff --git a/spaces/yl12053/so-vits-4.1-Kitasan-Black/diffusion/infer_gt_mel.py b/spaces/yl12053/so-vits-4.1-Kitasan-Black/diffusion/infer_gt_mel.py deleted file mode 100644 index 033b821a5d21a1232f1786bce5616b12e01488ad..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Kitasan-Black/diffusion/infer_gt_mel.py +++ /dev/null @@ -1,74 +0,0 @@ -import numpy as np -import torch -import torch.nn.functional as F -from diffusion.unit2mel import load_model_vocoder - - -class DiffGtMel: - def __init__(self, project_path=None, device=None): - self.project_path = project_path - if device is not None: - self.device = device - else: - self.device = 'cuda' if torch.cuda.is_available() else 'cpu' - self.model = None - self.vocoder = None - self.args = None - - def flush_model(self, project_path, ddsp_config=None): - if (self.model is None) or (project_path != self.project_path): - model, vocoder, args = load_model_vocoder(project_path, device=self.device) - if self.check_args(ddsp_config, args): - self.model = model - self.vocoder = vocoder - self.args = args - - def check_args(self, args1, args2): - if args1.data.block_size != args2.data.block_size: - raise ValueError("DDSP与DIFF模型的block_size不一致") - if args1.data.sampling_rate != args2.data.sampling_rate: - raise ValueError("DDSP与DIFF模型的sampling_rate不一致") - if args1.data.encoder != args2.data.encoder: - raise ValueError("DDSP与DIFF模型的encoder不一致") - return True - - def __call__(self, audio, f0, hubert, volume, acc=1, spk_id=1, k_step=0, method='pndm', - spk_mix_dict=None, start_frame=0): - input_mel = self.vocoder.extract(audio, self.args.data.sampling_rate) - out_mel = self.model( - hubert, - f0, - volume, - spk_id=spk_id, - spk_mix_dict=spk_mix_dict, - gt_spec=input_mel, - infer=True, - infer_speedup=acc, - method=method, - k_step=k_step, - use_tqdm=False) - if start_frame > 0: - out_mel = out_mel[:, start_frame:, :] - f0 = f0[:, start_frame:, :] - output = self.vocoder.infer(out_mel, f0) - if start_frame > 0: - output = F.pad(output, (start_frame * self.vocoder.vocoder_hop_size, 0)) - return output - - def infer(self, audio, f0, hubert, volume, acc=1, spk_id=1, k_step=0, method='pndm', silence_front=0, - use_silence=False, spk_mix_dict=None): - start_frame = int(silence_front * self.vocoder.vocoder_sample_rate / self.vocoder.vocoder_hop_size) - if use_silence: - audio = audio[:, start_frame * self.vocoder.vocoder_hop_size:] - f0 = f0[:, start_frame:, :] - hubert = hubert[:, start_frame:, :] - volume = volume[:, start_frame:, :] - _start_frame = 0 - else: - _start_frame = start_frame - audio = self.__call__(audio, f0, hubert, volume, acc=acc, spk_id=spk_id, k_step=k_step, - method=method, spk_mix_dict=spk_mix_dict, start_frame=_start_frame) - if use_silence: - if start_frame > 0: - audio = F.pad(audio, (start_frame * self.vocoder.vocoder_hop_size, 0)) - return audio diff --git a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vencoder/dphubert/model.py b/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vencoder/dphubert/model.py deleted file mode 100644 index 
348ede2c3edc3e5588ee75760085dee9eafd9d68..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vencoder/dphubert/model.py +++ /dev/null @@ -1,966 +0,0 @@ -"""Speech SSL models supporting pruning. - -Originally from: -https://github.com/pytorch/audio/blob/main/torchaudio/models/wav2vec2/model.py - -""" - -import math -from typing import List, Optional, Tuple - -import torch -import torch.nn.functional as F -from torch import Tensor -from torch.nn import Module - -from . import components - - -class Wav2Vec2Model(Module): - """Acoustic model used in *wav2vec 2.0* :cite:`baevski2020wav2vec`. - - Note: - To build the model, please use one of the factory functions. - :py:func:`wav2vec2_model`, :py:func:`wav2vec2_base`, :py:func:`wav2vec2_large`, - :py:func:`wav2vec2_large_lv60k`, :py:func:`hubert_base`, :py:func:`hubert_large`, - and :py:func:`hubert_xlarge`. - - See Also: - * :class:`torchaudio.pipelines.Wav2Vec2Bundle`: Pretrained models (without fine-tuning) - * :class:`torchaudio.pipelines.Wav2Vec2ASRBundle`: ASR pipelines with pretrained models. - - Args: - feature_extractor (torch.nn.Module): - Feature extractor that extracts feature vectors from raw audio Tensor. - - encoder (torch.nn.Module): - Encoder that converts the audio features into the sequence of probability - distribution (in negative log-likelihood) over labels. - - aux (torch.nn.Module or None, optional): - Auxiliary module. If provided, the output from encoder is passed to this module. - """ # noqa: E501 - - def __init__( - self, - normalize_waveform: bool, - feature_extractor: Module, - encoder: Module, - aux: Optional[Module] = None, - ): - super().__init__() - self.normalize_waveform = normalize_waveform - self.feature_extractor = feature_extractor - self.encoder = encoder - self.aux = aux - - @torch.jit.export - def extract_features( - self, - waveforms: Tensor, - lengths: Optional[Tensor] = None, - num_layers: Optional[int] = None, - ) -> Tuple[List[Tensor], Optional[Tensor]]: - """Extract feature vectors from raw waveforms - - This returns the list of outputs from the intermediate layers of - transformer block in encoder. - - Args: - waveforms (Tensor): Audio tensor of shape `(batch, frames)`. - lengths (Tensor or None, optional): - Indicates the valid length of each audio in the batch. - Shape: `(batch, )`. - When the ``waveforms`` contains audios with different durations, - by providing ``lengths`` argument, the model will compute - the corresponding valid output lengths and apply proper mask in - transformer attention layer. - If ``None``, it is assumed that the entire audio waveform - length is valid. - num_layers (int or None, optional): - If given, limit the number of intermediate layers to go through. - Providing `1` will stop the computation after going through one - intermediate layers. If not given, the outputs from all the - intermediate layers are returned. - - Returns: - (List[Tensor], Optional[Tensor]): - List of Tensors - Features from requested layers. - Each Tensor is of shape: `(batch, time frame, feature dimension)` - Tensor or None - If ``lengths`` argument was provided, a Tensor of shape `(batch, )` - is returned. - It indicates the valid length in time axis of each feature Tensor. 
- """ - if self.normalize_waveform: - if lengths is not None: - waveforms = [ - F.layer_norm(wave[:length], (length,)) for wave, length in zip(waveforms, lengths) - ] - waveforms = torch.nn.utils.rnn.pad_sequence(waveforms, batch_first=True) - else: - waveforms = F.layer_norm(waveforms, waveforms.shape[-1:]) - - x, lengths = self.feature_extractor(waveforms, lengths) - x = self.encoder.extract_features(x, lengths, num_layers) # (num_layers+1,), including the input - return x, lengths - - def get_num_params(self): - """Calculate the current size.""" - feature_extractor_size, encoder_in_features = self.feature_extractor.get_num_params_and_final_out_channels() - encoder_size = self.encoder.get_num_params(encoder_in_features) - return feature_extractor_size + encoder_size - - def prune(self): - self.eval() # must be in eval mode - conv_config, conv_out_index = self.feature_extractor.prune() # [(output_channel, kernel_size, stride), ...] - transformer_config = self.encoder.prune(conv_out_index) # NOTE: this is a defaultdict(list) - use_attention = transformer_config["use_attention"] - use_feed_forward = transformer_config["use_feed_forward"] - num_heads = transformer_config["num_heads"] # can be [] - remaining_heads = transformer_config["remaining_heads"] # can be [] - ff_interm_features = transformer_config["ff_interm_features"] - - return conv_config, use_attention, use_feed_forward, num_heads, remaining_heads, ff_interm_features - - def forward( - self, - waveforms: Tensor, - lengths: Optional[Tensor] = None, - ) -> Tuple[Tensor, Optional[Tensor]]: - """Compute the sequence of probability distribution over labels. - - Args: - waveforms (Tensor): Audio tensor of shape `(batch, frames)`. - lengths (Tensor or None, optional): - Indicates the valid length of each audio in the batch. - Shape: `(batch, )`. - When the ``waveforms`` contains audios with different durations, - by providing ``lengths`` argument, the model will compute - the corresponding valid output lengths and apply proper mask in - transformer attention layer. - If ``None``, it is assumed that all the audio in ``waveforms`` - have valid length. Default: ``None``. - - Returns: - (Tensor, Optional[Tensor]): - Tensor - The sequences of probability distribution (in logit) over labels. - Shape: `(batch, frames, num labels)`. - Tensor or None - If ``lengths`` argument was provided, a Tensor of shape `(batch, )` - is returned. - It indicates the valid length in time axis of the output Tensor. 
- """ - if self.normalize_waveform: - if lengths is not None: - waveforms = [ - F.layer_norm(wave[:length], (length,)) for wave, length in zip(waveforms, lengths) - ] - waveforms = torch.nn.utils.rnn.pad_sequence(waveforms, batch_first=True) - else: - waveforms = F.layer_norm(waveforms, waveforms.shape[-1:]) - - x, lengths = self.feature_extractor(waveforms, lengths) - x = self.encoder(x, lengths) - if self.aux is not None: - x = self.aux(x) - return x, lengths - - -def wav2vec2_model(**configs) -> Wav2Vec2Model: - """Wraps the original wav2vec2_model and wavlm_model.""" - - if "encoder_remaining_heads" in configs: - return wavlm_model(**configs) - - return wav2vec2_model_original(**configs) - - -def wav2vec2_model_original( - extractor_mode: str, - extractor_conv_layer_config: Optional[List[Tuple[int, int, int]]], - extractor_conv_bias: bool, - encoder_embed_dim: int, - encoder_projection_dropout: float, - encoder_pos_conv_kernel: int, - encoder_pos_conv_groups: int, - encoder_num_layers: int, - encoder_use_attention: List[bool], - encoder_use_feed_forward: List[bool], - encoder_num_heads: List[int], - encoder_head_dim: int, - encoder_attention_dropout: float, - encoder_ff_interm_features: List[int], - encoder_ff_interm_dropout: float, - encoder_dropout: float, - encoder_layer_norm_first: bool, - encoder_layer_drop: float, - aux_num_out: Optional[int], - normalize_waveform: bool, - extractor_prune_conv_channels: bool = False, - encoder_prune_attention_heads: bool = False, - encoder_prune_attention_layer: bool = False, - encoder_prune_feed_forward_intermediate: bool = False, - encoder_prune_feed_forward_layer: bool = False, -) -> Wav2Vec2Model: - """Builds custom :class:`~torchaudio.models.Wav2Vec2Model`. - - Note: - The "feature extractor" below corresponds to - `ConvFeatureExtractionModel `__ - in the original ``fairseq`` implementation. - This is referred as "(convolutional) feature encoder" in the *wav2vec 2.0* - :cite:`baevski2020wav2vec` paper. - - The "encoder" below corresponds to `TransformerEncoder `__, - and this is referred as "Transformer" in the paper. - - Args: - extractor_mode (str): Operation mode of feature extractor. - Valid values are ``"group_norm"`` or ``"layer_norm"``. - If ``"group_norm"``, then a single normalization is applied - in the first convolution block. Otherwise, all the convolution - blocks will have layer normalization. - - This option corresponds to ``extractor_mode`` from ``fairseq``. - extractor_conv_layer_config (list of integer tuples or None): - Configuration of convolution layers in feature extractor. - List of convolution configuration, - i.e. ``[(output_channel, kernel_size, stride), ...]`` - - If ``None`` is provided, then the following default value is used. - - .. code-block:: python - - [ - (512, 10, 5), - (512, 3, 2), - (512, 3, 2), - (512, 3, 2), - (512, 3, 2), - (512, 2, 2), - (512, 2, 2), - ] - - This option corresponds to ``conv_feature_layers`` from ``fairseq``. - - extractor_conv_bias (bool): - Whether to include bias term to each convolution operation. - - This option corresponds to ``conv_bias`` from ``fairseq``. - - encoder_embed_dim (int): - The dimension of embedding in encoder. - - This option corresponds to ``encoder_embed_dim`` from ``fairseq``. - - encoder_projection_dropout (float): - The dropout probability applied after the input feature is projected - to ``encoder_embed_dim``. - - This option corresponds to ``dropout_input`` from ``fairseq``. 
- - encoder_pos_conv_kernel (int): - The kernel size of convolutional positional embeddings. - - This option corresponds to ``conv_pos`` from ``fairseq``. - - encoder_pos_conv_groups (int): - The number of groups of convolutional positional embeddings. - - This option corresponds to ``conv_pos_groups`` from ``fairseq``. - - encoder_num_layers (int): - The number of self attention layers in transformer block. - - This option corresponds to ``encoder_layers`` from ``fairseq``. - - encoder_num_heads (int): - The number of heads in self attention layers. - - This option corresponds to ``encoder_attention_heads`` from ``fairseq``. - - encoder_attention_dropout (float): - The dropout probability applied after softmax in self-attention layer. - - This option corresponds to ``attention_dropout`` from ``fairseq``. - - encoder_ff_interm_features (int): - The dimension of hidden features in feed forward layer. - - This option corresponds to ``encoder_ffn_embed_dim`` from ``fairseq``. - - encoder_ff_interm_dropout (float): - The dropout probability applied in feedforward layer. - - This option correspinds to ``activation_dropout`` from ``fairseq``. - - encoder_dropout (float): - The dropout probability applied at the end of feed forward layer. - - This option corresponds to ``dropout`` from ``fairseq``. - - encoder_layer_norm_first (bool): - Control the order of layer norm in transformer layer and each encoder layer. - If True, in transformer layer, layer norm is applied before features are fed - to encoder layers. In encoder layer, two layer norms are applied before and after - self attention. - If False, in transformer layer, layer norm is applied after features are fed - to encoder layers. In encoder layer, two layer norms are applied after self - attention, before and after feed forward. - - This option corresponds to ``layer_norm_first`` from ``fairseq``. - - encoder_layer_drop (float): - Probability to drop each encoder layer during training. - - This option corresponds to ``layerdrop`` from ``fairseq``. - - aux_num_out (int or None): - When provided, attach an extra linear layer on top of encoder, which can be - used for fine-tuning. - - Returns: - Wav2Vec2Model: - The resulting model. 
- """ # noqa: E501 - if extractor_conv_layer_config is None: - extractor_conv_layer_config = [(512, 10, 5)] + [(512, 3, 2)] * 4 + [(512, 2, 2)] * 2 - - feature_extractor = components._get_feature_extractor( - extractor_mode, extractor_conv_layer_config, extractor_conv_bias, - prune_conv_channels=extractor_prune_conv_channels, - ) - encoder = components._get_encoder( - in_features=extractor_conv_layer_config[-1][0], - embed_dim=encoder_embed_dim, - dropout_input=encoder_projection_dropout, - pos_conv_kernel=encoder_pos_conv_kernel, - pos_conv_groups=encoder_pos_conv_groups, - num_layers=encoder_num_layers, - use_attention=encoder_use_attention, - use_feed_forward=encoder_use_feed_forward, - num_heads=encoder_num_heads, - head_dim=encoder_head_dim, - attention_dropout=encoder_attention_dropout, - ff_interm_features=encoder_ff_interm_features, - ff_interm_dropout=encoder_ff_interm_dropout, - dropout=encoder_dropout, - layer_norm_first=encoder_layer_norm_first, - layer_drop=encoder_layer_drop, - prune_attention_heads=encoder_prune_attention_heads, - prune_attention_layer=encoder_prune_attention_layer, - prune_feed_forward_intermediate=encoder_prune_feed_forward_intermediate, - prune_feed_forward_layer=encoder_prune_feed_forward_layer, - ) - aux = None - if aux_num_out is not None: - aux = torch.nn.Linear(in_features=encoder_embed_dim, out_features=aux_num_out) - return Wav2Vec2Model(normalize_waveform, feature_extractor, encoder, aux) - - -def wav2vec2_base( - encoder_projection_dropout: float = 0.1, - encoder_attention_dropout: float = 0.1, - encoder_ff_interm_dropout: float = 0.1, - encoder_dropout: float = 0.1, - encoder_layer_drop: float = 0.1, - aux_num_out: Optional[int] = None, - extractor_prune_conv_channels: bool = False, - encoder_prune_attention_heads: bool = False, - encoder_prune_attention_layer: bool = False, - encoder_prune_feed_forward_intermediate: bool = False, - encoder_prune_feed_forward_layer: bool = False, -) -> Wav2Vec2Model: - """Builds "base" :class:`~torchaudio.models.Wav2Vec2Model` from *wav2vec 2.0* :cite:`baevski2020wav2vec` - - Args: - encoder_projection_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_attention_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_ff_interm_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_layer_drop (float): - See :py:func:`wav2vec2_model`. - aux_num_out (int or None, optional): - See :py:func:`wav2vec2_model`. - - Returns: - Wav2Vec2Model: - The resulting model. 
- """ # noqa: E501 - return wav2vec2_model( - extractor_mode="group_norm", - extractor_conv_layer_config=None, - extractor_conv_bias=False, - encoder_embed_dim=768, - encoder_projection_dropout=encoder_projection_dropout, - encoder_pos_conv_kernel=128, - encoder_pos_conv_groups=16, - encoder_num_layers=12, - encoder_num_heads=12, - encoder_attention_dropout=encoder_attention_dropout, - encoder_ff_interm_features=3072, - encoder_ff_interm_dropout=encoder_ff_interm_dropout, - encoder_dropout=encoder_dropout, - encoder_layer_norm_first=False, - encoder_layer_drop=encoder_layer_drop, - aux_num_out=aux_num_out, - extractor_prune_conv_channels=extractor_prune_conv_channels, - encoder_prune_attention_heads=encoder_prune_attention_heads, - encoder_prune_attention_layer=encoder_prune_attention_layer, - encoder_prune_feed_forward_intermediate=encoder_prune_feed_forward_intermediate, - encoder_prune_feed_forward_layer=encoder_prune_feed_forward_layer, - ) - - -def wav2vec2_large( - encoder_projection_dropout: float = 0.1, - encoder_attention_dropout: float = 0.1, - encoder_ff_interm_dropout: float = 0.1, - encoder_dropout: float = 0.1, - encoder_layer_drop: float = 0.1, - aux_num_out: Optional[int] = None, - extractor_prune_conv_channels: bool = False, - encoder_prune_attention_heads: bool = False, - encoder_prune_attention_layer: bool = False, - encoder_prune_feed_forward_intermediate: bool = False, - encoder_prune_feed_forward_layer: bool = False, -) -> Wav2Vec2Model: - """Builds "large" :class:`~torchaudio.models.Wav2Vec2Model` from *wav2vec 2.0* :cite:`baevski2020wav2vec` - - Args: - encoder_projection_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_attention_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_ff_interm_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_layer_drop (float): - See :py:func:`wav2vec2_model`. - aux_num_out (int or None, optional): - See :py:func:`wav2vec2_model`. - - Returns: - Wav2Vec2Model: - The resulting model. 
- """ # noqa: E501 - return wav2vec2_model( - extractor_mode="group_norm", - extractor_conv_layer_config=None, - extractor_conv_bias=False, - encoder_embed_dim=1024, - encoder_projection_dropout=encoder_projection_dropout, - encoder_pos_conv_kernel=128, - encoder_pos_conv_groups=16, - encoder_num_layers=24, - encoder_num_heads=16, - encoder_attention_dropout=encoder_attention_dropout, - encoder_ff_interm_features=4096, - encoder_ff_interm_dropout=encoder_ff_interm_dropout, - encoder_dropout=encoder_dropout, - encoder_layer_norm_first=False, - encoder_layer_drop=encoder_layer_drop, - aux_num_out=aux_num_out, - extractor_prune_conv_channels=extractor_prune_conv_channels, - encoder_prune_attention_heads=encoder_prune_attention_heads, - encoder_prune_attention_layer=encoder_prune_attention_layer, - encoder_prune_feed_forward_intermediate=encoder_prune_feed_forward_intermediate, - encoder_prune_feed_forward_layer=encoder_prune_feed_forward_layer, - ) - - -def wav2vec2_large_lv60k( - encoder_projection_dropout: float = 0.1, - encoder_attention_dropout: float = 0.0, - encoder_ff_interm_dropout: float = 0.1, - encoder_dropout: float = 0.0, - encoder_layer_drop: float = 0.1, - aux_num_out: Optional[int] = None, - extractor_prune_conv_channels: bool = False, - encoder_prune_attention_heads: bool = False, - encoder_prune_attention_layer: bool = False, - encoder_prune_feed_forward_intermediate: bool = False, - encoder_prune_feed_forward_layer: bool = False, -) -> Wav2Vec2Model: - """Builds "large lv-60k" :class:`~torchaudio.models.Wav2Vec2Model` from *wav2vec 2.0* :cite:`baevski2020wav2vec` - - Args: - encoder_projection_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_attention_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_ff_interm_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_layer_drop (float): - See :py:func:`wav2vec2_model`. - aux_num_out (int or None, optional): - See :py:func:`wav2vec2_model`. - - Returns: - Wav2Vec2Model: - The resulting model. 
- """ # noqa: E501 - return wav2vec2_model( - extractor_mode="layer_norm", - extractor_conv_layer_config=None, - extractor_conv_bias=True, - encoder_embed_dim=1024, - encoder_projection_dropout=encoder_projection_dropout, - encoder_pos_conv_kernel=128, - encoder_pos_conv_groups=16, - encoder_num_layers=24, - encoder_num_heads=16, - encoder_attention_dropout=encoder_attention_dropout, - encoder_ff_interm_features=4096, - encoder_ff_interm_dropout=encoder_ff_interm_dropout, - encoder_dropout=encoder_dropout, - encoder_layer_norm_first=True, - encoder_layer_drop=encoder_layer_drop, - aux_num_out=aux_num_out, - extractor_prune_conv_channels=extractor_prune_conv_channels, - encoder_prune_attention_heads=encoder_prune_attention_heads, - encoder_prune_attention_layer=encoder_prune_attention_layer, - encoder_prune_feed_forward_intermediate=encoder_prune_feed_forward_intermediate, - encoder_prune_feed_forward_layer=encoder_prune_feed_forward_layer, - ) - - -def hubert_base( - encoder_projection_dropout: float = 0.1, - encoder_attention_dropout: float = 0.1, - encoder_ff_interm_dropout: float = 0.0, - encoder_dropout: float = 0.1, - encoder_layer_drop: float = 0.05, - aux_num_out: Optional[int] = None, - extractor_prune_conv_channels: bool = False, - encoder_prune_attention_heads: bool = False, - encoder_prune_attention_layer: bool = False, - encoder_prune_feed_forward_intermediate: bool = False, - encoder_prune_feed_forward_layer: bool = False, -) -> Wav2Vec2Model: - """Builds "base" :class:`HuBERT ` from *HuBERT* :cite:`hsu2021hubert` - - Args: - encoder_projection_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_attention_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_ff_interm_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_layer_drop (float): - See :py:func:`wav2vec2_model`. - aux_num_out (int or None, optional): - See :py:func:`wav2vec2_model`. - - Returns: - Wav2Vec2Model: - The resulting model. 
- """ # noqa: E501 - return wav2vec2_model( - extractor_mode="group_norm", - extractor_conv_layer_config=None, - extractor_conv_bias=False, - encoder_embed_dim=768, - encoder_projection_dropout=encoder_projection_dropout, - encoder_pos_conv_kernel=128, - encoder_pos_conv_groups=16, - encoder_num_layers=12, - encoder_use_attention=[True] * 12, - encoder_use_feed_forward=[True] * 12, - encoder_num_heads=[12] * 12, - encoder_head_dim=64, - encoder_attention_dropout=encoder_attention_dropout, - encoder_ff_interm_features=[3072] * 12, - encoder_ff_interm_dropout=encoder_ff_interm_dropout, - encoder_dropout=encoder_dropout, - encoder_layer_norm_first=False, - encoder_layer_drop=encoder_layer_drop, - aux_num_out=aux_num_out, - extractor_prune_conv_channels=extractor_prune_conv_channels, - encoder_prune_attention_heads=encoder_prune_attention_heads, - encoder_prune_attention_layer=encoder_prune_attention_layer, - encoder_prune_feed_forward_intermediate=encoder_prune_feed_forward_intermediate, - encoder_prune_feed_forward_layer=encoder_prune_feed_forward_layer, - ) - - -def hubert_large( - encoder_projection_dropout: float = 0.0, - encoder_attention_dropout: float = 0.0, - encoder_ff_interm_dropout: float = 0.0, - encoder_dropout: float = 0.0, - encoder_layer_drop: float = 0.0, - aux_num_out: Optional[int] = None, - extractor_prune_conv_channels: bool = False, - encoder_prune_attention_heads: bool = False, - encoder_prune_attention_layer: bool = False, - encoder_prune_feed_forward_intermediate: bool = False, - encoder_prune_feed_forward_layer: bool = False, -) -> Wav2Vec2Model: - """Builds "large" :class:`HuBERT ` from *HuBERT* :cite:`hsu2021hubert` - - Args: - encoder_projection_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_attention_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_ff_interm_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_layer_drop (float): - See :py:func:`wav2vec2_model`. - aux_num_out (int or None, optional): - See :py:func:`wav2vec2_model`. - - Returns: - Wav2Vec2Model: - The resulting model. 
- """ # noqa: E501 - return wav2vec2_model( - extractor_mode="layer_norm", - extractor_conv_layer_config=None, - extractor_conv_bias=False, - encoder_embed_dim=1024, - encoder_projection_dropout=encoder_projection_dropout, - encoder_pos_conv_kernel=128, - encoder_pos_conv_groups=16, - encoder_num_layers=24, - encoder_num_heads=16, - encoder_attention_dropout=encoder_attention_dropout, - encoder_ff_interm_features=4096, - encoder_ff_interm_dropout=encoder_ff_interm_dropout, - encoder_dropout=encoder_dropout, - encoder_layer_norm_first=True, - encoder_layer_drop=encoder_layer_drop, - aux_num_out=aux_num_out, - extractor_prune_conv_channels=extractor_prune_conv_channels, - encoder_prune_attention_heads=encoder_prune_attention_heads, - encoder_prune_attention_layer=encoder_prune_attention_layer, - encoder_prune_feed_forward_intermediate=encoder_prune_feed_forward_intermediate, - encoder_prune_feed_forward_layer=encoder_prune_feed_forward_layer, - ) - - -def hubert_xlarge( - encoder_projection_dropout: float = 0.0, - encoder_attention_dropout: float = 0.0, - encoder_ff_interm_dropout: float = 0.0, - encoder_dropout: float = 0.0, - encoder_layer_drop: float = 0.0, - aux_num_out: Optional[int] = None, - extractor_prune_conv_channels: bool = False, - encoder_prune_attention_heads: bool = False, - encoder_prune_attention_layer: bool = False, - encoder_prune_feed_forward_intermediate: bool = False, - encoder_prune_feed_forward_layer: bool = False, -) -> Wav2Vec2Model: - """Builds "extra large" :class:`HuBERT ` from *HuBERT* :cite:`hsu2021hubert` - - Args: - encoder_projection_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_attention_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_ff_interm_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_layer_drop (float): - See :py:func:`wav2vec2_model`. - aux_num_out (int or None, optional): - See :py:func:`wav2vec2_model`. - - Returns: - Wav2Vec2Model: - The resulting model. - """ # noqa: E501 - return wav2vec2_model( - extractor_mode="layer_norm", - extractor_conv_layer_config=None, - extractor_conv_bias=False, - encoder_embed_dim=1280, - encoder_projection_dropout=encoder_projection_dropout, - encoder_pos_conv_kernel=128, - encoder_pos_conv_groups=16, - encoder_num_layers=48, - encoder_num_heads=16, - encoder_attention_dropout=encoder_attention_dropout, - encoder_ff_interm_features=5120, - encoder_ff_interm_dropout=encoder_ff_interm_dropout, - encoder_dropout=encoder_dropout, - encoder_layer_norm_first=True, - encoder_layer_drop=encoder_layer_drop, - aux_num_out=aux_num_out, - extractor_prune_conv_channels=extractor_prune_conv_channels, - encoder_prune_attention_heads=encoder_prune_attention_heads, - encoder_prune_attention_layer=encoder_prune_attention_layer, - encoder_prune_feed_forward_intermediate=encoder_prune_feed_forward_intermediate, - encoder_prune_feed_forward_layer=encoder_prune_feed_forward_layer, - ) - - -def _init_hubert_pretrain_model(module): - if isinstance(module, components.LayerNorm): - torch.nn.init.kaiming_normal_(module.conv.weight) - elif isinstance(module, components.ConvolutionalPositionalEmbedding): - # normalize the weight to normal distribution. 
- std = math.sqrt(4.0 / (module.embed_dim * module.kernel_size)) - torch.nn.init.normal_(module.conv.weight, mean=0.0, std=std) - torch.nn.init.constant_(module.conv.bias, 0.0) - elif isinstance(module, components.SelfAttention): - # normalize the query, key, value, and out_proj parameters in self attention module. - torch.nn.init.xavier_uniform_(module.k_proj.weight, gain=1 / math.sqrt(2)) - torch.nn.init.xavier_uniform_(module.v_proj.weight, gain=1 / math.sqrt(2)) - torch.nn.init.xavier_uniform_(module.q_proj.weight, gain=1 / math.sqrt(2)) - torch.nn.init.xavier_uniform_(module.out_proj.weight) - torch.nn.init.constant_(module.out_proj.bias, 0.0) - elif isinstance(module, components.Transformer): - module.apply(components._init_transformer_params) - else: - pass - - -def wavlm_model( - extractor_mode: str, - extractor_conv_layer_config: Optional[List[Tuple[int, int, int]]], - extractor_conv_bias: bool, - encoder_embed_dim: int, - encoder_projection_dropout: float, - encoder_pos_conv_kernel: int, - encoder_pos_conv_groups: int, - encoder_num_layers: int, - encoder_use_attention: List[bool], - encoder_use_feed_forward: List[bool], - encoder_total_num_heads: List[int], - encoder_remaining_heads: List[List[int]], - encoder_num_buckets: int, - encoder_max_distance: int, - encoder_attention_dropout: float, - encoder_ff_interm_features: List[int], - encoder_ff_interm_dropout: float, - encoder_dropout: float, - encoder_layer_norm_first: bool, - encoder_layer_drop: float, - aux_num_out: Optional[int], - normalize_waveform: bool, - extractor_prune_conv_channels: bool = False, - encoder_prune_attention_heads: bool = False, - encoder_prune_attention_layer: bool = False, - encoder_prune_feed_forward_intermediate: bool = False, - encoder_prune_feed_forward_layer: bool = False, -) -> Wav2Vec2Model: - """Builds custom WaveLM model :cite:`chen2022wavlm`. The architecture is compatible - with Wav2Vec2 model :cite:`baevski2020wav2vec`, and so the output object is - :class:`~torchaudio.models.Wav2Vec2Model`. Most of the arguments have the same meaning - as in :py:func:`wav2vec2_model` so please refer there for documentation. - - Args: - extractor_mode (str): Operation mode of feature extractor. - See :py:func:`wav2vec2_model`. - - extractor_conv_layer_config (list of integer tuples or None): - See :py:func:`wav2vec2_model`. - - extractor_conv_bias (bool): - See :py:func:`wav2vec2_model`. - - encoder_embed_dim (int): - See :py:func:`wav2vec2_model`. - - encoder_projection_dropout (float): - See :py:func:`wav2vec2_model`. - - encoder_pos_conv_kernel (int): - See :py:func:`wav2vec2_model`. - - encoder_pos_conv_groups (int): - See :py:func:`wav2vec2_model`. - - encoder_num_layers (int): - See :py:func:`wav2vec2_model`. - - encoder_num_heads (int): - See :py:func:`wav2vec2_model`. - - encoder_num_buckets (int): - Number of buckets for relative position embedding. - encoder_max_distance (int): - Maximum distance for relative position embedding. - - encoder_attention_dropout (float): - See :py:func:`wav2vec2_model`. - - encoder_ff_interm_features (int): - See :py:func:`wav2vec2_model`. - - encoder_ff_interm_dropout (float): - See :py:func:`wav2vec2_model`. - - encoder_dropout (float): - See :py:func:`wav2vec2_model`. - - encoder_layer_norm_first (bool): - See :py:func:`wav2vec2_model`. - - encoder_layer_drop (float): - See :py:func:`wav2vec2_model`. - - aux_num_out (int or None): - See :py:func:`wav2vec2_model`. - - Returns: - Wav2Vec2Model: - The resulting model. 
- """ - if extractor_conv_layer_config is None: - extractor_conv_layer_config = [(512, 10, 5)] + [(512, 3, 2)] * 4 + [(512, 2, 2)] * 2 - - feature_extractor = components._get_feature_extractor( - extractor_mode, extractor_conv_layer_config, extractor_conv_bias, - prune_conv_channels=extractor_prune_conv_channels, - ) - encoder = components._get_wavlm_encoder( - in_features=extractor_conv_layer_config[-1][0], - embed_dim=encoder_embed_dim, - dropout_input=encoder_projection_dropout, - pos_conv_kernel=encoder_pos_conv_kernel, - pos_conv_groups=encoder_pos_conv_groups, - num_layers=encoder_num_layers, - use_attention=encoder_use_attention, - use_feed_forward=encoder_use_feed_forward, - total_num_heads=encoder_total_num_heads, - remaining_heads=encoder_remaining_heads, - num_buckets=encoder_num_buckets, - max_distance=encoder_max_distance, - attention_dropout=encoder_attention_dropout, - ff_interm_features=encoder_ff_interm_features, - ff_interm_dropout=encoder_ff_interm_dropout, - dropout=encoder_dropout, - layer_norm_first=encoder_layer_norm_first, - layer_drop=encoder_layer_drop, - prune_attention_heads=encoder_prune_attention_heads, - prune_attention_layer=encoder_prune_attention_layer, - prune_feed_forward_intermediate=encoder_prune_feed_forward_intermediate, - prune_feed_forward_layer=encoder_prune_feed_forward_layer, - ) - aux = None - if aux_num_out is not None: - aux = torch.nn.Linear(in_features=encoder_embed_dim, out_features=aux_num_out) - return Wav2Vec2Model(normalize_waveform, feature_extractor, encoder, aux) - - -def wavlm_base( - encoder_projection_dropout: float = 0.1, - encoder_attention_dropout: float = 0.1, - encoder_ff_interm_dropout: float = 0.1, - encoder_dropout: float = 0.1, - encoder_layer_drop: float = 0.1, - aux_num_out: Optional[int] = None, -) -> Wav2Vec2Model: - """Builds "base" WaveLM model :cite:`chen2022wavlm`. The architecture is compatible - with Wav2Vec2 model :cite:`baevski2020wav2vec`, and so the output class is - :class:`~torchaudio.models.Wav2Vec2Model`. - - Args: - encoder_projection_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_attention_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_ff_interm_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_layer_drop (float): - See :py:func:`wav2vec2_model`. - aux_num_out (int, optional): - See :py:func:`wav2vec2_model`. - - Returns: - Wav2Vec2Model: - The resulting model. - """ - return wavlm_model( - extractor_mode="group_norm", - extractor_conv_layer_config=None, - extractor_conv_bias=False, - encoder_embed_dim=768, - encoder_projection_dropout=encoder_projection_dropout, - encoder_pos_conv_kernel=128, - encoder_pos_conv_groups=16, - encoder_num_layers=12, - encoder_num_heads=12, - encoder_num_buckets=320, - encoder_max_distance=800, - encoder_attention_dropout=encoder_attention_dropout, - encoder_ff_interm_features=3072, - encoder_ff_interm_dropout=encoder_ff_interm_dropout, - encoder_dropout=encoder_dropout, - encoder_layer_norm_first=False, - encoder_layer_drop=encoder_layer_drop, - aux_num_out=aux_num_out, - ) - - -def wavlm_large( - encoder_projection_dropout: float = 0.1, - encoder_attention_dropout: float = 0.1, - encoder_ff_interm_dropout: float = 0.0, - encoder_dropout: float = 0.1, - encoder_layer_drop: float = 0.1, - aux_num_out: Optional[int] = None, -) -> Wav2Vec2Model: - """Builds "large" WaveLM model :cite:`chen2022wavlm`. 
The architecture is compatible - with Wav2Vec2 model :cite:`baevski2020wav2vec`, and so the output class is - :class:`~torchaudio.models.Wav2Vec2Model`. - - Args: - encoder_projection_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_attention_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_ff_interm_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_dropout (float): - See :py:func:`wav2vec2_model`. - encoder_layer_drop (float): - See :py:func:`wav2vec2_model`. - aux_num_out (int, optional): - See :py:func:`wav2vec2_model`. - - Returns: - Wav2Vec2Model: - The resulting model. - """ - return wavlm_model( - extractor_mode="layer_norm", - extractor_conv_layer_config=None, - extractor_conv_bias=False, - encoder_embed_dim=1024, - encoder_projection_dropout=encoder_projection_dropout, - encoder_pos_conv_kernel=128, - encoder_pos_conv_groups=16, - encoder_num_layers=24, - encoder_num_heads=16, - encoder_num_buckets=320, - encoder_max_distance=800, - encoder_attention_dropout=encoder_attention_dropout, - encoder_ff_interm_features=4096, - encoder_ff_interm_dropout=encoder_ff_interm_dropout, - encoder_dropout=encoder_dropout, - encoder_layer_norm_first=True, - encoder_layer_drop=encoder_layer_drop, - aux_num_out=aux_num_out, - ) diff --git a/spaces/ysharma/dummy_phtogrd_blocks/app.py b/spaces/ysharma/dummy_phtogrd_blocks/app.py deleted file mode 100644 index 26e5ef2020df05d193c098dc367bba08c5f420ea..0000000000000000000000000000000000000000 --- a/spaces/ysharma/dummy_phtogrd_blocks/app.py +++ /dev/null @@ -1,206 +0,0 @@ -from io import BytesIO -import requests -import gradio as gr -import requests -import torch -from tqdm import tqdm -from PIL import Image, ImageOps -from diffusers import StableDiffusionInpaintPipeline -from torchvision.transforms import ToPILImage -from utils import preprocess, prepare_mask_and_masked_image, recover_image, resize_and_crop - -gr.close_all() -topil = ToPILImage() - -pipe_inpaint = StableDiffusionInpaintPipeline.from_pretrained( - "runwayml/stable-diffusion-inpainting", - revision="fp16", - torch_dtype=torch.float16, - safety_checker=None, -) -pipe_inpaint = pipe_inpaint.to("cuda") - -## Good params for editing that we used all over the paper --> decent quality and speed -GUIDANCE_SCALE = 7.5 -NUM_INFERENCE_STEPS = 100 -DEFAULT_SEED = 1234 - -def pgd(X, targets, model, criterion, eps=0.1, step_size=0.015, iters=40, clamp_min=0, clamp_max=1, mask=None): - X_adv = X.clone().detach() + (torch.rand(*X.shape)*2*eps-eps).cuda() - pbar = tqdm(range(iters)) - for i in pbar: - actual_step_size = step_size - (step_size - step_size / 100) / iters * i - X_adv.requires_grad_(True) - - loss = (model(X_adv).latent_dist.mean - targets).norm() - pbar.set_description(f"Loss {loss.item():.5f} | step size: {actual_step_size:.4}") - - grad, = torch.autograd.grad(loss, [X_adv]) - - X_adv = X_adv - grad.detach().sign() * actual_step_size - X_adv = torch.minimum(torch.maximum(X_adv, X - eps), X + eps) - X_adv.data = torch.clamp(X_adv, min=clamp_min, max=clamp_max) - X_adv.grad = None - - if mask is not None: - X_adv.data *= mask - - return X_adv - -def get_target(): - print("***get_target***") - target_url = 'https://www.rtings.com/images/test-materials/2015/204_Gray_Uniformity.png' - response = requests.get(target_url) - target_image = Image.open(BytesIO(response.content)).convert("RGB") - target_image = target_image.resize((512, 512)) - return target_image - -def immunize_fn(init_image, mask_image): - with torch.autocast('cuda'): - mask, X = 
prepare_mask_and_masked_image(init_image, mask_image) - X = X.half().cuda() - mask = mask.half().cuda() - - targets = pipe_inpaint.vae.encode(preprocess(get_target()).half().cuda()).latent_dist.mean - - adv_X = pgd(X, - targets = targets, - model=pipe_inpaint.vae.encode, - criterion=torch.nn.MSELoss(), - clamp_min=-1, - clamp_max=1, - eps=0.12, - step_size=0.01, - iters=200, - mask=1-mask - ) - - adv_X = (adv_X / 2 + 0.5).clamp(0, 1) - - adv_image = topil(adv_X[0]).convert("RGB") - adv_image = recover_image(adv_image, init_image, mask_image, background=True) - return adv_image - -def run(image, prompt, seed, immunize=False): - if seed == '': - seed = DEFAULT_SEED - else: - seed = int(seed) - torch.manual_seed(seed) - - init_image = Image.fromarray(image['image']) - init_image = resize_and_crop(init_image, (512,512)) - mask_image = ImageOps.invert(Image.fromarray(image['mask']).convert('RGB')) - mask_image = resize_and_crop(mask_image, init_image.size) - - if immunize: - immunized_image = immunize_fn(init_image, mask_image) - - image_edited = pipe_inpaint(prompt=prompt, - image=init_image if not immunize else immunized_image, - mask_image=mask_image, - height = init_image.size[0], - width = init_image.size[1], - eta=1, - guidance_scale=GUIDANCE_SCALE, - num_inference_steps=NUM_INFERENCE_STEPS, - ).images[0] - - image_edited = recover_image(image_edited, init_image, mask_image) - - if immunize: - return [(immunized_image, 'Immunized Image'), (image_edited, 'Edited After Immunization')] - else: - return [(image_edited, 'Edited Image')] - -description='''Official demo of our paper:
      -**Raising the Cost of Malicious AI-Powered Image Editing**
-*Hadi Salman, Alaa Khaddaj,
-Guillaume Leclerc, Andrew Ilyas,
-Aleksander Madry
      -MIT
Paper, Blog post
-'''
-
-
-with gr.Blocks() as demo:
- gr.HTML(value="""

      - Interactive Demo: Immunize your Photos Against AI-powered Malicious Manipulation


      - """) - gr.HTML(description) - gr.HTML('''GitHub''') - gr.HTML('''Below you can test our (encoder attack) immunization method for making images resistant to manipulation by Stable Diffusion. - This immunization process forces the model to perform unrealistic edits.
      - This is a research project and is not production-ready. See Section 5 in our paper for discussion on its limitations. - ''') - with gr.Accordion(label='Click for demo steps:', open=False): - gr.HTML(''' - - Upload an image (or select from the below examples!) - - Mask (using the drawing tool) the parts of the image you want to maintain unedited (e.g., faces of people) - - Add a prompt to edit the image accordingly (see examples below) - - Play with the seed and click submit until you get a realistic edit that you are happy with (or use default seeds below) - - Now let's immunize your image and try again! - - Click on the "immunize" button, then submit. - - You will get the immunized image (which looks identical to the original one) and the edited image, which is now hopefully unrealistic! - ''') - - with gr.Row(): - with gr.Column(): - imgmask = gr.ImageMask(label='Drawing tool to mask regions you want to keep, e.g. faces') - prompt = gr.Textbox(label='Prompt', placeholder='A photo of a man in a wedding') - seed = gr.Textbox(label='Seed (Change to get different edits!)', placeholder=str(DEFAULT_SEED), visible=True) - immunize = gr.Checkbox(label='Immunize', value=False) - b1 = gr.Button('Submit') - with gr.Column(): - genimages = gr.Gallery(label="Generated images", - show_label=False, - elem_id="gallery").style(grid=[1,2], height="auto") - b1.click(run, [imgmask, prompt, seed, immunize], [genimages]) - -"""demo = gr.Interface(fn=run, - inputs=[ - gr.ImageMask(label='Drawing tool to mask regions you want to keep, e.g. faces'), - gr.Textbox(label='Prompt', placeholder='A photo of a man in a wedding'), - gr.Textbox(label='Seed (Change to get different edits!)', placeholder=str(DEFAULT_SEED), visible=True), - gr.Checkbox(label='Immunize', value=False), - ], - cache_examples=False, - outputs=[gr.Gallery( - label="Generated images", - show_label=False, - elem_id="gallery").style(grid=[1,2], height="auto")], - examples=[ - ['./images/hadi_and_trevor.jpg', 'man attending a wedding', '329357'], - ['./images/trevor_2.jpg', 'two men in prison', '329357'], - ['./images/elon_2.jpg', 'man in a metro station', '214213'], - ], - examples_per_page=20, - allow_flagging='never', - title="Interactive Demo: Immunize your Photos Against AI-powered Malicious Manipulation", - description='''Official demo of our paper:
      - **Raising the Cost of Malicious AI-Powered Image Editing**
      - *[Hadi Salman](https://twitter.com/hadisalmanX)\*, [Alaa Khaddaj](https://twitter.com/Alaa_Khaddaj)\*, [Guillaume Leclerc](https://twitter.com/gpoleclerc)\*, [Andrew Ilyas](https://twitter.com/andrew_ilyas), [Aleksander Madry](https://twitter.com/aleks_madry)*
- MIT   [Paper](https://arxiv.org/abs/2302.06588)
-   [Blog post](https://gradientscience.org/photoguard/)
-   [![](https://badgen.net/badge/icon/GitHub?icon=github&label)](https://github.com/MadryLab/photoguard)
-
- Below you can test our (encoder attack) immunization method for making images resistant to manipulation by Stable Diffusion. This immunization process forces the model to perform unrealistic edits.
-
-**This is a research project and is not production-ready. See Section 5 in our paper for discussion on its limitations.**
-
-Click for demo steps:
-
-+ Upload an image (or select from the below examples!)
-+ Mask (using the drawing tool) the parts of the image you want to maintain unedited (e.g., faces of people)
-+ Add a prompt to edit the image accordingly (see examples below)
-+ Play with the seed and click submit until you get a realistic edit that you are happy with (or use default seeds below)
-
-Now let's immunize your image and try again!
-+ Click on the "immunize" button, then submit.
-+ You will get the immunized image (which looks identical to the original one) and the edited image, which is now hopefully unrealistic!
-
      - ''', - ) -""" -# demo.launch() -demo.launch() #server_name='0.0.0.0', share=False, server_port=7860, inline=False) \ No newline at end of file diff --git a/spaces/zhan66/vits-simple-api/utils/sentence.py b/spaces/zhan66/vits-simple-api/utils/sentence.py deleted file mode 100644 index 11330c10de4bb7a8cbc7db459acb503b4b251ad7..0000000000000000000000000000000000000000 --- a/spaces/zhan66/vits-simple-api/utils/sentence.py +++ /dev/null @@ -1,91 +0,0 @@ -import regex as re - -from logger import logger -from utils.data_utils import check_is_none -from utils.classify_language import classify_language - - -def markup_language_type(text: str, target_languages: list = None) -> str: - pattern = r'[\!\"\#\$\%\&\'\(\)\*\+\,\-\.\/\:\;\<\>\=\?\@\[\]\{\}\\\\\^\_\`' \ - r'\!?。"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」' \ - r'『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘\'\‛\“\”\„\‟…‧﹏.]+' - sentences = re.split(pattern, text) - - pre_lang = "" - p = 0 - - for sentence in sentences: - - if check_is_none(sentence): continue - - lang = classify_language(sentence, target_languages) - - if pre_lang == "": - text = text[:p] + text[p:].replace(sentence, f"[{lang.upper()}]{sentence}", 1) - p += len(f"[{lang.upper()}]") - elif pre_lang != lang: - text = text[:p] + text[p:].replace(sentence, f"[{pre_lang.upper()}][{lang.upper()}]{sentence}", 1) - p += len(f"[{pre_lang.upper()}][{lang.upper()}]") - pre_lang = lang - p += text[p:].index(sentence) + len(sentence) - text += f"[{pre_lang.upper()}]" - - return text - - -def cut(text: str, max: int) -> list: - pattern = r'[!(),—+\-.:;??。,、;:]+' - sentences = re.split(pattern, text) - discarded_chars = re.findall(pattern, text) - - sentence_list, count, p = [], 0, 0 - - # 按被分割的符号遍历 - for i, discarded_chars in enumerate(discarded_chars): - count += len(sentences[i]) + len(discarded_chars) - if count >= max: - sentence_list.append(text[p:p + count].strip()) - p += count - count = 0 - - # 加入最后剩余的文本 - if p < len(text): - sentence_list.append(text[p:]) - - return sentence_list - - -def sentence_split_and_markup(text, max=50, lang="auto", speaker_lang=None): - # 如果该speaker只支持一种语言 - if speaker_lang is not None and len(speaker_lang) == 1: - if lang.upper() not in ["AUTO", "MIX"] and lang.lower() != speaker_lang[0]: - logger.debug( - f"lang \"{lang}\" is not in speaker_lang {speaker_lang},automatically set lang={speaker_lang[0]}") - lang = speaker_lang[0] - - sentence_list = [] - if lang.upper() != "MIX": - if max <= 0: - sentence_list.append( - markup_language_type(text, - speaker_lang) if lang.upper() == "AUTO" else f"[{lang.upper()}]{text}[{lang.upper()}]") - else: - for i in cut(text, max): - if check_is_none(i): continue - sentence_list.append( - markup_language_type(i, - speaker_lang) if lang.upper() == "AUTO" else f"[{lang.upper()}]{i}[{lang.upper()}]") - else: - sentence_list.append(text) - - for i in sentence_list: - logger.debug(i) - - return sentence_list - - -if __name__ == '__main__': - text = "这几天心里颇不宁静。今晚在院子里坐着乘凉,忽然想起日日走过的荷塘,在这满月的光里,总该另有一番样子吧。月亮渐渐地升高了,墙外马路上孩子们的欢笑,已经听不见了;妻在屋里拍着闰儿,迷迷糊糊地哼着眠歌。我悄悄地披了大衫,带上门出去。" - print(markup_language_type(text, languages=None)) - print(cut(text, max=50)) - print(sentence_split_and_markup(text, max=50, lang="auto", speaker_lang=None)) diff --git a/spaces/zhan66/vits-simple-api/utils/utils.py b/spaces/zhan66/vits-simple-api/utils/utils.py deleted file mode 100644 index fcca4711767b4c60f49932e00dcd11bbd9bfddea..0000000000000000000000000000000000000000 --- a/spaces/zhan66/vits-simple-api/utils/utils.py +++ /dev/null @@ -1,95 +0,0 @@ -import logging -import os 
-from json import loads -from torch import load, FloatTensor -from numpy import float32 -import librosa - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() - - -def load_checkpoint(checkpoint_path, model): - checkpoint_dict = load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict.get('iteration', None) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logging.info(f"{k} is not in the checkpoint") - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - if iteration: - logging.info(f"Loaded checkpoint '{checkpoint_path}' (iteration {iteration})") - else: - logging.info(f"Loaded checkpoint '{checkpoint_path}'") - return - - -def get_hparams_from_file(config_path): - with open(config_path, 'r', encoding='utf-8') as f: - data = f.read() - config = loads(data) - - hparams = HParams(**config) - return hparams - - -def load_audio_to_torch(full_path, target_sampling_rate): - audio, sampling_rate = librosa.load(full_path, sr=target_sampling_rate, mono=True) - return FloatTensor(audio.astype(float32)) - - -def clean_folder(folder_path): - for filename in os.listdir(folder_path): - file_path = os.path.join(folder_path, filename) - # 如果是文件,则删除文件 - if os.path.isfile(file_path): - os.remove(file_path) - - -# is none -> True, is not none -> False -def check_is_none(s): - return s is None or (isinstance(s, str) and str(s).isspace()) or str(s) == "" - -def save_audio(audio, path): - with open(path,"wb") as f: - f.write(audio) diff --git a/spaces/zhang-wei-jian/docker/node_modules/delegates/test/index.js b/spaces/zhang-wei-jian/docker/node_modules/delegates/test/index.js deleted file mode 100644 index 7b6e3d4df19d908a6eb3f577cc25445d1b479f58..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/delegates/test/index.js +++ /dev/null @@ -1,94 +0,0 @@ - -var assert = require('assert'); -var delegate = require('..'); - -describe('.method(name)', function(){ - it('should delegate methods', function(){ - var obj = {}; - - obj.request = { - foo: function(bar){ - assert(this == obj.request); - return bar; - } - }; - - delegate(obj, 'request').method('foo'); - - obj.foo('something').should.equal('something'); - }) -}) - -describe('.getter(name)', function(){ - it('should delegate getters', function(){ - var obj = {}; - - obj.request = { - get type() { - return 'text/html'; - } - } - - delegate(obj, 'request').getter('type'); - - obj.type.should.equal('text/html'); - }) -}) - -describe('.setter(name)', function(){ - it('should delegate setters', function(){ - var obj = {}; - - obj.request = { - get type() { - return this._type.toUpperCase(); - }, - - set type(val) { - this._type = val; - } - } - - delegate(obj, 
'request').setter('type'); - - obj.type = 'hey'; - obj.request.type.should.equal('HEY'); - }) -}) - -describe('.access(name)', function(){ - it('should delegate getters and setters', function(){ - var obj = {}; - - obj.request = { - get type() { - return this._type.toUpperCase(); - }, - - set type(val) { - this._type = val; - } - } - - delegate(obj, 'request').access('type'); - - obj.type = 'hey'; - obj.type.should.equal('HEY'); - }) -}) - -describe('.fluent(name)', function () { - it('should delegate in a fluent fashion', function () { - var obj = { - settings: { - env: 'development' - } - }; - - delegate(obj, 'settings').fluent('env'); - - obj.env().should.equal('development'); - obj.env('production').should.equal(obj); - obj.settings.env.should.equal('production'); - }) -}) diff --git a/spaces/zhangyd/bingo/src/components/ui/alert-dialog.tsx b/spaces/zhangyd/bingo/src/components/ui/alert-dialog.tsx deleted file mode 100644 index 17fec4d16510328deacc1416569173c97761ef72..0000000000000000000000000000000000000000 --- a/spaces/zhangyd/bingo/src/components/ui/alert-dialog.tsx +++ /dev/null @@ -1,150 +0,0 @@ -'use client' - -import * as React from 'react' -import * as AlertDialogPrimitive from '@radix-ui/react-alert-dialog' - -import { cn } from '@/lib/utils' -import { buttonVariants } from '@/components/ui/button' - -const AlertDialog = AlertDialogPrimitive.Root - -const AlertDialogTrigger = AlertDialogPrimitive.Trigger - -const AlertDialogPortal = ({ - className, - children, - ...props -}: AlertDialogPrimitive.AlertDialogPortalProps) => ( - -
      - {children} -
      -
      -) -AlertDialogPortal.displayName = AlertDialogPrimitive.Portal.displayName - -const AlertDialogOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - -)) -AlertDialogOverlay.displayName = AlertDialogPrimitive.Overlay.displayName - -const AlertDialogContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - - - - -)) -AlertDialogContent.displayName = AlertDialogPrimitive.Content.displayName - -const AlertDialogHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
      -) -AlertDialogHeader.displayName = 'AlertDialogHeader' - -const AlertDialogFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
      -) -AlertDialogFooter.displayName = 'AlertDialogFooter' - -const AlertDialogTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogTitle.displayName = AlertDialogPrimitive.Title.displayName - -const AlertDialogDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogDescription.displayName = - AlertDialogPrimitive.Description.displayName - -const AlertDialogAction = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogAction.displayName = AlertDialogPrimitive.Action.displayName - -const AlertDialogCancel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogCancel.displayName = AlertDialogPrimitive.Cancel.displayName - -export { - AlertDialog, - AlertDialogTrigger, - AlertDialogContent, - AlertDialogHeader, - AlertDialogFooter, - AlertDialogTitle, - AlertDialogDescription, - AlertDialogAction, - AlertDialogCancel -} diff --git a/spaces/zhoujiaxin/zhoujiaxinchatgpt/src/components/providers.tsx b/spaces/zhoujiaxin/zhoujiaxinchatgpt/src/components/providers.tsx deleted file mode 100644 index 892226412d80fe0b05211911b9e245cd22876460..0000000000000000000000000000000000000000 --- a/spaces/zhoujiaxin/zhoujiaxinchatgpt/src/components/providers.tsx +++ /dev/null @@ -1,15 +0,0 @@ -'use client' - -import * as React from 'react' -import { ThemeProvider as NextThemesProvider } from 'next-themes' -import { ThemeProviderProps } from 'next-themes/dist/types' - -import { TooltipProvider } from '@/components/ui/tooltip' - -export function Providers({ children, ...props }: ThemeProviderProps) { - return ( - - {children} - - ) -} diff --git a/spaces/zomehwh/vits-models-ow2/mel_processing.py b/spaces/zomehwh/vits-models-ow2/mel_processing.py deleted file mode 100644 index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000 --- a/spaces/zomehwh/vits-models-ow2/mel_processing.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, 
hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/zouguojun/chatPDF/app.py b/spaces/zouguojun/chatPDF/app.py deleted file mode 100644 index a6dc7983e0ac16c63d966ec66e601b030200728b..0000000000000000000000000000000000000000 --- a/spaces/zouguojun/chatPDF/app.py +++ /dev/null @@ -1,166 +0,0 @@ -import requests -import json -import gradio as gr -# from concurrent.futures import ThreadPoolExecutor -import pdfplumber -import pandas as pd -import time -from cnocr import CnOcr -from sentence_transformers import SentenceTransformer, models, util -word_embedding_model = models.Transformer('uer/sbert-base-chinese-nli', do_lower_case=True) -pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(), pooling_mode='cls') -embedder = SentenceTransformer(modules=[word_embedding_model, pooling_model]) -ocr = CnOcr() -# chat_url = 'https://souljoy-my-api.hf.space/sale' -chat_url = 'https://souljoy-my-api.hf.space/chatpdf' -headers = { - 'Content-Type': 'application/json', -} -# thread_pool_executor = ThreadPoolExecutor(max_workers=4) -history_max_len = 500 -all_max_len = 3000 - - -def get_emb(text): - emb_url = 'https://souljoy-my-api.hf.space/embeddings' - data = {"content": text} - try: - result = requests.post(url=emb_url, - data=json.dumps(data), - headers=headers - ) - return result.json()['data'][0]['embedding'] - except Exception as e: - print('data', data, 'result json', result.json()) - - -def doc_emb(doc: str): - texts = doc.split('\n') - # futures = [] - emb_list = embedder.encode(texts) - # for text in texts: 
- # futures.append(thread_pool_executor.submit(get_emb, text)) - # for f in futures: - # emb_list.append(f.result()) - print('\n'.join(texts)) - return texts, emb_list, gr.Textbox.update(visible=True), gr.Button.update(visible=True), gr.Markdown.update( - value="""操作说明 step 3:PDF解析提交成功! 🙋 可以开始对话啦~"""), gr.Chatbot.update(visible=True) - - -def get_response(msg, bot, doc_text_list, doc_embeddings): - # future = thread_pool_executor.submit(get_emb, msg) - now_len = len(msg) - req_json = {'question': msg} - his_bg = -1 - for i in range(len(bot) - 1, -1, -1): - if now_len + len(bot[i][0]) + len(bot[i][1]) > history_max_len: - break - now_len += len(bot[i][0]) + len(bot[i][1]) - his_bg = i - req_json['history'] = [] if his_bg == -1 else bot[his_bg:] - # query_embedding = future.result() - query_embedding = embedder.encode([msg]) - cos_scores = util.cos_sim(query_embedding, doc_embeddings)[0] - score_index = [[score, index] for score, index in zip(cos_scores, [i for i in range(len(cos_scores))])] - score_index.sort(key=lambda x: x[0], reverse=True) - print('score_index:\n', score_index) - index_set, sub_doc_list = set(), [] - for s_i in score_index: - doc = doc_text_list[s_i[1]] - if now_len + len(doc) > all_max_len: - break - index_set.add(s_i[1]) - now_len += len(doc) - # 可能段落截断错误,所以把上下段也加入进来 - if s_i[1] > 0 and s_i[1] -1 not in index_set: - doc = doc_text_list[s_i[1]-1] - if now_len + len(doc) > all_max_len: - break - index_set.add(s_i[1]-1) - now_len += len(doc) - if s_i[1] + 1 < len(doc_text_list) and s_i[1] + 1 not in index_set: - doc = doc_text_list[s_i[1]+1] - if now_len + len(doc) > all_max_len: - break - index_set.add(s_i[1]+1) - now_len += len(doc) - - index_list = list(index_set) - index_list.sort() - for i in index_list: - sub_doc_list.append(doc_text_list[i]) - req_json['doc'] = '' if len(sub_doc_list) == 0 else '\n'.join(sub_doc_list) - data = {"content": json.dumps(req_json)} - print('data:\n', req_json) - result = requests.post(url=chat_url, - data=json.dumps(data), - headers=headers - ) - res = result.json()['content'] - bot.append([msg, res]) - return bot[max(0, len(bot) - 3):] - - -def up_file(files): - doc_text_list = [] - for idx, file in enumerate(files): - print(file.name) - with pdfplumber.open(file.name) as pdf: - for i in range(len(pdf.pages)): - # 读取PDF文档第i+1页 - page = pdf.pages[i] - res_list = page.extract_text().split('\n')[:-1] - - for j in range(len(page.images)): - # 获取图片的二进制流 - img = page.images[j] - file_name = '{}-{}-{}.png'.format(str(time.time()), str(i), str(j)) - with open(file_name, mode='wb') as f: - f.write(img['stream'].get_data()) - try: - res = ocr.ocr(file_name) - except Exception as e: - res = [] - if len(res) > 0: - res_list.append(' '.join([re['text'] for re in res])) - - tables = page.extract_tables() - for table in tables: - # 第一列当成表头: - df = pd.DataFrame(table[1:], columns=table[0]) - try: - records = json.loads(df.to_json(orient="records", force_ascii=False)) - for rec in records: - res_list.append(json.dumps(rec, ensure_ascii=False)) - except Exception as e: - res_list.append(str(df)) - - doc_text_list += res_list - doc_text_list = [str(text).strip() for text in doc_text_list if len(str(text).strip()) > 0] - print(doc_text_list) - return gr.Textbox.update(value='\n'.join(doc_text_list), visible=True), gr.Button.update( - visible=True), gr.Markdown.update( - value="操作说明 step 2:确认PDF解析结果(可修正),点击“提交解析结果”,随后进行对话") - - -with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - file = gr.File(file_types=['.pdf'], 
label='点击上传PDF,进行解析(支持多文档、表格、OCR)', file_count='multiple') - doc_bu = gr.Button(value='提交解析结果', visible=False) - txt = gr.Textbox(label='PDF解析结果', visible=False) - doc_text_state = gr.State([]) - doc_emb_state = gr.State([]) - with gr.Column(): - md = gr.Markdown("""操作说明 step 1:点击左侧区域,上传PDF,进行解析""") - chat_bot = gr.Chatbot(visible=False) - msg_txt = gr.Textbox(label='消息框', placeholder='输入消息,点击发送', visible=False) - chat_bu = gr.Button(value='发送', visible=False) - - file.change(up_file, [file], [txt, doc_bu, md]) - doc_bu.click(doc_emb, [txt], [doc_text_state, doc_emb_state, msg_txt, chat_bu, md, chat_bot]) - chat_bu.click(get_response, [msg_txt, chat_bot, doc_text_state, doc_emb_state], [chat_bot]) - -if __name__ == "__main__": - demo.queue().launch() - # demo.queue().launch(share=False, server_name='172.22.2.54', server_port=9191) \ No newline at end of file
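The deleted chatPDF app above retrieves context for each question by embedding the question and every parsed passage with the same SentenceTransformer, ranking passages by cosine similarity, and packing the top-scoring ones into a character budget (the app additionally pulls in neighbouring passages to guard against bad paragraph splits). Below is a minimal sketch of just that ranking-and-budgeting step: the embedder construction is copied from the app itself, while the `top_passages` helper, the sample passages, and the 3000-character budget are illustrative assumptions rather than part of the original code.

```python
# Minimal sketch of the passage-retrieval step used by get_response:
# rank parsed passages by cosine similarity to the question and keep
# as many of the best ones as fit in a character budget.
from sentence_transformers import SentenceTransformer, models, util

# Embedder construction taken from the app (CLS pooling over a Chinese SBERT model).
word_embedding_model = models.Transformer('uer/sbert-base-chinese-nli', do_lower_case=True)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(), pooling_mode='cls')
embedder = SentenceTransformer(modules=[word_embedding_model, pooling_model])


def top_passages(question: str, passages: list, budget: int = 3000) -> list:
    """Return the passages most similar to the question, within a length budget (illustrative helper)."""
    query_emb = embedder.encode([question])          # shape (1, dim)
    passage_emb = embedder.encode(passages)          # shape (N, dim)
    scores = util.cos_sim(query_emb, passage_emb)[0] # cosine similarity per passage

    # Rank passages from most to least similar.
    ranked = sorted(range(len(passages)), key=lambda i: float(scores[i]), reverse=True)

    picked, used = set(), 0
    for i in ranked:
        if used + len(passages[i]) > budget:
            break
        picked.add(i)
        used += len(passages[i])

    # Return selected passages in document order so the assembled context reads naturally.
    return [passages[i] for i in sorted(picked)]


if __name__ == "__main__":
    docs = [
        "Gradio lets you build web demos for machine learning models in Python.",
        "pdfplumber can extract text, tables, and images from PDF files.",
    ]
    print(top_passages("How do I read tables from a PDF?", docs))
```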