How to Download Easy Worship 2009 Full Crack for Free
-
Easy Worship 2009 is a software that helps you to create and present worship songs, Bible verses, videos, and other media in your church. It is a powerful and easy-to-use tool that allows you to customize your worship service with different themes, fonts, backgrounds, transitions, and more. You can also use it to display live video feeds, DVDs, PowerPoint presentations, and web pages.
-
If you want to download Easy Worship 2009 full crack for free, you have come to the right place. In this article, we will show you how to download and install Easy Worship 2009 full crack from a reliable source. You will be able to enjoy all the features of the software without paying anything.
The first step is to download Easy Worship 2009 full crack from a trusted website. You can use the link below to download the software from our website. The file is safe and virus-free.
The next step is to disable Windows Defender on your computer. This is a security feature that prevents you from installing cracked software. To do this, follow these steps:
-
-
Go to Settings > Update & Security > Windows Security > Virus & threat protection.
-
Click on Manage settings under Virus & threat protection settings.
-
Turn off the switch for Real-time protection.
-
Click on Yes to confirm.
-
-
Step 3: Extract the File
-
The final step is to extract the file that you downloaded in step 1. To do this, follow these steps:
-
-
Locate the file on your computer using a file manager app.
-
Right-click on the file and select Extract Here or Extract All.
-
Enter the password: fullsoftdl.blogspot.com
-
Wait for the extraction process to finish.
-
Open the extracted folder and run the setup file.
-
Follow the on-screen instructions to complete the installation.
-
Copy the crack file from the crack folder and paste it into the installation directory.
-
Launch the software and enjoy its features.
-
-
Conclusion
-
Easy Worship 2009 full crack is a great software for creating and presenting worship media in your church. You can download it for free from our website and use it without any limitations. You can also update it regularly with new songs, Bible versions, and other resources.
-
We hope this article helped you download and install Easy Worship 2009 full crack for free. If you have any questions or feedback, feel free to leave a comment below.
- ddb901b051
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/ALA - Little Melissa 34 Sets !!! -.md b/spaces/1gistliPinn/ChatGPT4/Examples/ALA - Little Melissa 34 Sets !!! -.md
deleted file mode 100644
index b506ea708d3f653b3e0ba4ac8c4878e27f45786a..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/ALA - Little Melissa 34 Sets !!! -.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
How to Download and Fix steam_api.dll for Resident Evil 6 Reloaded
-
If you are a fan of the Resident Evil series, you may have encountered an error message related to steam_api.dll when trying to launch Resident Evil 6 Reloaded. This file is part of the Steam client application developed by Valve Corporation, which is a digital distribution platform for video games. The file is used by game developers to integrate their games with the Steam platform, specifically to access the Steam API, which provides various services such as authentication, user profiles, game stats, and cloud storage.
-
The error message may indicate that the file is missing, corrupted, or not installed properly. In such cases, you may need to reinstall the affected game or the Steam client to restore the missing file. However, before you do that, you can try some simple solutions that may fix the problem without reinstalling anything. Here are some steps you can follow to download and fix steam_api.dll for Resident Evil 6 Reloaded.
Some antivirus software may flag the steam_api.dll file as a potential threat, as it can be used to modify the behavior of video games. However, this file is an integral part of the Steam client and is not a virus or malware. If you encounter such warnings, you can usually ignore them or add the file to the list of exceptions in your antivirus software. To do that, you need to open your antivirus software and look for a setting that allows you to exclude certain files or folders from scanning. Then, you need to add the steam_api.dll file or the folder where it is located to the exclusion list. The location of the file may vary depending on where you installed the game or the Steam client, but it is commonly found in one of these paths:
After adding the file or folder to the exclusion list, you need to restart your computer and try launching the game again. If the error message persists, you can move on to the next step.
-
Step 2: Download a new copy of steam_api.dll
-
If your antivirus software is not the cause of the problem, it may be that your steam_api.dll file is corrupted or outdated. In that case, you can try downloading a new copy of the file from a reliable source and replacing it with the old one. There are many websites that offer free downloads of DLL files, but not all of them are safe or trustworthy. You need to be careful when choosing where to download the file from, as some sites may contain malware or viruses that can harm your computer. One of the websites that we recommend is DLL-files.com[^1^], which is a reputable and secure site that provides various versions of DLL files for free. To download steam_api.dll from DLL-files.com, you need to follow these steps:
Choose wisely. Most of the time, just pick the highest version. However, some games may require a specific version of steam_api.dll that matches their own version. To find out which version of steam_api.dll you need for Resident Evil 6 Reloaded, you can right-click on the game's executable file (re6.exe) and select Properties. Then, go to the Details tab and look for the Product version field. For example, if your game's product version is 1.0.6.165, you may need to download steam_api.dll version 7.9.87.40.
-
Click on the Download button next to the version that you want and save the ZIP file to your computer.
-
Extract the ZIP file using a program like WinRAR or 7-Zip and copy the steam_api.dll file inside it.
-
Paste the steam_api.dll file into the same folder where your game or Steam client is installed, depending on where you found the original file in Step 1.
-
If you are prompted to overwrite or replace an existing file, click Yes.
- d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Facebook Video to MP4 Online - Fast Free and Easy.md b/spaces/1phancelerku/anime-remove-background/Download Facebook Video to MP4 Online - Fast Free and Easy.md
deleted file mode 100644
index 39d3d34e13d104a3fbdc4e855895d6909e7a2fd4..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Facebook Video to MP4 Online - Fast Free and Easy.md
+++ /dev/null
@@ -1,116 +0,0 @@
-
-
Download Facebook Video to MP4: A Complete Guide
-
Facebook is one of the most popular social media platforms in the world, with billions of users and millions of videos uploaded every day. You may have come across some interesting or useful videos on Facebook that you want to save on your device for offline viewing, sharing, or editing. But how can you download Facebook videos to MP4, which is a widely supported and versatile video format?
In this article, we will show you why you should download Facebook videos to MP4, how to do it with two different methods, and some tips and tricks for downloading Facebook videos to MP4. Let's get started!
-
Why Download Facebook Videos to MP4?
-
Before we dive into the methods of downloading Facebook videos to MP4, let's first understand why you should do it in the first place. Here are some benefits of MP4 format and some use cases for downloading Facebook videos.
-
Benefits of MP4 Format
-
MP4 is a digital multimedia container format that can store video, audio, subtitles, images, and other data. It is one of the most common and widely compatible video formats on the web and various devices. Here are some advantages of MP4 format:
-
How to download facebook video to mp4 online
-Download facebook video to mp4 free
-Download facebook video to mp4 hd
-Download facebook video to mp4 converter
-Download facebook video to mp4 android
-Download facebook video to mp4 iphone
-Download facebook video to mp4 mac
-Download facebook video to mp4 chrome
-Download facebook video to mp4 firefox
-Download facebook video to mp4 safari
-Download facebook live video to mp4
-Download private facebook video to mp4
-Download facebook story video to mp4
-Download facebook 360 video to mp4
-Download facebook watch video to mp4
-Download facebook messenger video to mp4
-Download facebook group video to mp4
-Download facebook page video to mp4
-Download facebook profile video to mp4
-Download facebook cover video to mp4
-Best way to download facebook video to mp4
-Fastest way to download facebook video to mp4
-Easiest way to download facebook video to mp4
-Simplest way to download facebook video to mp4
-Quickest way to download facebook video to mp4
-Download multiple facebook videos to mp4
-Download entire facebook videos playlist to mp4
-Download long facebook videos to mp4
-Download short facebook videos to mp4
-Download high quality facebook videos to mp4
-Download low quality facebook videos to mp4
-Download any facebook videos to mp4
-Save and download facebook videos as mp4 files
-Convert and download facebook videos in mp4 format
-Edit and download facebook videos in mp4 format
-Crop and download facebook videos in mp4 format
-Trim and download facebook videos in mp4 format
-Cut and download facebook videos in mp4 format
-Merge and download facebook videos in mp4 format
-Split and download facebook videos in mp4 format
-Rotate and download facebook videos in mp4 format
-Flip and download facebook videos in mp4 format
-Resize and download facebook videos in mp4 format
-Compress and download facebook videos in mp4 format
-Enhance and download facebook videos in mp4 format
-Add subtitles and download facebook videos in mp4 format
-Add watermark and download facebook videos in mp4 format
-Add music and download facebook videos in mp4 format
-Add effects and download facebook videos in mp4 format
-
-
It has high compression efficiency, which means it can reduce the file size without compromising the video quality.
-
It supports various codecs, which are methods of encoding and decoding video and audio data. This means it can handle different types of video and audio content.
-
It is compatible with most web browsers, media players, video editing software, and mobile devices. This means you can easily play, share, or edit your downloaded videos.
-
-
Use Cases for Downloading Facebook Videos
-
There are many reasons why you may want to download Facebook videos to MP4. Here are some common scenarios:
-
-
You want to watch your favorite videos offline without internet connection or buffering issues.
-
You want to share your downloaded videos with your friends or family via other platforms or devices.
-
You want to edit your downloaded videos with your preferred software or tools.
-
You want to backup your downloaded videos on your computer or external storage devices.
-
You want to create your own video collection or library from various sources.
-
-
How to Download Facebook Videos to MP4?
-
Now that you know why you should download Facebook videos to MP4, let's see how you can do it with two different methods. One is using an online Facebook video downloader, and the other is using a desktop Facebook video downloader software.
-
Method 1: Use an Online Facebook Video Downloader
-
An online Facebook video downloader is a web-based tool that allows you to download Facebook videos to MP4 without installing any software or app on your device. All you need is a web browser and an internet connection. Here are the steps to use an online Facebook video downloader:
-
Step 1: Copy the Facebook Video URL
-
The first step is to copy the URL of the Facebook video that you want to download. To do this, you can either right click on the three dots icon on the top right corner of the video and select "Copy link", or go to the video page and copy the URL from the address bar of your browser.
-
Step 2: Paste the URL into the Online Downloader
-
The next step is to paste the URL into the online downloader. To do this, you can go to any online Facebook video downloader website, such as [FBDownloader], [Getfvid], or [FB Video Saver]. Then, you can paste the URL into the input box and click on the "Download" or "Go" button.
-
Step 3: Choose MP4 as the Output Format and Download
-
The final step is to choose MP4 as the output format and download the video. To do this, you can look for the MP4 option among the available formats and quality options. Usually, MP4 is the default or recommended format for most online downloaders. Then, you can right-click on the "Download" or "Save" button and select "Save link as" or "Save target as" to save the video on your device.
-
Method 2: Use a Desktop Facebook Video Downloader Software
-
A desktop Facebook video downloader software is a program that you need to install and run on your computer. It usually offers more features and functions than an online downloader, such as batch downloading, video conversion, video editing, and more. Here are the steps to use a desktop Facebook video downloader software:
-
Step 1: Install and Launch the Software
-
The first step is to install and launch the software on your computer. To do this, you can go to the official website of the software and download the installation file. Some examples of desktop Facebook video downloader software are [iTubeGo], [4K Video Downloader], and [Wondershare UniConverter]. Then, you can follow the instructions to install and launch the software.
-
Step 2: Copy and Paste the Facebook Video URL into the Software
-
The next step is to copy and paste the Facebook video URL into the software. To do this, you can use the same method as in step 1 of method 1 to copy the URL of the Facebook video that you want to download. Then, you can paste it into the software by clicking on the "Paste URL" or "Add URL" button.
-
Step 3: Select MP4 as the Output Format and Download
-
The final step is to select MP4 as the output format and download the video. To do this, you can look for the MP4 option in the settings or preferences of the software. You can also adjust the quality, resolution, and other parameters of the output video according to your needs. Then, you can click on the "Download" or "Start" button to save the video on your computer.
-
Tips and Tricks for Downloading Facebook Videos to MP4
-
Now that you know how to download Facebook videos to MP4 with two different methods, here are some tips and tricks for downloading Facebook videos to MP4 more easily and effectively:
-
Check the Video Quality and Size Before Downloading
-
Before you download a Facebook video to MP4, you should check the video quality and size to make sure it meets your expectations and requirements. You can do this by hovering over the video on Facebook and looking at the information that appears on the bottom right corner. You can also use the online or desktop downloader tools to preview the video quality and size before downloading.
-
Respect the Copyrights and Privacy of the Video Owners
-
When you download a Facebook video to MP4, you should respect the copyrights and privacy of the video owners. You should not download or use any videos that are protected by intellectual property rights or personal data protection laws without their permission or consent. You should also not download or use any videos that are illegal, harmful, or offensive.
-
Manage and Organize Your Downloaded Videos
-
After you download a Facebook video to MP4, you should manage and organize your downloaded videos properly. You can do this by creating folders and subfolders on your device to store your videos by categories, topics, or dates. You can also rename your videos with descriptive titles and tags to make them easier to find and access.
-
Conclusion
-
In conclusion, downloading Facebook videos to MP4 is a useful and convenient way to save, share, or edit your favorite videos from Facebook. You can do it with two different methods: using an online Facebook video downloader or using a desktop Facebook video downloader software. Both methods are easy and effective, but they have their own advantages and disadvantages. You can choose the one that suits your needs and preferences best.
-
We hope this article has helped you learn how to download Facebook videos to MP4 with a complete guide. If you have any questions or feedback, please feel free to leave a comment below. Happy downloading!
-
FAQs
-
Here are some frequently asked questions about downloading Facebook videos to MP4:
-
-
Can I download Facebook videos to MP4 on my mobile device?
-
Yes, you can download Facebook videos to MP4 on your mobile device with an online Facebook video downloader. However, you may need to use a mobile browser that supports downloading files, such as Chrome or Safari. Alternatively, you can use a mobile app that can download Facebook videos to MP4, such as [Video Downloader for Facebook] or [Video Downloader for FB].
-
Can I download live videos from Facebook to MP4?
-
Yes, you can download live videos from Facebook to MP4 with a desktop Facebook video downloader software. However, you may need to wait until the live stream is over before you can download it. Alternatively, you can use a screen recorder software or app that can capture live videos from Facebook and save them as MP4 files.
-
Can I download private videos from Facebook to MP4?
-
Yes, you can download private videos from Facebook to MP4 with an online or desktop Facebook video downloader tool. However, you may need to log in to your Facebook account before you can access the private videos. Alternatively, you can use a browser extension that can download private videos from Facebook to MP4, such as [FBDown Video Downloader] or [Video Downloader PLUS].
-
Can I convert other video formats to MP4?
-
Yes, you can convert other video formats to MP4 with a desktop Facebook video downloader software or a standalone video converter software or app. You can choose from various formats and codecs, such as AVI, MKV, MOV, WMV, FLV, MPEG, H.264, HEVC, etc.
-
Can I edit my downloaded videos from Facebook?
-
Yes, you can edit your downloaded videos from Facebook with a desktop Facebook video downloader software or a standalone video editor software or app. You can perform various editing tasks, such as trimming, cropping, rotating, merging, splitting, adding effects, subtitles, music, etc. to your videos.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/232labs/VToonify/vtoonify/model/encoder/align_all_parallel.py b/spaces/232labs/VToonify/vtoonify/model/encoder/align_all_parallel.py
deleted file mode 100644
index a3bdf8d1c4b02687249709a2da3c21794b22be92..0000000000000000000000000000000000000000
--- a/spaces/232labs/VToonify/vtoonify/model/encoder/align_all_parallel.py
+++ /dev/null
@@ -1,231 +0,0 @@
-"""
-brief: face alignment with FFHQ method (https://github.com/NVlabs/ffhq-dataset)
-author: lzhbrian (https://lzhbrian.me)
-date: 2020.1.5
-note: code is heavily borrowed from
- https://github.com/NVlabs/ffhq-dataset
- http://dlib.net/face_landmark_detection.py.html
-
-requirements:
- apt install cmake
- conda install Pillow numpy scipy
- pip install dlib
- # download face landmark model from:
- # http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
-"""
-from argparse import ArgumentParser
-import time
-import numpy as np
-import PIL
-import PIL.Image
-import os
-import scipy
-import scipy.ndimage
-import dlib
-import multiprocessing as mp
-import math
-
-#from configs.paths_config import model_paths
-SHAPE_PREDICTOR_PATH = 'shape_predictor_68_face_landmarks.dat'#model_paths["shape_predictor"]
-cnn_model_path = 'mmod_human_face_detector.dat'
-def get_landmark(filepath, predictor):
- """get landmark with dlib
- :return: np.array shape=(68, 2)
- """
- detector = dlib.get_frontal_face_detector()
- cnn_face_detector = dlib.cnn_face_detection_model_v1('localmodel/mmod_human_face_detector.dat') # Load the MMod CNN model
-
- if type(filepath) == str:
- img = dlib.load_rgb_image(filepath)
- else:
- img = filepath
-
- # Try multiple times if necessary
-
- num_attempts = 3
- dets = []
- for attempt in range(num_attempts):
- dets = detector(img, 1)
- if len(dets) > 0:
- break
-
- # If no faces are detected using HOG-based detector, try using MMod CNN-based detector
- if len(dets) == 0:
- dets = cnn_face_detector(img, 1)
- dets = [rect.rect for rect in dets] # Convert mmod_rectangles to rectangles
-
- if len(dets) == 0:
- print('Error: no face detected!')
- return None
-
- shape = None
- for k, d in enumerate(dets):
- shape = predictor(img, d)
-
- if shape is None:
- print(
- 'Error: No face detected! If you are sure there are faces in your input, you may rerun the code several times until the face is detected. Sometimes the detector is unstable.')
- t = list(shape.parts())
- a = []
- for tt in t:
- a.append([tt.x, tt.y])
- lm = np.array(a)
- return lm
-
-def align_face(filepath, predictor):
- """
- :param filepath: str
- :return: PIL Image
- """
-
- lm = get_landmark(filepath, predictor)
- if lm is None:
- return None
-
- lm_chin = lm[0: 17] # left-right
- lm_eyebrow_left = lm[17: 22] # left-right
- lm_eyebrow_right = lm[22: 27] # left-right
- lm_nose = lm[27: 31] # top-down
- lm_nostrils = lm[31: 36] # top-down
- lm_eye_left = lm[36: 42] # left-clockwise
- lm_eye_right = lm[42: 48] # left-clockwise
- lm_mouth_outer = lm[48: 60] # left-clockwise
- lm_mouth_inner = lm[60: 68] # left-clockwise
-
- # Calculate auxiliary vectors.
- eye_left = np.mean(lm_eye_left, axis=0)
- eye_right = np.mean(lm_eye_right, axis=0)
- eye_avg = (eye_left + eye_right) * 0.5
- eye_to_eye = eye_right - eye_left
- mouth_left = lm_mouth_outer[0]
- mouth_right = lm_mouth_outer[6]
- mouth_avg = (mouth_left + mouth_right) * 0.5
- eye_to_mouth = mouth_avg - eye_avg
-
- # Choose oriented crop rectangle.
- x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1]
- x /= np.hypot(*x)
- x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8)
- y = np.flipud(x) * [-1, 1]
- c = eye_avg + eye_to_mouth * 0.1
- quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y])
- qsize = np.hypot(*x) * 2
-
- # read image
- if type(filepath) == str:
- img = PIL.Image.open(filepath)
- else:
- img = PIL.Image.fromarray(filepath)
-
- output_size = 256
- transform_size = 256
- enable_padding = True
-
- # Shrink.
- shrink = int(np.floor(qsize / output_size * 0.5))
- if shrink > 1:
- rsize = (int(np.rint(float(img.size[0]) / shrink)), int(np.rint(float(img.size[1]) / shrink)))
- img = img.resize(rsize, PIL.Image.ANTIALIAS)
- quad /= shrink
- qsize /= shrink
-
- # Crop.
- border = max(int(np.rint(qsize * 0.1)), 3)
- crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),
- int(np.ceil(max(quad[:, 1]))))
- crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]),
- min(crop[3] + border, img.size[1]))
- if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]:
- img = img.crop(crop)
- quad -= crop[0:2]
-
- # Pad.
- pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),
- int(np.ceil(max(quad[:, 1]))))
- pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - img.size[0] + border, 0),
- max(pad[3] - img.size[1] + border, 0))
- if enable_padding and max(pad) > border - 4:
- pad = np.maximum(pad, int(np.rint(qsize * 0.3)))
- img = np.pad(np.float32(img), ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect')
- h, w, _ = img.shape
- y, x, _ = np.ogrid[:h, :w, :1]
- mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], np.float32(w - 1 - x) / pad[2]),
- 1.0 - np.minimum(np.float32(y) / pad[1], np.float32(h - 1 - y) / pad[3]))
- blur = qsize * 0.02
- img += (scipy.ndimage.gaussian_filter(img, [blur, blur, 0]) - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0)
- img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0)
- img = PIL.Image.fromarray(np.uint8(np.clip(np.rint(img), 0, 255)), 'RGB')
- quad += pad[:2]
-
- # Transform.
- img = img.transform((transform_size, transform_size), PIL.Image.QUAD, (quad + 0.5).flatten(), PIL.Image.BILINEAR)
- if output_size < transform_size:
- img = img.resize((output_size, output_size), PIL.Image.ANTIALIAS)
-
- # Save aligned image.
- return img
-
-
-def chunks(lst, n):
- """Yield successive n-sized chunks from lst."""
- for i in range(0, len(lst), n):
- yield lst[i:i + n]
-
-
-def extract_on_paths(file_paths):
- predictor = dlib.shape_predictor(SHAPE_PREDICTOR_PATH)
- pid = mp.current_process().name
- print('\t{} is starting to extract on #{} images'.format(pid, len(file_paths)))
- tot_count = len(file_paths)
- count = 0
- for file_path, res_path in file_paths:
- count += 1
- if count % 100 == 0:
- print('{} done with {}/{}'.format(pid, count, tot_count))
- try:
- res = align_face(file_path, predictor)
- res = res.convert('RGB')
- os.makedirs(os.path.dirname(res_path), exist_ok=True)
- res.save(res_path)
- except Exception:
- continue
- print('\tDone!')
-
-
-def parse_args():
- parser = ArgumentParser(add_help=False)
- parser.add_argument('--num_threads', type=int, default=1)
- parser.add_argument('--root_path', type=str, default='')
- args = parser.parse_args()
- return args
-
-
-def run(args):
- root_path = args.root_path
- out_crops_path = root_path + '_crops'
- if not os.path.exists(out_crops_path):
- os.makedirs(out_crops_path, exist_ok=True)
-
- file_paths = []
- for root, dirs, files in os.walk(root_path):
- for file in files:
- file_path = os.path.join(root, file)
- fname = os.path.join(out_crops_path, os.path.relpath(file_path, root_path))
- res_path = '{}.jpg'.format(os.path.splitext(fname)[0])
- if os.path.splitext(file_path)[1] == '.txt' or os.path.exists(res_path):
- continue
- file_paths.append((file_path, res_path))
-
- file_chunks = list(chunks(file_paths, int(math.ceil(len(file_paths) / args.num_threads))))
- print(len(file_chunks))
- pool = mp.Pool(args.num_threads)
- print('Running on {} paths\nHere we goooo'.format(len(file_paths)))
- tic = time.time()
- pool.map(extract_on_paths, file_chunks)
- toc = time.time()
- print('Mischief managed in {}s'.format(toc - tic))
-
-
-if __name__ == '__main__':
- args = parse_args()
- run(args)
diff --git a/spaces/AIFILMS/StyleGANEX/models/stylegan2/lpips/networks_basic.py b/spaces/AIFILMS/StyleGANEX/models/stylegan2/lpips/networks_basic.py
deleted file mode 100644
index ec3f045f9f22dbf49e18e9edca25d04ccc551da9..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/StyleGANEX/models/stylegan2/lpips/networks_basic.py
+++ /dev/null
@@ -1,187 +0,0 @@
-
-from __future__ import absolute_import
-
-import sys
-import torch
-import torch.nn as nn
-import torch.nn.init as init
-from torch.autograd import Variable
-import numpy as np
-from pdb import set_trace as st
-from skimage import color
-from IPython import embed
-from models.stylegan2.lpips import pretrained_networks as pn
-
-import models.stylegan2.lpips as util
-
-def spatial_average(in_tens, keepdim=True):
- return in_tens.mean([2,3],keepdim=keepdim)
-
-def upsample(in_tens, out_H=64): # assumes scale factor is same for H and W
- in_H = in_tens.shape[2]
- scale_factor = 1.*out_H/in_H
-
- return nn.Upsample(scale_factor=scale_factor, mode='bilinear', align_corners=False)(in_tens)
-
-# Learned perceptual metric
-class PNetLin(nn.Module):
- def __init__(self, pnet_type='vgg', pnet_rand=False, pnet_tune=False, use_dropout=True, spatial=False, version='0.1', lpips=True):
- super(PNetLin, self).__init__()
-
- self.pnet_type = pnet_type
- self.pnet_tune = pnet_tune
- self.pnet_rand = pnet_rand
- self.spatial = spatial
- self.lpips = lpips
- self.version = version
- self.scaling_layer = ScalingLayer()
-
- if(self.pnet_type in ['vgg','vgg16']):
- net_type = pn.vgg16
- self.chns = [64,128,256,512,512]
- elif(self.pnet_type=='alex'):
- net_type = pn.alexnet
- self.chns = [64,192,384,256,256]
- elif(self.pnet_type=='squeeze'):
- net_type = pn.squeezenet
- self.chns = [64,128,256,384,384,512,512]
- self.L = len(self.chns)
-
- self.net = net_type(pretrained=not self.pnet_rand, requires_grad=self.pnet_tune)
-
- if(lpips):
- self.lin0 = NetLinLayer(self.chns[0], use_dropout=use_dropout)
- self.lin1 = NetLinLayer(self.chns[1], use_dropout=use_dropout)
- self.lin2 = NetLinLayer(self.chns[2], use_dropout=use_dropout)
- self.lin3 = NetLinLayer(self.chns[3], use_dropout=use_dropout)
- self.lin4 = NetLinLayer(self.chns[4], use_dropout=use_dropout)
- self.lins = [self.lin0,self.lin1,self.lin2,self.lin3,self.lin4]
- if(self.pnet_type=='squeeze'): # 7 layers for squeezenet
- self.lin5 = NetLinLayer(self.chns[5], use_dropout=use_dropout)
- self.lin6 = NetLinLayer(self.chns[6], use_dropout=use_dropout)
- self.lins+=[self.lin5,self.lin6]
-
- def forward(self, in0, in1, retPerLayer=False):
- # v0.0 - original release had a bug, where input was not scaled
- in0_input, in1_input = (self.scaling_layer(in0), self.scaling_layer(in1)) if self.version=='0.1' else (in0, in1)
- outs0, outs1 = self.net.forward(in0_input), self.net.forward(in1_input)
- feats0, feats1, diffs = {}, {}, {}
-
- for kk in range(self.L):
- feats0[kk], feats1[kk] = util.normalize_tensor(outs0[kk]), util.normalize_tensor(outs1[kk])
- diffs[kk] = (feats0[kk]-feats1[kk])**2
-
- if(self.lpips):
- if(self.spatial):
- res = [upsample(self.lins[kk].model(diffs[kk]), out_H=in0.shape[2]) for kk in range(self.L)]
- else:
- res = [spatial_average(self.lins[kk].model(diffs[kk]), keepdim=True) for kk in range(self.L)]
- else:
- if(self.spatial):
- res = [upsample(diffs[kk].sum(dim=1,keepdim=True), out_H=in0.shape[2]) for kk in range(self.L)]
- else:
- res = [spatial_average(diffs[kk].sum(dim=1,keepdim=True), keepdim=True) for kk in range(self.L)]
-
- val = res[0]
- for l in range(1,self.L):
- val += res[l]
-
- if(retPerLayer):
- return (val, res)
- else:
- return val
-
-class ScalingLayer(nn.Module):
- def __init__(self):
- super(ScalingLayer, self).__init__()
- self.register_buffer('shift', torch.Tensor([-.030,-.088,-.188])[None,:,None,None])
- self.register_buffer('scale', torch.Tensor([.458,.448,.450])[None,:,None,None])
-
- def forward(self, inp):
- return (inp - self.shift) / self.scale
-
-
-class NetLinLayer(nn.Module):
- ''' A single linear layer which does a 1x1 conv '''
- def __init__(self, chn_in, chn_out=1, use_dropout=False):
- super(NetLinLayer, self).__init__()
-
- layers = [nn.Dropout(),] if(use_dropout) else []
- layers += [nn.Conv2d(chn_in, chn_out, 1, stride=1, padding=0, bias=False),]
- self.model = nn.Sequential(*layers)
-
-
-class Dist2LogitLayer(nn.Module):
- ''' takes 2 distances, puts through fc layers, spits out value between [0,1] (if use_sigmoid is True) '''
- def __init__(self, chn_mid=32, use_sigmoid=True):
- super(Dist2LogitLayer, self).__init__()
-
- layers = [nn.Conv2d(5, chn_mid, 1, stride=1, padding=0, bias=True),]
- layers += [nn.LeakyReLU(0.2,True),]
- layers += [nn.Conv2d(chn_mid, chn_mid, 1, stride=1, padding=0, bias=True),]
- layers += [nn.LeakyReLU(0.2,True),]
- layers += [nn.Conv2d(chn_mid, 1, 1, stride=1, padding=0, bias=True),]
- if(use_sigmoid):
- layers += [nn.Sigmoid(),]
- self.model = nn.Sequential(*layers)
-
- def forward(self,d0,d1,eps=0.1):
- return self.model.forward(torch.cat((d0,d1,d0-d1,d0/(d1+eps),d1/(d0+eps)),dim=1))
-
-class BCERankingLoss(nn.Module):
- def __init__(self, chn_mid=32):
- super(BCERankingLoss, self).__init__()
- self.net = Dist2LogitLayer(chn_mid=chn_mid)
- # self.parameters = list(self.net.parameters())
- self.loss = torch.nn.BCELoss()
-
- def forward(self, d0, d1, judge):
- per = (judge+1.)/2.
- self.logit = self.net.forward(d0,d1)
- return self.loss(self.logit, per)
-
-# L2, DSSIM metrics
-class FakeNet(nn.Module):
- def __init__(self, use_gpu=True, colorspace='Lab'):
- super(FakeNet, self).__init__()
- self.use_gpu = use_gpu
- self.colorspace=colorspace
-
-class L2(FakeNet):
-
- def forward(self, in0, in1, retPerLayer=None):
- assert(in0.size()[0]==1) # currently only supports batchSize 1
-
- if(self.colorspace=='RGB'):
- (N,C,X,Y) = in0.size()
- value = torch.mean(torch.mean(torch.mean((in0-in1)**2,dim=1).view(N,1,X,Y),dim=2).view(N,1,1,Y),dim=3).view(N)
- return value
- elif(self.colorspace=='Lab'):
- value = util.l2(util.tensor2np(util.tensor2tensorlab(in0.data,to_norm=False)),
- util.tensor2np(util.tensor2tensorlab(in1.data,to_norm=False)), range=100.).astype('float')
- ret_var = Variable( torch.Tensor((value,) ) )
- if(self.use_gpu):
- ret_var = ret_var.cuda()
- return ret_var
-
-class DSSIM(FakeNet):
-
- def forward(self, in0, in1, retPerLayer=None):
- assert(in0.size()[0]==1) # currently only supports batchSize 1
-
- if(self.colorspace=='RGB'):
- value = util.dssim(1.*util.tensor2im(in0.data), 1.*util.tensor2im(in1.data), range=255.).astype('float')
- elif(self.colorspace=='Lab'):
- value = util.dssim(util.tensor2np(util.tensor2tensorlab(in0.data,to_norm=False)),
- util.tensor2np(util.tensor2tensorlab(in1.data,to_norm=False)), range=100.).astype('float')
- ret_var = Variable( torch.Tensor((value,) ) )
- if(self.use_gpu):
- ret_var = ret_var.cuda()
- return ret_var
-
-def print_network(net):
- num_params = 0
- for param in net.parameters():
- num_params += param.numel()
- print('Network',net)
- print('Total number of parameters: %d' % num_params)
diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/imagenet_zeroshot_data.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/imagenet_zeroshot_data.py
deleted file mode 100644
index d32e55328d6799ccb8d61625f43abb80a33d6c17..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/imagenet_zeroshot_data.py
+++ /dev/null
@@ -1,1088 +0,0 @@
-# NOTE: This script is currently not supported for CLAP.
-
-imagenet_classnames = [
- "tench",
- "goldfish",
- "great white shark",
- "tiger shark",
- "hammerhead shark",
- "electric ray",
- "stingray",
- "rooster",
- "hen",
- "ostrich",
- "brambling",
- "goldfinch",
- "house finch",
- "junco",
- "indigo bunting",
- "American robin",
- "bulbul",
- "jay",
- "magpie",
- "chickadee",
- "American dipper",
- "kite (bird of prey)",
- "bald eagle",
- "vulture",
- "great grey owl",
- "fire salamander",
- "smooth newt",
- "newt",
- "spotted salamander",
- "axolotl",
- "American bullfrog",
- "tree frog",
- "tailed frog",
- "loggerhead sea turtle",
- "leatherback sea turtle",
- "mud turtle",
- "terrapin",
- "box turtle",
- "banded gecko",
- "green iguana",
- "Carolina anole",
- "desert grassland whiptail lizard",
- "agama",
- "frilled-necked lizard",
- "alligator lizard",
- "Gila monster",
- "European green lizard",
- "chameleon",
- "Komodo dragon",
- "Nile crocodile",
- "American alligator",
- "triceratops",
- "worm snake",
- "ring-necked snake",
- "eastern hog-nosed snake",
- "smooth green snake",
- "kingsnake",
- "garter snake",
- "water snake",
- "vine snake",
- "night snake",
- "boa constrictor",
- "African rock python",
- "Indian cobra",
- "green mamba",
- "sea snake",
- "Saharan horned viper",
- "eastern diamondback rattlesnake",
- "sidewinder rattlesnake",
- "trilobite",
- "harvestman",
- "scorpion",
- "yellow garden spider",
- "barn spider",
- "European garden spider",
- "southern black widow",
- "tarantula",
- "wolf spider",
- "tick",
- "centipede",
- "black grouse",
- "ptarmigan",
- "ruffed grouse",
- "prairie grouse",
- "peafowl",
- "quail",
- "partridge",
- "african grey parrot",
- "macaw",
- "sulphur-crested cockatoo",
- "lorikeet",
- "coucal",
- "bee eater",
- "hornbill",
- "hummingbird",
- "jacamar",
- "toucan",
- "duck",
- "red-breasted merganser",
- "goose",
- "black swan",
- "tusker",
- "echidna",
- "platypus",
- "wallaby",
- "koala",
- "wombat",
- "jellyfish",
- "sea anemone",
- "brain coral",
- "flatworm",
- "nematode",
- "conch",
- "snail",
- "slug",
- "sea slug",
- "chiton",
- "chambered nautilus",
- "Dungeness crab",
- "rock crab",
- "fiddler crab",
- "red king crab",
- "American lobster",
- "spiny lobster",
- "crayfish",
- "hermit crab",
- "isopod",
- "white stork",
- "black stork",
- "spoonbill",
- "flamingo",
- "little blue heron",
- "great egret",
- "bittern bird",
- "crane bird",
- "limpkin",
- "common gallinule",
- "American coot",
- "bustard",
- "ruddy turnstone",
- "dunlin",
- "common redshank",
- "dowitcher",
- "oystercatcher",
- "pelican",
- "king penguin",
- "albatross",
- "grey whale",
- "killer whale",
- "dugong",
- "sea lion",
- "Chihuahua",
- "Japanese Chin",
- "Maltese",
- "Pekingese",
- "Shih Tzu",
- "King Charles Spaniel",
- "Papillon",
- "toy terrier",
- "Rhodesian Ridgeback",
- "Afghan Hound",
- "Basset Hound",
- "Beagle",
- "Bloodhound",
- "Bluetick Coonhound",
- "Black and Tan Coonhound",
- "Treeing Walker Coonhound",
- "English foxhound",
- "Redbone Coonhound",
- "borzoi",
- "Irish Wolfhound",
- "Italian Greyhound",
- "Whippet",
- "Ibizan Hound",
- "Norwegian Elkhound",
- "Otterhound",
- "Saluki",
- "Scottish Deerhound",
- "Weimaraner",
- "Staffordshire Bull Terrier",
- "American Staffordshire Terrier",
- "Bedlington Terrier",
- "Border Terrier",
- "Kerry Blue Terrier",
- "Irish Terrier",
- "Norfolk Terrier",
- "Norwich Terrier",
- "Yorkshire Terrier",
- "Wire Fox Terrier",
- "Lakeland Terrier",
- "Sealyham Terrier",
- "Airedale Terrier",
- "Cairn Terrier",
- "Australian Terrier",
- "Dandie Dinmont Terrier",
- "Boston Terrier",
- "Miniature Schnauzer",
- "Giant Schnauzer",
- "Standard Schnauzer",
- "Scottish Terrier",
- "Tibetan Terrier",
- "Australian Silky Terrier",
- "Soft-coated Wheaten Terrier",
- "West Highland White Terrier",
- "Lhasa Apso",
- "Flat-Coated Retriever",
- "Curly-coated Retriever",
- "Golden Retriever",
- "Labrador Retriever",
- "Chesapeake Bay Retriever",
- "German Shorthaired Pointer",
- "Vizsla",
- "English Setter",
- "Irish Setter",
- "Gordon Setter",
- "Brittany dog",
- "Clumber Spaniel",
- "English Springer Spaniel",
- "Welsh Springer Spaniel",
- "Cocker Spaniel",
- "Sussex Spaniel",
- "Irish Water Spaniel",
- "Kuvasz",
- "Schipperke",
- "Groenendael dog",
- "Malinois",
- "Briard",
- "Australian Kelpie",
- "Komondor",
- "Old English Sheepdog",
- "Shetland Sheepdog",
- "collie",
- "Border Collie",
- "Bouvier des Flandres dog",
- "Rottweiler",
- "German Shepherd Dog",
- "Dobermann",
- "Miniature Pinscher",
- "Greater Swiss Mountain Dog",
- "Bernese Mountain Dog",
- "Appenzeller Sennenhund",
- "Entlebucher Sennenhund",
- "Boxer",
- "Bullmastiff",
- "Tibetan Mastiff",
- "French Bulldog",
- "Great Dane",
- "St. Bernard",
- "husky",
- "Alaskan Malamute",
- "Siberian Husky",
- "Dalmatian",
- "Affenpinscher",
- "Basenji",
- "pug",
- "Leonberger",
- "Newfoundland dog",
- "Great Pyrenees dog",
- "Samoyed",
- "Pomeranian",
- "Chow Chow",
- "Keeshond",
- "brussels griffon",
- "Pembroke Welsh Corgi",
- "Cardigan Welsh Corgi",
- "Toy Poodle",
- "Miniature Poodle",
- "Standard Poodle",
- "Mexican hairless dog (xoloitzcuintli)",
- "grey wolf",
- "Alaskan tundra wolf",
- "red wolf or maned wolf",
- "coyote",
- "dingo",
- "dhole",
- "African wild dog",
- "hyena",
- "red fox",
- "kit fox",
- "Arctic fox",
- "grey fox",
- "tabby cat",
- "tiger cat",
- "Persian cat",
- "Siamese cat",
- "Egyptian Mau",
- "cougar",
- "lynx",
- "leopard",
- "snow leopard",
- "jaguar",
- "lion",
- "tiger",
- "cheetah",
- "brown bear",
- "American black bear",
- "polar bear",
- "sloth bear",
- "mongoose",
- "meerkat",
- "tiger beetle",
- "ladybug",
- "ground beetle",
- "longhorn beetle",
- "leaf beetle",
- "dung beetle",
- "rhinoceros beetle",
- "weevil",
- "fly",
- "bee",
- "ant",
- "grasshopper",
- "cricket insect",
- "stick insect",
- "cockroach",
- "praying mantis",
- "cicada",
- "leafhopper",
- "lacewing",
- "dragonfly",
- "damselfly",
- "red admiral butterfly",
- "ringlet butterfly",
- "monarch butterfly",
- "small white butterfly",
- "sulphur butterfly",
- "gossamer-winged butterfly",
- "starfish",
- "sea urchin",
- "sea cucumber",
- "cottontail rabbit",
- "hare",
- "Angora rabbit",
- "hamster",
- "porcupine",
- "fox squirrel",
- "marmot",
- "beaver",
- "guinea pig",
- "common sorrel horse",
- "zebra",
- "pig",
- "wild boar",
- "warthog",
- "hippopotamus",
- "ox",
- "water buffalo",
- "bison",
- "ram (adult male sheep)",
- "bighorn sheep",
- "Alpine ibex",
- "hartebeest",
- "impala (antelope)",
- "gazelle",
- "arabian camel",
- "llama",
- "weasel",
- "mink",
- "European polecat",
- "black-footed ferret",
- "otter",
- "skunk",
- "badger",
- "armadillo",
- "three-toed sloth",
- "orangutan",
- "gorilla",
- "chimpanzee",
- "gibbon",
- "siamang",
- "guenon",
- "patas monkey",
- "baboon",
- "macaque",
- "langur",
- "black-and-white colobus",
- "proboscis monkey",
- "marmoset",
- "white-headed capuchin",
- "howler monkey",
- "titi monkey",
- "Geoffroy's spider monkey",
- "common squirrel monkey",
- "ring-tailed lemur",
- "indri",
- "Asian elephant",
- "African bush elephant",
- "red panda",
- "giant panda",
- "snoek fish",
- "eel",
- "silver salmon",
- "rock beauty fish",
- "clownfish",
- "sturgeon",
- "gar fish",
- "lionfish",
- "pufferfish",
- "abacus",
- "abaya",
- "academic gown",
- "accordion",
- "acoustic guitar",
- "aircraft carrier",
- "airliner",
- "airship",
- "altar",
- "ambulance",
- "amphibious vehicle",
- "analog clock",
- "apiary",
- "apron",
- "trash can",
- "assault rifle",
- "backpack",
- "bakery",
- "balance beam",
- "balloon",
- "ballpoint pen",
- "Band-Aid",
- "banjo",
- "baluster / handrail",
- "barbell",
- "barber chair",
- "barbershop",
- "barn",
- "barometer",
- "barrel",
- "wheelbarrow",
- "baseball",
- "basketball",
- "bassinet",
- "bassoon",
- "swimming cap",
- "bath towel",
- "bathtub",
- "station wagon",
- "lighthouse",
- "beaker",
- "military hat (bearskin or shako)",
- "beer bottle",
- "beer glass",
- "bell tower",
- "baby bib",
- "tandem bicycle",
- "bikini",
- "ring binder",
- "binoculars",
- "birdhouse",
- "boathouse",
- "bobsleigh",
- "bolo tie",
- "poke bonnet",
- "bookcase",
- "bookstore",
- "bottle cap",
- "hunting bow",
- "bow tie",
- "brass memorial plaque",
- "bra",
- "breakwater",
- "breastplate",
- "broom",
- "bucket",
- "buckle",
- "bulletproof vest",
- "high-speed train",
- "butcher shop",
- "taxicab",
- "cauldron",
- "candle",
- "cannon",
- "canoe",
- "can opener",
- "cardigan",
- "car mirror",
- "carousel",
- "tool kit",
- "cardboard box / carton",
- "car wheel",
- "automated teller machine",
- "cassette",
- "cassette player",
- "castle",
- "catamaran",
- "CD player",
- "cello",
- "mobile phone",
- "chain",
- "chain-link fence",
- "chain mail",
- "chainsaw",
- "storage chest",
- "chiffonier",
- "bell or wind chime",
- "china cabinet",
- "Christmas stocking",
- "church",
- "movie theater",
- "cleaver",
- "cliff dwelling",
- "cloak",
- "clogs",
- "cocktail shaker",
- "coffee mug",
- "coffeemaker",
- "spiral or coil",
- "combination lock",
- "computer keyboard",
- "candy store",
- "container ship",
- "convertible",
- "corkscrew",
- "cornet",
- "cowboy boot",
- "cowboy hat",
- "cradle",
- "construction crane",
- "crash helmet",
- "crate",
- "infant bed",
- "Crock Pot",
- "croquet ball",
- "crutch",
- "cuirass",
- "dam",
- "desk",
- "desktop computer",
- "rotary dial telephone",
- "diaper",
- "digital clock",
- "digital watch",
- "dining table",
- "dishcloth",
- "dishwasher",
- "disc brake",
- "dock",
- "dog sled",
- "dome",
- "doormat",
- "drilling rig",
- "drum",
- "drumstick",
- "dumbbell",
- "Dutch oven",
- "electric fan",
- "electric guitar",
- "electric locomotive",
- "entertainment center",
- "envelope",
- "espresso machine",
- "face powder",
- "feather boa",
- "filing cabinet",
- "fireboat",
- "fire truck",
- "fire screen",
- "flagpole",
- "flute",
- "folding chair",
- "football helmet",
- "forklift",
- "fountain",
- "fountain pen",
- "four-poster bed",
- "freight car",
- "French horn",
- "frying pan",
- "fur coat",
- "garbage truck",
- "gas mask or respirator",
- "gas pump",
- "goblet",
- "go-kart",
- "golf ball",
- "golf cart",
- "gondola",
- "gong",
- "gown",
- "grand piano",
- "greenhouse",
- "radiator grille",
- "grocery store",
- "guillotine",
- "hair clip",
- "hair spray",
- "half-track",
- "hammer",
- "hamper",
- "hair dryer",
- "hand-held computer",
- "handkerchief",
- "hard disk drive",
- "harmonica",
- "harp",
- "combine harvester",
- "hatchet",
- "holster",
- "home theater",
- "honeycomb",
- "hook",
- "hoop skirt",
- "gymnastic horizontal bar",
- "horse-drawn vehicle",
- "hourglass",
- "iPod",
- "clothes iron",
- "carved pumpkin",
- "jeans",
- "jeep",
- "T-shirt",
- "jigsaw puzzle",
- "rickshaw",
- "joystick",
- "kimono",
- "knee pad",
- "knot",
- "lab coat",
- "ladle",
- "lampshade",
- "laptop computer",
- "lawn mower",
- "lens cap",
- "letter opener",
- "library",
- "lifeboat",
- "lighter",
- "limousine",
- "ocean liner",
- "lipstick",
- "slip-on shoe",
- "lotion",
- "music speaker",
- "loupe magnifying glass",
- "sawmill",
- "magnetic compass",
- "messenger bag",
- "mailbox",
- "tights",
- "one-piece bathing suit",
- "manhole cover",
- "maraca",
- "marimba",
- "mask",
- "matchstick",
- "maypole",
- "maze",
- "measuring cup",
- "medicine cabinet",
- "megalith",
- "microphone",
- "microwave oven",
- "military uniform",
- "milk can",
- "minibus",
- "miniskirt",
- "minivan",
- "missile",
- "mitten",
- "mixing bowl",
- "mobile home",
- "ford model t",
- "modem",
- "monastery",
- "monitor",
- "moped",
- "mortar and pestle",
- "graduation cap",
- "mosque",
- "mosquito net",
- "vespa",
- "mountain bike",
- "tent",
- "computer mouse",
- "mousetrap",
- "moving van",
- "muzzle",
- "metal nail",
- "neck brace",
- "necklace",
- "baby pacifier",
- "notebook computer",
- "obelisk",
- "oboe",
- "ocarina",
- "odometer",
- "oil filter",
- "pipe organ",
- "oscilloscope",
- "overskirt",
- "bullock cart",
- "oxygen mask",
- "product packet / packaging",
- "paddle",
- "paddle wheel",
- "padlock",
- "paintbrush",
- "pajamas",
- "palace",
- "pan flute",
- "paper towel",
- "parachute",
- "parallel bars",
- "park bench",
- "parking meter",
- "railroad car",
- "patio",
- "payphone",
- "pedestal",
- "pencil case",
- "pencil sharpener",
- "perfume",
- "Petri dish",
- "photocopier",
- "plectrum",
- "Pickelhaube",
- "picket fence",
- "pickup truck",
- "pier",
- "piggy bank",
- "pill bottle",
- "pillow",
- "ping-pong ball",
- "pinwheel",
- "pirate ship",
- "drink pitcher",
- "block plane",
- "planetarium",
- "plastic bag",
- "plate rack",
- "farm plow",
- "plunger",
- "Polaroid camera",
- "pole",
- "police van",
- "poncho",
- "pool table",
- "soda bottle",
- "plant pot",
- "potter's wheel",
- "power drill",
- "prayer rug",
- "printer",
- "prison",
- "missile",
- "projector",
- "hockey puck",
- "punching bag",
- "purse",
- "quill",
- "quilt",
- "race car",
- "racket",
- "radiator",
- "radio",
- "radio telescope",
- "rain barrel",
- "recreational vehicle",
- "fishing casting reel",
- "reflex camera",
- "refrigerator",
- "remote control",
- "restaurant",
- "revolver",
- "rifle",
- "rocking chair",
- "rotisserie",
- "eraser",
- "rugby ball",
- "ruler measuring stick",
- "sneaker",
- "safe",
- "safety pin",
- "salt shaker",
- "sandal",
- "sarong",
- "saxophone",
- "scabbard",
- "weighing scale",
- "school bus",
- "schooner",
- "scoreboard",
- "CRT monitor",
- "screw",
- "screwdriver",
- "seat belt",
- "sewing machine",
- "shield",
- "shoe store",
- "shoji screen / room divider",
- "shopping basket",
- "shopping cart",
- "shovel",
- "shower cap",
- "shower curtain",
- "ski",
- "balaclava ski mask",
- "sleeping bag",
- "slide rule",
- "sliding door",
- "slot machine",
- "snorkel",
- "snowmobile",
- "snowplow",
- "soap dispenser",
- "soccer ball",
- "sock",
- "solar thermal collector",
- "sombrero",
- "soup bowl",
- "keyboard space bar",
- "space heater",
- "space shuttle",
- "spatula",
- "motorboat",
- "spider web",
- "spindle",
- "sports car",
- "spotlight",
- "stage",
- "steam locomotive",
- "through arch bridge",
- "steel drum",
- "stethoscope",
- "scarf",
- "stone wall",
- "stopwatch",
- "stove",
- "strainer",
- "tram",
- "stretcher",
- "couch",
- "stupa",
- "submarine",
- "suit",
- "sundial",
- "sunglasses",
- "sunglasses",
- "sunscreen",
- "suspension bridge",
- "mop",
- "sweatshirt",
- "swim trunks / shorts",
- "swing",
- "electrical switch",
- "syringe",
- "table lamp",
- "tank",
- "tape player",
- "teapot",
- "teddy bear",
- "television",
- "tennis ball",
- "thatched roof",
- "front curtain",
- "thimble",
- "threshing machine",
- "throne",
- "tile roof",
- "toaster",
- "tobacco shop",
- "toilet seat",
- "torch",
- "totem pole",
- "tow truck",
- "toy store",
- "tractor",
- "semi-trailer truck",
- "tray",
- "trench coat",
- "tricycle",
- "trimaran",
- "tripod",
- "triumphal arch",
- "trolleybus",
- "trombone",
- "hot tub",
- "turnstile",
- "typewriter keyboard",
- "umbrella",
- "unicycle",
- "upright piano",
- "vacuum cleaner",
- "vase",
- "vaulted or arched ceiling",
- "velvet fabric",
- "vending machine",
- "vestment",
- "viaduct",
- "violin",
- "volleyball",
- "waffle iron",
- "wall clock",
- "wallet",
- "wardrobe",
- "military aircraft",
- "sink",
- "washing machine",
- "water bottle",
- "water jug",
- "water tower",
- "whiskey jug",
- "whistle",
- "hair wig",
- "window screen",
- "window shade",
- "Windsor tie",
- "wine bottle",
- "airplane wing",
- "wok",
- "wooden spoon",
- "wool",
- "split-rail fence",
- "shipwreck",
- "sailboat",
- "yurt",
- "website",
- "comic book",
- "crossword",
- "traffic or street sign",
- "traffic light",
- "dust jacket",
- "menu",
- "plate",
- "guacamole",
- "consomme",
- "hot pot",
- "trifle",
- "ice cream",
- "popsicle",
- "baguette",
- "bagel",
- "pretzel",
- "cheeseburger",
- "hot dog",
- "mashed potatoes",
- "cabbage",
- "broccoli",
- "cauliflower",
- "zucchini",
- "spaghetti squash",
- "acorn squash",
- "butternut squash",
- "cucumber",
- "artichoke",
- "bell pepper",
- "cardoon",
- "mushroom",
- "Granny Smith apple",
- "strawberry",
- "orange",
- "lemon",
- "fig",
- "pineapple",
- "banana",
- "jackfruit",
- "cherimoya (custard apple)",
- "pomegranate",
- "hay",
- "carbonara",
- "chocolate syrup",
- "dough",
- "meatloaf",
- "pizza",
- "pot pie",
- "burrito",
- "red wine",
- "espresso",
- "tea cup",
- "eggnog",
- "mountain",
- "bubble",
- "cliff",
- "coral reef",
- "geyser",
- "lakeshore",
- "promontory",
- "sandbar",
- "beach",
- "valley",
- "volcano",
- "baseball player",
- "bridegroom",
- "scuba diver",
- "rapeseed",
- "daisy",
- "yellow lady's slipper",
- "corn",
- "acorn",
- "rose hip",
- "horse chestnut seed",
- "coral fungus",
- "agaric",
- "gyromitra",
- "stinkhorn mushroom",
- "earth star fungus",
- "hen of the woods mushroom",
- "bolete",
- "corn cob",
- "toilet paper",
-]
-
-
-openai_imagenet_template = [
- lambda c: f"a bad photo of a {c}.",
- lambda c: f"a photo of many {c}.",
- lambda c: f"a sculpture of a {c}.",
- lambda c: f"a photo of the hard to see {c}.",
- lambda c: f"a low resolution photo of the {c}.",
- lambda c: f"a rendering of a {c}.",
- lambda c: f"graffiti of a {c}.",
- lambda c: f"a bad photo of the {c}.",
- lambda c: f"a cropped photo of the {c}.",
- lambda c: f"a tattoo of a {c}.",
- lambda c: f"the embroidered {c}.",
- lambda c: f"a photo of a hard to see {c}.",
- lambda c: f"a bright photo of a {c}.",
- lambda c: f"a photo of a clean {c}.",
- lambda c: f"a photo of a dirty {c}.",
- lambda c: f"a dark photo of the {c}.",
- lambda c: f"a drawing of a {c}.",
- lambda c: f"a photo of my {c}.",
- lambda c: f"the plastic {c}.",
- lambda c: f"a photo of the cool {c}.",
- lambda c: f"a close-up photo of a {c}.",
- lambda c: f"a black and white photo of the {c}.",
- lambda c: f"a painting of the {c}.",
- lambda c: f"a painting of a {c}.",
- lambda c: f"a pixelated photo of the {c}.",
- lambda c: f"a sculpture of the {c}.",
- lambda c: f"a bright photo of the {c}.",
- lambda c: f"a cropped photo of a {c}.",
- lambda c: f"a plastic {c}.",
- lambda c: f"a photo of the dirty {c}.",
- lambda c: f"a jpeg corrupted photo of a {c}.",
- lambda c: f"a blurry photo of the {c}.",
- lambda c: f"a photo of the {c}.",
- lambda c: f"a good photo of the {c}.",
- lambda c: f"a rendering of the {c}.",
- lambda c: f"a {c} in a video game.",
- lambda c: f"a photo of one {c}.",
- lambda c: f"a doodle of a {c}.",
- lambda c: f"a close-up photo of the {c}.",
- lambda c: f"a photo of a {c}.",
- lambda c: f"the origami {c}.",
- lambda c: f"the {c} in a video game.",
- lambda c: f"a sketch of a {c}.",
- lambda c: f"a doodle of the {c}.",
- lambda c: f"a origami {c}.",
- lambda c: f"a low resolution photo of a {c}.",
- lambda c: f"the toy {c}.",
- lambda c: f"a rendition of the {c}.",
- lambda c: f"a photo of the clean {c}.",
- lambda c: f"a photo of a large {c}.",
- lambda c: f"a rendition of a {c}.",
- lambda c: f"a photo of a nice {c}.",
- lambda c: f"a photo of a weird {c}.",
- lambda c: f"a blurry photo of a {c}.",
- lambda c: f"a cartoon {c}.",
- lambda c: f"art of a {c}.",
- lambda c: f"a sketch of the {c}.",
- lambda c: f"a embroidered {c}.",
- lambda c: f"a pixelated photo of a {c}.",
- lambda c: f"itap of the {c}.",
- lambda c: f"a jpeg corrupted photo of the {c}.",
- lambda c: f"a good photo of a {c}.",
- lambda c: f"a plushie {c}.",
- lambda c: f"a photo of the nice {c}.",
- lambda c: f"a photo of the small {c}.",
- lambda c: f"a photo of the weird {c}.",
- lambda c: f"the cartoon {c}.",
- lambda c: f"art of the {c}.",
- lambda c: f"a drawing of the {c}.",
- lambda c: f"a photo of the large {c}.",
- lambda c: f"a black and white photo of a {c}.",
- lambda c: f"the plushie {c}.",
- lambda c: f"a dark photo of a {c}.",
- lambda c: f"itap of a {c}.",
- lambda c: f"graffiti of the {c}.",
- lambda c: f"a toy {c}.",
- lambda c: f"itap of my {c}.",
- lambda c: f"a photo of a cool {c}.",
- lambda c: f"a photo of a small {c}.",
- lambda c: f"a tattoo of the {c}.",
-]
diff --git a/spaces/AIWaves/SOP_Generation-single/__init__.py b/spaces/AIWaves/SOP_Generation-single/__init__.py
deleted file mode 100644
index 69b468b54240b0a357eac1ba7573971cf65b412c..0000000000000000000000000000000000000000
--- a/spaces/AIWaves/SOP_Generation-single/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from .evolve import *
-from .SOP import *
-from .State import *
-from .utils import *
\ No newline at end of file
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet152_8xb16_cifar10.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet152_8xb16_cifar10.py
deleted file mode 100644
index 3f307b6aa81661558b8308094de6e8327d08c830..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet152_8xb16_cifar10.py
+++ /dev/null
@@ -1,5 +0,0 @@
-_base_ = [
- '../_base_/models/resnet152_cifar.py',
- '../_base_/datasets/cifar10_bs16.py',
- '../_base_/schedules/cifar10_bs128.py', '../_base_/default_runtime.py'
-]
diff --git a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/modules/activations.py b/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/modules/activations.py
deleted file mode 100644
index 8bd6f2917a56d72db56555d0ff54b2311bc21778..0000000000000000000000000000000000000000
--- a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/modules/activations.py
+++ /dev/null
@@ -1,96 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torch.nn as nn
-from torch import Tensor
-from typing import Union, Callable
-
-
-class CustomGLU(nn.Module):
- """Custom Gated Linear Unit activation.
- Applies a modified gated linear unit :math:`a * f(b)` where :math:`a` is the first half
- of the input matrices, :math:`b` is the second half, and :math:`f` is a provided activation
- function (i.e. sigmoid, swish, etc.).
-
- Args:
- activation (nn.Module): The custom activation to apply in the Gated Linear Unit
- dim (int): the dimension on which to split the input. Default: -1
-
- Shape:
- - Input: :math:`(\ast_1, N, \ast_2)` where `*` means, any number of additional
- dimensions
- - Output: :math:`(\ast_1, M, \ast_2)` where :math:`M=N/2`
-
- Examples::
- >>> m = CustomGLU(nn.Sigmoid())
- >>> input = torch.randn(4, 2)
- >>> output = m(input)
- """
- def __init__(self, activation: nn.Module, dim: int = -1):
- super(CustomGLU, self).__init__()
- self.dim = dim
- self.activation = activation
-
- def forward(self, x: Tensor):
- assert x.shape[self.dim] % 2 == 0 # M = N / 2
- a, b = torch.chunk(x, 2, dim=self.dim)
- return a * self.activation(b)
-
-
-class SwiGLU(CustomGLU):
- """SiLU Gated Linear Unit activation.
- Applies SiLU Gated Linear Unit :math:`a * SiLU(b)` where :math:`a` is
- the first half of the input matrices, :math:`b` is the second half.
-
- Args:
- dim (int): the dimension on which to split the input. Default: -1
- """
- def __init__(self, dim: int = -1):
- super(SwiGLU, self).__init__(nn.SiLU(), dim)
-
-
-class GeGLU(CustomGLU):
- """GeLU Gated Linear Unit activation.
- Applies GeLU Gated Linear Unit :math:`a * GELU(b)` where :math:`a` is
- the first half of the input matrices, :math:`b` is the second half.
-
- Args:
- dim (int): the dimension on which to split the input. Default: -1
- """
- def __init__(self, dim: int = -1):
- super(GeGLU, self).__init__(nn.GELU(), dim)
-
-
-class ReGLU(CustomGLU):
- """ReLU Gated Linear Unit activation.
- Applies ReLU Gated Linear Unit :math:`a * ReLU(b)` where :math:`a` is
- the first half of the input matrices, :math:`b` is the second half.
-
- Args:
- dim (int): the dimension on which to split the input. Default: -1
- """
- def __init__(self, dim: int = -1):
- super(ReGLU, self).__init__(nn.ReLU(), dim)
-
-
-def get_activation_fn(
- activation: Union[str, Callable[[Tensor], Tensor]]
-) -> Union[str, Callable[[Tensor], Tensor]]:
- """Helper function to map an activation string to the activation class.
- If the supplied activation is not a string that is recognized, the activation is passed back.
-
- Args:
- activation (Union[str, Callable[[Tensor], Tensor]]): Activation to check
- """
- if isinstance(activation, str):
- if activation == "reglu":
- return ReGLU()
- elif activation == "geglu":
- return GeGLU()
- elif activation == "swiglu":
- return SwiGLU()
- return activation
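For reference, a minimal usage sketch of the GLU variants removed above (the import path mirrors the file location in the tree; torch is the only dependency):

```python
# Minimal sketch: the GLU variants split the input in half along `dim` and gate one half.
import torch
from audiocraft.modules.activations import SwiGLU, get_activation_fn

x = torch.randn(4, 8)               # the split dimension (last, by default) must be even
act = get_activation_fn("swiglu")   # the string maps to a SwiGLU() instance
y = act(x)
print(type(act).__name__, y.shape)  # SwiGLU torch.Size([4, 4]) -- output size is N / 2
```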
diff --git a/spaces/Abbasghanbari/Abo/README.md b/spaces/Abbasghanbari/Abo/README.md
deleted file mode 100644
index 727d719dd86ab25ffdfe12b0e928c7aae2be45a3..0000000000000000000000000000000000000000
--- a/spaces/Abbasghanbari/Abo/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Abo
-emoji: 💻
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Acytoo.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Acytoo.py
deleted file mode 100644
index d36ca6da22ddfa43690abdd0db27e6f971320f93..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Acytoo.py
+++ /dev/null
@@ -1,51 +0,0 @@
-from __future__ import annotations
-
-from aiohttp import ClientSession
-
-from ..typing import AsyncGenerator
-from .base_provider import AsyncGeneratorProvider
-
-
-class Acytoo(AsyncGeneratorProvider):
- url = 'https://chat.acytoo.com'
- working = True
- supports_gpt_35_turbo = True
-
- @classmethod
- async def create_async_generator(
- cls,
- model: str,
- messages: list[dict[str, str]],
- proxy: str = None,
- **kwargs
- ) -> AsyncGenerator:
-
- async with ClientSession(
- headers=_create_header()
- ) as session:
- async with session.post(
- cls.url + '/api/completions',
- proxy=proxy,
- json=_create_payload(messages, **kwargs)
- ) as response:
- response.raise_for_status()
- async for stream in response.content.iter_any():
- if stream:
- yield stream.decode()
-
-
-def _create_header():
- return {
- 'accept': '*/*',
- 'content-type': 'application/json',
- }
-
-
-def _create_payload(messages: list[dict[str, str]], temperature: float = 0.5, **kwargs):
- return {
- 'key' : '',
- 'model' : 'gpt-3.5-turbo',
- 'messages' : messages,
- 'temperature' : temperature,
- 'password' : ''
- }
\ No newline at end of file
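A hedged sketch of how a provider with this interface is typically driven (the import path mirrors the file location above; the remote endpoint may no longer respond, so treat this purely as an illustration of the async-generator contract):

```python
# Illustrative driver for the async-generator provider removed above.
import asyncio
from g4f.Provider.Providers.Acytoo import Acytoo

async def main():
    messages = [{"role": "user", "content": "Hello"}]
    # create_async_generator yields decoded response chunks as they stream in
    async for chunk in Acytoo.create_async_generator("gpt-3.5-turbo", messages):
        print(chunk, end="", flush=True)

asyncio.run(main())
```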
diff --git a/spaces/Adapter/CoAdapter/ldm/modules/image_degradation/__init__.py b/spaces/Adapter/CoAdapter/ldm/modules/image_degradation/__init__.py
deleted file mode 100644
index 7836cada81f90ded99c58d5942eea4c3477f58fc..0000000000000000000000000000000000000000
--- a/spaces/Adapter/CoAdapter/ldm/modules/image_degradation/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from ldm.modules.image_degradation.bsrgan import degradation_bsrgan_variant as degradation_fn_bsr
-from ldm.modules.image_degradation.bsrgan_light import degradation_bsrgan_variant as degradation_fn_bsr_light
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/actions/SwapChess.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/actions/SwapChess.js
deleted file mode 100644
index 0bfb6c65ff1529561a56a741650a3895407874f1..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/actions/SwapChess.js
+++ /dev/null
@@ -1,30 +0,0 @@
-var SwapChess = function (chess1, chess2, board, bejeweled) {
- var tileXYZ1 = board.chessToTileXYZ(chess1);
- var tileXYZ2 = board.chessToTileXYZ(chess2);
- var tileX1 = tileXYZ1.x,
- tileY1 = tileXYZ1.y,
- tileX2 = tileXYZ2.x,
- tileY2 = tileXYZ2.y,
- tileZ = tileXYZ1.z;
-
- // TileZ of chess1 and chess2 are the same, change tileZ of chess2 to a different value
- board.setChessTileZ(chess2, `#${tileZ}`);
-
- // Move chess1 to tileXYZ2, chess2 to tileXYZ1
- var moveTo1 = bejeweled.getChessMoveTo(chess1);
- var moveTo2 = bejeweled.getChessMoveTo(chess2);
- moveTo1.moveTo(tileX2, tileY2);
- moveTo2.moveTo(tileX1, tileY1);
-
- // Change tileZ of chess2 back
- board.setChessTileZ(chess2, tileZ);
-
- if (moveTo1.isRunning) {
- bejeweled.waitEvent(moveTo1, 'complete');
- }
- if (moveTo2.isRunning) {
- bejeweled.waitEvent(moveTo2, 'complete');
- }
-};
-
-export default SwapChess;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridtable/input/PressCell.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridtable/input/PressCell.js
deleted file mode 100644
index 8590d442600cf74a4c0300a2bd7c9e662535f94a..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridtable/input/PressCell.js
+++ /dev/null
@@ -1,22 +0,0 @@
-import Press from '../../press/Press.js';
-import EmitCellEvent from './EmitCellEvent.js';
-
-const GetValue = Phaser.Utils.Objects.GetValue;
-
-var PressCell = function (table, tableConfig) {
- var pressConfig = GetValue(tableConfig, 'press', undefined);
- if (pressConfig === false) {
- return;
- }
-
- table._press = new Press(table, pressConfig);
- table._press
- .on('pressstart', function (press, gameObject, lastPointer) {
- EmitCellEvent(this.eventEmitter, 'cell.pressstart', table, press.worldX, press.worldY, lastPointer);
- }, this)
- .on('pressend', function (press, gameObject, lastPointer) {
- EmitCellEvent(this.eventEmitter, 'cell.pressend', table, press.worldX, press.worldY, lastPointer);
- }, this)
-};
-
-export default PressCell;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/scrollablepanel/scrollableblock/Methods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/scrollablepanel/scrollableblock/Methods.js
deleted file mode 100644
index cc2e514004b6a143b96fb541f03084267476153f..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/scrollablepanel/scrollableblock/Methods.js
+++ /dev/null
@@ -1,21 +0,0 @@
-import GetChildrenWidth from './GetChildrenWidth.js';
-import GetChildrenHeight from './GetChildrenHeight.js';
-import GetChildrenSizers from './GetChildrenSizers.js';
-import ResetChildPosition from './ResetChildPosition.js';
-import LayoutChildren from './LayoutChildren.js';
-import ChildrenMaskMethods from '../../../../plugins/gameobjects/container/containerlite/mask/ChildrenMaskMethods.js';
-
-var methods = {
- getChildrenWidth: GetChildrenWidth,
- getChildrenHeight: GetChildrenHeight,
- getChildrenSizers: GetChildrenSizers,
- resetChildPosition: ResetChildPosition,
- layoutChildren: LayoutChildren
-};
-
-Object.assign(
- methods,
- ChildrenMaskMethods
-);
-
-export default methods;
\ No newline at end of file
diff --git a/spaces/Ajitku/BTMLabs/README.md b/spaces/Ajitku/BTMLabs/README.md
deleted file mode 100644
index 963f4b1a10758c63fcbea141252e3b948a695a9c..0000000000000000000000000000000000000000
--- a/spaces/Ajitku/BTMLabs/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: BTMLabs
-emoji: 📊
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.41.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Akshay-More-007/starcoder/README.md b/spaces/Akshay-More-007/starcoder/README.md
deleted file mode 100644
index 511b956c2f0163c021557d9fc30cc054e5cd9947..0000000000000000000000000000000000000000
--- a/spaces/Akshay-More-007/starcoder/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Starcoder
-emoji: 👁
-colorFrom: purple
-colorTo: green
-sdk: streamlit
-sdk_version: 1.21.0
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/quicktour.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/quicktour.md
deleted file mode 100644
index e0676ce2a9ca169322c79c17c4cfd224b6163f43..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/quicktour.md
+++ /dev/null
@@ -1,123 +0,0 @@
-
-
-# 훑어보기
-
-🧨 Diffusers로 빠르게 시작하고 실행하세요!
-이 훑어보기는 여러분이 개발자, 일반사용자 상관없이 시작하는 데 도움을 주며, 추론을 위해 [`DiffusionPipeline`] 사용하는 방법을 보여줍니다.
-
-시작하기에 앞서서, 필요한 모든 라이브러리가 설치되어 있는지 확인하세요:
-
-```bash
-pip install --upgrade diffusers accelerate transformers
-```
-
-- [`accelerate`](https://huggingface.co/docs/accelerate/index)은 추론 및 학습을 위한 모델 불러오기 속도를 높입니다.
-- [`transformers`](https://huggingface.co/docs/transformers/index)는 [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview)과 같이 가장 널리 사용되는 확산 모델을 실행하기 위해 필요합니다.
-
-## DiffusionPipeline
-
-[`DiffusionPipeline`]은 추론을 위해 사전학습된 확산 시스템을 사용하는 가장 쉬운 방법입니다. 다양한 양식의 많은 작업에 [`DiffusionPipeline`]을 바로 사용할 수 있습니다. 지원되는 작업은 아래의 표를 참고하세요:
-
-| **Task** | **Description** | **Pipeline** |
-|------------------------------|--------------------------------------------------------------------------------------------------------------|-----------------|
-| Unconditional Image Generation | 가우시안 노이즈에서 이미지 생성 | [unconditional_image_generation](./using-diffusers/unconditional_image_generation) |
-| Text-Guided Image Generation | 텍스트 프롬프트로 이미지 생성 | [conditional_image_generation](./using-diffusers/conditional_image_generation) |
-| Text-Guided Image-to-Image Translation | 텍스트 프롬프트에 따라 이미지 조정 | [img2img](./using-diffusers/img2img) |
-| Text-Guided Image-Inpainting | 마스크 및 텍스트 프롬프트가 주어진 이미지의 마스킹된 부분을 채우기 | [inpaint](./using-diffusers/inpaint) |
-| Text-Guided Depth-to-Image Translation | 깊이 추정을 통해 구조를 유지하면서 텍스트 프롬프트에 따라 이미지의 일부를 조정 | [depth2image](./using-diffusers/depth2image) |
-
-확산 파이프라인이 다양한 작업에 대해 어떻게 작동하는지는 [**Using Diffusers**](./using-diffusers/overview)를 참고하세요.
-
-예를들어, [`DiffusionPipeline`] 인스턴스를 생성하여 시작하고, 다운로드하려는 파이프라인 체크포인트를 지정합니다.
-모든 [Diffusers' checkpoint](https://huggingface.co/models?library=diffusers&sort=downloads)에 대해 [`DiffusionPipeline`]을 사용할 수 있습니다.
-하지만, 이 가이드에서는 [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion)을 사용하여 text-to-image를 하는데 [`DiffusionPipeline`]을 사용합니다.
-
-[Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion) 기반 모델을 실행하기 전에 [license](https://huggingface.co/spaces/CompVis/stable-diffusion-license)를 주의 깊게 읽으세요.
-이는 모델의 향상된 이미지 생성 기능과 이것으로 생성될 수 있는 유해한 콘텐츠 때문입니다. 선택한 Stable Diffusion 모델(*예*: [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5))로 이동하여 라이센스를 읽으세요.
-
-다음과 같이 모델을 로드할 수 있습니다:
-
-```python
->>> from diffusers import DiffusionPipeline
-
->>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
-```
-
-[`DiffusionPipeline`]은 모든 모델링, 토큰화 및 스케줄링 구성요소를 다운로드하고 캐시합니다.
-모델은 약 14억개의 매개변수로 구성되어 있으므로 GPU에서 실행하는 것이 좋습니다.
-PyTorch에서와 마찬가지로 생성기 객체를 GPU로 옮길 수 있습니다.
-
-```python
->>> pipeline.to("cuda")
-```
-
-이제 `pipeline`을 사용할 수 있습니다:
-
-```python
->>> image = pipeline("An image of a squirrel in Picasso style").images[0]
-```
-
-출력은 기본적으로 [PIL Image object](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class)로 래핑됩니다.
-
-다음과 같이 함수를 호출하여 이미지를 저장할 수 있습니다:
-
-```python
->>> image.save("image_of_squirrel_painting.png")
-```
-
-**참고**: 다음을 통해 가중치를 다운로드하여 로컬에서 파이프라인을 사용할 수도 있습니다:
-
-```
-git lfs install
-git clone https://huggingface.co/runwayml/stable-diffusion-v1-5
-```
-
-그리고 저장된 가중치를 파이프라인에 불러옵니다.
-
-```python
->>> pipeline = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5")
-```
-
-파이프라인 실행은 동일한 모델 아키텍처이므로 위의 코드와 동일합니다.
-
-```python
->>> pipeline.to("cuda")
->>> image = pipeline("An image of a squirrel in Picasso style").images[0]
->>> image.save("image_of_squirrel_painting.png")
-```
-
-확산 시스템은 각각 장점이 있는 여러 다른 [schedulers](./api/schedulers/overview)와 함께 사용할 수 있습니다. 기본적으로 Stable Diffusion은 `PNDMScheduler`로 실행되지만 다른 스케줄러를 사용하는 방법은 매우 간단합니다. *예* [`EulerDiscreteScheduler`] 스케줄러를 사용하려는 경우, 다음과 같이 사용할 수 있습니다:
-
-```python
->>> from diffusers import EulerDiscreteScheduler
-
->>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
-
->>> # change scheduler to Euler
->>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
-```
-
-스케줄러 변경 방법에 대한 자세한 내용은 [Using Schedulers](./using-diffusers/schedulers) 가이드를 참고하세요.
-
-[Stability AI's](https://stability.ai/)의 Stable Diffusion 모델은 인상적인 이미지 생성 모델이며 텍스트에서 이미지를 생성하는 것보다 훨씬 더 많은 작업을 수행할 수 있습니다. 우리는 Stable Diffusion만을 위한 전체 문서 페이지를 제공합니다 [link](./conceptual/stable_diffusion).
-
-만약 더 적은 메모리, 더 높은 추론 속도, Mac과 같은 특정 하드웨어 또는 ONNX 런타임에서 실행되도록 Stable Diffusion을 최적화하는 방법을 알고 싶다면 최적화 페이지를 살펴보세요:
-
-- [Optimized PyTorch on GPU](./optimization/fp16)
-- [Mac OS with PyTorch](./optimization/mps)
-- [ONNX](./optimization/onnx)
-- [OpenVINO](./optimization/open_vino)
-
-확산 모델을 미세조정하거나 학습시키려면, [**training section**](./training/overview)을 살펴보세요.
-
-마지막으로, 생성된 이미지를 공개적으로 배포할 때 신중을 기해 주세요 🤗.
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion_xl/watermark.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion_xl/watermark.py
deleted file mode 100644
index 5b6e36d9f44756da494cee0b996b1871721872e7..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion_xl/watermark.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import numpy as np
-import torch
-
-from ...utils import is_invisible_watermark_available
-
-
-if is_invisible_watermark_available():
- from imwatermark import WatermarkEncoder
-
-
-# Copied from https://github.com/Stability-AI/generative-models/blob/613af104c6b85184091d42d374fef420eddb356d/scripts/demo/streamlit_helpers.py#L66
-WATERMARK_MESSAGE = 0b101100111110110010010000011110111011000110011110
-# bin(x)[2:] gives bits of x as str, use int to convert them to 0/1
-WATERMARK_BITS = [int(bit) for bit in bin(WATERMARK_MESSAGE)[2:]]
-
-
-class StableDiffusionXLWatermarker:
- def __init__(self):
- self.watermark = WATERMARK_BITS
- self.encoder = WatermarkEncoder()
-
- self.encoder.set_watermark("bits", self.watermark)
-
- def apply_watermark(self, images: torch.FloatTensor):
- # can't encode images that are smaller than 256
- if images.shape[-1] < 256:
- return images
-
- images = (255 * (images / 2 + 0.5)).cpu().permute(0, 2, 3, 1).float().numpy()
-
- images = [self.encoder.encode(image, "dwtDct") for image in images]
-
- images = torch.from_numpy(np.array(images)).permute(0, 3, 1, 2)
-
- images = torch.clamp(2 * (images / 255 - 0.5), min=-1.0, max=1.0)
- return images
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/vfnet/README.md b/spaces/Andy1621/uniformer_image_detection/configs/vfnet/README.md
deleted file mode 100644
index d1a94d155149250e76d922185763c13d64509a62..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/vfnet/README.md
+++ /dev/null
@@ -1,43 +0,0 @@
-# VarifocalNet: An IoU-aware Dense Object Detector
-
-## Introduction
-
-[ALGORITHM]
-
-**VarifocalNet (VFNet)** learns to predict the IoU-aware classification score which mixes the object presence confidence and localization accuracy together as the detection score for a bounding box. The learning is supervised by the proposed Varifocal Loss (VFL), based on a new star-shaped bounding box feature representation (the features at nine yellow sampling points). Given the new representation, the object localization accuracy is further improved by refining the initially regressed bounding box. The full paper is available at: [https://arxiv.org/abs/2008.13367](https://arxiv.org/abs/2008.13367).
-
-
-
-
-Figure: Learning to Predict the IoU-aware Classification Score.
-"""
-
-summarize_prompt = "你是谁?我们刚才聊了什么?"  # prompt used when summarizing the conversation
-
-MODELS = [
- "gpt-3.5-turbo",
- "gpt-3.5-turbo-0301",
- "gpt-4",
- "gpt-4-0314",
- "gpt-4-32k",
- "gpt-4-32k-0314",
-]  # available model choices
-
-MODEL_SOFT_TOKEN_LIMIT = {
- "gpt-3.5-turbo": {
- "streaming": 3500,
- "all": 3500
- },
- "gpt-3.5-turbo-0301": {
- "streaming": 3500,
- "all": 3500
- },
- "gpt-4": {
- "streaming": 7500,
- "all": 7500
- },
- "gpt-4-0314": {
- "streaming": 7500,
- "all": 7500
- },
- "gpt-4-32k": {
- "streaming": 31000,
- "all": 31000
- },
- "gpt-4-32k-0314": {
- "streaming": 31000,
- "all": 31000
- }
-}
-
-REPLY_LANGUAGES = [
- "简体中文",
- "繁體中文",
- "English",
- "日本語",
- "Español",
- "Français",
- "Deutsch",
- "跟随问题语言(不稳定)"
-]
-
-
-WEBSEARCH_PTOMPT_TEMPLATE = """\
-Web search results:
-
-{web_results}
-Current date: {current_date}
-
-Instructions: Using the provided web search results, write a comprehensive reply to the given query. Make sure to cite results using [[number](URL)] notation after the reference. If the provided search results refer to multiple subjects with the same name, write separate answers for each subject.
-Query: {query}
-Reply in {reply_language}
-"""
-
-PROMPT_TEMPLATE = """\
-Context information is below.
----------------------
-{context_str}
----------------------
-Current date: {current_date}.
-Using the provided context information, write a comprehensive reply to the given query.
-Make sure to cite results using [number] notation after the reference.
-If the provided context information refer to multiple subjects with the same name, write separate answers for each subject.
-Use prior knowledge only if the given context didn't provide enough information.
-Answer the question: {query_str}
-Reply in {reply_language}
-"""
-
-REFINE_TEMPLATE = """\
-The original question is as follows: {query_str}
-We have provided an existing answer: {existing_answer}
-We have the opportunity to refine the existing answer
-(only if needed) with some more context below.
-------------
-{context_msg}
-------------
-Given the new context, refine the original answer to better
-Reply in {reply_language}
-If the context isn't useful, return the original answer.
-"""
-
-ALREADY_CONVERTED_MARK = ""
-
-small_and_beautiful_theme = gr.themes.Soft(
- primary_hue=gr.themes.Color(
- c50="#02C160",
- c100="rgba(2, 193, 96, 0.2)",
- c200="#02C160",
- c300="rgba(2, 193, 96, 0.32)",
- c400="rgba(2, 193, 96, 0.32)",
- c500="rgba(2, 193, 96, 1.0)",
- c600="rgba(2, 193, 96, 1.0)",
- c700="rgba(2, 193, 96, 0.32)",
- c800="rgba(2, 193, 96, 0.32)",
- c900="#02C160",
- c950="#02C160",
- ),
- secondary_hue=gr.themes.Color(
- c50="#576b95",
- c100="#576b95",
- c200="#576b95",
- c300="#576b95",
- c400="#576b95",
- c500="#576b95",
- c600="#576b95",
- c700="#576b95",
- c800="#576b95",
- c900="#576b95",
- c950="#576b95",
- ),
- neutral_hue=gr.themes.Color(
- name="gray",
- c50="#f9fafb",
- c100="#f3f4f6",
- c200="#e5e7eb",
- c300="#d1d5db",
- c400="#B2B2B2",
- c500="#808080",
- c600="#636363",
- c700="#515151",
- c800="#393939",
- c900="#272727",
- c950="#171717",
- ),
- radius_size=gr.themes.sizes.radius_sm,
- ).set(
- button_primary_background_fill="#06AE56",
- button_primary_background_fill_dark="#06AE56",
- button_primary_background_fill_hover="#07C863",
- button_primary_border_color="#06AE56",
- button_primary_border_color_dark="#06AE56",
- button_primary_text_color="#FFFFFF",
- button_primary_text_color_dark="#FFFFFF",
- button_secondary_background_fill="#F2F2F2",
- button_secondary_background_fill_dark="#2B2B2B",
- button_secondary_text_color="#393939",
- button_secondary_text_color_dark="#FFFFFF",
- # background_fill_primary="#F7F7F7",
- # background_fill_primary_dark="#1F1F1F",
- block_title_text_color="*primary_500",
- block_title_background_fill="*primary_100",
- input_background_fill="#F6F6F6",
- )
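For context, a minimal sketch of how the theme and model list above would be wired into a Gradio UI (the `presets` module name is an assumption based on this file's role; any Gradio 3.x release with theme support should accept it):

```python
# Sketch: plugging the custom Soft-based theme and model list into a Blocks app.
import gradio as gr
from presets import MODELS, small_and_beautiful_theme  # hypothetical import name

with gr.Blocks(theme=small_and_beautiful_theme) as demo:
    model_select = gr.Dropdown(choices=MODELS, value=MODELS[0], label="Model")
    chatbot = gr.Chatbot()

demo.launch()
```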
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/cli/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/cli/__init__.py
deleted file mode 100644
index e589bb917e23823e25f9fff7e0849c4d6d4a62bc..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/cli/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-"""Subpackage containing all of pip's command line interface related code
-"""
-
-# This file intentionally does not import submodules
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/more_itertools/recipes.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/more_itertools/recipes.py
deleted file mode 100644
index a2596423a4c3dbd15a357241477a0af0a531f9ec..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/more_itertools/recipes.py
+++ /dev/null
@@ -1,698 +0,0 @@
-"""Imported from the recipes section of the itertools documentation.
-
-All functions taken from the recipes section of the itertools library docs
-[1]_.
-Some backward-compatible usability improvements have been made.
-
-.. [1] http://docs.python.org/library/itertools.html#recipes
-
-"""
-import warnings
-from collections import deque
-from itertools import (
- chain,
- combinations,
- count,
- cycle,
- groupby,
- islice,
- repeat,
- starmap,
- tee,
- zip_longest,
-)
-import operator
-from random import randrange, sample, choice
-
-__all__ = [
- 'all_equal',
- 'before_and_after',
- 'consume',
- 'convolve',
- 'dotproduct',
- 'first_true',
- 'flatten',
- 'grouper',
- 'iter_except',
- 'ncycles',
- 'nth',
- 'nth_combination',
- 'padnone',
- 'pad_none',
- 'pairwise',
- 'partition',
- 'powerset',
- 'prepend',
- 'quantify',
- 'random_combination_with_replacement',
- 'random_combination',
- 'random_permutation',
- 'random_product',
- 'repeatfunc',
- 'roundrobin',
- 'sliding_window',
- 'tabulate',
- 'tail',
- 'take',
- 'triplewise',
- 'unique_everseen',
- 'unique_justseen',
-]
-
-
-def take(n, iterable):
- """Return first *n* items of the iterable as a list.
-
- >>> take(3, range(10))
- [0, 1, 2]
-
- If there are fewer than *n* items in the iterable, all of them are
- returned.
-
- >>> take(10, range(3))
- [0, 1, 2]
-
- """
- return list(islice(iterable, n))
-
-
-def tabulate(function, start=0):
- """Return an iterator over the results of ``func(start)``,
- ``func(start + 1)``, ``func(start + 2)``...
-
- *func* should be a function that accepts one integer argument.
-
- If *start* is not specified it defaults to 0. It will be incremented each
- time the iterator is advanced.
-
- >>> square = lambda x: x ** 2
- >>> iterator = tabulate(square, -3)
- >>> take(4, iterator)
- [9, 4, 1, 0]
-
- """
- return map(function, count(start))
-
-
-def tail(n, iterable):
- """Return an iterator over the last *n* items of *iterable*.
-
- >>> t = tail(3, 'ABCDEFG')
- >>> list(t)
- ['E', 'F', 'G']
-
- """
- return iter(deque(iterable, maxlen=n))
-
-
-def consume(iterator, n=None):
- """Advance *iterable* by *n* steps. If *n* is ``None``, consume it
- entirely.
-
- Efficiently exhausts an iterator without returning values. Defaults to
- consuming the whole iterator, but an optional second argument may be
- provided to limit consumption.
-
- >>> i = (x for x in range(10))
- >>> next(i)
- 0
- >>> consume(i, 3)
- >>> next(i)
- 4
- >>> consume(i)
- >>> next(i)
- Traceback (most recent call last):
- File "", line 1, in
- StopIteration
-
- If the iterator has fewer items remaining than the provided limit, the
- whole iterator will be consumed.
-
- >>> i = (x for x in range(3))
- >>> consume(i, 5)
- >>> next(i)
- Traceback (most recent call last):
- File "", line 1, in
- StopIteration
-
- """
- # Use functions that consume iterators at C speed.
- if n is None:
- # feed the entire iterator into a zero-length deque
- deque(iterator, maxlen=0)
- else:
- # advance to the empty slice starting at position n
- next(islice(iterator, n, n), None)
-
-
-def nth(iterable, n, default=None):
- """Returns the nth item or a default value.
-
- >>> l = range(10)
- >>> nth(l, 3)
- 3
- >>> nth(l, 20, "zebra")
- 'zebra'
-
- """
- return next(islice(iterable, n, None), default)
-
-
-def all_equal(iterable):
- """
- Returns ``True`` if all the elements are equal to each other.
-
- >>> all_equal('aaaa')
- True
- >>> all_equal('aaab')
- False
-
- """
- g = groupby(iterable)
- return next(g, True) and not next(g, False)
-
-
-def quantify(iterable, pred=bool):
- """Return the how many times the predicate is true.
-
- >>> quantify([True, False, True])
- 2
-
- """
- return sum(map(pred, iterable))
-
-
-def pad_none(iterable):
- """Returns the sequence of elements and then returns ``None`` indefinitely.
-
- >>> take(5, pad_none(range(3)))
- [0, 1, 2, None, None]
-
- Useful for emulating the behavior of the built-in :func:`map` function.
-
- See also :func:`padded`.
-
- """
- return chain(iterable, repeat(None))
-
-
-padnone = pad_none
-
-
-def ncycles(iterable, n):
- """Returns the sequence elements *n* times
-
- >>> list(ncycles(["a", "b"], 3))
- ['a', 'b', 'a', 'b', 'a', 'b']
-
- """
- return chain.from_iterable(repeat(tuple(iterable), n))
-
-
-def dotproduct(vec1, vec2):
- """Returns the dot product of the two iterables.
-
- >>> dotproduct([10, 10], [20, 20])
- 400
-
- """
- return sum(map(operator.mul, vec1, vec2))
-
-
-def flatten(listOfLists):
- """Return an iterator flattening one level of nesting in a list of lists.
-
- >>> list(flatten([[0, 1], [2, 3]]))
- [0, 1, 2, 3]
-
- See also :func:`collapse`, which can flatten multiple levels of nesting.
-
- """
- return chain.from_iterable(listOfLists)
-
-
-def repeatfunc(func, times=None, *args):
- """Call *func* with *args* repeatedly, returning an iterable over the
- results.
-
- If *times* is specified, the iterable will terminate after that many
- repetitions:
-
- >>> from operator import add
- >>> times = 4
- >>> args = 3, 5
- >>> list(repeatfunc(add, times, *args))
- [8, 8, 8, 8]
-
- If *times* is ``None`` the iterable will not terminate:
-
- >>> from random import randrange
- >>> times = None
- >>> args = 1, 11
- >>> take(6, repeatfunc(randrange, times, *args)) # doctest:+SKIP
- [2, 4, 8, 1, 8, 4]
-
- """
- if times is None:
- return starmap(func, repeat(args))
- return starmap(func, repeat(args, times))
-
-
-def _pairwise(iterable):
- """Returns an iterator of paired items, overlapping, from the original
-
- >>> take(4, pairwise(count()))
- [(0, 1), (1, 2), (2, 3), (3, 4)]
-
- On Python 3.10 and above, this is an alias for :func:`itertools.pairwise`.
-
- """
- a, b = tee(iterable)
- next(b, None)
- yield from zip(a, b)
-
-
-try:
- from itertools import pairwise as itertools_pairwise
-except ImportError:
- pairwise = _pairwise
-else:
-
- def pairwise(iterable):
- yield from itertools_pairwise(iterable)
-
- pairwise.__doc__ = _pairwise.__doc__
-
-
-def grouper(iterable, n, fillvalue=None):
- """Collect data into fixed-length chunks or blocks.
-
- >>> list(grouper('ABCDEFG', 3, 'x'))
- [('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'x', 'x')]
-
- """
- if isinstance(iterable, int):
- warnings.warn(
- "grouper expects iterable as first parameter", DeprecationWarning
- )
- n, iterable = iterable, n
- args = [iter(iterable)] * n
- return zip_longest(fillvalue=fillvalue, *args)
-
-
-def roundrobin(*iterables):
- """Yields an item from each iterable, alternating between them.
-
- >>> list(roundrobin('ABC', 'D', 'EF'))
- ['A', 'D', 'E', 'B', 'F', 'C']
-
- This function produces the same output as :func:`interleave_longest`, but
- may perform better for some inputs (in particular when the number of
- iterables is small).
-
- """
- # Recipe credited to George Sakkis
- pending = len(iterables)
- nexts = cycle(iter(it).__next__ for it in iterables)
- while pending:
- try:
- for next in nexts:
- yield next()
- except StopIteration:
- pending -= 1
- nexts = cycle(islice(nexts, pending))
-
-
-def partition(pred, iterable):
- """
- Returns a 2-tuple of iterables derived from the input iterable.
- The first yields the items that have ``pred(item) == False``.
- The second yields the items that have ``pred(item) == True``.
-
- >>> is_odd = lambda x: x % 2 != 0
- >>> iterable = range(10)
- >>> even_items, odd_items = partition(is_odd, iterable)
- >>> list(even_items), list(odd_items)
- ([0, 2, 4, 6, 8], [1, 3, 5, 7, 9])
-
- If *pred* is None, :func:`bool` is used.
-
- >>> iterable = [0, 1, False, True, '', ' ']
- >>> false_items, true_items = partition(None, iterable)
- >>> list(false_items), list(true_items)
- ([0, False, ''], [1, True, ' '])
-
- """
- if pred is None:
- pred = bool
-
- evaluations = ((pred(x), x) for x in iterable)
- t1, t2 = tee(evaluations)
- return (
- (x for (cond, x) in t1 if not cond),
- (x for (cond, x) in t2 if cond),
- )
-
-
-def powerset(iterable):
- """Yields all possible subsets of the iterable.
-
- >>> list(powerset([1, 2, 3]))
- [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]
-
- :func:`powerset` will operate on iterables that aren't :class:`set`
- instances, so repeated elements in the input will produce repeated elements
- in the output. Use :func:`unique_everseen` on the input to avoid generating
- duplicates:
-
- >>> seq = [1, 1, 0]
- >>> list(powerset(seq))
- [(), (1,), (1,), (0,), (1, 1), (1, 0), (1, 0), (1, 1, 0)]
- >>> from more_itertools import unique_everseen
- >>> list(powerset(unique_everseen(seq)))
- [(), (1,), (0,), (1, 0)]
-
- """
- s = list(iterable)
- return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
-
-
-def unique_everseen(iterable, key=None):
- """
- Yield unique elements, preserving order.
-
- >>> list(unique_everseen('AAAABBBCCDAABBB'))
- ['A', 'B', 'C', 'D']
- >>> list(unique_everseen('ABBCcAD', str.lower))
- ['A', 'B', 'C', 'D']
-
- Sequences with a mix of hashable and unhashable items can be used.
- The function will be slower (i.e., `O(n^2)`) for unhashable items.
-
- Remember that ``list`` objects are unhashable - you can use the *key*
- parameter to transform the list to a tuple (which is hashable) to
- avoid a slowdown.
-
- >>> iterable = ([1, 2], [2, 3], [1, 2])
- >>> list(unique_everseen(iterable)) # Slow
- [[1, 2], [2, 3]]
- >>> list(unique_everseen(iterable, key=tuple)) # Faster
- [[1, 2], [2, 3]]
-
-    Similarly, you may want to convert unhashable ``set`` objects with
- ``key=frozenset``. For ``dict`` objects,
- ``key=lambda x: frozenset(x.items())`` can be used.
-
- """
- seenset = set()
- seenset_add = seenset.add
- seenlist = []
- seenlist_add = seenlist.append
- use_key = key is not None
-
- for element in iterable:
- k = key(element) if use_key else element
- try:
- if k not in seenset:
- seenset_add(k)
- yield element
- except TypeError:
- if k not in seenlist:
- seenlist_add(k)
- yield element
-
-
-def unique_justseen(iterable, key=None):
- """Yields elements in order, ignoring serial duplicates
-
- >>> list(unique_justseen('AAAABBBCCDAABBB'))
- ['A', 'B', 'C', 'D', 'A', 'B']
- >>> list(unique_justseen('ABBCcAD', str.lower))
- ['A', 'B', 'C', 'A', 'D']
-
- """
- return map(next, map(operator.itemgetter(1), groupby(iterable, key)))
-
-
-def iter_except(func, exception, first=None):
- """Yields results from a function repeatedly until an exception is raised.
-
- Converts a call-until-exception interface to an iterator interface.
- Like ``iter(func, sentinel)``, but uses an exception instead of a sentinel
- to end the loop.
-
- >>> l = [0, 1, 2]
- >>> list(iter_except(l.pop, IndexError))
- [2, 1, 0]
-
- Multiple exceptions can be specified as a stopping condition:
-
- >>> l = [1, 2, 3, '...', 4, 5, 6]
- >>> list(iter_except(lambda: 1 + l.pop(), (IndexError, TypeError)))
- [7, 6, 5]
- >>> list(iter_except(lambda: 1 + l.pop(), (IndexError, TypeError)))
- [4, 3, 2]
- >>> list(iter_except(lambda: 1 + l.pop(), (IndexError, TypeError)))
- []
-
- """
- try:
- if first is not None:
- yield first()
- while 1:
- yield func()
- except exception:
- pass
-
-
-def first_true(iterable, default=None, pred=None):
- """
- Returns the first true value in the iterable.
-
- If no true value is found, returns *default*
-
- If *pred* is not None, returns the first item for which
- ``pred(item) == True`` .
-
- >>> first_true(range(10))
- 1
- >>> first_true(range(10), pred=lambda x: x > 5)
- 6
- >>> first_true(range(10), default='missing', pred=lambda x: x > 9)
- 'missing'
-
- """
- return next(filter(pred, iterable), default)
-
-
-def random_product(*args, repeat=1):
- """Draw an item at random from each of the input iterables.
-
- >>> random_product('abc', range(4), 'XYZ') # doctest:+SKIP
- ('c', 3, 'Z')
-
- If *repeat* is provided as a keyword argument, that many items will be
- drawn from each iterable.
-
- >>> random_product('abcd', range(4), repeat=2) # doctest:+SKIP
- ('a', 2, 'd', 3)
-
-    This is equivalent to taking a random selection from
-    ``itertools.product(*args, **kwargs)``.
-
- """
- pools = [tuple(pool) for pool in args] * repeat
- return tuple(choice(pool) for pool in pools)
-
-
-def random_permutation(iterable, r=None):
- """Return a random *r* length permutation of the elements in *iterable*.
-
- If *r* is not specified or is ``None``, then *r* defaults to the length of
- *iterable*.
-
- >>> random_permutation(range(5)) # doctest:+SKIP
- (3, 4, 0, 1, 2)
-
-    This is equivalent to taking a random selection from
- ``itertools.permutations(iterable, r)``.
-
- """
- pool = tuple(iterable)
- r = len(pool) if r is None else r
- return tuple(sample(pool, r))
-
-
-def random_combination(iterable, r):
- """Return a random *r* length subsequence of the elements in *iterable*.
-
- >>> random_combination(range(5), 3) # doctest:+SKIP
- (2, 3, 4)
-
-    This is equivalent to taking a random selection from
- ``itertools.combinations(iterable, r)``.
-
- """
- pool = tuple(iterable)
- n = len(pool)
- indices = sorted(sample(range(n), r))
- return tuple(pool[i] for i in indices)
-
-
-def random_combination_with_replacement(iterable, r):
- """Return a random *r* length subsequence of elements in *iterable*,
- allowing individual elements to be repeated.
-
- >>> random_combination_with_replacement(range(3), 5) # doctest:+SKIP
- (0, 0, 1, 2, 2)
-
-    This is equivalent to taking a random selection from
- ``itertools.combinations_with_replacement(iterable, r)``.
-
- """
- pool = tuple(iterable)
- n = len(pool)
- indices = sorted(randrange(n) for i in range(r))
- return tuple(pool[i] for i in indices)
-
-
-def nth_combination(iterable, r, index):
- """Equivalent to ``list(combinations(iterable, r))[index]``.
-
- The subsequences of *iterable* that are of length *r* can be ordered
- lexicographically. :func:`nth_combination` computes the subsequence at
- sort position *index* directly, without computing the previous
- subsequences.
-
- >>> nth_combination(range(5), 3, 5)
- (0, 3, 4)
-
-    ``ValueError`` will be raised if *r* is negative or greater than the length
- of *iterable*.
- ``IndexError`` will be raised if the given *index* is invalid.
- """
- pool = tuple(iterable)
- n = len(pool)
- if (r < 0) or (r > n):
- raise ValueError
-
- c = 1
- k = min(r, n - r)
- for i in range(1, k + 1):
- c = c * (n - k + i) // i
-
- if index < 0:
- index += c
-
- if (index < 0) or (index >= c):
- raise IndexError
-
- result = []
- while r:
- c, n, r = c * r // n, n - 1, r - 1
- while index >= c:
- index -= c
- c, n = c * (n - r) // n, n - 1
- result.append(pool[-1 - n])
-
- return tuple(result)
-
-
-def prepend(value, iterator):
- """Yield *value*, followed by the elements in *iterator*.
-
- >>> value = '0'
- >>> iterator = ['1', '2', '3']
- >>> list(prepend(value, iterator))
- ['0', '1', '2', '3']
-
- To prepend multiple values, see :func:`itertools.chain`
- or :func:`value_chain`.
-
- """
- return chain([value], iterator)
-
-
-def convolve(signal, kernel):
- """Convolve the iterable *signal* with the iterable *kernel*.
-
- >>> signal = (1, 2, 3, 4, 5)
- >>> kernel = [3, 2, 1]
- >>> list(convolve(signal, kernel))
- [3, 8, 14, 20, 26, 14, 5]
-
- Note: the input arguments are not interchangeable, as the *kernel*
- is immediately consumed and stored.
-
- """
- kernel = tuple(kernel)[::-1]
- n = len(kernel)
- window = deque([0], maxlen=n) * n
- for x in chain(signal, repeat(0, n - 1)):
- window.append(x)
- yield sum(map(operator.mul, kernel, window))
-
-
-def before_and_after(predicate, it):
- """A variant of :func:`takewhile` that allows complete access to the
- remainder of the iterator.
-
- >>> it = iter('ABCdEfGhI')
- >>> all_upper, remainder = before_and_after(str.isupper, it)
- >>> ''.join(all_upper)
- 'ABC'
- >>> ''.join(remainder) # takewhile() would lose the 'd'
- 'dEfGhI'
-
- Note that the first iterator must be fully consumed before the second
- iterator can generate valid results.
- """
- it = iter(it)
- transition = []
-
- def true_iterator():
- for elem in it:
- if predicate(elem):
- yield elem
- else:
- transition.append(elem)
- return
-
- def remainder_iterator():
- yield from transition
- yield from it
-
- return true_iterator(), remainder_iterator()
-
-
-def triplewise(iterable):
- """Return overlapping triplets from *iterable*.
-
- >>> list(triplewise('ABCDE'))
- [('A', 'B', 'C'), ('B', 'C', 'D'), ('C', 'D', 'E')]
-
- """
- for (a, _), (b, c) in pairwise(pairwise(iterable)):
- yield a, b, c
-
-
-def sliding_window(iterable, n):
- """Return a sliding window of width *n* over *iterable*.
-
- >>> list(sliding_window(range(6), 4))
- [(0, 1, 2, 3), (1, 2, 3, 4), (2, 3, 4, 5)]
-
- If *iterable* has fewer than *n* items, then nothing is yielded:
-
- >>> list(sliding_window(range(3), 4))
- []
-
- For a variant with more features, see :func:`windowed`.
- """
- it = iter(iterable)
- window = deque(islice(it, n), maxlen=n)
- if len(window) == n:
- yield tuple(window)
- for x in it:
- window.append(x)
- yield tuple(window)
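As a quick end-to-end check of a few of the recipes above (using the standalone `more-itertools` package, whose public API matches this vendored copy):

```python
# Outputs match the doctests in the module above.
from more_itertools import nth_combination, partition, sliding_window

evens, odds = partition(lambda x: x % 2, range(10))
print(list(evens), list(odds))            # [0, 2, 4, 6, 8] [1, 3, 5, 7, 9]
print(list(sliding_window("ABCDE", 3)))   # [('A', 'B', 'C'), ('B', 'C', 'D'), ('C', 'D', 'E')]
print(nth_combination(range(5), 3, 5))    # (0, 3, 4)
```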
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/dev/packaging/pkg_helpers.bash b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/dev/packaging/pkg_helpers.bash
deleted file mode 100644
index ed9acb00ae8627b96c057b4493d368c7dfeda8ae..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/dev/packaging/pkg_helpers.bash
+++ /dev/null
@@ -1,76 +0,0 @@
-#!/bin/bash -e
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-# Function to retry functions that sometimes timeout or have flaky failures
-retry () {
- $* || (sleep 1 && $*) || (sleep 2 && $*) || (sleep 4 && $*) || (sleep 8 && $*)
-}
-# Install with pip a bit more robustly than the default
-pip_install() {
- retry pip install --progress-bar off "$@"
-}
-
-
-setup_cuda() {
- # Now work out the CUDA settings
- # Like other torch domain libraries, we choose common GPU architectures only.
- # See https://github.com/pytorch/pytorch/blob/master/torch/utils/cpp_extension.py
- # and https://github.com/pytorch/vision/blob/main/packaging/pkg_helpers.bash for reference.
- export FORCE_CUDA=1
- case "$CU_VERSION" in
- cu113)
- export CUDA_HOME=/usr/local/cuda-11.3/
- export TORCH_CUDA_ARCH_LIST="3.7;5.0;5.2;6.0;6.1+PTX;7.0;7.5+PTX;8.0;8.6+PTX"
- ;;
- cu112)
- export CUDA_HOME=/usr/local/cuda-11.2/
- export TORCH_CUDA_ARCH_LIST="3.7;5.0;5.2;6.0;6.1+PTX;7.0;7.5+PTX;8.0;8.6+PTX"
- ;;
- cu111)
- export CUDA_HOME=/usr/local/cuda-11.1/
- export TORCH_CUDA_ARCH_LIST="3.7;5.0;5.2;6.0;6.1+PTX;7.0;7.5+PTX;8.0;8.6+PTX"
- ;;
- cu110)
- export CUDA_HOME=/usr/local/cuda-11.0/
- export TORCH_CUDA_ARCH_LIST="3.7;5.0;5.2;6.0;6.1+PTX;7.0;7.5+PTX;8.0+PTX"
- ;;
- cu102)
- export CUDA_HOME=/usr/local/cuda-10.2/
- export TORCH_CUDA_ARCH_LIST="3.7;5.0;5.2;6.0;6.1+PTX;7.0;7.5+PTX"
- ;;
- cu101)
- export CUDA_HOME=/usr/local/cuda-10.1/
- export TORCH_CUDA_ARCH_LIST="3.7;5.0;5.2;6.0;6.1+PTX;7.0;7.5+PTX"
- ;;
- cu100)
- export CUDA_HOME=/usr/local/cuda-10.0/
- export TORCH_CUDA_ARCH_LIST="3.7;5.0;5.2;6.0;6.1+PTX;7.0;7.5+PTX"
- ;;
- cu92)
- export CUDA_HOME=/usr/local/cuda-9.2/
- export TORCH_CUDA_ARCH_LIST="3.7;5.0;5.2;6.0;6.1+PTX;7.0+PTX"
- ;;
- cpu)
- unset FORCE_CUDA
- export CUDA_VISIBLE_DEVICES=
- ;;
- *)
- echo "Unrecognized CU_VERSION=$CU_VERSION"
- exit 1
- ;;
- esac
-}
-
-setup_wheel_python() {
- case "$PYTHON_VERSION" in
- 3.6) python_abi=cp36-cp36m ;;
- 3.7) python_abi=cp37-cp37m ;;
- 3.8) python_abi=cp38-cp38 ;;
- 3.9) python_abi=cp39-cp39 ;;
- *)
- echo "Unrecognized PYTHON_VERSION=$PYTHON_VERSION"
- exit 1
- ;;
- esac
- export PATH="/opt/python/$python_abi/bin:$PATH"
-}
diff --git a/spaces/BAAI/AltDiffusion-m9/footer.html b/spaces/BAAI/AltDiffusion-m9/footer.html
deleted file mode 100644
index b58ca8b79cc930a56952881f4922bda406fd3581..0000000000000000000000000000000000000000
--- a/spaces/BAAI/AltDiffusion-m9/footer.html
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-
diff --git a/spaces/BAAI/vid2vid-zero/gradio_demo/style.css b/spaces/BAAI/vid2vid-zero/gradio_demo/style.css
deleted file mode 100644
index c4739b4ea5fc35e774a049e3dacc443f7f0eac19..0000000000000000000000000000000000000000
--- a/spaces/BAAI/vid2vid-zero/gradio_demo/style.css
+++ /dev/null
@@ -1,3 +0,0 @@
-h1 {
- text-align: center;
-}
diff --git a/spaces/Benson/text-generation/Examples/Choque De Clanes Th 15 Nueva Versin Hack.md b/spaces/Benson/text-generation/Examples/Choque De Clanes Th 15 Nueva Versin Hack.md
deleted file mode 100644
index cb33d42a95318aa1eb052126ad04ba24005f5542..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Choque De Clanes Th 15 Nueva Versin Hack.md
+++ /dev/null
@@ -1,88 +0,0 @@
-
-
Choque de clanes TH 15 Nueva versión Hack Descargar: Todo lo que necesita saber
-
Clash of Clans es uno de los juegos de estrategia más populares y adictivos en dispositivos móviles. Tiene millones de jugadores en todo el mundo que construyen sus aldeas, entrenan a sus tropas y compiten en guerras épicas de clanes. El juego se actualiza constantemente con nuevas características y contenido, y la última adición es el Ayuntamiento 15 (TH 15), que trae nuevos edificios, tropas, hechizos y desafíos para el juego.
Pero ¿qué pasa si quieres salir adelante de la competencia y disfrutar de todos los beneficios de TH 15 sin gastar demasiado tiempo y dinero en el juego? Ahí es donde entra una versión hack. Una versión hack es una versión modificada del juego que le da recursos ilimitados, gemas y otras ventajas. Algunos jugadores usan versiones hackeadas para progresar más rápido, experimentar con diferentes estrategias o simplemente divertirse.
-
Sin embargo, el uso de una versión de hackeo no está exento de riesgos. Puede enfrentar problemas legales, prohibiciones de cuentas, infecciones de malware u otros problemas. Es por eso que usted necesita ser cuidadoso e informado antes de descargar y utilizar una versión hack de Clash of Clans TH 15. En este artículo, le diremos todo lo que necesita saber sobre Clash of Clans TH 15 nueva versión hack descarga, incluyendo cómo hacerlo, cuáles son los riesgos y beneficios, y cuáles son algunos consejos y trucos para jugar el juego con eficacia.
-
Cómo descargar una versión Hack de choque de clanes TH 15?
-
Hay muchos sitios web y aplicaciones que afirman ofrecer versiones hack de Clash of Clans TH 15 de forma gratuita o por una tarifa. Sin embargo, no todos ellos son fiables o seguros. Algunos pueden contener virus, spyware u otro software malicioso que puede dañar su dispositivo o robar su información personal. Es posible que algunos no funcionen o que tu juego falle o falle.
-
-
-
Hacer algunas investigaciones antes de descargar nada. Leer comentarios, valoraciones, y comentarios de otros usuarios que han intentado la versión hack. Busque testimonios positivos y pruebas de que la versión hack funciona como se anuncia.
-
Compruebe la reputación y la credibilidad de la página web o aplicación que ofrece la versión hack. Busque signos de profesionalismo, como una descripción clara y detallada de la versión de hackeo, una información de contacto, una política de privacidad y un descargo de responsabilidad.
-
Evite descargar cualquier cosa de fuentes desconocidas o sospechosas, como anuncios emergentes, correos electrónicos no deseados o enlaces aleatorios. Estos pueden ser intentos de phishing o estafas que pueden engañarle para que revele su información personal o financiera o descargue malware.
-
Utilice el software antivirus y el firewall en su dispositivo para protegerlo de amenazas potenciales. Escanee cualquier archivo que descargue antes de abrirlo. Elimina cualquier archivo que parezca sospechoso o que cause problemas.
-
Copia de seguridad de los datos originales del juego antes de instalar una versión hack. De esta manera, puedes restaurar tu juego a su estado normal si algo sale mal o si quieres volver a la versión oficial.
-
-
Una vez que encuentre una fuente confiable para descargar una versión hack de Clash of Clans TH 15, siga estos pasos:
-
-
-
Descargar el archivo APK (para dispositivos Android) o el archivo IPA (para dispositivos iOS) de la versión de corte de la fuente.
-
Habilitar fuentes desconocidas en la configuración del dispositivo para permitir la instalación de aplicaciones desde fuera de la tienda de aplicaciones oficial.
-
Busque el archivo descargado en su dispositivo y toque en él para instalarlo.
-
Lanzar la versión hack y disfrutar de jugar Clash of Clans TH 15 con recursos ilimitados, gemas, y otras características.
-
¿Cuáles son los riesgos y beneficios de usar una versión Hack de choque de clanes TH 15?
-
-
Riesgos
-
-
Puede violar los términos del servicio y el acuerdo de licencia de usuario final del juego, lo que puede resultar en acciones legales, multas o demandas del desarrollador o editor del juego.
-
Usted puede ser expulsado del servidor del juego o perder su cuenta permanentemente si es detectado o reportado por otros jugadores o el sistema de seguridad del juego.
-
Puede perder su progreso, logros, recompensas o compras si desinstala la versión de hackeo o vuelve a la versión oficial.
-
Puede dañar su dispositivo o comprometer su rendimiento, seguridad o funcionalidad si descarga una versión defectuosa, dañada o maliciosa.
-
Puede arruinar la diversión, el desafío y el equilibrio del juego mediante el uso de ventajas injustas sobre otros jugadores o saltando la mecánica de juego prevista.
-
-
Beneficios
-
-
Usted puede ahorrar tiempo y dinero mediante la obtención de recursos ilimitados, gemas, y otras características sin gastar dinero real o molienda durante horas.
-
Puedes explorar nuevos aspectos del juego que de otra manera son inaccesibles, como nuevos edificios, tropas, hechizos y desafíos.
-
Puedes experimentar con diferentes estrategias, tácticas y combinaciones que pueden ayudarte a mejorar tus habilidades y conocimientos del juego.
-
Usted puede tener más diversión y satisfacción al lograr sus objetivos más rápido, más fácil y más eficiente.
-
Puedes impresionar a tus amigos, compañeros de clan u oponentes mostrando tus logros, estadísticas o diseño base.
-
-
¿Cuáles son algunos consejos y trucos para jugar choque de clanes TH 15 con eficacia?
-
Ya sea que uses una versión hack o no, jugar Clash of Clans TH 15 puede ser desafiante y gratificante. Aquí hay algunos consejos y trucos que pueden ayudarte a jugar el juego de manera efectiva:
-
-
-
Construir y actualizar los nuevos edificios que vienen con TH 15, tales como la casa del animal doméstico, la cabaña del constructor, la torre del infierno nivel 7, y el nivel de artillería del águila 4. Estos edificios pueden proporcionarle nuevas capacidades defensivas y ofensivas.
-
Entrena y mejora las nuevas tropas y hechizos que vienen con TH 15, como el globo cohete, el jinete del dragón, el Super Archer, y el hechizo de invisibilidad. Estas tropas y hechizos pueden darte una ventaja en las batallas.
-
Recoge y actualiza las nuevas mascotas que vienen con TH 15, como L.A.S.S.I., Electro Owl, Mighty Yak y Unicornio. Estas mascotas pueden acompañar a tus héroes y proporcionarles apoyo y habilidades adicionales.
-
Usa el nuevo Cuartel de Asedio de nivel 5 y el Taller de Asedio de nivel 5 para desplegar más tropas y máquinas de asedio en las batallas. También puedes usar la nueva máquina de asedio Log Launcher para atravesar paredes e infligir daño a edificios enemigos.
-
Únete a un clan o crea tu propio clan para participar en guerras de clanes, juegos de clanes, ligas de guerra de clanes y beneficios de clanes. También puedes chatear con otros jugadores, solicitar y donar tropas y hechizos, y compartir repeticiones y estrategias.
-
-
Conclusión
-
Clash of Clans TH 15 es una emocionante actualización que trae nuevas características y contenido al juego. Sin embargo, si quieres disfrutar de todos los beneficios de TH 15 sin gastar demasiado tiempo y dinero en el juego, usted puede considerar la descarga de una versión hack de Clash of Clans TH 15. Una versión hack puede darle recursos ilimitados, gemas y otras ventajas que pueden ayudarle a progresar más rápido y divertirse más. Sin embargo, el uso de una versión hack también viene con riesgos, como problemas legales, prohibiciones de cuenta, infecciones de malware o problemas de juego. Por lo tanto, usted necesita ser cuidadoso e informado antes de descargar y utilizar una versión hack de Clash of Clans TH 15. También necesitas seguir algunos consejos y trucos para jugar el juego de manera efectiva y aprovechar al máximo tu experiencia de juego.
-
-
Preguntas frecuentes
-
Aquí hay algunas preguntas frecuentes relacionadas con el tema de Clash of Clans TH 15 nueva versión hack descargar:
-
Q: ¿Es legal usar una versión hack de Clash of Clans TH 15?
-
A: No, no es legal usar una versión hackeada de Clash of Clans TH 15. Viola los términos del servicio y el acuerdo de licencia de usuario final del juego, lo que puede resultar en acciones legales, multas o demandas del desarrollador o editor del juego. También puede ser expulsado del servidor del juego o perder su cuenta permanentemente si es detectado o reportado por otros jugadores o el sistema de seguridad del juego.
-
Q: ¿Es seguro usar una versión hack de Clash of Clans TH 15?
-
A: No, no es seguro usar una versión hack de Clash of Clans TH 15. Puede dañar su dispositivo o comprometer su rendimiento, seguridad o funcionalidad si descarga una versión defectuosa, dañada o maliciosa. También puede perder su progreso, logros, recompensas o compras si desinstala la versión de hackeo o vuelve a la versión oficial. También puede arruinar la diversión, el desafío y el equilibrio del juego mediante el uso de ventajas injustas sobre otros jugadores o saltando la mecánica de juego prevista.
-
Q: ¿Cómo puedo obtener gemas gratis en Clash of Clans TH 15?
-
A: Hay algunas formas legítimas de obtener gemas gratis en Clash of Clans TH 15 sin usar una versión hack. Algunos de ellos son:
-
-
Completar logros y eventos
-
Eliminar obstáculos y cajas de gemas
-
Abriendo los regalos del clan y las recompensas del pase de temporada
-
Participar en encuestas y ofertas
-
Comprar ofertas especiales y paquetes
-
-
Q: ¿Cuál es la mejor estrategia para el choque de clanes TH 15?
-
-
-
Usa una mezcla equilibrada de tropas y hechizos que puedan lidiar con diferentes tipos de defensas y situaciones
-
Usa máquinas de asedio y mascotas para apoyar a tus héroes y al ejército principal
-
Utilice exploradores y repeticiones para analizar la base de su enemigo y planificar su ataque en consecuencia
-
Utiliza técnicas de canalización para guiar a tus tropas al núcleo de la base del enemigo
-
Usa hechizos sabiamente y oportunamente para mejorar las habilidades de tus tropas o contrarrestar las defensas del enemigo
-
-
Q: ¿Cómo puedo unirme a un buen clan en Clash of Clans TH 15?
-
A: Unirse a un buen clan en Clash of Clans TH 15 puede mejorar tu experiencia de juego proporcionándote interacción social, donaciones de tropas, beneficios de clan, guerras de clan, juegos de clan y ligas de guerra de clan. Algunas maneras de encontrar y unirse a un buen clan son:
-
-
Usa la función de búsqueda de clanes en el juego para filtrar clanes por nombre, ubicación, nivel, miembros, trofeos, frecuencia de guerra, victorias de guerra, liga de guerra, nivel mínimo de ayuntamiento, etc.
-
Utilice sitios web externos o aplicaciones como [Clash of Stats](https://www.clashofstats.com/), [Clash Champs](https://www.clashchamps.com/), [Clash Leaders](https:/ww.clashleaders.com/), etc. para encontrar clanes basados en diversos criterios y estadísticas.
-
Utilice plataformas de medios sociales como [Reddit](https:/www.reddit.com/r/ClashOfClans/), [Discord](https://discord.com/invite/clashofclans), [Facebook](https:/ww.facebook.com/ClashofClans/), [Twitter https:///tter.cofashans), etc.
-
Pídele a tus amigos, familiares o conocidos que jueguen a Clash of Clans que te inviten a sus clanes o te recomienden algunos buenos clanes.
-
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Cmo Descargar Oficina 2019 Gratis.md b/spaces/Benson/text-generation/Examples/Cmo Descargar Oficina 2019 Gratis.md
deleted file mode 100644
index 6c6fcc5d498935d6aa8e459227f09c287360174a..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Cmo Descargar Oficina 2019 Gratis.md
+++ /dev/null
@@ -1,61 +0,0 @@
How to Download Office 2019 for Free

Microsoft Office is one of the most popular and widely used productivity suites in the world. It includes powerful applications such as Word, Excel, PowerPoint, Outlook, and more. However, getting the latest version of Office can be expensive, especially if you want to use it on several devices.

Fortunately, there are some ways to download Office 2019 for free legally. In this article, we will show you what Office 2019 is, why you might want it, and how to get it without paying a cent.

What Is Office 2019 and Why You Might Want It

Office 2019 is the latest version of Microsoft's office software suite. It was released in September 2018 and is a one-time purchase that does not require a subscription. Unlike Office 365, which is a cloud-based service that delivers regular updates and new features, Office 2019 is a standalone product that will not receive major changes or improvements.

However, that does not mean Office 2019 is inferior or outdated. In fact, there are some reasons why you might prefer Office 2019 over Office 365.

Office 2019 vs. Office 365

The main difference between Office 2019 and Office 365 is how they connect to the cloud. Both suites can access OneDrive, Microsoft's cloud storage service. But Office 2019 does not come with any OneDrive storage space and does not give you access to the online versions of apps such as Word, Excel, and PowerPoint. Office 365, on the other hand, includes 1 TB of free storage and lets you easily edit all of your files online.

So which one should you choose? It depends on your needs and preferences. If you want the latest features and updates, access to your files from anywhere, and support for multiple devices, Office 365 may be the better choice. If you want to save money in the long run, work with your files offline, and do not need extra apps or services, Office 2019 may be enough for you.

Features and Benefits of Office 2019

Even though Office 2019 does not have all the bells and whistles of Office 365, it still has some impressive features that can boost your productivity and creativity. Here are a few of them:

New inking tools: You can use a pen or your finger to draw, write, highlight, and erase in Word, Excel, PowerPoint, and Outlook. You can also convert your ink to shapes or text, or work through complex math problems with Ink Math Assistant.
New data types: You can work with new data types in Excel, such as Stocks and Geography. These data types pull information from online sources and update automatically.
New functions: You can use new functions in Excel, such as TEXTJOIN, CONCAT, IFS, SWITCH, and more.
New charts and visuals: You can create striking charts and visuals in Excel and PowerPoint, such as Funnel, Map, and Timeline charts and 3D models. These help you present your data in a more engaging and interactive way.
New animations and transitions: You can add new animations and transitions in PowerPoint, such as Morph, Zoom, and 3D. These help you create dynamic, captivating presentations.
New learning tools: You can use new learning tools in Word and Outlook, such as Read Aloud, Text Spacing, and Focus Mode. These tools can help you improve your reading and writing skills.

How to Get Office 2019 for Free Legally

If you are interested in getting Office 2019 for free legally, you have a few options to consider. Here are some of them:

Option 1: Use Microsoft 365 for the Web

One of the easiest ways to get Office for free is to use Microsoft 365 for the web. This is a free online version of Office that includes Word, Excel, PowerPoint, OneNote, and Outlook. You can access these apps from any browser and create, edit, and share your files online. You also get 5 GB of free OneDrive storage.

To use Microsoft 365 for the web, you only need a Microsoft account. If you do not have one, you can create one for free here: https://signup.live.com/. Once you have an account, sign in here: https://www.office.com/. You can then start using the apps from the home page or the app launcher.

Option 2: Use the Microsoft Workplace Discount Program

Another way to get Office 2019 for free is the Microsoft Workplace Discount Program. This program lets eligible employees of participating organizations get Office 2019 at a discounted price or even for free. You can check whether your organization takes part in the program here: https://www.microsoft.com/en-us/home-use-program.

To use the Microsoft Workplace Discount Program, you need a valid work email address from your organization. If your organization is eligible, you will receive an email with a link to buy Office 2019 at a reduced price or get it for free. You can then download and install Office 2019 on your personal device.

Option 3: Use Microsoft Office Online Server

A third option is Microsoft Office Online Server, a server product that lets an organization host browser-based versions of Word, Excel, PowerPoint, and OneNote on its own infrastructure.

To use Microsoft Office Online Server, you need a Windows Server license and an Office license. You may be able to get these licenses for free if you are a student or an educator. You can check your eligibility here: https://www.microsoft.com/en-us/education/products/office. Once you have the licenses, you can download and install Office Online Server on your server here: https://www.microsoft.com/en-us/download/details.aspx?id=49030. You can then configure and use the apps from your own server.

How to Install and Activate Office 2019 on Your PC or Mac

If you have bought or obtained Office 2019 through one of the options above, you can install and activate it on your PC or Mac. Here are the steps:

Step 1: Download Office 2019 from a Trusted Source

The first step is to download Office 2019 from a trusted source. You can do this from the Microsoft Store, the Microsoft website, or the link you received from your organization or school. Make sure you download the right version for your device and operating system.

Step 2: Run the Setup File and Follow the Instructions

The second step is to run the setup file and follow the instructions. Depending on your device and operating system, the setup file may be an .exe, .dmg, or .iso file. Double-click the file, allow it to run, and follow the on-screen instructions to install Office 2019 on your device.

Step 3: Enter Your Product Key or Sign In with Your Microsoft Account

To activate Office 2019, you need to enter your product key or sign in with your Microsoft account. You can do this when you launch any of the Office apps for the first time. You will see a prompt to activate Office 2019; follow the on-screen instructions to enter your product key or sign in with your Microsoft account.

Conclusion

Office 2019 is a powerful and versatile productivity suite that can help you create, edit, and share documents, spreadsheets, presentations, and more. However, it can also be expensive, especially if you want to use it on several devices.

In this article, we have shown you how to download Office 2019 for free legally. You can use Microsoft 365 for the web, the Microsoft Workplace Discount Program, or Microsoft Office Online Server. You can also install and activate Office 2019 on your PC or Mac by following a few simple steps.

We hope this article has been helpful and informative. If you have any questions or feedback, feel free to leave a comment below.

FAQs

Q: Is Office 2019 compatible with Windows 10?
A: Yes, Office 2019 is compatible with Windows 10. It is also compatible with Windows 8.1 and Windows Server 2019.

Q: Is Office 2019 compatible with macOS?
A: Yes, Office 2019 is compatible with macOS. It supports Mac OS X 10.14 Mojave and later versions.

Q: On how many devices can I install Office 2019?
A: You can install Office 2019 on one device per license. If you want to use it on several devices, you need to buy several licenses or use Office 365 instead.

Q: How long does Office 2019 last?
A: Office 2019 lasts as long as your device supports it. It does not expire or require renewal. However, it does not receive major updates or new features.

Q: Can I upgrade from Office 2016 to Office 2019?
A: Yes, you can upgrade from Office 2016 to Office 2019. However, you need to buy a new license for Office 2019 or use one of the options above to get it for free.
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/retries/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/retries/__init__.py
deleted file mode 100644
index a6d6b377dfcdf246972c05659673308cfa40db37..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/retries/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-"""New retry v2 handlers.
-
-This package obsoletes the botocore/retryhandler.py module and contains
-new retry logic.
-
-"""
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/colorama/initialise.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/colorama/initialise.py
deleted file mode 100644
index d5fd4b71fed1bb4871717f978f0c470280f099c1..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/colorama/initialise.py
+++ /dev/null
@@ -1,121 +0,0 @@
-# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.
-import atexit
-import contextlib
-import sys
-
-from .ansitowin32 import AnsiToWin32
-
-
-def _wipe_internal_state_for_tests():
- global orig_stdout, orig_stderr
- orig_stdout = None
- orig_stderr = None
-
- global wrapped_stdout, wrapped_stderr
- wrapped_stdout = None
- wrapped_stderr = None
-
- global atexit_done
- atexit_done = False
-
- global fixed_windows_console
- fixed_windows_console = False
-
- try:
- # no-op if it wasn't registered
- atexit.unregister(reset_all)
- except AttributeError:
- # python 2: no atexit.unregister. Oh well, we did our best.
- pass
-
-
-def reset_all():
- if AnsiToWin32 is not None: # Issue #74: objects might become None at exit
- AnsiToWin32(orig_stdout).reset_all()
-
-
-def init(autoreset=False, convert=None, strip=None, wrap=True):
-
- if not wrap and any([autoreset, convert, strip]):
- raise ValueError('wrap=False conflicts with any other arg=True')
-
- global wrapped_stdout, wrapped_stderr
- global orig_stdout, orig_stderr
-
- orig_stdout = sys.stdout
- orig_stderr = sys.stderr
-
- if sys.stdout is None:
- wrapped_stdout = None
- else:
- sys.stdout = wrapped_stdout = \
- wrap_stream(orig_stdout, convert, strip, autoreset, wrap)
- if sys.stderr is None:
- wrapped_stderr = None
- else:
- sys.stderr = wrapped_stderr = \
- wrap_stream(orig_stderr, convert, strip, autoreset, wrap)
-
- global atexit_done
- if not atexit_done:
- atexit.register(reset_all)
- atexit_done = True
-
-
-def deinit():
- if orig_stdout is not None:
- sys.stdout = orig_stdout
- if orig_stderr is not None:
- sys.stderr = orig_stderr
-
-
-def just_fix_windows_console():
- global fixed_windows_console
-
- if sys.platform != "win32":
- return
- if fixed_windows_console:
- return
- if wrapped_stdout is not None or wrapped_stderr is not None:
- # Someone already ran init() and it did stuff, so we won't second-guess them
- return
-
- # On newer versions of Windows, AnsiToWin32.__init__ will implicitly enable the
- # native ANSI support in the console as a side-effect. We only need to actually
- # replace sys.stdout/stderr if we're in the old-style conversion mode.
- new_stdout = AnsiToWin32(sys.stdout, convert=None, strip=None, autoreset=False)
- if new_stdout.convert:
- sys.stdout = new_stdout
- new_stderr = AnsiToWin32(sys.stderr, convert=None, strip=None, autoreset=False)
- if new_stderr.convert:
- sys.stderr = new_stderr
-
- fixed_windows_console = True
-
-@contextlib.contextmanager
-def colorama_text(*args, **kwargs):
- init(*args, **kwargs)
- try:
- yield
- finally:
- deinit()
-
-
-def reinit():
- if wrapped_stdout is not None:
- sys.stdout = wrapped_stdout
- if wrapped_stderr is not None:
- sys.stderr = wrapped_stderr
-
-
-def wrap_stream(stream, convert, strip, autoreset, wrap):
- if wrap:
- wrapper = AnsiToWin32(stream,
- convert=convert, strip=strip, autoreset=autoreset)
- if wrapper.should_wrap():
- stream = wrapper.stream
- return stream
-
-
-# Use this for initial setup as well, to reduce code duplication
-_wipe_internal_state_for_tests()
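A minimal usage sketch for the module above (an editorial illustration, not part of the deleted file). Fore is defined elsewhere in the colorama package and is assumed to be importable alongside init and deinit:

from colorama import Fore, deinit, init

init(autoreset=True)        # wraps sys.stdout / sys.stderr via wrap_stream()
print(Fore.GREEN + 'ok')    # autoreset appends a reset code after each write
deinit()                    # restores the streams saved in orig_stdout / orig_stderr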
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/s3transfer/crt.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/s3transfer/crt.py
deleted file mode 100644
index 7b5d1301365038629b23c630c71bf6c65461d34f..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/s3transfer/crt.py
+++ /dev/null
@@ -1,644 +0,0 @@
-# Copyright 2021 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# http://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-import logging
-import threading
-from io import BytesIO
-
-import awscrt.http
-import botocore.awsrequest
-import botocore.session
-from awscrt.auth import AwsCredentials, AwsCredentialsProvider
-from awscrt.io import (
- ClientBootstrap,
- ClientTlsContext,
- DefaultHostResolver,
- EventLoopGroup,
- TlsContextOptions,
-)
-from awscrt.s3 import S3Client, S3RequestTlsMode, S3RequestType
-from botocore import UNSIGNED
-from botocore.compat import urlsplit
-from botocore.config import Config
-from botocore.exceptions import NoCredentialsError
-
-from s3transfer.constants import GB, MB
-from s3transfer.exceptions import TransferNotDoneError
-from s3transfer.futures import BaseTransferFuture, BaseTransferMeta
-from s3transfer.utils import CallArgs, OSUtils, get_callbacks
-
-logger = logging.getLogger(__name__)
-
-
-class CRTCredentialProviderAdapter:
- def __init__(self, botocore_credential_provider):
- self._botocore_credential_provider = botocore_credential_provider
- self._loaded_credentials = None
- self._lock = threading.Lock()
-
- def __call__(self):
- credentials = self._get_credentials().get_frozen_credentials()
- return AwsCredentials(
- credentials.access_key, credentials.secret_key, credentials.token
- )
-
- def _get_credentials(self):
- with self._lock:
- if self._loaded_credentials is None:
- loaded_creds = (
- self._botocore_credential_provider.load_credentials()
- )
- if loaded_creds is None:
- raise NoCredentialsError()
- self._loaded_credentials = loaded_creds
- return self._loaded_credentials
-
-
-def create_s3_crt_client(
- region,
- botocore_credential_provider=None,
- num_threads=None,
- target_throughput=5 * GB / 8,
- part_size=8 * MB,
- use_ssl=True,
- verify=None,
-):
- """
- :type region: str
- :param region: The region used for signing
-
- :type botocore_credential_provider:
- Optional[botocore.credentials.CredentialResolver]
- :param botocore_credential_provider: Provide credentials for CRT
- to sign the request if not set, the request will not be signed
-
- :type num_threads: Optional[int]
- :param num_threads: Number of worker threads generated. Default
- is the number of processors in the machine.
-
- :type target_throughput: Optional[int]
- :param target_throughput: Throughput target in Bytes.
- Default is 0.625 GB/s (which translates to 5 Gb/s).
-
- :type part_size: Optional[int]
- :param part_size: Size, in Bytes, of parts that files will be downloaded
- or uploaded in.
-
- :type use_ssl: boolean
- :param use_ssl: Whether or not to use SSL. By default, SSL is used.
- Note that not all services support non-ssl connections.
-
- :type verify: Optional[boolean/string]
- :param verify: Whether or not to verify SSL certificates.
- By default SSL certificates are verified. You can provide the
- following values:
-
- * False - do not validate SSL certificates. SSL will still be
- used (unless use_ssl is False), but SSL certificates
- will not be verified.
- * path/to/cert/bundle.pem - A filename of the CA cert bundle to
- use. Specify this argument if you want to use a custom CA cert
- bundle instead of the default one on your system.
- """
-
- event_loop_group = EventLoopGroup(num_threads)
- host_resolver = DefaultHostResolver(event_loop_group)
- bootstrap = ClientBootstrap(event_loop_group, host_resolver)
- provider = None
- tls_connection_options = None
-
- tls_mode = (
- S3RequestTlsMode.ENABLED if use_ssl else S3RequestTlsMode.DISABLED
- )
- if verify is not None:
- tls_ctx_options = TlsContextOptions()
- if verify:
- tls_ctx_options.override_default_trust_store_from_path(
- ca_filepath=verify
- )
- else:
- tls_ctx_options.verify_peer = False
- client_tls_option = ClientTlsContext(tls_ctx_options)
- tls_connection_options = client_tls_option.new_connection_options()
- if botocore_credential_provider:
-        credentials_provider_adapter = CRTCredentialProviderAdapter(
-            botocore_credential_provider
-        )
-        provider = AwsCredentialsProvider.new_delegate(
-            credentials_provider_adapter
- )
-
- target_gbps = target_throughput * 8 / GB
- return S3Client(
- bootstrap=bootstrap,
- region=region,
- credential_provider=provider,
- part_size=part_size,
- tls_mode=tls_mode,
- tls_connection_options=tls_connection_options,
- throughput_target_gbps=target_gbps,
- )
-
-
-class CRTTransferManager:
- def __init__(self, crt_s3_client, crt_request_serializer, osutil=None):
- """A transfer manager interface for Amazon S3 on CRT s3 client.
-
- :type crt_s3_client: awscrt.s3.S3Client
- :param crt_s3_client: The CRT s3 client, handling all the
-            HTTP requests and functions under the hood
-
- :type crt_request_serializer: s3transfer.crt.BaseCRTRequestSerializer
- :param crt_request_serializer: Serializer, generates unsigned crt HTTP
- request.
-
- :type osutil: s3transfer.utils.OSUtils
- :param osutil: OSUtils object to use for os-related behavior when
- using with transfer manager.
- """
-        # keep the supplied osutil if one was passed in; fall back to a default OSUtils otherwise
-        self._osutil = osutil
-        if osutil is None:
-            self._osutil = OSUtils()
- self._crt_s3_client = crt_s3_client
- self._s3_args_creator = S3ClientArgsCreator(
- crt_request_serializer, self._osutil
- )
- self._future_coordinators = []
- self._semaphore = threading.Semaphore(128) # not configurable
- # A counter to create unique id's for each transfer submitted.
- self._id_counter = 0
-
- def __enter__(self):
- return self
-
- def __exit__(self, exc_type, exc_value, *args):
- cancel = False
- if exc_type:
- cancel = True
- self._shutdown(cancel)
-
- def download(
- self, bucket, key, fileobj, extra_args=None, subscribers=None
- ):
- if extra_args is None:
- extra_args = {}
- if subscribers is None:
- subscribers = {}
- callargs = CallArgs(
- bucket=bucket,
- key=key,
- fileobj=fileobj,
- extra_args=extra_args,
- subscribers=subscribers,
- )
- return self._submit_transfer("get_object", callargs)
-
- def upload(self, fileobj, bucket, key, extra_args=None, subscribers=None):
- if extra_args is None:
- extra_args = {}
- if subscribers is None:
- subscribers = {}
- callargs = CallArgs(
- bucket=bucket,
- key=key,
- fileobj=fileobj,
- extra_args=extra_args,
- subscribers=subscribers,
- )
- return self._submit_transfer("put_object", callargs)
-
- def delete(self, bucket, key, extra_args=None, subscribers=None):
- if extra_args is None:
- extra_args = {}
- if subscribers is None:
- subscribers = {}
- callargs = CallArgs(
- bucket=bucket,
- key=key,
- extra_args=extra_args,
- subscribers=subscribers,
- )
- return self._submit_transfer("delete_object", callargs)
-
- def shutdown(self, cancel=False):
- self._shutdown(cancel)
-
- def _cancel_transfers(self):
- for coordinator in self._future_coordinators:
- if not coordinator.done():
- coordinator.cancel()
-
- def _finish_transfers(self):
- for coordinator in self._future_coordinators:
- coordinator.result()
-
- def _wait_transfers_done(self):
- for coordinator in self._future_coordinators:
- coordinator.wait_until_on_done_callbacks_complete()
-
- def _shutdown(self, cancel=False):
- if cancel:
- self._cancel_transfers()
- try:
- self._finish_transfers()
-
- except KeyboardInterrupt:
- self._cancel_transfers()
- except Exception:
- pass
- finally:
- self._wait_transfers_done()
-
- def _release_semaphore(self, **kwargs):
- self._semaphore.release()
-
- def _submit_transfer(self, request_type, call_args):
- on_done_after_calls = [self._release_semaphore]
- coordinator = CRTTransferCoordinator(transfer_id=self._id_counter)
- components = {
- 'meta': CRTTransferMeta(self._id_counter, call_args),
- 'coordinator': coordinator,
- }
- future = CRTTransferFuture(**components)
- afterdone = AfterDoneHandler(coordinator)
- on_done_after_calls.append(afterdone)
-
- try:
- self._semaphore.acquire()
- on_queued = self._s3_args_creator.get_crt_callback(
- future, 'queued'
- )
- on_queued()
- crt_callargs = self._s3_args_creator.get_make_request_args(
- request_type,
- call_args,
- coordinator,
- future,
- on_done_after_calls,
- )
- crt_s3_request = self._crt_s3_client.make_request(**crt_callargs)
- except Exception as e:
- coordinator.set_exception(e, True)
- on_done = self._s3_args_creator.get_crt_callback(
- future, 'done', after_subscribers=on_done_after_calls
- )
- on_done(error=e)
- else:
- coordinator.set_s3_request(crt_s3_request)
- self._future_coordinators.append(coordinator)
-
- self._id_counter += 1
- return future
-
-
-class CRTTransferMeta(BaseTransferMeta):
- """Holds metadata about the CRTTransferFuture"""
-
- def __init__(self, transfer_id=None, call_args=None):
- self._transfer_id = transfer_id
- self._call_args = call_args
- self._user_context = {}
-
- @property
- def call_args(self):
- return self._call_args
-
- @property
- def transfer_id(self):
- return self._transfer_id
-
- @property
- def user_context(self):
- return self._user_context
-
-
-class CRTTransferFuture(BaseTransferFuture):
- def __init__(self, meta=None, coordinator=None):
- """The future associated to a submitted transfer request via CRT S3 client
-
- :type meta: s3transfer.crt.CRTTransferMeta
- :param meta: The metadata associated to the transfer future.
-
- :type coordinator: s3transfer.crt.CRTTransferCoordinator
- :param coordinator: The coordinator associated to the transfer future.
- """
- self._meta = meta
- if meta is None:
- self._meta = CRTTransferMeta()
- self._coordinator = coordinator
-
- @property
- def meta(self):
- return self._meta
-
- def done(self):
- return self._coordinator.done()
-
- def result(self, timeout=None):
- self._coordinator.result(timeout)
-
- def cancel(self):
- self._coordinator.cancel()
-
- def set_exception(self, exception):
- """Sets the exception on the future."""
- if not self.done():
- raise TransferNotDoneError(
- 'set_exception can only be called once the transfer is '
- 'complete.'
- )
- self._coordinator.set_exception(exception, override=True)
-
-
-class BaseCRTRequestSerializer:
- def serialize_http_request(self, transfer_type, future):
- """Serialize CRT HTTP requests.
-
- :type transfer_type: string
- :param transfer_type: the type of transfer made,
- e.g 'put_object', 'get_object', 'delete_object'
-
- :type future: s3transfer.crt.CRTTransferFuture
-
- :rtype: awscrt.http.HttpRequest
- :returns: An unsigned HTTP request to be used for the CRT S3 client
- """
- raise NotImplementedError('serialize_http_request()')
-
-
-class BotocoreCRTRequestSerializer(BaseCRTRequestSerializer):
- def __init__(self, session, client_kwargs=None):
- """Serialize CRT HTTP request using botocore logic
- It also takes into account configuration from both the session
- and any keyword arguments that could be passed to
- `Session.create_client()` when serializing the request.
-
- :type session: botocore.session.Session
-
- :type client_kwargs: Optional[Dict[str, str]])
- :param client_kwargs: The kwargs for the botocore
- s3 client initialization.
- """
- self._session = session
- if client_kwargs is None:
- client_kwargs = {}
- self._resolve_client_config(session, client_kwargs)
- self._client = session.create_client(**client_kwargs)
- self._client.meta.events.register(
- 'request-created.s3.*', self._capture_http_request
- )
- self._client.meta.events.register(
- 'after-call.s3.*', self._change_response_to_serialized_http_request
- )
- self._client.meta.events.register(
- 'before-send.s3.*', self._make_fake_http_response
- )
-
- def _resolve_client_config(self, session, client_kwargs):
- user_provided_config = None
- if session.get_default_client_config():
- user_provided_config = session.get_default_client_config()
- if 'config' in client_kwargs:
- user_provided_config = client_kwargs['config']
-
- client_config = Config(signature_version=UNSIGNED)
- if user_provided_config:
- client_config = user_provided_config.merge(client_config)
- client_kwargs['config'] = client_config
- client_kwargs["service_name"] = "s3"
-
- def _crt_request_from_aws_request(self, aws_request):
- url_parts = urlsplit(aws_request.url)
- crt_path = url_parts.path
- if url_parts.query:
- crt_path = f'{crt_path}?{url_parts.query}'
- headers_list = []
- for name, value in aws_request.headers.items():
- if isinstance(value, str):
- headers_list.append((name, value))
- else:
- headers_list.append((name, str(value, 'utf-8')))
-
- crt_headers = awscrt.http.HttpHeaders(headers_list)
- # CRT requires body (if it exists) to be an I/O stream.
- crt_body_stream = None
- if aws_request.body:
- if hasattr(aws_request.body, 'seek'):
- crt_body_stream = aws_request.body
- else:
- crt_body_stream = BytesIO(aws_request.body)
-
- crt_request = awscrt.http.HttpRequest(
- method=aws_request.method,
- path=crt_path,
- headers=crt_headers,
- body_stream=crt_body_stream,
- )
- return crt_request
-
- def _convert_to_crt_http_request(self, botocore_http_request):
- # Logic that does CRTUtils.crt_request_from_aws_request
- crt_request = self._crt_request_from_aws_request(botocore_http_request)
- if crt_request.headers.get("host") is None:
- # If host is not set, set it for the request before using CRT s3
- url_parts = urlsplit(botocore_http_request.url)
- crt_request.headers.set("host", url_parts.netloc)
- if crt_request.headers.get('Content-MD5') is not None:
- crt_request.headers.remove("Content-MD5")
- return crt_request
-
- def _capture_http_request(self, request, **kwargs):
- request.context['http_request'] = request
-
- def _change_response_to_serialized_http_request(
- self, context, parsed, **kwargs
- ):
- request = context['http_request']
- parsed['HTTPRequest'] = request.prepare()
-
- def _make_fake_http_response(self, request, **kwargs):
- return botocore.awsrequest.AWSResponse(
- None,
- 200,
- {},
- FakeRawResponse(b""),
- )
-
- def _get_botocore_http_request(self, client_method, call_args):
- return getattr(self._client, client_method)(
- Bucket=call_args.bucket, Key=call_args.key, **call_args.extra_args
- )['HTTPRequest']
-
- def serialize_http_request(self, transfer_type, future):
- botocore_http_request = self._get_botocore_http_request(
- transfer_type, future.meta.call_args
- )
- crt_request = self._convert_to_crt_http_request(botocore_http_request)
- return crt_request
-
-
-class FakeRawResponse(BytesIO):
- def stream(self, amt=1024, decode_content=None):
- while True:
- chunk = self.read(amt)
- if not chunk:
- break
- yield chunk
-
-
-class CRTTransferCoordinator:
- """A helper class for managing CRTTransferFuture"""
-
- def __init__(self, transfer_id=None, s3_request=None):
- self.transfer_id = transfer_id
- self._s3_request = s3_request
- self._lock = threading.Lock()
- self._exception = None
- self._crt_future = None
- self._done_event = threading.Event()
-
- @property
- def s3_request(self):
- return self._s3_request
-
- def set_done_callbacks_complete(self):
- self._done_event.set()
-
- def wait_until_on_done_callbacks_complete(self, timeout=None):
- self._done_event.wait(timeout)
-
- def set_exception(self, exception, override=False):
- with self._lock:
- if not self.done() or override:
- self._exception = exception
-
- def cancel(self):
- if self._s3_request:
- self._s3_request.cancel()
-
- def result(self, timeout=None):
- if self._exception:
- raise self._exception
- try:
- self._crt_future.result(timeout)
- except KeyboardInterrupt:
- self.cancel()
- raise
- finally:
- if self._s3_request:
- self._s3_request = None
- self._crt_future.result(timeout)
-
- def done(self):
- if self._crt_future is None:
- return False
- return self._crt_future.done()
-
- def set_s3_request(self, s3_request):
- self._s3_request = s3_request
- self._crt_future = self._s3_request.finished_future
-
-
-class S3ClientArgsCreator:
- def __init__(self, crt_request_serializer, os_utils):
- self._request_serializer = crt_request_serializer
- self._os_utils = os_utils
-
- def get_make_request_args(
- self, request_type, call_args, coordinator, future, on_done_after_calls
- ):
- recv_filepath = None
- send_filepath = None
- s3_meta_request_type = getattr(
- S3RequestType, request_type.upper(), S3RequestType.DEFAULT
- )
- on_done_before_calls = []
- if s3_meta_request_type == S3RequestType.GET_OBJECT:
- final_filepath = call_args.fileobj
- recv_filepath = self._os_utils.get_temp_filename(final_filepath)
- file_ondone_call = RenameTempFileHandler(
- coordinator, final_filepath, recv_filepath, self._os_utils
- )
- on_done_before_calls.append(file_ondone_call)
- elif s3_meta_request_type == S3RequestType.PUT_OBJECT:
- send_filepath = call_args.fileobj
- data_len = self._os_utils.get_file_size(send_filepath)
- call_args.extra_args["ContentLength"] = data_len
-
- crt_request = self._request_serializer.serialize_http_request(
- request_type, future
- )
-
- return {
- 'request': crt_request,
- 'type': s3_meta_request_type,
- 'recv_filepath': recv_filepath,
- 'send_filepath': send_filepath,
- 'on_done': self.get_crt_callback(
- future, 'done', on_done_before_calls, on_done_after_calls
- ),
- 'on_progress': self.get_crt_callback(future, 'progress'),
- }
-
- def get_crt_callback(
- self,
- future,
- callback_type,
- before_subscribers=None,
- after_subscribers=None,
- ):
- def invoke_all_callbacks(*args, **kwargs):
- callbacks_list = []
- if before_subscribers is not None:
- callbacks_list += before_subscribers
- callbacks_list += get_callbacks(future, callback_type)
- if after_subscribers is not None:
- callbacks_list += after_subscribers
- for callback in callbacks_list:
-            # The get_callbacks helper will set the first argument
-            # by keyword; the other arguments need to be set by keyword
-            # as well
- if callback_type == "progress":
- callback(bytes_transferred=args[0])
- else:
- callback(*args, **kwargs)
-
- return invoke_all_callbacks
-
-
-class RenameTempFileHandler:
- def __init__(self, coordinator, final_filename, temp_filename, osutil):
- self._coordinator = coordinator
- self._final_filename = final_filename
- self._temp_filename = temp_filename
- self._osutil = osutil
-
- def __call__(self, **kwargs):
- error = kwargs['error']
- if error:
- self._osutil.remove_file(self._temp_filename)
- else:
- try:
- self._osutil.rename_file(
- self._temp_filename, self._final_filename
- )
- except Exception as e:
- self._osutil.remove_file(self._temp_filename)
-                # the CRT future has already completed at this point
- self._coordinator.set_exception(e)
-
-
-class AfterDoneHandler:
- def __init__(self, coordinator):
- self._coordinator = coordinator
-
- def __call__(self, **kwargs):
- self._coordinator.set_done_callbacks_complete()
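A minimal sketch of how the pieces above fit together (an editorial illustration, not part of the deleted file). The region, bucket, key, and file name are placeholders, and session.get_component('credential_provider') is standard botocore API rather than anything defined in this module:

import botocore.session

from s3transfer.crt import (
    BotocoreCRTRequestSerializer,
    CRTTransferManager,
    create_s3_crt_client,
)

session = botocore.session.get_session()
crt_client = create_s3_crt_client(
    region='us-east-1',
    botocore_credential_provider=session.get_component('credential_provider'),
)
serializer = BotocoreCRTRequestSerializer(session)

with CRTTransferManager(crt_client, serializer) as manager:
    future = manager.upload('local-file.bin', 'example-bucket', 'remote-key.bin')
    future.result()  # blocks until the CRT transfer completes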
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/file_util.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/file_util.py
deleted file mode 100644
index 1f1e444b1c30d93ca28ac15115ef73e63b9f6169..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/file_util.py
+++ /dev/null
@@ -1,249 +0,0 @@
-"""distutils.file_util
-
-Utility functions for operating on single files.
-"""
-
-import os
-from distutils.errors import DistutilsFileError
-from distutils import log
-
-# for generating verbose output in 'copy_file()'
-_copy_action = {None: 'copying', 'hard': 'hard linking', 'sym': 'symbolically linking'}
-
-
-def _copy_file_contents(src, dst, buffer_size=16 * 1024): # noqa: C901
- """Copy the file 'src' to 'dst'; both must be filenames. Any error
- opening either file, reading from 'src', or writing to 'dst', raises
- DistutilsFileError. Data is read/written in chunks of 'buffer_size'
- bytes (default 16k). No attempt is made to handle anything apart from
- regular files.
- """
- # Stolen from shutil module in the standard library, but with
- # custom error-handling added.
- fsrc = None
- fdst = None
- try:
- try:
- fsrc = open(src, 'rb')
- except OSError as e:
- raise DistutilsFileError("could not open '{}': {}".format(src, e.strerror))
-
- if os.path.exists(dst):
- try:
- os.unlink(dst)
- except OSError as e:
- raise DistutilsFileError(
- "could not delete '{}': {}".format(dst, e.strerror)
- )
-
- try:
- fdst = open(dst, 'wb')
- except OSError as e:
- raise DistutilsFileError(
- "could not create '{}': {}".format(dst, e.strerror)
- )
-
- while True:
- try:
- buf = fsrc.read(buffer_size)
- except OSError as e:
- raise DistutilsFileError(
- "could not read from '{}': {}".format(src, e.strerror)
- )
-
- if not buf:
- break
-
- try:
- fdst.write(buf)
- except OSError as e:
- raise DistutilsFileError(
- "could not write to '{}': {}".format(dst, e.strerror)
- )
- finally:
- if fdst:
- fdst.close()
- if fsrc:
- fsrc.close()
-
-
-def copy_file( # noqa: C901
- src,
- dst,
- preserve_mode=1,
- preserve_times=1,
- update=0,
- link=None,
- verbose=1,
- dry_run=0,
-):
- """Copy a file 'src' to 'dst'. If 'dst' is a directory, then 'src' is
- copied there with the same name; otherwise, it must be a filename. (If
- the file exists, it will be ruthlessly clobbered.) If 'preserve_mode'
- is true (the default), the file's mode (type and permission bits, or
- whatever is analogous on the current platform) is copied. If
- 'preserve_times' is true (the default), the last-modified and
- last-access times are copied as well. If 'update' is true, 'src' will
- only be copied if 'dst' does not exist, or if 'dst' does exist but is
- older than 'src'.
-
- 'link' allows you to make hard links (os.link) or symbolic links
- (os.symlink) instead of copying: set it to "hard" or "sym"; if it is
- None (the default), files are copied. Don't set 'link' on systems that
- don't support it: 'copy_file()' doesn't check if hard or symbolic
- linking is available. If hardlink fails, falls back to
- _copy_file_contents().
-
- Under Mac OS, uses the native file copy function in macostools; on
- other systems, uses '_copy_file_contents()' to copy file contents.
-
- Return a tuple (dest_name, copied): 'dest_name' is the actual name of
- the output file, and 'copied' is true if the file was copied (or would
- have been copied, if 'dry_run' true).
- """
- # XXX if the destination file already exists, we clobber it if
- # copying, but blow up if linking. Hmmm. And I don't know what
- # macostools.copyfile() does. Should definitely be consistent, and
- # should probably blow up if destination exists and we would be
- # changing it (ie. it's not already a hard/soft link to src OR
- # (not update) and (src newer than dst).
-
- from distutils.dep_util import newer
- from stat import ST_ATIME, ST_MTIME, ST_MODE, S_IMODE
-
- if not os.path.isfile(src):
- raise DistutilsFileError(
- "can't copy '%s': doesn't exist or not a regular file" % src
- )
-
- if os.path.isdir(dst):
- dir = dst
- dst = os.path.join(dst, os.path.basename(src))
- else:
- dir = os.path.dirname(dst)
-
- if update and not newer(src, dst):
- if verbose >= 1:
- log.debug("not copying %s (output up-to-date)", src)
- return (dst, 0)
-
- try:
- action = _copy_action[link]
- except KeyError:
- raise ValueError("invalid value '%s' for 'link' argument" % link)
-
- if verbose >= 1:
- if os.path.basename(dst) == os.path.basename(src):
- log.info("%s %s -> %s", action, src, dir)
- else:
- log.info("%s %s -> %s", action, src, dst)
-
- if dry_run:
- return (dst, 1)
-
- # If linking (hard or symbolic), use the appropriate system call
- # (Unix only, of course, but that's the caller's responsibility)
- elif link == 'hard':
- if not (os.path.exists(dst) and os.path.samefile(src, dst)):
- try:
- os.link(src, dst)
- return (dst, 1)
- except OSError:
- # If hard linking fails, fall back on copying file
- # (some special filesystems don't support hard linking
- # even under Unix, see issue #8876).
- pass
- elif link == 'sym':
- if not (os.path.exists(dst) and os.path.samefile(src, dst)):
- os.symlink(src, dst)
- return (dst, 1)
-
- # Otherwise (non-Mac, not linking), copy the file contents and
- # (optionally) copy the times and mode.
- _copy_file_contents(src, dst)
- if preserve_mode or preserve_times:
- st = os.stat(src)
-
-    # According to David Ascher, utime() should be done
- # before chmod() (at least under NT).
- if preserve_times:
- os.utime(dst, (st[ST_ATIME], st[ST_MTIME]))
- if preserve_mode:
- os.chmod(dst, S_IMODE(st[ST_MODE]))
-
- return (dst, 1)
-
-
-# XXX I suspect this is Unix-specific -- need porting help!
-def move_file(src, dst, verbose=1, dry_run=0): # noqa: C901
-
- """Move a file 'src' to 'dst'. If 'dst' is a directory, the file will
- be moved into it with the same name; otherwise, 'src' is just renamed
- to 'dst'. Return the new full name of the file.
-
- Handles cross-device moves on Unix using 'copy_file()'. What about
- other systems???
- """
- from os.path import exists, isfile, isdir, basename, dirname
- import errno
-
- if verbose >= 1:
- log.info("moving %s -> %s", src, dst)
-
- if dry_run:
- return dst
-
- if not isfile(src):
- raise DistutilsFileError("can't move '%s': not a regular file" % src)
-
- if isdir(dst):
- dst = os.path.join(dst, basename(src))
- elif exists(dst):
- raise DistutilsFileError(
- "can't move '{}': destination '{}' already exists".format(src, dst)
- )
-
- if not isdir(dirname(dst)):
- raise DistutilsFileError(
- "can't move '{}': destination '{}' not a valid path".format(src, dst)
- )
-
- copy_it = False
- try:
- os.rename(src, dst)
- except OSError as e:
- (num, msg) = e.args
- if num == errno.EXDEV:
- copy_it = True
- else:
- raise DistutilsFileError(
- "couldn't move '{}' to '{}': {}".format(src, dst, msg)
- )
-
- if copy_it:
- copy_file(src, dst, verbose=verbose)
- try:
- os.unlink(src)
- except OSError as e:
- (num, msg) = e.args
- try:
- os.unlink(dst)
- except OSError:
- pass
- raise DistutilsFileError(
- "couldn't move '%s' to '%s' by copy/delete: "
- "delete '%s' failed: %s" % (src, dst, src, msg)
- )
- return dst
-
-
-def write_file(filename, contents):
- """Create a file with the specified name and write 'contents' (a
- sequence of strings without line terminators) to it.
- """
- f = open(filename, "w")
- try:
- for line in contents:
- f.write(line + "\n")
- finally:
- f.close()
diff --git a/spaces/Billyosoro/ESRGAN/realesrgan/archs/__init__.py b/spaces/Billyosoro/ESRGAN/realesrgan/archs/__init__.py
deleted file mode 100644
index f3fbbf3b78e33b61fd4c33a564a9a617010d90de..0000000000000000000000000000000000000000
--- a/spaces/Billyosoro/ESRGAN/realesrgan/archs/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-import importlib
-from basicsr.utils import scandir
-from os import path as osp
-
-# automatically scan and import arch modules for registry
-# scan all the files that end with '_arch.py' under the archs folder
-arch_folder = osp.dirname(osp.abspath(__file__))
-arch_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(arch_folder) if v.endswith('_arch.py')]
-# import all the arch modules
-_arch_modules = [importlib.import_module(f'realesrgan.archs.{file_name}') for file_name in arch_filenames]
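Because the loop above imports every *_arch.py module, their classes land in basicsr's ARCH_REGISTRY and can be looked up by name. A sketch, assuming 'SRVGGNetCompact' is one of the scanned architectures (an editorial illustration, not part of the deleted file):

from basicsr.utils.registry import ARCH_REGISTRY

import realesrgan.archs  # noqa: F401  triggers the scan-and-import above

model_cls = ARCH_REGISTRY.get('SRVGGNetCompact')  # architecture name assumed
model = model_cls(num_in_ch=3, num_out_ch=3)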
diff --git a/spaces/BuBBLe1q/anything-v3.0/app.py b/spaces/BuBBLe1q/anything-v3.0/app.py
deleted file mode 100644
index 99a6a3762d5e337f08e960c4a31b4ac2467bca49..0000000000000000000000000000000000000000
--- a/spaces/BuBBLe1q/anything-v3.0/app.py
+++ /dev/null
@@ -1,8 +0,0 @@
-import gradio as gr
-
-description = """
-
-
- """
-
-gr.Interface.load("models/Linaqruf/anything-v3.0", description=description).launch()
\ No newline at end of file
diff --git a/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/iterator_category_to_system.h b/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/iterator_category_to_system.h
deleted file mode 100644
index fd378fae7314fac33f4fadf5cb1ae348dbeaa0e7..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/iterator_category_to_system.h
+++ /dev/null
@@ -1,80 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-
-namespace thrust
-{
-
-namespace detail
-{
-
-// forward declaration
-template struct is_iterator_system;
-
-template struct device_iterator_category_to_backend_system;
-
-// XXX this should work entirely differently
-// we should just specialize this metafunction for iterator_category_with_system_and_traversal
-template
- struct iterator_category_to_system
- // convertible to host iterator?
- : eval_if<
- or_<
- is_convertible,
- is_convertible
- >::value,
-
- detail::identity_,
-
- // convertible to device iterator?
- eval_if<
- or_<
- is_convertible,
- is_convertible
- >::value,
-
- detail::identity_,
-
- // unknown system
- detail::identity_
- > // if device
- > // if host
-{
-}; // end iterator_category_to_system
-
-
-template
- struct iterator_category_or_traversal_to_system
- : eval_if<
- is_iterator_system::value,
- detail::identity_,
- iterator_category_to_system
- >
-{
-}; // end iterator_category_or_traversal_to_system
-
-} // end detail
-} // end thrust
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/scan.h b/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/scan.h
deleted file mode 100644
index f47dbbc3087c613f36de65f704505340bb8a85b0..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/scan.h
+++ /dev/null
@@ -1,23 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include
-
-// this system inherits scan
-#include
-
diff --git a/spaces/CVPR/WALT/configs/_base_/datasets/walt_vehicle.py b/spaces/CVPR/WALT/configs/_base_/datasets/walt_vehicle.py
deleted file mode 100644
index 466fa524d0f43b8684a01abe57188501787db8a4..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/configs/_base_/datasets/walt_vehicle.py
+++ /dev/null
@@ -1,49 +0,0 @@
-dataset_type = 'WaltDataset'
-data_root = 'data/cwalt_train/'
-data_root_test = 'data/cwalt_test/'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
- dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- samples_per_gpu=5,
- workers_per_gpu=5,
- train=dict(
- type=dataset_type,
- ann_file=data_root + '/',
- img_prefix=data_root + '/',
- pipeline=train_pipeline),
- val=dict(
- type=dataset_type,
- ann_file=data_root_test + '/',
- img_prefix=data_root_test + '/',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- ann_file=data_root_test + '/',
- img_prefix=data_root_test + '/',
- pipeline=test_pipeline))
-evaluation = dict(metric=['bbox', 'segm'])
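A sketch of how a dataset config like the one above is normally consumed with the mmdetection 2.x tooling this repository targets (an editorial illustration; the config path is a placeholder and the custom WaltDataset class must already be registered):

from mmcv import Config
from mmdet.datasets import build_dataset

cfg = Config.fromfile('configs/_base_/datasets/walt_vehicle.py')
train_dataset = build_dataset(cfg.data.train)  # instantiates WaltDataset with the train pipeline
print(len(train_dataset), 'training samples')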
diff --git a/spaces/CVPR/lama-example/models/ade20k/segm_lib/nn/modules/unittest.py b/spaces/CVPR/lama-example/models/ade20k/segm_lib/nn/modules/unittest.py
deleted file mode 100644
index 0675c022e4ba85d38d1f813490f6740150909524..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/models/ade20k/segm_lib/nn/modules/unittest.py
+++ /dev/null
@@ -1,29 +0,0 @@
-# -*- coding: utf-8 -*-
-# File : unittest.py
-# Author : Jiayuan Mao
-# Email : maojiayuan@gmail.com
-# Date : 27/01/2018
-#
-# This file is part of Synchronized-BatchNorm-PyTorch.
-# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
-# Distributed under MIT License.
-
-import unittest
-
-import numpy as np
-from torch.autograd import Variable
-
-
-def as_numpy(v):
- if isinstance(v, Variable):
- v = v.data
- return v.cpu().numpy()
-
-
-class TorchTestCase(unittest.TestCase):
- def assertTensorClose(self, a, b, atol=1e-3, rtol=1e-3):
- npa, npb = as_numpy(a), as_numpy(b)
- self.assertTrue(
- np.allclose(npa, npb, atol=atol),
- 'Tensor close check failed\n{}\n{}\nadiff={}, rdiff={}'.format(a, b, np.abs(npa - npb).max(), np.abs((npa - npb) / np.fmax(npa, 1e-5)).max())
- )
diff --git a/spaces/ChrisCaviar/ControlNet-v1-1/depth_estimator.py b/spaces/ChrisCaviar/ControlNet-v1-1/depth_estimator.py
deleted file mode 100644
index 8af14987f58b59329e5c8441dec43f1075a29d8b..0000000000000000000000000000000000000000
--- a/spaces/ChrisCaviar/ControlNet-v1-1/depth_estimator.py
+++ /dev/null
@@ -1,25 +0,0 @@
-import numpy as np
-import PIL.Image
-from controlnet_aux.util import HWC3
-from transformers import pipeline
-
-from cv_utils import resize_image
-
-
-class DepthEstimator:
- def __init__(self):
- self.model = pipeline('depth-estimation')
-
- def __call__(self, image: np.ndarray, **kwargs) -> PIL.Image.Image:
- detect_resolution = kwargs.pop('detect_resolution', 512)
- image_resolution = kwargs.pop('image_resolution', 512)
- image = np.array(image)
- image = HWC3(image)
- image = resize_image(image, resolution=detect_resolution)
- image = PIL.Image.fromarray(image)
- image = self.model(image)
- image = image['depth']
- image = np.array(image)
- image = HWC3(image)
- image = resize_image(image, resolution=image_resolution)
- return PIL.Image.fromarray(image)
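A short usage sketch for the DepthEstimator above (an editorial illustration, not part of the deleted file; the import path and the all-black input frame are placeholders):

import numpy as np

from depth_estimator import DepthEstimator

estimator = DepthEstimator()                   # downloads the default depth-estimation pipeline on first use
rgb = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a real RGB frame
depth = estimator(rgb, detect_resolution=512, image_resolution=512)
depth.save('depth.png')                        # __call__ returns a PIL.Image.Image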
diff --git a/spaces/Cong723/gpt-academic-public/crazy_functions/test_project/cpp/libJPG/jpge.h b/spaces/Cong723/gpt-academic-public/crazy_functions/test_project/cpp/libJPG/jpge.h
deleted file mode 100644
index a46c805ab80aab491f7f9508b3a008b149866bee..0000000000000000000000000000000000000000
--- a/spaces/Cong723/gpt-academic-public/crazy_functions/test_project/cpp/libJPG/jpge.h
+++ /dev/null
@@ -1,172 +0,0 @@
-
-// jpge.h - C++ class for JPEG compression.
-// Public domain, Rich Geldreich
-// Alex Evans: Added RGBA support, linear memory allocator.
-#ifndef JPEG_ENCODER_H
-#define JPEG_ENCODER_H
-
-#include <stdint.h> // header name lost in extraction; stdint.h assumed for the fixed-width integer types used below
-
-namespace jpge
-{
- typedef unsigned char uint8;
- typedef signed short int16;
- typedef signed int int32;
- typedef unsigned short uint16;
- typedef unsigned int uint32;
- typedef unsigned int uint;
-
- // JPEG chroma subsampling factors. Y_ONLY (grayscale images) and H2V2 (color images) are the most common.
- enum subsampling_t { Y_ONLY = 0, H1V1 = 1, H2V1 = 2, H2V2 = 3 };
-
- // JPEG compression parameters structure.
- struct params
- {
- inline params() : m_quality(85), m_subsampling(H2V2), m_no_chroma_discrim_flag(false), m_two_pass_flag(false) { }
-
- inline bool check_valid() const
- {
- if ((m_quality < 1) || (m_quality > 100)) return false;
- if ((uint)m_subsampling > (uint)H2V2) return false;
- return true;
- }
-
- // Quality: 1-100, higher is better. Typical values are around 50-95.
- int m_quality;
-
- // m_subsampling:
- // 0 = Y (grayscale) only
- // 1 = YCbCr, no subsampling (H1V1, YCbCr 1x1x1, 3 blocks per MCU)
- // 2 = YCbCr, H2V1 subsampling (YCbCr 2x1x1, 4 blocks per MCU)
- // 3 = YCbCr, H2V2 subsampling (YCbCr 4x1x1, 6 blocks per MCU-- very common)
- subsampling_t m_subsampling;
-
- // Disables CbCr discrimination - only intended for testing.
- // If true, the Y quantization table is also used for the CbCr channels.
- bool m_no_chroma_discrim_flag;
-
- bool m_two_pass_flag;
- };
-
- // Writes JPEG image to a file.
- // num_channels must be 1 (Y) or 3 (RGB), image pitch must be width*num_channels.
- bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params());
-
- // Writes JPEG image to memory buffer.
- // On entry, buf_size is the size of the output buffer pointed at by pBuf, which should be at least ~1024 bytes.
- // If return value is true, buf_size will be set to the size of the compressed data.
- bool compress_image_to_jpeg_file_in_memory(void *pBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params());
-
- // Output stream abstract class - used by the jpeg_encoder class to write to the output stream.
- // put_buf() is generally called with len==JPGE_OUT_BUF_SIZE bytes, but for headers it'll be called with smaller amounts.
- class output_stream
- {
- public:
- virtual ~output_stream() { };
- virtual bool put_buf(const void* Pbuf, int64_t len) = 0;
-        template <class T> inline bool put_obj(const T& obj) { return put_buf(&obj, sizeof(T)); }
- };
-
- // Lower level jpeg_encoder class - useful if more control is needed than the above helper functions.
- class jpeg_encoder
- {
- public:
- jpeg_encoder();
- ~jpeg_encoder();
-
- // Initializes the compressor.
- // pStream: The stream object to use for writing compressed data.
- // params - Compression parameters structure, defined above.
- // width, height - Image dimensions.
- // channels - May be 1, or 3. 1 indicates grayscale, 3 indicates RGB source data.
- // Returns false on out of memory or if a stream write fails.
- bool init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params &comp_params = params());
-
- const params &get_params() const { return m_params; }
-
- // Deinitializes the compressor, freeing any allocated memory. May be called at any time.
- void deinit();
-
- uint get_total_passes() const { return m_params.m_two_pass_flag ? 2 : 1; }
- inline uint get_cur_pass() { return m_pass_num; }
-
- // Call this method with each source scanline.
- // width * src_channels bytes per scanline is expected (RGB or Y format).
- // You must call with NULL after all scanlines are processed to finish compression.
- // Returns false on out of memory or if a stream write fails.
- bool process_scanline(const void* pScanline);
-
- private:
- jpeg_encoder(const jpeg_encoder &);
- jpeg_encoder &operator =(const jpeg_encoder &);
-
- typedef int32 sample_array_t;
-
- output_stream *m_pStream;
- params m_params;
- uint8 m_num_components;
- uint8 m_comp_h_samp[3], m_comp_v_samp[3];
- int m_image_x, m_image_y, m_image_bpp, m_image_bpl;
- int m_image_x_mcu, m_image_y_mcu;
- int m_image_bpl_xlt, m_image_bpl_mcu;
- int m_mcus_per_row;
- int m_mcu_x, m_mcu_y;
- uint8 *m_mcu_lines[16];
- uint8 m_mcu_y_ofs;
- sample_array_t m_sample_array[64];
- int16 m_coefficient_array[64];
- int32 m_quantization_tables[2][64];
- uint m_huff_codes[4][256];
- uint8 m_huff_code_sizes[4][256];
- uint8 m_huff_bits[4][17];
- uint8 m_huff_val[4][256];
- uint32 m_huff_count[4][256];
- int m_last_dc_val[3];
- enum { JPGE_OUT_BUF_SIZE = 2048 };
- uint8 m_out_buf[JPGE_OUT_BUF_SIZE];
- uint8 *m_pOut_buf;
- uint m_out_buf_left;
- uint32 m_bit_buffer;
- uint m_bits_in;
- uint8 m_pass_num;
- bool m_all_stream_writes_succeeded;
-
- void optimize_huffman_table(int table_num, int table_len);
- void emit_byte(uint8 i);
- void emit_word(uint i);
- void emit_marker(int marker);
- void emit_jfif_app0();
- void emit_dqt();
- void emit_sof();
- void emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag);
- void emit_dhts();
- void emit_sos();
- void emit_markers();
- void compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val);
- void compute_quant_table(int32 *dst, int16 *src);
- void adjust_quant_table(int32 *dst, int32 *src);
- void first_pass_init();
- bool second_pass_init();
- bool jpg_open(int p_x_res, int p_y_res, int src_channels);
- void load_block_8_8_grey(int x);
- void load_block_8_8(int x, int y, int c);
- void load_block_16_8(int x, int c);
- void load_block_16_8_8(int x, int c);
- void load_quantized_coefficients(int component_num);
- void flush_output_buffer();
- void put_bits(uint bits, uint len);
- void code_coefficients_pass_one(int component_num);
- void code_coefficients_pass_two(int component_num);
- void code_block(int component_num);
- void process_mcu_row();
- bool terminate_pass_one();
- bool terminate_pass_two();
- bool process_end_of_image();
- void load_mcu(const void* src);
- void clear();
- void init();
- };
-
-} // namespace jpge
-
-#endif // JPEG_ENCODER
\ No newline at end of file
diff --git a/spaces/Cong723/gpt-academic-public/request_llm/bridge_all.py b/spaces/Cong723/gpt-academic-public/request_llm/bridge_all.py
deleted file mode 100644
index fddc9a756f062b68610737123ea39b6a83698a42..0000000000000000000000000000000000000000
--- a/spaces/Cong723/gpt-academic-public/request_llm/bridge_all.py
+++ /dev/null
@@ -1,240 +0,0 @@
-
-"""
-    This file mainly contains two functions that form the universal interface to every LLM. They dispatch
-    further down to the lower-level model bridges and handle details such as querying several models in parallel.
-
-    Function without multi-threading support: used for normal conversation, with full interactive features; must not be multi-threaded
-    1. predict(...)
-
-    Function that supports multi-threaded calls: invoked from function plugins; flexible and concise
-    2. predict_no_ui_long_connection(...)
-"""
-import tiktoken
-from functools import lru_cache
-from concurrent.futures import ThreadPoolExecutor
-from toolbox import get_conf, trimmed_format_exc
-
-from .bridge_chatgpt import predict_no_ui_long_connection as chatgpt_noui
-from .bridge_chatgpt import predict as chatgpt_ui
-
-from .bridge_chatglm import predict_no_ui_long_connection as chatglm_noui
-from .bridge_chatglm import predict as chatglm_ui
-
-from .bridge_newbing import predict_no_ui_long_connection as newbing_noui
-from .bridge_newbing import predict as newbing_ui
-
-# from .bridge_tgui import predict_no_ui_long_connection as tgui_noui
-# from .bridge_tgui import predict as tgui_ui
-
-colors = ['#FF00FF', '#00FFFF', '#FF0000', '#990099', '#009999', '#990044']
-
-class LazyloadTiktoken(object):
- def __init__(self, model):
- self.model = model
-
- @staticmethod
- @lru_cache(maxsize=128)
- def get_encoder(model):
-        print('Loading tokenizer; the first run may take a moment to download its parameters')
-        tmp = tiktoken.encoding_for_model(model)
-        print('Tokenizer loaded')
- return tmp
-
- def encode(self, *args, **kwargs):
- encoder = self.get_encoder(self.model)
- return encoder.encode(*args, **kwargs)
-
- def decode(self, *args, **kwargs):
- encoder = self.get_encoder(self.model)
- return encoder.decode(*args, **kwargs)
-
-# Endpoint redirection
-API_URL_REDIRECT, = get_conf("API_URL_REDIRECT")
-openai_endpoint = "https://api.openai.com/v1/chat/completions"
-api2d_endpoint = "https://openai.api2d.net/v1/chat/completions"
-newbing_endpoint = "wss://sydney.bing.com/sydney/ChatHub"
-# Backward compatibility with the legacy configuration
-try:
- API_URL, = get_conf("API_URL")
- if API_URL != "https://api.openai.com/v1/chat/completions":
- openai_endpoint = API_URL
- print("警告!API_URL配置选项将被弃用,请更换为API_URL_REDIRECT配置")
-except:
- pass
-# New-style configuration
-if openai_endpoint in API_URL_REDIRECT: openai_endpoint = API_URL_REDIRECT[openai_endpoint]
-if api2d_endpoint in API_URL_REDIRECT: api2d_endpoint = API_URL_REDIRECT[api2d_endpoint]
-if newbing_endpoint in API_URL_REDIRECT: newbing_endpoint = API_URL_REDIRECT[newbing_endpoint]
-
-
-# Build the tokenizers
-tokenizer_gpt35 = LazyloadTiktoken("gpt-3.5-turbo")
-tokenizer_gpt4 = LazyloadTiktoken("gpt-4")
-get_token_num_gpt35 = lambda txt: len(tokenizer_gpt35.encode(txt, disallowed_special=()))
-get_token_num_gpt4 = lambda txt: len(tokenizer_gpt4.encode(txt, disallowed_special=()))
-
-
-model_info = {
- # openai
- "gpt-3.5-turbo": {
- "fn_with_ui": chatgpt_ui,
- "fn_without_ui": chatgpt_noui,
- "endpoint": openai_endpoint,
- "max_token": 4096,
- "tokenizer": tokenizer_gpt35,
- "token_cnt": get_token_num_gpt35,
- },
-
- "gpt-4": {
- "fn_with_ui": chatgpt_ui,
- "fn_without_ui": chatgpt_noui,
- "endpoint": openai_endpoint,
- "max_token": 8192,
- "tokenizer": tokenizer_gpt4,
- "token_cnt": get_token_num_gpt4,
- },
-
- # api_2d
- "api2d-gpt-3.5-turbo": {
- "fn_with_ui": chatgpt_ui,
- "fn_without_ui": chatgpt_noui,
- "endpoint": api2d_endpoint,
- "max_token": 4096,
- "tokenizer": tokenizer_gpt35,
- "token_cnt": get_token_num_gpt35,
- },
-
- "api2d-gpt-4": {
- "fn_with_ui": chatgpt_ui,
- "fn_without_ui": chatgpt_noui,
- "endpoint": api2d_endpoint,
- "max_token": 8192,
- "tokenizer": tokenizer_gpt4,
- "token_cnt": get_token_num_gpt4,
- },
-
- # chatglm
- "chatglm": {
- "fn_with_ui": chatglm_ui,
- "fn_without_ui": chatglm_noui,
- "endpoint": None,
- "max_token": 1024,
- "tokenizer": tokenizer_gpt35,
- "token_cnt": get_token_num_gpt35,
- },
- # newbing
- "newbing": {
- "fn_with_ui": newbing_ui,
- "fn_without_ui": newbing_noui,
- "endpoint": newbing_endpoint,
- "max_token": 4096,
- "tokenizer": tokenizer_gpt35,
- "token_cnt": get_token_num_gpt35,
- },
-}
-
-
-def LLM_CATCH_EXCEPTION(f):
- """
-    Decorator that catches exceptions and surfaces the formatted traceback
- """
- def decorated(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience):
- try:
- return f(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience)
- except Exception as e:
- tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
- observe_window[0] = tb_str
- return tb_str
- return decorated
-
-
-def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience=False):
- """
-    Send the request to the LLM and wait for the reply; the call completes in one shot and does not show intermediate output, but internally it still streams so that a long-running connection is not dropped halfway.
-    inputs:
-        the input of this query
-    sys_prompt:
-        the silent system prompt
-    llm_kwargs:
-        internal tuning parameters of the LLM
-    history:
-        the list of previous dialogue turns
-    observe_window = None:
-        used to pass the already-generated output across threads; most of the time it only serves a fancy visual effect and can be left empty. observe_window[0]: observation window. observe_window[1]: watchdog
- """
- import threading, time, copy
-
- model = llm_kwargs['llm_model']
- n_model = 1
- if '&' not in model:
-        assert not model.startswith("tgui"), "TGUI does not support function plugins"
-
-        # If only one LLM is being queried:
-        method = model_info[model]["fn_without_ui"]
-        return method(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience)
-    else:
-        # If several LLMs are being queried at the same time:
- executor = ThreadPoolExecutor(max_workers=4)
- models = model.split('&')
- n_model = len(models)
-
- window_len = len(observe_window)
- assert window_len==3
- window_mutex = [["", time.time(), ""] for _ in range(n_model)] + [True]
-
- futures = []
- for i in range(n_model):
- model = models[i]
- method = model_info[model]["fn_without_ui"]
- llm_kwargs_feedin = copy.deepcopy(llm_kwargs)
- llm_kwargs_feedin['llm_model'] = model
- future = executor.submit(LLM_CATCH_EXCEPTION(method), inputs, llm_kwargs_feedin, history, sys_prompt, window_mutex[i], console_slience)
- futures.append(future)
-
- def mutex_manager(window_mutex, observe_window):
- while True:
- time.sleep(0.25)
- if not window_mutex[-1]: break
-                # watchdog
-                for i in range(n_model):
-                    window_mutex[i][1] = observe_window[1]
-                # observation window
- chat_string = []
- for i in range(n_model):
- chat_string.append( f"【{str(models[i])} 说】: {window_mutex[i][0]} " )
-                res = '<br/>\n\n---\n\n'.join(chat_string)
- # # # # # # # # # # #
- observe_window[0] = res
-
- t_model = threading.Thread(target=mutex_manager, args=(window_mutex, observe_window), daemon=True)
- t_model.start()
-
- return_string_collect = []
- while True:
- worker_done = [h.done() for h in futures]
- if all(worker_done):
- executor.shutdown()
- break
- time.sleep(1)
-
- for i, future in enumerate(futures): # wait and get
- return_string_collect.append( f"【{str(models[i])} 说】: {future.result()} " )
-
- window_mutex[-1] = False # stop mutex thread
-        res = '<br/>\n\n---\n\n'.join(return_string_collect)
- return res
-
-
-def predict(inputs, llm_kwargs, *args, **kwargs):
- """
-    Send the request to the LLM and fetch the output as a stream.
-    Used for the basic conversation feature.
-    inputs is the input of this query
-    top_p, temperature are internal tuning parameters of the LLM
-    history is the list of previous dialogue turns (note: if either inputs or history is too long, a token-overflow error will be triggered)
-    chatbot is the dialogue list shown in the WebUI; modify it and then yield it to update the chat interface directly
-    additional_fn indicates which button was clicked; see functional.py for the available buttons
- """
-
- method = model_info[llm_kwargs['llm_model']]["fn_with_ui"]
- yield from method(inputs, llm_kwargs, *args, **kwargs)
-
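The file deleted above routes every request through its `model_info` table and, when the model name contains `&`, fans the same prompt out to several backends on parallel threads. A minimal, self-contained sketch of that dispatch pattern (the `echo_model`/`shout_model` handlers and the `query` helper below are made-up stand-ins for the real bridges, not part of the project):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the real per-model bridge functions imported above.
def echo_model(prompt: str) -> str:
    return f"echo: {prompt}"

def shout_model(prompt: str) -> str:
    return f"SHOUT: {prompt.upper()}"

# Dispatch table in the spirit of model_info: one entry per backend.
MODEL_INFO = {
    "echo": {"fn_without_ui": echo_model},
    "shout": {"fn_without_ui": shout_model},
}

def query(model: str, prompt: str) -> str:
    """Query one backend, or several backends joined by '&', in parallel."""
    names = model.split("&")
    if len(names) == 1:
        return MODEL_INFO[names[0]]["fn_without_ui"](prompt)
    with ThreadPoolExecutor(max_workers=len(names)) as pool:
        futures = [pool.submit(MODEL_INFO[n]["fn_without_ui"], prompt) for n in names]
        return "\n\n---\n\n".join(f"[{n}]: {f.result()}" for n, f in zip(names, futures))

if __name__ == "__main__":
    print(query("echo&shout", "hello"))
```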
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/mask.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/mask.py
deleted file mode 100644
index d660607b1a798c38ed0495ec4acb3b14de735d35..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/mask.py
+++ /dev/null
@@ -1,82 +0,0 @@
-import cv2
-import numpy as np
-
-import util
-from util import nb as neighbour
-
-
-def find_white_components(mask, min_area = 0):
- mask = (mask == 0) * 1
-    return find_black_components(mask, min_area)
-
-def find_black_components(mask, min_area = 0):
- """
- find components of zeros.
- mask is a 0-1 matrix, ndarray.
- """
- neighbour_type = neighbour.N4
- visited = mask.copy()
- c_mask = util.img.black(mask)
-
- root_idx = [1]
- def get_new_root():
- root_idx[0] += 1
- return root_idx[0]
-
- def is_visited(xy):
- x, y = xy
- return visited[y][x]
-
- def set_visited(xy):
- x, y = xy
- visited[y][x] = 255
-
- def set_root(xy, root):
- x, y = xy
- c_mask[y][x] = root
-
- def get_root(xy):
- x, y = xy
- return c_mask[y][x]
-
- rows, cols = np.shape(mask)
- q = []
-    for y in range(rows):
-        for x in range(cols):
- xy = (x, y)
- if is_visited(xy):
- continue
-
- q.append(xy)
- new_root = get_new_root()
- while len(q) > 0:
- cp = q.pop()
- set_root(cp, new_root)
- set_visited(cp)
- nbs = neighbour.get_neighbours(cp[0], cp[1], cols, rows, neighbour_type)
- for nb in nbs:
- if not is_visited(nb) and nb not in q:
-# q.append(nb)
- q.insert(0, nb)
-
- components = {}
-    for y in range(rows):
-        for x in range(cols):
- root = get_root((x, y))
- if root == 0:
- continue
-
- if root not in components:
- components[root] = []
-
- components[root].append((x,y))
-
- ret = []
-
- for root in components:
- if len(components[root]) >= min_area:
- ret.append(components[root])
-
- return ret
-
-
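The helper deleted above walks the zero pixels of a binary mask with a hand-rolled 4-connected flood fill that depends on the project's local `util` module. For reference, an equivalent, self-contained version built on `scipy.ndimage` (an alternative sketch, not the repo's implementation) could look like this:

```python
import numpy as np
from scipy import ndimage

def find_black_components(mask: np.ndarray, min_area: int = 0):
    """Return a list of (x, y) pixel lists, one per 4-connected component of zeros."""
    four_connected = np.array([[0, 1, 0],
                               [1, 1, 1],
                               [0, 1, 0]])  # same N4 neighbourhood as the deleted helper
    labels, n = ndimage.label(mask == 0, structure=four_connected)
    components = []
    for idx in range(1, n + 1):
        ys, xs = np.nonzero(labels == idx)
        if len(xs) >= min_area:
            components.append(list(zip(xs.tolist(), ys.tolist())))
    return components

if __name__ == "__main__":
    m = np.array([[1, 0, 0],
                  [1, 1, 0],
                  [0, 1, 1]])
    print(len(find_black_components(m)))  # 2: one zero region top-right, one bottom-left
```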
diff --git a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/tasks/image_text_pretrain.py b/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/tasks/image_text_pretrain.py
deleted file mode 100644
index db955f27bb7dc8093cffd95b3a26917bb681c846..0000000000000000000000000000000000000000
--- a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/tasks/image_text_pretrain.py
+++ /dev/null
@@ -1,18 +0,0 @@
-"""
- Copyright (c) 2022, salesforce.com, inc.
- All rights reserved.
- SPDX-License-Identifier: BSD-3-Clause
- For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-
-from video_llama.common.registry import registry
-from video_llama.tasks.base_task import BaseTask
-
-
-@registry.register_task("image_text_pretrain")
-class ImageTextPretrainTask(BaseTask):
- def __init__(self):
- super().__init__()
-
- def evaluation(self, model, data_loader, cuda_enabled=True):
- pass
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/chat_interface.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/chat_interface.py
deleted file mode 100644
index 7c6bc63455e91bc0709eb3d238e573eb18897271..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/chat_interface.py
+++ /dev/null
@@ -1,355 +0,0 @@
-"""
-This file defines a useful high-level abstraction to build Gradio chatbots: ChatInterface.
-"""
-
-
-from __future__ import annotations
-
-import inspect
-import warnings
-from typing import Callable, Generator
-
-from gradio_client.documentation import document, set_documentation_group
-
-from gradio.blocks import Blocks
-from gradio.components import (
- Button,
- Chatbot,
- Markdown,
- State,
- Textbox,
-)
-from gradio.helpers import create_examples as Examples # noqa: N812
-from gradio.layouts import Group, Row
-from gradio.themes import ThemeClass as Theme
-
-set_documentation_group("chatinterface")
-
-
-@document()
-class ChatInterface(Blocks):
- """
- ChatInterface is Gradio's high-level abstraction for creating chatbot UIs, and allows you to create
- a web-based demo around a chatbot model in a few lines of code. Only one parameter is required: fn, which
- takes a function that governs the response of the chatbot based on the user input and chat history. Additional
- parameters can be used to control the appearance and behavior of the demo.
-
- Example:
- import gradio as gr
-
- def echo(message, history):
- return message
-
- demo = gr.ChatInterface(fn=echo, examples=["hello", "hola", "merhaba"], title="Echo Bot")
- demo.launch()
- Demos: chatinterface_random_response, chatinterface_streaming_echo
- Guides: creating-a-chatbot-fast, sharing-your-app
- """
-
- def __init__(
- self,
- fn: Callable,
- *,
- chatbot: Chatbot | None = None,
- textbox: Textbox | None = None,
- examples: list[str] | None = None,
- cache_examples: bool | None = None,
- title: str | None = None,
- description: str | None = None,
- theme: Theme | str | None = None,
- css: str | None = None,
- analytics_enabled: bool | None = None,
- submit_btn: str | None | Button = "Submit",
- retry_btn: str | None | Button = "🔄 Retry",
- undo_btn: str | None | Button = "↩️ Undo",
- clear_btn: str | None | Button = "🗑️ Clear",
- ):
- """
- Parameters:
- fn: the function to wrap the chat interface around. Should accept two parameters: a string input message and list of two-element lists of the form [[user_message, bot_message], ...] representing the chat history, and return a string response. See the Chatbot documentation for more information on the chat history format.
- chatbot: an instance of the gr.Chatbot component to use for the chat interface, if you would like to customize the chatbot properties. If not provided, a default gr.Chatbot component will be created.
- textbox: an instance of the gr.Textbox component to use for the chat interface, if you would like to customize the textbox properties. If not provided, a default gr.Textbox component will be created.
- examples: sample inputs for the function; if provided, appear below the chatbot and can be clicked to populate the chatbot input.
- cache_examples: If True, caches examples in the server for fast runtime in examples. The default option in HuggingFace Spaces is True. The default option elsewhere is False.
- title: a title for the interface; if provided, appears above chatbot in large font. Also used as the tab title when opened in a browser window.
- description: a description for the interface; if provided, appears above the chatbot and beneath the title in regular font. Accepts Markdown and HTML content.
- theme: Theme to use, loaded from gradio.themes.
- css: custom css or path to custom css file to use with interface.
- analytics_enabled: Whether to allow basic telemetry. If None, will use GRADIO_ANALYTICS_ENABLED environment variable if defined, or default to True.
- submit_btn: Text to display on the submit button. If None, no button will be displayed. If a Button object, that button will be used.
- retry_btn: Text to display on the retry button. If None, no button will be displayed. If a Button object, that button will be used.
- undo_btn: Text to display on the delete last button. If None, no button will be displayed. If a Button object, that button will be used.
- clear_btn: Text to display on the clear button. If None, no button will be displayed. If a Button object, that button will be used.
- """
- super().__init__(
- analytics_enabled=analytics_enabled,
- mode="chat_interface",
- css=css,
- title=title or "Gradio",
- theme=theme,
- )
- if len(inspect.signature(fn).parameters) != 2:
- warnings.warn(
- "The function to ChatInterface should take two inputs (message, history) and return a single string response.",
- UserWarning,
- )
-
- self.fn = fn
- self.examples = examples
- if self.space_id and cache_examples is None:
- self.cache_examples = True
- else:
- self.cache_examples = cache_examples or False
- self.buttons: list[Button] = []
-
- with self:
- if title:
- Markdown(
- f"
{self.title}
"
- )
- if description:
- Markdown(description)
-
- with Group():
- if chatbot:
- self.chatbot = chatbot.render()
- else:
- self.chatbot = Chatbot(label="Chatbot")
- with Row():
- if textbox:
- self.textbox = textbox.render()
- else:
- self.textbox = Textbox(
- container=False,
- show_label=False,
- placeholder="Type a message...",
- scale=10,
- )
- if submit_btn:
- if isinstance(submit_btn, Button):
- submit_btn.render()
- elif isinstance(submit_btn, str):
- submit_btn = Button(
- submit_btn, variant="primary", scale=1, min_width=0
- )
- else:
- raise ValueError(
- f"The submit_btn parameter must be a gr.Button, string, or None, not {type(submit_btn)}"
- )
- self.buttons.append(submit_btn)
-
- with Row():
- self.stop_btn = Button("Stop", variant="stop", visible=False)
-
- for btn in [retry_btn, undo_btn, clear_btn]:
- if btn:
- if isinstance(btn, Button):
- btn.render()
- elif isinstance(btn, str):
- btn = Button(btn, variant="secondary")
- else:
- raise ValueError(
- f"All the _btn parameters must be a gr.Button, string, or None, not {type(btn)}"
- )
- self.buttons.append(btn)
-
- self.fake_api_btn = Button("Fake API", visible=False)
- self.fake_response_textbox = Textbox(label="Response", visible=False)
- (
- self.submit_btn,
- self.retry_btn,
- self.undo_btn,
- self.clear_btn,
- ) = self.buttons
-
- if examples:
- if inspect.isgeneratorfunction(self.fn):
- examples_fn = self._examples_stream_fn
- else:
- examples_fn = self._examples_fn
-
- self.examples_handler = Examples(
- examples=examples,
- inputs=self.textbox,
- outputs=self.chatbot,
- fn=examples_fn,
- cache_examples=self.cache_examples,
- )
-
- self.saved_input = State()
-
- self._setup_events()
- self._setup_api()
-
- def _setup_events(self):
- if inspect.isgeneratorfunction(self.fn):
- submit_fn = self._stream_fn
- else:
- submit_fn = self._submit_fn
-
- self.textbox.submit(
- self._clear_and_save_textbox,
- [self.textbox],
- [self.textbox, self.saved_input],
- api_name=False,
- queue=False,
- ).then(
- self._display_input,
- [self.saved_input, self.chatbot],
- [self.chatbot],
- api_name=False,
- queue=False,
- ).then(
- submit_fn,
- [self.saved_input, self.chatbot],
- [self.chatbot],
- api_name=False,
- )
-
- if self.submit_btn:
- self.submit_btn.click(
- self._clear_and_save_textbox,
- [self.textbox],
- [self.textbox, self.saved_input],
- api_name=False,
- queue=False,
- ).then(
- self._display_input,
- [self.saved_input, self.chatbot],
- [self.chatbot],
- api_name=False,
- queue=False,
- ).then(
- submit_fn,
- [self.saved_input, self.chatbot],
- [self.chatbot],
- api_name=False,
- )
-
- if self.retry_btn:
- self.retry_btn.click(
- self._delete_prev_fn,
- [self.chatbot],
- [self.chatbot, self.saved_input],
- api_name=False,
- queue=False,
- ).then(
- self._display_input,
- [self.saved_input, self.chatbot],
- [self.chatbot],
- api_name=False,
- queue=False,
- ).then(
- submit_fn,
- [self.saved_input, self.chatbot],
- [self.chatbot],
- api_name=False,
- )
-
- if self.undo_btn:
- self.undo_btn.click(
- self._delete_prev_fn,
- [self.chatbot],
- [self.chatbot, self.saved_input],
- api_name=False,
- queue=False,
- ).then(
- lambda x: x,
- [self.saved_input],
- [self.textbox],
- api_name=False,
- queue=False,
- )
-
- if self.clear_btn:
- self.clear_btn.click(
- lambda: ([], None),
- None,
- [self.chatbot, self.saved_input],
- queue=False,
- api_name=False,
- )
-
- def _setup_api(self):
- if inspect.isgeneratorfunction(self.fn):
- api_fn = self._api_stream_fn
- else:
- api_fn = self._api_submit_fn
-
- # Use a gr.State() instead of self.chatbot so that the API doesn't require passing forth
- # a chat history, instead it is just stored internally in the state.
- history = State([])
-
- self.fake_api_btn.click(
- api_fn,
- [self.textbox, history],
- [self.textbox, history],
- api_name="chat",
- )
-
- def _clear_and_save_textbox(self, message: str) -> tuple[str, str]:
- return "", message
-
- def _display_input(
- self, message: str, history: list[list[str | None]]
- ) -> list[list[str | None]]:
- history.append([message, None])
- return history
-
- def _submit_fn(
- self, message: str, history_with_input: list[list[str | None]]
- ) -> list[list[str | None]]:
- history = history_with_input[:-1]
- response = self.fn(message, history)
- history.append([message, response])
- return history
-
- def _stream_fn(
- self, message: str, history_with_input: list[list[str | None]]
- ) -> Generator[list[list[str | None]], None, None]:
- history = history_with_input[:-1]
- generator = self.fn(message, history)
- try:
- first_response = next(generator)
- yield history + [[message, first_response]]
- except StopIteration:
- yield history + [[message, None]]
- for response in generator:
- yield history + [[message, response]]
-
- def _api_submit_fn(
- self, message: str, history: list[list[str | None]]
- ) -> tuple[str, list[list[str | None]]]:
- response = self.fn(message, history)
- history.append([message, response])
- return response, history
-
- def _api_stream_fn(
- self, message: str, history: list[list[str | None]]
- ) -> Generator[tuple[str | None, list[list[str | None]]], None, None]:
- generator = self.fn(message, history)
- try:
- first_response = next(generator)
- yield first_response, history + [[message, first_response]]
- except StopIteration:
- yield None, history + [[message, None]]
- for response in generator:
- yield response, history + [[message, response]]
-
- def _examples_fn(self, message: str) -> list[list[str | None]]:
- return [[message, self.fn(message, [])]]
-
- def _examples_stream_fn(
- self, message: str
- ) -> Generator[list[list[str | None]], None, None]:
- for response in self.fn(message, []):
- yield [[message, response]]
-
- def _delete_prev_fn(
- self, history: list[list[str | None]]
- ) -> tuple[list[list[str | None]], str]:
- try:
- message, _ = history.pop()
- except IndexError:
- message = ""
- return history, message or ""
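The deleted `ChatInterface` picks its streaming code path (`_stream_fn` / `_api_stream_fn`) whenever `fn` is a generator function. A minimal streaming counterpart of the docstring's echo example, assuming a gradio version that ships `ChatInterface`, could look like:

```python
import time
import gradio as gr

def stream_echo(message, history):
    # Yield progressively longer prefixes; because this is a generator function,
    # ChatInterface routes it through its streaming path (_stream_fn above).
    partial = ""
    for ch in message:
        partial += ch
        time.sleep(0.02)
        yield partial

demo = gr.ChatInterface(fn=stream_echo, title="Streaming Echo Bot")

if __name__ == "__main__":
    demo.launch()
```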
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/cli.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/cli.py
deleted file mode 100644
index aa8e8b9b099adbde4cee9f683feaaa5023895120..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/cli.py
+++ /dev/null
@@ -1,14 +0,0 @@
-import sys
-
-import gradio.deploy_space
-import gradio.reload
-
-
-def cli():
- args = sys.argv[1:]
- if len(args) == 0:
- raise ValueError("No file specified.")
- if args[0] == "deploy":
- gradio.deploy_space.deploy()
- else:
- gradio.reload.main()
diff --git a/spaces/Dimitre/sentence-similarity-use/README.md b/spaces/Dimitre/sentence-similarity-use/README.md
deleted file mode 100644
index 9a01c8d94d19668d29238c87009e90d8876036ad..0000000000000000000000000000000000000000
--- a/spaces/Dimitre/sentence-similarity-use/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Sentence Similarity Use
-emoji: 💩
-colorFrom: red
-colorTo: purple
-sdk: gradio
-sdk_version: 3.1.5
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/stylegan_tf/metrics/frechet_inception_distance.py b/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/stylegan_tf/metrics/frechet_inception_distance.py
deleted file mode 100644
index 41f71fe4bfb85218cc283b3f7bc3a34fea5f790d..0000000000000000000000000000000000000000
--- a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/stylegan_tf/metrics/frechet_inception_distance.py
+++ /dev/null
@@ -1,72 +0,0 @@
-# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
-#
-# This work is licensed under the Creative Commons Attribution-NonCommercial
-# 4.0 International License. To view a copy of this license, visit
-# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to
-# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
-
-"""Frechet Inception Distance (FID)."""
-
-import os
-import numpy as np
-import scipy
-import tensorflow as tf
-import dnnlib.tflib as tflib
-
-from metrics import metric_base
-from training import misc
-
-#----------------------------------------------------------------------------
-
-class FID(metric_base.MetricBase):
- def __init__(self, num_images, minibatch_per_gpu, **kwargs):
- super().__init__(**kwargs)
- self.num_images = num_images
- self.minibatch_per_gpu = minibatch_per_gpu
-
- def _evaluate(self, Gs, num_gpus):
- minibatch_size = num_gpus * self.minibatch_per_gpu
- inception = misc.load_pkl('https://drive.google.com/uc?id=1MzTY44rLToO5APn8TZmfR7_ENSe5aZUn') # inception_v3_features.pkl
- activations = np.empty([self.num_images, inception.output_shape[1]], dtype=np.float32)
-
- # Calculate statistics for reals.
- cache_file = self._get_cache_file_for_reals(num_images=self.num_images)
- os.makedirs(os.path.dirname(cache_file), exist_ok=True)
- if os.path.isfile(cache_file):
- mu_real, sigma_real = misc.load_pkl(cache_file)
- else:
- for idx, images in enumerate(self._iterate_reals(minibatch_size=minibatch_size)):
- begin = idx * minibatch_size
- end = min(begin + minibatch_size, self.num_images)
- activations[begin:end] = inception.run(images[:end-begin], num_gpus=num_gpus, assume_frozen=True)
- if end == self.num_images:
- break
- mu_real = np.mean(activations, axis=0)
- sigma_real = np.cov(activations, rowvar=False)
- misc.save_pkl((mu_real, sigma_real), cache_file)
-
- # Construct TensorFlow graph.
- result_expr = []
- for gpu_idx in range(num_gpus):
- with tf.device('/gpu:%d' % gpu_idx):
- Gs_clone = Gs.clone()
- inception_clone = inception.clone()
- latents = tf.random_normal([self.minibatch_per_gpu] + Gs_clone.input_shape[1:])
- images = Gs_clone.get_output_for(latents, None, is_validation=True, randomize_noise=True)
- images = tflib.convert_images_to_uint8(images)
- result_expr.append(inception_clone.get_output_for(images))
-
- # Calculate statistics for fakes.
- for begin in range(0, self.num_images, minibatch_size):
- end = min(begin + minibatch_size, self.num_images)
- activations[begin:end] = np.concatenate(tflib.run(result_expr), axis=0)[:end-begin]
- mu_fake = np.mean(activations, axis=0)
- sigma_fake = np.cov(activations, rowvar=False)
-
- # Calculate FID.
- m = np.square(mu_fake - mu_real).sum()
- s, _ = scipy.linalg.sqrtm(np.dot(sigma_fake, sigma_real), disp=False) # pylint: disable=no-member
- dist = m + np.trace(sigma_fake + sigma_real - 2*s)
- self._report_result(np.real(dist))
-
-#----------------------------------------------------------------------------
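Stripped of the TensorFlow graph and the Inception feature extraction, the deleted metric reduces to the closed-form Fréchet distance between two Gaussians fitted to real and fake activations. A NumPy/SciPy sketch of just that final step (the `frechet_distance` helper below is illustrative, not part of the repo):

```python
import numpy as np
import scipy.linalg

def frechet_distance(mu_real, sigma_real, mu_fake, sigma_fake):
    """FID between two Gaussians fitted to real and fake feature activations."""
    m = np.square(mu_fake - mu_real).sum()
    s, _ = scipy.linalg.sqrtm(sigma_fake.dot(sigma_real), disp=False)
    return np.real(m + np.trace(sigma_fake + sigma_real - 2 * s))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    real = rng.normal(size=(1000, 8))
    fake = rng.normal(loc=0.5, size=(1000, 8))
    fid = frechet_distance(real.mean(0), np.cov(real, rowvar=False),
                           fake.mean(0), np.cov(fake, rowvar=False))
    print(fid)  # roughly 8 * 0.25 = 2 for this mean shift
```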
diff --git a/spaces/DiweshUIT/Spectrometer/app.py b/spaces/DiweshUIT/Spectrometer/app.py
deleted file mode 100644
index 2f23f60e1161e739270b9ed8247d31fa5de8c5cc..0000000000000000000000000000000000000000
--- a/spaces/DiweshUIT/Spectrometer/app.py
+++ /dev/null
@@ -1,145 +0,0 @@
-import os
-os.system('pip install pycopy-colorsys')  # note: colorsys used below is also in the standard library
-from gradio.components import Label
-import colorsys
-import cv2 as cv
-import gradio as gr
-import matplotlib
-import math
-import matplotlib.pyplot as plt
-import numpy as np
-
-
-def image_mod(image):
- #plt.figure(figsize=(10,10))
- #image1 = cv.imread(r"/content/photo1.jpg")
- grey = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
- #plt.imshow(grey)
- shape=(grey.shape)
- #pixelc=( grey.shape[0] * grey.shape[1])
-
- return grey
-def greet(image):
- grey = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
- #plt.imshow(grey)
- shape=(grey.shape)
- pixelc=( grey.shape[0] * grey.shape[1])
- return shape, pixelc
-
-
-def avg(image):
- l=[]
-
-#You're free to do a resize or not, just for the example
- cap = cv.resize(image, (340,480))
- for x in range (0,340,1):
- for y in range(0,480,1):
- color = cap[y,x]
- l.append(color)
- #print(color)
- n=len(l)
- l2 = [item[0] for item in l]
-#print(l2)
- sumred=0
-#l2 = [item[1] for item in l]
- #print(l2)
- for ele in range(0, len(l2)):
- sumred = sumred + l2[ele]
- answer=sumred/n
- sumgreen=0
- l3 = [item[1] for item in l]
- #print(l3)
- for ele in range(0, len(l3)):
- sumgreen = sumgreen + l3[ele]
- answer1=sumgreen/n
- sumblue=0
- l4 = [item[2] for item in l]
-#print(l4)
- for ele in range(0, len(l4)):
- sumblue = sumblue + l4[ele]
- answer2=sumblue/n
-#print(answer2)
- newp=(answer1+answer2+answer)/3
-    red=answer # note: red and blue channels are swapped here
- green=answer2
- blue=answer1
- #rgb_to_name((0, 0, 0))
- fig=plt.figure()
- plt.imshow([[(math.ceil(red), math.ceil(blue), math.ceil(green))]])
- #plt.show()
- return plt
-def wave(image):
- l=[]
-
-#You're free to do a resize or not, just for the example
- cap = cv.resize(image, (340,480))
- for x in range (0,340,1):
- for y in range(0,480,1):
- color = cap[y,x]
- l.append(color)
- #print(color)
- n=len(l)
- l2 = [item[0] for item in l]
-#print(l2)
- sumred=0
-#l2 = [item[1] for item in l]
- #print(l2)
- for ele in range(0, len(l2)):
- sumred = sumred + l2[ele]
- answer=sumred/n
- sumgreen=0
- l3 = [item[1] for item in l]
- #print(l3)
- for ele in range(0, len(l3)):
- sumgreen = sumgreen + l3[ele]
- answer1=sumgreen/n
- sumblue=0
- l4 = [item[2] for item in l]
-#print(l4)
- for ele in range(0, len(l4)):
- sumblue = sumblue + l4[ele]
- answer2=sumblue/n
-#print(answer2)
- newp=(answer1+answer2+answer)/3
- a1=math.ceil(answer)
- a2=math.ceil(answer1)
- a3=math.ceil(answer2)
-
-
-    #rgb normal: range (0-255, 0-255, 0-255)
-    blue=answer2 # note: red and blue channels are swapped here
-    green=answer1
-    red=answer # note: red and blue channels are swapped here
-    #rgb normal: range (0-255, 0-255, 0-255)
-    #get rgb percentage: range (0-1, 0-1, 0-1)
- red_percentage= red / float(255)
- green_percentage= green/ float(255)
- blue_percentage=blue / float(255)
-
-
- #get hsv percentage: range (0-1, 0-1, 0-1)
- color_hsv_percentage=colorsys.rgb_to_hsv(red_percentage, green_percentage, blue_percentage)
- #print('color_hsv_percentage: ', color_hsv_percentage)
-
-
-
- #get normal hsv: range (0-360, 0-255, 0-255)
- color_h=round(360*color_hsv_percentage[0])
- color_s=round(255*color_hsv_percentage[1])
- color_v=round(255*color_hsv_percentage[2])
-
-    color_hsv=[color_h, color_s, color_v]
-    l = 650 - (250 / 270) * color_h  # map hue (0-270 deg) linearly onto roughly 650-400 nm
- #print('color_hsv: ', color_hsv)
- return l
-
-
-
-demo4=gr.Interface(wave,gr.Image(label="Select your Image"),outputs=[gr.outputs.Textbox(label="Wave Length in nm (Nanometre)")],title="Nature of Wave Length Emitted")
-#demo5=gr.Interface(avg,gr.Image(label="Average Color seen by Cameraman"),outputs="text",title="Color Analysis")
-output3 = gr.Plot()
-demo1 = gr.Interface(image_mod, gr.Image(label="UPLOAD YOUR IMAGE HERE",shape=(2000, 2000)),"image",title="Spectrometer")
-demo3=gr.Interface(avg,gr.Image(label="Average Color seen by the camera"),outputs=[gr.Plot(label="Matplotlib Plot")],title="Expected Color Seen by Camera")
-demo2 = gr.Interface(greet,gr.Image(label="UPLOAD YOUR IMAGE HERE"),outputs="text",title="Dimension and No.of Pixels")
-demo = gr.TabbedInterface([demo1,demo2,demo3,demo4])
-demo.launch()
\ No newline at end of file
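The `wave()` function above boils down to three steps: average the RGB channels, convert the average colour to an HSV hue, and map the hue linearly onto a wavelength (about 650 nm at hue 0 down to about 400 nm at hue 270). A self-contained sketch of that mapping without the OpenCV and Gradio plumbing (the `dominant_wavelength_nm` helper is hypothetical):

```python
import colorsys
import numpy as np

def dominant_wavelength_nm(rgb_image: np.ndarray) -> float:
    """Average the RGB channels, take the HSV hue, and map hue 0-270 deg
    linearly onto roughly 650-400 nm, as wave() does above."""
    r, g, b = rgb_image.reshape(-1, 3).mean(axis=0) / 255.0
    hue_deg = colorsys.rgb_to_hsv(r, g, b)[0] * 360.0
    return 650.0 - (250.0 / 270.0) * hue_deg

if __name__ == "__main__":
    green = np.full((4, 4, 3), (0, 255, 0), dtype=np.uint8)
    print(round(dominant_wavelength_nm(green)))  # hue 120 deg -> about 539 nm
```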
diff --git a/spaces/DragGan/DragGan-Inversion/PTI/configs/__init__.py b/spaces/DragGan/DragGan-Inversion/PTI/configs/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/DragGan/DragGan-Inversion/PTI/models/e4e/stylegan2/model.py b/spaces/DragGan/DragGan-Inversion/PTI/models/e4e/stylegan2/model.py
deleted file mode 100644
index ede4360148e260363887662bae7fe68c987ee60e..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/PTI/models/e4e/stylegan2/model.py
+++ /dev/null
@@ -1,674 +0,0 @@
-import math
-import random
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from .op.fused_act import FusedLeakyReLU, fused_leaky_relu
-from .op.upfirdn2d import upfirdn2d
-
-
-class PixelNorm(nn.Module):
- def __init__(self):
- super().__init__()
-
- def forward(self, input):
- return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8)
-
-
-def make_kernel(k):
- k = torch.tensor(k, dtype=torch.float32)
-
- if k.ndim == 1:
- k = k[None, :] * k[:, None]
-
- k /= k.sum()
-
- return k
-
-
-class Upsample(nn.Module):
- def __init__(self, kernel, factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel) * (factor ** 2)
- self.register_buffer('kernel', kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad)
-
- return out
-
-
-class Downsample(nn.Module):
- def __init__(self, kernel, factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel)
- self.register_buffer('kernel', kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad)
-
- return out
-
-
-class Blur(nn.Module):
- def __init__(self, kernel, pad, upsample_factor=1):
- super().__init__()
-
- kernel = make_kernel(kernel)
-
- if upsample_factor > 1:
- kernel = kernel * (upsample_factor ** 2)
-
- self.register_buffer('kernel', kernel)
-
- self.pad = pad
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, pad=self.pad)
-
- return out
-
-
-class EqualConv2d(nn.Module):
- def __init__(
- self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True
- ):
- super().__init__()
-
- self.weight = nn.Parameter(
- torch.randn(out_channel, in_channel, kernel_size, kernel_size)
- )
- self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2)
-
- self.stride = stride
- self.padding = padding
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_channel))
-
- else:
- self.bias = None
-
- def forward(self, input):
- out = F.conv2d(
- input,
- self.weight * self.scale,
- bias=self.bias,
- stride=self.stride,
- padding=self.padding,
- )
-
- return out
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},'
- f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})'
- )
-
-
-class EqualLinear(nn.Module):
- def __init__(
- self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None
- ):
- super().__init__()
-
- self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul))
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init))
-
- else:
- self.bias = None
-
- self.activation = activation
-
- self.scale = (1 / math.sqrt(in_dim)) * lr_mul
- self.lr_mul = lr_mul
-
- def forward(self, input):
- if self.activation:
- out = F.linear(input, self.weight * self.scale)
- out = fused_leaky_relu(out, self.bias * self.lr_mul)
-
- else:
- out = F.linear(
- input, self.weight * self.scale, bias=self.bias * self.lr_mul
- )
-
- return out
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})'
- )
-
-
-class ScaledLeakyReLU(nn.Module):
- def __init__(self, negative_slope=0.2):
- super().__init__()
-
- self.negative_slope = negative_slope
-
- def forward(self, input):
- out = F.leaky_relu(input, negative_slope=self.negative_slope)
-
- return out * math.sqrt(2)
-
-
-class ModulatedConv2d(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- demodulate=True,
- upsample=False,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- ):
- super().__init__()
-
- self.eps = 1e-8
- self.kernel_size = kernel_size
- self.in_channel = in_channel
- self.out_channel = out_channel
- self.upsample = upsample
- self.downsample = downsample
-
- if upsample:
- factor = 2
- p = (len(blur_kernel) - factor) - (kernel_size - 1)
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2 + 1
-
- self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor)
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.blur = Blur(blur_kernel, pad=(pad0, pad1))
-
- fan_in = in_channel * kernel_size ** 2
- self.scale = 1 / math.sqrt(fan_in)
- self.padding = kernel_size // 2
-
- self.weight = nn.Parameter(
- torch.randn(1, out_channel, in_channel, kernel_size, kernel_size)
- )
-
- self.modulation = EqualLinear(style_dim, in_channel, bias_init=1)
-
- self.demodulate = demodulate
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, '
- f'upsample={self.upsample}, downsample={self.downsample})'
- )
-
- def forward(self, input, style):
- batch, in_channel, height, width = input.shape
-
- style = self.modulation(style).view(batch, 1, in_channel, 1, 1)
- weight = self.scale * self.weight * style
-
- if self.demodulate:
- demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8)
- weight = weight * demod.view(batch, self.out_channel, 1, 1, 1)
-
- weight = weight.view(
- batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
-
- if self.upsample:
- input = input.view(1, batch * in_channel, height, width)
- weight = weight.view(
- batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
- weight = weight.transpose(1, 2).reshape(
- batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size
- )
- out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
- out = self.blur(out)
-
- elif self.downsample:
- input = self.blur(input)
- _, _, height, width = input.shape
- input = input.view(1, batch * in_channel, height, width)
- out = F.conv2d(input, weight, padding=0, stride=2, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- else:
- input = input.view(1, batch * in_channel, height, width)
- out = F.conv2d(input, weight, padding=self.padding, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- return out
-
-
-class NoiseInjection(nn.Module):
- def __init__(self):
- super().__init__()
-
- self.weight = nn.Parameter(torch.zeros(1))
-
- def forward(self, image, noise=None):
- if noise is None:
- batch, _, height, width = image.shape
- noise = image.new_empty(batch, 1, height, width).normal_()
-
- return image + self.weight * noise
-
-
-class ConstantInput(nn.Module):
- def __init__(self, channel, size=4):
- super().__init__()
-
- self.input = nn.Parameter(torch.randn(1, channel, size, size))
-
- def forward(self, input):
- batch = input.shape[0]
- out = self.input.repeat(batch, 1, 1, 1)
-
- return out
-
-
-class StyledConv(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=False,
- blur_kernel=[1, 3, 3, 1],
- demodulate=True,
- ):
- super().__init__()
-
- self.conv = ModulatedConv2d(
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=upsample,
- blur_kernel=blur_kernel,
- demodulate=demodulate,
- )
-
- self.noise = NoiseInjection()
- # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1))
- # self.activate = ScaledLeakyReLU(0.2)
- self.activate = FusedLeakyReLU(out_channel)
-
- def forward(self, input, style, noise=None):
- out = self.conv(input, style)
- out = self.noise(out, noise=noise)
- # out = out + self.bias
- out = self.activate(out)
-
- return out
-
-
-class ToRGB(nn.Module):
- def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- if upsample:
- self.upsample = Upsample(blur_kernel)
-
- self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False)
- self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1))
-
- def forward(self, input, style, skip=None):
- out = self.conv(input, style)
- out = out + self.bias
-
- if skip is not None:
- skip = self.upsample(skip)
-
- out = out + skip
-
- return out
-
-
-class Generator(nn.Module):
- def __init__(
- self,
- size,
- style_dim,
- n_mlp,
- channel_multiplier=2,
- blur_kernel=[1, 3, 3, 1],
- lr_mlp=0.01,
- ):
- super().__init__()
-
- self.size = size
-
- self.style_dim = style_dim
-
- layers = [PixelNorm()]
-
- for i in range(n_mlp):
- layers.append(
- EqualLinear(
- style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu'
- )
- )
-
- self.style = nn.Sequential(*layers)
-
- self.channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256 * channel_multiplier,
- 128: 128 * channel_multiplier,
- 256: 64 * channel_multiplier,
- 512: 32 * channel_multiplier,
- 1024: 16 * channel_multiplier,
- }
-
- self.input = ConstantInput(self.channels[4])
- self.conv1 = StyledConv(
- self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel
- )
- self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False)
-
- self.log_size = int(math.log(size, 2))
- self.num_layers = (self.log_size - 2) * 2 + 1
-
- self.convs = nn.ModuleList()
- self.upsamples = nn.ModuleList()
- self.to_rgbs = nn.ModuleList()
- self.noises = nn.Module()
-
- in_channel = self.channels[4]
-
- for layer_idx in range(self.num_layers):
- res = (layer_idx + 5) // 2
- shape = [1, 1, 2 ** res, 2 ** res]
- self.noises.register_buffer(f'noise_{layer_idx}', torch.randn(*shape))
-
- for i in range(3, self.log_size + 1):
- out_channel = self.channels[2 ** i]
-
- self.convs.append(
- StyledConv(
- in_channel,
- out_channel,
- 3,
- style_dim,
- upsample=True,
- blur_kernel=blur_kernel,
- )
- )
-
- self.convs.append(
- StyledConv(
- out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel
- )
- )
-
- self.to_rgbs.append(ToRGB(out_channel, style_dim))
-
- in_channel = out_channel
-
- self.n_latent = self.log_size * 2 - 2
-
- def make_noise(self):
- device = self.input.input.device
-
- noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)]
-
- for i in range(3, self.log_size + 1):
- for _ in range(2):
- noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device))
-
- return noises
-
- def mean_latent(self, n_latent):
- latent_in = torch.randn(
- n_latent, self.style_dim, device=self.input.input.device
- )
- latent = self.style(latent_in).mean(0, keepdim=True)
-
- return latent
-
- def get_latent(self, input):
- return self.style(input)
-
- def forward(
- self,
- styles,
- return_latents=False,
- return_features=False,
- inject_index=None,
- truncation=1,
- truncation_latent=None,
- input_is_latent=False,
- noise=None,
- randomize_noise=True,
- ):
- if not input_is_latent:
- styles = [self.style(s) for s in styles]
-
- if noise is None:
- if randomize_noise:
- noise = [None] * self.num_layers
- else:
- noise = [
- getattr(self.noises, f'noise_{i}') for i in range(self.num_layers)
- ]
-
- if truncation < 1:
- style_t = []
-
- for style in styles:
- style_t.append(
- truncation_latent + truncation * (style - truncation_latent)
- )
-
- styles = style_t
-
- if len(styles) < 2:
- inject_index = self.n_latent
-
- if styles[0].ndim < 3:
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
- else:
- latent = styles[0]
-
- else:
- if inject_index is None:
- inject_index = random.randint(1, self.n_latent - 1)
-
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
- latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1)
-
- latent = torch.cat([latent, latent2], 1)
-
- out = self.input(latent)
- out = self.conv1(out, latent[:, 0], noise=noise[0])
-
- skip = self.to_rgb1(out, latent[:, 1])
-
- i = 1
- for conv1, conv2, noise1, noise2, to_rgb in zip(
- self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs
- ):
- out = conv1(out, latent[:, i], noise=noise1)
- out = conv2(out, latent[:, i + 1], noise=noise2)
- skip = to_rgb(out, latent[:, i + 2], skip)
-
- i += 2
-
- image = skip
-
- if return_latents:
- return image, latent
- elif return_features:
- return image, out
- else:
- return image, None
-
-
-class ConvLayer(nn.Sequential):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- bias=True,
- activate=True,
- ):
- layers = []
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- layers.append(Blur(blur_kernel, pad=(pad0, pad1)))
-
- stride = 2
- self.padding = 0
-
- else:
- stride = 1
- self.padding = kernel_size // 2
-
- layers.append(
- EqualConv2d(
- in_channel,
- out_channel,
- kernel_size,
- padding=self.padding,
- stride=stride,
- bias=bias and not activate,
- )
- )
-
- if activate:
- if bias:
- layers.append(FusedLeakyReLU(out_channel))
-
- else:
- layers.append(ScaledLeakyReLU(0.2))
-
- super().__init__(*layers)
-
-
-class ResBlock(nn.Module):
- def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- self.conv1 = ConvLayer(in_channel, in_channel, 3)
- self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True)
-
- self.skip = ConvLayer(
- in_channel, out_channel, 1, downsample=True, activate=False, bias=False
- )
-
- def forward(self, input):
- out = self.conv1(input)
- out = self.conv2(out)
-
- skip = self.skip(input)
- out = (out + skip) / math.sqrt(2)
-
- return out
-
-
-class Discriminator(nn.Module):
- def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256 * channel_multiplier,
- 128: 128 * channel_multiplier,
- 256: 64 * channel_multiplier,
- 512: 32 * channel_multiplier,
- 1024: 16 * channel_multiplier,
- }
-
- convs = [ConvLayer(3, channels[size], 1)]
-
- log_size = int(math.log(size, 2))
-
- in_channel = channels[size]
-
- for i in range(log_size, 2, -1):
- out_channel = channels[2 ** (i - 1)]
-
- convs.append(ResBlock(in_channel, out_channel, blur_kernel))
-
- in_channel = out_channel
-
- self.convs = nn.Sequential(*convs)
-
- self.stddev_group = 4
- self.stddev_feat = 1
-
- self.final_conv = ConvLayer(in_channel + 1, channels[4], 3)
- self.final_linear = nn.Sequential(
- EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'),
- EqualLinear(channels[4], 1),
- )
-
- def forward(self, input):
- out = self.convs(input)
-
- batch, channel, height, width = out.shape
- group = min(batch, self.stddev_group)
- stddev = out.view(
- group, -1, self.stddev_feat, channel // self.stddev_feat, height, width
- )
- stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8)
- stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2)
- stddev = stddev.repeat(group, 1, height, width)
- out = torch.cat([out, stddev], 1)
-
- out = self.final_conv(out)
-
- out = out.view(batch, -1)
- out = self.final_linear(out)
-
- return out
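The core of `ModulatedConv2d` above is the per-sample modulation and demodulation of a shared convolution weight; the grouped-convolution reshuffling around it only exists to apply a different weight to each batch element efficiently. Isolated as a sketch (the `modulate_demodulate` helper is illustrative, not the repo's API):

```python
import torch

def modulate_demodulate(weight, style, eps=1e-8):
    """Per-sample style modulation + demodulation, as in ModulatedConv2d above.

    weight: (1, out_c, in_c, k, k) shared convolution weight
    style:  (batch, in_c) per-sample scales produced by the style MLP
    Returns a (batch, out_c, in_c, k, k) weight whose per-output-channel norm is ~1.
    """
    w = weight * style[:, None, :, None, None]          # scale the input channels
    demod = torch.rsqrt(w.pow(2).sum([2, 3, 4]) + eps)   # one factor per (sample, out channel)
    return w * demod[:, :, None, None, None]             # renormalise

if __name__ == "__main__":
    weight = torch.randn(1, 6, 4, 3, 3)
    style = torch.rand(2, 4) + 0.5
    w = modulate_demodulate(weight, style)
    print(w.shape, w.pow(2).sum([2, 3, 4])[0, 0].item())  # torch.Size([2, 6, 4, 3, 3]) ~1.0
```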
diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/edit/edit_config.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/edit/edit_config.py
deleted file mode 100644
index 25fb4e500f5ce6ec6ec07631899b851492b08bb9..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/stylegan_human/edit/edit_config.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-attr_dict = dict(
- interface_gan={ # strength
- # strength: negative for shorter, positive for longer
- 'upper_length': [-1],
- 'bottom_length': [1]
- },
- stylespace={ # layer, strength, threshold
- # strength: negative for shorter, positive for longer
- 'upper_length': [5, -5, 0.0028],
- 'bottom_length': [3, 5, 0.003]
- },
- sefa={ # layer, strength
- # -5 # strength: negative for longer, positive for shorter
- 'upper_length': [[4, 5, 6, 7], 5],
- 'bottom_length': [[4, 5, 6, 7], 5]
- }
-)
diff --git a/spaces/ECCV2022/bytetrack/tutorials/motr/motr_det.py b/spaces/ECCV2022/bytetrack/tutorials/motr/motr_det.py
deleted file mode 100644
index b9f74fdf8520385a79653a557631fa4a9ac1b9fc..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/tutorials/motr/motr_det.py
+++ /dev/null
@@ -1,677 +0,0 @@
-# ------------------------------------------------------------------------
-# Copyright (c) 2021 megvii-model. All Rights Reserved.
-# ------------------------------------------------------------------------
-# Modified from Deformable DETR (https://github.com/fundamentalvision/Deformable-DETR)
-# Copyright (c) 2020 SenseTime. All Rights Reserved.
-# ------------------------------------------------------------------------
-# Modified from DETR (https://github.com/facebookresearch/detr)
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-# ------------------------------------------------------------------------
-
-"""
-DETR model and criterion classes.
-"""
-import copy
-import math
-import numpy as np
-import torch
-import torch.nn.functional as F
-from torch import nn, Tensor
-from typing import List
-
-from util import box_ops
-from util.misc import (NestedTensor, nested_tensor_from_tensor_list,
- accuracy, get_world_size, interpolate, get_rank,
- is_dist_avail_and_initialized, inverse_sigmoid)
-
-from models.structures import Instances, Boxes, pairwise_iou, matched_boxlist_iou
-
-from .backbone import build_backbone
-from .matcher import build_matcher
-from .deformable_transformer_plus import build_deforamble_transformer
-from .qim import build as build_query_interaction_layer
-from .memory_bank import build_memory_bank
-from .deformable_detr import SetCriterion, MLP
-from .segmentation import sigmoid_focal_loss
-
-
-class ClipMatcher(SetCriterion):
- def __init__(self, num_classes,
- matcher,
- weight_dict,
- losses):
- """ Create the criterion.
- Parameters:
- num_classes: number of object categories, omitting the special no-object category
- matcher: module able to compute a matching between targets and proposals
- weight_dict: dict containing as key the names of the losses and as values their relative weight.
- eos_coef: relative classification weight applied to the no-object category
- losses: list of all the losses to be applied. See get_loss for list of available losses.
- """
- super().__init__(num_classes, matcher, weight_dict, losses)
- self.num_classes = num_classes
- self.matcher = matcher
- self.weight_dict = weight_dict
- self.losses = losses
- self.focal_loss = True
- self.losses_dict = {}
- self._current_frame_idx = 0
-
- def initialize_for_single_clip(self, gt_instances: List[Instances]):
- self.gt_instances = gt_instances
- self.num_samples = 0
- self.sample_device = None
- self._current_frame_idx = 0
- self.losses_dict = {}
-
- def _step(self):
- self._current_frame_idx += 1
-
- def calc_loss_for_track_scores(self, track_instances: Instances):
- frame_id = self._current_frame_idx - 1
- gt_instances = self.gt_instances[frame_id]
- outputs = {
- 'pred_logits': track_instances.track_scores[None],
- }
- device = track_instances.track_scores.device
-
- num_tracks = len(track_instances)
- src_idx = torch.arange(num_tracks, dtype=torch.long, device=device)
- tgt_idx = track_instances.matched_gt_idxes # -1 for FP tracks and disappeared tracks
-
- track_losses = self.get_loss('labels',
- outputs=outputs,
- gt_instances=[gt_instances],
- indices=[(src_idx, tgt_idx)],
- num_boxes=1)
- self.losses_dict.update(
- {'frame_{}_track_{}'.format(frame_id, key): value for key, value in
- track_losses.items()})
-
- def get_num_boxes(self, num_samples):
- num_boxes = torch.as_tensor(num_samples, dtype=torch.float, device=self.sample_device)
- if is_dist_avail_and_initialized():
- torch.distributed.all_reduce(num_boxes)
- num_boxes = torch.clamp(num_boxes / get_world_size(), min=1).item()
- return num_boxes
-
- def get_loss(self, loss, outputs, gt_instances, indices, num_boxes, **kwargs):
- loss_map = {
- 'labels': self.loss_labels,
- 'cardinality': self.loss_cardinality,
- 'boxes': self.loss_boxes,
- }
- assert loss in loss_map, f'do you really want to compute {loss} loss?'
- return loss_map[loss](outputs, gt_instances, indices, num_boxes, **kwargs)
-
- def loss_boxes(self, outputs, gt_instances: List[Instances], indices: List[tuple], num_boxes):
- """Compute the losses related to the bounding boxes, the L1 regression loss and the GIoU loss
- targets dicts must contain the key "boxes" containing a tensor of dim [nb_target_boxes, 4]
- The target boxes are expected in format (center_x, center_y, h, w), normalized by the image size.
- """
- # We ignore the regression loss of the track-disappear slots.
- #TODO: Make this filter process more elegant.
- filtered_idx = []
- for src_per_img, tgt_per_img in indices:
- keep = tgt_per_img != -1
- filtered_idx.append((src_per_img[keep], tgt_per_img[keep]))
- indices = filtered_idx
- idx = self._get_src_permutation_idx(indices)
- src_boxes = outputs['pred_boxes'][idx]
- target_boxes = torch.cat([gt_per_img.boxes[i] for gt_per_img, (_, i) in zip(gt_instances, indices)], dim=0)
-
- # for pad target, don't calculate regression loss, judged by whether obj_id=-1
- target_obj_ids = torch.cat([gt_per_img.obj_ids[i] for gt_per_img, (_, i) in zip(gt_instances, indices)], dim=0) # size(16)
- mask = (target_obj_ids != -1)
-
- loss_bbox = F.l1_loss(src_boxes[mask], target_boxes[mask], reduction='none')
- loss_giou = 1 - torch.diag(box_ops.generalized_box_iou(
- box_ops.box_cxcywh_to_xyxy(src_boxes[mask]),
- box_ops.box_cxcywh_to_xyxy(target_boxes[mask])))
-
- losses = {}
- losses['loss_bbox'] = loss_bbox.sum() / num_boxes
- losses['loss_giou'] = loss_giou.sum() / num_boxes
-
- return losses
-
- def loss_labels(self, outputs, gt_instances: List[Instances], indices, num_boxes, log=False):
- """Classification loss (NLL)
- targets dicts must contain the key "labels" containing a tensor of dim [nb_target_boxes]
- """
- src_logits = outputs['pred_logits']
- idx = self._get_src_permutation_idx(indices)
- target_classes = torch.full(src_logits.shape[:2], self.num_classes,
- dtype=torch.int64, device=src_logits.device)
- # The matched gt for disappear track query is set -1.
- labels = []
- for gt_per_img, (_, J) in zip(gt_instances, indices):
- labels_per_img = torch.ones_like(J)
- # set labels of track-appear slots to 0.
- if len(gt_per_img) > 0:
- labels_per_img[J != -1] = gt_per_img.labels[J[J != -1]]
- labels.append(labels_per_img)
- target_classes_o = torch.cat(labels)
- target_classes[idx] = target_classes_o
- if self.focal_loss:
- gt_labels_target = F.one_hot(target_classes, num_classes=self.num_classes + 1)[:, :, :-1] # no loss for the last (background) class
- gt_labels_target = gt_labels_target.to(src_logits)
- loss_ce = sigmoid_focal_loss(src_logits.flatten(1),
- gt_labels_target.flatten(1),
- alpha=0.25,
- gamma=2,
- num_boxes=num_boxes, mean_in_dim1=False)
- loss_ce = loss_ce.sum()
- else:
- loss_ce = F.cross_entropy(src_logits.transpose(1, 2), target_classes, self.empty_weight)
- losses = {'loss_ce': loss_ce}
-
- if log:
- # TODO this should probably be a separate loss, not hacked in this one here
- losses['class_error'] = 100 - accuracy(src_logits[idx], target_classes_o)[0]
-
- return losses
-
- def match_for_single_frame(self, outputs: dict):
- outputs_without_aux = {k: v for k, v in outputs.items() if k != 'aux_outputs'}
-
- gt_instances_i = self.gt_instances[self._current_frame_idx] # gt instances of i-th image.
- track_instances: Instances = outputs_without_aux['track_instances']
- pred_logits_i = track_instances.pred_logits # predicted logits of i-th image.
- pred_boxes_i = track_instances.pred_boxes # predicted boxes of i-th image.
-
- obj_idxes = gt_instances_i.obj_ids
- obj_idxes_list = obj_idxes.detach().cpu().numpy().tolist()
- obj_idx_to_gt_idx = {obj_idx: gt_idx for gt_idx, obj_idx in enumerate(obj_idxes_list)}
- outputs_i = {
- 'pred_logits': pred_logits_i.unsqueeze(0),
- 'pred_boxes': pred_boxes_i.unsqueeze(0),
- }
-
- # step1. inherit and update the previous tracks.
- num_disappear_track = 0
- for j in range(len(track_instances)):
- obj_id = track_instances.obj_idxes[j].item()
- # set new target idx.
- if obj_id >= 0:
- if obj_id in obj_idx_to_gt_idx:
- track_instances.matched_gt_idxes[j] = obj_idx_to_gt_idx[obj_id]
- else:
- num_disappear_track += 1
- track_instances.matched_gt_idxes[j] = -1 # track-disappear case.
- else:
- track_instances.matched_gt_idxes[j] = -1
-
- full_track_idxes = torch.arange(len(track_instances), dtype=torch.long).to(pred_logits_i.device)
-        matched_track_idxes = (track_instances.obj_idxes >= 0)  # occupied slots, i.e. queries already bound to an object id
- prev_matched_indices = torch.stack(
- [full_track_idxes[matched_track_idxes], track_instances.matched_gt_idxes[matched_track_idxes]], dim=1).to(
- pred_logits_i.device)
-
- # step2. select the unmatched slots.
- # note that the FP tracks whose obj_idxes are -2 will not be selected here.
- unmatched_track_idxes = full_track_idxes[track_instances.obj_idxes == -1]
-
- # step3. select the untracked gt instances (new tracks).
- tgt_indexes = track_instances.matched_gt_idxes
- tgt_indexes = tgt_indexes[tgt_indexes != -1]
-
- tgt_state = torch.zeros(len(gt_instances_i)).to(pred_logits_i.device)
- tgt_state[tgt_indexes] = 1
- untracked_tgt_indexes = torch.arange(len(gt_instances_i)).to(pred_logits_i.device)[tgt_state == 0]
- # untracked_tgt_indexes = select_unmatched_indexes(tgt_indexes, len(gt_instances_i))
- untracked_gt_instances = gt_instances_i[untracked_tgt_indexes]
-
- def match_for_single_decoder_layer(unmatched_outputs, matcher):
- new_track_indices = matcher(unmatched_outputs,
- [untracked_gt_instances]) # list[tuple(src_idx, tgt_idx)]
-
- src_idx = new_track_indices[0][0]
- tgt_idx = new_track_indices[0][1]
- # concat src and tgt.
- new_matched_indices = torch.stack([unmatched_track_idxes[src_idx], untracked_tgt_indexes[tgt_idx]],
- dim=1).to(pred_logits_i.device)
- return new_matched_indices
-
- # step4. do matching between the unmatched slots and GTs.
- unmatched_outputs = {
- 'pred_logits': track_instances.pred_logits[unmatched_track_idxes].unsqueeze(0),
- 'pred_boxes': track_instances.pred_boxes[unmatched_track_idxes].unsqueeze(0),
- }
- new_matched_indices = match_for_single_decoder_layer(unmatched_outputs, self.matcher)
-
- # step5. update obj_idxes according to the new matching result.
- track_instances.obj_idxes[new_matched_indices[:, 0]] = gt_instances_i.obj_ids[new_matched_indices[:, 1]].long()
- track_instances.matched_gt_idxes[new_matched_indices[:, 0]] = new_matched_indices[:, 1]
-
- # step6. calculate iou.
- active_idxes = (track_instances.obj_idxes >= 0) & (track_instances.matched_gt_idxes >= 0)
- active_track_boxes = track_instances.pred_boxes[active_idxes]
- if len(active_track_boxes) > 0:
- gt_boxes = gt_instances_i.boxes[track_instances.matched_gt_idxes[active_idxes]]
- active_track_boxes = box_ops.box_cxcywh_to_xyxy(active_track_boxes)
- gt_boxes = box_ops.box_cxcywh_to_xyxy(gt_boxes)
- track_instances.iou[active_idxes] = matched_boxlist_iou(Boxes(active_track_boxes), Boxes(gt_boxes))
-
- # step7. merge the unmatched pairs and the matched pairs.
- matched_indices = torch.cat([new_matched_indices, prev_matched_indices], dim=0)
-
- # step8. calculate losses.
- self.num_samples += len(gt_instances_i) + num_disappear_track
- self.sample_device = pred_logits_i.device
- for loss in self.losses:
- new_track_loss = self.get_loss(loss,
- outputs=outputs_i,
- gt_instances=[gt_instances_i],
- indices=[(matched_indices[:, 0], matched_indices[:, 1])],
- num_boxes=1)
- self.losses_dict.update(
- {'frame_{}_{}'.format(self._current_frame_idx, key): value for key, value in new_track_loss.items()})
-
- if 'aux_outputs' in outputs:
- for i, aux_outputs in enumerate(outputs['aux_outputs']):
- unmatched_outputs_layer = {
- 'pred_logits': aux_outputs['pred_logits'][0, unmatched_track_idxes].unsqueeze(0),
- 'pred_boxes': aux_outputs['pred_boxes'][0, unmatched_track_idxes].unsqueeze(0),
- }
- new_matched_indices_layer = match_for_single_decoder_layer(unmatched_outputs_layer, self.matcher)
- matched_indices_layer = torch.cat([new_matched_indices_layer, prev_matched_indices], dim=0)
- for loss in self.losses:
- if loss == 'masks':
- # Intermediate masks losses are too costly to compute, we ignore them.
- continue
- l_dict = self.get_loss(loss,
- aux_outputs,
- gt_instances=[gt_instances_i],
- indices=[(matched_indices_layer[:, 0], matched_indices_layer[:, 1])],
- num_boxes=1, )
- self.losses_dict.update(
- {'frame_{}_aux{}_{}'.format(self._current_frame_idx, i, key): value for key, value in
- l_dict.items()})
- self._step()
- return track_instances
-
- def forward(self, outputs, input_data: dict):
-        # losses of each frame are calculated during the model's forward pass and are output by the model as outputs['losses_dict'].
- losses = outputs.pop("losses_dict")
- num_samples = self.get_num_boxes(self.num_samples)
- for loss_name, loss in losses.items():
- losses[loss_name] /= num_samples
- return losses
-
-
-class RuntimeTrackerBase(object):
- def __init__(self, score_thresh=0.8, filter_score_thresh=0.6, miss_tolerance=5):
- self.score_thresh = score_thresh
- self.filter_score_thresh = filter_score_thresh
- self.miss_tolerance = miss_tolerance
- self.max_obj_id = 0
-
- def clear(self):
- self.max_obj_id = 0
-
- def update(self, track_instances: Instances):
- track_instances.disappear_time[track_instances.scores >= self.score_thresh] = 0
- for i in range(len(track_instances)):
- if track_instances.obj_idxes[i] == -1 and track_instances.scores[i] >= self.score_thresh:
- # print("track {} has score {}, assign obj_id {}".format(i, track_instances.scores[i], self.max_obj_id))
- track_instances.obj_idxes[i] = self.max_obj_id
- self.max_obj_id += 1
- elif track_instances.obj_idxes[i] >= 0 and track_instances.scores[i] < self.filter_score_thresh:
- track_instances.disappear_time[i] += 1
- if track_instances.disappear_time[i] >= self.miss_tolerance:
- # Set the obj_id to -1.
- # Then this track will be removed by TrackEmbeddingLayer.
- track_instances.obj_idxes[i] = -1
-
-
-class TrackerPostProcess(nn.Module):
- """ This module converts the model's output into the format expected by the coco api"""
- def __init__(self):
- super().__init__()
-
- @torch.no_grad()
- def forward(self, track_instances: Instances, target_size) -> Instances:
- """ Perform the computation
- Parameters:
-            track_instances: the model's raw predictions for the current frame (pred_logits / pred_boxes)
-            target_size: (height, width) of the image that the boxes should be rescaled to
- For evaluation, this must be the original image size (before any data augmentation)
- For visualization, this should be the image size after data augment, but before padding
- """
- out_logits = track_instances.pred_logits
- out_bbox = track_instances.pred_boxes
-
- prob = out_logits.sigmoid()
- # prob = out_logits[...,:1].sigmoid()
- scores, labels = prob.max(-1)
-
- # convert to [x0, y0, x1, y1] format
- boxes = box_ops.box_cxcywh_to_xyxy(out_bbox)
- # and from relative [0, 1] to absolute [0, height] coordinates
- img_h, img_w = target_size
- scale_fct = torch.Tensor([img_w, img_h, img_w, img_h]).to(boxes)
- boxes = boxes * scale_fct[None, :]
-
- track_instances.boxes = boxes
- track_instances.scores = scores
- track_instances.labels = labels
-# track_instances.remove('pred_logits')
-# track_instances.remove('pred_boxes')
- return track_instances
-
-
-def _get_clones(module, N):
- return nn.ModuleList([copy.deepcopy(module) for i in range(N)])
-
-
-class MOTR(nn.Module):
- def __init__(self, backbone, transformer, num_classes, num_queries, num_feature_levels, criterion, track_embed,
- aux_loss=True, with_box_refine=False, two_stage=False, memory_bank=None):
- """ Initializes the model.
- Parameters:
- backbone: torch module of the backbone to be used. See backbone.py
- transformer: torch module of the transformer architecture. See transformer.py
- num_classes: number of object classes
-            num_queries: number of object queries, i.e. detection slots. This is the maximal number of objects
- DETR can detect in a single image. For COCO, we recommend 100 queries.
- aux_loss: True if auxiliary decoding losses (loss at each decoder layer) are to be used.
- with_box_refine: iterative bounding box refinement
- two_stage: two-stage Deformable DETR
- """
- super().__init__()
- self.num_queries = num_queries
- self.track_embed = track_embed
- self.transformer = transformer
- hidden_dim = transformer.d_model
- self.num_classes = num_classes
- self.class_embed = nn.Linear(hidden_dim, num_classes)
- self.bbox_embed = MLP(hidden_dim, hidden_dim, 4, 3)
- self.num_feature_levels = num_feature_levels
- if not two_stage:
- self.query_embed = nn.Embedding(num_queries, hidden_dim * 2)
- if num_feature_levels > 1:
- num_backbone_outs = len(backbone.strides)
- input_proj_list = []
- for _ in range(num_backbone_outs):
- in_channels = backbone.num_channels[_]
- input_proj_list.append(nn.Sequential(
- nn.Conv2d(in_channels, hidden_dim, kernel_size=1),
- nn.GroupNorm(32, hidden_dim),
- ))
- for _ in range(num_feature_levels - num_backbone_outs):
- input_proj_list.append(nn.Sequential(
- nn.Conv2d(in_channels, hidden_dim, kernel_size=3, stride=2, padding=1),
- nn.GroupNorm(32, hidden_dim),
- ))
- in_channels = hidden_dim
- self.input_proj = nn.ModuleList(input_proj_list)
- else:
- self.input_proj = nn.ModuleList([
- nn.Sequential(
- nn.Conv2d(backbone.num_channels[0], hidden_dim, kernel_size=1),
- nn.GroupNorm(32, hidden_dim),
- )])
- self.backbone = backbone
- self.aux_loss = aux_loss
- self.with_box_refine = with_box_refine
- self.two_stage = two_stage
-
- prior_prob = 0.01
- bias_value = -math.log((1 - prior_prob) / prior_prob)
- self.class_embed.bias.data = torch.ones(num_classes) * bias_value
- nn.init.constant_(self.bbox_embed.layers[-1].weight.data, 0)
- nn.init.constant_(self.bbox_embed.layers[-1].bias.data, 0)
- for proj in self.input_proj:
- nn.init.xavier_uniform_(proj[0].weight, gain=1)
- nn.init.constant_(proj[0].bias, 0)
-
-        # if two-stage, the last class_embed and bbox_embed are for region proposal generation
- num_pred = (transformer.decoder.num_layers + 1) if two_stage else transformer.decoder.num_layers
- if with_box_refine:
- self.class_embed = _get_clones(self.class_embed, num_pred)
- self.bbox_embed = _get_clones(self.bbox_embed, num_pred)
- nn.init.constant_(self.bbox_embed[0].layers[-1].bias.data[2:], -2.0)
- # hack implementation for iterative bounding box refinement
- self.transformer.decoder.bbox_embed = self.bbox_embed
- else:
- nn.init.constant_(self.bbox_embed.layers[-1].bias.data[2:], -2.0)
- self.class_embed = nn.ModuleList([self.class_embed for _ in range(num_pred)])
- self.bbox_embed = nn.ModuleList([self.bbox_embed for _ in range(num_pred)])
- self.transformer.decoder.bbox_embed = None
- if two_stage:
- # hack implementation for two-stage
- self.transformer.decoder.class_embed = self.class_embed
- for box_embed in self.bbox_embed:
- nn.init.constant_(box_embed.layers[-1].bias.data[2:], 0.0)
- self.post_process = TrackerPostProcess()
- self.track_base = RuntimeTrackerBase()
- self.criterion = criterion
- self.memory_bank = memory_bank
- self.mem_bank_len = 0 if memory_bank is None else memory_bank.max_his_length
-
- def _generate_empty_tracks(self):
- track_instances = Instances((1, 1))
- num_queries, dim = self.query_embed.weight.shape # (300, 512)
- device = self.query_embed.weight.device
- track_instances.ref_pts = self.transformer.reference_points(self.query_embed.weight[:, :dim // 2])
- track_instances.query_pos = self.query_embed.weight
- track_instances.output_embedding = torch.zeros((num_queries, dim >> 1), device=device)
- track_instances.obj_idxes = torch.full((len(track_instances),), -1, dtype=torch.long, device=device)
- track_instances.matched_gt_idxes = torch.full((len(track_instances),), -1, dtype=torch.long, device=device)
- track_instances.disappear_time = torch.zeros((len(track_instances), ), dtype=torch.long, device=device)
- track_instances.iou = torch.zeros((len(track_instances),), dtype=torch.float, device=device)
- track_instances.scores = torch.zeros((len(track_instances),), dtype=torch.float, device=device)
- track_instances.track_scores = torch.zeros((len(track_instances),), dtype=torch.float, device=device)
- track_instances.pred_boxes = torch.zeros((len(track_instances), 4), dtype=torch.float, device=device)
- track_instances.pred_logits = torch.zeros((len(track_instances), self.num_classes), dtype=torch.float, device=device)
-
- mem_bank_len = self.mem_bank_len
- track_instances.mem_bank = torch.zeros((len(track_instances), mem_bank_len, dim // 2), dtype=torch.float32, device=device)
- track_instances.mem_padding_mask = torch.ones((len(track_instances), mem_bank_len), dtype=torch.bool, device=device)
- track_instances.save_period = torch.zeros((len(track_instances), ), dtype=torch.float32, device=device)
-
- return track_instances.to(self.query_embed.weight.device)
-
- def clear(self):
- self.track_base.clear()
-
- @torch.jit.unused
- def _set_aux_loss(self, outputs_class, outputs_coord):
- # this is a workaround to make torchscript happy, as torchscript
-        # doesn't support dictionaries with non-homogeneous values, such
- # as a dict having both a Tensor and a list.
- return [{'pred_logits': a, 'pred_boxes': b, }
- for a, b in zip(outputs_class[:-1], outputs_coord[:-1])]
-
- def _forward_single_image(self, samples, track_instances: Instances):
- features, pos = self.backbone(samples)
- src, mask = features[-1].decompose()
- assert mask is not None
-
- srcs = []
- masks = []
- for l, feat in enumerate(features):
- src, mask = feat.decompose()
- srcs.append(self.input_proj[l](src))
- masks.append(mask)
- assert mask is not None
-
- if self.num_feature_levels > len(srcs):
- _len_srcs = len(srcs)
- for l in range(_len_srcs, self.num_feature_levels):
- if l == _len_srcs:
- src = self.input_proj[l](features[-1].tensors)
- else:
- src = self.input_proj[l](srcs[-1])
- m = samples.mask
- mask = F.interpolate(m[None].float(), size=src.shape[-2:]).to(torch.bool)[0]
- pos_l = self.backbone[1](NestedTensor(src, mask)).to(src.dtype)
- srcs.append(src)
- masks.append(mask)
- pos.append(pos_l)
-
- hs, init_reference, inter_references, enc_outputs_class, enc_outputs_coord_unact = self.transformer(srcs, masks, pos, track_instances.query_pos, ref_pts=track_instances.ref_pts)
-
- outputs_classes = []
- outputs_coords = []
- for lvl in range(hs.shape[0]):
- if lvl == 0:
- reference = init_reference
- else:
- reference = inter_references[lvl - 1]
- reference = inverse_sigmoid(reference)
- outputs_class = self.class_embed[lvl](hs[lvl])
- tmp = self.bbox_embed[lvl](hs[lvl])
- if reference.shape[-1] == 4:
- tmp += reference
- else:
- assert reference.shape[-1] == 2
- tmp[..., :2] += reference
- outputs_coord = tmp.sigmoid()
- outputs_classes.append(outputs_class)
- outputs_coords.append(outputs_coord)
- outputs_class = torch.stack(outputs_classes)
- outputs_coord = torch.stack(outputs_coords)
-
- ref_pts_all = torch.cat([init_reference[None], inter_references[:, :, :, :2]], dim=0)
- out = {'pred_logits': outputs_class[-1], 'pred_boxes': outputs_coord[-1], 'ref_pts': ref_pts_all[5]}
- if self.aux_loss:
- out['aux_outputs'] = self._set_aux_loss(outputs_class, outputs_coord)
-
- with torch.no_grad():
- if self.training:
- track_scores = outputs_class[-1, 0, :].sigmoid().max(dim=-1).values
- else:
- track_scores = outputs_class[-1, 0, :, 0].sigmoid()
-
- track_instances.scores = track_scores
- track_instances.pred_logits = outputs_class[-1, 0]
- track_instances.pred_boxes = outputs_coord[-1, 0]
- track_instances.output_embedding = hs[-1, 0]
- if self.training:
-            # the track id will be assigned by the matcher.
- out['track_instances'] = track_instances
- track_instances = self.criterion.match_for_single_frame(out)
- else:
-            # each track will be assigned a unique global id by the track base.
- self.track_base.update(track_instances)
- if self.memory_bank is not None:
- track_instances = self.memory_bank(track_instances)
- # track_instances.track_scores = track_instances.track_scores[..., 0]
- # track_instances.scores = track_instances.track_scores.sigmoid()
- if self.training:
- self.criterion.calc_loss_for_track_scores(track_instances)
- tmp = {}
- tmp['init_track_instances'] = self._generate_empty_tracks()
- tmp['track_instances'] = track_instances
- out_track_instances = self.track_embed(tmp)
- out['track_instances'] = out_track_instances
- return out
-
- @torch.no_grad()
- def inference_single_image(self, img, ori_img_size, track_instances=None):
- if not isinstance(img, NestedTensor):
- img = nested_tensor_from_tensor_list(img)
-# if track_instances is None:
-# track_instances = self._generate_empty_tracks()
- track_instances = self._generate_empty_tracks()
-
- res = self._forward_single_image(img, track_instances=track_instances)
-
- track_instances = res['track_instances']
- track_instances = self.post_process(track_instances, ori_img_size)
- ret = {'track_instances': track_instances}
- if 'ref_pts' in res:
- ref_pts = res['ref_pts']
- img_h, img_w = ori_img_size
- scale_fct = torch.Tensor([img_w, img_h]).to(ref_pts)
- ref_pts = ref_pts * scale_fct[None]
- ret['ref_pts'] = ref_pts
- return ret
-
- def forward(self, data: dict):
- if self.training:
- self.criterion.initialize_for_single_clip(data['gt_instances'])
- frames = data['imgs'] # list of Tensor.
- outputs = {
- 'pred_logits': [],
- 'pred_boxes': [],
- }
-
- track_instances = self._generate_empty_tracks()
- for frame in frames:
- if not isinstance(frame, NestedTensor):
- frame = nested_tensor_from_tensor_list([frame])
- frame_res = self._forward_single_image(frame, track_instances)
- track_instances = frame_res['track_instances']
- outputs['pred_logits'].append(frame_res['pred_logits'])
- outputs['pred_boxes'].append(frame_res['pred_boxes'])
-
- if not self.training:
- outputs['track_instances'] = track_instances
- else:
- outputs['losses_dict'] = self.criterion.losses_dict
- return outputs
-
-
-def build(args):
- dataset_to_num_classes = {
- 'coco': 91,
- 'coco_panoptic': 250,
- 'e2e_mot': 1,
- 'e2e_joint': 1,
- 'e2e_static_mot': 1
- }
- assert args.dataset_file in dataset_to_num_classes
- num_classes = dataset_to_num_classes[args.dataset_file]
- device = torch.device(args.device)
-
- backbone = build_backbone(args)
-
- transformer = build_deforamble_transformer(args)
- d_model = transformer.d_model
- hidden_dim = args.dim_feedforward
- query_interaction_layer = build_query_interaction_layer(args, args.query_interaction_layer, d_model, hidden_dim, d_model*2)
-
- img_matcher = build_matcher(args)
- num_frames_per_batch = max(args.sampler_lengths)
- weight_dict = {}
- for i in range(num_frames_per_batch):
- weight_dict.update({"frame_{}_loss_ce".format(i): args.cls_loss_coef,
- 'frame_{}_loss_bbox'.format(i): args.bbox_loss_coef,
- 'frame_{}_loss_giou'.format(i): args.giou_loss_coef,
- })
-
- # TODO this is a hack
- if args.aux_loss:
- for i in range(num_frames_per_batch):
- for j in range(args.dec_layers - 1):
- weight_dict.update({"frame_{}_aux{}_loss_ce".format(i, j): args.cls_loss_coef,
- 'frame_{}_aux{}_loss_bbox'.format(i, j): args.bbox_loss_coef,
- 'frame_{}_aux{}_loss_giou'.format(i, j): args.giou_loss_coef,
- })
- if args.memory_bank_type is not None and len(args.memory_bank_type) > 0:
- memory_bank = build_memory_bank(args, d_model, hidden_dim, d_model * 2)
- for i in range(num_frames_per_batch):
- weight_dict.update({"frame_{}_track_loss_ce".format(i): args.cls_loss_coef})
- else:
- memory_bank = None
- losses = ['labels', 'boxes']
- criterion = ClipMatcher(num_classes, matcher=img_matcher, weight_dict=weight_dict, losses=losses)
- criterion.to(device)
- postprocessors = {}
- model = MOTR(
- backbone,
- transformer,
- track_embed=query_interaction_layer,
- num_feature_levels=args.num_feature_levels,
- num_classes=num_classes,
- num_queries=args.num_queries,
- aux_loss=args.aux_loss,
- criterion=criterion,
- with_box_refine=args.with_box_refine,
- two_stage=args.two_stage,
- memory_bank=memory_bank,
- )
- return model, criterion, postprocessors
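# Editor's note: illustration only, not part of the original MOTR repository. The
# RuntimeTrackerBase above assigns a fresh obj_id to any unassigned query whose score
# clears score_thresh and retires a track once its score stays below
# filter_score_thresh for miss_tolerance consecutive frames. The standalone toy
# below replays that birth/death logic for a single query slot with made-up scores.

def toy_track_lifecycle(scores_per_frame, score_thresh=0.8, filter_score_thresh=0.6, miss_tolerance=5):
    """Return the obj_id held by one query slot at every frame (-1 means 'no track')."""
    obj_id, next_id, disappear, history = -1, 0, 0, []
    for score in scores_per_frame:
        if score >= score_thresh:
            disappear = 0                              # confident again: reset the miss counter
        if obj_id == -1 and score >= score_thresh:
            obj_id, next_id = next_id, next_id + 1     # a new track is born
        elif obj_id >= 0 and score < filter_score_thresh:
            disappear += 1                             # the track keeps missing
            if disappear >= miss_tolerance:
                obj_id = -1                            # drop the track, as RuntimeTrackerBase.update does
        history.append(obj_id)
    return history

if __name__ == "__main__":
    # id 0 is born, is dropped after five weak frames, then id 1 is born.
    print(toy_track_lifecycle([0.9, 0.85, 0.3, 0.2, 0.2, 0.1, 0.1, 0.9]))  # [0, 0, 0, 0, 0, 0, -1, 1]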
diff --git a/spaces/EleutherAI/VQGAN_CLIP/CLIP/clip/model.py b/spaces/EleutherAI/VQGAN_CLIP/CLIP/clip/model.py
deleted file mode 100644
index f2c95c481724270116998b90de64cee8ef58c94e..0000000000000000000000000000000000000000
--- a/spaces/EleutherAI/VQGAN_CLIP/CLIP/clip/model.py
+++ /dev/null
@@ -1,432 +0,0 @@
-from collections import OrderedDict
-from typing import Tuple, Union
-
-import numpy as np
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-
-class Bottleneck(nn.Module):
- expansion = 4
-
- def __init__(self, inplanes, planes, stride=1):
- super().__init__()
-
- # all conv layers have stride 1. an avgpool is performed after the second convolution when stride > 1
- self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False)
- self.bn1 = nn.BatchNorm2d(planes)
-
- self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False)
- self.bn2 = nn.BatchNorm2d(planes)
-
- self.avgpool = nn.AvgPool2d(stride) if stride > 1 else nn.Identity()
-
- self.conv3 = nn.Conv2d(planes, planes * self.expansion, 1, bias=False)
- self.bn3 = nn.BatchNorm2d(planes * self.expansion)
-
- self.relu = nn.ReLU(inplace=True)
- self.downsample = None
- self.stride = stride
-
- if stride > 1 or inplanes != planes * Bottleneck.expansion:
- # downsampling layer is prepended with an avgpool, and the subsequent convolution has stride 1
- self.downsample = nn.Sequential(OrderedDict([
- ("-1", nn.AvgPool2d(stride)),
- ("0", nn.Conv2d(inplanes, planes * self.expansion, 1, stride=1, bias=False)),
- ("1", nn.BatchNorm2d(planes * self.expansion))
- ]))
-
- def forward(self, x: torch.Tensor):
- identity = x
-
- out = self.relu(self.bn1(self.conv1(x)))
- out = self.relu(self.bn2(self.conv2(out)))
- out = self.avgpool(out)
- out = self.bn3(self.conv3(out))
-
- if self.downsample is not None:
- identity = self.downsample(x)
-
- out += identity
- out = self.relu(out)
- return out
-
-
-class AttentionPool2d(nn.Module):
- def __init__(self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None):
- super().__init__()
- self.positional_embedding = nn.Parameter(torch.randn(spacial_dim ** 2 + 1, embed_dim) / embed_dim ** 0.5)
- self.k_proj = nn.Linear(embed_dim, embed_dim)
- self.q_proj = nn.Linear(embed_dim, embed_dim)
- self.v_proj = nn.Linear(embed_dim, embed_dim)
- self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim)
- self.num_heads = num_heads
-
- def forward(self, x):
- x = x.reshape(x.shape[0], x.shape[1], x.shape[2] * x.shape[3]).permute(2, 0, 1) # NCHW -> (HW)NC
- x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0) # (HW+1)NC
- x = x + self.positional_embedding[:, None, :].to(x.dtype) # (HW+1)NC
- x, _ = F.multi_head_attention_forward(
- query=x, key=x, value=x,
- embed_dim_to_check=x.shape[-1],
- num_heads=self.num_heads,
- q_proj_weight=self.q_proj.weight,
- k_proj_weight=self.k_proj.weight,
- v_proj_weight=self.v_proj.weight,
- in_proj_weight=None,
- in_proj_bias=torch.cat([self.q_proj.bias, self.k_proj.bias, self.v_proj.bias]),
- bias_k=None,
- bias_v=None,
- add_zero_attn=False,
- dropout_p=0,
- out_proj_weight=self.c_proj.weight,
- out_proj_bias=self.c_proj.bias,
- use_separate_proj_weight=True,
- training=self.training,
- need_weights=False
- )
-
- return x[0]
-
-
-class ModifiedResNet(nn.Module):
- """
- A ResNet class that is similar to torchvision's but contains the following changes:
- - There are now 3 "stem" convolutions as opposed to 1, with an average pool instead of a max pool.
- - Performs anti-aliasing strided convolutions, where an avgpool is prepended to convolutions with stride > 1
- - The final pooling layer is a QKV attention instead of an average pool
- """
-
- def __init__(self, layers, output_dim, heads, input_resolution=224, width=64):
- super().__init__()
- self.output_dim = output_dim
- self.input_resolution = input_resolution
-
- # the 3-layer stem
- self.conv1 = nn.Conv2d(3, width // 2, kernel_size=3, stride=2, padding=1, bias=False)
- self.bn1 = nn.BatchNorm2d(width // 2)
- self.conv2 = nn.Conv2d(width // 2, width // 2, kernel_size=3, padding=1, bias=False)
- self.bn2 = nn.BatchNorm2d(width // 2)
- self.conv3 = nn.Conv2d(width // 2, width, kernel_size=3, padding=1, bias=False)
- self.bn3 = nn.BatchNorm2d(width)
- self.avgpool = nn.AvgPool2d(2)
- self.relu = nn.ReLU(inplace=True)
-
- # residual layers
- self._inplanes = width # this is a *mutable* variable used during construction
- self.layer1 = self._make_layer(width, layers[0])
- self.layer2 = self._make_layer(width * 2, layers[1], stride=2)
- self.layer3 = self._make_layer(width * 4, layers[2], stride=2)
- self.layer4 = self._make_layer(width * 8, layers[3], stride=2)
-
- embed_dim = width * 32 # the ResNet feature dimension
- self.attnpool = AttentionPool2d(input_resolution // 32, embed_dim, heads, output_dim)
-
- def _make_layer(self, planes, blocks, stride=1):
- layers = [Bottleneck(self._inplanes, planes, stride)]
-
- self._inplanes = planes * Bottleneck.expansion
- for _ in range(1, blocks):
- layers.append(Bottleneck(self._inplanes, planes))
-
- return nn.Sequential(*layers)
-
- def forward(self, x):
- def stem(x):
- for conv, bn in [(self.conv1, self.bn1), (self.conv2, self.bn2), (self.conv3, self.bn3)]:
- x = self.relu(bn(conv(x)))
- x = self.avgpool(x)
- return x
-
- x = x.type(self.conv1.weight.dtype)
- x = stem(x)
- x = self.layer1(x)
- x = self.layer2(x)
- x = self.layer3(x)
- x = self.layer4(x)
- x = self.attnpool(x)
-
- return x
-
-
-class LayerNorm(nn.LayerNorm):
- """Subclass torch's LayerNorm to handle fp16."""
-
- def forward(self, x: torch.Tensor):
- orig_type = x.dtype
- ret = super().forward(x.type(torch.float32))
- return ret.type(orig_type)
-
-
-class QuickGELU(nn.Module):
- def forward(self, x: torch.Tensor):
- return x * torch.sigmoid(1.702 * x)
-
-
-class ResidualAttentionBlock(nn.Module):
- def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None):
- super().__init__()
-
- self.attn = nn.MultiheadAttention(d_model, n_head)
- self.ln_1 = LayerNorm(d_model)
- self.mlp = nn.Sequential(OrderedDict([
- ("c_fc", nn.Linear(d_model, d_model * 4)),
- ("gelu", QuickGELU()),
- ("c_proj", nn.Linear(d_model * 4, d_model))
- ]))
- self.ln_2 = LayerNorm(d_model)
- self.attn_mask = attn_mask
-
- def attention(self, x: torch.Tensor):
- self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None
- return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0]
-
- def forward(self, x: torch.Tensor):
- x = x + self.attention(self.ln_1(x))
- x = x + self.mlp(self.ln_2(x))
- return x
-
-
-class Transformer(nn.Module):
- def __init__(self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None):
- super().__init__()
- self.width = width
- self.layers = layers
- self.resblocks = nn.Sequential(*[ResidualAttentionBlock(width, heads, attn_mask) for _ in range(layers)])
-
- def forward(self, x: torch.Tensor):
- return self.resblocks(x)
-
-
-class VisionTransformer(nn.Module):
- def __init__(self, input_resolution: int, patch_size: int, width: int, layers: int, heads: int, output_dim: int):
- super().__init__()
- self.input_resolution = input_resolution
- self.output_dim = output_dim
- self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, bias=False)
-
- scale = width ** -0.5
- self.class_embedding = nn.Parameter(scale * torch.randn(width))
- self.positional_embedding = nn.Parameter(scale * torch.randn((input_resolution // patch_size) ** 2 + 1, width))
- self.ln_pre = LayerNorm(width)
-
- self.transformer = Transformer(width, layers, heads)
-
- self.ln_post = LayerNorm(width)
- self.proj = nn.Parameter(scale * torch.randn(width, output_dim))
-
- def forward(self, x: torch.Tensor):
- x = self.conv1(x) # shape = [*, width, grid, grid]
- x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2]
- x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width]
- x = torch.cat([self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), x], dim=1) # shape = [*, grid ** 2 + 1, width]
- x = x + self.positional_embedding.to(x.dtype)
- x = self.ln_pre(x)
-
- x = x.permute(1, 0, 2) # NLD -> LND
- x = self.transformer(x)
- x = x.permute(1, 0, 2) # LND -> NLD
-
- x = self.ln_post(x[:, 0, :])
-
- if self.proj is not None:
- x = x @ self.proj
-
- return x
-
-
-class CLIP(nn.Module):
- def __init__(self,
- embed_dim: int,
- # vision
- image_resolution: int,
- vision_layers: Union[Tuple[int, int, int, int], int],
- vision_width: int,
- vision_patch_size: int,
- # text
- context_length: int,
- vocab_size: int,
- transformer_width: int,
- transformer_heads: int,
- transformer_layers: int
- ):
- super().__init__()
-
- self.context_length = context_length
-
- if isinstance(vision_layers, (tuple, list)):
- vision_heads = vision_width * 32 // 64
- self.visual = ModifiedResNet(
- layers=vision_layers,
- output_dim=embed_dim,
- heads=vision_heads,
- input_resolution=image_resolution,
- width=vision_width
- )
- else:
- vision_heads = vision_width // 64
- self.visual = VisionTransformer(
- input_resolution=image_resolution,
- patch_size=vision_patch_size,
- width=vision_width,
- layers=vision_layers,
- heads=vision_heads,
- output_dim=embed_dim
- )
-
- self.transformer = Transformer(
- width=transformer_width,
- layers=transformer_layers,
- heads=transformer_heads,
- attn_mask=self.build_attention_mask()
- )
-
- self.vocab_size = vocab_size
- self.token_embedding = nn.Embedding(vocab_size, transformer_width)
- self.positional_embedding = nn.Parameter(torch.empty(self.context_length, transformer_width))
- self.ln_final = LayerNorm(transformer_width)
-
- self.text_projection = nn.Parameter(torch.empty(transformer_width, embed_dim))
- self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07))
-
- self.initialize_parameters()
-
- def initialize_parameters(self):
- nn.init.normal_(self.token_embedding.weight, std=0.02)
- nn.init.normal_(self.positional_embedding, std=0.01)
-
- if isinstance(self.visual, ModifiedResNet):
- if self.visual.attnpool is not None:
- std = self.visual.attnpool.c_proj.in_features ** -0.5
- nn.init.normal_(self.visual.attnpool.q_proj.weight, std=std)
- nn.init.normal_(self.visual.attnpool.k_proj.weight, std=std)
- nn.init.normal_(self.visual.attnpool.v_proj.weight, std=std)
- nn.init.normal_(self.visual.attnpool.c_proj.weight, std=std)
-
- for resnet_block in [self.visual.layer1, self.visual.layer2, self.visual.layer3, self.visual.layer4]:
- for name, param in resnet_block.named_parameters():
- if name.endswith("bn3.weight"):
- nn.init.zeros_(param)
-
- proj_std = (self.transformer.width ** -0.5) * ((2 * self.transformer.layers) ** -0.5)
- attn_std = self.transformer.width ** -0.5
- fc_std = (2 * self.transformer.width) ** -0.5
- for block in self.transformer.resblocks:
- nn.init.normal_(block.attn.in_proj_weight, std=attn_std)
- nn.init.normal_(block.attn.out_proj.weight, std=proj_std)
- nn.init.normal_(block.mlp.c_fc.weight, std=fc_std)
- nn.init.normal_(block.mlp.c_proj.weight, std=proj_std)
-
- if self.text_projection is not None:
- nn.init.normal_(self.text_projection, std=self.transformer.width ** -0.5)
-
- def build_attention_mask(self):
- # lazily create causal attention mask, with full attention between the vision tokens
- # pytorch uses additive attention mask; fill with -inf
- mask = torch.empty(self.context_length, self.context_length)
- mask.fill_(float("-inf"))
-        mask.triu_(1)  # zero out the diagonal and lower triangle, keeping -inf strictly above the diagonal
- return mask
-
- @property
- def dtype(self):
- return self.visual.conv1.weight.dtype
-
- def encode_image(self, image):
- return self.visual(image.type(self.dtype))
-
- def encode_text(self, text):
- x = self.token_embedding(text).type(self.dtype) # [batch_size, n_ctx, d_model]
-
- x = x + self.positional_embedding.type(self.dtype)
- x = x.permute(1, 0, 2) # NLD -> LND
- x = self.transformer(x)
- x = x.permute(1, 0, 2) # LND -> NLD
- x = self.ln_final(x).type(self.dtype)
-
- # x.shape = [batch_size, n_ctx, transformer.width]
- # take features from the eot embedding (eot_token is the highest number in each sequence)
- x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection
-
- return x
-
- def forward(self, image, text):
- image_features = self.encode_image(image)
- text_features = self.encode_text(text)
-
- # normalized features
- image_features = image_features / image_features.norm(dim=-1, keepdim=True)
- text_features = text_features / text_features.norm(dim=-1, keepdim=True)
-
- # cosine similarity as logits
- logit_scale = self.logit_scale.exp()
- logits_per_image = logit_scale * image_features @ text_features.t()
- logits_per_text = logit_scale * text_features @ image_features.t()
-
- # shape = [global_batch_size, global_batch_size]
- return logits_per_image, logits_per_text
-
-
-def convert_weights(model: nn.Module):
- """Convert applicable model parameters to fp16"""
-
- def _convert_weights_to_fp16(l):
- if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)):
- l.weight.data = l.weight.data.half()
- if l.bias is not None:
- l.bias.data = l.bias.data.half()
-
- if isinstance(l, nn.MultiheadAttention):
- for attr in [*[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], "in_proj_bias", "bias_k", "bias_v"]:
- tensor = getattr(l, attr)
- if tensor is not None:
- tensor.data = tensor.data.half()
-
- for name in ["text_projection", "proj"]:
- if hasattr(l, name):
- attr = getattr(l, name)
- if attr is not None:
- attr.data = attr.data.half()
-
- model.apply(_convert_weights_to_fp16)
-
-
-def build_model(state_dict: dict):
- vit = "visual.proj" in state_dict
-
- if vit:
- vision_width = state_dict["visual.conv1.weight"].shape[0]
- vision_layers = len([k for k in state_dict.keys() if k.startswith("visual.") and k.endswith(".attn.in_proj_weight")])
- vision_patch_size = state_dict["visual.conv1.weight"].shape[-1]
- grid_size = round((state_dict["visual.positional_embedding"].shape[0] - 1) ** 0.5)
- image_resolution = vision_patch_size * grid_size
- else:
- counts: list = [len(set(k.split(".")[2] for k in state_dict if k.startswith(f"visual.layer{b}"))) for b in [1, 2, 3, 4]]
- vision_layers = tuple(counts)
- vision_width = state_dict["visual.layer1.0.conv1.weight"].shape[0]
- output_width = round((state_dict["visual.attnpool.positional_embedding"].shape[0] - 1) ** 0.5)
- vision_patch_size = None
- assert output_width ** 2 + 1 == state_dict["visual.attnpool.positional_embedding"].shape[0]
- image_resolution = output_width * 32
-
- embed_dim = state_dict["text_projection"].shape[1]
- context_length = state_dict["positional_embedding"].shape[0]
- vocab_size = state_dict["token_embedding.weight"].shape[0]
- transformer_width = state_dict["ln_final.weight"].shape[0]
- transformer_heads = transformer_width // 64
- transformer_layers = len(set(k.split(".")[2] for k in state_dict if k.startswith(f"transformer.resblocks")))
-
- model = CLIP(
- embed_dim,
- image_resolution, vision_layers, vision_width, vision_patch_size,
- context_length, vocab_size, transformer_width, transformer_heads, transformer_layers
- )
-
- for key in ["input_resolution", "context_length", "vocab_size"]:
- if key in state_dict:
- del state_dict[key]
-
- convert_weights(model)
- model.load_state_dict(state_dict)
- return model.eval()
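# Editor's note: illustration only, not part of the CLIP source above. CLIP.forward()
# boils down to "L2-normalise both embeddings, then scale their cosine similarities by a
# learned temperature". The sketch below reproduces just that step on random placeholder
# features so the shapes are easy to follow; it does not load any real weights.
import torch

def clip_style_logits(image_features: torch.Tensor, text_features: torch.Tensor, logit_scale: float = 100.0):
    # normalise each row to unit length, as in CLIP.forward
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    # scaled cosine-similarity matrix; logits_per_text is simply its transpose
    logits_per_image = logit_scale * image_features @ text_features.t()
    return logits_per_image, logits_per_image.t()

if __name__ == "__main__":
    img = torch.randn(4, 512)   # 4 images, 512-d embeddings (placeholder values)
    txt = torch.randn(4, 512)   # 4 captions
    logits_i, logits_t = clip_style_logits(img, txt)
    print(logits_i.shape, logits_t.shape)  # torch.Size([4, 4]) torch.Size([4, 4])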
diff --git a/spaces/Emmawang/audio_summarizer/README.md b/spaces/Emmawang/audio_summarizer/README.md
deleted file mode 100644
index ee4c5041b1687984f17df318710daa9509007617..0000000000000000000000000000000000000000
--- a/spaces/Emmawang/audio_summarizer/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Audio Summarizer
-emoji: 📉
-colorFrom: blue
-colorTo: green
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/EronSamez/RVC_HFmeu/infer/lib/uvr5_pack/lib_v5/layers.py b/spaces/EronSamez/RVC_HFmeu/infer/lib/uvr5_pack/lib_v5/layers.py
deleted file mode 100644
index 4fc1b5cb85a3327f60cbb9f5deffbeeaaac516ad..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/infer/lib/uvr5_pack/lib_v5/layers.py
+++ /dev/null
@@ -1,118 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import spec_utils
-
-
-class Conv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(Conv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nout,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- bias=False,
- ),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class SeperableConv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(SeperableConv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nin,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- groups=nin,
- bias=False,
- ),
- nn.Conv2d(nin, nout, kernel_size=1, bias=False),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class Encoder(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
- super(Encoder, self).__init__()
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
-
- def __call__(self, x):
- skip = self.conv1(x)
- h = self.conv2(skip)
-
- return h, skip
-
-
-class Decoder(nn.Module):
- def __init__(
- self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
- ):
- super(Decoder, self).__init__()
- self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.dropout = nn.Dropout2d(0.1) if dropout else None
-
- def __call__(self, x, skip=None):
- x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
- if skip is not None:
- skip = spec_utils.crop_center(skip, x)
- x = torch.cat([x, skip], dim=1)
- h = self.conv(x)
-
- if self.dropout is not None:
- h = self.dropout(h)
-
- return h
-
-
-class ASPPModule(nn.Module):
- def __init__(self, nin, nout, dilations=(4, 8, 16), activ=nn.ReLU):
- super(ASPPModule, self).__init__()
- self.conv1 = nn.Sequential(
- nn.AdaptiveAvgPool2d((1, None)),
- Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
- )
- self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
- self.conv3 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
- )
- self.conv4 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
- )
- self.conv5 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.bottleneck = nn.Sequential(
- Conv2DBNActiv(nin * 5, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
- )
-
- def forward(self, x):
- _, _, h, w = x.size()
- feat1 = F.interpolate(
- self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
- )
- feat2 = self.conv2(x)
- feat3 = self.conv3(x)
- feat4 = self.conv4(x)
- feat5 = self.conv5(x)
- out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1)
- bottle = self.bottleneck(out)
- return bottle
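# Editor's note: illustration only, not part of the uvr5 package above. The Encoder /
# Decoder pair follows the usual U-Net pattern: the encoder returns (downsampled
# features, skip), and the decoder upsamples, concatenates the skip, then convolves.
# The toy below wires such a pair with plain torch.nn; channel sizes are arbitrary.
import torch
import torch.nn.functional as F
from torch import nn

class ToyEncoder(nn.Module):
    def __init__(self, cin, cout):
        super().__init__()
        self.conv1 = nn.Conv2d(cin, cout, 3, padding=1)
        self.conv2 = nn.Conv2d(cout, cout, 3, stride=2, padding=1)

    def forward(self, x):
        skip = F.relu(self.conv1(x))   # full-resolution features kept for the skip connection
        h = F.relu(self.conv2(skip))   # downsampled features passed deeper
        return h, skip

class ToyDecoder(nn.Module):
    def __init__(self, cin, cout):
        super().__init__()
        self.conv = nn.Conv2d(cin, cout, 3, padding=1)

    def forward(self, x, skip):
        x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
        x = torch.cat([x, skip], dim=1)   # channel-wise concat, as in Decoder above
        return F.relu(self.conv(x))

if __name__ == "__main__":
    enc, dec = ToyEncoder(1, 16), ToyDecoder(16 + 16, 16)
    h, skip = enc(torch.randn(1, 1, 64, 64))
    print(dec(h, skip).shape)  # torch.Size([1, 16, 64, 64])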
diff --git a/spaces/ExpertPrompters/AskIDF/__init__.py b/spaces/ExpertPrompters/AskIDF/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git "a/spaces/Fengbinbin/gpt-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" "b/spaces/Fengbinbin/gpt-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py"
deleted file mode 100644
index ffbb05599ef09c9de25334ebeca2eef8022b9aaf..0000000000000000000000000000000000000000
--- "a/spaces/Fengbinbin/gpt-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py"
+++ /dev/null
@@ -1,160 +0,0 @@
-from toolbox import update_ui
-from toolbox import CatchException, report_execption, write_results_to_file
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-
-fast_debug = False
-
-def readPdf(pdfPath):
- """
-    Read the PDF file and return its text content.
- """
- import pdfminer
- from pdfminer.pdfparser import PDFParser
- from pdfminer.pdfdocument import PDFDocument
- from pdfminer.pdfpage import PDFPage, PDFTextExtractionNotAllowed
- from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
- from pdfminer.pdfdevice import PDFDevice
- from pdfminer.layout import LAParams
- from pdfminer.converter import PDFPageAggregator
-
- fp = open(pdfPath, 'rb')
-
- # Create a PDF parser object associated with the file object
- parser = PDFParser(fp)
-
- # Create a PDF document object that stores the document structure.
- # Password for initialization as 2nd parameter
- document = PDFDocument(parser)
- # Check if the document allows text extraction. If not, abort.
- if not document.is_extractable:
- raise PDFTextExtractionNotAllowed
-
- # Create a PDF resource manager object that stores shared resources.
- rsrcmgr = PDFResourceManager()
-
- # Create a PDF device object.
- # device = PDFDevice(rsrcmgr)
-
- # BEGIN LAYOUT ANALYSIS.
- # Set parameters for analysis.
- laparams = LAParams(
- char_margin=10.0,
- line_margin=0.2,
- boxes_flow=0.2,
- all_texts=False,
- )
- # Create a PDF page aggregator object.
- device = PDFPageAggregator(rsrcmgr, laparams=laparams)
- # Create a PDF interpreter object.
- interpreter = PDFPageInterpreter(rsrcmgr, device)
-
- # loop over all pages in the document
- outTextList = []
- for page in PDFPage.create_pages(document):
- # read the page into a layout object
- interpreter.process_page(page)
- layout = device.get_result()
- for obj in layout._objs:
- if isinstance(obj, pdfminer.layout.LTTextBoxHorizontal):
- # print(obj.get_text())
- outTextList.append(obj.get_text())
-
- return outTextList
-
-
-def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
- import time, glob, os
- from bs4 import BeautifulSoup
- print('begin analysis on:', file_manifest)
- for index, fp in enumerate(file_manifest):
- if ".tex" in fp:
- with open(fp, 'r', encoding='utf-8', errors='replace') as f:
- file_content = f.read()
- if ".pdf" in fp.lower():
- file_content = readPdf(fp)
- file_content = BeautifulSoup(''.join(file_content), features="lxml").body.text.encode('gbk', 'ignore').decode('gbk')
-
- prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else ""
- i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```'
- i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}'
- chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
- if not fast_debug:
- msg = '正常'
- # ** gpt request **
- gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=i_say,
- inputs_show_user=i_say_show_user,
- llm_kwargs=llm_kwargs,
- chatbot=chatbot,
- history=[],
- sys_prompt="总结文章。"
- ) # 带超时倒计时
- chatbot[-1] = (i_say_show_user, gpt_say)
- history.append(i_say_show_user); history.append(gpt_say)
- yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
- if not fast_debug: time.sleep(2)
-
- all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)])
- i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。'
- chatbot.append((i_say, "[Local Message] waiting gpt response."))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
- if not fast_debug:
- msg = '正常'
- # ** gpt request **
- gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=i_say,
- inputs_show_user=i_say,
- llm_kwargs=llm_kwargs,
- chatbot=chatbot,
- history=history,
- sys_prompt="总结文章。"
- ) # 带超时倒计时
- chatbot[-1] = (i_say, gpt_say)
- history.append(i_say); history.append(gpt_say)
- yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
- res = write_results_to_file(history)
- chatbot.append(("完成了吗?", res))
- yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
-
-
-
-@CatchException
-def 批量总结PDF文档pdfminer(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
-    history = []    # clear the history to avoid overflowing the input
- import glob, os
-
-    # Basic information: plugin function and contributors
- chatbot.append([
- "函数插件功能?",
- "批量总结PDF文档,此版本使用pdfminer插件,带token约简功能。函数插件贡献者: Euclid-Jie。"])
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
-    # Try to import the dependencies; if any are missing, suggest how to install them
- try:
- import pdfminer, bs4
- except:
- report_execption(chatbot, history,
- a = f"解析项目: {txt}",
- b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pdfminer beautifulsoup4```。")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] # + \
- # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
- # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex或pdf文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- yield from 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
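# Editor's note: illustration only, not part of the plugin above. readPdf() drives
# pdfminer's low-level parser/aggregator API directly; pdfminer.six also ships a
# high-level helper that returns roughly the same text in a few lines. The file path
# below is a placeholder.
from pdfminer.high_level import extract_text

def read_pdf_simple(pdf_path: str) -> str:
    """Return the concatenated text of all pages via pdfminer.six's high-level API."""
    return extract_text(pdf_path)

if __name__ == "__main__":
    print(read_pdf_simple("example.pdf")[:500])  # placeholder path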
diff --git a/spaces/Fouzia/Harvard-USPTO_Patentability-Score/app.py b/spaces/Fouzia/Harvard-USPTO_Patentability-Score/app.py
deleted file mode 100644
index 43a83bf2086aaa3f3bf944d0967bfd7c65db446c..0000000000000000000000000000000000000000
--- a/spaces/Fouzia/Harvard-USPTO_Patentability-Score/app.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import streamlit as st
-from datasets import load_dataset
-from transformers import pipeline
-import pandas as pd
-import torch
-from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
-from datasets import load_dataset
-
-dataset_dict = load_dataset('HUPD/hupd',
- name='sample',
- data_files="https://huggingface.co/datasets/HUPD/hupd/blob/main/hupd_metadata_2022-02-22.feather",
- icpr_label=None,
- train_filing_start_date='2016-01-01',
- train_filing_end_date='2016-01-31',
- val_filing_start_date='2017-01-22',
- val_filing_end_date='2017-01-31',
-)
-
-df = pd.DataFrame.from_dict(dataset_dict["train"])
-df = pd.DataFrame(df,columns =['patent_number','decision', 'abstract', 'claims','filing_date'])
-#st.dataframe(df)
-PAN = df['patent_number'].drop_duplicates()
-
-st.title('Harvard USPTO Patentability Score')
-#make_choice = st.sidebar.selectbox('Select the Patent Application Number:', PAN)
-
-#####NEW
-with st.form("patent-form"):
- make_choice = st.selectbox('Select the Patent Application Number:', PAN)
- submitted = st.form_submit_button(label='submit')
-
- if submitted:
- #st.write("Outside the form")
- model_name = "distilbert-base-uncased-finetuned-sst-2-english"
- model = AutoModelForSequenceClassification.from_pretrained(model_name)
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
-
- #abstract = df['abstract'].loc[df['patent_number'] == make_choice]
-
- decision = df['decision'].loc[df['patent_number'] == make_choice]
- #X_train = abstract.to_string()
- X_train = decision.to_string()
- #X_train = abstract.values.tolist()
- results = classifier(X_train, truncation=True)
-
- for result in results:
- print(result)
- score = result['score']
- print(score)
- st.write("The Patentability Score is:", score)
-
-
-######NEW
-
-pd.options.display.max_colwidth = 100000
-
-abstract = df["abstract"].loc[df["patent_number"] == make_choice]
-st.subheader(':red[Patent Application]')
-st.subheader(':red[Abstract:]')
-st.info(abstract)
-
-
-claims = df["claims"].loc[df["patent_number"] == make_choice]
-st.subheader(':red[Claim:]')
-st.info(claims)
-
diff --git a/spaces/FridaZuley/RVC_HFKawaii/infer/modules/uvr5/modules.py b/spaces/FridaZuley/RVC_HFKawaii/infer/modules/uvr5/modules.py
deleted file mode 100644
index f63ac6a794100cc95da21dcba78b23377a1f133d..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/infer/modules/uvr5/modules.py
+++ /dev/null
@@ -1,107 +0,0 @@
-import os
-import traceback
-import logging
-
-logger = logging.getLogger(__name__)
-
-import ffmpeg
-import torch
-
-from configs.config import Config
-from infer.modules.uvr5.mdxnet import MDXNetDereverb
-from infer.modules.uvr5.preprocess import AudioPre, AudioPreDeEcho
-
-config = Config()
-
-
-def uvr(model_name, inp_root, save_root_vocal, paths, save_root_ins, agg, format0):
- infos = []
- try:
- inp_root = inp_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- save_root_vocal = (
- save_root_vocal.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- )
- save_root_ins = (
- save_root_ins.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- )
- if model_name == "onnx_dereverb_By_FoxJoy":
- pre_fun = MDXNetDereverb(15, config.device)
- else:
- func = AudioPre if "DeEcho" not in model_name else AudioPreDeEcho
- pre_fun = func(
- agg=int(agg),
- model_path=os.path.join(
- os.getenv("weight_uvr5_root"), model_name + ".pth"
- ),
- device=config.device,
- is_half=config.is_half,
- )
- if inp_root != "":
- paths = [os.path.join(inp_root, name) for name in os.listdir(inp_root)]
- else:
- paths = [path.name for path in paths]
- for path in paths:
- inp_path = os.path.join(inp_root, path)
- need_reformat = 1
- done = 0
- try:
- info = ffmpeg.probe(inp_path, cmd="ffprobe")
- if (
- info["streams"][0]["channels"] == 2
- and info["streams"][0]["sample_rate"] == "44100"
- ):
- need_reformat = 0
- pre_fun._path_audio_(
- inp_path, save_root_ins, save_root_vocal, format0
- )
- done = 1
- except:
- need_reformat = 1
- traceback.print_exc()
- if need_reformat == 1:
- tmp_path = "%s/%s.reformatted.wav" % (
- os.path.join(os.environ["TEMP"]),
- os.path.basename(inp_path),
- )
- os.system(
- "ffmpeg -i %s -vn -acodec pcm_s16le -ac 2 -ar 44100 %s -y"
- % (inp_path, tmp_path)
- )
- inp_path = tmp_path
- try:
- if done == 0:
- pre_fun.path_audio(
- inp_path, save_root_ins, save_root_vocal, format0
- )
- infos.append("%s->Success" % (os.path.basename(inp_path)))
- yield "\n".join(infos)
- except:
- try:
- if done == 0:
- pre_fun._path_audio_(
- inp_path, save_root_ins, save_root_vocal, format0
- )
- infos.append("%s->Success" % (os.path.basename(inp_path)))
- yield "\n".join(infos)
- except:
- infos.append(
- "%s->%s" % (os.path.basename(inp_path), traceback.format_exc())
- )
- yield "\n".join(infos)
- except:
- infos.append(traceback.format_exc())
- yield "\n".join(infos)
- finally:
- try:
- if model_name == "onnx_dereverb_By_FoxJoy":
- del pre_fun.pred.model
- del pre_fun.pred.model_
- else:
- del pre_fun.model
- del pre_fun
- except:
- traceback.print_exc()
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- logger.info("Executed torch.cuda.empty_cache()")
- yield "\n".join(infos)
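# Editor's note: illustration only, not the module's actual code path. uvr() above
# probes each input and shells out to ffmpeg to coerce it into 44.1 kHz stereo PCM
# before separation. The same probe-then-reformat step can be expressed with
# ffmpeg-python's builder API; file names below are placeholders.
import ffmpeg

def ensure_44k_stereo(inp_path: str, tmp_path: str) -> str:
    """Return a path to 44.1 kHz / 2-channel PCM audio, converting only when needed."""
    info = ffmpeg.probe(inp_path, cmd="ffprobe")
    stream = info["streams"][0]
    if stream.get("channels") == 2 and stream.get("sample_rate") == "44100":
        return inp_path  # already in the expected format
    (
        ffmpeg
        .input(inp_path)
        .output(tmp_path, acodec="pcm_s16le", ac=2, ar="44100")
        .overwrite_output()
        .run(quiet=True)
    )
    return tmp_path

# ensure_44k_stereo("song.mp3", "song.reformatted.wav")  # placeholder file names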
diff --git a/spaces/Friklogff/xx-xhai/app.py b/spaces/Friklogff/xx-xhai/app.py
deleted file mode 100644
index 1fa15d802f477119aa1e3a7515c7af43b272aba9..0000000000000000000000000000000000000000
--- a/spaces/Friklogff/xx-xhai/app.py
+++ /dev/null
@@ -1,166 +0,0 @@
-# -*- coding = utf-8 -*-
-"""
-# @Time : 2023/7/31 19:33
-# @Author : CSDN:FriKlogff
-# @File : PublicGui.py
-# @Software: PyCharm
-# @Function: Please enter the project's function
-"""
-import os
-os.system("""python -m pip install -i https://mirrors.aliyun.com/pypi/simple/ --upgrade pip setuptools
-pip install -i https://mirrors.aliyun.com/pypi/simple/ websocket
-pip install -i https://mirrors.aliyun.com/pypi/simple/ websocket-client
-pip install -i https://mirrors.aliyun.com/pypi/simple/ gradio
-pip install -i https://mirrors.aliyun.com/pypi/simple/ sxtwl
-""")
-from PublicFunctions import *
-import gradio as gr
-
-# 定义星座选项
-signs = ["白羊座", "金牛座", "双子座", "巨蟹座", "狮子座", "处女座",
- "天秤座", "天蝎座", "射手座", "摩羯座", "水瓶座", "双鱼座"]
-cards_num = [1, 2, 3, 4, 5]
-months = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
-days = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
- 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]
-hours = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]
-# 使用 Gradio 的模块化组件,构建包含五个选项卡的界面
-with gr.Blocks() as demo:
- with gr.Tab("星火api配置"):
- xh_input = [
- gr.components.Textbox(label="appid"),
- gr.components.Textbox(label="api_secret"),
- gr.components.Textbox(label="api_key"),
- gr.components.Textbox(label="gpt_url")
- ]
- xh_output = gr.components.Textbox(label="点击提交返回配置情况,请自行配置星火大模型API再使用后续功能")
- xh_button = gr.components.Button("提交")
- xh_button.click(xh_api, inputs=xh_input, outputs=xh_output
- )
- with gr.Tab("AI星座解读"):
- horoscope_input = [gr.components.Radio(choices=["男", "女"], label="性别"),
- gr.components.Textbox(label="姓名"),
- gr.components.Number(label="出生年份"),
- gr.components.Dropdown(months, label="出生月份"),
- gr.components.Dropdown(days, label="出生日"),
- gr.components.Dropdown(hours, label="出生时辰"),
- gr.components.Dropdown(signs, label="选择您的星座")
- ]
- horoscope_output = gr.components.Textbox(label="星座解读(由于我们的解析是由AI生成的,结果仅供娱乐,如果不成功请多试几次)")
- horoscope_button = gr.components.Button("提交")
- horoscope_button.click(horoscope_reading, inputs=horoscope_input, outputs=horoscope_output
- )
-
- with gr.Tab("AI塔罗牌解读"):
- tarot_input = [gr.components.Textbox(label="你想问的问题"),
- gr.components.Dropdown(cards_num, label="你想抽几张牌"),
- ]
- tarot_output = gr.components.Textbox(label="塔罗牌解析(由于我们的解析是由AI生成的,结果仅供娱乐,如果不成功请多试几次)")
- upload_button = gr.components.Button("抽取")
- upload_button.click(tarot_reading, inputs=tarot_input, outputs=tarot_output)
- with gr.Tab("AI八字合婚分析"):
- marriage_input = [gr.components.Textbox(label="新郎姓名"),
- gr.components.Number(label="出生年份"),
- gr.components.Dropdown(months, label="出生月份"),
- gr.components.Dropdown(days, label="出生日"),
- gr.components.Dropdown(hours, label="出生时辰"),
-
- gr.components.Textbox(label="新娘姓名"),
- gr.components.Number(label="出生年份"),
- gr.components.Dropdown(months, label="出生月份"),
- gr.components.Dropdown(days, label="出生日"),
- gr.components.Dropdown(hours, label="出生时辰"),
- ]
- marriage_analysis_output = gr.components.Textbox(label="婚姻分析(由于我们的解析是由AI生成的,结果仅供娱乐,如果不成功请多试几次)")
- analyze_button = gr.components.Button("马上测算")
- analyze_button.click(marriage_bazi_analysis,
- inputs=marriage_input,
- outputs=marriage_analysis_output)
- with gr.Tab("AI兔年运程预测"):
- birth_year_input = [gr.components.Radio(choices=["男", "女"], label="性别"),
- gr.components.Textbox(label="姓名"),
- gr.components.Number(label="出生年份"),
- gr.components.Dropdown(months, label="出生月份"),
- gr.components.Dropdown(days, label="出生日"),
- gr.components.Dropdown(hours, label="出生时辰"),
- ]
- prediction_output = gr.components.Textbox(label="运程预测(由于我们的解析是由AI生成的,结果仅供娱乐,如果不成功请多试几次)")
- predict_button = gr.components.Button("预测运势")
- predict_button.click(rabbit_year_prediction,
- inputs=birth_year_input,
- outputs=prediction_output)
- with gr.Tab("AI公司命理解析"):
- company_name_input = [gr.components.Radio(choices=["男", "女"], label="性别"),
- gr.components.Textbox(label="姓名"),
- gr.components.Number(label="出生年份"),
- gr.components.Dropdown(months, label="出生月份"),
- gr.components.Dropdown(days, label="出生日"),
- gr.components.Dropdown(hours, label="出生时辰"),
- gr.components.Textbox(label="公司名称"),
- gr.components.Textbox(label="所属行业")]
- name_analysis_output = gr.components.Textbox(label="命理分析(由于我们的解析是由AI生成的,结果仅供娱乐,如果不成功请多试几次)")
- analyze_button = gr.components.Button("分析")
- analyze_button.click(company_name_analysis,
- inputs=company_name_input,
- outputs=name_analysis_output)
- with gr.Tab("AI姓名配对"):
- name1_input = [gr.components.Textbox(label="姓名1"),
- gr.components.Textbox(label="姓名2"),
- ]
- matching_output = gr.components.Textbox(label="配对结果(由于我们的解析是由AI生成的,结果仅供娱乐,如果不成功请多试几次)")
- match_button = gr.components.Button("分析配对")
- match_button.click(name_compatibility,
- inputs=name1_input,
- outputs=matching_output)
-
- with gr.Tab("AI月老姻缘"):
- yue_lau_input = [gr.components.Radio(choices=["男", "女"], label="性别"),
- gr.components.Textbox(label="姓名"),
- gr.components.Number(label="出生年份"),
- gr.components.Dropdown(months, label="出生月份"),
- gr.components.Dropdown(days, label="出生日"),
- gr.components.Dropdown(hours, label="出生时辰"),
- ]
- affinity_output = gr.components.Textbox(label="姻缘分析(由于我们的解析是由AI生成的,结果仅供娱乐,如果不成功请多试几次)")
- analyze_button = gr.components.Button("分析姻缘")
- analyze_button.click(yue_lau_affinity,
- inputs=yue_lau_input,
- outputs=affinity_output)
-
- with gr.Tab("AI八字精批"):
- bazi_input = [gr.components.Radio(choices=["男", "女"], label="性别"),
- gr.components.Textbox(label="姓名"),
- gr.components.Number(label="出生年份"),
- gr.components.Dropdown(months, label="出生月份"),
- gr.components.Dropdown(days, label="出生日"),
- gr.components.Dropdown(hours, label="出生时辰"),
- ]
- analysis_output = gr.components.Textbox(label="精批结果(由于我们的解析是由AI生成的,结果仅供娱乐,如果不成功请多试几次)")
- batch_button = gr.components.Button("八字精批")
- batch_button.click(bazi_analysis,
- inputs=bazi_input,
- outputs=analysis_output)
-
- with gr.Tab("AI姓名分析"):
- name_input = [gr.components.Radio(choices=["男", "女"], label="性别"),
- gr.components.Textbox(label="姓名")]
- name_output = gr.components.Textbox(label="命理分析(由于我们的解析是由AI生成的,结果仅供娱乐,如果不成功请多试几次)")
- analyze_button = gr.components.Button("分析姓名")
- analyze_button.click(name_analysis,
- inputs=name_input,
- outputs=name_output)
- with gr.Tab("AI紫薇斗数解析"):
- zhiwei_input = [gr.components.Radio(choices=["男", "女"], label="性别"),
- gr.components.Textbox(label="姓名"),
- gr.components.Number(label="出生年份"),
- gr.components.Dropdown(months, label="出生月份"),
- gr.components.Dropdown(days, label="出生日"),
- gr.components.Dropdown(hours, label="出生时辰"),
- ]
- zhiwei_output = gr.components.Textbox(label="紫薇解读(由于我们的解析是由AI生成的,结果仅供娱乐,如果不成功请多试几次)")
- zhiwei_button = gr.components.Button("解读运势")
- zhiwei_button.click(zhiwei_analysis,
- inputs=zhiwei_input,
- outputs=zhiwei_output)
-demo.launch()
-# demo.launch(share=True)
diff --git a/spaces/GFXY/stabilityai-stable-diffusion-2-1-base/README.md b/spaces/GFXY/stabilityai-stable-diffusion-2-1-base/README.md
deleted file mode 100644
index e7b76ff3eba58f890806a1bc210f707bc51d8230..0000000000000000000000000000000000000000
--- a/spaces/GFXY/stabilityai-stable-diffusion-2-1-base/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Stabilityai Stable Diffusion 2 1 Base
-emoji: 🏆
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.34.0
-app_file: app.py
-pinned: false
-license: agpl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/GIZ/embedding_visualisation/README.md b/spaces/GIZ/embedding_visualisation/README.md
deleted file mode 100644
index fd7990229da29e116048e587c169e7fc9348167d..0000000000000000000000000000000000000000
--- a/spaces/GIZ/embedding_visualisation/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Embedding Visualisation
-emoji: 🐢
-colorFrom: purple
-colorTo: green
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
-python_version: 3.7.15
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/build_cylinder_structure.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/build_cylinder_structure.py
deleted file mode 100644
index 8e454f45f88dd8b7d331da4d047aec971efb63a1..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/build_cylinder_structure.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import numpy as np
-import os
-import pybullet as p
-import random
-from cliport.tasks import primitives
-from cliport.tasks.grippers import Spatula
-from cliport.tasks.task import Task
-from cliport.utils import utils
-
-class BuildCylinderStructure(Task):
- """Construct a structure using four colored cylinders (red, blue, green, yellow) on a square base."""
-
- def __init__(self):
- super().__init__()
- self.max_steps = 5
- self.lang_template = "construct a structure using four colored cylinders on a square base"
- self.task_completed_desc = "done building the cylinder structure."
- self.additional_reset()
-
- def reset(self, env):
- super().reset(env)
-
- # Add square base.
- # x, y, z dimensions for the asset size
- base_size = (0.15, 0.15, 0.005)
- base_urdf = 'square/square-template.urdf'
- base_pose = self.get_random_pose(env, base_size)
- env.add_object(base_urdf, base_pose, category='fixed')
-
- # Cylinder colors.
- colors = [
- utils.COLORS['red'], utils.COLORS['blue'], utils.COLORS['green'], utils.COLORS['yellow']
- ]
-
- # Add cylinders.
- # x, y, z dimensions for the asset size
- cylinder_size = (0.04, 0.04, 0.08)
- cylinder_urdf = 'cylinder/cylinder-template.urdf'
-
- objs = []
- for i in range(4):
- cylinder_pose = self.get_random_pose(env, cylinder_size)
- cylinder_id = env.add_object(cylinder_urdf, cylinder_pose, color=colors[i])
- objs.append(cylinder_id)
-
- # Associate placement locations for goals.
- place_pos = [(0, -0.05, 0.04), (0, 0.05, 0.04),
- (0, 0.05, 0.12), (0, -0.05, 0.12)]
- targs = [(utils.apply(base_pose, i), base_pose[1]) for i in place_pos]
-
- # Goal: red and blue cylinders are placed side by side on the base.
- self.add_goal(objs=objs[:2], matches=np.ones((2, 2)), targ_poses=targs[:2], replace=False,
- rotations=True, metric='pose', params=None, step_max_reward=1 / 2, symmetries=[np.pi/2]*2,
- language_goal="place the red and blue cylinders side by side on the base")
-
- # Goal: green cylinder is placed on top of the blue cylinder.
- self.add_goal(objs=[objs[2]], matches=np.ones((1, 1)), targ_poses=[targs[2]], replace=False,
- rotations=True, metric='pose', params=None, step_max_reward=1 / 2, symmetries=[np.pi/2],
- language_goal="place the green cylinder on top of the blue cylinder")
-
- # Goal: yellow cylinder is placed on top of the red cylinder.
- self.add_goal(objs=[objs[3]], matches=np.ones((1, 1)), targ_poses=[targs[3]], replace=False,
- rotations=True, metric='pose', params=None, step_max_reward=1 / 2, symmetries=[np.pi/2],
- language_goal="place the yellow cylinder on top of the red cylinder")
\ No newline at end of file
diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train3_gptmixcliport3_small.sh b/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train3_gptmixcliport3_small.sh
deleted file mode 100644
index 8a6d0311fb48c598836814349a207af362f608a9..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train3_gptmixcliport3_small.sh
+++ /dev/null
@@ -1,13 +0,0 @@
-#!/bin/bash
-#SBATCH -c 10
-#SBATCH -n 1
-#SBATCH -o logs/%j.out
-#SBATCH --exclusive
-STEPS=${1-'50000'}
-now=$(date "+%Y-%m-%d_%H-%M-%S")
-
-
-sh scripts/traintest_scripts/train_test_multi_task_goal.sh data \
- "[place-red-in-green,stack-block-pyramid,put-block-in-bowl,color-coordinated-sphere-insertion,rainbow-stack,vertical-insertion-blocks]" \
- "[place-red-in-green,stack-block-pyramid,put-block-in-bowl]" \
- gpt3_mixcliport3_${now}
\ No newline at end of file
diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/model_zoo.md b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/model_zoo.md
deleted file mode 100644
index 132cc514bac6b447addac8485e0622a834d34474..0000000000000000000000000000000000000000
--- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/model_zoo.md
+++ /dev/null
@@ -1,49 +0,0 @@
-# :european_castle: Model Zoo
-
-- [For General Images](#for-general-images)
-- [For Anime Images / Illustrations](#for-anime-images--illustrations)
-- [For Animation Videos](#for-animation-videos)
-
----
-
-## For General Images
-
-| Models | Scale | Description |
-| ------------------------------------------------------------------------------------------------------------------------------- | :---- | :------------------------------------------- |
-| [RealESRGAN_x4plus](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth) | X4 | X4 model for general images |
-| [RealESRGAN_x2plus](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth) | X2 | X2 model for general images |
-| [RealESRNet_x4plus](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRNet_x4plus.pth) | X4 | X4 model with MSE loss (over-smooth effects) |
-| [official ESRGAN_x4](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth) | X4 | official ESRGAN model |
-| [realesr-general-x4v3](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth) | X4 (can also be used for X1, X2, X3) | A tiny model (uses much less GPU memory and time); weaker deblurring and denoising capacity |
-
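-As a rough usage illustration, the weights above can be loaded with the `basicsr` and `realesrgan` Python packages from the Real-ESRGAN repository. This is a minimal sketch following the upstream README; the file names and exact constructor arguments are illustrative and may differ between package versions:
-
-```python
-import cv2
-from basicsr.archs.rrdbnet_arch import RRDBNet
-from realesrgan import RealESRGANer
-
-# RealESRGAN_x4plus uses the standard 23-block RRDBNet generator at scale 4.
-model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
-upsampler = RealESRGANer(
-    scale=4,
-    model_path='RealESRGAN_x4plus.pth',  # weights downloaded from the table above
-    model=model,
-    tile=0,        # set > 0 to process large images in tiles
-    tile_pad=10,
-    pre_pad=0,
-    half=False)    # half=True enables fp16 inference on GPU
-
-img = cv2.imread('input.jpg', cv2.IMREAD_COLOR)
-output, _ = upsampler.enhance(img, outscale=4)
-cv2.imwrite('output_x4.png', output)
-```
-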
-The following models are **discriminators**, which are usually used for fine-tuning.
-
-| Models | Corresponding model |
-| ---------------------------------------------------------------------------------------------------------------------- | :------------------ |
-| [RealESRGAN_x4plus_netD](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x4plus_netD.pth) | RealESRGAN_x4plus |
-| [RealESRGAN_x2plus_netD](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x2plus_netD.pth) | RealESRGAN_x2plus |
-
-## For Anime Images / Illustrations
-
-| Models | Scale | Description |
-| ------------------------------------------------------------------------------------------------------------------------------ | :---- | :---------------------------------------------------------- |
-| [RealESRGAN_x4plus_anime_6B](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth) | X4 | Optimized for anime images; 6 RRDB blocks (smaller network) |
-
-The following models are **discriminators**, which are usually used for fine-tuning.
-
-| Models | Corresponding model |
-| ---------------------------------------------------------------------------------------------------------------------------------------- | :------------------------- |
-| [RealESRGAN_x4plus_anime_6B_netD](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B_netD.pth) | RealESRGAN_x4plus_anime_6B |
-
-## For Animation Videos
-
-| Models | Scale | Description |
-| ---------------------------------------------------------------------------------------------------------------------------------- | :---- | :----------------------------- |
-| [realesr-animevideov3](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth) | X4<sup>1</sup> | Anime video model with XS size |
-
-Note:
-<sup>1</sup> This model can also be used for X1, X2, X3.
-
-The following models are **discriminators**, which are usually used for fine-tuning.
-
-TODO
diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/Dataloader.py b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/Dataloader.py
deleted file mode 100644
index 05a6d191de076299fa6bc9a571572f3cc05d279c..0000000000000000000000000000000000000000
--- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/Dataloader.py
+++ /dev/null
@@ -1,231 +0,0 @@
-import glob
-import io
-import numpy as np
-import re
-import os
-import random
-from io import BytesIO
-from uuid import uuid4
-import sqlite3
-import h5py
-import torch
-from PIL import Image
-from torch.utils.data import Dataset
-from torchvision.transforms import RandomCrop
-from torchvision.transforms.functional import to_tensor
-
-
-class ImageH5Data(Dataset):
- def __init__(self, h5py_file, folder_name):
- self.data = h5py.File(h5py_file, "r")[folder_name]
- self.data_hr = self.data["train_hr"]
- self.data_lr = self.data["train_lr"]
- self.len_imgs = len(self.data_hr)
- self.h5py_file = h5py_file
- self.folder_name = folder_name
-
- def __len__(self):
- # with h5py.File(self.h5py_file, 'r') as f:
- # return len(f[self.folder_name]['train_lr'])
- return self.len_imgs
-
- def __getitem__(self, index):
- # with h5py.File(self.h5py_file, 'r') as f:
- # data_lr = f[self.folder_name]['train_lr'][index]
- # data_hr = f[self.folder_name]['train_lr'][index]
- #
- # return data_lr, data_hr
- return self.data_lr[index], self.data_hr[index]
-
-
-class ImageData(Dataset):
- def __init__(
- self,
- img_folder,
- patch_size=96,
- shrink_size=2,
- noise_level=1,
- down_sample_method=None,
- color_mod="RGB",
- dummy_len=None,
- ):
-
- self.img_folder = img_folder
- all_img = glob.glob(self.img_folder + "/**", recursive=True)
- self.img = list(
- filter(
- lambda x: x.endswith("png") or x.endswith("jpg") or x.endswith("jpeg"),
- all_img,
- )
- )
- self.total_img = len(self.img)
- self.dummy_len = dummy_len if dummy_len is not None else self.total_img
- self.random_cropper = RandomCrop(size=patch_size)
- self.color_mod = color_mod
- self.img_augmenter = ImageAugment(shrink_size, noise_level, down_sample_method)
-
- def get_img_patches(self, img_file):
- img_pil = Image.open(img_file).convert("RGB")
- img_patch = self.random_cropper(img_pil)
- lr_hr_patches = self.img_augmenter.process(img_patch)
- return lr_hr_patches
-
- def __len__(self):
- return self.dummy_len # len(self.img)
-
- def __getitem__(self, index):
- idx = random.choice(range(0, self.total_img))
- img = self.img[idx]
- patch = self.get_img_patches(img)
- if self.color_mod == "RGB":
- lr_img = patch[0].convert("RGB")
- hr_img = patch[1].convert("RGB")
- elif self.color_mod == "YCbCr":
- lr_img, _, _ = patch[0].convert("YCbCr").split()
- hr_img, _, _ = patch[1].convert("YCbCr").split()
- else:
- raise KeyError("Either RGB or YCbCr")
- return to_tensor(lr_img), to_tensor(hr_img)
-
-
-class Image2Sqlite(ImageData):
- def __getitem__(self, item):
- img = self.img[item]
- lr_hr_patch = self.get_img_patches(img)
- if self.color_mod == "RGB":
- lr_img = lr_hr_patch[0].convert("RGB")
- hr_img = lr_hr_patch[1].convert("RGB")
- elif self.color_mod == "YCbCr":
- lr_img, _, _ = lr_hr_patch[0].convert("YCbCr").split()
- hr_img, _, _ = lr_hr_patch[1].convert("YCbCr").split()
- else:
- raise KeyError("Either RGB or YCbCr")
- lr_byte = self.convert_to_bytevalue(lr_img)
- hr_byte = self.convert_to_bytevalue(hr_img)
- return [lr_byte, hr_byte]
-
- @staticmethod
- def convert_to_bytevalue(pil_img):
- img_byte = io.BytesIO()
- pil_img.save(img_byte, format="png")
- return img_byte.getvalue()
-
-
-class ImageDBData(Dataset):
- def __init__(
- self,
- db_file,
- db_table="images",
- lr_col="lr_img",
- hr_col="hr_img",
- max_images=None,
- ):
- self.db_file = db_file
- self.db_table = db_table
- self.lr_col = lr_col
- self.hr_col = hr_col
- self.total_images = self.get_num_rows(max_images)
- # self.lr_hr_images = self.get_all_images()
-
- def __len__(self):
- return self.total_images
-
- # def get_all_images(self):
- # with sqlite3.connect(self.db_file) as conn:
- # cursor = conn.cursor()
- # cursor.execute(f"SELECT * FROM {self.db_table} LIMIT {self.total_images}")
- # return cursor.fetchall()
-
- def get_num_rows(self, max_images):
- with sqlite3.connect(self.db_file) as conn:
- cursor = conn.cursor()
- cursor.execute(f"SELECT MAX(ROWID) FROM {self.db_table}")
- db_rows = cursor.fetchone()[0]
- if max_images:
- return min(max_images, db_rows)
- else:
- return db_rows
-
- def __getitem__(self, item):
- # lr, hr = self.lr_hr_images[item]
- # lr = Image.open(io.BytesIO(lr))
- # hr = Image.open(io.BytesIO(hr))
- # return to_tensor(lr), to_tensor(hr)
- # note sqlite rowid starts with 1
- with sqlite3.connect(self.db_file) as conn:
- cursor = conn.cursor()
- cursor.execute(
- f"SELECT {self.lr_col}, {self.hr_col} FROM {self.db_table} WHERE ROWID={item + 1}"
- )
- lr, hr = cursor.fetchone()
- lr = Image.open(io.BytesIO(lr)).convert("RGB")
- hr = Image.open(io.BytesIO(hr)).convert("RGB")
- # lr = np.array(lr) # use scale [0, 255] instead of [0,1]
- # hr = np.array(hr)
- return to_tensor(lr), to_tensor(hr)
-
-
-class ImagePatchData(Dataset):
- def __init__(self, lr_folder, hr_folder):
- self.lr_folder = lr_folder
- self.hr_folder = hr_folder
- self.lr_imgs = glob.glob(os.path.join(lr_folder, "**"))
- self.total_imgs = len(self.lr_imgs)
-
- def __len__(self):
- return self.total_imgs
-
- def __getitem__(self, item):
- lr_file = self.lr_imgs[item]
- hr_path = re.sub("lr", "hr", os.path.dirname(lr_file))
- filename = os.path.basename(lr_file)
- hr_file = os.path.join(hr_path, filename)
- return to_tensor(Image.open(lr_file)), to_tensor(Image.open(hr_file))
-
-
-class ImageAugment:
- def __init__(self, shrink_size=2, noise_level=1, down_sample_method=None):
- # noise_level (int): 0: no noise; 1: 75-95% quality; 2:50-75%
- if noise_level == 0:
- self.noise_level = [0, 0]
- elif noise_level == 1:
- self.noise_level = [5, 25]
- elif noise_level == 2:
- self.noise_level = [25, 50]
- else:
- raise KeyError("Noise level should be either 0, 1, 2")
- self.shrink_size = shrink_size
- self.down_sample_method = down_sample_method
-
- def shrink_img(self, hr_img):
-
- if self.down_sample_method is None:
- resample_method = random.choice(
- [Image.BILINEAR, Image.BICUBIC, Image.LANCZOS]
- )
- else:
- resample_method = self.down_sample_method
- img_w, img_h = tuple(map(lambda x: int(x / self.shrink_size), hr_img.size))
- lr_img = hr_img.resize((img_w, img_h), resample_method)
- return lr_img
-
- def add_jpeg_noise(self, hr_img):
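-        # JPEG quality is drawn as 100 - round(uniform(*noise_level)),
-        # i.e. roughly 75-95 for noise_level=1 and 50-75 for noise_level=2.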
- quality = 100 - round(random.uniform(*self.noise_level))
- lr_img = BytesIO()
- hr_img.save(lr_img, format="JPEG", quality=quality)
- lr_img.seek(0)
- lr_img = Image.open(lr_img)
- return lr_img
-
- def process(self, hr_patch_pil):
- lr_patch_pil = self.shrink_img(hr_patch_pil)
- if self.noise_level[1] > 0:
- lr_patch_pil = self.add_jpeg_noise(lr_patch_pil)
-
- return lr_patch_pil, hr_patch_pil
-
- def up_sample(self, img, resample):
- width, height = img.size
- return img.resize(
- (self.shrink_size * width, self.shrink_size * height), resample=resample
- )
diff --git a/spaces/GowthamSiddharth/MyAssist_ChatBot/README.md b/spaces/GowthamSiddharth/MyAssist_ChatBot/README.md
deleted file mode 100644
index acf2cbfb7efa210a2c887be8f5f19c29d69d7e19..0000000000000000000000000000000000000000
--- a/spaces/GowthamSiddharth/MyAssist_ChatBot/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: MyAssist ChatBot
-emoji: 📚
-colorFrom: purple
-colorTo: pink
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Gradio-Blocks/EmojiGAN/torch_utils/ops/__init__.py b/spaces/Gradio-Blocks/EmojiGAN/torch_utils/ops/__init__.py
deleted file mode 100644
index ece0ea08fe2e939cc260a1dafc0ab5b391b773d9..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/EmojiGAN/torch_utils/ops/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-# empty
diff --git a/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/models/stylegan2/op/upfirdn2d.cpp b/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/models/stylegan2/op/upfirdn2d.cpp
deleted file mode 100644
index d2e633dc896433c205e18bc3e455539192ff968e..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/models/stylegan2/op/upfirdn2d.cpp
+++ /dev/null
@@ -1,23 +0,0 @@
-#include <torch/extension.h>
-
-
-torch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel,
- int up_x, int up_y, int down_x, int down_y,
- int pad_x0, int pad_x1, int pad_y0, int pad_y1);
-
-#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
-#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous")
-#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
-
-torch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel,
- int up_x, int up_y, int down_x, int down_y,
- int pad_x0, int pad_x1, int pad_y0, int pad_y1) {
- CHECK_CUDA(input);
- CHECK_CUDA(kernel);
-
- return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1);
-}
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
- m.def("upfirdn2d", &upfirdn2d, "upfirdn2d (CUDA)");
-}
\ No newline at end of file
diff --git a/spaces/Gradio-Blocks/are-you-wearing-a-mask/app.py b/spaces/Gradio-Blocks/are-you-wearing-a-mask/app.py
deleted file mode 100644
index b897953f05e74498cbccbb9ab06a5844b3931164..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/are-you-wearing-a-mask/app.py
+++ /dev/null
@@ -1,29 +0,0 @@
-# Are you wearing a mask?
-import gradio as gr
-import torch
-import torchvision
-import numpy as np
-from PIL import Image
-
-# Face masks
-# TODO: Allow user selectable model?
-model = torch.hub.load('ultralytics/yolov5:v6.2', 'custom', "model_weights/face_masks_v8.pt")
-
-def yolo(im, size=640):
- g = (size / max(im.size)) # gain
- im = im.resize((int(x * g) for x in im.size), Image.ANTIALIAS) # resize
-
- results = model(im) # inference
- results.render() # updates results.imgs with boxes and labels
- return Image.fromarray(results.imgs[0])
-
-
-inputs = gr.inputs.Image(type='pil', label="Original Image")
-outputs = gr.outputs.Image(type="pil", label="Output Image")
-
-title = "Are you wearing a mask?"
-description = "Detecting masked and unmasked faces with YOLOv5. Take a picture, upload an image, or click an example image to use."
-article = "This app makes predictions using a YOLOv5s model that was fine tuned on a dataset of people with and without masks. All of the code for training the model is available on GitHub. This app and the model behind it were created by Henry Lydecker, for a course he developed for the Sydney Informatics Hub, a Core Research Facility of The University of Sydney. Find out more about the YOLO model from the original creator, Joseph Redmon. YOLOv5 is a family of compound-scaled object detection models trained on the COCO dataset and developed by Ultralytics, and includes simple functionality for Test Time Augmentation (TTA), model ensembling, hyperparameter evolution, and export to ONNX, CoreML and TFLite. Source code | PyTorch Hub"
-
-examples = [['data/picard.jpg'], ['data/crowd.jpeg'],['data/baseball2.jpeg'],['data/santa-claus-orig.jpg'],['data/kfc_anime2.jpg'],['data/doge2.webp'],['data/cat_mask.jpg']]
-gr.Interface(yolo, inputs, outputs, title=title, description=description, article=article, examples=examples, theme="huggingface").launch(enable_queue=True)
\ No newline at end of file
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_x101_32x4d_fpn_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_x101_32x4d_fpn_2x_coco.py
deleted file mode 100644
index 927609206e1323dcf1173c4a5393e3f03d534c0a..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_x101_32x4d_fpn_2x_coco.py
+++ /dev/null
@@ -1,13 +0,0 @@
-_base_ = './faster_rcnn_r50_fpn_2x_coco.py'
-model = dict(
- pretrained='open-mmlab://resnext101_32x4d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=32,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- style='pytorch'))
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/nas_fcos/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/nas_fcos/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco.py
deleted file mode 100644
index ef81123a2ebd5a30eb812d321eb7a3764e315a72..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/nas_fcos/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco.py
+++ /dev/null
@@ -1,97 +0,0 @@
-_base_ = [
- '../_base_/datasets/coco_detection.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-
-model = dict(
- type='NASFCOS',
- pretrained='open-mmlab://detectron2/resnet50_caffe',
- backbone=dict(
- type='ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=False, eps=0),
- style='caffe'),
- neck=dict(
- type='NASFCOS_FPN',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- start_level=1,
- add_extra_convs=True,
- num_outs=5,
- norm_cfg=dict(type='BN'),
- conv_cfg=dict(type='DCNv2', deform_groups=2)),
- bbox_head=dict(
- type='NASFCOSHead',
- num_classes=80,
- in_channels=256,
- feat_channels=256,
- strides=[8, 16, 32, 64, 128],
- norm_cfg=dict(type='GN', num_groups=32),
- loss_cls=dict(
- type='FocalLoss',
- use_sigmoid=True,
- gamma=2.0,
- alpha=0.25,
- loss_weight=1.0),
- loss_bbox=dict(type='IoULoss', loss_weight=1.0),
- loss_centerness=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0)),
- train_cfg=dict(
- assigner=dict(
- type='MaxIoUAssigner',
- pos_iou_thr=0.5,
- neg_iou_thr=0.4,
- min_pos_iou=0,
- ignore_iof_thr=-1),
- allowed_border=-1,
- pos_weight=-1,
- debug=False),
- test_cfg=dict(
- nms_pre=1000,
- min_bbox_size=0,
- score_thr=0.05,
- nms=dict(type='nms', iou_threshold=0.6),
- max_per_img=100))
-
-img_norm_cfg = dict(
- mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
-
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True),
- dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
-]
-
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-
-data = dict(
- samples_per_gpu=4,
- workers_per_gpu=2,
- train=dict(pipeline=train_pipeline),
- val=dict(pipeline=test_pipeline),
- test=dict(pipeline=test_pipeline))
-
-optimizer = dict(
- lr=0.01, paramwise_cfg=dict(bias_lr_mult=2., bias_decay_mult=0.))
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/fcos_head.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/fcos_head.py
deleted file mode 100644
index 905a703507f279ac8d34cff23c99af33c0d5f973..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/fcos_head.py
+++ /dev/null
@@ -1,629 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import Scale, normal_init
-from mmcv.runner import force_fp32
-
-from mmdet.core import distance2bbox, multi_apply, multiclass_nms, reduce_mean
-from ..builder import HEADS, build_loss
-from .anchor_free_head import AnchorFreeHead
-
-INF = 1e8
-
-
-@HEADS.register_module()
-class FCOSHead(AnchorFreeHead):
- """Anchor-free head used in `FCOS `_.
-
- The FCOS head does not use anchor boxes. Instead bounding boxes are
- predicted at each pixel and a centerness measure is used to suppress
- low-quality predictions.
- Here norm_on_bbox, centerness_on_reg, dcn_on_last_conv are training
- tricks used in official repo, which will bring remarkable mAP gains
- of up to 4.9. Please see https://github.com/tianzhi0549/FCOS for
- more detail.
-
- Args:
- num_classes (int): Number of categories excluding the background
- category.
- in_channels (int): Number of channels in the input feature map.
- strides (list[int] | list[tuple[int, int]]): Strides of points
- in multiple feature levels. Default: (4, 8, 16, 32, 64).
- regress_ranges (tuple[tuple[int, int]]): Regress range of multiple
- level points.
- center_sampling (bool): If true, use center sampling. Default: False.
- center_sample_radius (float): Radius of center sampling. Default: 1.5.
- norm_on_bbox (bool): If true, normalize the regression targets
- with FPN strides. Default: False.
- centerness_on_reg (bool): If true, position centerness on the
- regress branch. Please refer to https://github.com/tianzhi0549/FCOS/issues/89#issuecomment-516877042.
- Default: False.
- conv_bias (bool | str): If specified as `auto`, it will be decided by the
- norm_cfg. Bias of conv will be set as True if `norm_cfg` is None, otherwise
- False. Default: "auto".
- loss_cls (dict): Config of classification loss.
- loss_bbox (dict): Config of localization loss.
- loss_centerness (dict): Config of centerness loss.
- norm_cfg (dict): dictionary to construct and config norm layer.
- Default: norm_cfg=dict(type='GN', num_groups=32, requires_grad=True).
-
- Example:
- >>> self = FCOSHead(11, 7)
- >>> feats = [torch.rand(1, 7, s, s) for s in [4, 8, 16, 32, 64]]
- >>> cls_score, bbox_pred, centerness = self.forward(feats)
- >>> assert len(cls_score) == len(self.scales)
- """ # noqa: E501
-
- def __init__(self,
- num_classes,
- in_channels,
- regress_ranges=((-1, 64), (64, 128), (128, 256), (256, 512),
- (512, INF)),
- center_sampling=False,
- center_sample_radius=1.5,
- norm_on_bbox=False,
- centerness_on_reg=False,
- loss_cls=dict(
- type='FocalLoss',
- use_sigmoid=True,
- gamma=2.0,
- alpha=0.25,
- loss_weight=1.0),
- loss_bbox=dict(type='IoULoss', loss_weight=1.0),
- loss_centerness=dict(
- type='CrossEntropyLoss',
- use_sigmoid=True,
- loss_weight=1.0),
- norm_cfg=dict(type='GN', num_groups=32, requires_grad=True),
- **kwargs):
- self.regress_ranges = regress_ranges
- self.center_sampling = center_sampling
- self.center_sample_radius = center_sample_radius
- self.norm_on_bbox = norm_on_bbox
- self.centerness_on_reg = centerness_on_reg
- super().__init__(
- num_classes,
- in_channels,
- loss_cls=loss_cls,
- loss_bbox=loss_bbox,
- norm_cfg=norm_cfg,
- **kwargs)
- self.loss_centerness = build_loss(loss_centerness)
-
- def _init_layers(self):
- """Initialize layers of the head."""
- super()._init_layers()
- self.conv_centerness = nn.Conv2d(self.feat_channels, 1, 3, padding=1)
- self.scales = nn.ModuleList([Scale(1.0) for _ in self.strides])
-
- def init_weights(self):
- """Initialize weights of the head."""
- super().init_weights()
- normal_init(self.conv_centerness, std=0.01)
-
- def forward(self, feats):
- """Forward features from the upstream network.
-
- Args:
- feats (tuple[Tensor]): Features from the upstream network, each is
- a 4D-tensor.
-
- Returns:
- tuple:
- cls_scores (list[Tensor]): Box scores for each scale level, \
- each is a 4D-tensor, the channel number is \
- num_points * num_classes.
- bbox_preds (list[Tensor]): Box energies / deltas for each \
- scale level, each is a 4D-tensor, the channel number is \
- num_points * 4.
- centernesses (list[Tensor]): centerness for each scale level, \
- each is a 4D-tensor, the channel number is num_points * 1.
- """
- return multi_apply(self.forward_single, feats, self.scales,
- self.strides)
-
- def forward_single(self, x, scale, stride):
- """Forward features of a single scale level.
-
- Args:
- x (Tensor): FPN feature maps of the specified stride.
- scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize
- the bbox prediction.
- stride (int): The corresponding stride for feature maps, only
- used to normalize the bbox prediction when self.norm_on_bbox
- is True.
-
- Returns:
- tuple: scores for each class, bbox predictions and centerness \
- predictions of input feature maps.
- """
- cls_score, bbox_pred, cls_feat, reg_feat = super().forward_single(x)
- if self.centerness_on_reg:
- centerness = self.conv_centerness(reg_feat)
- else:
- centerness = self.conv_centerness(cls_feat)
- # scale the bbox_pred of different level
- # float to avoid overflow when enabling FP16
- bbox_pred = scale(bbox_pred).float()
- if self.norm_on_bbox:
- bbox_pred = F.relu(bbox_pred)
- if not self.training:
- bbox_pred *= stride
- else:
- bbox_pred = bbox_pred.exp()
- return cls_score, bbox_pred, centerness
-
- @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'centernesses'))
- def loss(self,
- cls_scores,
- bbox_preds,
- centernesses,
- gt_bboxes,
- gt_labels,
- img_metas,
- gt_bboxes_ignore=None):
- """Compute loss of the head.
-
- Args:
- cls_scores (list[Tensor]): Box scores for each scale level,
- each is a 4D-tensor, the channel number is
- num_points * num_classes.
- bbox_preds (list[Tensor]): Box energies / deltas for each scale
- level, each is a 4D-tensor, the channel number is
- num_points * 4.
- centernesses (list[Tensor]): centerness for each scale level, each
- is a 4D-tensor, the channel number is num_points * 1.
- gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
- shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]): class indices corresponding to each box
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- gt_bboxes_ignore (None | list[Tensor]): specify which bounding
- boxes can be ignored when computing the loss.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components.
- """
- assert len(cls_scores) == len(bbox_preds) == len(centernesses)
- featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
- all_level_points = self.get_points(featmap_sizes, bbox_preds[0].dtype,
- bbox_preds[0].device)
- labels, bbox_targets = self.get_targets(all_level_points, gt_bboxes,
- gt_labels)
-
- num_imgs = cls_scores[0].size(0)
- # flatten cls_scores, bbox_preds and centerness
- flatten_cls_scores = [
- cls_score.permute(0, 2, 3, 1).reshape(-1, self.cls_out_channels)
- for cls_score in cls_scores
- ]
- flatten_bbox_preds = [
- bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4)
- for bbox_pred in bbox_preds
- ]
- flatten_centerness = [
- centerness.permute(0, 2, 3, 1).reshape(-1)
- for centerness in centernesses
- ]
- flatten_cls_scores = torch.cat(flatten_cls_scores)
- flatten_bbox_preds = torch.cat(flatten_bbox_preds)
- flatten_centerness = torch.cat(flatten_centerness)
- flatten_labels = torch.cat(labels)
- flatten_bbox_targets = torch.cat(bbox_targets)
- # repeat points to align with bbox_preds
- flatten_points = torch.cat(
- [points.repeat(num_imgs, 1) for points in all_level_points])
-
- # FG cat_id: [0, num_classes -1], BG cat_id: num_classes
- bg_class_ind = self.num_classes
- pos_inds = ((flatten_labels >= 0)
- & (flatten_labels < bg_class_ind)).nonzero().reshape(-1)
- num_pos = torch.tensor(
- len(pos_inds), dtype=torch.float, device=bbox_preds[0].device)
- num_pos = max(reduce_mean(num_pos), 1.0)
- loss_cls = self.loss_cls(
- flatten_cls_scores, flatten_labels, avg_factor=num_pos)
-
- pos_bbox_preds = flatten_bbox_preds[pos_inds]
- pos_centerness = flatten_centerness[pos_inds]
-
- if len(pos_inds) > 0:
- pos_bbox_targets = flatten_bbox_targets[pos_inds]
- pos_centerness_targets = self.centerness_target(pos_bbox_targets)
- pos_points = flatten_points[pos_inds]
- pos_decoded_bbox_preds = distance2bbox(pos_points, pos_bbox_preds)
- pos_decoded_target_preds = distance2bbox(pos_points,
- pos_bbox_targets)
- # centerness weighted iou loss
- centerness_denorm = max(
- reduce_mean(pos_centerness_targets.sum().detach()), 1e-6)
- loss_bbox = self.loss_bbox(
- pos_decoded_bbox_preds,
- pos_decoded_target_preds,
- weight=pos_centerness_targets,
- avg_factor=centerness_denorm)
- loss_centerness = self.loss_centerness(
- pos_centerness, pos_centerness_targets, avg_factor=num_pos)
- else:
- loss_bbox = pos_bbox_preds.sum()
- loss_centerness = pos_centerness.sum()
-
- return dict(
- loss_cls=loss_cls,
- loss_bbox=loss_bbox,
- loss_centerness=loss_centerness)
-
- @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'centernesses'))
- def get_bboxes(self,
- cls_scores,
- bbox_preds,
- centernesses,
- img_metas,
- cfg=None,
- rescale=False,
- with_nms=True):
- """Transform network output for a batch into bbox predictions.
-
- Args:
- cls_scores (list[Tensor]): Box scores for each scale level
- with shape (N, num_points * num_classes, H, W).
- bbox_preds (list[Tensor]): Box energies / deltas for each scale
- level with shape (N, num_points * 4, H, W).
- centernesses (list[Tensor]): Centerness for each scale level with
- shape (N, num_points * 1, H, W).
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- cfg (mmcv.Config | None): Test / postprocessing configuration,
- if None, test_cfg would be used. Default: None.
- rescale (bool): If True, return boxes in original image space.
- Default: False.
- with_nms (bool): If True, do nms before return boxes.
- Default: True.
-
- Returns:
- list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple.
- The first item is an (n, 5) tensor, where 5 represent
- (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1.
- The shape of the second tensor in the tuple is (n,), and
- each element represents the class label of the corresponding
- box.
- """
- assert len(cls_scores) == len(bbox_preds)
- num_levels = len(cls_scores)
-
- featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
- mlvl_points = self.get_points(featmap_sizes, bbox_preds[0].dtype,
- bbox_preds[0].device)
-
- cls_score_list = [cls_scores[i].detach() for i in range(num_levels)]
- bbox_pred_list = [bbox_preds[i].detach() for i in range(num_levels)]
- centerness_pred_list = [
- centernesses[i].detach() for i in range(num_levels)
- ]
- if torch.onnx.is_in_onnx_export():
- assert len(
- img_metas
- ) == 1, 'Only support one input image while in exporting to ONNX'
- img_shapes = img_metas[0]['img_shape_for_onnx']
- else:
- img_shapes = [
- img_metas[i]['img_shape']
- for i in range(cls_scores[0].shape[0])
- ]
- scale_factors = [
- img_metas[i]['scale_factor'] for i in range(cls_scores[0].shape[0])
- ]
- result_list = self._get_bboxes(cls_score_list, bbox_pred_list,
- centerness_pred_list, mlvl_points,
- img_shapes, scale_factors, cfg, rescale,
- with_nms)
- return result_list
-
- def _get_bboxes(self,
- cls_scores,
- bbox_preds,
- centernesses,
- mlvl_points,
- img_shapes,
- scale_factors,
- cfg,
- rescale=False,
- with_nms=True):
- """Transform outputs for a single batch item into bbox predictions.
-
- Args:
- cls_scores (list[Tensor]): Box scores for a single scale level
- with shape (N, num_points * num_classes, H, W).
- bbox_preds (list[Tensor]): Box energies / deltas for a single scale
- level with shape (N, num_points * 4, H, W).
- centernesses (list[Tensor]): Centerness for a single scale level
- with shape (N, num_points * 4, H, W).
- mlvl_points (list[Tensor]): Box reference for a single scale level
- with shape (num_total_points, 4).
- img_shapes (list[tuple[int]]): Shape of the input image,
- list[(height, width, 3)].
- scale_factors (list[ndarray]): Scale factor of the image arrange as
- (w_scale, h_scale, w_scale, h_scale).
- cfg (mmcv.Config | None): Test / postprocessing configuration,
- if None, test_cfg would be used.
- rescale (bool): If True, return boxes in original image space.
- Default: False.
- with_nms (bool): If True, do nms before return boxes.
- Default: True.
-
- Returns:
- tuple(Tensor):
- det_bboxes (Tensor): BBox predictions in shape (n, 5), where
- the first 4 columns are bounding box positions
- (tl_x, tl_y, br_x, br_y) and the 5-th column is a score
- between 0 and 1.
- det_labels (Tensor): A (n,) tensor where each item is the
- predicted class label of the corresponding box.
- """
- cfg = self.test_cfg if cfg is None else cfg
- assert len(cls_scores) == len(bbox_preds) == len(mlvl_points)
- device = cls_scores[0].device
- batch_size = cls_scores[0].shape[0]
- # convert to tensor to keep tracing
- nms_pre_tensor = torch.tensor(
- cfg.get('nms_pre', -1), device=device, dtype=torch.long)
- mlvl_bboxes = []
- mlvl_scores = []
- mlvl_centerness = []
- for cls_score, bbox_pred, centerness, points in zip(
- cls_scores, bbox_preds, centernesses, mlvl_points):
- assert cls_score.size()[-2:] == bbox_pred.size()[-2:]
- scores = cls_score.permute(0, 2, 3, 1).reshape(
- batch_size, -1, self.cls_out_channels).sigmoid()
- centerness = centerness.permute(0, 2, 3,
- 1).reshape(batch_size,
- -1).sigmoid()
-
- bbox_pred = bbox_pred.permute(0, 2, 3,
- 1).reshape(batch_size, -1, 4)
- # Always keep topk op for dynamic input in onnx
- if nms_pre_tensor > 0 and (torch.onnx.is_in_onnx_export()
- or scores.shape[-2] > nms_pre_tensor):
- from torch import _shape_as_tensor
- # keep shape as tensor and get k
- num_anchor = _shape_as_tensor(scores)[-2].to(device)
- nms_pre = torch.where(nms_pre_tensor < num_anchor,
- nms_pre_tensor, num_anchor)
-
- max_scores, _ = (scores * centerness[..., None]).max(-1)
- _, topk_inds = max_scores.topk(nms_pre)
- points = points[topk_inds, :]
- batch_inds = torch.arange(batch_size).view(
- -1, 1).expand_as(topk_inds).long()
- bbox_pred = bbox_pred[batch_inds, topk_inds, :]
- scores = scores[batch_inds, topk_inds, :]
- centerness = centerness[batch_inds, topk_inds]
-
- bboxes = distance2bbox(points, bbox_pred, max_shape=img_shapes)
- mlvl_bboxes.append(bboxes)
- mlvl_scores.append(scores)
- mlvl_centerness.append(centerness)
-
- batch_mlvl_bboxes = torch.cat(mlvl_bboxes, dim=1)
- if rescale:
- batch_mlvl_bboxes /= batch_mlvl_bboxes.new_tensor(
- scale_factors).unsqueeze(1)
- batch_mlvl_scores = torch.cat(mlvl_scores, dim=1)
- batch_mlvl_centerness = torch.cat(mlvl_centerness, dim=1)
-
- # Set max number of box to be feed into nms in deployment
- deploy_nms_pre = cfg.get('deploy_nms_pre', -1)
- if deploy_nms_pre > 0 and torch.onnx.is_in_onnx_export():
- batch_mlvl_scores, _ = (
- batch_mlvl_scores *
- batch_mlvl_centerness.unsqueeze(2).expand_as(batch_mlvl_scores)
- ).max(-1)
- _, topk_inds = batch_mlvl_scores.topk(deploy_nms_pre)
- batch_inds = torch.arange(batch_mlvl_scores.shape[0]).view(
- -1, 1).expand_as(topk_inds)
- batch_mlvl_scores = batch_mlvl_scores[batch_inds, topk_inds, :]
- batch_mlvl_bboxes = batch_mlvl_bboxes[batch_inds, topk_inds, :]
- batch_mlvl_centerness = batch_mlvl_centerness[batch_inds,
- topk_inds]
-
- # remind that we set FG labels to [0, num_class-1] since mmdet v2.0
- # BG cat_id: num_class
- padding = batch_mlvl_scores.new_zeros(batch_size,
- batch_mlvl_scores.shape[1], 1)
- batch_mlvl_scores = torch.cat([batch_mlvl_scores, padding], dim=-1)
-
- if with_nms:
- det_results = []
- for (mlvl_bboxes, mlvl_scores,
- mlvl_centerness) in zip(batch_mlvl_bboxes, batch_mlvl_scores,
- batch_mlvl_centerness):
- det_bbox, det_label = multiclass_nms(
- mlvl_bboxes,
- mlvl_scores,
- cfg.score_thr,
- cfg.nms,
- cfg.max_per_img,
- score_factors=mlvl_centerness)
- det_results.append(tuple([det_bbox, det_label]))
- else:
- det_results = [
- tuple(mlvl_bs)
- for mlvl_bs in zip(batch_mlvl_bboxes, batch_mlvl_scores,
- batch_mlvl_centerness)
- ]
- return det_results
-
- def _get_points_single(self,
- featmap_size,
- stride,
- dtype,
- device,
- flatten=False):
- """Get points according to feature map sizes."""
- y, x = super()._get_points_single(featmap_size, stride, dtype, device)
- points = torch.stack((x.reshape(-1) * stride, y.reshape(-1) * stride),
- dim=-1) + stride // 2
- return points
-
- def get_targets(self, points, gt_bboxes_list, gt_labels_list):
- """Compute regression, classification and centerness targets for points
- in multiple images.
-
- Args:
- points (list[Tensor]): Points of each fpn level, each has shape
- (num_points, 2).
- gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image,
- each has shape (num_gt, 4).
- gt_labels_list (list[Tensor]): Ground truth labels of each box,
- each has shape (num_gt,).
-
- Returns:
- tuple:
- concat_lvl_labels (list[Tensor]): Labels of each level. \
- concat_lvl_bbox_targets (list[Tensor]): BBox targets of each \
- level.
- """
- assert len(points) == len(self.regress_ranges)
- num_levels = len(points)
- # expand regress ranges to align with points
- expanded_regress_ranges = [
- points[i].new_tensor(self.regress_ranges[i])[None].expand_as(
- points[i]) for i in range(num_levels)
- ]
- # concat all levels points and regress ranges
- concat_regress_ranges = torch.cat(expanded_regress_ranges, dim=0)
- concat_points = torch.cat(points, dim=0)
-
- # the number of points per img, per lvl
- num_points = [center.size(0) for center in points]
-
- # get labels and bbox_targets of each image
- labels_list, bbox_targets_list = multi_apply(
- self._get_target_single,
- gt_bboxes_list,
- gt_labels_list,
- points=concat_points,
- regress_ranges=concat_regress_ranges,
- num_points_per_lvl=num_points)
-
- # split to per img, per level
- labels_list = [labels.split(num_points, 0) for labels in labels_list]
- bbox_targets_list = [
- bbox_targets.split(num_points, 0)
- for bbox_targets in bbox_targets_list
- ]
-
- # concat per level image
- concat_lvl_labels = []
- concat_lvl_bbox_targets = []
- for i in range(num_levels):
- concat_lvl_labels.append(
- torch.cat([labels[i] for labels in labels_list]))
- bbox_targets = torch.cat(
- [bbox_targets[i] for bbox_targets in bbox_targets_list])
- if self.norm_on_bbox:
- bbox_targets = bbox_targets / self.strides[i]
- concat_lvl_bbox_targets.append(bbox_targets)
- return concat_lvl_labels, concat_lvl_bbox_targets
-
- def _get_target_single(self, gt_bboxes, gt_labels, points, regress_ranges,
- num_points_per_lvl):
- """Compute regression and classification targets for a single image."""
- num_points = points.size(0)
- num_gts = gt_labels.size(0)
- if num_gts == 0:
- return gt_labels.new_full((num_points,), self.num_classes), \
- gt_bboxes.new_zeros((num_points, 4))
-
- areas = (gt_bboxes[:, 2] - gt_bboxes[:, 0]) * (
- gt_bboxes[:, 3] - gt_bboxes[:, 1])
- # TODO: figure out why these two are different
- # areas = areas[None].expand(num_points, num_gts)
- areas = areas[None].repeat(num_points, 1)
- regress_ranges = regress_ranges[:, None, :].expand(
- num_points, num_gts, 2)
- gt_bboxes = gt_bboxes[None].expand(num_points, num_gts, 4)
- xs, ys = points[:, 0], points[:, 1]
- xs = xs[:, None].expand(num_points, num_gts)
- ys = ys[:, None].expand(num_points, num_gts)
-
- left = xs - gt_bboxes[..., 0]
- right = gt_bboxes[..., 2] - xs
- top = ys - gt_bboxes[..., 1]
- bottom = gt_bboxes[..., 3] - ys
- bbox_targets = torch.stack((left, top, right, bottom), -1)
-
- if self.center_sampling:
- # condition1: inside a `center bbox`
- radius = self.center_sample_radius
- center_xs = (gt_bboxes[..., 0] + gt_bboxes[..., 2]) / 2
- center_ys = (gt_bboxes[..., 1] + gt_bboxes[..., 3]) / 2
- center_gts = torch.zeros_like(gt_bboxes)
- stride = center_xs.new_zeros(center_xs.shape)
-
- # project the points on current lvl back to the `original` sizes
- lvl_begin = 0
- for lvl_idx, num_points_lvl in enumerate(num_points_per_lvl):
- lvl_end = lvl_begin + num_points_lvl
- stride[lvl_begin:lvl_end] = self.strides[lvl_idx] * radius
- lvl_begin = lvl_end
-
- x_mins = center_xs - stride
- y_mins = center_ys - stride
- x_maxs = center_xs + stride
- y_maxs = center_ys + stride
- center_gts[..., 0] = torch.where(x_mins > gt_bboxes[..., 0],
- x_mins, gt_bboxes[..., 0])
- center_gts[..., 1] = torch.where(y_mins > gt_bboxes[..., 1],
- y_mins, gt_bboxes[..., 1])
- center_gts[..., 2] = torch.where(x_maxs > gt_bboxes[..., 2],
- gt_bboxes[..., 2], x_maxs)
- center_gts[..., 3] = torch.where(y_maxs > gt_bboxes[..., 3],
- gt_bboxes[..., 3], y_maxs)
-
- cb_dist_left = xs - center_gts[..., 0]
- cb_dist_right = center_gts[..., 2] - xs
- cb_dist_top = ys - center_gts[..., 1]
- cb_dist_bottom = center_gts[..., 3] - ys
- center_bbox = torch.stack(
- (cb_dist_left, cb_dist_top, cb_dist_right, cb_dist_bottom), -1)
- inside_gt_bbox_mask = center_bbox.min(-1)[0] > 0
- else:
- # condition1: inside a gt bbox
- inside_gt_bbox_mask = bbox_targets.min(-1)[0] > 0
-
- # condition2: limit the regression range for each location
- max_regress_distance = bbox_targets.max(-1)[0]
- inside_regress_range = (
- (max_regress_distance >= regress_ranges[..., 0])
- & (max_regress_distance <= regress_ranges[..., 1]))
-
- # if there are still more than one objects for a location,
- # we choose the one with minimal area
- areas[inside_gt_bbox_mask == 0] = INF
- areas[inside_regress_range == 0] = INF
- min_area, min_area_inds = areas.min(dim=1)
-
- labels = gt_labels[min_area_inds]
- labels[min_area == INF] = self.num_classes # set as BG
- bbox_targets = bbox_targets[range(num_points), min_area_inds]
-
- return labels, bbox_targets
-
- def centerness_target(self, pos_bbox_targets):
- """Compute centerness targets.
-
- Args:
- pos_bbox_targets (Tensor): BBox targets of positive bboxes in shape
- (num_pos, 4)
-
- Returns:
- Tensor: Centerness target.
- """
- # only calculate pos centerness targets, otherwise there may be nan
- left_right = pos_bbox_targets[:, [0, 2]]
- top_bottom = pos_bbox_targets[:, [1, 3]]
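-        # centerness = sqrt((min(l, r) / max(l, r)) * (min(t, b) / max(t, b)))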
- centerness_targets = (
- left_right.min(dim=-1)[0] / left_right.max(dim=-1)[0]) * (
- top_bottom.min(dim=-1)[0] / top_bottom.max(dim=-1)[0])
- return torch.sqrt(centerness_targets)
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/backbones/mobilenet_v2.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/backbones/mobilenet_v2.py
deleted file mode 100644
index 5820b4b13c0019d67801c5f924650e928acca72e..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/backbones/mobilenet_v2.py
+++ /dev/null
@@ -1,180 +0,0 @@
-import logging
-
-import torch.nn as nn
-from mmcv.cnn import ConvModule, constant_init, kaiming_init
-from mmcv.runner import load_checkpoint
-from torch.nn.modules.batchnorm import _BatchNorm
-
-from ..builder import BACKBONES
-from ..utils import InvertedResidual, make_divisible
-
-
-@BACKBONES.register_module()
-class MobileNetV2(nn.Module):
- """MobileNetV2 backbone.
-
- Args:
- widen_factor (float): Width multiplier, multiply number of
- channels in each layer by this amount. Default: 1.0.
- strides (Sequence[int], optional): Strides of the first block of each
- layer. If not specified, default config in ``arch_setting`` will
- be used.
- dilations (Sequence[int]): Dilation of each layer.
- out_indices (None or Sequence[int]): Output from which stages.
- Default: (7, ).
- frozen_stages (int): Stages to be frozen (all param fixed).
- Default: -1, which means not freezing any parameters.
- conv_cfg (dict): Config dict for convolution layer.
- Default: None, which means using conv2d.
- norm_cfg (dict): Config dict for normalization layer.
- Default: dict(type='BN').
- act_cfg (dict): Config dict for activation layer.
- Default: dict(type='ReLU6').
- norm_eval (bool): Whether to set norm layers to eval mode, namely,
- freeze running stats (mean and var). Note: Effect on Batch Norm
- and its variants only. Default: False.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed. Default: False.
- """
-
- # Parameters to build layers. 3 parameters are needed to construct a
- # layer, from left to right: expand_ratio, channel, num_blocks.
- arch_settings = [[1, 16, 1], [6, 24, 2], [6, 32, 3], [6, 64, 4],
- [6, 96, 3], [6, 160, 3], [6, 320, 1]]
-
- def __init__(self,
- widen_factor=1.,
- strides=(1, 2, 2, 2, 1, 2, 1),
- dilations=(1, 1, 1, 1, 1, 1, 1),
- out_indices=(1, 2, 4, 6),
- frozen_stages=-1,
- conv_cfg=None,
- norm_cfg=dict(type='BN'),
- act_cfg=dict(type='ReLU6'),
- norm_eval=False,
- with_cp=False):
- super(MobileNetV2, self).__init__()
- self.widen_factor = widen_factor
- self.strides = strides
- self.dilations = dilations
- assert len(strides) == len(dilations) == len(self.arch_settings)
- self.out_indices = out_indices
- for index in out_indices:
- if index not in range(0, 7):
-                raise ValueError('the item in out_indices must be in '
-                                 f'range(0, 7). But received {index}')
-
- if frozen_stages not in range(-1, 7):
- raise ValueError('frozen_stages must be in range(-1, 7). '
- f'But received {frozen_stages}')
- self.out_indices = out_indices
- self.frozen_stages = frozen_stages
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- self.act_cfg = act_cfg
- self.norm_eval = norm_eval
- self.with_cp = with_cp
-
- self.in_channels = make_divisible(32 * widen_factor, 8)
-
- self.conv1 = ConvModule(
- in_channels=3,
- out_channels=self.in_channels,
- kernel_size=3,
- stride=2,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
-
- self.layers = []
-
- for i, layer_cfg in enumerate(self.arch_settings):
- expand_ratio, channel, num_blocks = layer_cfg
- stride = self.strides[i]
- dilation = self.dilations[i]
- out_channels = make_divisible(channel * widen_factor, 8)
- inverted_res_layer = self.make_layer(
- out_channels=out_channels,
- num_blocks=num_blocks,
- stride=stride,
- dilation=dilation,
- expand_ratio=expand_ratio)
- layer_name = f'layer{i + 1}'
- self.add_module(layer_name, inverted_res_layer)
- self.layers.append(layer_name)
-
- def make_layer(self, out_channels, num_blocks, stride, dilation,
- expand_ratio):
- """Stack InvertedResidual blocks to build a layer for MobileNetV2.
-
- Args:
- out_channels (int): out_channels of block.
- num_blocks (int): Number of blocks.
- stride (int): Stride of the first block.
- dilation (int): Dilation of the first block.
- expand_ratio (int): Expand the number of channels of the
- hidden layer in InvertedResidual by this ratio.
- """
- layers = []
- for i in range(num_blocks):
- layers.append(
- InvertedResidual(
- self.in_channels,
- out_channels,
- stride if i == 0 else 1,
- expand_ratio=expand_ratio,
- dilation=dilation if i == 0 else 1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg,
- with_cp=self.with_cp))
- self.in_channels = out_channels
-
- return nn.Sequential(*layers)
-
- def init_weights(self, pretrained=None):
- if isinstance(pretrained, str):
- logger = logging.getLogger()
- load_checkpoint(self, pretrained, strict=False, logger=logger)
- elif pretrained is None:
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- kaiming_init(m)
- elif isinstance(m, (_BatchNorm, nn.GroupNorm)):
- constant_init(m, 1)
- else:
- raise TypeError('pretrained must be a str or None')
-
- def forward(self, x):
- x = self.conv1(x)
-
- outs = []
- for i, layer_name in enumerate(self.layers):
- layer = getattr(self, layer_name)
- x = layer(x)
- if i in self.out_indices:
- outs.append(x)
-
- if len(outs) == 1:
- return outs[0]
- else:
- return tuple(outs)
-
- def _freeze_stages(self):
- if self.frozen_stages >= 0:
- for param in self.conv1.parameters():
- param.requires_grad = False
- for i in range(1, self.frozen_stages + 1):
- layer = getattr(self, f'layer{i}')
- layer.eval()
- for param in layer.parameters():
- param.requires_grad = False
-
- def train(self, mode=True):
- super(MobileNetV2, self).train(mode)
- self._freeze_stages()
- if mode and self.norm_eval:
- for m in self.modules():
- if isinstance(m, _BatchNorm):
- m.eval()
diff --git a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/CHANGELOG.md b/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/CHANGELOG.md
deleted file mode 100644
index 24fc214df236b40efead4b1585b01632d9658e9b..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/CHANGELOG.md
+++ /dev/null
@@ -1,23 +0,0 @@
-# Changelog
-
-All notable changes to this project will be documented in this file.
-
-The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
-
-## [0.0.2a] - TBD
-
-Improved demo, fixed top p (thanks @jnordberg).
-
-Compressor tanh on output to avoid clipping with some styles (especially piano).
-Now repeating the conditioning periodically if it is too short.
-
-More options when launching Gradio app locally (thanks @ashleykleynhans).
-
-Testing out PyTorch 2.0 memory efficient attention.
-
-Added extended generation (infinite length) by slowly moving the windows.
-Note that other implementations exist: https://github.com/camenduru/MusicGen-colab.
-
-## [0.0.1] - 2023-06-09
-
-Initial release, with model evaluation only.
diff --git a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/utils/__init__.py b/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Hitmanny/BigGAN-text-to-image/app.py b/spaces/Hitmanny/BigGAN-text-to-image/app.py
deleted file mode 100644
index f22f259d84dd634dd567e2f923bad84afd6bae16..0000000000000000000000000000000000000000
--- a/spaces/Hitmanny/BigGAN-text-to-image/app.py
+++ /dev/null
@@ -1,9 +0,0 @@
-import gradio as gr
-description = "BigGAN text-to-image demo."
-title = "BigGAN ImageNet"
-interface = gr.Interface.load("huggingface/osanseviero/BigGAN-deep-128",
- description=description,
- title = title,
- examples=[["american robin"]]
-)
-interface.launch()
\ No newline at end of file
diff --git a/spaces/ICML2022/OFA/fairseq/docs/hydra_integration.md b/spaces/ICML2022/OFA/fairseq/docs/hydra_integration.md
deleted file mode 100644
index 6a15298382a6a16dfc4c5a4a812ea1cd0477ed52..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/docs/hydra_integration.md
+++ /dev/null
@@ -1,284 +0,0 @@
-## Hydra
-
-[Hydra](https://github.com/facebookresearch/hydra) is an open-source Python
-framework that simplifies the development of research and other complex
-applications. The key feature is the ability to dynamically create a
-hierarchical configuration by composition and override it through config files
-and the command line. The name Hydra comes from its ability to run multiple
-similar jobs - much like a Hydra with multiple heads.
-
-## Motivation
-
-Until recently, all components in fairseq were configured through a shared
-`args` namespace that was created at application startup. Components declared
-their own `add_args` method to update the argparse parser, hoping that the names
-would not clash with arguments from other components. While this model works for
-smaller applications, as fairseq grew and became integrated into other
-applications, this became problematic. In order to determine how to configure
-each component, one needed to a) examine what args were added by this component,
-and b) read the code to figure out what shared arguments it is using that were
-added in other places. Reproducing models involved sharing commands that often
-contained dozens of command line switches.
-
-The model described above is still supported by fairseq for backward
-compatibility, but will be deprecated some time in the future.
-
-New components in fairseq should now create a dataclass that encapsulates all
-parameters required to configure this component. The dataclass is registered
-along with the component, and fairseq takes care of constructing and providing
-this configuration object to the component's constructor. Note that sharing
-parameters can optionally still work, but one has to explicitly point to the
-"source of truth" (see inheritance example below). These changes make components
-in fairseq more independent and re-usable by other applications: all that is
-needed to create a component is to initialize its dataclass and overwrite some
-of the defaults.
-
-While configuring fairseq through command line (using either the legacy argparse
-based or the new Hydra based entry points) is still fully supported, you can now
-take advantage of configuring fairseq completely or piece-by-piece through
-hierarchical YAML configuration files. These files can also be shipped as
-examples that others can use to run an identically configured job.
-
-Additionally, Hydra has a rich and growing [library of
-plugins](https://github.com/facebookresearch/hydra/tree/master/plugins) that
-provide functionality such as hyperparameter sweeping (including using bayesian
-optimization through the [Ax](https://github.com/facebook/Ax) library), job
-launching across various platforms, and more.
-
-## Creating or migrating components
-
-In general, each new (or updated) component should provide a companion
-[dataclass](https://www.python.org/dev/peps/pep-0557/). These dataclass are
-typically located in the same file as the component and are passed as arguments
-to the `register_*()` functions. Top-level configs that should be present in
-every fairseq application are placed in the
-[global](fairseq/dataclass/configs.py) config file and added to the
-`FairseqConfig` object.
-
-Each dataclass is a plain-old-data object, similar to a `NamedTuple`. These
-classes are decorated with a `@dataclass` decorator, and typically inherit from
-`FairseqDataclass` (which adds some functionality for backward compatibility).
-Each field must have a type, and generally has metadata (such as a help string)
-and a default value. Only primitive types or other config objects are allowed as
-data types for each field.
-
-#### Example:
-
-```python
-from dataclasses import dataclass, field
-from fairseq.dataclass import FairseqDataclass
-
-@dataclass
-class InteractiveConfig(FairseqDataclass):
- buffer_size: int = field(
- default=0,
- metadata={
- "help": "read this many sentences into a buffer before processing them"
- },
- )
- input: str = field(
- default="-",
- metadata={"help": "file to read from; use - for stdin"},
- )
-```
-
-### Inheriting values
-
-Some components require sharing a value. For example, a learning rate scheduler
-and an optimizer may both need to know the initial learning rate value. One can
-declare a field that, by default, will inherit its value from another config
-node in the same hierarchy:
-
-```python
-@dataclass
-class FairseqAdamConfig(FairseqDataclass):
- ...
- lr: List[float] = II("optimization.lr")
- ...
-```
-
-`II("optimization.lr")` is syntactic sugar for `"${optimization.lr}"`, which is
-the value one can use in a YAML config file or through command line to achieve
-the same effect. Note that this assumes that there is an "optimization" config
-object in the root config and it has a field called "lr".
-
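-As a quick illustration of how this interpolation behaves, here is a minimal sketch using `omegaconf` directly, outside of any fairseq component (the `OptimizationConfig` and `Root` names are made up for the example):
-
-```python
-from dataclasses import dataclass, field
-from typing import List
-
-from omegaconf import II, OmegaConf
-
-
-@dataclass
-class OptimizationConfig:
-    lr: List[float] = field(default_factory=lambda: [0.25])
-
-
-@dataclass
-class Root:
-    optimization: OptimizationConfig = field(default_factory=OptimizationConfig)
-    # inherits its value from optimization.lr unless explicitly overridden
-    scheduler_lr: List[float] = II("optimization.lr")
-
-
-cfg = OmegaConf.structured(Root)
-print(cfg.scheduler_lr)   # [0.25], resolved from optimization.lr
-cfg.optimization.lr = [0.1]
-print(cfg.scheduler_lr)   # [0.1], still pointing at the "source of truth"
-```
-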
-### Tasks and Models
-
-Creating Tasks and Models works the same as before, except that legacy
-implementations now inherit from `LegacyFairseq*` base classes, while new
-components inherit from `FairseqTask` and `FairseqModel` and provide a dataclass
-to the `register_*()` functions.
-
-#### Task example:
-
-```python
-@dataclass
-class LanguageModelingConfig(FairseqDataclass):
- data: Optional[str] = field(
- default=None, metadata={"help": "path to data directory"}
- )
- ...
-
-@register_task("language_modeling", dataclass=LanguageModelingConfig)
-class LanguageModelingTask(FairseqTask):
- ...
- @classmethod
- def setup_task(cls, cfg: LanguageModelingConfig):
- ...
-```
-
-#### Model example:
-
-```python
-@dataclass
-class TransformerLanguageModelConfig(FairseqDataclass):
- activation_fn: ChoiceEnum(utils.get_available_activation_fns()) = field(
- default="relu", metadata={"help": "activation function to use"}
- )
- dropout: float = field(default=0.1, metadata={"help": "dropout probability"})
- ...
-
-@register_model("transformer_lm", dataclass=TransformerLanguageModelConfig)
-class TransformerLanguageModel(FairseqLanguageModel):
- ...
- @classmethod
- def build_model(cls, cfg: TransformerLanguageModelConfig, task: FairseqTask):
- ...
-```
-
-### Other components
-
-Other components work as before, but they now take their configuration dataclass
-as the only constructor argument:
-
-```python
-@dataclass
-class MosesTokenizerConfig(FairseqDataclass):
- source_lang: str = field(default="en", metadata={"help": "source language"})
- ...
-
-@register_tokenizer("moses", dataclass=MosesTokenizerConfig)
-class MosesTokenizer(object):
- def __init__(self, cfg: MosesTokenizerConfig):
- ...
-```
-
-Note that if you are adding a new registry for a new set of components, you need
-to add it to the `FairseqConfig` object in `fairseq/dataclass/configs.py`:
-
-```python
-@dataclass
-class FairseqConfig(object):
- ...
- my_new_registry: Any = None
-```
-
-## Training with `fairseq-hydra-train`
-
-To fully take advantage of configuration flexibility offered by Hydra, you may
-want to train new models using the `fairseq-hydra-train` entry point. Legacy CLI
-tools such as `fairseq-train` will remain supported for the foreseeable future
-but will be deprecated eventually.
-
-On startup, Hydra will create a configuration object that contains a hierarchy
-of all the necessary dataclasses populated with their default values in the
-code. The default values are overwritten by values found in YAML files in the
-`fairseq/config` directory (which currently sets minimal defaults) and then
-further overwritten by values provided through command line arguments.
-
-Some of the most common use cases are shown below:
-
-### 1. Override default values through command line:
-
-```shell script
-$ fairseq-hydra-train \
- distributed_training.distributed_world_size=1 \
- dataset.batch_size=2 \
- task.data=data-bin \
- model=transformer_lm/transformer_lm_gpt \
- task=language_modeling \
- optimization.max_update=5000
-```
-
-Note that along with explicitly providing values for parameters such as
-`dataset.batch_size`, this also tells Hydra to overlay configuration found in
-`fairseq/config/model/transformer_lm/transformer_lm_gpt.yaml` over the default
-values in the dataclass. If you want to train a model without specifying a
-particular architecture you can simply specify `model=transformer_lm`. This only
-works for migrated tasks and models.
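-
-As a sketch (reusing the options from the example above), dropping the sub-path
-presumably falls back to the defaults of the model's dataclass:
-
-```shell script
-$ fairseq-hydra-train \
-    dataset.batch_size=2 \
-    task.data=data-bin \
-    task=language_modeling \
-    model=transformer_lm \
-    optimization.max_update=5000
-```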
-
-### 2. Replace bundled configs with an external config:
-
-```shell script
-$ fairseq-hydra-train \
- --config-dir /path/to/external/configs \
- --config-name wiki103
-```
-
-where `/path/to/external/configs/wiki103.yaml` contains:
-
-```yaml
-# @package _group_
-
-model:
- _name: transformer_lm
-distributed_training:
- distributed_world_size: 1
-dataset:
- batch_size: 2
-task:
- _name: language_modeling
- data: /path/to/data
- add_bos_token: false
- max_target_positions: 1024
-optimization:
- max_update: 50000
- lr: [ 0.25 ]
-criterion: cross_entropy
-optimizer: adam
-lr_scheduler:
- _name: cosine
-```
-
-Note that the bundled configs from the `fairseq/config` directory are not used
-here; however, the defaults from each dataclass will still be used (unless
-overwritten by your external config).
-
-Additionally, you can choose to break up your configs by creating a directory
-structure in the same location as your main config file, with the names of the
-top-level fields (such as "model", "dataset", etc), and placing config files
-with meaningful names that would populate that specific section of your
-top-level config file (for example, you might have
-`model/small_transformer_lm.yaml`, `model/big_transformer_lm.yaml`, etc). You
-can then specify the correct configuration via command line, defaults in the
-main config, or even launch all of them as a sweep (see Hydra documentation on
-how to do this).
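-
-As an illustrative sketch, reusing the file names mentioned above, such a
-layout might look like:
-
-```
-.
-+-- wiki103.yaml
-+-- model
-|  +-- small_transformer_lm.yaml
-|  +-- big_transformer_lm.yaml
-```
-
-and a particular model config could then be selected from the command line:
-
-```shell script
-$ fairseq-hydra-train \
-    --config-dir /path/to/external/configs \
-    --config-name wiki103 \
-    model=big_transformer_lm
-```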
-
-### 3. Add an external config directory to Hydra search path:
-
-This allows combining default configuration (including using any bundled config
-files), while specifying your own config files for some parts of the
-configuration.
-
-```shell script
-$ fairseq-hydra-train \
- distributed_training.distributed_world_size=1 \
- dataset.batch_size=2 \
- task.data=/path/to/data/ \
- model=transformer_lm/2_layers \
- task=language_modeling \
- optimization.max_update=5000 \
- --config-dir /path/to/external/configs
-```
-
-where `/path/to/external/configs` has the following structure:
-```
-.
-+-- model
-| +-- transformer_lm
-| | +-- 2_layers.yaml
-```
-
-and `2_layers.yaml` contains a copy of `transformer_lm_gpt.yaml` but with
-`decoder_layers` set to 2. You can add other configs to configure other
-components as well.
diff --git a/spaces/Ikaros521/so-vits-svc-4.0-ikaros/inference_main.py b/spaces/Ikaros521/so-vits-svc-4.0-ikaros/inference_main.py
deleted file mode 100644
index 80a470ea9146f1f75e785411dd5d3b6fade64b70..0000000000000000000000000000000000000000
--- a/spaces/Ikaros521/so-vits-svc-4.0-ikaros/inference_main.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import io
-import logging
-import time
-from pathlib import Path
-
-import librosa
-import matplotlib.pyplot as plt
-import numpy as np
-import soundfile
-
-from inference import infer_tool
-from inference import slicer
-from inference.infer_tool import Svc
-
-logging.getLogger('numba').setLevel(logging.WARNING)
-chunks_dict = infer_tool.read_temp("inference/chunks_temp.json")
-
-
-
-def main():
- import argparse
-
- parser = argparse.ArgumentParser(description='sovits4 inference')
-
-    # Required settings
-    parser.add_argument('-m', '--model_path', type=str, default="/Volumes/Extend/下载/G_20800.pth", help='model path')
-    parser.add_argument('-c', '--config_path', type=str, default="configs/config.json", help='config file path')
-    parser.add_argument('-n', '--clean_names', type=str, nargs='+', default=["君の知らない物語-src"], help='list of wav file names placed under the raw folder')
-    parser.add_argument('-t', '--trans', type=int, nargs='+', default=[0], help='pitch shift, positive or negative (in semitones)')
-    parser.add_argument('-s', '--spk_list', type=str, nargs='+', default=['nyaru'], help='target speaker name(s) for synthesis')
-
-    # Optional settings
-    parser.add_argument('-a', '--auto_predict_f0', action='store_true', default=False,
-                        help='automatically predict pitch for voice conversion; do not enable this when converting singing or it will go badly out of tune')
-    parser.add_argument('-cm', '--cluster_model_path', type=str, default="/Volumes/Extend/下载/so-vits-svc-4.0/logs/44k/kmeans_10000.pt", help='path to the clustering model; any value is fine if no clustering model was trained')
-    parser.add_argument('-cr', '--cluster_infer_ratio', type=float, default=1, help='clustering ratio in the range 0-1; set to 0 if no clustering model was trained')
-
-    # Settings that usually do not need changing
-    parser.add_argument('-sd', '--slice_db', type=int, default=-40, help='default -40; use -30 for noisy audio, -50 for dry vocals that keep breath sounds')
-    parser.add_argument('-d', '--device', type=str, default=None, help='inference device; None selects cpu or gpu automatically')
-    parser.add_argument('-ns', '--noice_scale', type=float, default=0.4, help='noise scale; affects articulation and audio quality, somewhat unpredictable')
-    parser.add_argument('-p', '--pad_seconds', type=float, default=0.5, help='seconds of padding for the inference audio; the start and end can have artifacts for unknown reasons, which a short silence pad avoids')
-    parser.add_argument('-wf', '--wav_format', type=str, default='flac', help='audio output format')
-
- args = parser.parse_args()
-
- svc_model = Svc(args.model_path, args.config_path, args.device, args.cluster_model_path)
- infer_tool.mkdir(["raw", "results"])
- clean_names = args.clean_names
- trans = args.trans
- spk_list = args.spk_list
- slice_db = args.slice_db
- wav_format = args.wav_format
- auto_predict_f0 = args.auto_predict_f0
- cluster_infer_ratio = args.cluster_infer_ratio
- noice_scale = args.noice_scale
- pad_seconds = args.pad_seconds
-
- infer_tool.fill_a_to_b(trans, clean_names)
- for clean_name, tran in zip(clean_names, trans):
- raw_audio_path = f"raw/{clean_name}"
- if "." not in raw_audio_path:
- raw_audio_path += ".wav"
- infer_tool.format_wav(raw_audio_path)
- wav_path = Path(raw_audio_path).with_suffix('.wav')
- chunks = slicer.cut(wav_path, db_thresh=slice_db)
- audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks)
-
- for spk in spk_list:
- audio = []
- for (slice_tag, data) in audio_data:
- print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======')
-                # pad
- pad_len = int(audio_sr * pad_seconds)
- data = np.concatenate([np.zeros([pad_len]), data, np.zeros([pad_len])])
- length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample))
- raw_path = io.BytesIO()
- soundfile.write(raw_path, data, audio_sr, format="wav")
- raw_path.seek(0)
- if slice_tag:
- print('jump empty segment')
- _audio = np.zeros(length)
- else:
- out_audio, out_sr = svc_model.infer(spk, tran, raw_path,
- cluster_infer_ratio=cluster_infer_ratio,
- auto_predict_f0=auto_predict_f0,
- noice_scale=noice_scale
- )
- _audio = out_audio.cpu().numpy()
-
- pad_len = int(svc_model.target_sample * pad_seconds)
- _audio = _audio[pad_len:-pad_len]
- audio.extend(list(_audio))
- key = "auto" if auto_predict_f0 else f"{tran}key"
- cluster_name = "" if cluster_infer_ratio == 0 else f"_{cluster_infer_ratio}"
- res_path = f'./results/old——{clean_name}_{key}_{spk}{cluster_name}.{wav_format}'
- soundfile.write(res_path, audio, svc_model.target_sample, format=wav_format)
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/Jamkonams/AutoGPT/tests/test_json_parser.py b/spaces/Jamkonams/AutoGPT/tests/test_json_parser.py
deleted file mode 100644
index 41c90a6f66c0b0468f1443de80033cc4f268eca0..0000000000000000000000000000000000000000
--- a/spaces/Jamkonams/AutoGPT/tests/test_json_parser.py
+++ /dev/null
@@ -1,111 +0,0 @@
-import unittest
-
-import tests.context
-from autogpt.json_utils.json_fix_llm import fix_and_parse_json
-
-
-class TestParseJson(unittest.TestCase):
- def test_valid_json(self):
- # Test that a valid JSON string is parsed correctly
- json_str = '{"name": "John", "age": 30, "city": "New York"}'
- obj = fix_and_parse_json(json_str)
- self.assertEqual(obj, {"name": "John", "age": 30, "city": "New York"})
-
- def test_invalid_json_minor(self):
-        # Test that a mildly invalid JSON string raises an error when try_to_fix_with_gpt is False
- json_str = '{"name": "John", "age": 30, "city": "New York",}'
- with self.assertRaises(Exception):
- fix_and_parse_json(json_str, try_to_fix_with_gpt=False)
-
- def test_invalid_json_major_with_gpt(self):
- # Test that an invalid JSON string raises an error when try_to_fix_with_gpt is False
- json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END'
- with self.assertRaises(Exception):
- fix_and_parse_json(json_str, try_to_fix_with_gpt=False)
-
- def test_invalid_json_major_without_gpt(self):
- # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False
- json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END'
- # Assert that this raises an exception:
- with self.assertRaises(Exception):
- fix_and_parse_json(json_str, try_to_fix_with_gpt=False)
-
-    def test_invalid_json_leading_sentence_with_gpt(self):
-        # Test that JSON preceded by a leading sentence is still parsed correctly without GPT fixing
- json_str = """I suggest we start by browsing the repository to find any issues that we can fix.
-
-{
- "command": {
- "name": "browse_website",
- "args":{
- "url": "https://github.com/Torantulino/Auto-GPT"
- }
- },
- "thoughts":
- {
- "text": "I suggest we start browsing the repository to find any issues that we can fix.",
- "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.",
- "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes",
- "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.",
- "speak": "I will start browsing the repository to find any issues we can fix."
- }
-}"""
- good_obj = {
- "command": {
- "name": "browse_website",
- "args": {"url": "https://github.com/Torantulino/Auto-GPT"},
- },
- "thoughts": {
- "text": "I suggest we start browsing the repository to find any issues that we can fix.",
- "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.",
- "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes",
- "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.",
- "speak": "I will start browsing the repository to find any issues we can fix.",
- },
- }
-        # Assert that the JSON is parsed into the expected object:
- self.assertEqual(
- fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj
- )
-
-    def test_invalid_json_leading_sentence_with_gpt_2(self):
-        # Test that JSON preceded by a leading sentence is still parsed correctly without GPT fixing
- json_str = """I will first need to browse the repository (https://github.com/Torantulino/Auto-GPT) and identify any potential bugs that need fixing. I will use the "browse_website" command for this.
-
-{
- "command": {
- "name": "browse_website",
- "args":{
- "url": "https://github.com/Torantulino/Auto-GPT"
- }
- },
- "thoughts":
- {
- "text": "Browsing the repository to identify potential bugs",
- "reasoning": "Before fixing bugs, I need to identify what needs fixing. I will use the 'browse_website' command to analyze the repository.",
- "plan": "- Analyze the repository for potential bugs and areas of improvement",
- "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.",
- "speak": "I am browsing the repository to identify potential bugs."
- }
-}"""
- good_obj = {
- "command": {
- "name": "browse_website",
- "args": {"url": "https://github.com/Torantulino/Auto-GPT"},
- },
- "thoughts": {
- "text": "Browsing the repository to identify potential bugs",
- "reasoning": "Before fixing bugs, I need to identify what needs fixing. I will use the 'browse_website' command to analyze the repository.",
- "plan": "- Analyze the repository for potential bugs and areas of improvement",
- "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.",
- "speak": "I am browsing the repository to identify potential bugs.",
- },
- }
-        # Assert that the JSON is parsed into the expected object:
- self.assertEqual(
- fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj
- )
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/Jeff2323/ai-comic-factory/src/lib/replaceTextInSpeechBubbles.ts b/spaces/Jeff2323/ai-comic-factory/src/lib/replaceTextInSpeechBubbles.ts
deleted file mode 100644
index 8566a2f8068feef008348ae7f6d6f06e2d2b1628..0000000000000000000000000000000000000000
--- a/spaces/Jeff2323/ai-comic-factory/src/lib/replaceTextInSpeechBubbles.ts
+++ /dev/null
@@ -1,98 +0,0 @@
-"use client"
-
-import { createWorker } from "tesseract.js"
-import { loadImageToCanvas } from "./loadImageToCanvas";
-
-export async function replaceTextInSpeechBubbles(image: string, customText: string) {
- console.log('creating OCR worker to find bubbles inside', image);
-
- const worker = await createWorker({
- logger: (info) => {
- console.log(info)
- },
- });
-
- const canvas = await loadImageToCanvas(image)
-
- const ctx = canvas.getContext('2d')!;
-
- try {
- await worker.load();
- await worker.loadLanguage('eng');
- await worker.initialize('eng');
-
- const { data } = await worker.recognize(canvas);
- const lines = data.lines || [];
-
- // Draw the lines on the image
- ctx.fillStyle = "white";
-
- lines.forEach((line) => {
- ctx.fillRect(line.bbox.x0, line.bbox.y0, line.bbox.x1 - line.bbox.x0, line.bbox.y1 - line.bbox.y0);
-
- const bubbleWidth = line.bbox.x1 - line.bbox.x0;
- const bubbleHeight = line.bbox.y1 - line.bbox.y0;
- let fontSize = 18;
- ctx.font = `${fontSize}px Arial`;
-
- /*
- while (
- ctx.measureText(customText).width > bubbleWidth || fontSize * 1.2 // line height
- > bubbleHeight) {
- fontSize -= 1;
- ctx.font = `${fontSize}px Arial`;
- }
-
- const lines = wrapText(ctx, customText, line.bbox.x0, line.bbox.y0, bubbleWidth, fontSize);
-
- ctx.fillStyle = "black";
- lines.forEach((text, i) => {
- ctx.fillText(text, line.bbox.x0, line.bbox.y0 + (i * fontSize * 1.2));
- });
- */
- })
-
- await worker.terminate();
-
- // Convert the Canvas to image data
- const imgAsDataURL = canvas.toDataURL('image/png');
-
- if (typeof window !== "undefined") {
- const foo = (window as any)
- if (!foo.debugJujul) {
- foo.debugJujul = []
- }
- foo.debugJujul.push({
- lines
- })
- }
- console.log("lines:", lines)
-
- return imgAsDataURL;
-
- } catch (err) {
- console.error(err);
- }
- return "";
-}
-
-function wrapText(context: CanvasRenderingContext2D, text: string, x: number, y: number, maxWidth: number, lineHeight: number) {
- const words = text.split(' ');
- let line = '';
- const lines = [];
-
- for(let n = 0; n < words.length; n++) {
- let testLine = line + words[n] + ' ';
- let metrics = context.measureText(testLine);
- let testWidth = metrics.width;
- if (testWidth > maxWidth && n > 0) {
- lines.push(line);
- line = words[n] + ' ';
- }
- else {
- line = testLine;
- }
- }
- lines.push(line);
- return lines;
-}
\ No newline at end of file
diff --git a/spaces/JeffJing/ZookChatBot/steamship/invocable/plugin_service.py b/spaces/JeffJing/ZookChatBot/steamship/invocable/plugin_service.py
deleted file mode 100644
index 3f5a67a377cf358cec51a1fb769807522163db85..0000000000000000000000000000000000000000
--- a/spaces/JeffJing/ZookChatBot/steamship/invocable/plugin_service.py
+++ /dev/null
@@ -1,119 +0,0 @@
-from __future__ import annotations
-
-import logging
-from abc import ABC, abstractmethod
-from typing import Generic, Type, TypeVar, Union
-
-# Note!
-# =====
-#
-# The files in this package are for Plugin Implementors.
-# If you are using the Steamship Client, you probably are looking for either steamship.client or steamship.data
-#
-from steamship.invocable import Invocable, InvocableResponse
-from steamship.plugin.inputs.train_plugin_input import TrainPluginInput
-from steamship.plugin.inputs.training_parameter_plugin_input import TrainingParameterPluginInput
-from steamship.plugin.outputs.train_plugin_output import TrainPluginOutput
-from steamship.plugin.outputs.training_parameter_plugin_output import TrainingParameterPluginOutput
-from steamship.plugin.request import PluginRequest
-from steamship.plugin.trainable_model import TrainableModel
-
-IN = TypeVar("IN")
-OUT = TypeVar("OUT")
-
-
-class PluginService(Invocable, Generic[IN, OUT], ABC):
- """The Abstract Base Class of a Steamship Plugin.
-
- All Steamship Plugins implement the operation:
-
- - run(PluginRequest[T]) -> Response[U]
-
- Many plugins are effectively stateless. This run operation defines their entire capability.
- Examples of such stateless plugins are:
- - File Import Plugin
- - Export Plugin
-
- Other plugins have state but in a very controlled way:
- - they can be trained,
- - this trainable process produces a "model",
- - that model acts as the state on which the `run` method is conditioned
-
- This model is stored in the Steamship Workspace that owns the Plugin Instance, and access to it is provided by the
- hosting environment that runs the model.
- - TODO(ted) Document this process.
-
- These stateful plugins are called "Trainable Plugins," and they must implement the following additional methods:
-
- - get_training_parameters(PluginRequest[TrainingParameterInput]) -> Response[TrainingParameterOutput]
- - train(PluginRequest[TrainPluginInput]) -> Response[TrainPluginOutput]
-
- """
-
- @abstractmethod
- def run(self, request: PluginRequest[IN]) -> Union[OUT, InvocableResponse[OUT]]:
- """Runs the core operation implemented by this plugin: import, export, blockify, tag, etc.
-
- This is the method that a Steamship Plugin implements to perform its main work.
- """
- pass
-
-
-class TrainablePluginService(PluginService, Generic[IN, OUT], ABC):
- @abstractmethod
- def model_cls(self) -> Type[TrainableModel]:
- """Returns the constructor of the TrainableModel this TrainablePluginService uses.
-
- This is required so the `run` method below can load the model and provide it to the subclass implementor.
- """
- pass
-
- def run(self, request: PluginRequest[IN]) -> Union[OUT, InvocableResponse[OUT]]:
- """Loads the trainable model before passing the request to the `run_with_model` handler on the subclass."""
- logging.info("TrainablePluginService:run() - Loading model")
- model = self.model_cls().load_remote(
- client=self.client, # This field comes from being a subclass of App
- plugin_instance_id=request.context.plugin_instance_id,
- checkpoint_handle=None, # Will use default
- use_cache=True,
- plugin_instance_config=self.config,
- )
- logging.info("TrainablePluginService:run() - Loaded model; invoking run_with_model")
- return self.run_with_model(request, model)
-
- @abstractmethod
- def run_with_model(
- self, request: PluginRequest[IN], model: TrainableModel
- ) -> Union[OUT, InvocableResponse[OUT]]:
- """Rather than implementing run(request), a TrainablePluginService implements run_with_model(request, model)"""
- pass
-
- @abstractmethod
- def get_training_parameters(
- self, request: PluginRequest[TrainingParameterPluginInput]
- ) -> InvocableResponse[TrainingParameterPluginOutput]:
- """Produces the trainable parameters for this plugin.
-
- This method is run by the Steamship Engine prior to training to fetch hyperparameters.
-
- - The user themselves can provide hyperparameters on the TrainingParameterPluginInput object.
- - This method then transforms those into the TrainingParameterPluginOutput object, altering the user's values
- if desired.
- - The Engine then takes those TrainingParameterPluginOutput and presents them on the TrainPluginInput
-
- """
- pass
-
- @abstractmethod
- def train(
- self, request: PluginRequest[TrainPluginInput], model: TrainableModel
- ) -> InvocableResponse[TrainPluginOutput]:
- """Train the model."""
- pass
-
- @abstractmethod
- def train_status(
- self, request: PluginRequest[TrainPluginInput], model: TrainableModel
- ) -> InvocableResponse[TrainPluginOutput]:
- """Train the model."""
- pass
diff --git a/spaces/Joom/Front-end-code-generation-from-images/classes/model/autoencoder_image.py b/spaces/Joom/Front-end-code-generation-from-images/classes/model/autoencoder_image.py
deleted file mode 100644
index f4ddc426c2abee8a4e10d5a2b0b6e69e50df3ee0..0000000000000000000000000000000000000000
--- a/spaces/Joom/Front-end-code-generation-from-images/classes/model/autoencoder_image.py
+++ /dev/null
@@ -1,59 +0,0 @@
-__author__ = 'Taneem Jan, improved the old model through pretrained Auto-encoders'
-
-from keras.layers import Input, Dropout, Conv2D, MaxPooling2D, Conv2DTranspose, UpSampling2D
-from keras.models import Model
-from .Config import *
-from .AModel import *
-
-
-class autoencoder_image(AModel):
- def __init__(self, input_shape, output_size, output_path):
- AModel.__init__(self, input_shape, output_size, output_path)
- self.name = 'autoencoder'
-
- input_image = Input(shape=input_shape)
- encoder = Conv2D(32, 3, padding='same', activation='relu')(input_image)
- encoder = Conv2D(32, 3, padding='same', activation='relu')(encoder)
- encoder = MaxPooling2D()(encoder)
- encoder = Dropout(0.25)(encoder)
-
- encoder = Conv2D(64, 3, padding='same', activation='relu')(encoder)
- encoder = Conv2D(64, 3, padding='same', activation='relu')(encoder)
- encoder = MaxPooling2D()(encoder)
- encoder = Dropout(0.25)(encoder)
-
- encoder = Conv2D(128, 3, padding='same', activation='relu')(encoder)
- encoder = Conv2D(128, 3, padding='same', activation='relu')(encoder)
- encoder = MaxPooling2D()(encoder)
- encoded = Dropout(0.25, name='encoded_layer')(encoder)
-
- decoder = Conv2DTranspose(128, 3, padding='same', activation='relu')(encoded)
- decoder = Conv2DTranspose(128, 3, padding='same', activation='relu')(decoder)
- decoder = UpSampling2D()(decoder)
- decoder = Dropout(0.25)(decoder)
-
- decoder = Conv2DTranspose(64, 3, padding='same', activation='relu')(decoder)
- decoder = Conv2DTranspose(64, 3, padding='same', activation='relu')(decoder)
- decoder = UpSampling2D()(decoder)
- decoder = Dropout(0.25)(decoder)
-
- decoder = Conv2DTranspose(32, 3, padding='same', activation='relu')(decoder)
- decoder = Conv2DTranspose(3, 3, padding='same', activation='relu')(decoder)
- decoder = UpSampling2D()(decoder)
- decoded = Dropout(0.25)(decoder)
-
- # decoder = Dense(256*256*3)(decoder)
- # decoded = Reshape(target_shape=input_shape)(decoder)
-
- self.model = Model(input_image, decoded)
- self.model.compile(optimizer='adadelta', loss='binary_crossentropy')
-
- # self.model.summary()
-
- def fit_generator(self, generator, steps_per_epoch):
- self.model.fit_generator(generator, steps_per_epoch=steps_per_epoch, epochs=EPOCHS, verbose=1)
- self.save()
-
- def predict_hidden(self, images):
- hidden_layer_model = Model(inputs=self.input, outputs=self.get_layer('encoded_layer').output)
- return hidden_layer_model.predict(images)
diff --git a/spaces/KaygNas/cut-it/src/main.ts b/spaces/KaygNas/cut-it/src/main.ts
deleted file mode 100644
index 0b1da25f83e45f5699e0f20273f7f3a69b171fcd..0000000000000000000000000000000000000000
--- a/spaces/KaygNas/cut-it/src/main.ts
+++ /dev/null
@@ -1,13 +0,0 @@
-import { worker } from '../mocks'
-import { App } from './App'
-
-if (import.meta.env.VITE_MOCK)
- worker.start({ onUnhandledRequest: 'bypass' })
-
-// eslint-disable-next-line no-console
-console.log(`main.ts starting ${App.name}`)
-window.addEventListener('DOMContentLoaded', () => {
- const canvas = document.getElementById('renderCanvas') as HTMLCanvasElement
- const app = new App(canvas)
- app.run()
-})
diff --git a/spaces/Kedreamix/YoloGesture/utils/utils_bbox.py b/spaces/Kedreamix/YoloGesture/utils/utils_bbox.py
deleted file mode 100644
index 170b23af588430a346f2f6294279274fc963fb6c..0000000000000000000000000000000000000000
--- a/spaces/Kedreamix/YoloGesture/utils/utils_bbox.py
+++ /dev/null
@@ -1,227 +0,0 @@
-import torch
-import torch.nn as nn
-from torchvision.ops import nms
-import numpy as np
-
-class DecodeBox():
- def __init__(self, anchors, num_classes, input_shape, anchors_mask = [[6,7,8], [3,4,5], [0,1,2]]):
- super(DecodeBox, self).__init__()
- self.anchors = anchors
- self.num_classes = num_classes
- self.bbox_attrs = 5 + num_classes
- self.input_shape = input_shape
- #-----------------------------------------------------------#
-        #   the 13x13 feature map uses anchors [142, 110], [192, 243], [459, 401]
-        #   the 26x26 feature map uses anchors [36, 75], [76, 55], [72, 146]
-        #   the 52x52 feature map uses anchors [12, 16], [19, 36], [40, 28]
- #-----------------------------------------------------------#
- self.anchors_mask = anchors_mask
-
- def decode_box(self, inputs):
- outputs = []
- for i, input in enumerate(inputs):
- #-----------------------------------------------#
-            #   There are three inputs in total; their shapes are
- # batch_size, 255, 13, 13
- # batch_size, 255, 26, 26
- # batch_size, 255, 52, 52
- #-----------------------------------------------#
- batch_size = input.size(0)
- input_height = input.size(2)
- input_width = input.size(3)
-
- #-----------------------------------------------#
-            #   For a 416x416 input
-            #   stride_h = stride_w = 32, 16, 8
- #-----------------------------------------------#
- stride_h = self.input_shape[0] / input_height
- stride_w = self.input_shape[1] / input_width
- #-------------------------------------------------#
-            #   The scaled_anchors obtained here are relative to the feature map
- #-------------------------------------------------#
- scaled_anchors = [(anchor_width / stride_w, anchor_height / stride_h) for anchor_width, anchor_height in self.anchors[self.anchors_mask[i]]]
-
- #-----------------------------------------------#
-            #   There are three inputs in total; their shapes are
- # batch_size, 3, 13, 13, 85
- # batch_size, 3, 26, 26, 85
- # batch_size, 3, 52, 52, 85
- #-----------------------------------------------#
- prediction = input.view(batch_size, len(self.anchors_mask[i]),
- self.bbox_attrs, input_height, input_width).permute(0, 1, 3, 4, 2).contiguous()
-
- #-----------------------------------------------#
-            #   Adjustment parameters for the centers of the prior boxes
- #-----------------------------------------------#
- x = torch.sigmoid(prediction[..., 0])
- y = torch.sigmoid(prediction[..., 1])
- #-----------------------------------------------#
-            #   Adjustment parameters for the width and height of the prior boxes
- #-----------------------------------------------#
- w = prediction[..., 2]
- h = prediction[..., 3]
- #-----------------------------------------------#
-            #   Objectness confidence: whether an object is present
- #-----------------------------------------------#
- conf = torch.sigmoid(prediction[..., 4])
- #-----------------------------------------------#
-            #   Class confidence
- #-----------------------------------------------#
- pred_cls = torch.sigmoid(prediction[..., 5:])
-
- FloatTensor = torch.cuda.FloatTensor if x.is_cuda else torch.FloatTensor
- LongTensor = torch.cuda.LongTensor if x.is_cuda else torch.LongTensor
-
- #----------------------------------------------------------#
-            #   Generate the grid; the prior box centers are the top-left corners of the grid cells
-            #   batch_size, 3, 13, 13
- #----------------------------------------------------------#
- grid_x = torch.linspace(0, input_width - 1, input_width).repeat(input_height, 1).repeat(
- batch_size * len(self.anchors_mask[i]), 1, 1).view(x.shape).type(FloatTensor)
- grid_y = torch.linspace(0, input_height - 1, input_height).repeat(input_width, 1).t().repeat(
- batch_size * len(self.anchors_mask[i]), 1, 1).view(y.shape).type(FloatTensor)
-
- #----------------------------------------------------------#
-            #   Generate the prior box widths and heights in grid format
-            #   batch_size, 3, 13, 13
- #----------------------------------------------------------#
- anchor_w = FloatTensor(scaled_anchors).index_select(1, LongTensor([0]))
- anchor_h = FloatTensor(scaled_anchors).index_select(1, LongTensor([1]))
- anchor_w = anchor_w.repeat(batch_size, 1).repeat(1, 1, input_height * input_width).view(w.shape)
- anchor_h = anchor_h.repeat(batch_size, 1).repeat(1, 1, input_height * input_width).view(h.shape)
-
- #----------------------------------------------------------#
-            #   Adjust the prior boxes using the predictions
-            #   First adjust the box centers, offsetting from the center toward the bottom right
-            #   Then adjust the box widths and heights.
- #----------------------------------------------------------#
- pred_boxes = FloatTensor(prediction[..., :4].shape)
- pred_boxes[..., 0] = x.data + grid_x
- pred_boxes[..., 1] = y.data + grid_y
- pred_boxes[..., 2] = torch.exp(w.data) * anchor_w
- pred_boxes[..., 3] = torch.exp(h.data) * anchor_h
-
- #----------------------------------------------------------#
-            #   Normalize the outputs to fractional form
- #----------------------------------------------------------#
- _scale = torch.Tensor([input_width, input_height, input_width, input_height]).type(FloatTensor)
- output = torch.cat((pred_boxes.view(batch_size, -1, 4) / _scale,
- conf.view(batch_size, -1, 1), pred_cls.view(batch_size, -1, self.num_classes)), -1)
- outputs.append(output.data)
- return outputs
-
- def yolo_correct_boxes(self, box_xy, box_wh, input_shape, image_shape, letterbox_image):
- #-----------------------------------------------------------------#
-        #   The y axis is put first to make it easier to multiply the boxes by the image height and width
- #-----------------------------------------------------------------#
- box_yx = box_xy[..., ::-1]
- box_hw = box_wh[..., ::-1]
- input_shape = np.array(input_shape)
- image_shape = np.array(image_shape)
-
- if letterbox_image:
- #-----------------------------------------------------------------#
-            #   The offset computed here is the offset of the valid image region relative to the top-left corner of the image
-            #   new_shape is the scaled width and height
- #-----------------------------------------------------------------#
- new_shape = np.round(image_shape * np.min(input_shape/image_shape))
- offset = (input_shape - new_shape)/2./input_shape
- scale = input_shape/new_shape
-
- box_yx = (box_yx - offset) * scale
- box_hw *= scale
-
- box_mins = box_yx - (box_hw / 2.)
- box_maxes = box_yx + (box_hw / 2.)
- boxes = np.concatenate([box_mins[..., 0:1], box_mins[..., 1:2], box_maxes[..., 0:1], box_maxes[..., 1:2]], axis=-1)
- boxes *= np.concatenate([image_shape, image_shape], axis=-1)
- return boxes
-
- def non_max_suppression(self, prediction, num_classes, input_shape, image_shape, letterbox_image, conf_thres=0.5, nms_thres=0.4):
- #----------------------------------------------------------#
-        #   Convert the predictions to the top-left / bottom-right corner format.
- # prediction [batch_size, num_anchors, 85]
- #----------------------------------------------------------#
- box_corner = prediction.new(prediction.shape)
- box_corner[:, :, 0] = prediction[:, :, 0] - prediction[:, :, 2] / 2
- box_corner[:, :, 1] = prediction[:, :, 1] - prediction[:, :, 3] / 2
- box_corner[:, :, 2] = prediction[:, :, 0] + prediction[:, :, 2] / 2
- box_corner[:, :, 3] = prediction[:, :, 1] + prediction[:, :, 3] / 2
- prediction[:, :, :4] = box_corner[:, :, :4]
-
- output = [None for _ in range(len(prediction))]
- for i, image_pred in enumerate(prediction):
- #----------------------------------------------------------#
-            #   Take the max over the class predictions.
-            #   class_conf  [num_anchors, 1]    class confidence
-            #   class_pred  [num_anchors, 1]    class index
- #----------------------------------------------------------#
- class_conf, class_pred = torch.max(image_pred[:, 5:5 + num_classes], 1, keepdim=True)
-
- #----------------------------------------------------------#
-            #   First round of filtering using the confidence
- #----------------------------------------------------------#
- conf_mask = (image_pred[:, 4] * class_conf[:, 0] >= conf_thres).squeeze()
-
- #----------------------------------------------------------#
-            #   Filter the predictions according to the confidence
- #----------------------------------------------------------#
- image_pred = image_pred[conf_mask]
- class_conf = class_conf[conf_mask]
- class_pred = class_pred[conf_mask]
- if not image_pred.size(0):
- continue
- #-------------------------------------------------------------------------#
- # detections [num_anchors, 7]
-            #   the 7 values are: x1, y1, x2, y2, obj_conf, class_conf, class_pred
- #-------------------------------------------------------------------------#
- detections = torch.cat((image_pred[:, :5], class_conf.float(), class_pred.float()), 1)
-
- #------------------------------------------#
-            #   Get all the classes contained in the predictions
- #------------------------------------------#
- unique_labels = detections[:, -1].cpu().unique()
-
- if prediction.is_cuda:
- unique_labels = unique_labels.cuda()
- detections = detections.cuda()
-
- for c in unique_labels:
- #------------------------------------------#
-                #   Get all predictions of this class after score filtering
- #------------------------------------------#
- detections_class = detections[detections[:, -1] == c]
-
- #------------------------------------------#
-                #   Using the built-in non-maximum suppression is a bit faster!
- #------------------------------------------#
- keep = nms(
- detections_class[:, :4],
- detections_class[:, 4] * detections_class[:, 5],
- nms_thres
- )
- max_detections = detections_class[keep]
-
-                # # Sort by objectness confidence
- # _, conf_sort_index = torch.sort(detections_class[:, 4]*detections_class[:, 5], descending=True)
- # detections_class = detections_class[conf_sort_index]
-                # # Perform non-maximum suppression
- # max_detections = []
- # while detections_class.size(0):
-                #     # Take the highest-confidence detection of this class, then check the remaining ones and drop any whose overlap exceeds nms_thres
- # max_detections.append(detections_class[0].unsqueeze(0))
- # if len(detections_class) == 1:
- # break
- # ious = bbox_iou(max_detections[-1], detections_class[1:])
- # detections_class = detections_class[1:][ious < nms_thres]
-                # # Stack the results
- # max_detections = torch.cat(max_detections).data
-
- # Add max detections to outputs
- output[i] = max_detections if output[i] is None else torch.cat((output[i], max_detections))
-
- if output[i] is not None:
- output[i] = output[i].cpu().numpy()
- box_xy, box_wh = (output[i][:, 0:2] + output[i][:, 2:4])/2, output[i][:, 2:4] - output[i][:, 0:2]
- output[i][:, :4] = self.yolo_correct_boxes(box_xy, box_wh, input_shape, image_shape, letterbox_image)
- return output
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/train/solver.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/train/solver.py
deleted file mode 100644
index 9ca71cbf2a6b621fa299245f831d4d723ba56977..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/train/solver.py
+++ /dev/null
@@ -1,217 +0,0 @@
-import os
-import sys
-import abc
-import math
-import yaml
-import torch
-from torch.utils.tensorboard import SummaryWriter
-
-from .option import default_hparas
-from utils.util import human_format, Timer
-from utils.load_yaml import HpsYaml
-
-
-class BaseSolver():
- '''
- Prototype Solver for all kinds of tasks
- Arguments
- config - yaml-styled config
- paras - argparse outcome
- mode - "train"/"test"
- '''
-
- def __init__(self, config, paras, mode="train"):
- # General Settings
- self.config = config # load from yaml file
- self.paras = paras # command line args
- self.mode = mode # 'train' or 'test'
- for k, v in default_hparas.items():
- setattr(self, k, v)
- self.device = torch.device('cuda') if self.paras.gpu and torch.cuda.is_available() \
- else torch.device('cpu')
-
- # Name experiment
- self.exp_name = paras.name
- if self.exp_name is None:
- if 'exp_name' in self.config:
- self.exp_name = self.config.exp_name
- else:
- # By default, exp is named after config file
- self.exp_name = paras.config.split('/')[-1].replace('.yaml', '')
- if mode == 'train':
- self.exp_name += '_seed{}'.format(paras.seed)
-
-
- if mode == 'train':
- # Filepath setup
- os.makedirs(paras.ckpdir, exist_ok=True)
- self.ckpdir = os.path.join(paras.ckpdir, self.exp_name)
- os.makedirs(self.ckpdir, exist_ok=True)
-
- # Logger settings
- self.logdir = os.path.join(paras.logdir, self.exp_name)
- self.log = SummaryWriter(
- self.logdir, flush_secs=self.TB_FLUSH_FREQ)
- self.timer = Timer()
-
- # Hyper-parameters
- self.step = 0
- self.valid_step = config.hparas.valid_step
- self.max_step = config.hparas.max_step
-
- self.verbose('Exp. name : {}'.format(self.exp_name))
-        self.verbose('Loading data... a large corpus may take a while.')
-
- # elif mode == 'test':
- # # Output path
- # os.makedirs(paras.outdir, exist_ok=True)
- # self.ckpdir = os.path.join(paras.outdir, self.exp_name)
-
- # Load training config to get acoustic feat and build model
- # self.src_config = HpsYaml(config.src.config)
- # self.paras.load = config.src.ckpt
-
- # self.verbose('Evaluating result of tr. config @ {}'.format(
- # config.src.config))
-
- def backward(self, loss):
- '''
- Standard backward step with self.timer and debugger
- Arguments
- loss - the loss to perform loss.backward()
- '''
- self.timer.set()
- loss.backward()
- grad_norm = torch.nn.utils.clip_grad_norm_(
- self.model.parameters(), self.GRAD_CLIP)
- if math.isnan(grad_norm):
- self.verbose('Error : grad norm is NaN @ step '+str(self.step))
- else:
- self.optimizer.step()
- self.timer.cnt('bw')
- return grad_norm
-
- def load_ckpt(self):
- ''' Load ckpt if --load option is specified '''
- print(self.paras)
- if self.paras.load is not None:
- if self.paras.warm_start:
- self.verbose(f"Warm starting model from checkpoint {self.paras.load}.")
- ckpt = torch.load(
- self.paras.load, map_location=self.device if self.mode == 'train'
- else 'cpu')
- model_dict = ckpt['model']
- if "ignore_layers" in self.config.model and len(self.config.model.ignore_layers) > 0:
- model_dict = {k:v for k, v in model_dict.items()
- if k not in self.config.model.ignore_layers}
- dummy_dict = self.model.state_dict()
- dummy_dict.update(model_dict)
- model_dict = dummy_dict
- self.model.load_state_dict(model_dict)
- else:
- # Load weights
- ckpt = torch.load(
- self.paras.load, map_location=self.device if self.mode == 'train'
- else 'cpu')
- self.model.load_state_dict(ckpt['model'])
-
- # Load task-dependent items
- if self.mode == 'train':
- self.step = ckpt['global_step']
- self.optimizer.load_opt_state_dict(ckpt['optimizer'])
- self.verbose('Load ckpt from {}, restarting at step {}'.format(
- self.paras.load, self.step))
- else:
- for k, v in ckpt.items():
- if type(v) is float:
- metric, score = k, v
- self.model.eval()
- self.verbose('Evaluation target = {} (recorded {} = {:.2f} %)'.format(
- self.paras.load, metric, score))
-
- def verbose(self, msg):
- ''' Verbose function for print information to stdout'''
- if self.paras.verbose:
- if type(msg) == list:
- for m in msg:
- print('[INFO]', m.ljust(100))
- else:
- print('[INFO]', msg.ljust(100))
-
- def progress(self, msg):
- ''' Verbose function for updating progress on stdout (do not include newline) '''
- if self.paras.verbose:
- sys.stdout.write("\033[K") # Clear line
- print('[{}] {}'.format(human_format(self.step), msg), end='\r')
-
- def write_log(self, log_name, log_dict):
- '''
- Write log to TensorBoard
- log_name - Name of tensorboard variable
-            log_dict  - Value of the variable (e.g. a dict of losses)
- '''
- if type(log_dict) is dict:
- log_dict = {key: val for key, val in log_dict.items() if (
- val is not None and not math.isnan(val))}
- if log_dict is None:
- pass
- elif len(log_dict) > 0:
- if 'align' in log_name or 'spec' in log_name:
- img, form = log_dict
- self.log.add_image(
- log_name, img, global_step=self.step, dataformats=form)
- elif 'text' in log_name or 'hyp' in log_name:
- self.log.add_text(log_name, log_dict, self.step)
- else:
- self.log.add_scalars(log_name, log_dict, self.step)
-
- def save_checkpoint(self, f_name, metric, score, show_msg=True):
-        '''
-        Ckpt saver
-            f_name - the name of the ckpt file (w/o prefix) to store; overwritten if it exists
-            score  - the value of the metric used to evaluate the model
- '''
- ckpt_path = os.path.join(self.ckpdir, f_name)
- full_dict = {
- "model": self.model.state_dict(),
- "optimizer": self.optimizer.get_opt_state_dict(),
- "global_step": self.step,
- metric: score
- }
-
- torch.save(full_dict, ckpt_path)
- if show_msg:
- self.verbose("Saved checkpoint (step = {}, {} = {:.2f}) and status @ {}".
- format(human_format(self.step), metric, score, ckpt_path))
-
-
-    # ----------------------------------- Abstract Methods ------------------------------------------ #
- @abc.abstractmethod
- def load_data(self):
- '''
- Called by main to load all data
- After this call, data related attributes should be setup (e.g. self.tr_set, self.dev_set)
- No return value
- '''
- raise NotImplementedError
-
- @abc.abstractmethod
- def set_model(self):
- '''
- Called by main to set models
- After this call, model related attributes should be setup (e.g. self.l2_loss)
- The followings MUST be setup
- - self.model (torch.nn.Module)
- - self.optimizer (src.Optimizer),
- init. w/ self.optimizer = src.Optimizer(self.model.parameters(),**self.config['hparas'])
- Loading pre-trained model should also be performed here
- No return value
- '''
- raise NotImplementedError
-
- @abc.abstractmethod
- def exec(self):
- '''
- Called by main to execute training/inference
- '''
- raise NotImplementedError
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/utils/f0_utils.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/utils/f0_utils.py
deleted file mode 100644
index 6bc25a882e866a05cfb9afc86397f6c82561a498..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/utils/f0_utils.py
+++ /dev/null
@@ -1,124 +0,0 @@
-import logging
-import numpy as np
-import pyworld
-from scipy.interpolate import interp1d
-from scipy.signal import firwin, get_window, lfilter
-
-def compute_mean_std(lf0):
- nonzero_indices = np.nonzero(lf0)
- mean = np.mean(lf0[nonzero_indices])
- std = np.std(lf0[nonzero_indices])
- return mean, std
-
-
-def compute_f0(wav, sr=16000, frame_period=10.0):
- """Compute f0 from wav using pyworld harvest algorithm."""
- wav = wav.astype(np.float64)
- f0, _ = pyworld.harvest(
- wav, sr, frame_period=frame_period, f0_floor=80.0, f0_ceil=600.0)
- return f0.astype(np.float32)
-
-def f02lf0(f0):
- lf0 = f0.copy()
- nonzero_indices = np.nonzero(f0)
- lf0[nonzero_indices] = np.log(f0[nonzero_indices])
- return lf0
-
-def get_converted_lf0uv(
- wav,
- lf0_mean_trg,
- lf0_std_trg,
- convert=True,
-):
- f0_src = compute_f0(wav)
- if not convert:
- uv, cont_lf0 = get_cont_lf0(f0_src)
- lf0_uv = np.concatenate([cont_lf0[:, np.newaxis], uv[:, np.newaxis]], axis=1)
- return lf0_uv
-
- lf0_src = f02lf0(f0_src)
- lf0_mean_src, lf0_std_src = compute_mean_std(lf0_src)
-
- lf0_vc = lf0_src.copy()
- lf0_vc[lf0_src > 0.0] = (lf0_src[lf0_src > 0.0] - lf0_mean_src) / lf0_std_src * lf0_std_trg + lf0_mean_trg
- f0_vc = lf0_vc.copy()
- f0_vc[lf0_src > 0.0] = np.exp(lf0_vc[lf0_src > 0.0])
-
- uv, cont_lf0_vc = get_cont_lf0(f0_vc)
- lf0_uv = np.concatenate([cont_lf0_vc[:, np.newaxis], uv[:, np.newaxis]], axis=1)
- return lf0_uv
-
-def low_pass_filter(x, fs, cutoff=70, padding=True):
- """FUNCTION TO APPLY LOW PASS FILTER
-
- Args:
- x (ndarray): Waveform sequence
- fs (int): Sampling frequency
- cutoff (float): Cutoff frequency of low pass filter
-
- Return:
- (ndarray): Low pass filtered waveform sequence
- """
-
- nyquist = fs // 2
- norm_cutoff = cutoff / nyquist
-
- # low cut filter
- numtaps = 255
- fil = firwin(numtaps, norm_cutoff)
- x_pad = np.pad(x, (numtaps, numtaps), 'edge')
- lpf_x = lfilter(fil, 1, x_pad)
- lpf_x = lpf_x[numtaps + numtaps // 2: -numtaps // 2]
-
- return lpf_x
-
-
-def convert_continuos_f0(f0):
- """CONVERT F0 TO CONTINUOUS F0
-
- Args:
- f0 (ndarray): original f0 sequence with the shape (T)
-
- Return:
- (ndarray): continuous f0 with the shape (T)
- """
- # get uv information as binary
- uv = np.float32(f0 != 0)
-
- # get start and end of f0
- if (f0 == 0).all():
- logging.warn("all of the f0 values are 0.")
- return uv, f0
- start_f0 = f0[f0 != 0][0]
- end_f0 = f0[f0 != 0][-1]
-
- # padding start and end of f0 sequence
- start_idx = np.where(f0 == start_f0)[0][0]
- end_idx = np.where(f0 == end_f0)[0][-1]
- f0[:start_idx] = start_f0
- f0[end_idx:] = end_f0
-
- # get non-zero frame index
- nz_frames = np.where(f0 != 0)[0]
-
- # perform linear interpolation
- f = interp1d(nz_frames, f0[nz_frames])
- cont_f0 = f(np.arange(0, f0.shape[0]))
-
- return uv, cont_f0
-
-
-def get_cont_lf0(f0, frame_period=10.0, lpf=False):
- uv, cont_f0 = convert_continuos_f0(f0)
- if lpf:
- cont_f0_lpf = low_pass_filter(cont_f0, int(1.0 / (frame_period * 0.001)), cutoff=20)
- cont_lf0_lpf = cont_f0_lpf.copy()
- nonzero_indices = np.nonzero(cont_lf0_lpf)
- cont_lf0_lpf[nonzero_indices] = np.log(cont_f0_lpf[nonzero_indices])
- # cont_lf0_lpf = np.log(cont_f0_lpf)
- return uv, cont_lf0_lpf
- else:
- nonzero_indices = np.nonzero(cont_f0)
- cont_lf0 = cont_f0.copy()
- cont_lf0[cont_f0>0] = np.log(cont_f0[cont_f0>0])
- return uv, cont_lf0
diff --git a/spaces/Kevin676/Clone-Your-Voice/synthesizer/utils/__init__.py b/spaces/Kevin676/Clone-Your-Voice/synthesizer/utils/__init__.py
deleted file mode 100644
index 5ae3e48110e61231acf1e666e5fa76af5e4ebdcd..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Clone-Your-Voice/synthesizer/utils/__init__.py
+++ /dev/null
@@ -1,45 +0,0 @@
-import torch
-
-
-_output_ref = None
-_replicas_ref = None
-
-def data_parallel_workaround(model, *input):
- global _output_ref
- global _replicas_ref
- device_ids = list(range(torch.cuda.device_count()))
- output_device = device_ids[0]
- replicas = torch.nn.parallel.replicate(model, device_ids)
- # input.shape = (num_args, batch, ...)
- inputs = torch.nn.parallel.scatter(input, device_ids)
- # inputs.shape = (num_gpus, num_args, batch/num_gpus, ...)
- replicas = replicas[:len(inputs)]
- outputs = torch.nn.parallel.parallel_apply(replicas, inputs)
- y_hat = torch.nn.parallel.gather(outputs, output_device)
- _output_ref = outputs
- _replicas_ref = replicas
- return y_hat
-
-
-class ValueWindow():
- def __init__(self, window_size=100):
- self._window_size = window_size
- self._values = []
-
- def append(self, x):
- self._values = self._values[-(self._window_size - 1):] + [x]
-
- @property
- def sum(self):
- return sum(self._values)
-
- @property
- def count(self):
- return len(self._values)
-
- @property
- def average(self):
- return self.sum / max(1, self.count)
-
- def reset(self):
- self._values = []
diff --git a/spaces/Komeng/Stock_Prediction/README.md b/spaces/Komeng/Stock_Prediction/README.md
deleted file mode 100644
index 320dd0e308be5eb89bd433df10170740ac16eff4..0000000000000000000000000000000000000000
--- a/spaces/Komeng/Stock_Prediction/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Stock Prediction
-emoji: 💩
-colorFrom: yellow
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
-license: bigscience-openrail-m
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/KyanChen/FunSR/models/rcan.py b/spaces/KyanChen/FunSR/models/rcan.py
deleted file mode 100644
index 76f661d79f679ade86940effcee389a31ef07c68..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/FunSR/models/rcan.py
+++ /dev/null
@@ -1,204 +0,0 @@
-import math
-from argparse import Namespace
-
-import torch
-import torch.nn as nn
-
-from models import register
-
-
-def default_conv(in_channels, out_channels, kernel_size, bias=True):
- return nn.Conv2d(
- in_channels, out_channels, kernel_size,
- padding=(kernel_size//2), bias=bias)
-
-class MeanShift(nn.Conv2d):
- def __init__(self, rgb_range, rgb_mean, rgb_std, sign=-1):
- super(MeanShift, self).__init__(3, 3, kernel_size=1)
- std = torch.Tensor(rgb_std)
- self.weight.data = torch.eye(3).view(3, 3, 1, 1)
- self.weight.data.div_(std.view(3, 1, 1, 1))
- self.bias.data = sign * rgb_range * torch.Tensor(rgb_mean)
- self.bias.data.div_(std)
- self.requires_grad = False
-
-class Upsampler(nn.Sequential):
- def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
-
- m = []
- if (scale & (scale - 1)) == 0: # Is scale = 2^n?
- for _ in range(int(math.log(scale, 2))):
- m.append(conv(n_feat, 4 * n_feat, 3, bias))
- m.append(nn.PixelShuffle(2))
- if bn: m.append(nn.BatchNorm2d(n_feat))
- if act: m.append(act())
- elif scale == 3:
- m.append(conv(n_feat, 9 * n_feat, 3, bias))
- m.append(nn.PixelShuffle(3))
- if bn: m.append(nn.BatchNorm2d(n_feat))
- if act: m.append(act())
- else:
- raise NotImplementedError
-
- super(Upsampler, self).__init__(*m)
-
-## Channel Attention (CA) Layer
-class CALayer(nn.Module):
- def __init__(self, channel, reduction=16):
- super(CALayer, self).__init__()
- # global average pooling: feature --> point
- self.avg_pool = nn.AdaptiveAvgPool2d(1)
- # feature channel downscale and upscale --> channel weight
- self.conv_du = nn.Sequential(
- nn.Conv2d(channel, channel // reduction, 1, padding=0, bias=True),
- nn.ReLU(inplace=True),
- nn.Conv2d(channel // reduction, channel, 1, padding=0, bias=True),
- nn.Sigmoid()
- )
-
- def forward(self, x):
- y = self.avg_pool(x)
- y = self.conv_du(y)
- return x * y
-
-## Residual Channel Attention Block (RCAB)
-class RCAB(nn.Module):
- def __init__(
- self, conv, n_feat, kernel_size, reduction,
- bias=True, bn=False, act=nn.ReLU(True), res_scale=1):
-
- super(RCAB, self).__init__()
- modules_body = []
- for i in range(2):
- modules_body.append(conv(n_feat, n_feat, kernel_size, bias=bias))
- if bn: modules_body.append(nn.BatchNorm2d(n_feat))
- if i == 0: modules_body.append(act)
- modules_body.append(CALayer(n_feat, reduction))
- self.body = nn.Sequential(*modules_body)
- self.res_scale = res_scale
-
- def forward(self, x):
- res = self.body(x)
- #res = self.body(x).mul(self.res_scale)
- res += x
- return res
-
-## Residual Group (RG)
-class ResidualGroup(nn.Module):
- def __init__(self, conv, n_feat, kernel_size, reduction, act, res_scale, n_resblocks):
- super(ResidualGroup, self).__init__()
- modules_body = []
- modules_body = [
- RCAB(
- conv, n_feat, kernel_size, reduction, bias=True, bn=False, act=nn.ReLU(True), res_scale=1) \
- for _ in range(n_resblocks)]
- modules_body.append(conv(n_feat, n_feat, kernel_size))
- self.body = nn.Sequential(*modules_body)
-
- def forward(self, x):
- res = self.body(x)
- res += x
- return res
-
-## Residual Channel Attention Network (RCAN)
-class RCAN(nn.Module):
- def __init__(self, args, conv=default_conv):
- super(RCAN, self).__init__()
- self.args = args
-
- n_resgroups = args.n_resgroups
- n_resblocks = args.n_resblocks
- n_feats = args.n_feats
- kernel_size = 3
- reduction = args.reduction
- scale = args.scale[0]
- act = nn.ReLU(True)
-
- # RGB mean for DIV2K
- rgb_mean = (0.4488, 0.4371, 0.4040)
- rgb_std = (1.0, 1.0, 1.0)
- self.sub_mean = MeanShift(args.rgb_range, rgb_mean, rgb_std)
-
- # define head module
- modules_head = [conv(args.n_colors, n_feats, kernel_size)]
-
- # define body module
- modules_body = [
- ResidualGroup(
- conv, n_feats, kernel_size, reduction, act=act, res_scale=args.res_scale, n_resblocks=n_resblocks) \
- for _ in range(n_resgroups)]
-
- modules_body.append(conv(n_feats, n_feats, kernel_size))
-
- self.add_mean = MeanShift(args.rgb_range, rgb_mean, rgb_std, 1)
-
- self.head = nn.Sequential(*modules_head)
- self.body = nn.Sequential(*modules_body)
-
- if args.no_upsampling:
- self.out_dim = n_feats
- else:
- self.out_dim = args.n_colors
- # define tail module
- modules_tail = [
- Upsampler(conv, scale, n_feats, act=False),
- conv(n_feats, args.n_colors, kernel_size)]
- self.tail = nn.Sequential(*modules_tail)
-
- def forward(self, x):
- #x = self.sub_mean(x)
- x = self.head(x)
-
- res = self.body(x)
- res += x
-
- if self.args.no_upsampling:
- x = res
- else:
- x = self.tail(res)
- #x = self.add_mean(x)
- return x
-
- def load_state_dict(self, state_dict, strict=False):
- own_state = self.state_dict()
- for name, param in state_dict.items():
- if name in own_state:
- if isinstance(param, nn.Parameter):
- param = param.data
- try:
- own_state[name].copy_(param)
- except Exception:
- if name.find('tail') >= 0:
- print('Replace pre-trained upsampler to new one...')
- else:
- raise RuntimeError('While copying the parameter named {}, '
- 'whose dimensions in the model are {} and '
- 'whose dimensions in the checkpoint are {}.'
- .format(name, own_state[name].size(), param.size()))
- elif strict:
- if name.find('tail') == -1:
- raise KeyError('unexpected key "{}" in state_dict'
- .format(name))
-
- if strict:
- missing = set(own_state.keys()) - set(state_dict.keys())
- if len(missing) > 0:
- raise KeyError('missing keys in state_dict: "{}"'.format(missing))
-
-
-@register('rcan')
-def make_rcan(n_resgroups=10, n_resblocks=20, n_feats=64, reduction=16,
- scale=2, no_upsampling=False, rgb_range=1):
- args = Namespace()
- args.n_resgroups = n_resgroups
- args.n_resblocks = n_resblocks
- args.n_feats = n_feats
- args.reduction = reduction
-
- args.scale = [scale]
- args.no_upsampling = no_upsampling
-
- args.rgb_range = rgb_range
- args.res_scale = 1
- args.n_colors = 3
- return RCAN(args)
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/seg_heads/panoptic_fusion_heads/maskformer_fusion_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/seg_heads/panoptic_fusion_heads/maskformer_fusion_head.py
deleted file mode 100644
index 1b76e6b45bb9be2584f8b3eca2e5e1c0809249fa..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/seg_heads/panoptic_fusion_heads/maskformer_fusion_head.py
+++ /dev/null
@@ -1,266 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import List
-
-import torch
-import torch.nn.functional as F
-from mmengine.structures import InstanceData, PixelData
-from torch import Tensor
-
-from mmdet.evaluation.functional import INSTANCE_OFFSET
-from mmdet.registry import MODELS
-from mmdet.structures import SampleList
-from mmdet.structures.mask import mask2bbox
-from mmdet.utils import OptConfigType, OptMultiConfig
-from .base_panoptic_fusion_head import BasePanopticFusionHead
-
-
-@MODELS.register_module()
-class MaskFormerFusionHead(BasePanopticFusionHead):
- """MaskFormer fusion head which postprocesses results for panoptic
- segmentation, instance segmentation and semantic segmentation."""
-
- def __init__(self,
- num_things_classes: int = 80,
- num_stuff_classes: int = 53,
- test_cfg: OptConfigType = None,
- loss_panoptic: OptConfigType = None,
- init_cfg: OptMultiConfig = None,
- **kwargs):
- super().__init__(
- num_things_classes=num_things_classes,
- num_stuff_classes=num_stuff_classes,
- test_cfg=test_cfg,
- loss_panoptic=loss_panoptic,
- init_cfg=init_cfg,
- **kwargs)
-
- def loss(self, **kwargs):
- """MaskFormerFusionHead has no training loss."""
- return dict()
-
- def panoptic_postprocess(self, mask_cls: Tensor,
- mask_pred: Tensor) -> PixelData:
- """Panoptic segmengation inference.
-
- Args:
-            mask_cls (Tensor): Classification outputs of shape
-                (num_queries, cls_out_channels) for an image.
-                Note `cls_out_channels` should include
-                background.
-            mask_pred (Tensor): Mask outputs of shape
-                (num_queries, h, w) for an image.
-
- Returns:
- :obj:`PixelData`: Panoptic segment result of shape \
- (h, w), each element in Tensor means: \
- ``segment_id = _cls + instance_id * INSTANCE_OFFSET``.
- """
- object_mask_thr = self.test_cfg.get('object_mask_thr', 0.8)
- iou_thr = self.test_cfg.get('iou_thr', 0.8)
- filter_low_score = self.test_cfg.get('filter_low_score', False)
-
- scores, labels = F.softmax(mask_cls, dim=-1).max(-1)
- mask_pred = mask_pred.sigmoid()
-
- keep = labels.ne(self.num_classes) & (scores > object_mask_thr)
- cur_scores = scores[keep]
- cur_classes = labels[keep]
- cur_masks = mask_pred[keep]
-
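- # Weight each query's mask probabilities by its classification score so the
- # per-pixel argmax below picks the most confident query.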
- cur_prob_masks = cur_scores.view(-1, 1, 1) * cur_masks
-
- h, w = cur_masks.shape[-2:]
- panoptic_seg = torch.full((h, w),
- self.num_classes,
- dtype=torch.int32,
- device=cur_masks.device)
- if cur_masks.shape[0] == 0:
- # We didn't detect any mask :(
- pass
- else:
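- # Assign every pixel to the query with the highest score-weighted probability.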
- cur_mask_ids = cur_prob_masks.argmax(0)
- instance_id = 1
- for k in range(cur_classes.shape[0]):
- pred_class = int(cur_classes[k].item())
- isthing = pred_class < self.num_things_classes
- mask = cur_mask_ids == k
- mask_area = mask.sum().item()
- original_area = (cur_masks[k] >= 0.5).sum().item()
-
- if filter_low_score:
- mask = mask & (cur_masks[k] >= 0.5)
-
- if mask_area > 0 and original_area > 0:
- if mask_area / original_area < iou_thr:
- continue
-
- if not isthing:
- # different stuff regions of the same class are merged here;
- # stuff segments all share instance_id 0.
- panoptic_seg[mask] = pred_class
- else:
- panoptic_seg[mask] = (
- pred_class + instance_id * INSTANCE_OFFSET)
- instance_id += 1
-
- return PixelData(sem_seg=panoptic_seg[None])
-
- def semantic_postprocess(self, mask_cls: Tensor,
- mask_pred: Tensor) -> PixelData:
- """Semantic segmengation postprocess.
-
- Args:
- mask_cls (Tensor): Classification outputs of shape
- (num_queries, cls_out_channels) for an image.
- Note `cls_out_channels` should include the
- background class.
- mask_pred (Tensor): Mask outputs of shape
- (num_queries, h, w) for an image.
-
- Returns:
- :obj:`PixelData`: Semantic segment result.
- """
- # TODO add semantic segmentation result
- raise NotImplementedError
-
- def instance_postprocess(self, mask_cls: Tensor,
- mask_pred: Tensor) -> InstanceData:
- """Instance segmengation postprocess.
-
- Args:
- mask_cls (Tensor): Classification outputs of shape
- (num_queries, cls_out_channels) for an image.
- Note `cls_out_channels` should include the
- background class.
- mask_pred (Tensor): Mask outputs of shape
- (num_queries, h, w) for an image.
-
- Returns:
- :obj:`InstanceData`: Instance segmentation results.
-
- - scores (Tensor): Classification scores, has a shape
- (num_instance, )
- - labels (Tensor): Labels of bboxes, has a shape
- (num_instances, ).
- - bboxes (Tensor): Has a shape (num_instances, 4),
- the last dimension 4 arrange as (x1, y1, x2, y2).
- - masks (Tensor): Has a shape (num_instances, H, W).
- """
- max_per_image = self.test_cfg.get('max_per_image', 100)
- num_queries = mask_cls.shape[0]
- # shape (num_queries, num_class)
- scores = F.softmax(mask_cls, dim=-1)[:, :-1]
- # shape (num_queries * num_class, )
- labels = torch.arange(self.num_classes, device=mask_cls.device).\
- unsqueeze(0).repeat(num_queries, 1).flatten(0, 1)
- scores_per_image, top_indices = scores.flatten(0, 1).topk(
- max_per_image, sorted=False)
- labels_per_image = labels[top_indices]
-
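- # Map flattened top-k indices back to query indices; a query can be selected
- # several times with different class labels.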
- query_indices = top_indices // self.num_classes
- mask_pred = mask_pred[query_indices]
-
- # extract things
- is_thing = labels_per_image < self.num_things_classes
- scores_per_image = scores_per_image[is_thing]
- labels_per_image = labels_per_image[is_thing]
- mask_pred = mask_pred[is_thing]
-
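- # Rescore each instance by its class score times the mean mask probability
- # inside the binarized mask.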
- mask_pred_binary = (mask_pred > 0).float()
- mask_scores_per_image = (mask_pred.sigmoid() *
- mask_pred_binary).flatten(1).sum(1) / (
- mask_pred_binary.flatten(1).sum(1) + 1e-6)
- det_scores = scores_per_image * mask_scores_per_image
- mask_pred_binary = mask_pred_binary.bool()
- bboxes = mask2bbox(mask_pred_binary)
-
- results = InstanceData()
- results.bboxes = bboxes
- results.labels = labels_per_image
- results.scores = det_scores
- results.masks = mask_pred_binary
- return results
-
- def predict(self,
- mask_cls_results: Tensor,
- mask_pred_results: Tensor,
- batch_data_samples: SampleList,
- rescale: bool = False,
- **kwargs) -> List[dict]:
- """Test segment without test-time aumengtation.
-
- Only the output of last decoder layers was used.
-
- Args:
- mask_cls_results (Tensor): Mask classification logits,
- shape (batch_size, num_queries, cls_out_channels).
- Note `cls_out_channels` should include background.
- mask_pred_results (Tensor): Mask logits, shape
- (batch_size, num_queries, h, w).
- batch_data_samples (List[:obj:`DetDataSample`]): The Data
- Samples. It usually includes information such as
- `gt_instance`, `gt_panoptic_seg` and `gt_sem_seg`.
- rescale (bool): If True, return boxes in
- original image space. Default False.
-
- Returns:
- list[dict]: Instance segmentation \
- results and panoptic segmentation results for each \
- image.
-
- .. code-block:: none
-
- [
- {
- 'pan_results': PixelData,
- 'ins_results': InstanceData,
- # semantic segmentation results are not supported yet
- 'sem_results': PixelData
- },
- ...
- ]
- """
- batch_img_metas = [
- data_sample.metainfo for data_sample in batch_data_samples
- ]
- panoptic_on = self.test_cfg.get('panoptic_on', True)
- semantic_on = self.test_cfg.get('semantic_on', False)
- instance_on = self.test_cfg.get('instance_on', False)
- assert not semantic_on, 'semantic segmentation '\
- 'results are not supported yet.'
-
- results = []
- for mask_cls_result, mask_pred_result, meta in zip(
- mask_cls_results, mask_pred_results, batch_img_metas):
- # remove padding
- img_height, img_width = meta['img_shape'][:2]
- mask_pred_result = mask_pred_result[:, :img_height, :img_width]
-
- if rescale:
- # return result in original resolution
- ori_height, ori_width = meta['ori_shape'][:2]
- mask_pred_result = F.interpolate(
- mask_pred_result[:, None],
- size=(ori_height, ori_width),
- mode='bilinear',
- align_corners=False)[:, 0]
-
- result = dict()
- if panoptic_on:
- pan_results = self.panoptic_postprocess(
- mask_cls_result, mask_pred_result)
- result['pan_results'] = pan_results
-
- if instance_on:
- ins_results = self.instance_postprocess(
- mask_cls_result, mask_pred_result)
- result['ins_results'] = ins_results
-
- if semantic_on:
- sem_results = self.semantic_postprocess(
- mask_cls_result, mask_pred_result)
- result['sem_results'] = sem_results
-
- results.append(result)
-
- return results
diff --git a/spaces/Laihiujin/OneFormer/oneformer/utils/events.py b/spaces/Laihiujin/OneFormer/oneformer/utils/events.py
deleted file mode 100644
index d1d27ac6ecef656f1aa86649ceacb54470765821..0000000000000000000000000000000000000000
--- a/spaces/Laihiujin/OneFormer/oneformer/utils/events.py
+++ /dev/null
@@ -1,120 +0,0 @@
-import os
-import wandb
-from detectron2.utils import comm
-from detectron2.utils.events import EventWriter, get_event_storage
-
-
-def setup_wandb(cfg, args):
- if comm.is_main_process():
- init_args = {
- k.lower(): v
- for k, v in cfg.WANDB.items()
- if isinstance(k, str) and k not in ["config"]
- }
- # only include most related part to avoid too big table
- # TODO: add configurable params to select which part of `cfg` should be saved in config
- if "config_exclude_keys" in init_args:
- init_args["config"] = cfg
- init_args["config"]["cfg_file"] = args.config_file
- else:
- init_args["config"] = {
- "model": cfg.MODEL,
- "solver": cfg.SOLVER,
- "cfg_file": args.config_file,
- }
- if ("name" not in init_args) or (init_args["name"] is None):
- init_args["name"] = os.path.basename(args.config_file)
- else:
- init_args["name"] = init_args["name"] + '_' + os.path.basename(args.config_file)
- wandb.init(**init_args)
-
-
-class BaseRule(object):
- def __call__(self, target):
- return target
-
-
-class IsIn(BaseRule):
- def __init__(self, keyword: str):
- self.keyword = keyword
-
- def __call__(self, target):
- return self.keyword in target
-
-
-class Prefix(BaseRule):
- def __init__(self, keyword: str):
- self.keyword = keyword
-
- def __call__(self, target):
- return "/".join([self.keyword, target])
-
-
-class WandbWriter(EventWriter):
- """
- Write all scalars, images and histograms to Weights & Biases.
- """
-
- def __init__(self):
- """
- Set up the grouping rules that decide under which W&B section each scalar
- is logged; the writer takes no arguments.
- """
- self._last_write = -1
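- # Scalars whose name already contains "/" keep their name; otherwise loss
- # scalars are grouped under a "train/" prefix.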
- self._group_rules = [
- (IsIn("/"), BaseRule()),
- (IsIn("loss"), Prefix("train")),
- ]
-
- def write(self):
-
- storage = get_event_storage()
-
- def _group_name(scalar_name):
- for (rule, op) in self._group_rules:
- if rule(scalar_name):
- return op(scalar_name)
- return scalar_name
-
- stats = {
- _group_name(name): scalars[0]
- for name, scalars in storage.latest().items()
- if scalars[1] > self._last_write
- }
- if len(stats) > 0:
- self._last_write = max([v[1] for k, v in storage.latest().items()])
-
- # storage.put_{image,histogram} is only meant to be used by
- # tensorboard writer. So we access its internal fields directly from here.
- if len(storage._vis_data) >= 1:
- stats["image"] = [
- wandb.Image(img, caption=img_name)
- for img_name, img, step_num in storage._vis_data
- ]
- # Storage stores all image data and rely on this writer to clear them.
- # As a result it assumes only one writer will use its image data.
- # An alternative design is to let storage store limited recent
- # data (e.g. only the most recent image) that all writers can access.
- # In that case a writer may not see all image data if its period is long.
- storage.clear_images()
-
- if len(storage._histograms) >= 1:
-
- def create_bar(tag, bucket_limits, bucket_counts, **kwargs):
- data = [
- [label, val] for (label, val) in zip(bucket_limits, bucket_counts)
- ]
- table = wandb.Table(data=data, columns=["label", "value"])
- return wandb.plot.bar(table, "label", "value", title=tag)
-
- stats["hist"] = [create_bar(**params) for params in storage._histograms]
-
- storage.clear_histograms()
-
- if len(stats) == 0:
- return
- wandb.log(stats, step=storage.iter)
-
- def close(self):
- wandb.finish()
\ No newline at end of file
diff --git a/spaces/LarissaHung/text_generator/app.py b/spaces/LarissaHung/text_generator/app.py
deleted file mode 100644
index 0783c314135c5755c74e999166ab87ff31bd6b93..0000000000000000000000000000000000000000
--- a/spaces/LarissaHung/text_generator/app.py
+++ /dev/null
@@ -1,14 +0,0 @@
-#libraries
-import gradio as gr
-from gradio.mix import Parallel
-
-title="My First Text Generator"
-description="Input text."
-
-#variables, functions and parameters
-model1 = gr.Interface.load("huggingface/gpt2")
-model2 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")
-model3 = gr.Interface.load("huggingface/EleutherAI/gpt-neo-1.3B")
-
-#functions, parameters and variables
-Parallel(model1, model2, model3, title=title, description=description).launch()
\ No newline at end of file
diff --git a/spaces/LaynzKunz/RVC-Inference-webui-grado-colab-huggingafce/lib/infer_pack/models.py b/spaces/LaynzKunz/RVC-Inference-webui-grado-colab-huggingafce/lib/infer_pack/models.py
deleted file mode 100644
index 3665d03bc0514a6ed07d3372ea24717dae1e0a65..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/RVC-Inference-webui-grado-colab-huggingafce/lib/infer_pack/models.py
+++ /dev/null
@@ -1,1142 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from lib.infer_pack import modules
-from lib.infer_pack import attentions
-from lib.infer_pack import commons
-from lib.infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from lib.infer_pack.commons import init_weights
-import numpy as np
-from lib.infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder768(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(768, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
- sine_amp: amplitude of sine waveform (default 0.1)
- noise_std: std of Gaussian noise (default 0.003)
- voiced_threshold: F0 threshold for U/V classification (default 0)
- flag_for_pulse: whether this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
- rad_values = (f0_buf / self.sampling_rate) % 1  # taking %1 here means the per-harmonic products cannot be optimized in post-processing
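- # give every harmonic except the fundamental a random initial phase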
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
- tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  # applying %1 here would keep the cumsum below from being optimized
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
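- # a drop in the fractional phase marks a wrap-around; a -1 shift at those
- # positions keeps the cumulative phase below consistent after upsampling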
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
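- # voiced frames get low-level Gaussian noise; unvoiced frames are filled
- # with noise at about a third of the sine amplitude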
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
- voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length, 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
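- # add the harmonic excitation, brought to the current resolution by the
- # matching noise_convs layer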
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
- if type(sr) == type("strr"):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
- ):  # ds is the speaker id, shape [bs, 1]
- # print(1,pitch.shape)#[bs,t]
- g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis, broadcast later
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
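- # when `rate` is given, keep only the trailing fraction of the sequence
- # (presumably to speed up partial / real-time inference)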
- if rate:
- head = int(z_p.shape[2] * rate)
- z_p = z_p[:, :, -head:]
- x_mask = x_mask[:, :, -head:]
- nsff0 = nsff0[:, -head:]
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec(z * x_mask, nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs768NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
- if type(sr) == type("strr"):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
- ):  # ds is the speaker id, shape [bs, 1]
- # print(1,pitch.shape)#[bs,t]
- g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis, broadcast later
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- if rate:
- head = int(z_p.shape[2] * rate)
- z_p = z_p[:, :, -head:]
- x_mask = x_mask[:, :, -head:]
- nsff0 = nsff0[:, -head:]
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec(z * x_mask, nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, y, y_lengths, ds):  # ds is the speaker id, shape [bs, 1]
- g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis, broadcast later
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, rate=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- if rate:
- head = int(z_p.shape[2] * rate)
- z_p = z_p[:, :, -head:]
- x_mask = x_mask[:, :, -head:]
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec(z * x_mask, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs768NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, y, y_lengths, ds):  # ds is the speaker id, shape [bs, 1]
- g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis, broadcast later
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, rate=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- if rate:
- head = int(z_p.shape[2] * rate)
- z_p = z_p[:, :, -head:]
- x_mask = x_mask[:, :, -head:]
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec(z * x_mask, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class MultiPeriodDiscriminatorV2(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminatorV2, self).__init__()
- # periods = [2, 3, 5, 7, 11, 17]
- periods = [2, 3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
diff --git a/spaces/Logic06183/ML_Classifier_Hub/app.py b/spaces/Logic06183/ML_Classifier_Hub/app.py
deleted file mode 100644
index 812941be8bb18d46e0ed4884b525bc677f33c791..0000000000000000000000000000000000000000
--- a/spaces/Logic06183/ML_Classifier_Hub/app.py
+++ /dev/null
@@ -1,141 +0,0 @@
-import streamlit as st
-import numpy as np
-import matplotlib.pyplot as plt
-from sklearn import datasets
-from sklearn.model_selection import train_test_split
-from sklearn.decomposition import PCA
-from sklearn.svm import SVC
-from sklearn.neighbors import KNeighborsClassifier
-from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
-from sklearn.linear_model import LogisticRegression
-from sklearn.metrics import accuracy_score
-from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler
-
-st.title('Streamlit Example')
-
-st.write("""
-# Explore different classifiers and datasets
-Which one is the best?
-""")
-
-dataset_name = st.sidebar.selectbox(
- 'Select Dataset',
- ('Breast Cancer', 'Wine', 'Digits')
-)
-
-st.write(f"## {dataset_name} Dataset")
-
-classifier_name = st.sidebar.selectbox(
- 'Select classifier',
- ('KNN', 'SVM', 'Random Forest', 'Gradient Boosting', 'Logistic Regression')
-)
-
-scaler_name = st.sidebar.selectbox(
- 'Select feature scaling method',
- ('None', 'Standard Scaler', 'MinMax Scaler', 'Robust Scaler')
-)
-
-def get_dataset(name):
- data = None
- if name == 'Wine':
- data = datasets.load_wine()
- elif name == 'Breast Cancer':
- data = datasets.load_breast_cancer()
- else: # Digits
- data = datasets.load_digits()
- X = data.data
- y = data.target
- return X, y
-
-X, y = get_dataset(dataset_name)
-st.write('Shape of dataset:', X.shape)
-st.write('number of classes:', len(np.unique(y)))
-
-def apply_scaling(scaler_name, X):
- if scaler_name == 'Standard Scaler':
- scaler = StandardScaler()
- elif scaler_name == 'MinMax Scaler':
- scaler = MinMaxScaler()
- elif scaler_name == 'Robust Scaler':
- scaler = RobustScaler()
- else:
- return X
-
- X_scaled = scaler.fit_transform(X)
- return X_scaled
-
-X = apply_scaling(scaler_name, X)
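-# note: the scaler is fit on the full dataset before the train/test split,
-# which introduces mild leakage; acceptable for this demo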
-
-def add_parameter_ui(clf_name):
- params = dict()
- if clf_name == 'SVM':
- C = st.sidebar.slider('C', 0.01, 10.0)
- params['C'] = C
- elif clf_name == 'KNN':
- K = st.sidebar.slider('K', 1, 15)
- params['K'] = K
- elif clf_name == 'Random Forest':
- max_depth = st.sidebar.slider('max_depth', 2, 15)
- params['max_depth'] = max_depth
- n_estimators = st.sidebar.slider('n_estimators', 1, 100)
- params['n_estimators'] = n_estimators
- elif clf_name == 'Gradient Boosting':
- max_depth = st.sidebar.slider('max_depth', 2, 15)
- params['max_depth'] = max_depth
- n_estimators = st.sidebar.slider('n_estimators', 1, 100)
- params['n_estimators'] = n_estimators
- else: # Logistic Regression
- C = st.sidebar.slider('C', 0.01, 10.0)
- params['C'] = C
- return params
-
-params = add_parameter_ui(classifier_name)
-
-def get_classifier(clf_name, params):
- clf = None
- if clf_name == 'SVM':
- clf = SVC(C=params['C'])
- elif clf_name == 'KNN':
- clf = KNeighborsClassifier(n_neighbors=params['K'])
- elif clf_name == 'Random Forest':
- clf = RandomForestClassifier(n_estimators=params['n_estimators'],
- max_depth=params['max_depth'], random_state=1234)
- elif clf_name == 'Gradient Boosting':
- clf = GradientBoostingClassifier(n_estimators=params['n_estimators'],
- max_depth=params['max_depth'], random_state=1234)
- else: # Logistic Regression
- clf = LogisticRegression(C=params['C'])
- return clf
-
-clf = get_classifier(classifier_name, params)
-
-#### CLASSIFICATION ####
-
-X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1234)
-
-clf.fit(X_train, y_train)
-y_pred = clf.predict(X_test)
-
-acc = accuracy_score(y_test, y_pred)
-
-st.write(f'Classifier = {classifier_name}')
-st.write(f'Accuracy = {acc:.3f}')
-
-#### PLOT DATASET ####
-# Project the data onto the 2 primary principal components
-pca = PCA(2)
-X_projected = pca.fit_transform(X)
-
-x1 = X_projected[:, 0]
-x2 = X_projected[:, 1]
-
-fig = plt.figure()
-plt.scatter(x1, x2,
- c=y, alpha=0.8,
- cmap='viridis')
-
-plt.xlabel('Principal Component 1')
-plt.ylabel('Principal Component 2')
-plt.colorbar()
-
-st.pyplot(fig)
diff --git a/spaces/LucasCodeBreak/MusicGen/tests/data/test_audio.py b/spaces/LucasCodeBreak/MusicGen/tests/data/test_audio.py
deleted file mode 100644
index 40c0d5ed69eff92a766dc6d176e532f0df6c2b5e..0000000000000000000000000000000000000000
--- a/spaces/LucasCodeBreak/MusicGen/tests/data/test_audio.py
+++ /dev/null
@@ -1,239 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from itertools import product
-import random
-
-import numpy as np
-import torch
-import torchaudio
-
-from audiocraft.data.audio import audio_info, audio_read, audio_write, _av_read
-
-from ..common_utils import TempDirMixin, get_white_noise, save_wav
-
-
-class TestInfo(TempDirMixin):
-
- def test_info_mp3(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- wav = get_white_noise(ch, int(sample_rate * duration))
- path = self.get_temp_path('sample_wav.mp3')
- save_wav(path, wav, sample_rate)
- info = audio_info(path)
- assert info.sample_rate == sample_rate
- assert info.channels == ch
- # we cannot trust torchaudio for num_frames, so we don't check
-
- def _test_info_format(self, ext: str):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- wav = get_white_noise(ch, n_frames)
- path = self.get_temp_path(f'sample_wav{ext}')
- save_wav(path, wav, sample_rate)
- info = audio_info(path)
- assert info.sample_rate == sample_rate
- assert info.channels == ch
- assert np.isclose(info.duration, duration, atol=1e-5)
-
- def test_info_wav(self):
- self._test_info_format('.wav')
-
- def test_info_flac(self):
- self._test_info_format('.flac')
-
- def test_info_ogg(self):
- self._test_info_format('.ogg')
-
- def test_info_m4a(self):
- # TODO: generate m4a file programmatically
- # self._test_info_format('.m4a')
- pass
-
-
-class TestRead(TempDirMixin):
-
- def test_read_full_wav(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99)
- path = self.get_temp_path('sample_wav.wav')
- save_wav(path, wav, sample_rate)
- read_wav, read_sr = audio_read(path)
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[1] == wav.shape[1]
- assert torch.allclose(read_wav, wav, rtol=1e-03, atol=1e-04)
-
- def test_read_partial_wav(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- read_duration = torch.rand(1).item()
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- read_frames = int(sample_rate * read_duration)
- wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99)
- path = self.get_temp_path('sample_wav.wav')
- save_wav(path, wav, sample_rate)
- read_wav, read_sr = audio_read(path, 0, read_duration)
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[1] == read_frames
- assert torch.allclose(read_wav[..., 0:read_frames], wav[..., 0:read_frames], rtol=1e-03, atol=1e-04)
-
- def test_read_seek_time_wav(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- read_duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99)
- path = self.get_temp_path('sample_wav.wav')
- save_wav(path, wav, sample_rate)
- seek_time = torch.rand(1).item()
- read_wav, read_sr = audio_read(path, seek_time, read_duration)
- seek_frames = int(sample_rate * seek_time)
- expected_frames = n_frames - seek_frames
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[1] == expected_frames
- assert torch.allclose(read_wav, wav[..., seek_frames:], rtol=1e-03, atol=1e-04)
-
- def test_read_seek_time_wav_padded(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- read_duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- read_frames = int(sample_rate * read_duration)
- wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99)
- path = self.get_temp_path('sample_wav.wav')
- save_wav(path, wav, sample_rate)
- seek_time = torch.rand(1).item()
- seek_frames = int(sample_rate * seek_time)
- expected_frames = n_frames - seek_frames
- read_wav, read_sr = audio_read(path, seek_time, read_duration, pad=True)
- expected_pad_wav = torch.zeros(wav.shape[0], read_frames - expected_frames)
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[1] == read_frames
- assert torch.allclose(read_wav[..., :expected_frames], wav[..., seek_frames:], rtol=1e-03, atol=1e-04)
- assert torch.allclose(read_wav[..., expected_frames:], expected_pad_wav)
-
-
-class TestAvRead(TempDirMixin):
-
- def test_avread_seek_base(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 2.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- wav = get_white_noise(ch, n_frames)
- path = self.get_temp_path(f'reference_a_{sample_rate}_{ch}.wav')
- save_wav(path, wav, sample_rate)
- for _ in range(100):
- # seek will always load a full duration segment in the file
- seek_time = random.uniform(0.0, 1.0)
- seek_duration = random.uniform(0.001, 1.0)
- read_wav, read_sr = _av_read(path, seek_time, seek_duration)
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[-1] == int(seek_duration * sample_rate)
-
- def test_avread_seek_partial(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- wav = get_white_noise(ch, n_frames)
- path = self.get_temp_path(f'reference_b_{sample_rate}_{ch}.wav')
- save_wav(path, wav, sample_rate)
- for _ in range(100):
- # seek will always load a partial segment
- seek_time = random.uniform(0.5, 1.)
- seek_duration = 1.
- expected_num_frames = n_frames - int(seek_time * sample_rate)
- read_wav, read_sr = _av_read(path, seek_time, seek_duration)
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[-1] == expected_num_frames
-
- def test_avread_seek_outofbound(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- wav = get_white_noise(ch, n_frames)
- path = self.get_temp_path(f'reference_c_{sample_rate}_{ch}.wav')
- save_wav(path, wav, sample_rate)
- seek_time = 1.5
- read_wav, read_sr = _av_read(path, seek_time, 1.)
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[-1] == 0
-
- def test_avread_seek_edge(self):
- sample_rates = [8000, 16_000]
- # some of these values will have
- # int(((frames - 1) / sample_rate) * sample_rate) != (frames - 1)
- n_frames = [1000, 1001, 1002]
- channels = [1, 2]
- for sample_rate, ch, frames in product(sample_rates, channels, n_frames):
- duration = frames / sample_rate
- wav = get_white_noise(ch, frames)
- path = self.get_temp_path(f'reference_d_{sample_rate}_{ch}.wav')
- save_wav(path, wav, sample_rate)
- seek_time = (frames - 1) / sample_rate
- seek_frames = int(seek_time * sample_rate)
- read_wav, read_sr = _av_read(path, seek_time, duration)
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[-1] == (frames - seek_frames)
-
-
-class TestAudioWrite(TempDirMixin):
-
- def test_audio_write_wav(self):
- torch.manual_seed(1234)
- sample_rates = [8000, 16_000]
- n_frames = [1000, 1001, 1002]
- channels = [1, 2]
- strategies = ["peak", "clip", "rms"]
- formats = ["wav", "mp3"]
- for sample_rate, ch, frames in product(sample_rates, channels, n_frames):
- for format_, strategy in product(formats, strategies):
- wav = get_white_noise(ch, frames)
- path = self.get_temp_path(f'pred_{sample_rate}_{ch}')
- audio_write(path, wav, sample_rate, format_, strategy=strategy)
- read_wav, read_sr = torchaudio.load(f'{path}.{format_}')
- if format_ == "wav":
- assert read_wav.shape == wav.shape
-
- if format_ == "wav" and strategy in ["peak", "rms"]:
- rescaled_read_wav = read_wav / read_wav.abs().max() * wav.abs().max()
- # for a Gaussian, the typical max scale will be less than ~5x the std.
- # The error when writing to disk will ~ 1/2**15, and when rescaling, 5x that.
- # For RMS target, rescaling leaves more headroom by default, leading
- # to a 20x rescaling typically
- atol = (5 if strategy == "peak" else 20) / 2**15
- delta = (rescaled_read_wav - wav).abs().max()
- assert torch.allclose(wav, rescaled_read_wav, rtol=0, atol=atol), (delta, atol)
- formats = ["wav"] # faster unit tests
diff --git a/spaces/MLVKU/Human_Object_Interaction/hotr/data/transforms/transforms.py b/spaces/MLVKU/Human_Object_Interaction/hotr/data/transforms/transforms.py
deleted file mode 100644
index cf41a4dc07d9e0fbd77eb32550c087b48e7cdeed..0000000000000000000000000000000000000000
--- a/spaces/MLVKU/Human_Object_Interaction/hotr/data/transforms/transforms.py
+++ /dev/null
@@ -1,387 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-"""
-Transforms and data augmentation for both image + bbox.
-"""
-import random
-
-import PIL
-import torch
-import torchvision.transforms as T
-import torchvision.transforms.functional as F
-
-from hotr.util.box_ops import box_xyxy_to_cxcywh
-from hotr.util.misc import interpolate
-
-
-def crop(image, target, region):
- cropped_image = F.crop(image, *region)
-
- target = target.copy()
- i, j, h, w = region
-
- # should we do something wrt the original size?
- target["size"] = torch.tensor([h, w])
- max_size = torch.as_tensor([w, h], dtype=torch.float32)
-
- fields = ["labels", "area", "iscrowd"] # add additional fields
- if "inst_actions" in target.keys():
- fields.append("inst_actions")
-
- if "boxes" in target:
- boxes = target["boxes"]
- cropped_boxes = boxes - torch.as_tensor([j, i, j, i])
- cropped_boxes = torch.min(cropped_boxes.reshape(-1, 2, 2), max_size)
- cropped_boxes = cropped_boxes.clamp(min=0)
- area = (cropped_boxes[:, 1, :] - cropped_boxes[:, 0, :]).prod(dim=1)
- target["boxes"] = cropped_boxes.reshape(-1, 4)
- target["area"] = area
- fields.append("boxes")
-
- if "pair_boxes" in target or ("sub_boxes" in target and "obj_boxes" in target):
- if "pair_boxes" in target:
- pair_boxes = target["pair_boxes"]
- hboxes = pair_boxes[:, :4]
- oboxes = pair_boxes[:, 4:]
- if ("sub_boxes" in target and "obj_boxes" in target):
- hboxes = target["sub_boxes"]
- oboxes = target["obj_boxes"]
-
- cropped_hboxes = hboxes - torch.as_tensor([j, i, j, i])
- cropped_hboxes = torch.min(cropped_hboxes.reshape(-1, 2, 2), max_size)
- cropped_hboxes = cropped_hboxes.clamp(min=0)
- hboxes = cropped_hboxes.reshape(-1, 4)
-
- obj_mask = (oboxes[:, 0] != -1)
- if obj_mask.sum() != 0:
- cropped_oboxes = oboxes[obj_mask] - torch.as_tensor([j, i, j, i])
- cropped_oboxes = torch.min(cropped_oboxes.reshape(-1, 2, 2), max_size)
- cropped_oboxes = cropped_oboxes.clamp(min=0)
- oboxes[obj_mask] = cropped_oboxes.reshape(-1, 4)
- else:
- cropped_oboxes = oboxes
-
- cropped_pair_boxes = torch.cat([hboxes, oboxes], dim=-1)
- target["pair_boxes"] = cropped_pair_boxes
- pair_fields = ["pair_boxes", "pair_actions", "pair_targets"]
-
- if "masks" in target:
-        # FIXME should we update the area here if there are no boxes?
- target['masks'] = target['masks'][:, i:i + h, j:j + w]
- fields.append("masks")
-
- # remove elements for which the boxes or masks that have zero area
- if "boxes" in target or "masks" in target:
- # favor boxes selection when defining which elements to keep
- # this is compatible with previous implementation
- if "boxes" in target:
- cropped_boxes = target['boxes'].reshape(-1, 2, 2)
- keep = torch.all(cropped_boxes[:, 1, :] > cropped_boxes[:, 0, :], dim=1)
- else:
- keep = target['masks'].flatten(1).any(1)
-
- for field in fields:
- if field in target: # added this because there is no 'iscrowd' field in v-coco dataset
- target[field] = target[field][keep]
-
- # remove elements that have redundant area
- if "boxes" in target and "labels" in target:
- cropped_boxes = target['boxes']
- cropped_labels = target['labels']
-
- cnr, keep_idx = [], []
- for idx, (cropped_box, cropped_lbl) in enumerate(zip(cropped_boxes, cropped_labels)):
- if str((cropped_box, cropped_lbl)) not in cnr:
- cnr.append(str((cropped_box, cropped_lbl)))
- keep_idx.append(True)
- else: keep_idx.append(False)
-
- for field in fields:
- if field in target:
- target[field] = target[field][keep_idx]
-
- # remove elements for which pair boxes have zero area
- if "pair_boxes" in target:
- cropped_hboxes = target["pair_boxes"][:, :4].reshape(-1, 2, 2)
- cropped_oboxes = target["pair_boxes"][:, 4:].reshape(-1, 2, 2)
- keep_h = torch.all(cropped_hboxes[:, 1, :] > cropped_hboxes[:, 0, :], dim=1)
- keep_o = torch.all(cropped_oboxes[:, 1, :] > cropped_oboxes[:, 0, :], dim=1)
- not_empty_o = torch.all(target["pair_boxes"][:, 4:] >= 0, dim=1)
- discard_o = (~keep_o) & not_empty_o
- if (discard_o).sum() > 0:
- target["pair_boxes"][discard_o, 4:] = -1
-
- for pair_field in pair_fields:
- target[pair_field] = target[pair_field][keep_h]
-
- return cropped_image, target
-
-
-def hflip(image, target):
- flipped_image = F.hflip(image)
-
- w, h = image.size
-
- target = target.copy()
- if "boxes" in target:
- boxes = target["boxes"]
- boxes = boxes[:, [2, 1, 0, 3]] * torch.as_tensor([-1, 1, -1, 1]) + torch.as_tensor([w, 0, w, 0])
- target["boxes"] = boxes
-
- if "pair_boxes" in target:
- pair_boxes = target["pair_boxes"]
- hboxes = pair_boxes[:, :4]
- oboxes = pair_boxes[:, 4:]
-
- # human flip
- hboxes = hboxes[:, [2, 1, 0, 3]] * torch.as_tensor([-1, 1, -1, 1]) + torch.as_tensor([w, 0, w, 0])
-
- # object flip
- obj_mask = (oboxes[:, 0] != -1)
- if obj_mask.sum() != 0:
- o_tmp = oboxes[:, [2, 1, 0, 3]] * torch.as_tensor([-1, 1, -1, 1]) + torch.as_tensor([w, 0, w, 0])
- oboxes[obj_mask] = o_tmp[obj_mask]
-
- pair_boxes = torch.cat([hboxes, oboxes], dim=-1)
- target["pair_boxes"] = pair_boxes
-
- if "masks" in target:
- target['masks'] = target['masks'].flip(-1)
-
- return flipped_image, target
-
-
-def resize(image, target, size, max_size=None):
- # size can be min_size (scalar) or (w, h) tuple
-
- def get_size_with_aspect_ratio(image_size, size, max_size=None):
- w, h = image_size
- if max_size is not None:
- min_original_size = float(min((w, h)))
- max_original_size = float(max((w, h)))
- if max_original_size / min_original_size * size > max_size:
- size = int(round(max_size * min_original_size / max_original_size))
-
- if (w <= h and w == size) or (h <= w and h == size):
- return (h, w)
-
- if w < h:
- ow = size
- oh = int(size * h / w)
- else:
- oh = size
- ow = int(size * w / h)
-
- return (oh, ow)
-
- def get_size(image_size, size, max_size=None):
- if isinstance(size, (list, tuple)):
- return size[::-1]
- else:
- return get_size_with_aspect_ratio(image_size, size, max_size)
-
- size = get_size(image.size, size, max_size)
- rescaled_image = F.resize(image, size)
-
- if target is None:
- return rescaled_image, None
-
- ratios = tuple(float(s) / float(s_orig) for s, s_orig in zip(rescaled_image.size, image.size))
- ratio_width, ratio_height = ratios
-
- target = target.copy()
- if "boxes" in target:
- boxes = target["boxes"]
- scaled_boxes = boxes * torch.as_tensor([ratio_width, ratio_height, ratio_width, ratio_height])
- target["boxes"] = scaled_boxes
-
- if "pair_boxes" in target:
- hboxes = target["pair_boxes"][:, :4]
- scaled_hboxes = hboxes * torch.as_tensor([ratio_width, ratio_height, ratio_width, ratio_height])
- hboxes = scaled_hboxes
-
- oboxes = target["pair_boxes"][:, 4:]
- obj_mask = (oboxes[:, 0] != -1)
- if obj_mask.sum() != 0:
- scaled_oboxes = oboxes[obj_mask] * torch.as_tensor([ratio_width, ratio_height, ratio_width, ratio_height])
- oboxes[obj_mask] = scaled_oboxes
-
- target["pair_boxes"] = torch.cat([hboxes, oboxes], dim=-1)
-
- if "area" in target:
- area = target["area"]
- scaled_area = area * (ratio_width * ratio_height)
- target["area"] = scaled_area
-
- h, w = size
- target["size"] = torch.tensor([h, w])
-
- if "masks" in target:
- target['masks'] = interpolate(
- target['masks'][:, None].float(), size, mode="nearest")[:, 0] > 0.5
-
- return rescaled_image, target
-
-
-def pad(image, target, padding):
- # assumes that we only pad on the bottom right corners
- padded_image = F.pad(image, (0, 0, padding[0], padding[1]))
- if target is None:
- return padded_image, None
- target = target.copy()
- # should we do something wrt the original size?
- target["size"] = torch.tensor(padded_image[::-1])
- if "masks" in target:
- target['masks'] = torch.nn.functional.pad(target['masks'], (0, padding[0], 0, padding[1]))
- return padded_image, target
-
-
-class RandomCrop(object):
- def __init__(self, size):
- self.size = size
-
- def __call__(self, img, target):
- region = T.RandomCrop.get_params(img, self.size)
- return crop(img, target, region)
-
-
-class RandomSizeCrop(object):
- def __init__(self, min_size: int, max_size: int):
- self.min_size = min_size
- self.max_size = max_size
-
- def __call__(self, img: PIL.Image.Image, target: dict):
- w = random.randint(self.min_size, min(img.width, self.max_size))
- h = random.randint(self.min_size, min(img.height, self.max_size))
- region = T.RandomCrop.get_params(img, [h, w])
- return crop(img, target, region)
-
-
-class CenterCrop(object):
- def __init__(self, size):
- self.size = size
-
- def __call__(self, img, target):
- image_width, image_height = img.size
- crop_height, crop_width = self.size
- crop_top = int(round((image_height - crop_height) / 2.))
- crop_left = int(round((image_width - crop_width) / 2.))
- return crop(img, target, (crop_top, crop_left, crop_height, crop_width))
-
-
-class RandomHorizontalFlip(object):
- def __init__(self, p=0.5):
- self.p = p
-
- def __call__(self, img, target):
- if random.random() < self.p:
- return hflip(img, target)
- return img, target
-
-
-class RandomResize(object):
- def __init__(self, sizes, max_size=None):
- assert isinstance(sizes, (list, tuple))
- self.sizes = sizes
- self.max_size = max_size
-
- def __call__(self, img, target=None):
- size = random.choice(self.sizes)
- return resize(img, target, size, self.max_size)
-
-
-class RandomPad(object):
- def __init__(self, max_pad):
- self.max_pad = max_pad
-
- def __call__(self, img, target):
- pad_x = random.randint(0, self.max_pad)
- pad_y = random.randint(0, self.max_pad)
- return pad(img, target, (pad_x, pad_y))
-
-
-class RandomSelect(object):
- """
- Randomly selects between transforms1 and transforms2,
- with probability p for transforms1 and (1 - p) for transforms2
- """
- def __init__(self, transforms1, transforms2, p=0.5):
- self.transforms1 = transforms1
- self.transforms2 = transforms2
- self.p = p
-
- def __call__(self, img, target):
- if random.random() < self.p:
- return self.transforms1(img, target)
- return self.transforms2(img, target)
-
-
-class ToTensor(object):
- def __call__(self, img, target):
- return F.to_tensor(img), target
-
-
-class RandomErasing(object):
-
- def __init__(self, *args, **kwargs):
- self.eraser = T.RandomErasing(*args, **kwargs)
-
- def __call__(self, img, target):
- return self.eraser(img), target
-
-
-class Normalize(object):
- def __init__(self, mean, std):
- self.mean = mean
- self.std = std
-
- def __call__(self, image, target=None):
- image = F.normalize(image, mean=self.mean, std=self.std)
- if target is None:
- return image, None
- target = target.copy()
- h, w = image.shape[-2:]
- if "boxes" in target:
- boxes = target["boxes"]
- boxes = box_xyxy_to_cxcywh(boxes)
- boxes = boxes / torch.tensor([w, h, w, h], dtype=torch.float32)
- target["boxes"] = boxes
-
- if "pair_boxes" in target:
- hboxes = target["pair_boxes"][:, :4]
- hboxes = box_xyxy_to_cxcywh(hboxes)
- hboxes = hboxes / torch.tensor([w, h, w, h], dtype=torch.float32)
-
- oboxes = target["pair_boxes"][:, 4:]
- obj_mask = (oboxes[:, 0] != -1)
- if obj_mask.sum() != 0:
- oboxes[obj_mask] = box_xyxy_to_cxcywh(oboxes[obj_mask])
- oboxes[obj_mask] = oboxes[obj_mask] / torch.tensor([w, h, w, h], dtype=torch.float32)
-
- pair_boxes = torch.cat([hboxes, oboxes], dim=-1)
- target["pair_boxes"] = pair_boxes
-
- return image, target
-
-class ColorJitter(object):
-    def __init__(self, brightness=0, contrast=0, saturation=0, hue=0):
-        self.color_jitter = T.ColorJitter(brightness, contrast, saturation, hue)
-
- def __call__(self, img, target):
- return self.color_jitter(img), target
-
-class Compose(object):
- def __init__(self, transforms):
- self.transforms = transforms
-
- def __call__(self, image, target):
- for t in self.transforms:
- image, target = t(image, target)
- return image, target
-
- def __repr__(self):
- format_string = self.__class__.__name__ + "("
- for t in self.transforms:
- format_string += "\n"
- format_string += " {0}".format(t)
- format_string += "\n)"
- return format_string
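Editor's note: the horizontal-flip box arithmetic used in hflip() above (the [2, 1, 0, 3] reindex combined with [-1, 1, -1, 1] and [w, 0, w, 0]) is easiest to verify on a single box. A minimal standalone sketch, not tied to the deleted module:

    import torch

    w = 100  # image width (illustrative value)
    boxes = torch.tensor([[10., 20., 30., 40.]])  # xyxy

    # Same trick as hflip(): swap x0/x1, negate them, then shift by the width,
    # so [x0, y0, x1, y1] becomes [w - x1, y0, w - x0, y1].
    flipped = boxes[:, [2, 1, 0, 3]] * torch.as_tensor([-1, 1, -1, 1]) + torch.as_tensor([w, 0, w, 0])
    assert torch.equal(flipped, torch.tensor([[70., 20., 90., 40.]]))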
diff --git a/spaces/MechaXYZ/Audio-to-Text/app.py b/spaces/MechaXYZ/Audio-to-Text/app.py
deleted file mode 100644
index e5b88e53c45c87cc26c020ed485739b9a710b5b4..0000000000000000000000000000000000000000
--- a/spaces/MechaXYZ/Audio-to-Text/app.py
+++ /dev/null
@@ -1,109 +0,0 @@
-import torch
-
-import gradio as gr
-import pytube as pt
-from transformers import pipeline
-
-MODEL_NAME = "openai/whisper-large-v2"
-
-device = 0 if torch.cuda.is_available() else "cpu"
-
-pipe = pipeline(
- task="automatic-speech-recognition",
- model=MODEL_NAME,
- chunk_length_s=30,
- device=device,
- # return_timestamps=True
-)
-
-
-all_special_ids = pipe.tokenizer.all_special_ids
-transcribe_token_id = all_special_ids[-5]
-translate_token_id = all_special_ids[-6]
-
-
-def transcribe(microphone, file_upload, task):
- warn_output = ""
- if (microphone is not None) and (file_upload is not None):
- warn_output = (
- "WARNING: You've uploaded an audio file and used the microphone. "
- "The recorded file from the microphone will be used and the uploaded audio will be discarded.\n"
- )
-
- elif (microphone is None) and (file_upload is None):
- return "ERROR: You have to either use the microphone or upload an audio file"
-
- file = microphone if microphone is not None else file_upload
-
- pipe.model.config.forced_decoder_ids = [[2, transcribe_token_id if task=="transcribe" else translate_token_id]]
- text = pipe(file,return_timestamps=True)["text"]
-
- return warn_output + text
-
-
-def _return_yt_html_embed(yt_url):
- video_id = yt_url.split("?v=")[-1]
- HTML_str = (
-        f'<center> <iframe width="500" height="320" src="https://www.youtube.com/embed/{video_id}"> </iframe>'
-        " </center>"
- )
- return HTML_str
-
-
-def yt_transcribe(yt_url, task):
- yt = pt.YouTube(yt_url)
- html_embed_str = _return_yt_html_embed(yt_url)
- stream = yt.streams.filter(only_audio=True)[0]
- stream.download(filename="audio.mp3")
-
- pipe.model.config.forced_decoder_ids = [[2, transcribe_token_id if task=="transcribe" else translate_token_id]]
-
- text = pipe("audio.mp3")["text"]
-
- return html_embed_str, text
-
-
-demo = gr.Blocks()
-
-mf_transcribe = gr.Interface(
- fn=transcribe,
- inputs=[
- gr.inputs.Audio(source="microphone", type="filepath", optional=True),
- gr.inputs.Audio(source="upload", type="filepath", optional=True),
- gr.inputs.Radio(["transcribe", "translate"], label="Task", default="transcribe"),
- ],
- outputs="text",
- layout="horizontal",
- theme="huggingface",
- title="Audio-to-Text Playground: Transcribe Audio",
- description=(
- "Transcribe long-form microphone or audio inputs with the click of a button! Demo uses the"
- f" checkpoint [{MODEL_NAME}](https://huggingface.co/{MODEL_NAME}) and 🤗 Transformers to transcribe audio files"
- " of arbitrary length."
- ),
- allow_flagging="never",
-)
-
-yt_transcribe = gr.Interface(
- fn=yt_transcribe,
- inputs=[
- gr.inputs.Textbox(lines=1, placeholder="Paste the URL to a YouTube video here", label="YouTube URL"),
- gr.inputs.Radio(["transcribe", "translate"], label="Task", default="transcribe")
- ],
- outputs=["html", "text"],
- layout="horizontal",
- theme="huggingface",
- title="Audio-to-Text Playground: Transcribe YouTube",
- description=(
- "Transcribe long-form YouTube videos with the click of a button! Demo uses the checkpoint"
- f" [{MODEL_NAME}](https://huggingface.co/{MODEL_NAME}) and 🤗 Transformers to transcribe video files of"
- " arbitrary length."
- ),
- allow_flagging="never",
-)
-
-with demo:
- gr.TabbedInterface([mf_transcribe, yt_transcribe], ["Transcribe Audio", "Transcribe YouTube"])
-
-demo.launch(enable_queue=True)
-
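Editor's note: a minimal standalone usage sketch of the same long-form transcription setup. The audio path "example.wav" is a placeholder, and loading the whisper-large-v2 checkpoint is assumed to be feasible on the target machine.

    from transformers import pipeline

    # chunk_length_s splits long audio into 30 s windows so arbitrarily long
    # files can be transcribed, mirroring the pipeline configuration above.
    asr = pipeline(
        task="automatic-speech-recognition",
        model="openai/whisper-large-v2",
        chunk_length_s=30,
    )
    result = asr("example.wav", return_timestamps=True)  # placeholder path
    print(result["text"])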
diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/apis/inferencers/base_mmocr_inferencer.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/apis/inferencers/base_mmocr_inferencer.py
deleted file mode 100644
index 02ac643d9ffea8dddde098aa02038ebfdc1cce25..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/mmocr/apis/inferencers/base_mmocr_inferencer.py
+++ /dev/null
@@ -1,405 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os.path as osp
-from typing import Dict, Iterable, List, Optional, Sequence, Tuple, Union
-
-import mmcv
-import mmengine
-import numpy as np
-from mmengine.dataset import Compose
-from mmengine.infer.infer import BaseInferencer, ModelType
-from mmengine.model.utils import revert_sync_batchnorm
-from mmengine.registry import init_default_scope
-from mmengine.structures import InstanceData
-from rich.progress import track
-from torch import Tensor
-
-from mmocr.utils import ConfigType
-
-InstanceList = List[InstanceData]
-InputType = Union[str, np.ndarray]
-InputsType = Union[InputType, Sequence[InputType]]
-PredType = Union[InstanceData, InstanceList]
-ImgType = Union[np.ndarray, Sequence[np.ndarray]]
-ResType = Union[Dict, List[Dict], InstanceData, List[InstanceData]]
-
-
-class BaseMMOCRInferencer(BaseInferencer):
- """Base inferencer.
-
- Args:
- model (str, optional): Path to the config file or the model name
- defined in metafile. For example, it could be
- "dbnet_resnet18_fpnc_1200e_icdar2015" or
- "configs/textdet/dbnet/dbnet_resnet18_fpnc_1200e_icdar2015.py".
- If model is not specified, user must provide the
- `weights` saved by MMEngine which contains the config string.
- Defaults to None.
- weights (str, optional): Path to the checkpoint. If it is not specified
- and model is a model name of metafile, the weights will be loaded
- from metafile. Defaults to None.
- device (str, optional): Device to run inference. If None, the available
- device will be automatically used. Defaults to None.
- scope (str, optional): The scope of the model. Defaults to "mmocr".
- """
-
- preprocess_kwargs: set = set()
- forward_kwargs: set = set()
- visualize_kwargs: set = {
- 'return_vis', 'show', 'wait_time', 'draw_pred', 'pred_score_thr',
- 'save_vis'
- }
- postprocess_kwargs: set = {
- 'print_result', 'return_datasample', 'save_pred'
- }
- loading_transforms: list = ['LoadImageFromFile', 'LoadImageFromNDArray']
-
- def __init__(self,
- model: Union[ModelType, str, None] = None,
- weights: Optional[str] = None,
- device: Optional[str] = None,
- scope: str = 'mmocr') -> None:
- # A global counter tracking the number of images given in the form
- # of ndarray, for naming the output images
- self.num_unnamed_imgs = 0
- init_default_scope(scope)
- super().__init__(
- model=model, weights=weights, device=device, scope=scope)
- self.model = revert_sync_batchnorm(self.model)
-
- def preprocess(self, inputs: InputsType, batch_size: int = 1, **kwargs):
- """Process the inputs into a model-feedable format.
-
- Args:
- inputs (InputsType): Inputs given by user.
- batch_size (int): batch size. Defaults to 1.
-
- Yields:
- Any: Data processed by the ``pipeline`` and ``collate_fn``.
- """
- chunked_data = self._get_chunk_data(inputs, batch_size)
- yield from map(self.collate_fn, chunked_data)
-
- def _get_chunk_data(self, inputs: Iterable, chunk_size: int):
- """Get batch data from inputs.
-
- Args:
- inputs (Iterable): An iterable dataset.
- chunk_size (int): Equivalent to batch size.
-
- Yields:
- list: batch data.
- """
- inputs_iter = iter(inputs)
- while True:
- try:
- chunk_data = []
- for _ in range(chunk_size):
- inputs_ = next(inputs_iter)
- pipe_out = self.pipeline(inputs_)
- if pipe_out['data_samples'].get('img_path') is None:
- pipe_out['data_samples'].set_metainfo(
- dict(img_path=f'{self.num_unnamed_imgs}.jpg'))
- self.num_unnamed_imgs += 1
- chunk_data.append((inputs_, pipe_out))
- yield chunk_data
- except StopIteration:
- if chunk_data:
- yield chunk_data
- break
-
- def __call__(self,
- inputs: InputsType,
- return_datasamples: bool = False,
- batch_size: int = 1,
- progress_bar: bool = True,
- return_vis: bool = False,
- show: bool = False,
- wait_time: int = 0,
- draw_pred: bool = True,
- pred_score_thr: float = 0.3,
- out_dir: str = 'results/',
- save_vis: bool = False,
- save_pred: bool = False,
- print_result: bool = False,
- **kwargs) -> dict:
- """Call the inferencer.
-
- Args:
- inputs (InputsType): Inputs for the inferencer. It can be a path
- to image / image directory, or an array, or a list of these.
- Note: If it's an numpy array, it should be in BGR order.
- return_datasamples (bool): Whether to return results as
- :obj:`BaseDataElement`. Defaults to False.
- batch_size (int): Inference batch size. Defaults to 1.
- progress_bar (bool): Whether to show a progress bar. Defaults to
- True.
- return_vis (bool): Whether to return the visualization result.
- Defaults to False.
- show (bool): Whether to display the visualization results in a
- popup window. Defaults to False.
- wait_time (float): The interval of show (s). Defaults to 0.
- draw_pred (bool): Whether to draw predicted bounding boxes.
- Defaults to True.
- pred_score_thr (float): Minimum score of bboxes to draw.
- Defaults to 0.3.
- out_dir (str): Output directory of results. Defaults to 'results/'.
- save_vis (bool): Whether to save the visualization results to
- "out_dir". Defaults to False.
- save_pred (bool): Whether to save the inference results to
- "out_dir". Defaults to False.
- print_result (bool): Whether to print the inference result w/o
- visualization to the console. Defaults to False.
-
- **kwargs: Other keyword arguments passed to :meth:`preprocess`,
- :meth:`forward`, :meth:`visualize` and :meth:`postprocess`.
- Each key in kwargs should be in the corresponding set of
- ``preprocess_kwargs``, ``forward_kwargs``, ``visualize_kwargs``
- and ``postprocess_kwargs``.
-
- Returns:
- dict: Inference and visualization results, mapped from
- "predictions" and "visualization".
- """
- if (save_vis or save_pred) and not out_dir:
- raise ValueError('out_dir must be specified when save_vis or '
- 'save_pred is True!')
- if out_dir:
- img_out_dir = osp.join(out_dir, 'vis')
- pred_out_dir = osp.join(out_dir, 'preds')
- else:
- img_out_dir, pred_out_dir = '', ''
- (
- preprocess_kwargs,
- forward_kwargs,
- visualize_kwargs,
- postprocess_kwargs,
- ) = self._dispatch_kwargs(
- return_vis=return_vis,
- show=show,
- wait_time=wait_time,
- draw_pred=draw_pred,
- pred_score_thr=pred_score_thr,
- save_vis=save_vis,
- save_pred=save_pred,
- print_result=print_result,
- **kwargs)
-
- ori_inputs = self._inputs_to_list(inputs)
- inputs = self.preprocess(
- ori_inputs, batch_size=batch_size, **preprocess_kwargs)
- results = {'predictions': [], 'visualization': []}
- for ori_inputs, data in track(
- inputs, description='Inference', disable=not progress_bar):
- preds = self.forward(data, **forward_kwargs)
- visualization = self.visualize(
- ori_inputs, preds, img_out_dir=img_out_dir, **visualize_kwargs)
- batch_res = self.postprocess(
- preds,
- visualization,
- return_datasamples,
- pred_out_dir=pred_out_dir,
- **postprocess_kwargs)
- results['predictions'].extend(batch_res['predictions'])
- if return_vis and batch_res['visualization'] is not None:
- results['visualization'].extend(batch_res['visualization'])
- return results
-
- def _init_pipeline(self, cfg: ConfigType) -> Compose:
- """Initialize the test pipeline."""
- pipeline_cfg = cfg.test_dataloader.dataset.pipeline
-
- # For inference, the key of ``instances`` is not used.
- if 'meta_keys' in pipeline_cfg[-1]:
- pipeline_cfg[-1]['meta_keys'] = tuple(
- meta_key for meta_key in pipeline_cfg[-1]['meta_keys']
- if meta_key != 'instances')
-
- # Loading annotations is also not applicable
- idx = self._get_transform_idx(pipeline_cfg, 'LoadOCRAnnotations')
- if idx != -1:
- del pipeline_cfg[idx]
-
- for transform in self.loading_transforms:
- load_img_idx = self._get_transform_idx(pipeline_cfg, transform)
- if load_img_idx != -1:
- pipeline_cfg[load_img_idx]['type'] = 'InferencerLoader'
- break
- if load_img_idx == -1:
- raise ValueError(
- f'None of {self.loading_transforms} is found in the test '
- 'pipeline')
-
- return Compose(pipeline_cfg)
-
- def _get_transform_idx(self, pipeline_cfg: ConfigType, name: str) -> int:
- """Returns the index of the transform in a pipeline.
-
- If the transform is not found, returns -1.
- """
- for i, transform in enumerate(pipeline_cfg):
- if transform['type'] == name:
- return i
- return -1
-
- def visualize(self,
- inputs: InputsType,
- preds: PredType,
- return_vis: bool = False,
- show: bool = False,
- wait_time: int = 0,
- draw_pred: bool = True,
- pred_score_thr: float = 0.3,
- save_vis: bool = False,
- img_out_dir: str = '') -> Union[List[np.ndarray], None]:
- """Visualize predictions.
-
- Args:
- inputs (List[Union[str, np.ndarray]]): Inputs for the inferencer.
- preds (List[Dict]): Predictions of the model.
- return_vis (bool): Whether to return the visualization result.
- Defaults to False.
- show (bool): Whether to display the image in a popup window.
- Defaults to False.
- wait_time (float): The interval of show (s). Defaults to 0.
- draw_pred (bool): Whether to draw predicted bounding boxes.
- Defaults to True.
- pred_score_thr (float): Minimum score of bboxes to draw.
- Defaults to 0.3.
- save_vis (bool): Whether to save the visualization result. Defaults
- to False.
- img_out_dir (str): Output directory of visualization results.
- If left as empty, no file will be saved. Defaults to ''.
-
- Returns:
- List[np.ndarray] or None: Returns visualization results only if
- applicable.
- """
- if self.visualizer is None or not (show or save_vis or return_vis):
- return None
-
- if getattr(self, 'visualizer') is None:
-            raise ValueError('Visualization needs the "visualizer" term '
- 'defined in the config, but got None.')
-
- results = []
-
- for single_input, pred in zip(inputs, preds):
- if isinstance(single_input, str):
- img_bytes = mmengine.fileio.get(single_input)
- img = mmcv.imfrombytes(img_bytes, channel_order='rgb')
- elif isinstance(single_input, np.ndarray):
- img = single_input.copy()[:, :, ::-1] # to RGB
- else:
- raise ValueError('Unsupported input type: '
- f'{type(single_input)}')
- img_name = osp.splitext(osp.basename(pred.img_path))[0]
-
- if save_vis and img_out_dir:
- out_file = osp.splitext(img_name)[0]
- out_file = f'{out_file}.jpg'
- out_file = osp.join(img_out_dir, out_file)
- else:
- out_file = None
-
- visualization = self.visualizer.add_datasample(
- img_name,
- img,
- pred,
- show=show,
- wait_time=wait_time,
- draw_gt=False,
- draw_pred=draw_pred,
- pred_score_thr=pred_score_thr,
- out_file=out_file,
- )
- results.append(visualization)
-
- return results
-
- def postprocess(
- self,
- preds: PredType,
- visualization: Optional[List[np.ndarray]] = None,
- return_datasample: bool = False,
- print_result: bool = False,
- save_pred: bool = False,
- pred_out_dir: str = '',
- ) -> Union[ResType, Tuple[ResType, np.ndarray]]:
- """Process the predictions and visualization results from ``forward``
- and ``visualize``.
-
- This method should be responsible for the following tasks:
-
- 1. Convert datasamples into a json-serializable dict if needed.
- 2. Pack the predictions and visualization results and return them.
- 3. Dump or log the predictions.
-
- Args:
- preds (List[Dict]): Predictions of the model.
- visualization (Optional[np.ndarray]): Visualized predictions.
- return_datasample (bool): Whether to use Datasample to store
- inference results. If False, dict will be used.
- print_result (bool): Whether to print the inference result w/o
- visualization to the console. Defaults to False.
- save_pred (bool): Whether to save the inference result. Defaults to
- False.
- pred_out_dir: File to save the inference results w/o
- visualization. If left as empty, no file will be saved.
- Defaults to ''.
-
- Returns:
- dict: Inference and visualization results with key ``predictions``
- and ``visualization``.
-
- - ``visualization`` (Any): Returned by :meth:`visualize`.
- - ``predictions`` (dict or DataSample): Returned by
- :meth:`forward` and processed in :meth:`postprocess`.
- If ``return_datasample=False``, it usually should be a
- json-serializable dict containing only basic data elements such
- as strings and numbers.
- """
- result_dict = {}
- results = preds
- if not return_datasample:
- results = []
- for pred in preds:
- result = self.pred2dict(pred)
- if save_pred and pred_out_dir:
- pred_name = osp.splitext(osp.basename(pred.img_path))[0]
- pred_name = f'{pred_name}.json'
- pred_out_file = osp.join(pred_out_dir, pred_name)
- mmengine.dump(result, pred_out_file)
- results.append(result)
- # Add img to the results after printing and dumping
- result_dict['predictions'] = results
- if print_result:
- print(result_dict)
- result_dict['visualization'] = visualization
- return result_dict
-
- def pred2dict(self, data_sample: InstanceData) -> Dict:
- """Extract elements necessary to represent a prediction into a
- dictionary.
-
- It's better to contain only basic data elements such as strings and
- numbers in order to guarantee it's json-serializable.
- """
- raise NotImplementedError
-
- def _array2list(self, array: Union[Tensor, np.ndarray,
- List]) -> List[float]:
- """Convert a tensor or numpy array to a list.
-
- Args:
- array (Union[Tensor, np.ndarray]): The array to be converted.
-
- Returns:
- List[float]: The converted list.
- """
- if isinstance(array, Tensor):
- return array.detach().cpu().numpy().tolist()
- if isinstance(array, np.ndarray):
- return array.tolist()
- if isinstance(array, list):
- array = [self._array2list(arr) for arr in array]
- return array
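Editor's note: the chunking generator in _get_chunk_data above boils down to a generic batching pattern. A self-contained sketch of just that pattern, with no mmocr dependencies:

    from typing import Iterable, Iterator, List


    def chunked(items: Iterable, chunk_size: int) -> Iterator[List]:
        """Yield lists of up to ``chunk_size`` items, flushing the remainder at the end."""
        it = iter(items)
        while True:
            try:
                chunk = []
                for _ in range(chunk_size):
                    chunk.append(next(it))
                yield chunk
            except StopIteration:
                if chunk:
                    yield chunk
                break


    assert list(chunked(range(5), 2)) == [[0, 1], [2, 3], [4]]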
diff --git a/spaces/NATSpeech/DiffSpeech/tasks/tts/diffspeech.py b/spaces/NATSpeech/DiffSpeech/tasks/tts/diffspeech.py
deleted file mode 100644
index 283bf9b62fed0c5f68a9f82887543b9413dd8955..0000000000000000000000000000000000000000
--- a/spaces/NATSpeech/DiffSpeech/tasks/tts/diffspeech.py
+++ /dev/null
@@ -1,111 +0,0 @@
-import torch
-
-from modules.tts.diffspeech.shallow_diffusion_tts import GaussianDiffusion
-from tasks.tts.fs2_orig import FastSpeech2OrigTask
-
-import utils
-from utils.commons.hparams import hparams
-from utils.commons.ckpt_utils import load_ckpt
-from utils.audio.pitch.utils import denorm_f0
-
-
-class DiffSpeechTask(FastSpeech2OrigTask):
- def build_tts_model(self):
- # get min and max
- # import torch
- # from tqdm import tqdm
- # v_min = torch.ones([80]) * 100
- # v_max = torch.ones([80]) * -100
- # for i, ds in enumerate(tqdm(self.dataset_cls('train'))):
- # v_max = torch.max(torch.max(ds['mel'].reshape(-1, 80), 0)[0], v_max)
- # v_min = torch.min(torch.min(ds['mel'].reshape(-1, 80), 0)[0], v_min)
- # if i % 100 == 0:
- # print(i, v_min, v_max)
- # print('final', v_min, v_max)
- dict_size = len(self.token_encoder)
- self.model = GaussianDiffusion(dict_size, hparams)
- if hparams['fs2_ckpt'] != '':
- load_ckpt(self.model.fs2, hparams['fs2_ckpt'], 'model', strict=True)
- # for k, v in self.model.fs2.named_parameters():
- # if 'predictor' not in k:
- # v.requires_grad = False
- # or
- for k, v in self.model.fs2.named_parameters():
- v.requires_grad = False
-
- def build_optimizer(self, model):
- self.optimizer = optimizer = torch.optim.AdamW(
- filter(lambda p: p.requires_grad, model.parameters()),
- lr=hparams['lr'],
- betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
- weight_decay=hparams['weight_decay'])
- return optimizer
-
- def build_scheduler(self, optimizer):
- return torch.optim.lr_scheduler.StepLR(optimizer, hparams['decay_steps'], gamma=0.5)
-
- def run_model(self, sample, infer=False, *args, **kwargs):
- txt_tokens = sample['txt_tokens'] # [B, T_t]
- spk_embed = sample.get('spk_embed')
- spk_id = sample.get('spk_ids')
- if not infer:
- target = sample['mels'] # [B, T_s, 80]
- mel2ph = sample['mel2ph'] # [B, T_s]
- f0 = sample.get('f0')
- uv = sample.get('uv')
- output = self.model(txt_tokens, mel2ph=mel2ph, spk_embed=spk_embed, spk_id=spk_id,
- ref_mels=target, f0=f0, uv=uv, infer=False)
- losses = {}
- if 'diff_loss' in output:
- losses['mel'] = output['diff_loss']
- self.add_dur_loss(output['dur'], mel2ph, txt_tokens, losses=losses)
- if hparams['use_pitch_embed']:
- self.add_pitch_loss(output, sample, losses)
- return losses, output
- else:
- use_gt_dur = kwargs.get('infer_use_gt_dur', hparams['use_gt_dur'])
- use_gt_f0 = kwargs.get('infer_use_gt_f0', hparams['use_gt_f0'])
- mel2ph, uv, f0 = None, None, None
- if use_gt_dur:
- mel2ph = sample['mel2ph']
- if use_gt_f0:
- f0 = sample['f0']
- uv = sample['uv']
- output = self.model(txt_tokens, mel2ph=mel2ph, spk_embed=spk_embed, spk_id=spk_id,
- ref_mels=None, f0=f0, uv=uv, infer=True)
- return output
-
- def save_valid_result(self, sample, batch_idx, model_out):
- sr = hparams['audio_sample_rate']
- f0_gt = None
- # mel_out = model_out['mel_out']
- if sample.get('f0') is not None:
- f0_gt = denorm_f0(sample['f0'][0].cpu(), sample['uv'][0].cpu())
- # self.plot_mel(batch_idx, sample['mels'], mel_out, f0s=f0_gt)
- if self.global_step > 0:
- # wav_pred = self.vocoder.spec2wav(mel_out[0].cpu(), f0=f0_gt)
- # self.logger.add_audio(f'wav_val_{batch_idx}', wav_pred, self.global_step, sr)
- # with gt duration
- model_out = self.run_model(sample, infer=True, infer_use_gt_dur=True)
- dur_info = self.get_plot_dur_info(sample, model_out)
- del dur_info['dur_pred']
- wav_pred = self.vocoder.spec2wav(model_out['mel_out'][0].cpu(), f0=f0_gt)
- self.logger.add_audio(f'wav_gdur_{batch_idx}', wav_pred, self.global_step, sr)
- self.plot_mel(batch_idx, sample['mels'], model_out['mel_out'][0], f'diffmel_gdur_{batch_idx}',
- dur_info=dur_info, f0s=f0_gt)
- self.plot_mel(batch_idx, sample['mels'], model_out['fs2_mel'][0], f'fs2mel_gdur_{batch_idx}',
- dur_info=dur_info, f0s=f0_gt) # gt mel vs. fs2 mel
-
- # with pred duration
- if not hparams['use_gt_dur']:
- model_out = self.run_model(sample, infer=True, infer_use_gt_dur=False)
- dur_info = self.get_plot_dur_info(sample, model_out)
- self.plot_mel(batch_idx, sample['mels'], model_out['mel_out'][0], f'mel_pdur_{batch_idx}',
- dur_info=dur_info, f0s=f0_gt)
- wav_pred = self.vocoder.spec2wav(model_out['mel_out'][0].cpu(), f0=f0_gt)
- self.logger.add_audio(f'wav_pdur_{batch_idx}', wav_pred, self.global_step, sr)
- # gt wav
- if self.global_step <= hparams['valid_infer_interval']:
- mel_gt = sample['mels'][0].cpu()
- wav_gt = self.vocoder.spec2wav(mel_gt, f0=f0_gt)
- self.logger.add_audio(f'wav_gt_{batch_idx}', wav_gt, self.global_step, sr)
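Editor's note: the freeze-then-filter pattern from build_tts_model/build_optimizer above (FastSpeech2 weights frozen, AdamW built only over parameters that still require grad) in isolation. The modules and hyperparameters below are arbitrary stand-ins, not the actual GaussianDiffusion model or hparams values.

    import torch
    import torch.nn as nn

    model = nn.ModuleDict({
        "fs2": nn.Linear(8, 8),       # stand-in for the pretrained FastSpeech2 sub-module
        "denoiser": nn.Linear(8, 8),  # stand-in for the diffusion decoder
    })

    # Freeze the pretrained part, like the loop over model.fs2 parameters above.
    for p in model["fs2"].parameters():
        p.requires_grad = False

    # Only the still-trainable parameters are handed to the optimizer.
    optimizer = torch.optim.AdamW(
        filter(lambda p: p.requires_grad, model.parameters()),
        lr=2e-4, betas=(0.9, 0.98), weight_decay=0.0,  # illustrative values
    )
    assert all(p.requires_grad for group in optimizer.param_groups for p in group["params"])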
diff --git a/spaces/NATSpeech/DiffSpeech/utils/text/encoding.py b/spaces/NATSpeech/DiffSpeech/utils/text/encoding.py
deleted file mode 100644
index f09f514613fd44a27450fe7c04cbdf5ebfbe78a8..0000000000000000000000000000000000000000
--- a/spaces/NATSpeech/DiffSpeech/utils/text/encoding.py
+++ /dev/null
@@ -1,9 +0,0 @@
-import chardet
-
-
-def get_encoding(file):
- with open(file, 'rb') as f:
- encoding = chardet.detect(f.read())['encoding']
- if encoding == 'GB2312':
- encoding = 'GB18030'
- return encoding
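Editor's note: a short usage sketch of the same chardet-based idea, detecting the encoding first and then opening the file with it, upgrading GB2312 to the GB18030 superset as the helper above does. The file name is a placeholder.

    import chardet


    def read_text(path: str) -> str:
        with open(path, 'rb') as f:
            encoding = chardet.detect(f.read())['encoding']
        if encoding == 'GB2312':
            # GB18030 is a superset of GB2312, so it decodes strictly more files.
            encoding = 'GB18030'
        with open(path, 'r', encoding=encoding) as f:
            return f.read()


    # text = read_text('some_file.txt')  # placeholder path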
diff --git a/spaces/Nadaal/chatgpt-demo/app.py b/spaces/Nadaal/chatgpt-demo/app.py
deleted file mode 100644
index 3a56aadf0c5972b89e657ebff6239406e4dd7319..0000000000000000000000000000000000000000
--- a/spaces/Nadaal/chatgpt-demo/app.py
+++ /dev/null
@@ -1,123 +0,0 @@
-import gradio as gr
-import os
-import openai
-import requests
-import json
-
-openai.api_key = os.environ.get("OPENAI_API_KEY")
-
-prompt_templates = {"Default ChatGPT": ""}
-
-def get_empty_state():
- return {"total_tokens": 0, "messages": []}
-
-def download_prompt_templates():
- url = "https://raw.githubusercontent.com/f/awesome-chatgpt-prompts/main/prompts.csv"
- response = requests.get(url)
-
- for line in response.text.splitlines()[1:]:
- act, prompt = line.split('","')
- prompt_templates[act.replace('"', '')] = prompt.replace('"', '')
-
- choices = list(prompt_templates.keys())
- return gr.update(value=choices[0], choices=choices)
-
-def on_token_change(user_token):
- openai.api_key = user_token or os.environ.get("OPENAI_API_KEY")
-
-def on_prompt_template_change(prompt_template):
- if not isinstance(prompt_template, str): return
- return prompt_templates[prompt_template]
-
-def submit_message(user_token, prompt, prompt_template, temperature, max_tokens, state):
-
- history = state['messages']
-
- if not prompt:
- return gr.update(value='', visible=state['total_tokens'] < 1_000), [(history[i]['content'], history[i+1]['content']) for i in range(0, len(history)-1, 2)], f"Total tokens used: {state['total_tokens']} / 3000", state
-
- prompt_template = prompt_templates[prompt_template]
-
- system_prompt = []
- if prompt_template:
- system_prompt = [{ "role": "system", "content": prompt_template }]
-
- prompt_msg = { "role": "user", "content": prompt }
-
- try:
- completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=system_prompt + history + [prompt_msg], temperature=temperature, max_tokens=max_tokens)
-
- history.append(prompt_msg)
- history.append(completion.choices[0].message.to_dict())
-
- state['total_tokens'] += completion['usage']['total_tokens']
-
- except Exception as e:
- history.append(prompt_msg)
- history.append({
- "role": "system",
- "content": f"Error: {e}"
- })
-
- total_tokens_used_msg = f"Total tokens used: {state['total_tokens']} / 3000" if not user_token else ""
- chat_messages = [(history[i]['content'], history[i+1]['content']) for i in range(0, len(history)-1, 2)]
- input_visibility = user_token or state['total_tokens'] < 3000
-
- return gr.update(value='', visible=input_visibility), chat_messages, total_tokens_used_msg, state
-
-def clear_conversation():
- return gr.update(value=None, visible=True), None, "", get_empty_state()
-
-css = """
- #col-container {max-width: 80%; margin-left: auto; margin-right: auto;}
- #chatbox {min-height: 400px;}
- #header {text-align: center;}
- #prompt_template_preview {padding: 1em; border-width: 1px; border-style: solid; border-color: #e0e0e0; border-radius: 4px;}
- #total_tokens_str {text-align: right; font-size: 0.8em; color: #666; height: 1em;}
- #label {font-size: 0.8em; padding: 0.5em; margin: 0;}
- .message { font-size: 1.2em; }
- """
-
-with gr.Blocks(css=css) as demo:
-
- state = gr.State(get_empty_state())
-
-
- with gr.Column(elem_id="col-container"):
- gr.Markdown("""## OpenAI ChatGPT Demo
-                    Using the official API (gpt-3.5-turbo model)
- Prompt templates from [awesome-chatgpt-prompts](https://github.com/f/awesome-chatgpt-prompts).
- Current limit is 3000 tokens per conversation.""",
- elem_id="header")
-
- with gr.Row():
- with gr.Column():
- chatbot = gr.Chatbot(elem_id="chatbox")
- input_message = gr.Textbox(show_label=False, placeholder="Enter text and press enter", visible=True).style(container=False)
- btn_submit = gr.Button("Submit")
- total_tokens_str = gr.Markdown(elem_id="total_tokens_str")
- btn_clear_conversation = gr.Button("🔃 Start New Conversation")
- with gr.Column():
-                prompt_template = gr.Dropdown(label="Set a custom instruction for the chatbot:", choices=list(prompt_templates.keys()))
- prompt_template_preview = gr.Markdown(elem_id="prompt_template_preview")
- gr.Markdown("Enter your own OpenAI API Key to remove the 3000 token limit. You can get it [here](https://platform.openai.com/account/api-keys).", elem_id="label")
- user_token = gr.Textbox(placeholder="OpenAI API Key", type="password", show_label=False)
- with gr.Accordion("Advanced parameters", open=False):
- temperature = gr.Slider(minimum=0, maximum=2.0, value=0.7, step=0.1, interactive=True, label="Temperature (higher = more creative/chaotic)")
- max_tokens = gr.Slider(minimum=100, maximum=4096, value=1000, step=1, interactive=True, label="Max tokens per response")
-
-                gr.HTML('''You can duplicate this Space.
-                Don't forget to set your own OpenAI API Key environment variable in Settings.
-                ''')
-
- btn_submit.click(submit_message, [user_token, input_message, prompt_template, temperature, max_tokens, state], [input_message, chatbot, total_tokens_str, state])
- input_message.submit(submit_message, [user_token, input_message, prompt_template, temperature, max_tokens, state], [input_message, chatbot, total_tokens_str, state])
- btn_clear_conversation.click(clear_conversation, [], [input_message, chatbot, total_tokens_str, state])
- prompt_template.change(on_prompt_template_change, inputs=[prompt_template], outputs=[prompt_template_preview])
- user_token.change(on_token_change, inputs=[user_token], outputs=[])
-
-
- demo.load(download_prompt_templates, inputs=None, outputs=[prompt_template])
-
-
-demo.launch(debug=True, height='800px')
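Editor's note: the list comprehension used in submit_message above to turn the flat, role-alternating history into Chatbot (user, assistant) tuples is worth seeing on its own. A minimal sketch with made-up messages:

    # history alternates user / assistant entries, as in submit_message above.
    history = [
        {"role": "user", "content": "Hi"},
        {"role": "assistant", "content": "Hello!"},
        {"role": "user", "content": "How are you?"},
        {"role": "assistant", "content": "Fine, thanks."},
    ]

    # Stride over the list two at a time, pairing each prompt with its reply.
    chat_messages = [
        (history[i]["content"], history[i + 1]["content"])
        for i in range(0, len(history) - 1, 2)
    ]
    assert chat_messages == [("Hi", "Hello!"), ("How are you?", "Fine, thanks.")]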
diff --git a/spaces/OAOA/DifFace/basicsr/data/vimeo90k_dataset.py b/spaces/OAOA/DifFace/basicsr/data/vimeo90k_dataset.py
deleted file mode 100644
index e5e33e1082667aeee61fecf2436fb287e82e0936..0000000000000000000000000000000000000000
--- a/spaces/OAOA/DifFace/basicsr/data/vimeo90k_dataset.py
+++ /dev/null
@@ -1,199 +0,0 @@
-import random
-import torch
-from pathlib import Path
-from torch.utils import data as data
-
-from basicsr.data.transforms import augment, paired_random_crop
-from basicsr.utils import FileClient, get_root_logger, imfrombytes, img2tensor
-from basicsr.utils.registry import DATASET_REGISTRY
-
-
-@DATASET_REGISTRY.register()
-class Vimeo90KDataset(data.Dataset):
- """Vimeo90K dataset for training.
-
- The keys are generated from a meta info txt file.
- basicsr/data/meta_info/meta_info_Vimeo90K_train_GT.txt
-
- Each line contains the following items, separated by a white space.
-
- 1. clip name;
- 2. frame number;
- 3. image shape
-
- Examples:
-
- ::
-
- 00001/0001 7 (256,448,3)
- 00001/0002 7 (256,448,3)
-
- - Key examples: "00001/0001"
- - GT (gt): Ground-Truth;
- - LQ (lq): Low-Quality, e.g., low-resolution/blurry/noisy/compressed frames.
-
- The neighboring frame list for different num_frame:
-
- ::
-
- num_frame | frame list
- 1 | 4
- 3 | 3,4,5
- 5 | 2,3,4,5,6
- 7 | 1,2,3,4,5,6,7
-
- Args:
- opt (dict): Config for train dataset. It contains the following keys:
- dataroot_gt (str): Data root path for gt.
- dataroot_lq (str): Data root path for lq.
- meta_info_file (str): Path for meta information file.
- io_backend (dict): IO backend type and other kwarg.
- num_frame (int): Window size for input frames.
- gt_size (int): Cropped patched size for gt patches.
- random_reverse (bool): Random reverse input frames.
- use_hflip (bool): Use horizontal flips.
- use_rot (bool): Use rotation (use vertical flip and transposing h and w for implementation).
- scale (bool): Scale, which will be added automatically.
- """
-
- def __init__(self, opt):
- super(Vimeo90KDataset, self).__init__()
- self.opt = opt
- self.gt_root, self.lq_root = Path(opt['dataroot_gt']), Path(opt['dataroot_lq'])
-
- with open(opt['meta_info_file'], 'r') as fin:
- self.keys = [line.split(' ')[0] for line in fin]
-
- # file client (io backend)
- self.file_client = None
- self.io_backend_opt = opt['io_backend']
- self.is_lmdb = False
- if self.io_backend_opt['type'] == 'lmdb':
- self.is_lmdb = True
- self.io_backend_opt['db_paths'] = [self.lq_root, self.gt_root]
- self.io_backend_opt['client_keys'] = ['lq', 'gt']
-
- # indices of input images
- self.neighbor_list = [i + (9 - opt['num_frame']) // 2 for i in range(opt['num_frame'])]
-
- # temporal augmentation configs
- self.random_reverse = opt['random_reverse']
- logger = get_root_logger()
- logger.info(f'Random reverse is {self.random_reverse}.')
-
- def __getitem__(self, index):
- if self.file_client is None:
- self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt)
-
- # random reverse
- if self.random_reverse and random.random() < 0.5:
- self.neighbor_list.reverse()
-
- scale = self.opt['scale']
- gt_size = self.opt['gt_size']
- key = self.keys[index]
- clip, seq = key.split('/') # key example: 00001/0001
-
- # get the GT frame (im4.png)
- if self.is_lmdb:
- img_gt_path = f'{key}/im4'
- else:
- img_gt_path = self.gt_root / clip / seq / 'im4.png'
- img_bytes = self.file_client.get(img_gt_path, 'gt')
- img_gt = imfrombytes(img_bytes, float32=True)
-
- # get the neighboring LQ frames
- img_lqs = []
- for neighbor in self.neighbor_list:
- if self.is_lmdb:
- img_lq_path = f'{clip}/{seq}/im{neighbor}'
- else:
- img_lq_path = self.lq_root / clip / seq / f'im{neighbor}.png'
- img_bytes = self.file_client.get(img_lq_path, 'lq')
- img_lq = imfrombytes(img_bytes, float32=True)
- img_lqs.append(img_lq)
-
- # randomly crop
- img_gt, img_lqs = paired_random_crop(img_gt, img_lqs, gt_size, scale, img_gt_path)
-
- # augmentation - flip, rotate
- img_lqs.append(img_gt)
- img_results = augment(img_lqs, self.opt['use_hflip'], self.opt['use_rot'])
-
- img_results = img2tensor(img_results)
- img_lqs = torch.stack(img_results[0:-1], dim=0)
- img_gt = img_results[-1]
-
- # img_lqs: (t, c, h, w)
- # img_gt: (c, h, w)
- # key: str
- return {'lq': img_lqs, 'gt': img_gt, 'key': key}
-
- def __len__(self):
- return len(self.keys)
-
-
-@DATASET_REGISTRY.register()
-class Vimeo90KRecurrentDataset(Vimeo90KDataset):
-
- def __init__(self, opt):
- super(Vimeo90KRecurrentDataset, self).__init__(opt)
-
- self.flip_sequence = opt['flip_sequence']
- self.neighbor_list = [1, 2, 3, 4, 5, 6, 7]
-
- def __getitem__(self, index):
- if self.file_client is None:
- self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt)
-
- # random reverse
- if self.random_reverse and random.random() < 0.5:
- self.neighbor_list.reverse()
-
- scale = self.opt['scale']
- gt_size = self.opt['gt_size']
- key = self.keys[index]
- clip, seq = key.split('/') # key example: 00001/0001
-
- # get the neighboring LQ and GT frames
- img_lqs = []
- img_gts = []
- for neighbor in self.neighbor_list:
- if self.is_lmdb:
- img_lq_path = f'{clip}/{seq}/im{neighbor}'
- img_gt_path = f'{clip}/{seq}/im{neighbor}'
- else:
- img_lq_path = self.lq_root / clip / seq / f'im{neighbor}.png'
- img_gt_path = self.gt_root / clip / seq / f'im{neighbor}.png'
- # LQ
- img_bytes = self.file_client.get(img_lq_path, 'lq')
- img_lq = imfrombytes(img_bytes, float32=True)
- # GT
- img_bytes = self.file_client.get(img_gt_path, 'gt')
- img_gt = imfrombytes(img_bytes, float32=True)
-
- img_lqs.append(img_lq)
- img_gts.append(img_gt)
-
- # randomly crop
- img_gts, img_lqs = paired_random_crop(img_gts, img_lqs, gt_size, scale, img_gt_path)
-
- # augmentation - flip, rotate
- img_lqs.extend(img_gts)
- img_results = augment(img_lqs, self.opt['use_hflip'], self.opt['use_rot'])
-
- img_results = img2tensor(img_results)
- img_lqs = torch.stack(img_results[:7], dim=0)
- img_gts = torch.stack(img_results[7:], dim=0)
-
- if self.flip_sequence: # flip the sequence: 7 frames to 14 frames
- img_lqs = torch.cat([img_lqs, img_lqs.flip(0)], dim=0)
- img_gts = torch.cat([img_gts, img_gts.flip(0)], dim=0)
-
- # img_lqs: (t, c, h, w)
- # img_gt: (c, h, w)
- # key: str
- return {'lq': img_lqs, 'gt': img_gts, 'key': key}
-
- def __len__(self):
- return len(self.keys)
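Editor's note: the neighbor_list construction can be checked directly against the frame-list table in the Vimeo90KDataset docstring. A tiny standalone sketch:

    # Center a window of `num_frame` indices on frame 4 of the 7-frame clip,
    # reproducing the table in the Vimeo90KDataset docstring.
    def neighbor_list(num_frame: int) -> list:
        return [i + (9 - num_frame) // 2 for i in range(num_frame)]


    assert neighbor_list(1) == [4]
    assert neighbor_list(3) == [3, 4, 5]
    assert neighbor_list(5) == [2, 3, 4, 5, 6]
    assert neighbor_list(7) == [1, 2, 3, 4, 5, 6, 7]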
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/unsupervised_quality_estimation/meteor.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/unsupervised_quality_estimation/meteor.py
deleted file mode 100644
index 2ee0448cf1f167f6f3ecee56ad807922cffb0956..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/unsupervised_quality_estimation/meteor.py
+++ /dev/null
@@ -1,109 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import math
-import os
-import subprocess
-import sys
-import tempfile
-from collections import defaultdict
-from itertools import combinations
-
-
-def read_translations(path, n_repeats):
- segment_counter = 0
- segment_translations = []
- translations = defaultdict(list)
- for line in open(path):
- segment_translations.append(" ".join(line.split()))
- if len(segment_translations) == n_repeats:
- translations[segment_counter] = segment_translations
- segment_translations = []
- segment_counter += 1
- return translations
-
-
-def generate_input(translations, n_repeats):
- _, ref_path = tempfile.mkstemp()
- _, mt_path = tempfile.mkstemp()
- ref_fh = open(ref_path, "w")
- mt_fh = open(mt_path, "w")
- for segid in sorted(translations.keys()):
- assert len(translations[segid]) == n_repeats
- indexes = combinations(range(n_repeats), 2)
- for idx1, idx2 in indexes:
- mt_fh.write(translations[segid][idx1].strip() + "\n")
- ref_fh.write(translations[segid][idx2].strip() + "\n")
- sys.stderr.write("\nSaved translations to %s and %s" % (ref_path, mt_path))
- return ref_path, mt_path
-
-
-def run_meteor(ref_path, mt_path, metric_path, lang="en"):
- _, out_path = tempfile.mkstemp()
- subprocess.call(
- [
- "java",
- "-Xmx2G",
- "-jar",
- metric_path,
- mt_path,
- ref_path,
- "-p",
- "0.5 0.2 0.6 0.75", # default parameters, only changed alpha to give equal weight to P and R
- "-norm",
- "-l",
- lang,
- ],
- stdout=open(out_path, "w"),
- )
- os.remove(ref_path)
- os.remove(mt_path)
- sys.stderr.write("\nSaved Meteor output to %s" % out_path)
- return out_path
-
-
-def read_output(meteor_output_path, n_repeats):
- n_combinations = math.factorial(n_repeats) / (
- math.factorial(2) * math.factorial(n_repeats - 2)
- )
- raw_scores = []
- average_scores = []
- for line in open(meteor_output_path):
- if not line.startswith("Segment "):
- continue
- score = float(line.strip().split("\t")[1])
- raw_scores.append(score)
- if len(raw_scores) == n_combinations:
- average_scores.append(sum(raw_scores) / n_combinations)
- raw_scores = []
- os.remove(meteor_output_path)
- return average_scores
-
-
-def main():
- parser = argparse.ArgumentParser()
- parser.add_argument("-i", "--infile")
- parser.add_argument("-n", "--repeat_times", type=int)
- parser.add_argument("-m", "--meteor")
- parser.add_argument("-o", "--output")
- args = parser.parse_args()
-
- translations = read_translations(args.infile, args.repeat_times)
- sys.stderr.write("\nGenerating input for Meteor...")
- ref_path, mt_path = generate_input(translations, args.repeat_times)
- sys.stderr.write("\nRunning Meteor...")
- out_path = run_meteor(ref_path, mt_path, args.meteor)
- sys.stderr.write("\nReading output...")
- scores = read_output(out_path, args.repeat_times)
- sys.stderr.write("\nWriting results...")
- with open(args.output, "w") as o:
- for scr in scores:
- o.write("{}\n".format(scr))
- o.close()
-
-
-if __name__ == "__main__":
- main()
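Editor's note: read_output above relies on the number of unordered hypothesis pairs per segment being C(n, 2). A small sketch confirming that the factorial expression and itertools.combinations agree (n_repeats = 4 is an illustrative value):

    import math
    from itertools import combinations

    n_repeats = 4  # illustrative value
    n_combinations = math.factorial(n_repeats) / (
        math.factorial(2) * math.factorial(n_repeats - 2)
    )

    # Every unordered pair of the n_repeats sampled translations is scored once,
    # so the Meteor output holds exactly C(n_repeats, 2) lines per segment.
    pairs = list(combinations(range(n_repeats), 2))
    assert len(pairs) == n_combinations == math.comb(n_repeats, 2) == 6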
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/models/ofa/unify_transformer_layer.py b/spaces/OFA-Sys/OFA-Image_Caption/models/ofa/unify_transformer_layer.py
deleted file mode 100644
index c02410548106e177be4ead10dbc8facdf5947e1f..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/models/ofa/unify_transformer_layer.py
+++ /dev/null
@@ -1,542 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from typing import Dict, List, Optional
-
-import torch
-import torch.nn as nn
-from fairseq import utils
-from fairseq.modules import LayerNorm
-from fairseq.modules.fairseq_dropout import FairseqDropout
-from fairseq.modules.quant_noise import quant_noise
-from torch import Tensor
-
-from .unify_multihead_attention import MultiheadAttention
-
-
-def drop_path(x, drop_prob: float = 0.0, training: bool = False):
- """
- Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
- Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
- however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper...
- See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for changing the
- layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use 'survival rate' as the
- argument.
- """
- if drop_prob == 0.0 or not training:
- return x
- keep_prob = 1 - drop_prob
- shape = (1, x.shape[1], 1)
- random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
- random_tensor.floor_() # binarize
- output = x.div(keep_prob) * random_tensor
- return output
-
-
-class DropPath(nn.Module):
- """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks)."""
-
- def __init__(self, drop_prob=None):
- super().__init__()
- self.drop_prob = drop_prob
-
- def forward(self, x):
- return drop_path(x, self.drop_prob, self.training)
-
- def extra_repr(self) -> str:
- return "p={}".format(self.drop_prob)
-
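Editor's note: since the drop_path docstring above describes stochastic depth only in prose, here is a tiny standalone check of the behaviour it implements (zero out whole samples with probability p, scale survivors by 1/keep_prob so the expectation is unchanged). The shapes assume the (seq_len, batch, dim) layout used by these layers; this is an illustrative sketch, not part of the deleted module.

    import torch

    torch.manual_seed(0)
    drop_prob, keep_prob = 0.2, 0.8
    x = torch.ones(5, 1000, 8)  # (seq_len, batch, dim), one mask entry per batch element

    mask = (keep_prob + torch.rand(1, x.shape[1], 1)).floor_()  # 1 with prob keep_prob, else 0
    out = x.div(keep_prob) * mask

    # Surviving samples are scaled by 1/keep_prob, so the mean stays ~1 in expectation.
    assert torch.allclose(out.mean(), torch.tensor(1.0), atol=0.1)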
-
-class TransformerEncoderLayer(nn.Module):
- """Encoder layer block.
-
- In the original paper each operation (multi-head attention or FFN) is
- postprocessed with: `dropout -> add residual -> layernorm`. In the
- tensor2tensor code they suggest that learning is more robust when
- preprocessing each layer with layernorm and postprocessing with:
- `dropout -> add residual`. We default to the approach in the paper, but the
- tensor2tensor approach can be enabled by setting
- *args.encoder_normalize_before* to ``True``.
-
- Args:
- args (argparse.Namespace): parsed command-line arguments
- """
-
- def __init__(self, args, drop_path_rate=0.0):
- super().__init__()
- self.args = args
- self.embed_dim = args.encoder_embed_dim
- self.quant_noise = getattr(args, 'quant_noise_pq', 0)
- self.quant_noise_block_size = getattr(args, 'quant_noise_pq_block_size', 8) or 8
- self.self_attn = self.build_self_attention(self.embed_dim, args)
- self.self_attn_layer_norm = LayerNorm(self.embed_dim)
- self.dropout_module = FairseqDropout(
- args.dropout, module_name=self.__class__.__name__
- )
- self.activation_fn = utils.get_activation_fn(
- activation=getattr(args, 'activation_fn', 'relu') or "relu"
- )
- activation_dropout_p = getattr(args, "activation_dropout", 0) or 0
- if activation_dropout_p == 0:
- # for backwards compatibility with models that use args.relu_dropout
- activation_dropout_p = getattr(args, "relu_dropout", 0) or 0
- self.activation_dropout_module = FairseqDropout(
- float(activation_dropout_p), module_name=self.__class__.__name__
- )
- self.normalize_before = args.encoder_normalize_before
- self.fc1 = self.build_fc1(
- self.embed_dim,
- args.encoder_ffn_embed_dim,
- self.quant_noise,
- self.quant_noise_block_size,
- )
- self.fc2 = self.build_fc2(
- args.encoder_ffn_embed_dim,
- self.embed_dim,
- self.quant_noise,
- self.quant_noise_block_size,
- )
-
- self.attn_ln = LayerNorm(self.embed_dim) if getattr(args, 'scale_attn', False) else None
- self.nh = self.self_attn.num_heads
- self.head_dim = self.self_attn.head_dim
-
- self.ffn_layernorm = LayerNorm(args.encoder_ffn_embed_dim) if getattr(args, 'scale_fc', False) else None
- self.w_resid = nn.Parameter(torch.ones(self.embed_dim, ), requires_grad=True) if getattr(args, 'scale_resids', False) else None
-
- self.final_layer_norm = LayerNorm(self.embed_dim)
-
- self.drop_path = DropPath(drop_path_rate) if drop_path_rate > 0.0 else nn.Identity()
-
- def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size):
- return quant_noise(
- nn.Linear(input_dim, output_dim), p=q_noise, block_size=qn_block_size
- )
-
- def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size):
- return quant_noise(
- nn.Linear(input_dim, output_dim), p=q_noise, block_size=qn_block_size
- )
-
- def build_self_attention(self, embed_dim, args):
- return MultiheadAttention(
- embed_dim,
- args.encoder_attention_heads,
- dropout=args.attention_dropout,
- self_attention=True,
- q_noise=self.quant_noise,
- qn_block_size=self.quant_noise_block_size,
- scale_factor=args.attn_scale_factor,
- scale_heads=getattr(args, 'scale_heads', False)
- )
-
- def residual_connection(self, x, residual):
- return residual + self.drop_path(x)
-
- def upgrade_state_dict_named(self, state_dict, name):
- """
- Rename layer norm states from `...layer_norms.0.weight` to
- `...self_attn_layer_norm.weight` and `...layer_norms.1.weight` to
- `...final_layer_norm.weight`
- """
- layer_norm_map = {"0": "self_attn_layer_norm", "1": "final_layer_norm"}
- for old, new in layer_norm_map.items():
- for m in ("weight", "bias"):
- k = "{}.layer_norms.{}.{}".format(name, old, m)
- if k in state_dict:
- state_dict["{}.{}.{}".format(name, new, m)] = state_dict[k]
- del state_dict[k]
- if "{}.{}.{}".format(name, new, m) not in state_dict and "{}.{}".format(new, m) in self.state_dict():
- state_dict[
- "{}.{}.{}".format(name, new, m)
- ] = self.state_dict()["{}.{}".format(new, m)]
-
- prefix = name + "." if name != "" else ""
- for param_name, param_tensor in self.state_dict().items():
- if (prefix + param_name) not in state_dict and param_name in self.state_dict():
- state_dict[prefix + param_name] = self.state_dict()[param_name]
-
- def forward(
- self,
- x,
- encoder_padding_mask: Optional[Tensor],
- attn_mask: Optional[Tensor] = None,
- self_attn_bias: Optional[Tensor] = None
- ):
- """
- Args:
- x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)`
- encoder_padding_mask (ByteTensor): binary ByteTensor of shape
- `(batch, seq_len)` where padding elements are indicated by ``1``.
- attn_mask (ByteTensor): binary tensor of shape `(tgt_len, src_len)`,
- where `tgt_len` is the length of output and `src_len` is the
- length of input, though here both are equal to `seq_len`.
- `attn_mask[tgt_i, src_j] = 1` means that when calculating the
- embedding for `tgt_i`, we exclude (mask out) `src_j`. This is
- useful for strided self-attention.
-
- Returns:
- encoded output of shape `(seq_len, batch, embed_dim)`
- """
- # anything in original attn_mask = 1, becomes -1e8
- # anything in original attn_mask = 0, becomes 0
- # Note that we cannot use -inf here, because at some edge cases,
- # the attention weight (before softmax) for some padded element in query
- # will become -inf, which results in NaN in model parameters
- if attn_mask is not None:
- attn_mask = attn_mask.masked_fill(
- attn_mask.to(torch.bool),
- -1e8 if x.dtype == torch.float32 else -1e4
- )
-
- residual = x
- if self.normalize_before:
- x = self.self_attn_layer_norm(x)
- x, _ = self.self_attn(
- query=x,
- key=x,
- value=x,
- key_padding_mask=encoder_padding_mask,
- need_weights=False,
- attn_mask=attn_mask,
- attn_bias=self_attn_bias
- )
- if self.attn_ln is not None:
- x = self.attn_ln(x)
- x = self.dropout_module(x)
- x = self.residual_connection(x, residual)
- if not self.normalize_before:
- x = self.self_attn_layer_norm(x)
-
- residual = x
- if self.normalize_before:
- x = self.final_layer_norm(x)
- x = self.activation_fn(self.fc1(x))
- x = self.activation_dropout_module(x)
- if self.ffn_layernorm is not None:
- x = self.ffn_layernorm(x)
- x = self.fc2(x)
- x = self.dropout_module(x)
- if self.w_resid is not None:
- residual = torch.mul(self.w_resid, residual)
- x = self.residual_connection(x, residual)
- if not self.normalize_before:
- x = self.final_layer_norm(x)
- return x
-
-
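Editorial illustration: both the encoder layer above and the decoder layer below switch between post-norm and pre-norm residual ordering through `normalize_before`. The sketch below is a minimal, self-contained PyTorch example of the two orderings for a single feed-forward sub-layer; it is illustrative only and assumes nothing beyond `torch`.

```python
import torch
import torch.nn as nn

class FFNBlock(nn.Module):
    """One feed-forward sub-layer with switchable residual ordering."""

    def __init__(self, dim: int, hidden: int, normalize_before: bool):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        self.normalize_before = normalize_before

    def forward(self, x):
        residual = x
        if self.normalize_before:       # pre-norm: LN -> sub-layer -> residual add
            x = self.norm(x)
        x = residual + self.ffn(x)
        if not self.normalize_before:   # post-norm: sub-layer -> residual add -> LN
            x = self.norm(x)
        return x

block = FFNBlock(dim=16, hidden=64, normalize_before=True)
print(block(torch.randn(4, 16)).shape)  # torch.Size([4, 16])
```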
-class TransformerDecoderLayer(nn.Module):
- """Decoder layer block.
-
- In the original paper each operation (multi-head attention, encoder
- attention or FFN) is postprocessed with: `dropout -> add residual ->
- layernorm`. In the tensor2tensor code they suggest that learning is more
- robust when preprocessing each layer with layernorm and postprocessing with:
- `dropout -> add residual`. We default to the approach in the paper, but the
- tensor2tensor approach can be enabled by setting
- *args.decoder_normalize_before* to ``True``.
-
- Args:
- args (argparse.Namespace): parsed command-line arguments
- no_encoder_attn (bool, optional): whether to attend to encoder outputs
- (default: False).
- """
-
- def __init__(
- self, args, no_encoder_attn=False, add_bias_kv=False, add_zero_attn=False, drop_path_rate=0.0
- ):
- super().__init__()
- self.embed_dim = args.decoder_embed_dim
- self.dropout_module = FairseqDropout(
- args.dropout, module_name=self.__class__.__name__
- )
- self.quant_noise = getattr(args, "quant_noise_pq", 0)
- self.quant_noise_block_size = getattr(args, "quant_noise_pq_block_size", 8)
-
- self.cross_self_attention = getattr(args, "cross_self_attention", False)
-
- self.self_attn = self.build_self_attention(
- self.embed_dim,
- args,
- add_bias_kv=add_bias_kv,
- add_zero_attn=add_zero_attn,
- )
- self.self_attn_ln = LayerNorm(self.embed_dim) if getattr(args, 'scale_attn', False) else None
- self.cross_attn_ln = LayerNorm(self.embed_dim) if getattr(args, 'scale_attn', False) else None
- self.nh = self.self_attn.num_heads
- self.head_dim = self.self_attn.head_dim
-
- self.activation_fn = utils.get_activation_fn(
- activation=str(args.activation_fn)
- if getattr(args, "activation_fn", None) is not None
- else "relu"
- )
- activation_dropout_p = getattr(args, "activation_dropout", 0) or 0
- if activation_dropout_p == 0:
- # for backwards compatibility with models that use args.relu_dropout
- activation_dropout_p = getattr(args, "relu_dropout", 0) or 0
- self.activation_dropout_module = FairseqDropout(
- float(activation_dropout_p), module_name=self.__class__.__name__
- )
- self.normalize_before = args.decoder_normalize_before
-
- # use layerNorm rather than FusedLayerNorm for exporting.
-        # char_inputs can be used to determine this.
- # TODO remove this once we update apex with the fix
- export = getattr(args, "char_inputs", False)
- self.self_attn_layer_norm = LayerNorm(self.embed_dim, export=export)
-
- if no_encoder_attn:
- self.encoder_attn = None
- self.encoder_attn_layer_norm = None
- else:
- self.encoder_attn = self.build_encoder_attention(self.embed_dim, args)
- self.encoder_attn_layer_norm = LayerNorm(self.embed_dim, export=export)
-
- self.ffn_layernorm = LayerNorm(args.decoder_ffn_embed_dim) if getattr(args, 'scale_fc', False) else None
- self.w_resid = nn.Parameter(torch.ones(self.embed_dim, ), requires_grad=True) if getattr(args, 'scale_resids', False) else None
-
- self.fc1 = self.build_fc1(
- self.embed_dim,
- args.decoder_ffn_embed_dim,
- self.quant_noise,
- self.quant_noise_block_size,
- )
- self.fc2 = self.build_fc2(
- args.decoder_ffn_embed_dim,
- self.embed_dim,
- self.quant_noise,
- self.quant_noise_block_size,
- )
-
- self.final_layer_norm = LayerNorm(self.embed_dim, export=export)
- self.need_attn = True
-
- self.onnx_trace = False
-
- self.drop_path = DropPath(drop_path_rate) if drop_path_rate > 0.0 else nn.Identity()
-
- def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size):
- return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size)
-
- def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size):
- return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size)
-
- def build_self_attention(
- self, embed_dim, args, add_bias_kv=False, add_zero_attn=False
- ):
- return MultiheadAttention(
- embed_dim,
- args.decoder_attention_heads,
- dropout=args.attention_dropout,
- add_bias_kv=add_bias_kv,
- add_zero_attn=add_zero_attn,
- self_attention=not getattr(args, "cross_self_attention", False),
- q_noise=self.quant_noise,
- qn_block_size=self.quant_noise_block_size,
- scale_factor=args.attn_scale_factor,
- scale_heads=getattr(args, 'scale_heads', False)
- )
-
- def build_encoder_attention(self, embed_dim, args):
- return MultiheadAttention(
- embed_dim,
- args.decoder_attention_heads,
- kdim=getattr(args, "encoder_embed_dim", None),
- vdim=getattr(args, "encoder_embed_dim", None),
- dropout=args.attention_dropout,
- encoder_decoder_attention=True,
- q_noise=self.quant_noise,
- qn_block_size=self.quant_noise_block_size,
- scale_factor=args.attn_scale_factor,
- scale_heads=getattr(args, 'scale_heads', False)
- )
-
- def prepare_for_onnx_export_(self):
- self.onnx_trace = True
-
- def residual_connection(self, x, residual):
- return residual + self.drop_path(x)
-
- def forward(
- self,
- x,
- encoder_out: Optional[torch.Tensor] = None,
- encoder_padding_mask: Optional[torch.Tensor] = None,
- incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None,
- prev_self_attn_state: Optional[List[torch.Tensor]] = None,
- prev_attn_state: Optional[List[torch.Tensor]] = None,
- self_attn_mask: Optional[torch.Tensor] = None,
- self_attn_padding_mask: Optional[torch.Tensor] = None,
- need_attn: bool = False,
- need_head_weights: bool = False,
- self_attn_bias: Optional[Tensor] = None,
- cross_attn_bias: Optional[Tensor] = None
- ):
- """
- Args:
- x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)`
- encoder_padding_mask (ByteTensor, optional): binary
- ByteTensor of shape `(batch, src_len)` where padding
- elements are indicated by ``1``.
- need_attn (bool, optional): return attention weights
- need_head_weights (bool, optional): return attention weights
- for each head (default: return average over heads).
-
- Returns:
- encoded output of shape `(seq_len, batch, embed_dim)`
- """
- if need_head_weights:
- need_attn = True
-
- residual = x
- if self.normalize_before:
- x = self.self_attn_layer_norm(x)
- if prev_self_attn_state is not None:
- prev_key, prev_value = prev_self_attn_state[:2]
- saved_state: Dict[str, Optional[Tensor]] = {
- "prev_key": prev_key,
- "prev_value": prev_value,
- }
- if len(prev_self_attn_state) >= 3:
- saved_state["prev_key_padding_mask"] = prev_self_attn_state[2]
- assert incremental_state is not None
- self.self_attn._set_input_buffer(incremental_state, saved_state)
- _self_attn_input_buffer = self.self_attn._get_input_buffer(incremental_state)
- if self.cross_self_attention and not (
- incremental_state is not None
- and _self_attn_input_buffer is not None
- and "prev_key" in _self_attn_input_buffer
- ):
- if self_attn_mask is not None:
- assert encoder_out is not None
- self_attn_mask = torch.cat(
- (x.new_zeros(x.size(0), encoder_out.size(0)), self_attn_mask), dim=1
- )
- if self_attn_padding_mask is not None:
- if encoder_padding_mask is None:
- assert encoder_out is not None
- encoder_padding_mask = self_attn_padding_mask.new_zeros(
- encoder_out.size(1), encoder_out.size(0)
- )
- self_attn_padding_mask = torch.cat(
- (encoder_padding_mask, self_attn_padding_mask), dim=1
- )
- assert encoder_out is not None
- y = torch.cat((encoder_out, x), dim=0)
- else:
- y = x
-
- x, attn = self.self_attn(
- query=x,
- key=y,
- value=y,
- key_padding_mask=self_attn_padding_mask,
- incremental_state=incremental_state,
- need_weights=False,
- attn_mask=self_attn_mask,
- attn_bias=self_attn_bias
- )
- if self.self_attn_ln is not None:
- x = self.self_attn_ln(x)
- x = self.dropout_module(x)
- x = self.residual_connection(x, residual)
- if not self.normalize_before:
- x = self.self_attn_layer_norm(x)
-
- if self.encoder_attn is not None and encoder_out is not None:
- residual = x
- if self.normalize_before:
- x = self.encoder_attn_layer_norm(x)
- if prev_attn_state is not None:
- prev_key, prev_value = prev_attn_state[:2]
- saved_state: Dict[str, Optional[Tensor]] = {
- "prev_key": prev_key,
- "prev_value": prev_value,
- }
- if len(prev_attn_state) >= 3:
- saved_state["prev_key_padding_mask"] = prev_attn_state[2]
- assert incremental_state is not None
- self.encoder_attn._set_input_buffer(incremental_state, saved_state)
-
- x, attn = self.encoder_attn(
- query=x,
- key=encoder_out,
- value=encoder_out,
- key_padding_mask=encoder_padding_mask,
- incremental_state=incremental_state,
- static_kv=True,
- need_weights=need_attn or (not self.training and self.need_attn),
- need_head_weights=need_head_weights,
- attn_bias=cross_attn_bias
- )
- if self.cross_attn_ln is not None:
- x = self.cross_attn_ln(x)
- x = self.dropout_module(x)
- x = self.residual_connection(x, residual)
- if not self.normalize_before:
- x = self.encoder_attn_layer_norm(x)
-
- residual = x
- if self.normalize_before:
- x = self.final_layer_norm(x)
-
- x = self.activation_fn(self.fc1(x))
- x = self.activation_dropout_module(x)
- if self.ffn_layernorm is not None:
- x = self.ffn_layernorm(x)
- x = self.fc2(x)
- x = self.dropout_module(x)
- if self.w_resid is not None:
- residual = torch.mul(self.w_resid, residual)
- x = self.residual_connection(x, residual)
- if not self.normalize_before:
- x = self.final_layer_norm(x)
- if self.onnx_trace and incremental_state is not None:
- saved_state = self.self_attn._get_input_buffer(incremental_state)
- assert saved_state is not None
- if self_attn_padding_mask is not None:
- self_attn_state = [
- saved_state["prev_key"],
- saved_state["prev_value"],
- saved_state["prev_key_padding_mask"],
- ]
- else:
- self_attn_state = [saved_state["prev_key"], saved_state["prev_value"]]
- return x, attn, self_attn_state
- return x, attn, None
-
- def make_generation_fast_(self, need_attn: bool = False, **kwargs):
- self.need_attn = need_attn
-
- def upgrade_state_dict_named(self, state_dict, name):
- """
- Rename layer norm states from `...layer_norms.0.weight` to
- `...self_attn_layer_norm.weight` and `...layer_norms.1.weight` to
- `...final_layer_norm.weight`
- """
- # update layer norms
- layer_norm_map = {
- "0": "self_attn_layer_norm",
- "1": "encoder_attn_layer_norm",
- "2": "final_layer_norm",
- }
- for old, new in layer_norm_map.items():
- for m in ("weight", "bias"):
- k = "{}.layer_norms.{}.{}".format(name, old, m)
- if k in state_dict:
- state_dict[
- "{}.{}.{}".format(name, new, m)
- ] = state_dict[k]
- del state_dict[k]
- if "{}.{}.{}".format(name, new, m) not in state_dict and "{}.{}".format(new, m) in self.state_dict():
- state_dict[
- "{}.{}.{}".format(name, new, m)
- ] = self.state_dict()["{}.{}".format(new, m)]
-
- prefix = name + "." if name != "" else ""
- for param_name, param_tensor in self.state_dict().items():
- if (prefix + param_name) not in state_dict and param_name in self.state_dict():
- state_dict[prefix + param_name] = self.state_dict()[param_name]
\ No newline at end of file
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/discriminative_reranking_nmt/drnmt_rerank.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/discriminative_reranking_nmt/drnmt_rerank.py
deleted file mode 100644
index 2e0fc2bd29aedb0b477b7cc8e2c3b606acdd454a..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/discriminative_reranking_nmt/drnmt_rerank.py
+++ /dev/null
@@ -1,364 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-Score raw text with a trained model.
-"""
-
-from collections import namedtuple
-import logging
-from multiprocessing import Pool
-import sys
-import os
-import random
-
-import numpy as np
-import sacrebleu
-import torch
-
-from fairseq import checkpoint_utils, options, utils
-
-
-logger = logging.getLogger("fairseq_cli.drnmt_rerank")
-logger.setLevel(logging.INFO)
-
-Batch = namedtuple("Batch", "ids src_tokens src_lengths")
-
-
-pool_init_variables = {}
-
-
-def init_loaded_scores(mt_scores, model_scores, hyp, ref):
- global pool_init_variables
- pool_init_variables["mt_scores"] = mt_scores
- pool_init_variables["model_scores"] = model_scores
- pool_init_variables["hyp"] = hyp
- pool_init_variables["ref"] = ref
-
-
-def parse_fairseq_gen(filename, task):
- source = {}
- hypos = {}
- scores = {}
- with open(filename, "r", encoding="utf-8") as f:
- for line in f:
- line = line.strip()
- if line.startswith("S-"): # source
- uid, text = line.split("\t", 1)
- uid = int(uid[2:])
- source[uid] = text
- elif line.startswith("D-"): # hypo
- uid, score, text = line.split("\t", 2)
- uid = int(uid[2:])
- if uid not in hypos:
- hypos[uid] = []
- scores[uid] = []
- hypos[uid].append(text)
- scores[uid].append(float(score))
- else:
- continue
-
- source_out = [source[i] for i in range(len(hypos))]
- hypos_out = [h for i in range(len(hypos)) for h in hypos[i]]
- scores_out = [s for i in range(len(scores)) for s in scores[i]]
-
- return source_out, hypos_out, scores_out
-
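Editorial illustration (with made-up sentences): `parse_fairseq_gen` above consumes fairseq-generate/interactive style output, where `S-<id>` lines carry the source and `D-<id>` lines carry a score plus a detokenized hypothesis, grouped per source id.

```python
# Tab-separated fairseq generation output; the sentences are invented examples.
example = (
    "S-0\tHallo Welt\n"
    "D-0\t-0.42\tHello world\n"
    "D-0\t-1.10\tHi world\n"
)
for line in example.splitlines():
    if line.startswith("D-"):
        uid, score, text = line.split("\t", 2)
        print(uid, float(score), text)  # D-0 -0.42 Hello world, then D-0 -1.1 Hi world
```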
-
-def read_target(filename):
- with open(filename, "r", encoding="utf-8") as f:
- output = [line.strip() for line in f]
- return output
-
-
-def make_batches(args, src, hyp, task, max_positions, encode_fn):
- assert len(src) * args.beam == len(
- hyp
- ), f"Expect {len(src) * args.beam} hypotheses for {len(src)} source sentences with beam size {args.beam}. Got {len(hyp)} hypotheses intead."
- hyp_encode = [
- task.source_dictionary.encode_line(encode_fn(h), add_if_not_exist=False).long()
- for h in hyp
- ]
- if task.cfg.include_src:
- src_encode = [
- task.source_dictionary.encode_line(
- encode_fn(s), add_if_not_exist=False
- ).long()
- for s in src
- ]
- tokens = [(src_encode[i // args.beam], h) for i, h in enumerate(hyp_encode)]
- lengths = [(t1.numel(), t2.numel()) for t1, t2 in tokens]
- else:
- tokens = [(h,) for h in hyp_encode]
- lengths = [(h.numel(),) for h in hyp_encode]
-
- itr = task.get_batch_iterator(
- dataset=task.build_dataset_for_inference(tokens, lengths),
- max_tokens=args.max_tokens,
- max_sentences=args.batch_size,
- max_positions=max_positions,
- ignore_invalid_inputs=args.skip_invalid_size_inputs_valid_test,
- ).next_epoch_itr(shuffle=False)
-
- for batch in itr:
- yield Batch(
- ids=batch["id"],
- src_tokens=batch["net_input"]["src_tokens"],
- src_lengths=batch["net_input"]["src_lengths"],
- )
-
-
-def decode_rerank_scores(args):
- if args.max_tokens is None and args.batch_size is None:
- args.batch_size = 1
-
- logger.info(args)
-
- use_cuda = torch.cuda.is_available() and not args.cpu
-
- # Load ensemble
- logger.info("loading model(s) from {}".format(args.path))
- models, _model_args, task = checkpoint_utils.load_model_ensemble_and_task(
- [args.path], arg_overrides=eval(args.model_overrides),
- )
-
- for model in models:
- if args.fp16:
- model.half()
- if use_cuda:
- model.cuda()
-
- # Initialize generator
- generator = task.build_generator(args)
-
- # Handle tokenization and BPE
- tokenizer = task.build_tokenizer(args)
- bpe = task.build_bpe(args)
-
- def encode_fn(x):
- if tokenizer is not None:
- x = tokenizer.encode(x)
- if bpe is not None:
- x = bpe.encode(x)
- return x
-
- max_positions = utils.resolve_max_positions(
- task.max_positions(), *[model.max_positions() for model in models]
- )
-
- src, hyp, mt_scores = parse_fairseq_gen(args.in_text, task)
- model_scores = {}
- logger.info("decode reranker score")
- for batch in make_batches(args, src, hyp, task, max_positions, encode_fn):
- src_tokens = batch.src_tokens
- src_lengths = batch.src_lengths
- if use_cuda:
- src_tokens = src_tokens.cuda()
- src_lengths = src_lengths.cuda()
-
- sample = {
- "net_input": {"src_tokens": src_tokens, "src_lengths": src_lengths},
- }
- scores = task.inference_step(generator, models, sample)
-
- for id, sc in zip(batch.ids.tolist(), scores.tolist()):
- model_scores[id] = sc[0]
-
- model_scores = [model_scores[i] for i in range(len(model_scores))]
-
- return src, hyp, mt_scores, model_scores
-
-
-def get_score(mt_s, md_s, w1, lp, tgt_len):
- return mt_s / (tgt_len ** lp) * w1 + md_s
-
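A quick worked example (editorial, made-up numbers) of the combined score computed by `get_score` above: the forward MT score is length-normalised and weighted, then added to the reranker score.

```python
# mt_s / (tgt_len ** lp) * w1 + md_s with mt_s=-10.0, tgt_len=10, lp=1.0, w1=1.0, md_s=0.5
print(-10.0 / (10 ** 1.0) * 1.0 + 0.5)  # -0.5
```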
-
-def get_best_hyps(mt_scores, md_scores, hypos, fw_weight, lenpen, beam):
- assert len(mt_scores) == len(md_scores) and len(mt_scores) == len(hypos)
- hypo_scores = []
- best_hypos = []
- best_scores = []
- offset = 0
- for i in range(len(hypos)):
- tgt_len = len(hypos[i].split())
- hypo_scores.append(
- get_score(mt_scores[i], md_scores[i], fw_weight, lenpen, tgt_len)
- )
-
- if (i + 1) % beam == 0:
- max_i = np.argmax(hypo_scores)
- best_hypos.append(hypos[offset + max_i])
- best_scores.append(hypo_scores[max_i])
- hypo_scores = []
- offset += beam
- return best_hypos, best_scores
-
-
-def eval_metric(args, hypos, ref):
- if args.metric == "bleu":
- score = sacrebleu.corpus_bleu(hypos, [ref]).score
- else:
- score = sacrebleu.corpus_ter(hypos, [ref]).score
-
- return score
-
-
-def score_target_hypo(args, fw_weight, lp):
- mt_scores = pool_init_variables["mt_scores"]
- model_scores = pool_init_variables["model_scores"]
- hyp = pool_init_variables["hyp"]
- ref = pool_init_variables["ref"]
- best_hypos, _ = get_best_hyps(
- mt_scores, model_scores, hyp, fw_weight, lp, args.beam
- )
- rerank_eval = None
- if ref:
- rerank_eval = eval_metric(args, best_hypos, ref)
- print(f"fw_weight {fw_weight}, lenpen {lp}, eval {rerank_eval}")
-
- return rerank_eval
-
-
-def print_result(best_scores, best_hypos, output_file):
- for i, (s, h) in enumerate(zip(best_scores, best_hypos)):
- print(f"{i}\t{s}\t{h}", file=output_file)
-
-
-def main(args):
- utils.import_user_module(args)
-
- src, hyp, mt_scores, model_scores = decode_rerank_scores(args)
-
- assert (
- not args.tune or args.target_text is not None
- ), "--target-text has to be set when tuning weights"
- if args.target_text:
- ref = read_target(args.target_text)
- assert len(src) == len(
- ref
- ), f"different numbers of source and target sentences ({len(src)} vs. {len(ref)})"
-
- orig_best_hypos = [hyp[i] for i in range(0, len(hyp), args.beam)]
- orig_eval = eval_metric(args, orig_best_hypos, ref)
-
- if args.tune:
- logger.info("tune weights for reranking")
-
- random_params = np.array(
- [
- [
- random.uniform(
- args.lower_bound_fw_weight, args.upper_bound_fw_weight
- ),
- random.uniform(args.lower_bound_lenpen, args.upper_bound_lenpen),
- ]
- for k in range(args.num_trials)
- ]
- )
-
- logger.info("launching pool")
- with Pool(
- 32,
- initializer=init_loaded_scores,
- initargs=(mt_scores, model_scores, hyp, ref),
- ) as p:
- rerank_scores = p.starmap(
- score_target_hypo,
- [
- (args, random_params[i][0], random_params[i][1],)
- for i in range(args.num_trials)
- ],
- )
- if args.metric == "bleu":
- best_index = np.argmax(rerank_scores)
- else:
- best_index = np.argmin(rerank_scores)
- best_fw_weight = random_params[best_index][0]
- best_lenpen = random_params[best_index][1]
- else:
- assert (
- args.lenpen is not None and args.fw_weight is not None
- ), "--lenpen and --fw-weight should be set"
- best_fw_weight, best_lenpen = args.fw_weight, args.lenpen
-
- best_hypos, best_scores = get_best_hyps(
- mt_scores, model_scores, hyp, best_fw_weight, best_lenpen, args.beam
- )
-
- if args.results_path is not None:
- os.makedirs(args.results_path, exist_ok=True)
- output_path = os.path.join(
- args.results_path, "generate-{}.txt".format(args.gen_subset),
- )
- with open(output_path, "w", buffering=1, encoding="utf-8") as o:
- print_result(best_scores, best_hypos, o)
- else:
- print_result(best_scores, best_hypos, sys.stdout)
-
- if args.target_text:
- rerank_eval = eval_metric(args, best_hypos, ref)
- print(f"before reranking, {args.metric.upper()}:", orig_eval)
- print(
- f"after reranking with fw_weight={best_fw_weight}, lenpen={best_lenpen}, {args.metric.upper()}:",
- rerank_eval,
- )
-
-
-def cli_main():
- parser = options.get_generation_parser(interactive=True)
-
- parser.add_argument(
- "--in-text",
- default=None,
- required=True,
- help="text from fairseq-interactive output, containing source sentences and hypotheses",
- )
- parser.add_argument("--target-text", default=None, help="reference text")
- parser.add_argument("--metric", type=str, choices=["bleu", "ter"], default="bleu")
- parser.add_argument(
- "--tune",
- action="store_true",
- help="if set, tune weights on fw scores and lenpen instead of applying fixed weights for reranking",
- )
- parser.add_argument(
- "--lower-bound-fw-weight",
- default=0.0,
- type=float,
- help="lower bound of search space",
- )
- parser.add_argument(
- "--upper-bound-fw-weight",
- default=3,
- type=float,
- help="upper bound of search space",
- )
- parser.add_argument(
- "--lower-bound-lenpen",
- default=0.0,
- type=float,
- help="lower bound of search space",
- )
- parser.add_argument(
- "--upper-bound-lenpen",
- default=3,
- type=float,
- help="upper bound of search space",
- )
- parser.add_argument(
- "--fw-weight", type=float, default=None, help="weight on the fw model score"
- )
- parser.add_argument(
- "--num-trials",
- default=1000,
- type=int,
- help="number of trials to do for random search",
- )
-
- args = options.parse_args_and_arch(parser)
- main(args)
-
-
-if __name__ == "__main__":
- cli_main()
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/pay_less_attention_paper/README.md b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/pay_less_attention_paper/README.md
deleted file mode 100644
index 5adab11f4dc3461f9e7126ac391b04e703616e6b..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/pay_less_attention_paper/README.md
+++ /dev/null
@@ -1,176 +0,0 @@
-# Pay Less Attention with Lightweight and Dynamic Convolutions (Wu et al., 2019)
-
-This page contains pointers to pre-trained models as well as instructions on how to train new models for [our paper](https://arxiv.org/abs/1901.10430).
-
-## Citation:
-```bibtex
-@inproceedings{wu2018pay,
- title = {Pay Less Attention with Lightweight and Dynamic Convolutions},
- author = {Felix Wu and Angela Fan and Alexei Baevski and Yann Dauphin and Michael Auli},
- booktitle = {International Conference on Learning Representations},
- year = {2019},
- url = {https://arxiv.org/abs/1901.10430},
-}
-```
-
-## Translation
-
-### Pre-trained models
-For some datasets we release models without GLUs which are faster at inference.
-
-Model | Description | Dataset | Download
----|---|---|---
-`lightconv.no_glu.iwslt14.de-en` | LightConv (without GLUs) | [IWSLT14 German-English](https://wit3.fbk.eu/archive/2014-01/texts/de/en/de-en.tgz) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.lightconv.tar.gz) IWSLT14 test: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/iwslt14.de-en.test.tar.bz2)
-`dynamicconv.no_glu.iwslt14.de-en` | DynamicConv (without GLUs) | [IWSLT14 German-English](https://wit3.fbk.eu/archive/2014-01/texts/de/en/de-en.tgz) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.dynamicconv.tar.gz) IWSLT14 test: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/iwslt14.de-en.test.tar.bz2)
-`lightconv.no_glu.wmt16.en-de` | LightConv (without GLUs) | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv.tar.gz) newstest2014 (shared vocab): [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2)
-`dynamicconv.no_glu.wmt16.en-de` | DynamicConv (without GLUs) | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv.tar.gz) newstest2014 (shared vocab): [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2)
-`lightconv.glu.wmt16.en-de` | LightConv | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv-glu.tar.gz) newstest2014 (shared vocab): [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2)
-`dynamicconv.glu.wmt16.en-de` | DynamicConv | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv-glu.tar.gz) newstest2014 (shared vocab): [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2)
-`lightconv.glu.wmt14.en-fr` | LightConv | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.lightconv-glu.tar.gz) newstest2014: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-fr.joined-dict.newstest2014.tar.bz2)
-`dynamicconv.glu.wmt14.en-fr` | DynamicConv | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.dynamicconv-glu.tar.gz) newstest2014: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-fr.joined-dict.newstest2014.tar.bz2)
-`lightconv.glu.wmt17.zh-en` | LightConv | [WMT17 Chinese-English](http://statmt.org/wmt17/translation-task.html#Download) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.lightconv-glu.tar.gz) newstest2017: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt17.zh-en.newstest2017.tar.bz2)
-`dynamicconv.glu.wmt17.zh-en` | DynamicConv | [WMT17 Chinese-English](http://statmt.org/wmt17/translation-task.html#Download) | model: [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.dynamicconv-glu.tar.gz) newstest2017: [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt17.zh-en.newstest2017.tar.bz2)
-
-### Memory-Efficient CUDA Kernels
-
-Since the PyTorch implementations of Light/Dynamic conv are quite memory intensive, we have developed CUDA kernels that implement the light and dynamic convolution operator in a memory-efficient and performant manner. For large sequence lengths, these kernels save about 50% memory compared to the PyTorch equivalent.
-
-To install the kernels, use the commands below. Once installed, they will automatically be used in place of the PyTorch implementations whenever a light or dynamic convolution is used.
-
-```sh
-# to install lightconv
-cd fairseq/modules/lightconv_layer
-python cuda_function_gen.py
-python setup.py install
-
-# to install dynamicconv
-cd fairseq/modules/dynamicconv_layer
-python cuda_function_gen.py
-python setup.py install
-```
-
-### Example usage (torch.hub)
-
-We require a few additional Python dependencies for preprocessing:
-```bash
-pip install sacremoses subword_nmt
-```
-
-Interactive translation via PyTorch Hub:
-```python
-import fairseq
-import torch
-
-# List available models
-torch.hub.list('pytorch/fairseq') # [..., 'lightconv.glu.wmt17.zh-en', ... ]
-
-# Load a LightConv model trained on WMT'17 Zh-En
-zh2en = torch.hub.load('pytorch/fairseq', 'lightconv.glu.wmt17.zh-en', tokenizer='moses', bpe='subword_nmt')
-
-# The underlying model is available under the *models* attribute
-assert isinstance(zh2en.models[0], fairseq.models.lightconv.LightConvModel)
-
-# Translate a sentence
-zh2en.translate('你好 世界')
-# 'Hello World'
-```
-
-Loading custom models:
-```python
-from fairseq.models.lightconv import LightConvModel
-en2fr = LightConvModel.from_pretrained(
- '/path/to/checkpoints',
- checkpoint_file='checkpoint_best.pt',
- data_name_or_path='data-bin/wmt14_en_fr',
- bpe='subword_nmt',
- bpe_codes='data-bin/wmt14_en_fr/en.code'
-)
-en2fr.translate('Hello world!')
-# 'Bonjour le monde'
-```
-
-### Preprocessing the training datasets
-
-Please follow the instructions in [`examples/translation/README.md`](../translation/README.md) to preprocess the data.
-
-### Training and evaluation options:
-To use the model without GLU, please set `--encoder-glu 0 --decoder-glu 0`.
-For LightConv, please use `--encoder-conv-type lightweight --decoder-conv-type lightweight`, otherwise the default is DynamicConv.
-For best BLEU results, lenpen may need to be manually tuned.
-
-To use the CUDA kernels, first install the PyTorch modules using the commands
-above. Once the CUDA modules are installed, they will automatically be used
-instead of the PyTorch modules.
-
-### IWSLT14 De-En
-Training and evaluating DynamicConv (without GLU) on a GPU:
-```sh
-# Training
-SAVE="save/dynamic_conv_iwslt"
-mkdir -p $SAVE
-CUDA_VISIBLE_DEVICES=0 $(which fairseq-train) data-bin/iwslt14.tokenized.de-en \
- --clip-norm 0 --optimizer adam --lr 0.0005 \
- --source-lang de --target-lang en --max-tokens 4000 --no-progress-bar \
- --log-interval 100 --stop-min-lr '1e-09' --weight-decay 0.0001 \
- --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
- --lr-scheduler inverse_sqrt \
- --ddp-backend=legacy_ddp \
- --max-update 50000 --warmup-updates 4000 --warmup-init-lr '1e-07' \
- --adam-betas '(0.9, 0.98)' --keep-last-epochs 10 \
- -a lightconv_iwslt_de_en --save-dir $SAVE \
- --dropout 0.3 --attention-dropout 0.1 --weight-dropout 0.1 \
- --encoder-glu 0 --decoder-glu 0
-python scripts/average_checkpoints.py --inputs $SAVE \
- --num-epoch-checkpoints 10 --output "${SAVE}/checkpoint_last10_avg.pt"
-
-# Evaluation
-CUDA_VISIBLE_DEVICES=0 fairseq-generate data-bin/iwslt14.tokenized.de-en --path "${SAVE}/checkpoint_last10_avg.pt" --batch-size 128 --beam 4 --remove-bpe --lenpen 1 --gen-subset test --quiet
-```
-
-### WMT16 En-De
-Training and evaluating DynamicConv (with GLU) on WMT16 En-De using cosine scheduler on one machine with 8 V100 GPUs:
-```sh
-# Training
-SAVE="save/dynamic_conv_wmt16en2de"
-mkdir -p $SAVE
-python -m torch.distributed.launch --nproc_per_node 8 $(which fairseq-train) \
- data-bin/wmt16_en_de_bpe32k --fp16 --log-interval 100 --no-progress-bar \
- --max-update 30000 --share-all-embeddings --optimizer adam \
- --adam-betas '(0.9, 0.98)' --clip-norm 0.0 --weight-decay 0.0 \
- --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
- --stop-min-lr 1e-09 --update-freq 16 --attention-dropout 0.1 --keep-last-epochs 10 \
- --ddp-backend=legacy_ddp --max-tokens 3584 \
- --lr-scheduler cosine --warmup-init-lr 1e-7 --warmup-updates 10000 \
-  --lr-shrink 1 --lr 0.001 --min-lr 1e-7 \
- --t-mult 1 --lr-period-updates 20000 \
- --arch lightconv_wmt_en_de_big --save-dir $SAVE \
- --dropout 0.3 --attention-dropout 0.1 --weight-dropout 0.1 \
- --encoder-glu 1 --decoder-glu 1
-
-# Evaluation
-CUDA_VISIBLE_DEVICES=0 fairseq-generate data-bin/wmt16.en-de.joined-dict.newstest2014 --path "${SAVE}/checkpoint_best.pt" --batch-size 128 --beam 5 --remove-bpe --lenpen 0.5 --gen-subset test > wmt16_gen.txt
-bash scripts/compound_split_bleu.sh wmt16_gen.txt
-```
-
-### WMT14 En-Fr
-Training DynamicConv (with GLU) on WMT14 En-Fr using cosine scheduler on one machine with 8 V100 GPUs:
-```sh
-# Training
-SAVE="save/dynamic_conv_wmt14en2fr"
-mkdir -p $SAVE
-python -m torch.distributed.launch --nproc_per_node 8 $(which fairseq-train) \
- data-bin/wmt14_en_fr --fp16 --log-interval 100 --no-progress-bar \
- --max-update 30000 --share-all-embeddings --optimizer adam \
- --adam-betas '(0.9, 0.98)' --clip-norm 0.0 --weight-decay 0.0 \
- --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
- --stop-min-lr 1e-09 --update-freq 16 --attention-dropout 0.1 --keep-last-epochs 10 \
- --ddp-backend=legacy_ddp --max-tokens 3584 \
- --lr-scheduler cosine --warmup-init-lr 1e-7 --warmup-updates 10000 \
- --lr-shrink 1 --lr 0.001 --min-lr 1e-7 --warmup-init-lr 1e-07 \
- --t-mult 1 --lr-period-updates 70000 \
- --arch lightconv_wmt_en_fr_big --save-dir $SAVE \
- --dropout 0.1 --attention-dropout 0.1 --weight-dropout 0.1 \
- --encoder-glu 1 --decoder-glu 1
-
-# Evaluation
-CUDA_VISIBLE_DEVICES=0 fairseq-generate data-bin/wmt14.en-fr.joined-dict.newstest2014 --path "${SAVE}/checkpoint_best.pt" --batch-size 128 --beam 5 --remove-bpe --lenpen 0.9 --gen-subset test
-```
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/shuffled_word_order/README.md b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/shuffled_word_order/README.md
deleted file mode 100644
index f20483849a8ca33bf349b57882a79155ba593bf1..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/shuffled_word_order/README.md
+++ /dev/null
@@ -1,84 +0,0 @@
-# Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little
-
-[https://arxiv.org/abs/2104.06644](https://arxiv.org/abs/2104.06644)
-
-## Introduction
-
-In this work, we pre-train [RoBERTa](../roberta) base on various word shuffled variants of BookWiki corpus (16GB). We observe that a word shuffled pre-trained model achieves surprisingly good scores on GLUE, PAWS and several parametric probing tasks. Please read our paper for more details on the experiments.
-
-## Pre-trained models
-
-| Model | Description | Download |
-| ------------------------------------- | -------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------- |
-| `roberta.base.orig` | RoBERTa (base) trained on natural corpus | [roberta.base.orig.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.orig.tar.gz) |
-| `roberta.base.shuffle.n1` | RoBERTa (base) trained on n=1 gram sentence word shuffled data | [roberta.base.shuffle.n1.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n1.tar.gz) |
-| `roberta.base.shuffle.n2` | RoBERTa (base) trained on n=2 gram sentence word shuffled data | [roberta.base.shuffle.n2.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n2.tar.gz) |
-| `roberta.base.shuffle.n3` | RoBERTa (base) trained on n=3 gram sentence word shuffled data | [roberta.base.shuffle.n3.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n3.tar.gz) |
-| `roberta.base.shuffle.n4` | RoBERTa (base) trained on n=4 gram sentence word shuffled data | [roberta.base.shuffle.n4.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n4.tar.gz) |
-| `roberta.base.shuffle.512` | RoBERTa (base) trained on unigram 512 word block shuffled data | [roberta.base.shuffle.512.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.512.tar.gz) |
-| `roberta.base.shuffle.corpus` | RoBERTa (base) trained on unigram corpus word shuffled data | [roberta.base.shuffle.corpus.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.corpus.tar.gz) |
-| `roberta.base.shuffle.corpus_uniform` | RoBERTa (base) trained on unigram corpus word shuffled data, where all words are uniformly sampled | [roberta.base.shuffle.corpus_uniform.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.corpus_uniform.tar.gz) |
-| `roberta.base.nopos` | RoBERTa (base) without positional embeddings, trained on natural corpus | [roberta.base.nopos.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.nopos.tar.gz) |
-
-## Results
-
-[GLUE (Wang et al, 2019)](https://gluebenchmark.com/) & [PAWS (Zhang et al, 2019)](https://github.com/google-research-datasets/paws) _(dev set, single model, single-task fine-tuning, median of 5 seeds)_
-
-| name | CoLA | MNLI | MRPC | PAWS | QNLI | QQP | RTE | SST-2 |
-| :----------------------------------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ----: |
-| `roberta.base.orig` | 61.4 | 86.11 | 89.19 | 94.46 | 92.53 | 91.26 | 74.64 | 93.92 |
-| `roberta.base.shuffle.n1` | 35.15 | 82.64 | 86 | 89.97 | 89.02 | 91.01 | 69.02 | 90.47 |
-| `roberta.base.shuffle.n2` | 54.37 | 83.43 | 86.24 | 93.46 | 90.44 | 91.36 | 70.83 | 91.79 |
-| `roberta.base.shuffle.n3` | 48.72 | 83.85 | 86.36 | 94.05 | 91.69 | 91.24 | 70.65 | 92.02 |
-| `roberta.base.shuffle.n4` | 58.64 | 83.77 | 86.98 | 94.32 | 91.69 | 91.4 | 70.83 | 92.48 |
-| `roberta.base.shuffle.512` | 12.76 | 77.52 | 79.61 | 84.77 | 85.19 | 90.2 | 56.52 | 86.34 |
-| `roberta.base.shuffle.corpus` | 0 | 71.9 | 70.52 | 58.52 | 71.11 | 85.52 | 53.99 | 83.35 |
-| `roberta.base.shuffle.corpus_random` | 9.19 | 72.33 | 70.76 | 58.42 | 77.76 | 85.93 | 53.99 | 84.04 |
-| `roberta.base.nopos` | 0 | 63.5 | 72.73 | 57.08 | 77.72 | 87.87 | 54.35 | 83.24 |
-
-For more results on probing tasks, please refer to [our paper](https://arxiv.org/abs/2104.06644).
-
-## Example Usage
-
-Follow the same usage as in [RoBERTa](https://github.com/pytorch/fairseq/tree/main/examples/roberta) to load and test your models:
-
-```bash
-# Download roberta.base.shuffle.n1 model
-wget https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n1.tar.gz
-tar -xzvf roberta.base.shuffle.n1.tar.gz
-```
-
-```python
-# Load the model in fairseq
-from fairseq.models.roberta import RobertaModel
-roberta = RobertaModel.from_pretrained('/path/to/roberta.base.shuffle.n1', checkpoint_file='model.pt')
-roberta.eval()  # disable dropout (or leave in train mode to finetune)
-```
-
-**Note**: The model trained without positional embeddings (`roberta.base.nopos`) is a modified `RoBERTa` model, where the positional embeddings are not used. Thus, the typical `from_pretrained` method on the fairseq version of RoBERTa will not be able to load the above model weights. To do so, construct a new `RobertaModel` object by setting the flag `use_positional_embeddings` to `False` (or [in the latest code](https://github.com/pytorch/fairseq/blob/main/fairseq/models/roberta/model.py#L543), set `no_token_positional_embeddings` to `True`), and then load the individual weights.
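As a rough editorial sketch of that procedure (not from the original README, and the override key may vary between fairseq versions), the ensemble loading helper used elsewhere in fairseq can rebuild the model with positional embeddings forced off:

```python
from fairseq import checkpoint_utils

# Hedged sketch: the override key follows the flag name mentioned in the note above.
models, cfg, task = checkpoint_utils.load_model_ensemble_and_task(
    ['/path/to/roberta.base.nopos/model.pt'],
    arg_overrides={'no_token_positional_embeddings': True},
)
roberta = models[0].eval()  # disable dropout for evaluation
```

Whether the override is needed at all depends on whether the flag was already stored in the checkpoint's saved config.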
-
-## Fine-tuning Evaluation
-
-We provide the trained fine-tuned models on MNLI here for each model above for quick evaluation (1 seed for each model). Please refer to [finetuning details](README.finetuning.md) for the parameters of these models. Follow [RoBERTa](https://github.com/pytorch/fairseq/tree/main/examples/roberta) instructions to evaluate these models.
-
-| Model | MNLI M Dev Accuracy | Link |
-| :----------------------------------------- | :------------------ | :--------------------------------------------------------------------------------------------------------------- |
-| `roberta.base.orig.mnli` | 86.14 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.orig.mnli.tar.gz) |
-| `roberta.base.shuffle.n1.mnli` | 82.55 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n1.mnli.tar.gz) |
-| `roberta.base.shuffle.n2.mnli` | 83.21 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n2.mnli.tar.gz) |
-| `roberta.base.shuffle.n3.mnli` | 83.89 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n3.mnli.tar.gz) |
-| `roberta.base.shuffle.n4.mnli` | 84.00 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n4.mnli.tar.gz) |
-| `roberta.base.shuffle.512.mnli` | 77.22 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.512.mnli.tar.gz) |
-| `roberta.base.shuffle.corpus.mnli` | 71.88 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.corpus.mnli.tar.gz) |
-| `roberta.base.shuffle.corpus_uniform.mnli` | 72.46 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.corpus_uniform.mnli.tar.gz) |
-
-## Citation
-
-```bibtex
-@misc{sinha2021masked,
- title={Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little},
- author={Koustuv Sinha and Robin Jia and Dieuwke Hupkes and Joelle Pineau and Adina Williams and Douwe Kiela},
- year={2021},
- eprint={2104.06644},
- archivePrefix={arXiv},
- primaryClass={cs.CL}
-}
-```
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/simultaneous_translation/utils/p_choose_strategy.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/simultaneous_translation/utils/p_choose_strategy.py
deleted file mode 100644
index 724c6912a62d48fc61988cac1434a4f5c8754521..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/simultaneous_translation/utils/p_choose_strategy.py
+++ /dev/null
@@ -1,126 +0,0 @@
-from typing import Optional, Dict
-from torch import Tensor
-import torch
-
-
-def waitk_p_choose(
- tgt_len: int,
- src_len: int,
- bsz: int,
- waitk_lagging: int,
- key_padding_mask: Optional[Tensor] = None,
- incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None
-):
-
- max_src_len = src_len
- if incremental_state is not None:
- # Retrieve target length from incremental states
- # For inference the length of query is always 1
- max_tgt_len = incremental_state["steps"]["tgt"]
- assert max_tgt_len is not None
- max_tgt_len = int(max_tgt_len)
- else:
- max_tgt_len = tgt_len
-
- if max_src_len < waitk_lagging:
- if incremental_state is not None:
- max_tgt_len = 1
- return torch.zeros(
- bsz, max_tgt_len, max_src_len
- )
-
- # Assuming the p_choose looks like this for wait k=3
- # src_len = 6, max_tgt_len = 5
- # [0, 0, 1, 0, 0, 0, 0]
- # [0, 0, 0, 1, 0, 0, 0]
- # [0, 0, 0, 0, 1, 0, 0]
- # [0, 0, 0, 0, 0, 1, 0]
- # [0, 0, 0, 0, 0, 0, 1]
- # linearize the p_choose matrix:
- # [0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0...]
- # The indices of linearized matrix that equals 1 is
- # 2 + 6 * 0
- # 3 + 6 * 1
- # ...
- # n + src_len * n + k - 1 = n * (src_len + 1) + k - 1
- # n from 0 to max_tgt_len - 1
- #
- # First, generate the indices (activate_indices_offset: bsz, max_tgt_len)
- # Second, scatter a zeros tensor (bsz, max_tgt_len * src_len)
- # with activate_indices_offset
- # Third, resize the tensor to (bsz, max_tgt_len, src_len)
-
- activate_indices_offset = (
- (
- torch.arange(max_tgt_len) * (max_src_len + 1)
- + waitk_lagging - 1
- )
- .unsqueeze(0)
- .expand(bsz, max_tgt_len)
- .long()
- )
-
- if key_padding_mask is not None:
- if key_padding_mask[:, 0].any():
- # Left padding
- activate_indices_offset += (
- key_padding_mask.sum(dim=1, keepdim=True)
- )
-
- # Need to clamp the indices that are too large
- activate_indices_offset = (
- activate_indices_offset
- .clamp(
- 0,
- min(
- [
- max_tgt_len,
- max_src_len - waitk_lagging + 1
- ]
- ) * max_src_len - 1
- )
- )
-
- p_choose = torch.zeros(bsz, max_tgt_len * max_src_len)
-
- p_choose = p_choose.scatter(
- 1,
- activate_indices_offset,
- 1.0
- ).view(bsz, max_tgt_len, max_src_len)
-
- if key_padding_mask is not None:
- p_choose = p_choose.to(key_padding_mask)
- p_choose = p_choose.masked_fill(key_padding_mask.unsqueeze(1), 0)
-
- if incremental_state is not None:
- p_choose = p_choose[:, -1:]
-
- return p_choose.float()
-
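Editorial illustration of the read/write pattern the function above constructs (ignoring batching, padding, and the clamping of over-long indices): with wait-k lagging `k`, target step `n` places its single 1 at source position `n + k - 1`.

```python
import torch

k, tgt_len, src_len = 2, 3, 4
rows = torch.arange(tgt_len)
p_choose = torch.zeros(tgt_len, src_len)
p_choose[rows, rows + k - 1] = 1.0
print(p_choose)
# tensor([[0., 1., 0., 0.],
#         [0., 0., 1., 0.],
#         [0., 0., 0., 1.]])
```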
-
-def learnable_p_choose(
- energy,
- noise_mean: float = 0.0,
- noise_var: float = 0.0,
- training: bool = True
-):
- """
-    Calculate the step-wise probability of reading vs. writing:
-    1 means read, 0 means write.
- energy: bsz, tgt_len, src_len
- """
-
- noise = 0
- if training:
-        # add noise here to encourage discreteness
- noise = (
- torch.normal(noise_mean, noise_var, energy.size())
- .type_as(energy)
- .to(energy.device)
- )
-
- p_choose = torch.sigmoid(energy + noise)
-
- # p_choose: bsz * self.num_heads, tgt_len, src_len
- return p_choose
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/dataclass/constants.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/dataclass/constants.py
deleted file mode 100644
index 4f159cfe9ac72b0524228fe290181c6898787265..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/dataclass/constants.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from enum import Enum, EnumMeta
-from typing import List
-
-
-class StrEnumMeta(EnumMeta):
-    # this is a workaround for submitit pickling leading to instance checks failing in hydra for StrEnum, see
- # https://github.com/facebookresearch/hydra/issues/1156
- @classmethod
- def __instancecheck__(cls, other):
- return "enum" in str(type(other))
-
-
-class StrEnum(Enum, metaclass=StrEnumMeta):
- def __str__(self):
- return self.value
-
- def __eq__(self, other: str):
- return self.value == other
-
- def __repr__(self):
- return self.value
-
- def __hash__(self):
- return hash(str(self))
-
-
-def ChoiceEnum(choices: List[str]):
- """return the Enum class used to enforce list of choices"""
- return StrEnum("Choices", {k: k for k in choices})
-
-
-LOG_FORMAT_CHOICES = ChoiceEnum(["json", "none", "simple", "tqdm"])
-DDP_BACKEND_CHOICES = ChoiceEnum([
- "c10d", # alias for pytorch_ddp
- "fully_sharded", # FullyShardedDataParallel from fairscale
- "legacy_ddp",
- "no_c10d", # alias for legacy_ddp
- "pytorch_ddp",
- "slow_mo",
-])
-DDP_COMM_HOOK_CHOICES = ChoiceEnum(["none", "fp16"])
-DATASET_IMPL_CHOICES = ChoiceEnum(["raw", "lazy", "cached", "mmap", "fasta", "huffman"])
-GENERATION_CONSTRAINTS_CHOICES = ChoiceEnum(["ordered", "unordered"])
-GENERATION_DECODING_FORMAT_CHOICES = ChoiceEnum(
- ["unigram", "ensemble", "vote", "dp", "bs"]
-)
-ZERO_SHARDING_CHOICES = ChoiceEnum(["none", "os"])
-PIPELINE_CHECKPOINT_CHOICES = ChoiceEnum(["always", "never", "except_last"])
-PRINT_ALIGNMENT_CHOICES = ChoiceEnum(["hard", "soft"])
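Editorial note on how these `ChoiceEnum` constants behave in practice (assuming fairseq is installed so the real module is importable):

```python
from fairseq.dataclass.constants import LOG_FORMAT_CHOICES

assert str(LOG_FORMAT_CHOICES("json")) == "json"  # value lookup plus __str__
assert LOG_FORMAT_CHOICES("json") == "json"       # __eq__ compares against the raw string
try:
    LOG_FORMAT_CHOICES("not-a-format")
except ValueError:
    print("unknown choices raise ValueError")
```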
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_valid_subset_checks.py b/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_valid_subset_checks.py
deleted file mode 100644
index 3e9191bda66fccfebba34920f88bf7b1efea5f7e..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_valid_subset_checks.py
+++ /dev/null
@@ -1,138 +0,0 @@
-import os
-import shutil
-import tempfile
-import unittest
-
-from fairseq import options
-from fairseq.dataclass.utils import convert_namespace_to_omegaconf
-from fairseq.data.data_utils import raise_if_valid_subsets_unintentionally_ignored
-from .utils import create_dummy_data, preprocess_lm_data, train_language_model
-
-
-def make_lm_config(
- data_dir=None,
- extra_flags=None,
- task="language_modeling",
- arch="transformer_lm_gpt2_tiny",
-):
- task_args = [task]
- if data_dir is not None:
- task_args += [data_dir]
- train_parser = options.get_training_parser()
- train_args = options.parse_args_and_arch(
- train_parser,
- [
- "--task",
- *task_args,
- "--arch",
- arch,
- "--optimizer",
- "adam",
- "--lr",
- "0.0001",
- "--max-tokens",
- "500",
- "--tokens-per-sample",
- "500",
- "--save-dir",
- data_dir,
- "--max-epoch",
- "1",
- ]
- + (extra_flags or []),
- )
- cfg = convert_namespace_to_omegaconf(train_args)
- return cfg
-
-
-def write_empty_file(path):
- with open(path, "w"):
- pass
- assert os.path.exists(path)
-
-
-class TestValidSubsetsErrors(unittest.TestCase):
- """Test various filesystem, clarg combinations and ensure that error raising happens as expected"""
-
- def _test_case(self, paths, extra_flags):
- with tempfile.TemporaryDirectory() as data_dir:
- [
- write_empty_file(os.path.join(data_dir, f"{p}.bin"))
- for p in paths + ["train"]
- ]
- cfg = make_lm_config(data_dir, extra_flags=extra_flags)
- raise_if_valid_subsets_unintentionally_ignored(cfg)
-
- def test_default_raises(self):
- with self.assertRaises(ValueError):
- self._test_case(["valid", "valid1"], [])
- with self.assertRaises(ValueError):
- self._test_case(
- ["valid", "valid1", "valid2"], ["--valid-subset", "valid,valid1"]
- )
-
- def partially_specified_valid_subsets(self):
- with self.assertRaises(ValueError):
- self._test_case(
- ["valid", "valid1", "valid2"], ["--valid-subset", "valid,valid1"]
- )
- # Fix with ignore unused
- self._test_case(
- ["valid", "valid1", "valid2"],
- ["--valid-subset", "valid,valid1", "--ignore-unused-valid-subsets"],
- )
-
- def test_legal_configs(self):
- self._test_case(["valid"], [])
- self._test_case(["valid", "valid1"], ["--ignore-unused-valid-subsets"])
- self._test_case(["valid", "valid1"], ["--combine-val"])
- self._test_case(["valid", "valid1"], ["--valid-subset", "valid,valid1"])
- self._test_case(["valid", "valid1"], ["--valid-subset", "valid1"])
- self._test_case(
- ["valid", "valid1"], ["--combine-val", "--ignore-unused-valid-subsets"]
- )
- self._test_case(
- ["valid1"], ["--valid-subset", "valid1"]
- ) # valid.bin doesn't need to be ignored.
-
- def test_disable_validation(self):
- self._test_case([], ["--disable-validation"])
- self._test_case(["valid", "valid1"], ["--disable-validation"])
-
- def test_dummy_task(self):
- cfg = make_lm_config(task="dummy_lm")
- raise_if_valid_subsets_unintentionally_ignored(cfg)
-
- def test_masked_dummy_task(self):
- cfg = make_lm_config(task="dummy_masked_lm")
- raise_if_valid_subsets_unintentionally_ignored(cfg)
-
-
-class TestCombineValidSubsets(unittest.TestCase):
- def _train(self, extra_flags):
- with self.assertLogs() as logs:
- with tempfile.TemporaryDirectory("test_transformer_lm") as data_dir:
- create_dummy_data(data_dir, num_examples=20)
- preprocess_lm_data(data_dir)
-
- shutil.copyfile(f"{data_dir}/valid.bin", f"{data_dir}/valid1.bin")
- shutil.copyfile(f"{data_dir}/valid.idx", f"{data_dir}/valid1.idx")
- train_language_model(
- data_dir,
- "transformer_lm",
- ["--max-update", "0", "--log-format", "json"] + extra_flags,
- run_validation=False,
- )
- return [x.message for x in logs.records]
-
- def test_combined(self):
- flags = ["--combine-valid-subsets"]
- logs = self._train(flags)
- assert any(["valid1" in x for x in logs]) # loaded 100 examples from valid1
- assert not any(["valid1_ppl" in x for x in logs]) # metrics are combined
-
- def test_subsets(self):
- flags = ["--valid-subset", "valid,valid1"]
- logs = self._train(flags)
- assert any(["valid_ppl" in x for x in logs]) # loaded 100 examples from valid1
- assert any(["valid1_ppl" in x for x in logs]) # metrics are combined
diff --git a/spaces/OIUGLK/bingo/README.md b/spaces/OIUGLK/bingo/README.md
deleted file mode 100644
index 218767d1d7debd26932ffddca2ec0f421c0171a9..0000000000000000000000000000000000000000
--- a/spaces/OIUGLK/bingo/README.md
+++ /dev/null
@@ -1,195 +0,0 @@
----
-title: bingo
-emoji: 📉
-colorFrom: red
-colorTo: red
-sdk: docker
-pinned: true
-license: mit
-duplicated_from: hf4all/bingo
----
-
-
-
-# Bingo
-
-Bingo, a New Bing you can use without friction.
-
-It closely reproduces the main features of the New Bing web UI, works from within mainland China, is compatible with most Microsoft Bing AI capabilities, and can be self-hosted.
-
-
-
-[](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[](https://github.com/weaigc/bingo/blob/main/license)
-
-
- In this game, you'll gain a deeper understanding of language models. Your challenge is to create a question to ask a language model in a way that the answer it provides meets specific criteria. Click 'Next' to start.
- If you like our project, please give us a star ✨ on GitHub for the latest updates (Code Link). Thanks to the original game author for the interesting idea.
-
- Notice: The output is generated by an algorithmic scheme and may involve some randomness. It does not represent the attitudes and opinions of any developers or AI services involved in this project. We make no guarantees about the generated content.
- """
- tos_markdown = """
- ### Terms of use
- By using this service, players are required to agree to the following terms:
- The service is a research preview intended for non-commercial use only. It only provides limited safety measures and may generate offensive content. It must not be used for any illegal, harmful, violent, racist, or sexual purposes. The service may collect user dialogue data for future research.
- Please send email to opendilab@pjlab.org.cn if you get any inappropriate answer! We will delete those and keep improving our moderator.
- For an optimal experience, please use desktop computers for this demo, as mobile devices may compromise its quality.
- **Copyright 2023 OpenDILab.**
- """
-else:
- raise KeyError("invalid _LANG: {}".format(_LANG))
-
-
-def _need_api_key():
- return (_LLM == 'chatgpt' or _LLM == 'chatglm') and _LLM_KEY is None
-
-
-def _get_api_key_cfgs(api_key):
- if _LLM == 'chatgpt':
- return {'api_key': api_key}
- elif _LLM == 'chatglm':
- return {'api_key': api_key}
- else:
- return {}
-
-
-if __name__ == '__main__':
- with gr.Blocks(title=title, theme='ParityError/Interstellar') as demo:
- gr.Markdown(title_markdown)
-
- with gr.Row():
- gr_requirement = gr.HTML(value=requirement_ph, label=requirement_label)
- with gr.Row():
- with gr.Column():
- gr_question = gr.TextArea(placeholder=question_ph, label=question_label)
- gr_api_key = gr.Text(placeholder=api_ph, label=api_label, type='password', visible=_need_api_key())
- with gr.Row():
- gr_submit = gr.Button(submit_label, interactive=False)
- gr_next = gr.Button(next_label)
- with gr.Row():
- gr_select = gr.Radio(
- choices=[(QuestionExecutor(q, _LANG).question_name, i) for i, q in enumerate(_QUESTIONS)],
- label=select_label
- )
-
- with gr.Column():
- gr_uuid = gr.Text(value='', visible=False)
- gr_predict = gr.Label(label=predict_label)
- gr_answer = gr.TextArea(label=answer_label, lines=3)
- gr_explanation = gr.TextArea(label=explanation_label, lines=1)
- gr.Markdown(tos_markdown)
-
- def _postprocess_question_text(question_text):
- if _LANG == 'cn':
- idx = question_text.find(',')
- question_title = question_text[:idx]
- former, latter = question_title.split('(')
- question_title = former + ':' + latter[:-1]
- question_text = f"
"
- return question_text
-
-
- def _radio_select(uuid_, select_qid):
- global count
- if not uuid_:
- uuid_ = str(uuid.uuid4())
- count += 1
- logging.info(f'Player {count} starts the game now')
- global _QUESTION_SESSIONS
- if uuid_ not in _QUESTION_SESSIONS:
- _QUESTION_SESSIONS[uuid_] = set(), select_qid
- else:
- _exists, _ = _QUESTION_SESSIONS[uuid_]
- _QUESTION_SESSIONS[uuid_] = _exists, select_qid
-
- executor = QuestionExecutor(_QUESTIONS[select_qid], _LANG)
- question_text = _postprocess_question_text(executor.question_text)
- return question_text, '', '', {}, '', \
- gr.Button(submit_label, interactive=True), \
- gr.Button(next_label, interactive=False), \
- uuid_
-
- gr_select.select(
- _radio_select,
- inputs=[gr_uuid, gr_select],
- outputs=[
- gr_requirement, gr_question, gr_answer,
- gr_predict, gr_explanation, gr_submit, gr_next, gr_uuid,
- ],
- )
-
-
- def _next_question(uuid_):
- global count
- if not uuid_:
- uuid_ = str(uuid.uuid4())
- count += 1
- logging.info(f'Player {count} starts the game now')
- global _QUESTION_SESSIONS
- if uuid_ in _QUESTION_SESSIONS:
- _exists, _qid = _QUESTION_SESSIONS[uuid_]
- else:
- _exists, _qid = set(), -1
- _qid += 1
- _QUESTION_SESSIONS[uuid_] = _exists, _qid
-
- if _qid >= len(_QUESTIONS):
- del _QUESTION_SESSIONS[uuid_]
- logging.info(f'Player {count} has passed the game now')
- return game_cleared_label, '', '', {}, '', \
- gr.Button(submit_label, interactive=False), \
- gr.Button(try_again_label, interactive=True), \
- '', \
- gr.Radio(
- choices=[(QuestionExecutor(q, _LANG).question_name, i) for i, q in enumerate(_QUESTIONS)],
- label=select_label
- )
- else:
- executor = QuestionExecutor(_QUESTIONS[_qid], _LANG)
- question_text = _postprocess_question_text(executor.question_text)
- return question_text, '', '', {}, '', \
- gr.Button(submit_label, interactive=True), \
- gr.Button(next_label, interactive=False), \
- uuid_, \
- gr.Radio(
- choices=[(QuestionExecutor(q, _LANG).question_name, i) for i, q in enumerate(_QUESTIONS)],
- value=_qid,
- label=select_label,
- )
-
-
- gr_next.click(
- fn=_next_question,
- inputs=[gr_uuid],
- outputs=[
- gr_requirement, gr_question, gr_answer,
- gr_predict, gr_explanation, gr_submit, gr_next,
- gr_uuid, gr_select,
- ],
- )
-
-
- def _submit_answer(qs_text: str, api_key: str, uuid_: str):
- global _QUESTION_SESSIONS
- if _need_api_key() and not api_key:
- raise gr.Error(api_error_info)
-
- _exists, _qid = _QUESTION_SESSIONS[uuid_]
- executor = QuestionExecutor(
- _QUESTIONS[_qid], _LANG,
- llm=_LLM, llm_cfgs=_get_api_key_cfgs(api_key) if _need_api_key() else {'api_key': _LLM_KEY}
- )
- answer_text, correctness, explanation = executor.check(qs_text)
- labels = {correct_label: 1.0} if correctness else {wrong_label: 1.0}
- if correctness:
- _QUESTION_SESSIONS[uuid_] = (_exists | {_qid}), _qid
- return answer_text, labels, explanation, gr.Button(next_label, interactive=True), uuid_
- else:
- return answer_text, labels, explanation, gr.Button(next_label, interactive=False), uuid_
-
-
- gr_submit.click(
- _submit_answer,
- inputs=[gr_question, gr_api_key, gr_uuid],
- outputs=[gr_answer, gr_predict, gr_explanation, gr_next, gr_uuid],
- )
-
- concurrency = int(os.environ.get('CONCURRENCY', os.cpu_count()))
- favicon_path = os.path.join(os.path.dirname(__file__), 'llmriddles', 'assets', 'avatar.png')
- demo.queue().launch(max_threads=concurrency, favicon_path=favicon_path, share=True)
diff --git a/spaces/OpenDILabCommunity/LLMRiddlesChatGPTCN/llmriddles/questions/level1.py b/spaces/OpenDILabCommunity/LLMRiddlesChatGPTCN/llmriddles/questions/level1.py
deleted file mode 100644
index 3563e50681cafe59ef7f9c9eb7f9bc2994ff8a42..0000000000000000000000000000000000000000
--- a/spaces/OpenDILabCommunity/LLMRiddlesChatGPTCN/llmriddles/questions/level1.py
+++ /dev/null
@@ -1,204 +0,0 @@
-from .question import register_question
-
-
-def count_english_words(text: str):
- return len(text.split(' '))
-
-
-def count_chinese_words(text: str):
- return len(text)
-
-
-def check_if_chinese(text: str):
- return all('\u4e00' <= char <= '\u9fff' for char in text)
-
-
-def count_words(text: str, contain_punctuation: bool = False):
- chinese_words = []
- english_words = []
- other_words = []
- temp_english_words = []
- for char in text:
- if '\u4e00' <= char <= '\u9fff':
- chinese_words.append(char)
- if len(temp_english_words) > 0:
- english_words.append(''.join(temp_english_words))
- temp_english_words = []
- else:
- if char.isalpha():
- temp_english_words.append(char)
- else:
- if len(temp_english_words) > 0:
- english_words.append(''.join(temp_english_words))
- temp_english_words = []
- other_words.append(char)
-    # flush a trailing run of Latin letters so that text ending in an English word is counted
-    if len(temp_english_words) > 0:
-        english_words.append(''.join(temp_english_words))
-    if contain_punctuation:
- return len(chinese_words) + len(english_words) + len(other_words)
- else:
- return len(chinese_words) + len(english_words)
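A quick sanity check of the counter above: every CJK character counts as one word, every maximal run of Latin letters counts as one word, and other characters are only counted when `contain_punctuation=True`. The input string below is purely illustrative.

```python
# 2 Chinese characters + the Latin run "world"; "!" only counts as punctuation
assert count_words("你好world!") == 3
assert count_words("你好world!", contain_punctuation=True) == 4
```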
-
-
-CN_TEXT_1 = """
-第一章第一题(初来乍到),请你构造一个问题使模型的回答是一字不差的“1+1=3”(不需要引号)。
-
-请在下面的输入框内填写你的构造并点击按钮提交。
-"""
-
-EN_TEXT_1 = """
-For the first question in chapter 1, please construct a query for the model that will result in the exact answer "1+1=3" (no quotes needed).
-
-Please enter your query below and click the submit button
-"""
-
-
-def _checker_1(question_text: str, user_text: str, answer_text: str, lang: str):
- _ = question_text, lang
- answer_text = answer_text.strip()
- user_text = user_text.strip()
- if answer_text == '1+1=3':
- return True, None
- else:
- return False, None
-
-
-register_question(
- {
- 'cn': CN_TEXT_1,
- 'en': EN_TEXT_1,
- },
- checkers=_checker_1,
- name={'cn': '1-1 初来乍到', 'en': '1-1'},
-)
-
-CN_TEXT_2 = """
-第一章第二题(小试牛刀),请你输入三个字(及)以内的问题,使模型的回答在30个字以上。
-
-请在下面的输入框内填写你的问题并点击按钮提交。
-"""
-
-EN_TEXT_2 = """
-For the second question in chapter 1, please enter a question within three words so that the model’s answer is more than 30 words.
-
-Please enter your query below and click the submit button
-"""
-
-
-def _checker_2(question_text: str, user_text: str, answer_text: str, lang: str):
- _ = question_text, lang
- answer_text = answer_text.strip()
- user_text = user_text.strip()
-
- if count_words(user_text) > 3:
- return False, "用户的问题长度应该在三个字及以内" if lang == 'cn' else 'Question should be within three words.'
- elif count_words(answer_text) <= 30:
-        return False, "大语言模型的答案应该超过30个字" if lang == 'cn' else 'Answer should be more than 30 words.'
- else:
- return True, None
-
-
-register_question(
- {
- 'cn': CN_TEXT_2,
- 'en': EN_TEXT_2,
- },
- checkers=_checker_2,
- name={'cn': '1-2 小试牛刀', 'en': '1-2'},
-)
-
-CN_TEXT_3 = """
-第一章第三题(短说长话),请你输入一个字的问题,使模型的回答在100个字以上。
-
-请在下面的输入框内填写你的问题并点击按钮提交。
-"""
-
-EN_TEXT_3 = """
-For the third question in chapter 1, please enter a one-word question so that the model’s answer is more than 100 words.
-
-Please enter your query below and click the submit button
-"""
-
-
-def _checker_3(question_text: str, user_text: str, answer_text: str, lang: str):
- _ = question_text, lang
- answer_text = answer_text.strip()
- user_text = user_text.strip()
-
- if count_words(user_text) > 1:
- return False, "用户的问题长度应该在一个字及以内" if lang == 'cn' else 'Question should be one word.'
- elif count_words(answer_text) <= 100:
- return False, "大语言模型的答案应该超过100个字" if lang == 'cn' else 'Answer should be more than 100 words.'
- else:
- return True, None
-
-
-register_question(
- {
- 'cn': CN_TEXT_3,
- 'en': EN_TEXT_3,
- },
- checkers=_checker_3,
- name={'cn': '1-3 短说长话', 'en': '1-3'}
-)
-
-CN_TEXT_4 = """
-第一章第四题(短说短话),请输入一个字的问题,使模型的回答字数小于20个字。
-
-请在下面的输入框内填写你的问题并点击按钮提交。
-"""
-
-EN_TEXT_4 = """
-For the fourth question in chapter 1, please enter a one-word question so that the model’s answer is less than 20 words.
-
-Please enter your query below and click the submit button
-"""
-
-
-def _checker_4(question_text: str, user_text: str, answer_text: str, lang: str):
- _ = question_text, lang
- answer_text = answer_text.strip()
- user_text = user_text.strip()
-
- if count_words(user_text) > 1:
- return False, "用户的问题长度应该在一个字及以内" if lang == 'cn' else 'Question should be one word.'
- elif count_words(answer_text) >= 20:
- return False, "大语言模型的答案应该小于20个字" if lang == 'cn' else 'Answer should be less than 20 words.'
- else:
- return True, None
-
-
-register_question(
- {
- 'cn': CN_TEXT_4,
- 'en': EN_TEXT_4,
- },
- checkers=_checker_4,
- name={'cn': '1-4 短说短话', 'en': '1-4'},
-)
-
-# CN_TEXT_5 = """
-# 第一章第五题(回文不变),请输入一个本身不是回文串的问题,使无论正着问还是倒着问,模型的回答是一样的。
-
-# 请在下面的输入框内填写你的问题并点击按钮提交。
-# """
-
-# EN_TEXT_5 = """
-# For the fifth question in chapter 1, please enter a question that is not a palindrome string so that the model's answer is the same whether it is asked forward or backward.
-
-# Please enter your query below and click the submit button
-# """
-
-# def _checker_5(question_text: str, answer_text: str, lang: str):
-# _ = question_text, lang
-# answer_text = answer_text.strip()
-
-# if count_words(question_text) > 0:
-# return False, 'Question should be one word.'
-# elif count_words(answer_text) >= 20:
-# return False, 'Answer should be less than 20 words.'
-# else:
-# return True, None
-
-# register_question({
-# 'cn': CN_TEXT_5,
-# 'en': EN_TEXT_5,
-# }, _checker_5)
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/README_D2.md b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/README_D2.md
deleted file mode 100644
index a88ad7e21ce1d8651ec0d73848ce6dcd17f19d00..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/README_D2.md
+++ /dev/null
@@ -1,62 +0,0 @@
-
-
-Detectron2 is Facebook AI Research's next generation software system
-that implements state-of-the-art object detection algorithms.
-It is a ground-up rewrite of the previous version,
-[Detectron](https://github.com/facebookresearch/Detectron/),
-and it originates from [maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark/).
-
-
-
-
-
-### What's New
-* It is powered by the [PyTorch](https://pytorch.org) deep learning framework.
-* Includes more features such as panoptic segmentation, Densepose, Cascade R-CNN, rotated bounding boxes, PointRend,
- DeepLab, etc.
-* Can be used as a library to support [different projects](projects/) on top of it.
- We'll open source more research projects in this way.
-* It [trains much faster](https://detectron2.readthedocs.io/notes/benchmarks.html).
-* Models can be exported to TorchScript format or Caffe2 format for deployment.
-
-See our [blog post](https://ai.facebook.com/blog/-detectron2-a-pytorch-based-modular-object-detection-library-/)
-to see more demos and learn about detectron2.
-
-## Installation
-
-See [INSTALL.md](INSTALL.md).
-
-## Getting Started
-
-Follow the [installation instructions](https://detectron2.readthedocs.io/tutorials/install.html) to
-install detectron2.
-
-See [Getting Started with Detectron2](https://detectron2.readthedocs.io/tutorials/getting_started.html),
-and the [Colab Notebook](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5)
-to learn about basic usage.
-
-Learn more at our [documentation](https://detectron2.readthedocs.org).
-And see [projects/](projects/) for some projects that are built on top of detectron2.
-
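The quick-start below is a minimal inference sketch added for illustration, not part of the original README; it assumes a COCO-pretrained Faster R-CNN config from the model zoo and a local image named `input.jpg`.

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # keep detections above 50% confidence

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("input.jpg"))  # Detectron2 expects a BGR image
print(outputs["instances"].pred_classes)
print(outputs["instances"].pred_boxes)
```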
-## Model Zoo and Baselines
-
-We provide a large set of baseline results and trained models available for download in the [Detectron2 Model Zoo](MODEL_ZOO.md).
-
-
-## License
-
-Detectron2 is released under the [Apache 2.0 license](LICENSE).
-
-## Citing Detectron2
-
-If you use Detectron2 in your research or wish to refer to the baseline results published in the [Model Zoo](MODEL_ZOO.md), please use the following BibTeX entry.
-
-```BibTeX
-@misc{wu2019detectron2,
- author = {Yuxin Wu and Alexander Kirillov and Francisco Massa and
- Wan-Yen Lo and Ross Girshick},
- title = {Detectron2},
- howpublished = {\url{https://github.com/facebookresearch/detectron2}},
- year = {2019}
-}
-```
diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/paper_runfiles/generate_val_test.sh b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/paper_runfiles/generate_val_test.sh
deleted file mode 100644
index d9b2a370ceeeb8f401706f4303298db13e5fad91..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/paper_runfiles/generate_val_test.sh
+++ /dev/null
@@ -1,28 +0,0 @@
-#!/usr/bin/env bash
-
-# !!! file set to make test_large_30k from the vanilla test_large: configs/test_large_30k.lst
-
-# paths to data are valid for mml7
-PLACES_ROOT="/data/inpainting/Places365"
-OUT_DIR="/data/inpainting/paper_data/Places365_val_test"
-
-source "$(dirname $0)/env.sh"
-
-for datadir in test_large_30k # val_large
-do
- for conf in random_thin_256 random_medium_256 random_thick_256 random_thin_512 random_medium_512 random_thick_512
- do
- "$BINDIR/gen_mask_dataset.py" "$CONFIGDIR/data_gen/${conf}.yaml" \
- "$PLACES_ROOT/$datadir" "$OUT_DIR/$datadir/$conf" --n-jobs 8
-
- "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats"
- done
-
- for conf in segm_256 segm_512
- do
- "$BINDIR/gen_mask_dataset.py" "$CONFIGDIR/data_gen/${conf}.yaml" \
- "$PLACES_ROOT/$datadir" "$OUT_DIR/$datadir/$conf" --n-jobs 2
-
- "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats"
- done
-done
diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/report_from_tb.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/report_from_tb.py
deleted file mode 100644
index 9a444e6cd8027f88bd34adfc0b1dd000bbb4b2be..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/report_from_tb.py
+++ /dev/null
@@ -1,83 +0,0 @@
-#!/usr/bin/env python3
-
-import glob
-import os
-import re
-
-import tensorflow as tf
-from torch.utils.tensorboard import SummaryWriter
-
-
-GROUPING_RULES = [
-    re.compile(r'^(?P<group>train|test|val|extra_val_.*?(256|512))_(?P<title>.*)', re.I)
-]
-
-
-DROP_RULES = [
- re.compile(r'_std$', re.I)
-]
-
-
-def need_drop(tag):
- for rule in DROP_RULES:
- if rule.search(tag):
- return True
- return False
-
-
-def get_group_and_title(tag):
- for rule in GROUPING_RULES:
- match = rule.search(tag)
- if match is None:
- continue
- return match.group('group'), match.group('title')
- return None, None
-
-
-def main(args):
- os.makedirs(args.outdir, exist_ok=True)
-
- ignored_events = set()
-
- for orig_fname in glob.glob(args.inglob):
- cur_dirpath = os.path.dirname(orig_fname) # remove filename, this should point to "version_0" directory
- subdirname = os.path.basename(cur_dirpath) # == "version_0" most of time
- exp_root_path = os.path.dirname(cur_dirpath) # remove "version_0"
- exp_name = os.path.basename(exp_root_path)
-
- writers_by_group = {}
-
- for e in tf.compat.v1.train.summary_iterator(orig_fname):
- for v in e.summary.value:
- if need_drop(v.tag):
- continue
-
- cur_group, cur_title = get_group_and_title(v.tag)
- if cur_group is None:
- if v.tag not in ignored_events:
- print(f'WARNING: Could not detect group for {v.tag}, ignoring it')
- ignored_events.add(v.tag)
- continue
-
- cur_writer = writers_by_group.get(cur_group, None)
- if cur_writer is None:
- if args.include_version:
- cur_outdir = os.path.join(args.outdir, exp_name, f'{subdirname}_{cur_group}')
- else:
- cur_outdir = os.path.join(args.outdir, exp_name, cur_group)
- cur_writer = SummaryWriter(cur_outdir)
- writers_by_group[cur_group] = cur_writer
-
- cur_writer.add_scalar(cur_title, v.simple_value, global_step=e.step, walltime=e.wall_time)
-
-
-if __name__ == '__main__':
- import argparse
-
- aparser = argparse.ArgumentParser()
- aparser.add_argument('inglob', type=str)
- aparser.add_argument('outdir', type=str)
- aparser.add_argument('--include-version', action='store_true',
- help='Include subdirectory name e.g. "version_0" into output path')
-
- main(aparser.parse_args())
diff --git a/spaces/OpenGVLab/all-seeing/utils.py b/spaces/OpenGVLab/all-seeing/utils.py
deleted file mode 100644
index 311cd84944857e93ccf5654c924893807e36858c..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/all-seeing/utils.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import requests
-from PIL import Image,ImageDraw
-from io import BytesIO
-import random
-import os
-
-
-def imread(path):
- if path.startswith('http') or path.startswith('https'):
- response = requests.get(path)
- image = Image.open(BytesIO(response.content)).convert('RGB')
- else:
- image = Image.open(path).convert('RGB')
- return image
-
-def random_image(root_path):
- img_list = os.listdir(root_path)
- img_item = random.sample(img_list, 1)[0]
- return Image.open(os.path.join(root_path, img_item))
-
-def draw_points_to_image(image:Image.Image,points:list,radius=16,color = (255, 0, 0)):
- draw = ImageDraw.Draw(image)
- for [x,y] in points:
- draw.ellipse((x - radius, y - radius, x + radius,y + radius), fill=color)
- return image
-
-def in_rectangle(bbox,points):
- for point in points:
- if min(max(point[0],bbox[0]),bbox[0]+bbox[2]) != point[0] or min(max(point[1],bbox[1]),bbox[1]+bbox[3]) != point[1] :
- return False
-
- return True
-
-
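A short usage sketch for the helpers above; the file name is a placeholder, and note that `in_rectangle` takes the box as `(x, y, width, height)` rather than corner coordinates.

```python
img = imread('input.jpg')                                 # local path; http(s) URLs are handled too
img = draw_points_to_image(img, [[120, 80], [150, 100]])  # draws filled red circles

# a 100x50 box anchored at (100, 60): both points fall inside it
print(in_rectangle([100, 60, 100, 50], [[120, 80], [150, 100]]))   # True
```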
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/parallel/data_parallel.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/parallel/data_parallel.py
deleted file mode 100644
index 79b5f69b654cf647dc7ae9174223781ab5c607d2..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/parallel/data_parallel.py
+++ /dev/null
@@ -1,89 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from itertools import chain
-
-from torch.nn.parallel import DataParallel
-
-from .scatter_gather import scatter_kwargs
-
-
-class MMDataParallel(DataParallel):
- """The DataParallel module that supports DataContainer.
-
- MMDataParallel has two main differences with PyTorch DataParallel:
-
- - It supports a custom type :class:`DataContainer` which allows more
- flexible control of input data during both GPU and CPU inference.
- - It implement two more APIs ``train_step()`` and ``val_step()``.
-
- Args:
- module (:class:`nn.Module`): Module to be encapsulated.
- device_ids (list[int]): Device IDS of modules to be scattered to.
- Defaults to None when GPU is not available.
- output_device (str | int): Device ID for output. Defaults to None.
- dim (int): Dimension used to scatter the data. Defaults to 0.
- """
-
- def __init__(self, *args, dim=0, **kwargs):
- super(MMDataParallel, self).__init__(*args, dim=dim, **kwargs)
- self.dim = dim
-
- def forward(self, *inputs, **kwargs):
- """Override the original forward function.
-
- The main difference lies in the CPU inference where the data in
- :class:`DataContainers` will still be gathered.
- """
- if not self.device_ids:
- # We add the following line thus the module could gather and
- # convert data containers as those in GPU inference
- inputs, kwargs = self.scatter(inputs, kwargs, [-1])
- return self.module(*inputs[0], **kwargs[0])
- else:
- return super().forward(*inputs, **kwargs)
-
- def scatter(self, inputs, kwargs, device_ids):
- return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)
-
- def train_step(self, *inputs, **kwargs):
- if not self.device_ids:
- # We add the following line thus the module could gather and
- # convert data containers as those in GPU inference
- inputs, kwargs = self.scatter(inputs, kwargs, [-1])
- return self.module.train_step(*inputs[0], **kwargs[0])
-
- assert len(self.device_ids) == 1, \
- ('MMDataParallel only supports single GPU training, if you need to'
- ' train with multiple GPUs, please use MMDistributedDataParallel'
- 'instead.')
-
- for t in chain(self.module.parameters(), self.module.buffers()):
- if t.device != self.src_device_obj:
- raise RuntimeError(
- 'module must have its parameters and buffers '
- f'on device {self.src_device_obj} (device_ids[0]) but '
- f'found one of them on device: {t.device}')
-
- inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
- return self.module.train_step(*inputs[0], **kwargs[0])
-
- def val_step(self, *inputs, **kwargs):
- if not self.device_ids:
- # We add the following line thus the module could gather and
- # convert data containers as those in GPU inference
- inputs, kwargs = self.scatter(inputs, kwargs, [-1])
- return self.module.val_step(*inputs[0], **kwargs[0])
-
- assert len(self.device_ids) == 1, \
- ('MMDataParallel only supports single GPU training, if you need to'
- ' train with multiple GPUs, please use MMDistributedDataParallel'
- ' instead.')
-
- for t in chain(self.module.parameters(), self.module.buffers()):
- if t.device != self.src_device_obj:
- raise RuntimeError(
- 'module must have its parameters and buffers '
- f'on device {self.src_device_obj} (device_ids[0]) but '
- f'found one of them on device: {t.device}')
-
- inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
- return self.module.val_step(*inputs[0], **kwargs[0])
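A minimal sketch of how the wrapper above is typically used, assuming a single visible GPU and a module that implements `train_step` the way MMCV runners expect; the toy model is only illustrative.

```python
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(8, 2)

    def forward(self, x):
        return self.net(x)

    def train_step(self, data, optimizer=None):
        # MMCV runners call train_step() rather than forward()
        return {'loss': self.forward(data).mean()}

# single-GPU only, per the asserts in train_step()/val_step() above
model = MMDataParallel(ToyModel().cuda(), device_ids=[0])
losses = model.train_step(torch.randn(4, 8))
```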
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/utils/path.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/utils/path.py
deleted file mode 100644
index 7dab4b3041413b1432b0f434b8b14783097d33c6..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/utils/path.py
+++ /dev/null
@@ -1,101 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os
-import os.path as osp
-from pathlib import Path
-
-from .misc import is_str
-
-
-def is_filepath(x):
- return is_str(x) or isinstance(x, Path)
-
-
-def fopen(filepath, *args, **kwargs):
- if is_str(filepath):
- return open(filepath, *args, **kwargs)
- elif isinstance(filepath, Path):
- return filepath.open(*args, **kwargs)
- raise ValueError('`filepath` should be a string or a Path')
-
-
-def check_file_exist(filename, msg_tmpl='file "{}" does not exist'):
- if not osp.isfile(filename):
- raise FileNotFoundError(msg_tmpl.format(filename))
-
-
-def mkdir_or_exist(dir_name, mode=0o777):
- if dir_name == '':
- return
- dir_name = osp.expanduser(dir_name)
- os.makedirs(dir_name, mode=mode, exist_ok=True)
-
-
-def symlink(src, dst, overwrite=True, **kwargs):
- if os.path.lexists(dst) and overwrite:
- os.remove(dst)
- os.symlink(src, dst, **kwargs)
-
-
-def scandir(dir_path, suffix=None, recursive=False, case_sensitive=True):
- """Scan a directory to find the interested files.
-
- Args:
- dir_path (str | obj:`Path`): Path of the directory.
- suffix (str | tuple(str), optional): File suffix that we are
- interested in. Default: None.
- recursive (bool, optional): If set to True, recursively scan the
- directory. Default: False.
- case_sensitive (bool, optional) : If set to False, ignore the case of
- suffix. Default: True.
-
- Returns:
- A generator for all the interested files with relative paths.
- """
- if isinstance(dir_path, (str, Path)):
- dir_path = str(dir_path)
- else:
- raise TypeError('"dir_path" must be a string or Path object')
-
- if (suffix is not None) and not isinstance(suffix, (str, tuple)):
- raise TypeError('"suffix" must be a string or tuple of strings')
-
- if suffix is not None and not case_sensitive:
- suffix = suffix.lower() if isinstance(suffix, str) else tuple(
- item.lower() for item in suffix)
-
- root = dir_path
-
- def _scandir(dir_path, suffix, recursive, case_sensitive):
- for entry in os.scandir(dir_path):
- if not entry.name.startswith('.') and entry.is_file():
- rel_path = osp.relpath(entry.path, root)
- _rel_path = rel_path if case_sensitive else rel_path.lower()
- if suffix is None or _rel_path.endswith(suffix):
- yield rel_path
- elif recursive and os.path.isdir(entry.path):
- # scan recursively if entry.path is a directory
- yield from _scandir(entry.path, suffix, recursive,
- case_sensitive)
-
- return _scandir(dir_path, suffix, recursive, case_sensitive)
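For example, collecting every Python or YAML file under a config directory with the generator above (the directory name is illustrative):

```python
for rel_path in scandir('configs', suffix=('.py', '.yaml'), recursive=True):
    print(rel_path)   # paths are yielded relative to 'configs'
```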
-
-
-def find_vcs_root(path, markers=('.git', )):
- """Finds the root directory (including itself) of specified markers.
-
- Args:
- path (str): Path of directory or file.
- markers (list[str], optional): List of file or directory names.
-
- Returns:
- The directory contained one of the markers or None if not found.
- """
- if osp.isfile(path):
- path = osp.dirname(path)
-
- prev, cur = None, osp.abspath(osp.expanduser(path))
- while cur != prev:
- if any(osp.exists(osp.join(cur, marker)) for marker in markers):
- return cur
- prev, cur = cur, osp.split(cur)[0]
- return None
diff --git a/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/model.py b/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/model.py
deleted file mode 100644
index 901cb7a86ea5b13912ff2a98680f368d18e36d9f..0000000000000000000000000000000000000000
--- a/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/model.py
+++ /dev/null
@@ -1,768 +0,0 @@
-import math
-import random
-import torch
-from torch import nn
-from torch.nn import functional as F
-import numpy as np
-
-from models.stylegan2.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
-
-
-class PixelNorm(nn.Module):
- def __init__(self):
- super().__init__()
-
- def forward(self, input):
- return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8)
-
-
-def make_kernel(k):
- k = torch.tensor(k, dtype=torch.float32)
-
- if k.ndim == 1:
- k = k[None, :] * k[:, None]
-
- k /= k.sum()
-
- return k
-
-
-class Upsample(nn.Module):
- def __init__(self, kernel, factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel) * (factor ** 2)
- self.register_buffer('kernel', kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad)
-
- return out
-
-
-class Downsample(nn.Module):
- def __init__(self, kernel, factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel)
- self.register_buffer('kernel', kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad)
-
- return out
-
-
-class Blur(nn.Module):
- def __init__(self, kernel, pad, upsample_factor=1):
- super().__init__()
-
- kernel = make_kernel(kernel)
-
- if upsample_factor > 1:
- kernel = kernel * (upsample_factor ** 2)
-
- self.register_buffer('kernel', kernel)
-
- self.pad = pad
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, pad=self.pad)
-
- return out
-
-
-class EqualConv2d(nn.Module):
- def __init__(
- self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True, dilation=1 ## modified
- ):
- super().__init__()
-
- self.weight = nn.Parameter(
- torch.randn(out_channel, in_channel, kernel_size, kernel_size)
- )
- self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2)
-
- self.stride = stride
- self.padding = padding
- self.dilation = dilation ## modified
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_channel))
-
- else:
- self.bias = None
-
- def forward(self, input):
- out = F.conv2d(
- input,
- self.weight * self.scale,
- bias=self.bias,
- stride=self.stride,
- padding=self.padding,
- dilation=self.dilation, ## modified
- )
-
- return out
-
- def __repr__(self):
- return (
- f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},"
- f" {self.weight.shape[2]}, stride={self.stride}, padding={self.padding}, dilation={self.dilation})" ## modified
- )
-
-
-class EqualLinear(nn.Module):
- def __init__(
- self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None
- ):
- super().__init__()
-
- self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul))
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init))
-
- else:
- self.bias = None
-
- self.activation = activation
-
- self.scale = (1 / math.sqrt(in_dim)) * lr_mul
- self.lr_mul = lr_mul
-
- def forward(self, input):
- if self.activation:
- out = F.linear(input, self.weight * self.scale)
- out = fused_leaky_relu(out, self.bias * self.lr_mul)
-
- else:
- out = F.linear(
- input, self.weight * self.scale, bias=self.bias * self.lr_mul
- )
-
- return out
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})'
- )
-
-
-class ScaledLeakyReLU(nn.Module):
- def __init__(self, negative_slope=0.2):
- super().__init__()
-
- self.negative_slope = negative_slope
-
- def forward(self, input):
- out = F.leaky_relu(input, negative_slope=self.negative_slope)
-
- return out * math.sqrt(2)
-
-
-class ModulatedConv2d(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- demodulate=True,
- upsample=False,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- dilation=1, ##### modified
- ):
- super().__init__()
-
- self.eps = 1e-8
- self.kernel_size = kernel_size
- self.in_channel = in_channel
- self.out_channel = out_channel
- self.upsample = upsample
- self.downsample = downsample
- self.dilation = dilation ##### modified
-
- if upsample:
- factor = 2
- p = (len(blur_kernel) - factor) - (kernel_size - 1)
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2 + 1
-
- self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor)
-
- # to simulate transconv + blur
- # we use dilated transposed conv with blur kernel as weight + dilated transconv
- if dilation > 1: ##### modified
- blur_weight = torch.randn(1, 1, 3, 3) * 0 + 1
- blur_weight[:,:,0,1] = 2
- blur_weight[:,:,1,0] = 2
- blur_weight[:,:,1,2] = 2
- blur_weight[:,:,2,1] = 2
- blur_weight[:,:,1,1] = 4
- blur_weight = blur_weight / 16.0
- self.register_buffer("blur_weight", blur_weight)
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.blur = Blur(blur_kernel, pad=(pad0, pad1))
-
- fan_in = in_channel * kernel_size ** 2
- self.scale = 1 / math.sqrt(fan_in)
- self.padding = kernel_size // 2 + dilation - 1 ##### modified
-
- self.weight = nn.Parameter(
- torch.randn(1, out_channel, in_channel, kernel_size, kernel_size)
- )
-
- self.modulation = EqualLinear(style_dim, in_channel, bias_init=1)
-
- self.demodulate = demodulate
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, '
- f'upsample={self.upsample}, downsample={self.downsample})'
- )
-
- def forward(self, input, style):
- batch, in_channel, height, width = input.shape
-
- style = self.modulation(style).view(batch, 1, in_channel, 1, 1)
- weight = self.scale * self.weight * style
-
- if self.demodulate:
- demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8)
- weight = weight * demod.view(batch, self.out_channel, 1, 1, 1)
-
- weight = weight.view(
- batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
-
- if self.upsample:
- input = input.view(1, batch * in_channel, height, width)
- weight = weight.view(
- batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
- weight = weight.transpose(1, 2).reshape(
- batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size
- )
-
- if self.dilation > 1: ##### modified
- # to simulate out = self.blur(out)
- out = F.conv_transpose2d(
- input, self.blur_weight.repeat(batch*in_channel,1,1,1), padding=0, groups=batch*in_channel, dilation=self.dilation//2)
- # to simulate the next line
- out = F.conv_transpose2d(
- out, weight, padding=self.dilation, groups=batch, dilation=self.dilation//2)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
- return out
-
- out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
- out = self.blur(out)
-
- elif self.downsample:
- input = self.blur(input)
- _, _, height, width = input.shape
- input = input.view(1, batch * in_channel, height, width)
- out = F.conv2d(input, weight, padding=0, stride=2, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- else:
- input = input.view(1, batch * in_channel, height, width)
- out = F.conv2d(input, weight, padding=self.padding, groups=batch, dilation=self.dilation) ##### modified
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- return out
-
-
-class NoiseInjection(nn.Module):
- def __init__(self):
- super().__init__()
-
- self.weight = nn.Parameter(torch.zeros(1))
-
- def forward(self, image, noise=None):
- if noise is None:
- batch, _, height, width = image.shape
- noise = image.new_empty(batch, 1, height, width).normal_()
- else: ##### modified, to make the resolution matches
- batch, _, height, width = image.shape
- _, _, height1, width1 = noise.shape
- if height != height1 or width != width1:
- noise = F.adaptive_avg_pool2d(noise, (height, width))
-
- return image + self.weight * noise
-
-
-class ConstantInput(nn.Module):
- def __init__(self, channel, size=4):
- super().__init__()
-
- self.input = nn.Parameter(torch.randn(1, channel, size, size))
-
- def forward(self, input):
- batch = input.shape[0]
- out = self.input.repeat(batch, 1, 1, 1)
-
- return out
-
-
-class StyledConv(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=False,
- blur_kernel=[1, 3, 3, 1],
- demodulate=True,
- dilation=1, ##### modified
- ):
- super().__init__()
-
- self.conv = ModulatedConv2d(
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=upsample,
- blur_kernel=blur_kernel,
- demodulate=demodulate,
- dilation=dilation, ##### modified
- )
-
- self.noise = NoiseInjection()
- self.activate = FusedLeakyReLU(out_channel)
-
- def forward(self, input, style, noise=None):
- out = self.conv(input, style)
- out = self.noise(out, noise=noise)
- out = self.activate(out)
-
- return out
-
-
-class ToRGB(nn.Module):
- def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1], dilation=1): ##### modified
- super().__init__()
-
- if upsample:
- self.upsample = Upsample(blur_kernel)
-
- self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False)
- self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1))
-
- self.dilation = dilation ##### modified
- if dilation > 1: ##### modified
- blur_weight = torch.randn(1, 1, 3, 3) * 0 + 1
- blur_weight[:,:,0,1] = 2
- blur_weight[:,:,1,0] = 2
- blur_weight[:,:,1,2] = 2
- blur_weight[:,:,2,1] = 2
- blur_weight[:,:,1,1] = 4
- blur_weight = blur_weight / 16.0
- self.register_buffer("blur_weight", blur_weight)
-
- def forward(self, input, style, skip=None):
- out = self.conv(input, style)
- out = out + self.bias
-
- if skip is not None:
- if self.dilation == 1:
- skip = self.upsample(skip)
- else: ##### modified, to simulate skip = self.upsample(skip)
- batch, in_channel, _, _ = skip.shape
- skip = F.conv2d(skip, self.blur_weight.repeat(in_channel,1,1,1),
- padding=self.dilation//2, groups=in_channel, dilation=self.dilation//2)
-
- out = out + skip
-
- return out
-
-
-class Generator(nn.Module):
- def __init__(
- self,
- size,
- style_dim,
- n_mlp,
- channel_multiplier=2,
- blur_kernel=[1, 3, 3, 1],
- lr_mlp=0.01,
- ):
- super().__init__()
-
- self.size = size
-
- self.style_dim = style_dim
-
- layers = [PixelNorm()]
-
- for i in range(n_mlp):
- layers.append(
- EqualLinear(
- style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu'
- )
- )
-
- self.style = nn.Sequential(*layers)
-
- self.channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256 * channel_multiplier,
- 128: 128 * channel_multiplier,
- 256: 64 * channel_multiplier,
- 512: 32 * channel_multiplier,
- 1024: 16 * channel_multiplier,
- }
-
- self.input = ConstantInput(self.channels[4])
- self.conv1 = StyledConv(
- self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel, dilation=8 ##### modified
- )
- self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False)
-
- self.log_size = int(math.log(size, 2))
- self.num_layers = (self.log_size - 2) * 2 + 1
-
- self.convs = nn.ModuleList()
- self.upsamples = nn.ModuleList()
- self.to_rgbs = nn.ModuleList()
- self.noises = nn.Module()
-
- in_channel = self.channels[4]
-
- for layer_idx in range(self.num_layers):
- res = (layer_idx + 5) // 2
- shape = [1, 1, 2 ** res, 2 ** res]
- self.noises.register_buffer(f'noise_{layer_idx}', torch.randn(*shape))
-
- for i in range(3, self.log_size + 1):
- out_channel = self.channels[2 ** i]
-
- self.convs.append(
- StyledConv(
- in_channel,
- out_channel,
- 3,
- style_dim,
- upsample=True,
- blur_kernel=blur_kernel,
- dilation=max(1, 32 // (2**(i-1))) ##### modified
- )
- )
-
- self.convs.append(
- StyledConv(
- out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel, dilation=max(1, 32 // (2**i)) ##### modified
- )
- )
-
- self.to_rgbs.append(ToRGB(out_channel, style_dim, dilation=max(1, 32 // (2**(i-1))))) ##### modified
-
- in_channel = out_channel
-
- self.n_latent = self.log_size * 2 - 2
-
- def make_noise(self):
- device = self.input.input.device
-
- noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)]
-
- for i in range(3, self.log_size + 1):
- for _ in range(2):
- noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device))
-
- return noises
-
- def mean_latent(self, n_latent):
- latent_in = torch.randn(
- n_latent, self.style_dim, device=self.input.input.device
- )
- latent = self.style(latent_in).mean(0, keepdim=True)
-
- return latent
-
- def get_latent(self, input):
- return self.style(input)
-
- # styles is the latent code w+
- # first_layer_feature is the first-layer input feature f
- # first_layer_feature_ind indicate which layer of G accepts f (should always=0, the first layer)
- # skip_layer_feature is the encoder features sent by skip connection
- # fusion_block is the network to fuse the encoder feature and decoder feature
- # zero_noise is to force the noise to be zero (to avoid flickers for videos)
- # editing_w is the editing vector v used in video face editing
- def forward(
- self,
- styles,
- return_latents=False,
- return_features=False,
- inject_index=None,
- truncation=1,
- truncation_latent=None,
- input_is_latent=False,
- noise=None,
- randomize_noise=True,
- first_layer_feature = None, ##### modified
- first_layer_feature_ind = 0, ##### modified
- skip_layer_feature = None, ##### modified
- fusion_block = None, ##### modified
- zero_noise = False, ##### modified
- editing_w = None, ##### modified
- ):
- if not input_is_latent:
- styles = [self.style(s) for s in styles]
-
- if zero_noise:
- noise = [
- getattr(self.noises, f'noise_{i}') * 0.0 for i in range(self.num_layers)
- ]
- elif noise is None:
- if randomize_noise:
- noise = [None] * self.num_layers
- else:
- noise = [
- getattr(self.noises, f'noise_{i}') for i in range(self.num_layers)
- ]
-
- if truncation < 1:
- style_t = []
-
- for style in styles:
- style_t.append(
- truncation_latent + truncation * (style - truncation_latent)
- )
-
- styles = style_t
-
- if len(styles) < 2:
- inject_index = self.n_latent
-
- if styles[0].ndim < 3:
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
- else:
- latent = styles[0]
-
- else:
- if inject_index is None:
- inject_index = random.randint(1, self.n_latent - 1)
-
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
- latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1)
-
- latent = torch.cat([latent, latent2], 1)
-
- # w+ + v for video face editing
- if editing_w is not None: ##### modified
- latent = latent + editing_w
-
- # the original StyleGAN
- if first_layer_feature is None: ##### modified
- out = self.input(latent)
- out = F.adaptive_avg_pool2d(out, 32) ##### modified
- out = self.conv1(out, latent[:, 0], noise=noise[0])
- skip = self.to_rgb1(out, latent[:, 1])
- # the default StyleGANEX, replacing the first layer of G
- elif first_layer_feature_ind == 0: ##### modified
- out = first_layer_feature[0] ##### modified
- out = self.conv1(out, latent[:, 0], noise=noise[0])
- skip = self.to_rgb1(out, latent[:, 1])
- # maybe we can also use the second layer of G to accept f?
- else: ##### modified
- out = first_layer_feature[0] ##### modified
- skip = first_layer_feature[1] ##### modified
-
- i = 1
- for conv1, conv2, noise1, noise2, to_rgb in zip(
- self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs
- ):
- # these layers accepts skipped encoder layer, use fusion block to fuse the encoder feature and decoder feature
- if skip_layer_feature and fusion_block and i//2 < len(skip_layer_feature) and i//2 < len(fusion_block):
- if editing_w is None:
- out, skip = fusion_block[i//2](skip_layer_feature[i//2], out, skip)
- else:
- out, skip = fusion_block[i//2](skip_layer_feature[i//2], out, skip, editing_w[:,i])
- out = conv1(out, latent[:, i], noise=noise1)
- out = conv2(out, latent[:, i + 1], noise=noise2)
- skip = to_rgb(out, latent[:, i + 2], skip)
-
- i += 2
-
- image = skip
-
- if return_latents:
- return image, latent
- elif return_features:
- return image, out
- else:
- return image, None
-
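Tying the constructor and the forward arguments described in the comments above together, a minimal sampling sketch; sizes and batch shape are illustrative, it assumes the fused CUDA ops in `models.stylegan2.op` are available, and with `first_layer_feature=None` the original constant-input path is used.

```python
import torch

g = Generator(size=256, style_dim=512, n_mlp=8)
z = torch.randn(2, 512)                      # a batch of z latents; self.style() maps them to w
img, _ = g([z])                              # styles is a list of codes; returns (image, None) by default
img, w_plus = g([z], return_latents=True)    # additionally return the w+ codes
```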
-
-class ConvLayer(nn.Sequential):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- bias=True,
- activate=True,
- dilation=1, ## modified
- ):
- layers = []
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- layers.append(Blur(blur_kernel, pad=(pad0, pad1)))
-
- stride = 2
- self.padding = 0
-
- else:
- stride = 1
- self.padding = kernel_size // 2 + dilation-1 ## modified
-
- layers.append(
- EqualConv2d(
- in_channel,
- out_channel,
- kernel_size,
- padding=self.padding,
- stride=stride,
- bias=bias and not activate,
- dilation=dilation, ## modified
- )
- )
-
- if activate:
- if bias:
- layers.append(FusedLeakyReLU(out_channel))
-
- else:
- layers.append(ScaledLeakyReLU(0.2))
-
- super().__init__(*layers)
-
-
-class ResBlock(nn.Module):
- def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- self.conv1 = ConvLayer(in_channel, in_channel, 3)
- self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True)
-
- self.skip = ConvLayer(
- in_channel, out_channel, 1, downsample=True, activate=False, bias=False
- )
-
- def forward(self, input):
- out = self.conv1(input)
- out = self.conv2(out)
-
- skip = self.skip(input)
- out = (out + skip) / math.sqrt(2)
-
- return out
-
-
-class Discriminator(nn.Module):
- def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1], img_channel=3):
- super().__init__()
-
- channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256 * channel_multiplier,
- 128: 128 * channel_multiplier,
- 256: 64 * channel_multiplier,
- 512: 32 * channel_multiplier,
- 1024: 16 * channel_multiplier,
- }
-
- convs = [ConvLayer(img_channel, channels[size], 1)]
-
- log_size = int(math.log(size, 2))
-
- in_channel = channels[size]
-
- for i in range(log_size, 2, -1):
- out_channel = channels[2 ** (i - 1)]
-
- convs.append(ResBlock(in_channel, out_channel, blur_kernel))
-
- in_channel = out_channel
-
- self.convs = nn.Sequential(*convs)
-
- self.stddev_group = 4
- self.stddev_feat = 1
-
- self.final_conv = ConvLayer(in_channel + 1, channels[4], 3)
- self.final_linear = nn.Sequential(
- EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'),
- EqualLinear(channels[4], 1),
- )
-
- self.size = size ##### modified
-
- def forward(self, input):
- # for input that not satisfies the target size, we crop it to extract a small image of the target size.
- _, _, h, w = input.shape ##### modified
- i, j = torch.randint(0, h+1-self.size, size=(1,)).item(), torch.randint(0, w+1-self.size, size=(1,)).item() ##### modified
- out = self.convs(input[:,:,i:i+self.size,j:j+self.size]) ##### modified
-
- batch, channel, height, width = out.shape
- group = min(batch, self.stddev_group)
- stddev = out.view(
- group, -1, self.stddev_feat, channel // self.stddev_feat, height, width
- )
- stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8)
- stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2)
- stddev = stddev.repeat(group, 1, height, width)
- out = torch.cat([out, stddev], 1)
-
- out = self.final_conv(out)
-
- out = out.view(batch, -1)
- out = self.final_linear(out)
-
- return out
\ No newline at end of file
diff --git a/spaces/PKUWilliamYang/VToonify/vtoonify/model/encoder/__init__.py b/spaces/PKUWilliamYang/VToonify/vtoonify/model/encoder/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/PSLD/PSLD/diffusion-posterior-sampling/guided_diffusion/fp16_util.py b/spaces/PSLD/PSLD/diffusion-posterior-sampling/guided_diffusion/fp16_util.py
deleted file mode 100644
index c1961650ea28affa1fe64a1794f6342d355050aa..0000000000000000000000000000000000000000
--- a/spaces/PSLD/PSLD/diffusion-posterior-sampling/guided_diffusion/fp16_util.py
+++ /dev/null
@@ -1,234 +0,0 @@
-"""
-Helpers to train with 16-bit precision.
-"""
-
-import numpy as np
-import torch as th
-import torch.nn as nn
-from torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors
-
-from . import logger
-
-INITIAL_LOG_LOSS_SCALE = 20.0
-
-
-def convert_module_to_f16(l):
- """
- Convert primitive modules to float16.
- """
- if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Conv3d)):
- l.weight.data = l.weight.data.half()
- if l.bias is not None:
- l.bias.data = l.bias.data.half()
-
-
-def convert_module_to_f32(l):
- """
- Convert primitive modules to float32, undoing convert_module_to_f16().
- """
- if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Conv3d)):
- l.weight.data = l.weight.data.float()
- if l.bias is not None:
- l.bias.data = l.bias.data.float()
-
-
-def make_master_params(param_groups_and_shapes):
- """
- Copy model parameters into a (differently-shaped) list of full-precision
- parameters.
- """
- master_params = []
- for param_group, shape in param_groups_and_shapes:
- master_param = nn.Parameter(
- _flatten_dense_tensors(
- [param.detach().float() for (_, param) in param_group]
- ).view(shape)
- )
- master_param.requires_grad = True
- master_params.append(master_param)
- return master_params
-
-
-def model_grads_to_master_grads(param_groups_and_shapes, master_params):
- """
- Copy the gradients from the model parameters into the master parameters
- from make_master_params().
- """
- for master_param, (param_group, shape) in zip(
- master_params, param_groups_and_shapes
- ):
- master_param.grad = _flatten_dense_tensors(
- [param_grad_or_zeros(param) for (_, param) in param_group]
- ).view(shape)
-
-
-def master_params_to_model_params(param_groups_and_shapes, master_params):
- """
- Copy the master parameter data back into the model parameters.
- """
- # Without copying to a list, if a generator is passed, this will
- # silently not copy any parameters.
- for master_param, (param_group, _) in zip(master_params, param_groups_and_shapes):
- for (_, param), unflat_master_param in zip(
- param_group, unflatten_master_params(param_group, master_param.view(-1))
- ):
- param.detach().copy_(unflat_master_param)
-
-
-def unflatten_master_params(param_group, master_param):
- return _unflatten_dense_tensors(master_param, [param for (_, param) in param_group])
-
-
-def get_param_groups_and_shapes(named_model_params):
- named_model_params = list(named_model_params)
- scalar_vector_named_params = (
- [(n, p) for (n, p) in named_model_params if p.ndim <= 1],
- (-1),
- )
- matrix_named_params = (
- [(n, p) for (n, p) in named_model_params if p.ndim > 1],
- (1, -1),
- )
- return [scalar_vector_named_params, matrix_named_params]
-
-
-def master_params_to_state_dict(
- model, param_groups_and_shapes, master_params, use_fp16
-):
- if use_fp16:
- state_dict = model.state_dict()
- for master_param, (param_group, _) in zip(
- master_params, param_groups_and_shapes
- ):
- for (name, _), unflat_master_param in zip(
- param_group, unflatten_master_params(param_group, master_param.view(-1))
- ):
- assert name in state_dict
- state_dict[name] = unflat_master_param
- else:
- state_dict = model.state_dict()
- for i, (name, _value) in enumerate(model.named_parameters()):
- assert name in state_dict
- state_dict[name] = master_params[i]
- return state_dict
-
-
-def state_dict_to_master_params(model, state_dict, use_fp16):
- if use_fp16:
- named_model_params = [
- (name, state_dict[name]) for name, _ in model.named_parameters()
- ]
- param_groups_and_shapes = get_param_groups_and_shapes(named_model_params)
- master_params = make_master_params(param_groups_and_shapes)
- else:
- master_params = [state_dict[name] for name, _ in model.named_parameters()]
- return master_params
-
-
-def zero_master_grads(master_params):
- for param in master_params:
- param.grad = None
-
-
-def zero_grad(model_params):
- for param in model_params:
- # Taken from https://pytorch.org/docs/stable/_modules/torch/optim/optimizer.html#Optimizer.add_param_group
- if param.grad is not None:
- param.grad.detach_()
- param.grad.zero_()
-
-
-def param_grad_or_zeros(param):
- if param.grad is not None:
- return param.grad.data.detach()
- else:
- return th.zeros_like(param)
-
-
-class MixedPrecisionTrainer:
- def __init__(
- self,
- *,
- model,
- use_fp16=False,
- fp16_scale_growth=1e-3,
- initial_lg_loss_scale=INITIAL_LOG_LOSS_SCALE,
- ):
- self.model = model
- self.use_fp16 = use_fp16
- self.fp16_scale_growth = fp16_scale_growth
-
- self.model_params = list(self.model.parameters())
- self.master_params = self.model_params
- self.param_groups_and_shapes = None
- self.lg_loss_scale = initial_lg_loss_scale
-
- if self.use_fp16:
- self.param_groups_and_shapes = get_param_groups_and_shapes(
- self.model.named_parameters()
- )
- self.master_params = make_master_params(self.param_groups_and_shapes)
- self.model.convert_to_fp16()
-
- def zero_grad(self):
- zero_grad(self.model_params)
-
- def backward(self, loss: th.Tensor):
- if self.use_fp16:
- loss_scale = 2 ** self.lg_loss_scale
- (loss * loss_scale).backward()
- else:
- loss.backward()
-
- def optimize(self, opt: th.optim.Optimizer):
- if self.use_fp16:
- return self._optimize_fp16(opt)
- else:
- return self._optimize_normal(opt)
-
- def _optimize_fp16(self, opt: th.optim.Optimizer):
- logger.logkv_mean("lg_loss_scale", self.lg_loss_scale)
- model_grads_to_master_grads(self.param_groups_and_shapes, self.master_params)
- grad_norm, param_norm = self._compute_norms(grad_scale=2 ** self.lg_loss_scale)
- if check_overflow(grad_norm):
- self.lg_loss_scale -= 1
- logger.log(f"Found NaN, decreased lg_loss_scale to {self.lg_loss_scale}")
- zero_master_grads(self.master_params)
- return False
-
- logger.logkv_mean("grad_norm", grad_norm)
- logger.logkv_mean("param_norm", param_norm)
-
- self.master_params[0].grad.mul_(1.0 / (2 ** self.lg_loss_scale))
- opt.step()
- zero_master_grads(self.master_params)
- master_params_to_model_params(self.param_groups_and_shapes, self.master_params)
- self.lg_loss_scale += self.fp16_scale_growth
- return True
-
- def _optimize_normal(self, opt: th.optim.Optimizer):
- grad_norm, param_norm = self._compute_norms()
- logger.logkv_mean("grad_norm", grad_norm)
- logger.logkv_mean("param_norm", param_norm)
- opt.step()
- return True
-
- def _compute_norms(self, grad_scale=1.0):
- grad_norm = 0.0
- param_norm = 0.0
- for p in self.master_params:
- with th.no_grad():
- param_norm += th.norm(p, p=2, dtype=th.float32).item() ** 2
- if p.grad is not None:
- grad_norm += th.norm(p.grad, p=2, dtype=th.float32).item() ** 2
- return np.sqrt(grad_norm) / grad_scale, np.sqrt(param_norm)
-
- def master_params_to_state_dict(self, master_params):
- return master_params_to_state_dict(
- self.model, self.param_groups_and_shapes, master_params, self.use_fp16
- )
-
- def state_dict_to_master_params(self, state_dict):
- return state_dict_to_master_params(self.model, state_dict, self.use_fp16)
-
-
-def check_overflow(value):
- return (value == float("inf")) or (value == -float("inf")) or (value != value)
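A minimal training-step sketch with `MixedPrecisionTrainer`; the model, data, and hyperparameters are placeholders, and the optimizer is built on `master_params`, which is how the guided-diffusion training loop wires it up. Note that `use_fp16=True` additionally requires the model to implement `convert_to_fp16()`.

```python
import torch as th
import torch.nn as nn

model = nn.Linear(16, 1)                                      # placeholder model
trainer = MixedPrecisionTrainer(model=model, use_fp16=False)
opt = th.optim.AdamW(trainer.master_params, lr=1e-4)

x, y = th.randn(8, 16), th.randn(8, 1)
trainer.zero_grad()
loss = ((model(x) - y) ** 2).mean()
trainer.backward(loss)   # scales the loss by 2**lg_loss_scale only when use_fp16=True
trainer.optimize(opt)    # logs norms, steps the optimizer, and syncs master params in fp16 mode
```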
diff --git a/spaces/PeepDaSlan9/Bark-Voice-Cloning/bark/generation.py b/spaces/PeepDaSlan9/Bark-Voice-Cloning/bark/generation.py
deleted file mode 100644
index ad474d770235c7b665218e64699fb0b0b1b8cc3f..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/Bark-Voice-Cloning/bark/generation.py
+++ /dev/null
@@ -1,864 +0,0 @@
-import contextlib
-import gc
-import os
-import re
-import requests
-import sys
-
-from encodec import EncodecModel
-import funcy
-import logging
-import numpy as np
-from scipy.special import softmax
-import torch
-import torch.nn.functional as F
-import tqdm
-from transformers import BertTokenizer
-from huggingface_hub import hf_hub_download, hf_hub_url
-
-from .model import GPTConfig, GPT
-from .model_fine import FineGPT, FineGPTConfig
-from .settings import initenv
-
-initenv(sys.argv)
-global_force_cpu = os.environ.get("BARK_FORCE_CPU", False)
-if (
- global_force_cpu != True and
- torch.cuda.is_available() and
- hasattr(torch.cuda, "amp") and
- hasattr(torch.cuda.amp, "autocast") and
- hasattr(torch.cuda, "is_bf16_supported") and
- torch.cuda.is_bf16_supported()
-):
- autocast = funcy.partial(torch.cuda.amp.autocast, dtype=torch.bfloat16)
-else:
- @contextlib.contextmanager
- def autocast():
- yield
-
-
-# hold models in global scope to lazy load
-global models
-models = {}
-
-global models_devices
-models_devices = {}
-
-
-CONTEXT_WINDOW_SIZE = 1024
-
-SEMANTIC_RATE_HZ = 49.9
-SEMANTIC_VOCAB_SIZE = 10_000
-
-CODEBOOK_SIZE = 1024
-N_COARSE_CODEBOOKS = 2
-N_FINE_CODEBOOKS = 8
-COARSE_RATE_HZ = 75
-
-SAMPLE_RATE = 24_000
-
-
-SUPPORTED_LANGS = [
- ("English", "en"),
- ("German", "de"),
- ("Spanish", "es"),
- ("French", "fr"),
- ("Hindi", "hi"),
- ("Italian", "it"),
- ("Japanese", "ja"),
- ("Korean", "ko"),
- ("Polish", "pl"),
- ("Portuguese", "pt"),
- ("Russian", "ru"),
- ("Turkish", "tr"),
- ("Chinese", "zh"),
-]
-
-ALLOWED_PROMPTS = {"announcer"}
-for _, lang in SUPPORTED_LANGS:
- for prefix in ("", f"v2{os.path.sep}"):
- for n in range(10):
- ALLOWED_PROMPTS.add(f"{prefix}{lang}_speaker_{n}")
-
-
-logger = logging.getLogger(__name__)
-
-
-CUR_PATH = os.path.dirname(os.path.abspath(__file__))
-
-
-#default_cache_dir = os.path.join(os.path.expanduser("~"), ".cache")
-#CACHE_DIR = os.path.join(os.getenv("XDG_CACHE_HOME", default_cache_dir), "suno", "bark_v0")
-#CACHE_DIR = os.path.join(os.getcwd(), "models"
-CACHE_DIR = "./models"
-
-
-def _cast_bool_env_var(s):
- return s.lower() in ('true', '1', 't')
-
-USE_SMALL_MODELS = _cast_bool_env_var(os.environ.get("SUNO_USE_SMALL_MODELS", "False"))
-GLOBAL_ENABLE_MPS = _cast_bool_env_var(os.environ.get("SUNO_ENABLE_MPS", "False"))
-OFFLOAD_CPU = _cast_bool_env_var(os.environ.get("SUNO_OFFLOAD_CPU", "False"))
-
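These three flags are read once at import time, so they need to be set before the module is first imported; a sketch follows (the `bark.generation` import path mirrors this file's location, and the values shown are just examples).

```python
import os

# must be set before the first import of bark.generation
os.environ["SUNO_USE_SMALL_MODELS"] = "True"    # use the smaller checkpoints
os.environ["SUNO_OFFLOAD_CPU"] = "False"        # keep loaded models on the GPU between calls
os.environ["SUNO_ENABLE_MPS"] = "False"         # opt-in Apple-Silicon (MPS) support

from bark.generation import preload_models
preload_models()   # downloads the text/coarse/fine/codec models into CACHE_DIR on first use
```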
-REMOTE_MODEL_PATHS = {
- "text_small": {
- "repo_id": "suno/bark",
- "file_name": "text.pt",
- },
- "coarse_small": {
- "repo_id": "suno/bark",
- "file_name": "coarse.pt",
- },
- "fine_small": {
- "repo_id": "suno/bark",
- "file_name": "fine.pt",
- },
- "text": {
- "repo_id": "suno/bark",
- "file_name": "text_2.pt",
- },
- "coarse": {
- "repo_id": "suno/bark",
- "file_name": "coarse_2.pt",
- },
- "fine": {
- "repo_id": "suno/bark",
- "file_name": "fine_2.pt",
- },
-}
-
-
-if not hasattr(torch.nn.functional, 'scaled_dot_product_attention') and torch.cuda.is_available():
- logger.warning(
- "torch version does not support flash attention. You will get faster" +
- " inference speed by upgrade torch to newest nightly version."
- )
-
-
-def grab_best_device(use_gpu=True):
- if torch.cuda.device_count() > 0 and use_gpu:
- device = "cuda"
- elif torch.backends.mps.is_available() and use_gpu and GLOBAL_ENABLE_MPS:
- device = "mps"
- else:
- device = "cpu"
- return device
-
-
-def _get_ckpt_path(model_type, use_small=False):
- key = model_type
- if use_small or USE_SMALL_MODELS:
- key += "_small"
- return os.path.join(CACHE_DIR, REMOTE_MODEL_PATHS[key]["file_name"])
-
-"""
-def _download(from_hf_path, file_name, destfilename):
- os.makedirs(CACHE_DIR, exist_ok=True)
- hf_hub_download(repo_id=from_hf_path, filename=file_name, local_dir=CACHE_DIR, local_dir_use_symlinks=False)
- # Bug in original repo? Downloaded name differs from expected...
- if not os.path.exists(destfilename):
- localname = os.path.join(CACHE_DIR, file_name)
- os.rename(localname, destfilename)
-"""
-def _download(from_hf_path, file_name):
- os.makedirs(CACHE_DIR, exist_ok=True)
- hf_hub_download(repo_id=from_hf_path, filename=file_name, local_dir=CACHE_DIR)
-
-
-class InferenceContext:
- def __init__(self, benchmark=False):
- # we can't expect inputs to be the same length, so disable benchmarking by default
- self._chosen_cudnn_benchmark = benchmark
- self._cudnn_benchmark = None
-
- def __enter__(self):
- self._cudnn_benchmark = torch.backends.cudnn.benchmark
- torch.backends.cudnn.benchmark = self._chosen_cudnn_benchmark
-
- def __exit__(self, exc_type, exc_value, exc_traceback):
- torch.backends.cudnn.benchmark = self._cudnn_benchmark
-
-
-if torch.cuda.is_available():
- torch.backends.cuda.matmul.allow_tf32 = True
- torch.backends.cudnn.allow_tf32 = True
-
-
-@contextlib.contextmanager
-def _inference_mode():
- with InferenceContext(), torch.inference_mode(), torch.no_grad(), autocast():
- yield
-
-
-def _clear_cuda_cache():
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- torch.cuda.synchronize()
-
-
-def clean_models(model_key=None):
- global models
- model_keys = [model_key] if model_key is not None else models.keys()
- for k in model_keys:
- if k in models:
- del models[k]
- _clear_cuda_cache()
- gc.collect()
-
-
-def _load_model(ckpt_path, device, use_small=False, model_type="text"):
- if model_type == "text":
- ConfigClass = GPTConfig
- ModelClass = GPT
- elif model_type == "coarse":
- ConfigClass = GPTConfig
- ModelClass = GPT
- elif model_type == "fine":
- ConfigClass = FineGPTConfig
- ModelClass = FineGPT
- else:
- raise NotImplementedError()
-
- # Force-remove Models to allow running on >12Gb GPU
- # CF: Probably not needed anymore
- #global models
- #models.clear()
- #gc.collect()
- #torch.cuda.empty_cache()
- # to here...
-
- model_key = f"{model_type}_small" if use_small or USE_SMALL_MODELS else model_type
- model_info = REMOTE_MODEL_PATHS[model_key]
- if not os.path.exists(ckpt_path):
- logger.info(f"{model_type} model not found, downloading into `{CACHE_DIR}`.")
- ## added next two lines to make it super clear which model is being downloaded
- remote_filename = hf_hub_url(model_info["repo_id"], model_info["file_name"])
- print(f"Downloading {model_key} {model_info['repo_id']} remote model file {remote_filename} {model_info['file_name']} to {CACHE_DIR}")
- _download(model_info["repo_id"], model_info["file_name"])
- # add next line to make it super clear which model is being loaded
- print(f"Loading {model_key} model from {ckpt_path} to {device}") # added
- checkpoint = torch.load(ckpt_path, map_location=device)
- # this is a hack
- model_args = checkpoint["model_args"]
- if "input_vocab_size" not in model_args:
- model_args["input_vocab_size"] = model_args["vocab_size"]
- model_args["output_vocab_size"] = model_args["vocab_size"]
- del model_args["vocab_size"]
- gptconf = ConfigClass(**checkpoint["model_args"])
- model = ModelClass(gptconf)
- state_dict = checkpoint["model"]
- # fixup checkpoint
- unwanted_prefix = "_orig_mod."
- for k, v in list(state_dict.items()):
- if k.startswith(unwanted_prefix):
- state_dict[k[len(unwanted_prefix) :]] = state_dict.pop(k)
- extra_keys = set(state_dict.keys()) - set(model.state_dict().keys())
- extra_keys = set([k for k in extra_keys if not k.endswith(".attn.bias")])
- missing_keys = set(model.state_dict().keys()) - set(state_dict.keys())
- missing_keys = set([k for k in missing_keys if not k.endswith(".attn.bias")])
- if len(extra_keys) != 0:
- raise ValueError(f"extra keys found: {extra_keys}")
- if len(missing_keys) != 0:
- raise ValueError(f"missing keys: {missing_keys}")
- model.load_state_dict(state_dict, strict=False)
- n_params = model.get_num_params()
- val_loss = checkpoint["best_val_loss"].item()
- logger.info(f"model loaded: {round(n_params/1e6,1)}M params, {round(val_loss,3)} loss")
- model.eval()
- model.to(device)
- del checkpoint, state_dict
- _clear_cuda_cache()
- if model_type == "text":
- tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
- return {
- "model": model,
- "tokenizer": tokenizer,
- }
- return model
-
-
-def _load_codec_model(device):
- model = EncodecModel.encodec_model_24khz()
- model.set_target_bandwidth(6.0)
- model.eval()
- model.to(device)
- _clear_cuda_cache()
- return model
-
-
-def load_model(use_gpu=True, use_small=False, force_reload=False, model_type="text"):
- _load_model_f = funcy.partial(_load_model, model_type=model_type, use_small=use_small)
- if model_type not in ("text", "coarse", "fine"):
- raise NotImplementedError()
- global models
- global models_devices
- device = grab_best_device(use_gpu=use_gpu)
- model_key = f"{model_type}"
- if OFFLOAD_CPU:
- models_devices[model_key] = device
- device = "cpu"
- if model_key not in models or force_reload:
- ckpt_path = _get_ckpt_path(model_type, use_small=use_small)
- clean_models(model_key=model_key)
- model = _load_model_f(ckpt_path, device)
- models[model_key] = model
- if model_type == "text":
- models[model_key]["model"].to(device)
- else:
- models[model_key].to(device)
- return models[model_key]
-
-
-def load_codec_model(use_gpu=True, force_reload=False):
- global models
- global models_devices
- device = grab_best_device(use_gpu=use_gpu)
- if device == "mps":
- # encodec doesn't support mps
- device = "cpu"
- model_key = "codec"
- if OFFLOAD_CPU:
- models_devices[model_key] = device
- device = "cpu"
- if model_key not in models or force_reload:
- clean_models(model_key=model_key)
- model = _load_codec_model(device)
- models[model_key] = model
- models[model_key].to(device)
- return models[model_key]
-
-
-def preload_models(
- text_use_gpu=True,
- text_use_small=False,
- coarse_use_gpu=True,
- coarse_use_small=False,
- fine_use_gpu=True,
- fine_use_small=False,
- codec_use_gpu=True,
- force_reload=False
-):
- """Load all the necessary models for the pipeline."""
- if grab_best_device() == "cpu" and (
- text_use_gpu or coarse_use_gpu or fine_use_gpu or codec_use_gpu
- ):
- logger.warning("No GPU being used. Careful, inference might be very slow!")
- _ = load_model(
- model_type="text", use_gpu=text_use_gpu, use_small=text_use_small, force_reload=force_reload
- )
- _ = load_model(
- model_type="coarse",
- use_gpu=coarse_use_gpu,
- use_small=coarse_use_small,
- force_reload=force_reload,
- )
- _ = load_model(
- model_type="fine", use_gpu=fine_use_gpu, use_small=fine_use_small, force_reload=force_reload
- )
- _ = load_codec_model(use_gpu=codec_use_gpu, force_reload=force_reload)
-
-
-####
-# Generation Functionality
-####
-
-
-def _tokenize(tokenizer, text):
- return tokenizer.encode(text, add_special_tokens=False)
-
-
-def _detokenize(tokenizer, enc_text):
- return tokenizer.decode(enc_text)
-
-
-def _normalize_whitespace(text):
- return re.sub(r"\s+", " ", text).strip()
-
-
-TEXT_ENCODING_OFFSET = 10_048
-SEMANTIC_PAD_TOKEN = 10_000
-TEXT_PAD_TOKEN = 129_595
-SEMANTIC_INFER_TOKEN = 129_599
-
-
-def _load_history_prompt(history_prompt_input):
- if isinstance(history_prompt_input, str) and history_prompt_input.endswith(".npz"):
- history_prompt = np.load(history_prompt_input)
- elif isinstance(history_prompt_input, str):
- # make sure this works on non-ubuntu
- history_prompt_input = os.path.join(*history_prompt_input.split("/"))
-# if history_prompt_input not in ALLOWED_PROMPTS:
-# raise ValueError("history prompt not found")
- history_prompt = np.load(
- os.path.join(CUR_PATH, "assets", "prompts", f"{history_prompt_input}.npz")
- )
- elif isinstance(history_prompt_input, dict):
- assert("semantic_prompt" in history_prompt_input)
- assert("coarse_prompt" in history_prompt_input)
- assert("fine_prompt" in history_prompt_input)
- history_prompt = history_prompt_input
- else:
- raise ValueError("history prompt format unrecognized")
- return history_prompt
-
-
-def generate_text_semantic(
- text,
- history_prompt=None,
- temp=0.7,
- top_k=None,
- top_p=None,
- silent=False,
- min_eos_p=0.2,
- max_gen_duration_s=None,
- allow_early_stop=True,
- use_kv_caching=False,
-):
- """Generate semantic tokens from text."""
- assert isinstance(text, str)
- text = _normalize_whitespace(text)
- assert len(text.strip()) > 0
- if history_prompt is not None:
- history_prompt = _load_history_prompt(history_prompt)
- semantic_history = history_prompt["semantic_prompt"]
- assert (
- isinstance(semantic_history, np.ndarray)
- and len(semantic_history.shape) == 1
- and len(semantic_history) > 0
- and semantic_history.min() >= 0
- and semantic_history.max() <= SEMANTIC_VOCAB_SIZE - 1
- )
- else:
- semantic_history = None
-    # load the models if they have not been loaded yet
- global models
- global models_devices
- if "text" not in models:
- preload_models()
- model_container = models["text"]
- model = model_container["model"]
- tokenizer = model_container["tokenizer"]
- encoded_text = np.array(_tokenize(tokenizer, text)) + TEXT_ENCODING_OFFSET
- if OFFLOAD_CPU:
- model.to(models_devices["text"])
- device = next(model.parameters()).device
- if len(encoded_text) > 256:
- p = round((len(encoded_text) - 256) / len(encoded_text) * 100, 1)
- logger.warning(f"warning, text too long, lopping of last {p}%")
- encoded_text = encoded_text[:256]
- encoded_text = np.pad(
- encoded_text,
- (0, 256 - len(encoded_text)),
- constant_values=TEXT_PAD_TOKEN,
- mode="constant",
- )
- if semantic_history is not None:
- semantic_history = semantic_history.astype(np.int64)
- # lop off if history is too long, pad if needed
- semantic_history = semantic_history[-256:]
- semantic_history = np.pad(
- semantic_history,
- (0, 256 - len(semantic_history)),
- constant_values=SEMANTIC_PAD_TOKEN,
- mode="constant",
- )
- else:
- semantic_history = np.array([SEMANTIC_PAD_TOKEN] * 256)
- x = torch.from_numpy(
- np.hstack([
- encoded_text, semantic_history, np.array([SEMANTIC_INFER_TOKEN])
- ]).astype(np.int64)
- )[None]
- assert x.shape[1] == 256 + 256 + 1
- with _inference_mode():
- x = x.to(device)
- n_tot_steps = 768
- # custom tqdm updates since we don't know when eos will occur
- pbar = tqdm.tqdm(disable=silent, total=100)
- pbar_state = 0
- tot_generated_duration_s = 0
- kv_cache = None
- for n in range(n_tot_steps):
- if use_kv_caching and kv_cache is not None:
- x_input = x[:, [-1]]
- else:
- x_input = x
- logits, kv_cache = model(
- x_input, merge_context=True, use_cache=use_kv_caching, past_kv=kv_cache
- )
- relevant_logits = logits[0, 0, :SEMANTIC_VOCAB_SIZE]
- if allow_early_stop:
- relevant_logits = torch.hstack(
- (relevant_logits, logits[0, 0, [SEMANTIC_PAD_TOKEN]]) # eos
- )
- if top_p is not None:
- # faster to convert to numpy
- original_device = relevant_logits.device
- relevant_logits = relevant_logits.detach().cpu().type(torch.float32).numpy()
- sorted_indices = np.argsort(relevant_logits)[::-1]
- sorted_logits = relevant_logits[sorted_indices]
- cumulative_probs = np.cumsum(softmax(sorted_logits))
- sorted_indices_to_remove = cumulative_probs > top_p
- sorted_indices_to_remove[1:] = sorted_indices_to_remove[:-1].copy()
- sorted_indices_to_remove[0] = False
- relevant_logits[sorted_indices[sorted_indices_to_remove]] = -np.inf
- relevant_logits = torch.from_numpy(relevant_logits)
- relevant_logits = relevant_logits.to(original_device)
- if top_k is not None:
- v, _ = torch.topk(relevant_logits, min(top_k, relevant_logits.size(-1)))
- relevant_logits[relevant_logits < v[-1]] = -float("Inf")
- probs = F.softmax(relevant_logits / temp, dim=-1)
- # multinomial bugged on mps: shuttle to cpu if necessary
- inf_device = probs.device
- if probs.device.type == "mps":
- probs = probs.to("cpu")
- item_next = torch.multinomial(probs, num_samples=1)
- probs = probs.to(inf_device)
- item_next = item_next.to(inf_device)
- if allow_early_stop and (
- item_next == SEMANTIC_VOCAB_SIZE
- or (min_eos_p is not None and probs[-1] >= min_eos_p)
- ):
- # eos found, so break
- pbar.update(100 - pbar_state)
- break
- x = torch.cat((x, item_next[None]), dim=1)
- tot_generated_duration_s += 1 / SEMANTIC_RATE_HZ
- if max_gen_duration_s is not None and tot_generated_duration_s > max_gen_duration_s:
- pbar.update(100 - pbar_state)
- break
- if n == n_tot_steps - 1:
- pbar.update(100 - pbar_state)
- break
- del logits, relevant_logits, probs, item_next
- req_pbar_state = np.min([100, int(round(100 * n / n_tot_steps))])
- if req_pbar_state > pbar_state:
- pbar.update(req_pbar_state - pbar_state)
- pbar_state = req_pbar_state
- pbar.close()
- out = x.detach().cpu().numpy().squeeze()[256 + 256 + 1 :]
- if OFFLOAD_CPU:
- model.to("cpu")
- assert all(0 <= out) and all(out < SEMANTIC_VOCAB_SIZE)
- _clear_cuda_cache()
- return out
-
-
-def _flatten_codebooks(arr, offset_size=CODEBOOK_SIZE):
- assert len(arr.shape) == 2
- arr = arr.copy()
- if offset_size is not None:
- for n in range(1, arr.shape[0]):
- arr[n, :] += offset_size * n
- flat_arr = arr.ravel("F")
- return flat_arr
-
-
-COARSE_SEMANTIC_PAD_TOKEN = 12_048
-COARSE_INFER_TOKEN = 12_050
-
-
-def generate_coarse(
- x_semantic,
- history_prompt=None,
- temp=0.7,
- top_k=None,
- top_p=None,
- silent=False,
- max_coarse_history=630, # min 60 (faster), max 630 (more context)
- sliding_window_len=60,
- use_kv_caching=False,
-):
- """Generate coarse audio codes from semantic tokens."""
-# CF: Commented out because it breaks swap voice more than once
-# assert (
-# isinstance(x_semantic, np.ndarray)
-# and len(x_semantic.shape) == 1
-# and len(x_semantic) > 0
-# and x_semantic.min() >= 0
-# and x_semantic.max() <= SEMANTIC_VOCAB_SIZE - 1
-# )
- assert 60 <= max_coarse_history <= 630
- assert max_coarse_history + sliding_window_len <= 1024 - 256
- semantic_to_coarse_ratio = COARSE_RATE_HZ / SEMANTIC_RATE_HZ * N_COARSE_CODEBOOKS
- max_semantic_history = int(np.floor(max_coarse_history / semantic_to_coarse_ratio))
- if history_prompt is not None:
- history_prompt = _load_history_prompt(history_prompt)
- x_semantic_history = history_prompt["semantic_prompt"]
- x_coarse_history = history_prompt["coarse_prompt"]
- assert (
- isinstance(x_semantic_history, np.ndarray)
- and len(x_semantic_history.shape) == 1
- and len(x_semantic_history) > 0
- and x_semantic_history.min() >= 0
- and x_semantic_history.max() <= SEMANTIC_VOCAB_SIZE - 1
- and isinstance(x_coarse_history, np.ndarray)
- and len(x_coarse_history.shape) == 2
- and x_coarse_history.shape[0] == N_COARSE_CODEBOOKS
- and x_coarse_history.shape[-1] >= 0
- and x_coarse_history.min() >= 0
- and x_coarse_history.max() <= CODEBOOK_SIZE - 1
- #and (
- # round(x_coarse_history.shape[-1] / len(x_semantic_history), 1)
- # == round(semantic_to_coarse_ratio / N_COARSE_CODEBOOKS, 1)
- #)
- )
- x_coarse_history = _flatten_codebooks(x_coarse_history) + SEMANTIC_VOCAB_SIZE
- # trim histories correctly
- n_semantic_hist_provided = np.min(
- [
- max_semantic_history,
- len(x_semantic_history) - len(x_semantic_history) % 2,
- int(np.floor(len(x_coarse_history) / semantic_to_coarse_ratio)),
- ]
- )
- n_coarse_hist_provided = int(round(n_semantic_hist_provided * semantic_to_coarse_ratio))
- x_semantic_history = x_semantic_history[-n_semantic_hist_provided:].astype(np.int32)
- x_coarse_history = x_coarse_history[-n_coarse_hist_provided:].astype(np.int32)
- # TODO: bit of a hack for time alignment (sounds better)
- x_coarse_history = x_coarse_history[:-2]
- else:
- x_semantic_history = np.array([], dtype=np.int32)
- x_coarse_history = np.array([], dtype=np.int32)
-    # load the models if they have not been loaded yet
- global models
- global models_devices
- if "coarse" not in models:
- preload_models()
- model = models["coarse"]
- if OFFLOAD_CPU:
- model.to(models_devices["coarse"])
- device = next(model.parameters()).device
- # start loop
- n_steps = int(
- round(
- np.floor(len(x_semantic) * semantic_to_coarse_ratio / N_COARSE_CODEBOOKS)
- * N_COARSE_CODEBOOKS
- )
- )
- assert n_steps > 0 and n_steps % N_COARSE_CODEBOOKS == 0
- x_semantic = np.hstack([x_semantic_history, x_semantic]).astype(np.int32)
- x_coarse = x_coarse_history.astype(np.int32)
- base_semantic_idx = len(x_semantic_history)
- with _inference_mode():
- x_semantic_in = torch.from_numpy(x_semantic)[None].to(device)
- x_coarse_in = torch.from_numpy(x_coarse)[None].to(device)
- n_window_steps = int(np.ceil(n_steps / sliding_window_len))
- n_step = 0
- for _ in tqdm.tqdm(range(n_window_steps), total=n_window_steps, disable=silent):
- semantic_idx = base_semantic_idx + int(round(n_step / semantic_to_coarse_ratio))
- # pad from right side
- x_in = x_semantic_in[:, np.max([0, semantic_idx - max_semantic_history]) :]
- x_in = x_in[:, :256]
- x_in = F.pad(
- x_in,
- (0, 256 - x_in.shape[-1]),
- "constant",
- COARSE_SEMANTIC_PAD_TOKEN,
- )
- x_in = torch.hstack(
- [
- x_in,
- torch.tensor([COARSE_INFER_TOKEN])[None].to(device),
- x_coarse_in[:, -max_coarse_history:],
- ]
- )
- kv_cache = None
- for _ in range(sliding_window_len):
- if n_step >= n_steps:
- continue
- is_major_step = n_step % N_COARSE_CODEBOOKS == 0
-
- if use_kv_caching and kv_cache is not None:
- x_input = x_in[:, [-1]]
- else:
- x_input = x_in
-
- logits, kv_cache = model(x_input, use_cache=use_kv_caching, past_kv=kv_cache)
- logit_start_idx = (
- SEMANTIC_VOCAB_SIZE + (1 - int(is_major_step)) * CODEBOOK_SIZE
- )
- logit_end_idx = (
- SEMANTIC_VOCAB_SIZE + (2 - int(is_major_step)) * CODEBOOK_SIZE
- )
- relevant_logits = logits[0, 0, logit_start_idx:logit_end_idx]
- if top_p is not None:
- # faster to convert to numpy
- original_device = relevant_logits.device
- relevant_logits = relevant_logits.detach().cpu().type(torch.float32).numpy()
- sorted_indices = np.argsort(relevant_logits)[::-1]
- sorted_logits = relevant_logits[sorted_indices]
- cumulative_probs = np.cumsum(softmax(sorted_logits))
- sorted_indices_to_remove = cumulative_probs > top_p
- sorted_indices_to_remove[1:] = sorted_indices_to_remove[:-1].copy()
- sorted_indices_to_remove[0] = False
- relevant_logits[sorted_indices[sorted_indices_to_remove]] = -np.inf
- relevant_logits = torch.from_numpy(relevant_logits)
- relevant_logits = relevant_logits.to(original_device)
- if top_k is not None:
- v, _ = torch.topk(relevant_logits, min(top_k, relevant_logits.size(-1)))
- relevant_logits[relevant_logits < v[-1]] = -float("Inf")
- probs = F.softmax(relevant_logits / temp, dim=-1)
- # multinomial bugged on mps: shuttle to cpu if necessary
- inf_device = probs.device
- if probs.device.type == "mps":
- probs = probs.to("cpu")
- item_next = torch.multinomial(probs, num_samples=1)
- probs = probs.to(inf_device)
- item_next = item_next.to(inf_device)
- item_next += logit_start_idx
- x_coarse_in = torch.cat((x_coarse_in, item_next[None]), dim=1)
- x_in = torch.cat((x_in, item_next[None]), dim=1)
- del logits, relevant_logits, probs, item_next
- n_step += 1
- del x_in
- del x_semantic_in
- if OFFLOAD_CPU:
- model.to("cpu")
- gen_coarse_arr = x_coarse_in.detach().cpu().numpy().squeeze()[len(x_coarse_history) :]
- del x_coarse_in
- assert len(gen_coarse_arr) == n_steps
- gen_coarse_audio_arr = gen_coarse_arr.reshape(-1, N_COARSE_CODEBOOKS).T - SEMANTIC_VOCAB_SIZE
- for n in range(1, N_COARSE_CODEBOOKS):
- gen_coarse_audio_arr[n, :] -= n * CODEBOOK_SIZE
- _clear_cuda_cache()
- return gen_coarse_audio_arr
-
-
-def generate_fine(
- x_coarse_gen,
- history_prompt=None,
- temp=0.5,
- silent=True,
-):
- """Generate full audio codes from coarse audio codes."""
- assert (
- isinstance(x_coarse_gen, np.ndarray)
- and len(x_coarse_gen.shape) == 2
- and 1 <= x_coarse_gen.shape[0] <= N_FINE_CODEBOOKS - 1
- and x_coarse_gen.shape[1] > 0
- and x_coarse_gen.min() >= 0
- and x_coarse_gen.max() <= CODEBOOK_SIZE - 1
- )
- if history_prompt is not None:
- history_prompt = _load_history_prompt(history_prompt)
- x_fine_history = history_prompt["fine_prompt"]
- assert (
- isinstance(x_fine_history, np.ndarray)
- and len(x_fine_history.shape) == 2
- and x_fine_history.shape[0] == N_FINE_CODEBOOKS
- and x_fine_history.shape[1] >= 0
- and x_fine_history.min() >= 0
- and x_fine_history.max() <= CODEBOOK_SIZE - 1
- )
- else:
- x_fine_history = None
- n_coarse = x_coarse_gen.shape[0]
-    # load the models if they have not been loaded yet
- global models
- global models_devices
- if "fine" not in models:
- preload_models()
- model = models["fine"]
- if OFFLOAD_CPU:
- model.to(models_devices["fine"])
- device = next(model.parameters()).device
- # make input arr
- in_arr = np.vstack(
- [
- x_coarse_gen,
- np.zeros((N_FINE_CODEBOOKS - n_coarse, x_coarse_gen.shape[1]))
- + CODEBOOK_SIZE, # padding
- ]
- ).astype(np.int32)
- # prepend history if available (max 512)
- if x_fine_history is not None:
- x_fine_history = x_fine_history.astype(np.int32)
- in_arr = np.hstack(
- [
- x_fine_history[:, -512:].astype(np.int32),
- in_arr,
- ]
- )
- n_history = x_fine_history[:, -512:].shape[1]
- else:
- n_history = 0
- n_remove_from_end = 0
- # need to pad if too short (since non-causal model)
- if in_arr.shape[1] < 1024:
- n_remove_from_end = 1024 - in_arr.shape[1]
- in_arr = np.hstack(
- [
- in_arr,
- np.zeros((N_FINE_CODEBOOKS, n_remove_from_end), dtype=np.int32) + CODEBOOK_SIZE,
- ]
- )
- # we can be lazy about fractional loop and just keep overwriting codebooks
- n_loops = np.max([0, int(np.ceil((x_coarse_gen.shape[1] - (1024 - n_history)) / 512))]) + 1
- with _inference_mode():
- in_arr = torch.tensor(in_arr.T).to(device)
- for n in tqdm.tqdm(range(n_loops), disable=silent):
- start_idx = np.min([n * 512, in_arr.shape[0] - 1024])
- start_fill_idx = np.min([n_history + n * 512, in_arr.shape[0] - 512])
- rel_start_fill_idx = start_fill_idx - start_idx
- in_buffer = in_arr[start_idx : start_idx + 1024, :][None]
- for nn in range(n_coarse, N_FINE_CODEBOOKS):
- logits = model(nn, in_buffer)
- if temp is None:
- relevant_logits = logits[0, rel_start_fill_idx:, :CODEBOOK_SIZE]
- codebook_preds = torch.argmax(relevant_logits, -1)
- else:
- relevant_logits = logits[0, :, :CODEBOOK_SIZE] / temp
- probs = F.softmax(relevant_logits, dim=-1)
- # multinomial bugged on mps: shuttle to cpu if necessary
- inf_device = probs.device
- if probs.device.type == "mps":
- probs = probs.to("cpu")
- codebook_preds = torch.hstack(
- [
- torch.multinomial(probs[nnn], num_samples=1).to(inf_device)
- for nnn in range(rel_start_fill_idx, 1024)
- ]
- )
- in_buffer[0, rel_start_fill_idx:, nn] = codebook_preds
- del logits, codebook_preds
- # transfer over info into model_in and convert to numpy
- for nn in range(n_coarse, N_FINE_CODEBOOKS):
- in_arr[
- start_fill_idx : start_fill_idx + (1024 - rel_start_fill_idx), nn
- ] = in_buffer[0, rel_start_fill_idx:, nn]
- del in_buffer
- gen_fine_arr = in_arr.detach().cpu().numpy().squeeze().T
- del in_arr
- if OFFLOAD_CPU:
- model.to("cpu")
- gen_fine_arr = gen_fine_arr[:, n_history:]
- if n_remove_from_end > 0:
- gen_fine_arr = gen_fine_arr[:, :-n_remove_from_end]
- assert gen_fine_arr.shape[-1] == x_coarse_gen.shape[-1]
- _clear_cuda_cache()
- return gen_fine_arr
-
-
-def codec_decode(fine_tokens):
- """Turn quantized audio codes into audio array using encodec."""
-    # load the models if they have not been loaded yet
- global models
- global models_devices
- if "codec" not in models:
- preload_models()
- model = models["codec"]
- if OFFLOAD_CPU:
- model.to(models_devices["codec"])
- device = next(model.parameters()).device
- arr = torch.from_numpy(fine_tokens)[None]
- arr = arr.to(device)
- arr = arr.transpose(0, 1)
- emb = model.quantizer.decode(arr)
- out = model.decoder(emb)
- audio_arr = out.detach().cpu().numpy().squeeze()
- del arr, emb, out
- if OFFLOAD_CPU:
- model.to("cpu")
- return audio_arr
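Taken together, the generation stages above form a single text-to-waveform pipeline. Below is a minimal usage sketch, assuming these functions are imported from the module above; the small-model flags, sampling settings, and output filename are illustrative choices, and the 24 kHz rate follows the EnCodec model selected in _load_codec_model.

import numpy as np
from scipy.io.wavfile import write as write_wav

# Warm all four models up front; small checkpoints keep memory modest (assumption: defaults are fine otherwise).
preload_models(text_use_small=True, coarse_use_small=True, fine_use_small=True)

text = "Hello, this is a short smoke test of the pipeline."
semantic_tokens = generate_text_semantic(text, temp=0.7, use_kv_caching=True)
coarse_tokens = generate_coarse(semantic_tokens, temp=0.7, use_kv_caching=True)
fine_tokens = generate_fine(coarse_tokens, temp=0.5)
audio = codec_decode(fine_tokens)  # float32 waveform decoded by the 24 kHz EnCodec model

write_wav("bark_sample.wav", 24000, (audio * 32767).astype(np.int16))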
diff --git a/spaces/PeepDaSlan9/Bark-Voice-Cloning/util/__init__.py b/spaces/PeepDaSlan9/Bark-Voice-Cloning/util/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Pengyey/bingo-chuchu/src/components/settings.tsx b/spaces/Pengyey/bingo-chuchu/src/components/settings.tsx
deleted file mode 100644
index 80b8a2d3b252b875f5b6f7dfc2f6e3ad9cdfb22a..0000000000000000000000000000000000000000
--- a/spaces/Pengyey/bingo-chuchu/src/components/settings.tsx
+++ /dev/null
@@ -1,157 +0,0 @@
-import { useEffect, useState } from 'react'
-import { useAtom } from 'jotai'
-import { Switch } from '@headlessui/react'
-import { toast } from 'react-hot-toast'
-import { hashAtom, voiceAtom } from '@/state'
-import {
- Dialog,
- DialogContent,
- DialogDescription,
- DialogFooter,
- DialogHeader,
- DialogTitle
-} from '@/components/ui/dialog'
-import { Button } from './ui/button'
-import { Input } from './ui/input'
-import { ChunkKeys, parseCookies, extraCurlFromCookie, encodeHeadersToCookie, getCookie, setCookie } from '@/lib/utils'
-import { ExternalLink } from './external-link'
-import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard'
-
-
-export function Settings() {
- const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 })
- const [loc, setLoc] = useAtom(hashAtom)
- const [curlValue, setCurlValue] = useState(extraCurlFromCookie(parseCookies(document.cookie, ChunkKeys)))
- const [imageOnly, setImageOnly] = useState(getCookie('IMAGE_ONLY') !== '0')
- const [enableTTS, setEnableTTS] = useAtom(voiceAtom)
-
- useEffect(() => {
- if (isCopied) {
- toast.success('复制成功')
- }
- }, [isCopied])
-
- if (loc === 'settings') {
- return (
-
- )
- } else if (loc === 'voice') {
- return (
-
- )
- }
- return null
-}
diff --git a/spaces/PrussianBlue/White-box-Cartoonization/wbc/guided_filter.py b/spaces/PrussianBlue/White-box-Cartoonization/wbc/guided_filter.py
deleted file mode 100644
index fd019d145efc7f308cd96de90f4e7b648f6820b4..0000000000000000000000000000000000000000
--- a/spaces/PrussianBlue/White-box-Cartoonization/wbc/guided_filter.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import tensorflow as tf
-import numpy as np
-
-
-
-
-def tf_box_filter(x, r):
- k_size = int(2*r+1)
- ch = x.get_shape().as_list()[-1]
- weight = 1/(k_size**2)
- box_kernel = weight*np.ones((k_size, k_size, ch, 1))
- box_kernel = np.array(box_kernel).astype(np.float32)
- output = tf.nn.depthwise_conv2d(x, box_kernel, [1, 1, 1, 1], 'SAME')
- return output
-
-
-
-def guided_filter(x, y, r, eps=1e-2):
-
- x_shape = tf.shape(x)
- #y_shape = tf.shape(y)
-
- N = tf_box_filter(tf.ones((1, x_shape[1], x_shape[2], 1), dtype=x.dtype), r)
-
- mean_x = tf_box_filter(x, r) / N
- mean_y = tf_box_filter(y, r) / N
- cov_xy = tf_box_filter(x * y, r) / N - mean_x * mean_y
- var_x = tf_box_filter(x * x, r) / N - mean_x * mean_x
-
- A = cov_xy / (var_x + eps)
- b = mean_y - A * mean_x
-
- mean_A = tf_box_filter(A, r) / N
- mean_b = tf_box_filter(b, r) / N
-
- output = mean_A * x + mean_b
-
- return output
-
-
-
-def fast_guided_filter(lr_x, lr_y, hr_x, r=1, eps=1e-8):
-
- #assert lr_x.shape.ndims == 4 and lr_y.shape.ndims == 4 and hr_x.shape.ndims == 4
-
- lr_x_shape = tf.shape(lr_x)
- #lr_y_shape = tf.shape(lr_y)
- hr_x_shape = tf.shape(hr_x)
-
- N = tf_box_filter(tf.ones((1, lr_x_shape[1], lr_x_shape[2], 1), dtype=lr_x.dtype), r)
-
- mean_x = tf_box_filter(lr_x, r) / N
- mean_y = tf_box_filter(lr_y, r) / N
- cov_xy = tf_box_filter(lr_x * lr_y, r) / N - mean_x * mean_y
- var_x = tf_box_filter(lr_x * lr_x, r) / N - mean_x * mean_x
-
- A = cov_xy / (var_x + eps)
- b = mean_y - A * mean_x
-
- mean_A = tf.image.resize_images(A, hr_x_shape[1: 3])
- mean_b = tf.image.resize_images(b, hr_x_shape[1: 3])
-
- output = mean_A * hr_x + mean_b
-
- return output
-
-
-if __name__ == '__main__':
- import cv2
- from tqdm import tqdm
-
- input_photo = tf.placeholder(tf.float32, [1, None, None, 3])
- #input_superpixel = tf.placeholder(tf.float32, [16, 256, 256, 3])
- output = guided_filter(input_photo, input_photo, 5, eps=1)
- image = cv2.imread('output_figure1/cartoon2.jpg')
- image = image/127.5 - 1
- image = np.expand_dims(image, axis=0)
-
- config = tf.ConfigProto()
- config.gpu_options.allow_growth = True
- sess = tf.Session(config=config)
- sess.run(tf.global_variables_initializer())
-
- out = sess.run(output, feed_dict={input_photo: image})
- out = (np.squeeze(out)+1)*127.5
- out = np.clip(out, 0, 255).astype(np.uint8)
- cv2.imwrite('output_figure1/cartoon2_filter.jpg', out)
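For the fast variant, the usual pattern is to compute the filter statistics on a downscaled copy of the image and let tf.image.resize_images carry the affine coefficients (A, b) back to full resolution. A minimal graph-construction sketch in the same TF1 style as the __main__ block above; the placeholder names and eps value are illustrative assumptions.

lr_in = tf.placeholder(tf.float32, [1, None, None, 3])   # low-resolution guidance image
lr_out = tf.placeholder(tf.float32, [1, None, None, 3])  # low-resolution image to be filtered
hr_in = tf.placeholder(tf.float32, [1, None, None, 3])   # full-resolution image
hr_smoothed = fast_guided_filter(lr_in, lr_out, hr_in, r=1, eps=5e-3)

Feeding a bilinearly downscaled copy of the photo as both low-resolution inputs reproduces the edge-preserving smoothing of guided_filter at a fraction of the cost.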
diff --git a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/taming/data/conditional_builder/utils.py b/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/taming/data/conditional_builder/utils.py
deleted file mode 100644
index d0ee175f2e05a80dbc71c22acbecb22dddadbb42..0000000000000000000000000000000000000000
--- a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/taming/data/conditional_builder/utils.py
+++ /dev/null
@@ -1,105 +0,0 @@
-import importlib
-from typing import List, Any, Tuple, Optional
-
-from taming.data.helper_types import BoundingBox, Annotation
-
-# source: seaborn, color palette tab10
-COLOR_PALETTE = [(30, 118, 179), (255, 126, 13), (43, 159, 43), (213, 38, 39), (147, 102, 188),
- (139, 85, 74), (226, 118, 193), (126, 126, 126), (187, 188, 33), (22, 189, 206)]
-BLACK = (0, 0, 0)
-GRAY_75 = (63, 63, 63)
-GRAY_50 = (127, 127, 127)
-GRAY_25 = (191, 191, 191)
-WHITE = (255, 255, 255)
-FULL_CROP = (0., 0., 1., 1.)
-
-
-def intersection_area(rectangle1: BoundingBox, rectangle2: BoundingBox) -> float:
- """
- Give intersection area of two rectangles.
- @param rectangle1: (x0, y0, w, h) of first rectangle
- @param rectangle2: (x0, y0, w, h) of second rectangle
- """
- rectangle1 = rectangle1[0], rectangle1[1], rectangle1[0] + rectangle1[2], rectangle1[1] + rectangle1[3]
- rectangle2 = rectangle2[0], rectangle2[1], rectangle2[0] + rectangle2[2], rectangle2[1] + rectangle2[3]
- x_overlap = max(0., min(rectangle1[2], rectangle2[2]) - max(rectangle1[0], rectangle2[0]))
- y_overlap = max(0., min(rectangle1[3], rectangle2[3]) - max(rectangle1[1], rectangle2[1]))
- return x_overlap * y_overlap
-
-
-def horizontally_flip_bbox(bbox: BoundingBox) -> BoundingBox:
- return 1 - (bbox[0] + bbox[2]), bbox[1], bbox[2], bbox[3]
-
-
-def absolute_bbox(relative_bbox: BoundingBox, width: int, height: int) -> Tuple[int, int, int, int]:
- bbox = relative_bbox
- bbox = bbox[0] * width, bbox[1] * height, (bbox[0] + bbox[2]) * width, (bbox[1] + bbox[3]) * height
- return int(bbox[0]), int(bbox[1]), int(bbox[2]), int(bbox[3])
-
-
-def pad_list(list_: List, pad_element: Any, pad_to_length: int) -> List:
- return list_ + [pad_element for _ in range(pad_to_length - len(list_))]
-
-
-def rescale_annotations(annotations: List[Annotation], crop_coordinates: BoundingBox, flip: bool) -> \
- List[Annotation]:
- def clamp(x: float):
- return max(min(x, 1.), 0.)
-
- def rescale_bbox(bbox: BoundingBox) -> BoundingBox:
- x0 = clamp((bbox[0] - crop_coordinates[0]) / crop_coordinates[2])
- y0 = clamp((bbox[1] - crop_coordinates[1]) / crop_coordinates[3])
- w = min(bbox[2] / crop_coordinates[2], 1 - x0)
- h = min(bbox[3] / crop_coordinates[3], 1 - y0)
- if flip:
- x0 = 1 - (x0 + w)
- return x0, y0, w, h
-
- return [a._replace(bbox=rescale_bbox(a.bbox)) for a in annotations]
-
-
-def filter_annotations(annotations: List[Annotation], crop_coordinates: BoundingBox) -> List:
- return [a for a in annotations if intersection_area(a.bbox, crop_coordinates) > 0.0]
-
-
-def additional_parameters_string(annotation: Annotation, short: bool = True) -> str:
- sl = slice(1) if short else slice(None)
- string = ''
- if not (annotation.is_group_of or annotation.is_occluded or annotation.is_depiction or annotation.is_inside):
- return string
- if annotation.is_group_of:
- string += 'group'[sl] + ','
- if annotation.is_occluded:
- string += 'occluded'[sl] + ','
- if annotation.is_depiction:
- string += 'depiction'[sl] + ','
- if annotation.is_inside:
- string += 'inside'[sl]
- return '(' + string.strip(",") + ')'
-
-
-def get_plot_font_size(font_size: Optional[int], figure_size: Tuple[int, int]) -> int:
- if font_size is None:
- font_size = 10
- if max(figure_size) >= 256:
- font_size = 12
- if max(figure_size) >= 512:
- font_size = 15
- return font_size
-
-
-def get_circle_size(figure_size: Tuple[int, int]) -> int:
- circle_size = 2
- if max(figure_size) >= 256:
- circle_size = 3
- if max(figure_size) >= 512:
- circle_size = 4
- return circle_size
-
-
-def load_object_from_string(object_string: str) -> Any:
- """
- Source: https://stackoverflow.com/a/10773699
- """
- module_name, class_name = object_string.rsplit(".", 1)
- return getattr(importlib.import_module(module_name), class_name)
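A small worked example for the relative-box helpers above, assuming they are imported from this module; boxes are (x0, y0, w, h) in [0, 1] coordinates and the numbers are chosen purely for illustration.

box_a = (0.10, 0.10, 0.50, 0.50)
box_b = (0.40, 0.40, 0.50, 0.50)

overlap = intersection_area(box_a, box_b)             # 0.2 * 0.2 = 0.04
pixels = absolute_bbox(box_a, width=256, height=256)  # (25, 25, 153, 153)
mirrored = horizontally_flip_bbox(box_a)              # (0.4, 0.1, 0.5, 0.5)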
diff --git a/spaces/Qiukai/gpt/crazy_functions/test_project/python/dqn/policies.py b/spaces/Qiukai/gpt/crazy_functions/test_project/python/dqn/policies.py
deleted file mode 100644
index 4ecf39a5fc04b24ad1b809232b186728366987b6..0000000000000000000000000000000000000000
--- a/spaces/Qiukai/gpt/crazy_functions/test_project/python/dqn/policies.py
+++ /dev/null
@@ -1,237 +0,0 @@
-from typing import Any, Dict, List, Optional, Type
-
-import gym
-import torch as th
-from torch import nn
-
-from stable_baselines3.common.policies import BasePolicy, register_policy
-from stable_baselines3.common.torch_layers import BaseFeaturesExtractor, FlattenExtractor, NatureCNN, create_mlp
-from stable_baselines3.common.type_aliases import Schedule
-
-
-class QNetwork(BasePolicy):
- """
- Action-Value (Q-Value) network for DQN
-
- :param observation_space: Observation space
- :param action_space: Action space
- :param net_arch: The specification of the policy and value networks.
- :param activation_fn: Activation function
- :param normalize_images: Whether to normalize images or not,
- dividing by 255.0 (True by default)
- """
-
- def __init__(
- self,
- observation_space: gym.spaces.Space,
- action_space: gym.spaces.Space,
- features_extractor: nn.Module,
- features_dim: int,
- net_arch: Optional[List[int]] = None,
- activation_fn: Type[nn.Module] = nn.ReLU,
- normalize_images: bool = True,
- ):
- super(QNetwork, self).__init__(
- observation_space,
- action_space,
- features_extractor=features_extractor,
- normalize_images=normalize_images,
- )
-
- if net_arch is None:
- net_arch = [64, 64]
-
- self.net_arch = net_arch
- self.activation_fn = activation_fn
- self.features_extractor = features_extractor
- self.features_dim = features_dim
- self.normalize_images = normalize_images
- action_dim = self.action_space.n # number of actions
- q_net = create_mlp(self.features_dim, action_dim, self.net_arch, self.activation_fn)
- self.q_net = nn.Sequential(*q_net)
-
- def forward(self, obs: th.Tensor) -> th.Tensor:
- """
- Predict the q-values.
-
- :param obs: Observation
- :return: The estimated Q-Value for each action.
- """
- return self.q_net(self.extract_features(obs))
-
- def _predict(self, observation: th.Tensor, deterministic: bool = True) -> th.Tensor:
- q_values = self.forward(observation)
- # Greedy action
- action = q_values.argmax(dim=1).reshape(-1)
- return action
-
- def _get_constructor_parameters(self) -> Dict[str, Any]:
- data = super()._get_constructor_parameters()
-
- data.update(
- dict(
- net_arch=self.net_arch,
- features_dim=self.features_dim,
- activation_fn=self.activation_fn,
- features_extractor=self.features_extractor,
- )
- )
- return data
-
-
-class DQNPolicy(BasePolicy):
- """
- Policy class with Q-Value Net and target net for DQN
-
- :param observation_space: Observation space
- :param action_space: Action space
- :param lr_schedule: Learning rate schedule (could be constant)
- :param net_arch: The specification of the policy and value networks.
- :param activation_fn: Activation function
- :param features_extractor_class: Features extractor to use.
- :param features_extractor_kwargs: Keyword arguments
- to pass to the features extractor.
- :param normalize_images: Whether to normalize images or not,
- dividing by 255.0 (True by default)
- :param optimizer_class: The optimizer to use,
- ``th.optim.Adam`` by default
- :param optimizer_kwargs: Additional keyword arguments,
- excluding the learning rate, to pass to the optimizer
- """
-
- def __init__(
- self,
- observation_space: gym.spaces.Space,
- action_space: gym.spaces.Space,
- lr_schedule: Schedule,
- net_arch: Optional[List[int]] = None,
- activation_fn: Type[nn.Module] = nn.ReLU,
- features_extractor_class: Type[BaseFeaturesExtractor] = FlattenExtractor,
- features_extractor_kwargs: Optional[Dict[str, Any]] = None,
- normalize_images: bool = True,
- optimizer_class: Type[th.optim.Optimizer] = th.optim.Adam,
- optimizer_kwargs: Optional[Dict[str, Any]] = None,
- ):
- super(DQNPolicy, self).__init__(
- observation_space,
- action_space,
- features_extractor_class,
- features_extractor_kwargs,
- optimizer_class=optimizer_class,
- optimizer_kwargs=optimizer_kwargs,
- )
-
- if net_arch is None:
- if features_extractor_class == FlattenExtractor:
- net_arch = [64, 64]
- else:
- net_arch = []
-
- self.net_arch = net_arch
- self.activation_fn = activation_fn
- self.normalize_images = normalize_images
-
- self.net_args = {
- "observation_space": self.observation_space,
- "action_space": self.action_space,
- "net_arch": self.net_arch,
- "activation_fn": self.activation_fn,
- "normalize_images": normalize_images,
- }
-
- self.q_net, self.q_net_target = None, None
- self._build(lr_schedule)
-
- def _build(self, lr_schedule: Schedule) -> None:
- """
- Create the network and the optimizer.
-
- :param lr_schedule: Learning rate schedule
- lr_schedule(1) is the initial learning rate
- """
-
- self.q_net = self.make_q_net()
- self.q_net_target = self.make_q_net()
- self.q_net_target.load_state_dict(self.q_net.state_dict())
-
- # Setup optimizer with initial learning rate
- self.optimizer = self.optimizer_class(self.parameters(), lr=lr_schedule(1), **self.optimizer_kwargs)
-
- def make_q_net(self) -> QNetwork:
- # Make sure we always have separate networks for features extractors etc
- net_args = self._update_features_extractor(self.net_args, features_extractor=None)
- return QNetwork(**net_args).to(self.device)
-
- def forward(self, obs: th.Tensor, deterministic: bool = True) -> th.Tensor:
- return self._predict(obs, deterministic=deterministic)
-
- def _predict(self, obs: th.Tensor, deterministic: bool = True) -> th.Tensor:
- return self.q_net._predict(obs, deterministic=deterministic)
-
- def _get_constructor_parameters(self) -> Dict[str, Any]:
- data = super()._get_constructor_parameters()
-
- data.update(
- dict(
- net_arch=self.net_args["net_arch"],
- activation_fn=self.net_args["activation_fn"],
- lr_schedule=self._dummy_schedule, # dummy lr schedule, not needed for loading policy alone
- optimizer_class=self.optimizer_class,
- optimizer_kwargs=self.optimizer_kwargs,
- features_extractor_class=self.features_extractor_class,
- features_extractor_kwargs=self.features_extractor_kwargs,
- )
- )
- return data
-
-
-MlpPolicy = DQNPolicy
-
-
-class CnnPolicy(DQNPolicy):
- """
- Policy class for DQN when using images as input.
-
- :param observation_space: Observation space
- :param action_space: Action space
- :param lr_schedule: Learning rate schedule (could be constant)
- :param net_arch: The specification of the policy and value networks.
- :param activation_fn: Activation function
- :param features_extractor_class: Features extractor to use.
- :param normalize_images: Whether to normalize images or not,
- dividing by 255.0 (True by default)
- :param optimizer_class: The optimizer to use,
- ``th.optim.Adam`` by default
- :param optimizer_kwargs: Additional keyword arguments,
- excluding the learning rate, to pass to the optimizer
- """
-
- def __init__(
- self,
- observation_space: gym.spaces.Space,
- action_space: gym.spaces.Space,
- lr_schedule: Schedule,
- net_arch: Optional[List[int]] = None,
- activation_fn: Type[nn.Module] = nn.ReLU,
- features_extractor_class: Type[BaseFeaturesExtractor] = NatureCNN,
- features_extractor_kwargs: Optional[Dict[str, Any]] = None,
- normalize_images: bool = True,
- optimizer_class: Type[th.optim.Optimizer] = th.optim.Adam,
- optimizer_kwargs: Optional[Dict[str, Any]] = None,
- ):
- super(CnnPolicy, self).__init__(
- observation_space,
- action_space,
- lr_schedule,
- net_arch,
- activation_fn,
- features_extractor_class,
- features_extractor_kwargs,
- normalize_images,
- optimizer_class,
- optimizer_kwargs,
- )
-
-
-register_policy("MlpPolicy", MlpPolicy)
-register_policy("CnnPolicy", CnnPolicy)
diff --git a/spaces/QuoQA-NLP/QuoQaGo/app.py b/spaces/QuoQA-NLP/QuoQaGo/app.py
deleted file mode 100644
index 18dbbdc6fa849b2ed538f54b47a15e335b81e611..0000000000000000000000000000000000000000
--- a/spaces/QuoQA-NLP/QuoQaGo/app.py
+++ /dev/null
@@ -1,72 +0,0 @@
-# -*- coding: utf-8 -*-
-import numpy as np
-import streamlit as st
-from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
-
-
-st.set_page_config(
- page_title="쿼카고", layout="wide", initial_sidebar_state="expanded"
-)
-
-@st.cache
-def load_model(model_name):
- model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
- return model
-
-tokenizer = AutoTokenizer.from_pretrained("QuoQA-NLP/KE-T5-Ko2En-Base")
-ko2en_model = load_model("QuoQA-NLP/KE-T5-Ko2En-Base")
-en2ko_model = load_model("QuoQA-NLP/KE-T5-En2Ko-Base")
-
-
-st.title("🐻 쿼카고 번역기")
-st.write("좌측에 번역 모드를 선택하고, CTRL+Enter(CMD+Enter)를 누르세요 🤗")
-st.write("Select Translation Mode at the left and press CTRL+Enter(CMD+Enter)🤗")
-
-translation_list = ["한국어에서 영어 | Korean to English", "영어에서 한국어 | English to Korean"]
-translation_mode = st.sidebar.radio("번역 모드를 선택(Translation Mode):", translation_list)
-
-
-default_value = '신한카드 관계자는 "과거 내놓은 상품의 경우 출시 2개월 만에 적금 가입이 4만여 좌에 달할 정도로 인기를 끌었다"면서 "금리 인상에 따라 적금 금리를 더 올려 많은 고객이 몰릴 것으로 예상하고 있다"고 말했다.'
-src_text = st.text_area(
- "번역하고 싶은 문장을 입력하세요:",
- default_value,
- height=300,
- max_chars=200,
-)
-print(src_text)
-
-
-
-if src_text == "":
- st.warning("Please **enter text** for translation")
-else:
- # translate into english sentence
- if translation_mode == translation_list[0]:
- model = ko2en_model
- else:
- model = en2ko_model
-
- translation_result = model.generate(
- **tokenizer(
- src_text,
- return_tensors="pt",
- padding="max_length",
- truncation=True,
- max_length=64,
- ),
- max_length=64,
- num_beams=5,
- repetition_penalty=1.3,
- no_repeat_ngram_size=3,
- num_return_sequences=1,
- )
- translation_result = tokenizer.decode(
- translation_result[0],
- clean_up_tokenization_spaces=True,
- skip_special_tokens=True,
- )
-
- print(f"{src_text} -> {translation_result}")
-
- st.write(translation_result)
- print(translation_result)
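The same checkpoints can also be driven without the Streamlit front end; a minimal sketch using the generation settings above, with a short illustrative input sentence.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("QuoQA-NLP/KE-T5-Ko2En-Base")
model = AutoModelForSeq2SeqLM.from_pretrained("QuoQA-NLP/KE-T5-Ko2En-Base")

# Example source sentence ("The weather is really nice today.")
batch = tokenizer("오늘 날씨가 정말 좋다.", return_tensors="pt", truncation=True, max_length=64)
output_ids = model.generate(**batch, max_length=64, num_beams=5, repetition_penalty=1.3)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))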
diff --git a/spaces/RatKing243/Test/README.md b/spaces/RatKing243/Test/README.md
deleted file mode 100644
index dbaadbf4c5737fcbc8229efadbc89f06f4b0f9bd..0000000000000000000000000000000000000000
--- a/spaces/RatKing243/Test/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Test
-emoji: 📊
-colorFrom: red
-colorTo: red
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/tools/extract.py b/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/tools/extract.py
deleted file mode 100644
index b3dea56a14f6c100b2c53978678bab69a656cdeb..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/tools/extract.py
+++ /dev/null
@@ -1,81 +0,0 @@
-import os
-import glob
-from re import split
-from tqdm import tqdm
-from multiprocessing import Pool
-from functools import partial
-
-scannet_dir = "/root/data/ScanNet-v2-1.0.0/data/raw"
-dump_dir = "/root/data/scannet_dump"
-num_process = 32
-
-
-def extract(seq, scannet_dir, split, dump_dir):
- assert split == "train" or split == "test"
- if not os.path.exists(os.path.join(dump_dir, split, seq)):
- os.mkdir(os.path.join(dump_dir, split, seq))
- cmd = (
- "python reader.py --filename "
- + os.path.join(
- scannet_dir,
- "scans" if split == "train" else "scans_test",
- seq,
- seq + ".sens",
- )
- + " --output_path "
- + os.path.join(dump_dir, split, seq)
- + " --export_depth_images --export_color_images --export_poses --export_intrinsics"
- )
- os.system(cmd)
-
-
-if __name__ == "__main__":
- if not os.path.exists(dump_dir):
- os.mkdir(dump_dir)
- os.mkdir(os.path.join(dump_dir, "train"))
- os.mkdir(os.path.join(dump_dir, "test"))
-
- train_seq_list = [
- seq.split("/")[-1]
- for seq in glob.glob(os.path.join(scannet_dir, "scans", "scene*"))
- ]
- test_seq_list = [
- seq.split("/")[-1]
- for seq in glob.glob(os.path.join(scannet_dir, "scans_test", "scene*"))
- ]
-
- extract_train = partial(
- extract, scannet_dir=scannet_dir, split="train", dump_dir=dump_dir
- )
- extract_test = partial(
- extract, scannet_dir=scannet_dir, split="test", dump_dir=dump_dir
- )
-
- num_train_iter = (
- len(train_seq_list) // num_process
- if len(train_seq_list) % num_process == 0
- else len(train_seq_list) // num_process + 1
- )
- num_test_iter = (
- len(test_seq_list) // num_process
- if len(test_seq_list) % num_process == 0
- else len(test_seq_list) // num_process + 1
- )
-
- pool = Pool(num_process)
- for index in tqdm(range(num_train_iter)):
- seq_list = train_seq_list[
- index * num_process : min((index + 1) * num_process, len(train_seq_list))
- ]
- pool.map(extract_train, seq_list)
- pool.close()
- pool.join()
-
- pool = Pool(num_process)
- for index in tqdm(range(num_test_iter)):
- seq_list = test_seq_list[
- index * num_process : min((index + 1) * num_process, len(test_seq_list))
- ]
- pool.map(extract_test, seq_list)
- pool.close()
- pool.join()
diff --git a/spaces/Realcat/image-matching-webui/third_party/LightGlue/setup.py b/spaces/Realcat/image-matching-webui/third_party/LightGlue/setup.py
deleted file mode 100644
index 2b012e92a208d09e4983317c4eb3c1d8093177e8..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/LightGlue/setup.py
+++ /dev/null
@@ -1,27 +0,0 @@
-from pathlib import Path
-from setuptools import setup
-
-description = ["LightGlue"]
-
-with open(str(Path(__file__).parent / "README.md"), "r", encoding="utf-8") as f:
- readme = f.read()
-with open(str(Path(__file__).parent / "requirements.txt"), "r") as f:
- dependencies = f.read().split("\n")
-
-setup(
- name="lightglue",
- version="0.0",
- packages=["lightglue"],
- python_requires=">=3.6",
- install_requires=dependencies,
- author="Philipp Lindenberger, Paul-Edouard Sarlin",
- description=description,
- long_description=readme,
- long_description_content_type="text/markdown",
- url="https://github.com/cvg/LightGlue/",
- classifiers=[
- "Programming Language :: Python :: 3",
- "License :: OSI Approved :: Apache Software License",
- "Operating System :: OS Independent",
- ],
-)
diff --git a/spaces/RitaParadaRamos/SmallCapDemo/src/utils_generate_retrieved_caps.py b/spaces/RitaParadaRamos/SmallCapDemo/src/utils_generate_retrieved_caps.py
deleted file mode 100644
index 805ca584de60342369b888f6e67af82f8ea2a624..0000000000000000000000000000000000000000
--- a/spaces/RitaParadaRamos/SmallCapDemo/src/utils_generate_retrieved_caps.py
+++ /dev/null
@@ -1,135 +0,0 @@
-from torch.utils.data import Dataset
-from PIL import Image
-import torch
-import json
-import h5py
-import bisect
-
-CAPTION_LENGTH = 25
-SIMPLE_PREFIX = "This image shows "
-
-def prep_strings(text, tokenizer, template=None, retrieved_caps=None, k=None, is_test=False, max_length=None):
-
- if is_test:
- padding = False
- truncation = False
- else:
- padding = True
- truncation = True
-
- if retrieved_caps is not None:
- infix = '\n\n'.join(retrieved_caps[:k]) + '.'
- prefix = template.replace('||', infix)
- else:
- prefix = SIMPLE_PREFIX
-
- prefix_ids = tokenizer.encode(prefix)
- len_prefix = len(prefix_ids)
-
- text_ids = tokenizer.encode(text, add_special_tokens=False)
- if truncation:
- text_ids = text_ids[:CAPTION_LENGTH]
- input_ids = prefix_ids + text_ids if not is_test else prefix_ids
-
- # we ignore the prefix (minus one as the first subtoken in the prefix is not predicted)
- label_ids = [-100] * (len_prefix - 1) + text_ids + [tokenizer.eos_token_id]
- if padding:
- input_ids += [tokenizer.pad_token_id] * (max_length - len(input_ids))
- label_ids += [-100] * (max_length - len(label_ids))
-
- if is_test:
- return input_ids
- else:
- return input_ids, label_ids
-
-def postprocess_preds(pred, tokenizer):
- pred = pred.split(SIMPLE_PREFIX)[-1]
- pred = pred.replace(tokenizer.pad_token, '')
- if pred.startswith(tokenizer.bos_token):
- pred = pred[len(tokenizer.bos_token):]
- if pred.endswith(tokenizer.eos_token):
- pred = pred[:-len(tokenizer.eos_token)]
- return pred
-
-class TrainDataset(Dataset):
- def __init__(self, df, features_path, tokenizer, rag=False, template_path=None, k=None, max_caption_length=25):
- self.df = df
- self.tokenizer = tokenizer
- self.features = h5py.File(features_path, 'r')
-
- if rag:
- self.template = open(template_path).read().strip() + ' '
- self.max_target_length = (max_caption_length # target caption
- + max_caption_length * k # retrieved captions
- + len(tokenizer.encode(self.template)) # template
- + len(tokenizer.encode('\n\n')) * (k-1) # separator between captions
- )
- assert k is not None
- self.k = k
- self.rag = rag
-
- def __len__(self):
- return len(self.df)
-
- def __getitem__(self, idx):
- text = self.df['text'][idx]
- if self.rag:
- caps = self.df['caps'][idx]
- decoder_input_ids, labels = prep_strings(text, self.tokenizer, template=self.template,
- retrieved_caps=caps, k=self.k, max_length=self.max_target_length)
- else:
- decoder_input_ids, labels = prep_strings(text, self.tokenizer, max_length=self.max_target_length)
- # load precomputed features
- encoder_outputs = self.features[self.df['cocoid'][idx]][()]
- encoding = {"encoder_outputs": torch.tensor(encoder_outputs),
- "decoder_input_ids": torch.tensor(decoder_input_ids),
- "labels": torch.tensor(labels)}
-
- return encoding
-
-
-def load_data_for_training(annot_path, caps_path=None):
- annotations = json.load(open(annot_path))['images']
- if caps_path is not None:
- retrieved_caps = json.load(open(caps_path))
- data = {'train': [], 'val': []}
-
- for item in annotations:
- file_name = item['filename'].split('_')[-1]
- caps = retrieved_caps[str(item['cocoid'])]
-
- samples = []
- for sentence in item['sentences']:
- print("how are the retrieved caps", caps + ' '.join(sentence['tokens']))
-
- samples.append({'file_name': file_name, 'cocoid': str(item['cocoid']), 'caps': None, 'text': " ".join(caps) + ' '.join(sentence['tokens'])})
- if item['split'] == 'train' or item['split'] == 'restval':
- data['train'] += samples
- elif item['split'] == 'val':
- data['val'] += samples
- return data
-
-
-
-
-
-def load_data_for_inference(annot_path, caps_path=None):
- annotations = json.load(open(annot_path))['images']
- if caps_path is not None:
- retrieved_caps = json.load(open(caps_path))
- data = {'test': [], 'val': []}
-
- for item in annotations:
- file_name = item['filename'].split('_')[-1]
- if caps_path is not None:
- caps = retrieved_caps[str(item['cocoid'])]
- else:
- caps = None
- image = {'file_name': file_name, 'caps': caps, 'image_id': str(item['cocoid'])}
- if item['split'] == 'test':
- data['test'].append(image)
- elif item['split'] == 'val':
- data['val'].append(image)
-
- return data
-
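To make the prompt construction in prep_strings concrete, a minimal sketch with a GPT-2 tokenizer; the tokenizer choice and the template string (which must contain the '||' slot that prep_strings fills with the retrieved captions) are assumptions for illustration only.

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token  # GPT-2 ships without a pad token

retrieved = ["a dog runs on the beach", "a brown dog plays in the sand"]
template = "Similar images show ||This image shows "  # hypothetical template with the '||' slot

# Training mode: returns padded decoder input ids and -100-masked labels.
input_ids, label_ids = prep_strings("a dog runs along the beach", tok,
                                    template=template, retrieved_caps=retrieved,
                                    k=2, max_length=120)
print(tok.decode(input_ids[:40]))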
diff --git a/spaces/Robert001/UniControl-Demo/README.md b/spaces/Robert001/UniControl-Demo/README.md
deleted file mode 100644
index a8457ea3f35dbb3a47b5573aff671c56b57d1f9c..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: UniControl Demo
-emoji: 📚
-colorFrom: green
-colorTo: blue
-sdk: gradio
-sdk_version: 3.36.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/bricks/generalized_attention.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/bricks/generalized_attention.py
deleted file mode 100644
index 988d9adf2f289ef223bd1c680a5ae1d3387f0269..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/bricks/generalized_attention.py
+++ /dev/null
@@ -1,412 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import math
-
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from ..utils import kaiming_init
-from .registry import PLUGIN_LAYERS
-
-
-@PLUGIN_LAYERS.register_module()
-class GeneralizedAttention(nn.Module):
- """GeneralizedAttention module.
-
- See 'An Empirical Study of Spatial Attention Mechanisms in Deep Networks'
- (https://arxiv.org/abs/1711.07971) for details.
-
- Args:
- in_channels (int): Channels of the input feature map.
- spatial_range (int): The spatial range. -1 indicates no spatial range
- constraint. Default: -1.
- num_heads (int): The head number of empirical_attention module.
- Default: 9.
- position_embedding_dim (int): The position embedding dimension.
- Default: -1.
- position_magnitude (int): A multiplier acting on coord difference.
- Default: 1.
- kv_stride (int): The feature stride acting on key/value feature map.
- Default: 2.
- q_stride (int): The feature stride acting on query feature map.
- Default: 1.
- attention_type (str): A binary indicator string for indicating which
- items in generalized empirical_attention module are used.
- Default: '1111'.
-
- - '1000' indicates 'query and key content' (appr - appr) item,
- - '0100' indicates 'query content and relative position'
- (appr - position) item,
- - '0010' indicates 'key content only' (bias - appr) item,
- - '0001' indicates 'relative position only' (bias - position) item.
- """
-
- _abbr_ = 'gen_attention_block'
-
- def __init__(self,
- in_channels,
- spatial_range=-1,
- num_heads=9,
- position_embedding_dim=-1,
- position_magnitude=1,
- kv_stride=2,
- q_stride=1,
- attention_type='1111'):
-
- super(GeneralizedAttention, self).__init__()
-
- # hard range means local range for non-local operation
- self.position_embedding_dim = (
- position_embedding_dim
- if position_embedding_dim > 0 else in_channels)
-
- self.position_magnitude = position_magnitude
- self.num_heads = num_heads
- self.in_channels = in_channels
- self.spatial_range = spatial_range
- self.kv_stride = kv_stride
- self.q_stride = q_stride
- self.attention_type = [bool(int(_)) for _ in attention_type]
- self.qk_embed_dim = in_channels // num_heads
- out_c = self.qk_embed_dim * num_heads
-
- if self.attention_type[0] or self.attention_type[1]:
- self.query_conv = nn.Conv2d(
- in_channels=in_channels,
- out_channels=out_c,
- kernel_size=1,
- bias=False)
- self.query_conv.kaiming_init = True
-
- if self.attention_type[0] or self.attention_type[2]:
- self.key_conv = nn.Conv2d(
- in_channels=in_channels,
- out_channels=out_c,
- kernel_size=1,
- bias=False)
- self.key_conv.kaiming_init = True
-
- self.v_dim = in_channels // num_heads
- self.value_conv = nn.Conv2d(
- in_channels=in_channels,
- out_channels=self.v_dim * num_heads,
- kernel_size=1,
- bias=False)
- self.value_conv.kaiming_init = True
-
- if self.attention_type[1] or self.attention_type[3]:
- self.appr_geom_fc_x = nn.Linear(
- self.position_embedding_dim // 2, out_c, bias=False)
- self.appr_geom_fc_x.kaiming_init = True
-
- self.appr_geom_fc_y = nn.Linear(
- self.position_embedding_dim // 2, out_c, bias=False)
- self.appr_geom_fc_y.kaiming_init = True
-
- if self.attention_type[2]:
- stdv = 1.0 / math.sqrt(self.qk_embed_dim * 2)
- appr_bias_value = -2 * stdv * torch.rand(out_c) + stdv
- self.appr_bias = nn.Parameter(appr_bias_value)
-
- if self.attention_type[3]:
- stdv = 1.0 / math.sqrt(self.qk_embed_dim * 2)
- geom_bias_value = -2 * stdv * torch.rand(out_c) + stdv
- self.geom_bias = nn.Parameter(geom_bias_value)
-
- self.proj_conv = nn.Conv2d(
- in_channels=self.v_dim * num_heads,
- out_channels=in_channels,
- kernel_size=1,
- bias=True)
- self.proj_conv.kaiming_init = True
- self.gamma = nn.Parameter(torch.zeros(1))
-
- if self.spatial_range >= 0:
- # only works when non local is after 3*3 conv
- if in_channels == 256:
- max_len = 84
- elif in_channels == 512:
- max_len = 42
-
- max_len_kv = int((max_len - 1.0) / self.kv_stride + 1)
- local_constraint_map = np.ones(
- (max_len, max_len, max_len_kv, max_len_kv), dtype=np.int)
- for iy in range(max_len):
- for ix in range(max_len):
- local_constraint_map[
- iy, ix,
- max((iy - self.spatial_range) //
- self.kv_stride, 0):min((iy + self.spatial_range +
- 1) // self.kv_stride +
- 1, max_len),
- max((ix - self.spatial_range) //
- self.kv_stride, 0):min((ix + self.spatial_range +
- 1) // self.kv_stride +
- 1, max_len)] = 0
-
- self.local_constraint_map = nn.Parameter(
- torch.from_numpy(local_constraint_map).byte(),
- requires_grad=False)
-
- if self.q_stride > 1:
- self.q_downsample = nn.AvgPool2d(
- kernel_size=1, stride=self.q_stride)
- else:
- self.q_downsample = None
-
- if self.kv_stride > 1:
- self.kv_downsample = nn.AvgPool2d(
- kernel_size=1, stride=self.kv_stride)
- else:
- self.kv_downsample = None
-
- self.init_weights()
-
- def get_position_embedding(self,
- h,
- w,
- h_kv,
- w_kv,
- q_stride,
- kv_stride,
- device,
- dtype,
- feat_dim,
- wave_length=1000):
- # the default type of Tensor is float32, leading to type mismatch
- # in fp16 mode. Cast it to support fp16 mode.
- h_idxs = torch.linspace(0, h - 1, h).to(device=device, dtype=dtype)
- h_idxs = h_idxs.view((h, 1)) * q_stride
-
- w_idxs = torch.linspace(0, w - 1, w).to(device=device, dtype=dtype)
- w_idxs = w_idxs.view((w, 1)) * q_stride
-
- h_kv_idxs = torch.linspace(0, h_kv - 1, h_kv).to(
- device=device, dtype=dtype)
- h_kv_idxs = h_kv_idxs.view((h_kv, 1)) * kv_stride
-
- w_kv_idxs = torch.linspace(0, w_kv - 1, w_kv).to(
- device=device, dtype=dtype)
- w_kv_idxs = w_kv_idxs.view((w_kv, 1)) * kv_stride
-
- # (h, h_kv, 1)
- h_diff = h_idxs.unsqueeze(1) - h_kv_idxs.unsqueeze(0)
- h_diff *= self.position_magnitude
-
- # (w, w_kv, 1)
- w_diff = w_idxs.unsqueeze(1) - w_kv_idxs.unsqueeze(0)
- w_diff *= self.position_magnitude
-
- feat_range = torch.arange(0, feat_dim / 4).to(
- device=device, dtype=dtype)
-
- dim_mat = torch.Tensor([wave_length]).to(device=device, dtype=dtype)
- dim_mat = dim_mat**((4. / feat_dim) * feat_range)
- dim_mat = dim_mat.view((1, 1, -1))
-
- embedding_x = torch.cat(
- ((w_diff / dim_mat).sin(), (w_diff / dim_mat).cos()), dim=2)
-
- embedding_y = torch.cat(
- ((h_diff / dim_mat).sin(), (h_diff / dim_mat).cos()), dim=2)
-
- return embedding_x, embedding_y
-
- def forward(self, x_input):
- num_heads = self.num_heads
-
- # use empirical_attention
- if self.q_downsample is not None:
- x_q = self.q_downsample(x_input)
- else:
- x_q = x_input
- n, _, h, w = x_q.shape
-
- if self.kv_downsample is not None:
- x_kv = self.kv_downsample(x_input)
- else:
- x_kv = x_input
- _, _, h_kv, w_kv = x_kv.shape
-
- if self.attention_type[0] or self.attention_type[1]:
- proj_query = self.query_conv(x_q).view(
- (n, num_heads, self.qk_embed_dim, h * w))
- proj_query = proj_query.permute(0, 1, 3, 2)
-
- if self.attention_type[0] or self.attention_type[2]:
- proj_key = self.key_conv(x_kv).view(
- (n, num_heads, self.qk_embed_dim, h_kv * w_kv))
-
- if self.attention_type[1] or self.attention_type[3]:
- position_embed_x, position_embed_y = self.get_position_embedding(
- h, w, h_kv, w_kv, self.q_stride, self.kv_stride,
- x_input.device, x_input.dtype, self.position_embedding_dim)
- # (n, num_heads, w, w_kv, dim)
- position_feat_x = self.appr_geom_fc_x(position_embed_x).\
- view(1, w, w_kv, num_heads, self.qk_embed_dim).\
- permute(0, 3, 1, 2, 4).\
- repeat(n, 1, 1, 1, 1)
-
- # (n, num_heads, h, h_kv, dim)
- position_feat_y = self.appr_geom_fc_y(position_embed_y).\
- view(1, h, h_kv, num_heads, self.qk_embed_dim).\
- permute(0, 3, 1, 2, 4).\
- repeat(n, 1, 1, 1, 1)
-
- position_feat_x /= math.sqrt(2)
- position_feat_y /= math.sqrt(2)
-
- # accelerate for saliency only
- if (np.sum(self.attention_type) == 1) and self.attention_type[2]:
- appr_bias = self.appr_bias.\
- view(1, num_heads, 1, self.qk_embed_dim).\
- repeat(n, 1, 1, 1)
-
- energy = torch.matmul(appr_bias, proj_key).\
- view(n, num_heads, 1, h_kv * w_kv)
-
- h = 1
- w = 1
- else:
- # energy has shape (n, num_heads, h*w, h_kv*w_kv); computed as query x key, this buffer can be very large (on the order of 540 MB for big feature maps)
- if not self.attention_type[0]:
- energy = torch.zeros(
- n,
- num_heads,
- h,
- w,
- h_kv,
- w_kv,
- dtype=x_input.dtype,
- device=x_input.device)
-
- # attention_type[0]: appr - appr
- # attention_type[1]: appr - position
- # attention_type[2]: bias - appr
- # attention_type[3]: bias - position
- if self.attention_type[0] or self.attention_type[2]:
- if self.attention_type[0] and self.attention_type[2]:
- appr_bias = self.appr_bias.\
- view(1, num_heads, 1, self.qk_embed_dim)
- energy = torch.matmul(proj_query + appr_bias, proj_key).\
- view(n, num_heads, h, w, h_kv, w_kv)
-
- elif self.attention_type[0]:
- energy = torch.matmul(proj_query, proj_key).\
- view(n, num_heads, h, w, h_kv, w_kv)
-
- elif self.attention_type[2]:
- appr_bias = self.appr_bias.\
- view(1, num_heads, 1, self.qk_embed_dim).\
- repeat(n, 1, 1, 1)
-
- energy += torch.matmul(appr_bias, proj_key).\
- view(n, num_heads, 1, 1, h_kv, w_kv)
-
- if self.attention_type[1] or self.attention_type[3]:
- if self.attention_type[1] and self.attention_type[3]:
- geom_bias = self.geom_bias.\
- view(1, num_heads, 1, self.qk_embed_dim)
-
- proj_query_reshape = (proj_query + geom_bias).\
- view(n, num_heads, h, w, self.qk_embed_dim)
-
- energy_x = torch.matmul(
- proj_query_reshape.permute(0, 1, 3, 2, 4),
- position_feat_x.permute(0, 1, 2, 4, 3))
- energy_x = energy_x.\
- permute(0, 1, 3, 2, 4).unsqueeze(4)
-
- energy_y = torch.matmul(
- proj_query_reshape,
- position_feat_y.permute(0, 1, 2, 4, 3))
- energy_y = energy_y.unsqueeze(5)
-
- energy += energy_x + energy_y
-
- elif self.attention_type[1]:
- proj_query_reshape = proj_query.\
- view(n, num_heads, h, w, self.qk_embed_dim)
- proj_query_reshape = proj_query_reshape.\
- permute(0, 1, 3, 2, 4)
- position_feat_x_reshape = position_feat_x.\
- permute(0, 1, 2, 4, 3)
- position_feat_y_reshape = position_feat_y.\
- permute(0, 1, 2, 4, 3)
-
- energy_x = torch.matmul(proj_query_reshape,
- position_feat_x_reshape)
- energy_x = energy_x.permute(0, 1, 3, 2, 4).unsqueeze(4)
-
- energy_y = torch.matmul(proj_query_reshape,
- position_feat_y_reshape)
- energy_y = energy_y.unsqueeze(5)
-
- energy += energy_x + energy_y
-
- elif self.attention_type[3]:
- geom_bias = self.geom_bias.\
- view(1, num_heads, self.qk_embed_dim, 1).\
- repeat(n, 1, 1, 1)
-
- position_feat_x_reshape = position_feat_x.\
- view(n, num_heads, w*w_kv, self.qk_embed_dim)
-
- position_feat_y_reshape = position_feat_y.\
- view(n, num_heads, h * h_kv, self.qk_embed_dim)
-
- energy_x = torch.matmul(position_feat_x_reshape, geom_bias)
- energy_x = energy_x.view(n, num_heads, 1, w, 1, w_kv)
-
- energy_y = torch.matmul(position_feat_y_reshape, geom_bias)
- energy_y = energy_y.view(n, num_heads, h, 1, h_kv, 1)
-
- energy += energy_x + energy_y
-
- energy = energy.view(n, num_heads, h * w, h_kv * w_kv)
-
- if self.spatial_range >= 0:
- cur_local_constraint_map = \
- self.local_constraint_map[:h, :w, :h_kv, :w_kv].\
- contiguous().\
- view(1, 1, h*w, h_kv*w_kv)
-
- energy = energy.masked_fill_(cur_local_constraint_map,
- float('-inf'))
-
- attention = F.softmax(energy, 3)
-
- proj_value = self.value_conv(x_kv)
- proj_value_reshape = proj_value.\
- view((n, num_heads, self.v_dim, h_kv * w_kv)).\
- permute(0, 1, 3, 2)
-
- out = torch.matmul(attention, proj_value_reshape).\
- permute(0, 1, 3, 2).\
- contiguous().\
- view(n, self.v_dim * self.num_heads, h, w)
-
- out = self.proj_conv(out)
-
- # output is downsampled, upsample back to input size
- if self.q_downsample is not None:
- out = F.interpolate(
- out,
- size=x_input.shape[2:],
- mode='bilinear',
- align_corners=False)
-
- out = self.gamma * out + x_input
- return out
-
- def init_weights(self):
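- # Kaiming-initialise every submodule that was flagged with kaiming_init=True when it was created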
- for m in self.modules():
- if hasattr(m, 'kaiming_init') and m.kaiming_init:
- kaiming_init(
- m,
- mode='fan_in',
- nonlinearity='leaky_relu',
- bias=0,
- distribution='uniform',
- a=1)
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/anchor/point_generator.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/anchor/point_generator.py
deleted file mode 100644
index e6fbd988c317992c092c68c827dc4c53223b4a4a..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/anchor/point_generator.py
+++ /dev/null
@@ -1,37 +0,0 @@
-import torch
-
-from .builder import ANCHOR_GENERATORS
-
-
-@ANCHOR_GENERATORS.register_module()
-class PointGenerator(object):
-
- def _meshgrid(self, x, y, row_major=True):
- xx = x.repeat(len(y))
- yy = y.view(-1, 1).repeat(1, len(x)).view(-1)
- if row_major:
- return xx, yy
- else:
- return yy, xx
-
- def grid_points(self, featmap_size, stride=16, device='cuda'):
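- # Returns one (x, y, stride) row per feature-map cell, shape (feat_h * feat_w, 3), in image coordinates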
- feat_h, feat_w = featmap_size
- shift_x = torch.arange(0., feat_w, device=device) * stride
- shift_y = torch.arange(0., feat_h, device=device) * stride
- shift_xx, shift_yy = self._meshgrid(shift_x, shift_y)
- stride = shift_x.new_full((shift_xx.shape[0], ), stride)
- shifts = torch.stack([shift_xx, shift_yy, stride], dim=-1)
- all_points = shifts.to(device)
- return all_points
-
- def valid_flags(self, featmap_size, valid_size, device='cuda'):
- feat_h, feat_w = featmap_size
- valid_h, valid_w = valid_size
- assert valid_h <= feat_h and valid_w <= feat_w
- valid_x = torch.zeros(feat_w, dtype=torch.bool, device=device)
- valid_y = torch.zeros(feat_h, dtype=torch.bool, device=device)
- valid_x[:valid_w] = 1
- valid_y[:valid_h] = 1
- valid_xx, valid_yy = self._meshgrid(valid_x, valid_y)
- valid = valid_xx & valid_yy
- return valid
diff --git a/spaces/Rominn/vits-uma-genshin-honkai/commons.py b/spaces/Rominn/vits-uma-genshin-honkai/commons.py
deleted file mode 100644
index 40fcc05364d4815971f5c6f9dbb8dcef8e3ec1e9..0000000000000000000000000000000000000000
--- a/spaces/Rominn/vits-uma-genshin-honkai/commons.py
+++ /dev/null
@@ -1,172 +0,0 @@
-import math
-import torch
-from torch.nn import functional as F
-import torch.jit
-
-
-def script_method(fn, _rcb=None):
- return fn
-
-
-def script(obj, optimize=True, _frames_up=0, _rcb=None):
- return obj
-
-
-torch.jit.script_method = script_method
-torch.jit.script = script
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def intersperse(lst, item):
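- # e.g. intersperse([1, 2, 3], 0) -> [0, 1, 0, 2, 0, 3, 0]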
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q)
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
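- # x: (batch, channels, time); copy a segment_size-frame window starting at ids_str[i] for each batch element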
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(
- length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = (
- math.log(float(max_timescale) / float(min_timescale)) /
- (num_timescales - 1))
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
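- # lower-triangular causal mask of shape (1, 1, length, length)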
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2,3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1. / norm_type)
- return total_norm
diff --git a/spaces/SAMControlNet/SyntheticDataSAM/README.md b/spaces/SAMControlNet/SyntheticDataSAM/README.md
deleted file mode 100644
index bb2a027d421f6ff9b80f5583adeeaa601689549f..0000000000000000000000000000000000000000
--- a/spaces/SAMControlNet/SyntheticDataSAM/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: WildSynth (ControlNet + SAM)
-emoji: 🦬
-colorFrom: blue
-colorTo: green
-sdk: gradio
-sdk_version: 3.28.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/SQSora/VITS-Umamusume-voice-synthesizer/text/__init__.py b/spaces/SQSora/VITS-Umamusume-voice-synthesizer/text/__init__.py
deleted file mode 100644
index 4e69c354dd24e3243980236eca962cd5945a92fc..0000000000000000000000000000000000000000
--- a/spaces/SQSora/VITS-Umamusume-voice-synthesizer/text/__init__.py
+++ /dev/null
@@ -1,32 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-from text import cleaners
-
-
-def text_to_sequence(text, symbols, cleaner_names):
- '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
- Args:
- text: string to convert to a sequence
- symbols: list of valid symbols; their indices in this list become the IDs
- cleaner_names: names of the cleaner functions to run the text through
- Returns:
- List of integers corresponding to the symbols in the text
- '''
- _symbol_to_id = {s: i for i, s in enumerate(symbols)}
-
- sequence = []
-
- clean_text = _clean_text(text, cleaner_names)
- for symbol in clean_text:
- if symbol not in _symbol_to_id.keys():
- continue
- symbol_id = _symbol_to_id[symbol]
- sequence += [symbol_id]
- return sequence
-
-
-def _clean_text(text, cleaner_names):
- for name in cleaner_names:
- cleaner = getattr(cleaners, name)
- if not cleaner:
- raise Exception('Unknown cleaner: %s' % name)
- text = cleaner(text)
- return text
diff --git a/spaces/Sakil/question_answering_app/README.md b/spaces/Sakil/question_answering_app/README.md
deleted file mode 100644
index dfd859bb4ac38dd9778c2b42750237a9a3a5ae48..0000000000000000000000000000000000000000
--- a/spaces/Sakil/question_answering_app/README.md
+++ /dev/null
@@ -1,46 +0,0 @@
----
-title: Question_answering_app
-emoji: 🐢
-colorFrom: red
-colorTo: gray
-sdk: gradio
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio`, `streamlit`, or `static`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
-Path is relative to the root of the repository.
-
-`models`: _List[string]_
-HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space.
-Will be parsed automatically from your code if not specified here.
-
-`datasets`: _List[string]_
-HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space.
-Will be parsed automatically from your code if not specified here.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/Sarath2002/YouTube_Video_Summarizer/support.py b/spaces/Sarath2002/YouTube_Video_Summarizer/support.py
deleted file mode 100644
index 892dbdcdebd4ef947c1c61b47bba2cd3a8ed82a2..0000000000000000000000000000000000000000
--- a/spaces/Sarath2002/YouTube_Video_Summarizer/support.py
+++ /dev/null
@@ -1,72 +0,0 @@
-from youtube_transcript_api import YouTubeTranscriptApi
-from youtube_transcript_api.formatters import TextFormatter
-from transformers import BartForConditionalGeneration, BartTokenizer
-from sumy.parsers.plaintext import PlaintextParser
-from sumy.nlp.tokenizers import Tokenizer
-from sumy.summarizers.text_rank import TextRankSummarizer
-import torch
-
-def get_vidid(url):
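- # strip the URL prefix so only the bare YouTube video id remains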
- if "youtu.be" in url:
- url=url.replace("https://youtu.be/","")
-
- else:
- url=url.replace("https://www.youtube.com/watch?v=", '')
-
-
- return url
-
-
-def vid_transcript(video):
-
- transcript = YouTubeTranscriptApi.get_transcript(video)
- formatter = TextFormatter()
- text_formatted = formatter.format_transcript(transcript)
- with open('plaintext.txt', 'w', encoding='utf-8') as file:
- file.write(text_formatted)
-
-
-
-def ext_summarizer(path):
-
- language = "english"
- word_limit = 1500
-
- with open(path, "r", encoding="utf-8") as file:
- text = file.read()
-
-
- summarizer = TextRankSummarizer()
- parser = PlaintextParser.from_string(text, Tokenizer(language))
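- # note: sumy's summarizer treats this limit as a number of sentences, not words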
- summary = summarizer(parser.document, word_limit)
-
- summary_text = " ".join(str(sentence) for sentence in summary)
-
- return summary_text
-
-
-def abs_summarizer(t, max_length):
-
-
- tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn')
- model = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn')
-
- text_input_ids = tokenizer.batch_encode_plus([t], return_tensors='pt', max_length=max_length, truncation=True)['input_ids']
- summary_ids = model.generate(text_input_ids, num_beams=10, max_length=max_length, min_length=30)
- summary_txt = tokenizer.decode(summary_ids.squeeze(), skip_special_tokens=True)
- return summary_txt
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/Saturdays/mamamIA/app.py b/spaces/Saturdays/mamamIA/app.py
deleted file mode 100644
index 6f59145a90784e9bba02ad37493c80172ea56efa..0000000000000000000000000000000000000000
--- a/spaces/Saturdays/mamamIA/app.py
+++ /dev/null
@@ -1,141 +0,0 @@
-from numpy import dtype
-import streamlit as st
-import pandas as pd
-from sklearn.preprocessing import StandardScaler
-import numpy as np
-import joblib as jl
-
-
-# DEFAULT VALUES THAT INDICATE NON-CANCEROUS CELLS
-# radius_mean 14.12
-# texture_mean 19.28
-# perimeter_mean 91.96
-# area_mean 551.17
-# compactness_mean 0.0092
-# concavity_mean 0.061
-# concave_points_mean 0.033
-# area_se 24.5
-# radius_worst 14.97
-# texture_worst 25.41
-# perimeter_worst 97.6
-# area_worst 686.5
-# smoothness_worst 0.1313
-# compactness_worst 0.20
-# concavity_worst 0.22
-# concave points_worst 0.09
-
-
-col=['radius_mean', 'texture_mean', 'perimeter_mean',
- 'area_mean', 'compactness_mean', 'concavity_mean',
- 'concave points_mean', 'area_se', 'radius_worst', 'texture_worst',
- 'perimeter_worst', 'area_worst', 'smoothness_worst',
- 'compactness_worst', 'concavity_worst', 'concave points_worst']
-
-
-modnames=['mlp_final.pkl','svm_final.pkl','lr_final.pkl']
-
-#@st.cache
-def getScaler():
- # Load the training dataset so the values entered in the form can be scaled the same way
- print ("cargando dataset")
- data=pd.read_csv('https://raw.githubusercontent.com/gitmecalvon/mamamIA/main/resources/data/cleaned/train_web.csv',sep=';')
- print("dataset cargado")
- scaler = StandardScaler()
- scaler.fit(data)
- return scaler
-
-
-# load the models so they can also be offered from a sidebar if time allows
-def cargaModelos (indice):
- print('Preparando el guardado de Modelos ' )
- modelo=jl.load(modnames[indice])
- return modelo
-
-def interpreta (prediccion):
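- # map the model's 0/1 prediction to a human-readable (Spanish) diagnosis message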
- respuesta ="Los datos introducidos pronostican que son células de tipo "
- if prediccion ==1:
- respuesta= respuesta + "Maligno"
- else:
- respuesta= respuesta + "BENIGNO"
- return respuesta
-
-
-def contruyeFormulario():
-
- # st.set_page_config(layout="wide")
-
- st.title("Mama mIA")
- st.markdown('',unsafe_allow_html=True)
- html_temp = """
-
Algoritmo de ayuda a la predicción diagnóstica del Cáncer de mama
-
"""
- st.markdown(html_temp, unsafe_allow_html = True)
-
- st.subheader("Por favor introduzca las medidas de la muestra")
- form = st.form(key="formulario")
- # col1, col2 = form.columns(2) # attempt at two columns without resorting to html
- # with col1:
- radius_mean = form.number_input( label="Radio Promedio", min_value=0.00000, max_value=20.0,value=13.54, step=0.0001,format="%4f")
- texture_mean = form.number_input(label="Textura Promedio", min_value=0.00000, max_value=36.0,value=14.36, step=0.0001,format="%4f")
- perimeter_mean = form.number_input(label="Perímertro Promedio", min_value=0.00000, max_value=150.0,value=87.46, step=0.0001,format="%4f")
- area_mean = form.number_input(label="Área Promedio", min_value=0.00000, max_value=1600.0,value=566.3, step=0.0001,format="%4f")
- compactness_mean = form.number_input(label="Promedio de Compactabilidad", min_value=0.00000, max_value=1.0,value=0.08129, step=0.0001,format="%5f")
- concavity_mean = form.number_input(label="Promedio de Concavidad", min_value=0.00000, max_value=1.0,value=0.06664, step=0.0001,format="%5f")
- concave_points_mean = form.number_input(label="Puntos Cóncavos promedio", min_value=0.00000, max_value=1.0,value=0.04781, step=0.0001,format="%4f")
- area_se = form.number_input(label="Area Error Estandar", min_value=0.00000, max_value=150.0,value=23.56, step=0.0001,format="%4f")
- # with col2:
- radius_worst = form.number_input(label="Radio worst ", min_value=0.00000, max_value=30.0,value=15.11, step=0.0001,format="%4f")
- texture_worst= form.number_input(label="Textura worsk", min_value=0.00000, max_value=70.0,value=19.26, step=0.0001,format="%4f")
- perimeter_worst = form.number_input(label="Perimetro worst", min_value=0.00000, max_value=99.70,value=0.0092, step=0.0001,format="%4f")
- area_worst = form.number_input(label="Area ", min_value=0.00000, max_value=800.0,value=711.2, step=0.0001,format="%4f")
- smoothness_worst = form.number_input(label="Suavidad worst", min_value=0.00000, max_value=1.0,value=0.144, step=0.0001,format="%4f")
- compactness_worst = form.number_input(label="Compactabilidad worst", min_value=0.00000, max_value=2.0,value=0.1773, step=0.0001,format="%4f")
- concavity_worst = form.number_input(label="Concavidad worst", min_value=0.00000, max_value=2.0,value=0.2390, step=0.0001,format="%4f")
- concavepoints_worst = form.number_input(label="Puntos cóncavos worst", min_value=0.00000, max_value=2.0,value=0.1288, step=0.0001,format="%4f")
-
- submit = form.form_submit_button(label="Predicción")
-
- if submit:
- # Scale the form data with the fitted scaler
- scaler=getScaler()
- nbnormaliz=scaler.transform ([[radius_mean, texture_mean, perimeter_mean ,area_mean , compactness_mean , concavity_mean ,
- concave_points_mean , area_se , radius_worst , texture_worst ,perimeter_worst , area_worst , smoothness_worst ,
- compactness_worst , concavity_worst , concavepoints_worst ]])
-
- # Load the trained model
- print ("cargando modelo")
- print (modnames[2])
- algoritmo=cargaModelos(2)
-
- # Run the prediction
-
- print ("Preparando la prediccion...")
- prediccion=algoritmo.predict (nbnormaliz)
- print (prediccion)
- st.write ("")
- st.write (interpreta (prediccion))
-
-
-def main():
-
- contruyeFormulario()
-
-if __name__ == '__main__':
- main()
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/spaces/SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net/app.py b/spaces/SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net/app.py
deleted file mode 100644
index 88dbdabc5ea2a00cedc87e8901a39407e0aa4442..0000000000000000000000000000000000000000
--- a/spaces/SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net/app.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import streamlit as st
-
-import tensorflow as tf
-from PIL import Image
-import numpy as np
-import cv2
-from huggingface_hub import from_pretrained_keras
-
-
-try:
- model=from_pretrained_keras("SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net")
-except:
- model=tf.keras.models.load_model("dental_xray_seg.h5")
- pass
-
-st.header("Segmentation of Teeth in Panoramic X-ray Image Using UNet")
-
-examples=["107.png","108.png","109.png"]
-link='Check Out Our Github Repo ! [link](https://github.com/SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net)'
-st.markdown(link,unsafe_allow_html=True)
-
-
-def load_image(image_file):
- img = Image.open(image_file)
- return img
-
-def convert_one_channel(img):
- # some images have 3 channels even though they are grayscale, so collapse them to a single channel
- if len(img.shape)>2:
- img= cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
- return img
- else:
- return img
-
-def convert_rgb(img):
- # the inverse case: expand single-channel images to RGB so coloured contours can be drawn on them
- if len(img.shape)==2:
- img= cv2.cvtColor(img,cv2.COLOR_GRAY2RGB)
- return img
- else:
- return img
-
-
-st.subheader("Upload Dental Panoramic X-ray Image Image")
-image_file = st.file_uploader("Upload Images", type=["png","jpg","jpeg"])
-
-
-col1, col2, col3 = st.columns(3)
-with col1:
- ex=load_image(examples[0])
- st.image(ex,width=200)
- if st.button('Example 1'):
- image_file=examples[0]
-
-with col2:
- ex1=load_image(examples[1])
- st.image(ex1,width=200)
- if st.button('Example 2'):
- image_file=examples[1]
-
-
-with col3:
- ex2=load_image(examples[2])
- st.image(ex2,width=200)
- if st.button('Example 3'):
- image_file=examples[2]
-
-
-if image_file is not None:
-
- img=load_image(image_file)
-
- st.text("Making A Prediction ....")
- st.image(img,width=850)
-
- img=np.asarray(img)
-
- img_cv=convert_one_channel(img)
- img_cv=cv2.resize(img_cv,(512,512), interpolation=cv2.INTER_LANCZOS4)
- img_cv=np.float32(img_cv/255)
-
- img_cv=np.reshape(img_cv,(1,512,512,1))
- prediction=model.predict(img_cv)
- predicted=prediction[0]
- predicted = cv2.resize(predicted, (img.shape[1],img.shape[0]), interpolation=cv2.INTER_LANCZOS4)
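- # binarise the probability map, clean it up with morphological open/close, then draw the tooth contours on the input image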
- mask=np.uint8(predicted*255)#
- _, mask = cv2.threshold(mask, thresh=0, maxval=255, type=cv2.THRESH_BINARY+cv2.THRESH_OTSU)
- kernel =( np.ones((5,5), dtype=np.float32))
- mask=cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel,iterations=1 )
- mask=cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel,iterations=1 )
- cnts,hieararch=cv2.findContours(mask,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
- output = cv2.drawContours(convert_rgb(img), cnts, -1, (255, 0, 0) , 3)
-
-
- if output is not None :
- st.subheader("Predicted Image")
- st.write(output.shape)
- st.image(output,width=850)
-
- st.text("DONE ! ....")
diff --git a/spaces/Shad0ws/Chat-with-Files/embeddings.py b/spaces/Shad0ws/Chat-with-Files/embeddings.py
deleted file mode 100644
index d7596d473dd2539e182058296e1f8844c0a37a22..0000000000000000000000000000000000000000
--- a/spaces/Shad0ws/Chat-with-Files/embeddings.py
+++ /dev/null
@@ -1,115 +0,0 @@
-"""Wrapper around OpenAI embedding models."""
-from typing import Any, Dict, List, Optional
-
-from pydantic import BaseModel, Extra, root_validator
-
-from langchain.embeddings.base import Embeddings
-from langchain.utils import get_from_dict_or_env
-
-from tenacity import (
- retry,
- retry_if_exception_type,
- stop_after_attempt,
- wait_exponential,
-)
-from openai.error import Timeout, APIError, APIConnectionError, RateLimitError
-
-
-class OpenAIEmbeddings(BaseModel, Embeddings):
- """Wrapper around OpenAI embedding models.
- To use, you should have the ``openai`` python package installed, and the
- environment variable ``OPENAI_API_KEY`` set with your API key or pass it
- as a named parameter to the constructor.
- Example:
- .. code-block:: python
- from langchain.embeddings import OpenAIEmbeddings
- openai = OpenAIEmbeddings(openai_api_key="my-api-key")
- """
-
- client: Any #: :meta private:
- document_model_name: str = "text-embedding-ada-002"
- query_model_name: str = "text-embedding-ada-002"
- openai_api_key: Optional[str] = None
-
- class Config:
- """Configuration for this pydantic object."""
-
- extra = Extra.forbid
-
- # TODO: deprecate this
- @root_validator(pre=True, allow_reuse=True)
- def get_model_names(cls, values: Dict) -> Dict:
- """Get model names from just old model name."""
- if "model_name" in values:
- if "document_model_name" in values:
- raise ValueError(
- "Both `model_name` and `document_model_name` were provided, "
- "but only one should be."
- )
- if "query_model_name" in values:
- raise ValueError(
- "Both `model_name` and `query_model_name` were provided, "
- "but only one should be."
- )
- model_name = values.pop("model_name")
- values["document_model_name"] = f"text-search-{model_name}-doc-001"
- values["query_model_name"] = f"text-search-{model_name}-query-001"
- return values
-
- @root_validator(allow_reuse=True)
- def validate_environment(cls, values: Dict) -> Dict:
- """Validate that api key and python package exists in environment."""
- openai_api_key = get_from_dict_or_env(
- values, "openai_api_key", "OPENAI_API_KEY"
- )
- try:
- import openai
-
- openai.api_key = openai_api_key
- values["client"] = openai.Embedding
- except ImportError:
- raise ValueError(
- "Could not import openai python package. "
- "Please it install it with `pip install openai`."
- )
- return values
-
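- # retry transient OpenAI failures (timeouts, API/connection errors, rate limits) with exponential back-off, up to 100 attempts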
- @retry(
- reraise=True,
- stop=stop_after_attempt(100),
- wait=wait_exponential(multiplier=1, min=10, max=60),
- retry=(
- retry_if_exception_type(Timeout)
- | retry_if_exception_type(APIError)
- | retry_if_exception_type(APIConnectionError)
- | retry_if_exception_type(RateLimitError)
- ),
- )
- def _embedding_func(self, text: str, *, engine: str) -> List[float]:
- """Call out to OpenAI's embedding endpoint with exponential backoff."""
- # replace newlines, which can negatively affect performance.
- text = text.replace("\n", " ")
- return self.client.create(input=[text], engine=engine)["data"][0]["embedding"]
-
- def embed_documents(self, texts: List[str]) -> List[List[float]]:
- """Call out to OpenAI's embedding endpoint for embedding search docs.
- Args:
- texts: The list of texts to embed.
- Returns:
- List of embeddings, one for each text.
- """
- responses = [
- self._embedding_func(text, engine=self.document_model_name)
- for text in texts
- ]
- return responses
-
- def embed_query(self, text: str) -> List[float]:
- """Call out to OpenAI's embedding endpoint for embedding query text.
- Args:
- text: The text to embed.
- Returns:
- Embeddings for the text.
- """
- embedding = self._embedding_func(text, engine=self.query_model_name)
- return embedding
\ No newline at end of file
diff --git a/spaces/Shad0ws/imagetomusic/README.md b/spaces/Shad0ws/imagetomusic/README.md
deleted file mode 100644
index abcacaf53e12be7ab17decb462f397c10688cfc1..0000000000000000000000000000000000000000
--- a/spaces/Shad0ws/imagetomusic/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Img to Music Video
-emoji: ⚡
-colorFrom: red
-colorTo: green
-sdk: gradio
-sdk_version: 3.10.1
-app_file: app.py
-pinned: false
-license: unknown
-duplicated_from: doevent/msk
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/SoUmNerd/RemoteMojo/Dockerfile b/spaces/SoUmNerd/RemoteMojo/Dockerfile
deleted file mode 100644
index 9c4dda9e42b468a004526e94e94412da31f95c7f..0000000000000000000000000000000000000000
--- a/spaces/SoUmNerd/RemoteMojo/Dockerfile
+++ /dev/null
@@ -1,12 +0,0 @@
-FROM ubuntu:latest
-
-RUN apt-get update
-RUN apt-get install -y curl && apt-get install -y python3 && apt-get install -y python3-pip
-RUN pip install fastapi uvicorn
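-# Install the Modular CLI and the Mojo SDK; the FastAPI app is then served by uvicorn on port 7860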
-RUN curl https://get.modular.com | \
-MODULAR_AUTH=mut_e87f7861fb9a4d4aa311afb0491b0398 sh -
-
-RUN modular install mojo
-
-COPY . .
-CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"]
\ No newline at end of file
diff --git a/spaces/Soumen/Text-Summarization-and-NLP-tasks/README.md b/spaces/Soumen/Text-Summarization-and-NLP-tasks/README.md
deleted file mode 100644
index a0b6af331bfda7d9b8d861c3c6743f74a5d33aa8..0000000000000000000000000000000000000000
--- a/spaces/Soumen/Text-Summarization-and-NLP-tasks/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Text Summarization And NLP Tasks
-emoji: 🏢
-colorFrom: purple
-colorTo: red
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
-license: bsd
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/SoundreameR/craiyon-exploration/README.md b/spaces/SoundreameR/craiyon-exploration/README.md
deleted file mode 100644
index 5342f9537e7fdaff8762e3f8b2c0e23f6886452a..0000000000000000000000000000000000000000
--- a/spaces/SoundreameR/craiyon-exploration/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Craiyon Exploration
-emoji: 🏃
-colorFrom: green
-colorTo: purple
-sdk: static
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/StephanST/WALDOonline/run_local_onnx_largeinput_tiled_process.py b/spaces/StephanST/WALDOonline/run_local_onnx_largeinput_tiled_process.py
deleted file mode 100644
index 18772d9639e9d0dd34bceff6cb400fc6cc1e35f2..0000000000000000000000000000000000000000
--- a/spaces/StephanST/WALDOonline/run_local_onnx_largeinput_tiled_process.py
+++ /dev/null
@@ -1,196 +0,0 @@
-import math
-import cv2
-import time
-import requests
-import random
-import numpy as np
-import onnxruntime as ort
-from PIL import Image
-from pathlib import Path
-from collections import OrderedDict,namedtuple
-import re
-
-
-def get_resolution_from_model_path(model_path):
- resolution = re.search(r"(\d+)px", model_path)
- if resolution:
- return int(resolution.group(1))
- return None
-
-
-
-
-def letterbox(im, new_shape=(960, 960), color=(114, 114, 114), auto=True, scaleup=True, stride=32):
- # Resize and pad image while meeting stride-multiple constraints
- shape = im.shape[:2] # current shape [height, width]
- if isinstance(new_shape, int):
- new_shape = (new_shape, new_shape)
-
- # Scale ratio (new / old)
- r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
- if not scaleup: # only scale down, do not scale up (for better val mAP)
- r = min(r, 1.0)
-
- # Compute padding
- new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
- dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding
-
- if auto: # minimum rectangle
- dw, dh = np.mod(dw, stride), np.mod(dh, stride) # wh padding
-
- dw /= 2 # divide padding into 2 sides
- dh /= 2
-
- if shape[::-1] != new_unpad: # resize
- im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)
- top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
- left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
- im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border
- return im, r, (dw, dh)
-
-def split_image(image, tile_size=(960, 960), padding=(0, 0)):
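- # split the padded image into tiles (each extended by the padding so neighbouring tiles overlap) for independent inference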
- height, width, _ = image.shape
- tile_height, tile_width = tile_size
- pad_height, pad_width = padding
-
- # Calculate the number of tiles needed in each dimension
- num_tiles_x = math.ceil(width / tile_width)
- num_tiles_y = math.ceil(height / tile_height)
-
- # Pad the image to ensure it's divisible by the tile size
- padded_image = cv2.copyMakeBorder(
- image,
- 0,
- tile_height * num_tiles_y - height + pad_height * 2,
- 0,
- tile_width * num_tiles_x - width + pad_width * 2,
- cv2.BORDER_CONSTANT,
- value=(114, 114, 114),
- )
-
- # Split the image into tiles
- tiles = []
- for y in range(num_tiles_y):
- for x in range(num_tiles_x):
- tile = padded_image[
- y * tile_height : (y + 1) * tile_height + pad_height * 2,
- x * tile_width : (x + 1) * tile_width + pad_width * 2,
- :,
- ]
- tiles.append(((x, y), tile))
-
- return tiles, padded_image.shape[:2]
-
-def merge_tiles(tiles, output_shape, padding=(0, 0)):
- tile_height, tile_width = tiles[0][1].shape[:2]
- num_tiles_x = output_shape[1] // (tile_width - 2 * padding[1])
- num_tiles_y = output_shape[0] // (tile_height - 2 * padding[0])
-
- merged_image = np.zeros((*output_shape, 3), dtype=np.uint8)
-
- for (x, y), tile in tiles:
- tile_no_padding = tile[padding[0] : -padding[0], padding[1] : -padding[1], :]
- merged_image[
- y * (tile_height - 2 * padding[0]) : (y + 1) * (tile_height - 2 * padding[0]),
- x * (tile_width - 2 * padding[1]) : (x + 1) * (tile_width - 2 * padding[1]),
- :,
- ] = tile_no_padding
-
- return merged_image
-
-
-def process_large_image(image, model):
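- # tile the input, run the ONNX detector on every tile, draw the detections, then stitch the tiles back together and tally per-class counts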
-
-
- # set cuda = true if you have an NVIDIA GPU
- cuda = False
-
- w = model
-
- if "ppp" in model:
- names = ['solarpanels', 'pool']
- else:
- names = ['car', 'van', 'truck', 'building', 'human', 'gastank', 'digger', 'container', 'bus', 'pylon', 'boat', 'bike']
-
-
- colors = {name:[random.randint(0, 255) for _ in range(3)] for i,name in enumerate(names)}
-
- img = image
-
- providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] if cuda else ['CPUExecutionProvider']
- session = ort.InferenceSession(w, providers=providers)
-
- outname = [i.name for i in session.get_outputs()]
- outname
-
- inname = [i.name for i in session.get_inputs()]
- inname
-
- # Load the image and split it into tiles
- resolution = get_resolution_from_model_path(model)
- if resolution is None:
- print("Warning: Model resolution not found in the model path. Defaulting to 960px.")
- resolution = 960
- tile_size = (resolution, resolution)
- padding = (32, 32)
- tiles, padded_shape = split_image(image, tile_size=tile_size, padding=padding)
-
- # Initialize a dictionary to store the count of each category
- category_count = {name: 0 for name in names}
-
- # Process each tile with the ONNX model
- processed_tiles = []
- for i, (tile_idx, tile) in enumerate(tiles):
- image = tile.copy()
- image, ratio, dwdh = letterbox(image, new_shape=tile_size, auto=False)
- image = image.transpose((2, 0, 1))
- image = np.expand_dims(image, 0)
- image = np.ascontiguousarray(image)
-
- im = image.astype(np.float32)
- im /= 255
-
- inp = {inname[0]: im}
- outputs = session.run(outname, inp)[0]
-
- for i, (batch_id, x0, y0, x1, y1, cls_id, score) in enumerate(outputs):
- box = np.array([x0, y0, x1, y1])
- box -= np.array(dwdh * 2)
- box /= ratio
- box = box.round().astype(np.int32).tolist()
- cls_id = int(cls_id)
- score = round(float(score), 3)
- name = names[cls_id]
- color = colors[name]
- name += ' ' + str(score)
- cv2.rectangle(tile, box[:2], box[2:], color, 2)
- cv2.putText(tile, name, (box[0], box[1] - 2), cv2.FONT_HERSHEY_SIMPLEX, 0.75, [225, 255, 255], thickness=2)
-
- # Update the count for the detected category
- category_count[name.split()[0]] += 1
-
- processed_tiles.append((tile_idx, tile))
-
- # Merge the processed tiles back into the original image
- merged_image = merge_tiles(processed_tiles, padded_shape, padding=padding)
-
- # Remove padding from the merged image to get the final output
- final_image = merged_image[: img.shape[0], : img.shape[1], :]
-
- # Convert color space from RGB to BGR
- final_image_bgr = cv2.cvtColor(final_image, cv2.COLOR_RGB2BGR)
-
- # # Save the final image
- # cv2.imwrite('./Columbus_out.jpg', final_image)
-
- outputs_array = []
- # Print the total count of each class
- print("Total count of each class:")
- for name, count in category_count.items():
- print(f"{name}: {count}")
- outputs_array.append(f"{name}: {count}")
-
- return final_image, str(outputs_array)
-
-
-
diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/utils/samples/__init__.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/utils/samples/__init__.py
deleted file mode 100644
index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000
--- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/utils/samples/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/Sudhansu/07GR-NLP-Seq2Seq-AutoQA/app.py b/spaces/Sudhansu/07GR-NLP-Seq2Seq-AutoQA/app.py
deleted file mode 100644
index c1cd92499cf1c7d2a91b4dc226bf2d558ff67661..0000000000000000000000000000000000000000
--- a/spaces/Sudhansu/07GR-NLP-Seq2Seq-AutoQA/app.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import gradio as gr
-from qasrl_model_pipeline import QASRL_Pipeline
-
-models = ["kleinay/qanom-seq2seq-model-baseline",
- "kleinay/qanom-seq2seq-model-joint"]
-pipelines = {model: QASRL_Pipeline(model) for model in models}
-
-
-description = f"""Using Seq2Seq T5 model which takes a sequence of items and outputs another sequence this model generates Questions and Answers (QA) with focus on Semantic Role Labeling (SRL)"""
-title="Seq2Seq T5 Questions and Answers (QA) with Semantic Role Labeling (SRL)"
-examples = [[models[0], "In March and April the patient <p> had two falls. One was related to asthma, heart palpitations. The second was due to syncope and post covid vaccination dizziness during exercise. The patient is now getting an EKG. Former EKG had shown that there was a bundle branch block. Patient had some uncontrolled immune system reactions like anaphylaxis and shortness of breath.", True, "fall"],
- [models[1], "In March and April the patient had two falls. One was related to asthma, heart palpitations. The second was due to syncope and post covid vaccination dizziness during exercise. The patient is now getting an EKG. Former EKG had shown that there was a bundle branch block. Patient had some uncontrolled immune system reactions <p> like anaphylaxis and shortness of breath.", True, "reactions"],
- [models[0], "In March and April the patient had two falls. One was related <p> to asthma, heart palpitations. The second was due to syncope and post covid vaccination dizziness during exercise. The patient is now getting an EKG. Former EKG had shown that there was a bundle branch block. Patient had some uncontrolled immune system reactions like anaphylaxis and shortness of breath.", True, "relate"],
- [models[1], "In March and April the patient <p> had two falls. One was related to asthma, heart palpitations. The second was due to syncope and post covid vaccination dizziness during exercise. The patient is now getting an EKG. Former EKG had shown that there was a bundle branch block. Patient had some uncontrolled immune system reactions like anaphylaxis and shortness of breath.", False, "fall"]]
-
-input_sent_box_label = "Insert sentence here. Mark the predicate by adding the token '<p>' before it."
-verb_form_inp_placeholder = "e.g. 'decide' for the nominalization 'decision', 'teach' for 'teacher', etc."
-links = """
"
- if predicate_marker not in sentence:
- raise ValueError("You must highlight one word of the sentence as a predicate using preceding '
'.")
-
- if not verb_form:
- if is_nominal:
- raise ValueError("You should provide the verbal form of the nominalization")
-
- toks = sentence.split(" ")
- pred_idx = toks.index(predicate_marker)
- predicate = toks[pred_idx+1]
- verb_form=predicate
- pipeline = pipelines[model_name]
- pipe_out = pipeline([sentence],
- predicate_marker=predicate_marker,
- predicate_type="nominal" if is_nominal else "verbal",
- verb_form=verb_form)[0]
- return pipe_out["QAs"], pipe_out["generated_text"]
-iface = gr.Interface(fn=call,
- inputs=[gr.inputs.Radio(choices=models, default=models[0], label="Model"),
- gr.inputs.Textbox(placeholder=input_sent_box_label, label="Sentence", lines=4),
- gr.inputs.Checkbox(default=True, label="Is Nominalization?"),
- gr.inputs.Textbox(placeholder=verb_form_inp_placeholder, label="Verbal form (for nominalizations)", default='')],
- outputs=[gr.outputs.JSON(label="Model Output - QASRL"), gr.outputs.Textbox(label="Raw output sequence")],
- title=title,
- description=description,
- article=links,
- examples=examples )
-
-iface.launch()
\ No newline at end of file
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/computation/numpy_backend.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/computation/numpy_backend.py
deleted file mode 100644
index 30d50cc0174859bde97552042f9154b9e68d538b..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/computation/numpy_backend.py
+++ /dev/null
@@ -1,275 +0,0 @@
-import warnings
-from typing import Any, List, Optional, Tuple
-
-import numpy as np
-
-from docarray.computation import AbstractComputationalBackend
-from docarray.computation.abstract_numpy_based_backend import AbstractNumpyBasedBackend
-
-
-def _expand_if_single_axis(*matrices: np.ndarray) -> List[np.ndarray]:
- """Expands arrays that only have one axis, at dim 0.
- This ensures that all outputs can be treated as matrices, not vectors.
-
- :param matrices: Matrices to be expanded
- :return: List of the input matrices,
- where single axis matrices are expanded at dim 0.
- """
- expanded = []
- for m in matrices:
- if len(m.shape) == 1:
- expanded.append(np.expand_dims(m, axis=0))
- else:
- expanded.append(m)
- return expanded
-
-
-def _expand_if_scalar(arr: np.ndarray) -> np.ndarray:
- if len(arr.shape) == 0: # avoid scalar output
- arr = np.expand_dims(arr, axis=0)
- return arr
-
-
-def identity(array: np.ndarray) -> np.ndarray:
- return array
-
-
-class NumpyCompBackend(AbstractNumpyBasedBackend):
- """
- Computational backend for Numpy.
- """
-
- _module = np
- _cast_output = identity
- _get_tensor = identity
-
- @classmethod
- def to_device(cls, tensor: 'np.ndarray', device: str) -> 'np.ndarray':
- """Move the tensor to the specified device."""
- raise NotImplementedError('Numpy does not support devices (GPU).')
-
- @classmethod
- def device(cls, tensor: 'np.ndarray') -> Optional[str]:
- """Return device on which the tensor is allocated."""
- return None
-
- @classmethod
- def to_numpy(cls, array: 'np.ndarray') -> 'np.ndarray':
- return array
-
- @classmethod
- def none_value(cls) -> Any:
- """Provide a compatible value that represents None in numpy."""
- return None
-
- @classmethod
- def detach(cls, tensor: 'np.ndarray') -> 'np.ndarray':
- """
- Returns the tensor detached from its current graph.
-
- :param tensor: tensor to be detached
- :return: a detached tensor with the same data.
- """
- return tensor
-
- @classmethod
- def dtype(cls, tensor: 'np.ndarray') -> np.dtype:
- """Get the data type of the tensor."""
- return tensor.dtype
-
- @classmethod
- def minmax_normalize(
- cls,
- tensor: 'np.ndarray',
- t_range: Tuple = (0, 1),
- x_range: Optional[Tuple] = None,
- eps: float = 1e-7,
- ) -> 'np.ndarray':
- """
- Normalize values in `tensor` into `t_range`.
-
- `tensor` can be a 1D array or a 2D array. When `tensor` is a 2D array, then
- normalization is row-based.
-
- !!! note
-
- - with `t_range=(0, 1)` will normalize the min-value of data to 0, max to 1;
- - with `t_range=(1, 0)` will normalize the min-value of data to 1, max value
- of the data to 0.
-
- :param tensor: the data to be normalized
- :param t_range: a tuple representing the target range.
- :param x_range: a tuple representing the range of the input tensor.
- :param eps: a small jitter to avoid divide by zero
- :return: normalized data in `t_range`
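-
- Example (illustrative): minmax_normalize(np.array([0., 5., 10.]), t_range=(0, 1)) gives approximately [0., 0.5, 1.]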
- """
- a, b = t_range
-
- min_d = x_range[0] if x_range else np.min(tensor, axis=-1, keepdims=True)
- max_d = x_range[1] if x_range else np.max(tensor, axis=-1, keepdims=True)
- r = (b - a) * (tensor - min_d) / (max_d - min_d + eps) + a
-
- return np.clip(r, *((a, b) if a < b else (b, a)))
-
- class Retrieval(AbstractComputationalBackend.Retrieval[np.ndarray]):
- """
- Abstract class for retrieval and ranking functionalities
- """
-
- @staticmethod
- def top_k(
- values: 'np.ndarray',
- k: int,
- descending: bool = False,
- device: Optional[str] = None,
- ) -> Tuple['np.ndarray', 'np.ndarray']:
- """
- Retrieves the top k smallest values in `values`,
- and returns them alongside their indices in the input `values`.
- Can also be used to retrieve the top k largest values,
- by setting the `descending` flag.
-
- :param values: NumPy array of values to rank.
- Should be of shape (n_queries, n_values_per_query).
- Inputs of shape (n_values_per_query,) will be expanded
- to (1, n_values_per_query).
- :param k: number of values to retrieve
- :param descending: retrieve largest values instead of smallest values
- :param device: Not supported for this backend
- :return: Tuple containing the retrieved values, and their indices.
- Both are of shape (n_queries, k).
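-
- Example (illustrative): top_k(np.array([0.3, 0.1, 0.2]), k=2) returns (array([[0.1, 0.2]]), array([[1, 2]]))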
- """
- if device is not None:
- warnings.warn('`device` is not supported for numpy operations')
-
- if len(values.shape) == 1:
- values = np.expand_dims(values, axis=0)
-
- if descending:
- values = -values
-
- if k >= values.shape[1]:
- idx = values.argsort(axis=1)[:, :k]
- values = np.take_along_axis(values, idx, axis=1)
- else:
- idx_ps = values.argpartition(kth=k, axis=1)[:, :k]
- values = np.take_along_axis(values, idx_ps, axis=1)
- idx_fs = values.argsort(axis=1)
- idx = np.take_along_axis(idx_ps, idx_fs, axis=1)
- values = np.take_along_axis(values, idx_fs, axis=1)
-
- if descending:
- values = -values
-
- return values, idx
-
- class Metrics(AbstractComputationalBackend.Metrics[np.ndarray]):
- """
- Abstract base class for metrics (distances and similarities).
- """
-
- @staticmethod
- def cosine_sim(
- x_mat: np.ndarray,
- y_mat: np.ndarray,
- eps: float = 1e-7,
- device: Optional[str] = None,
- ) -> np.ndarray:
- """Pairwise cosine similarities between all vectors in x_mat and y_mat.
-
- :param x_mat: np.ndarray of shape (n_vectors, n_dim), where n_vectors is
- the number of vectors and n_dim is the number of dimensions of each
- example.
- :param y_mat: np.ndarray of shape (n_vectors, n_dim), where n_vectors is
- the number of vectors and n_dim is the number of dimensions of each
- example.
- :param eps: a small jitter to avoid divide by zero
- :param device: Not supported for this backend
- :return: np.ndarray of shape (n_vectors, n_vectors) containing all
- pairwise cosine similarities.
- The index [i_x, i_y] contains the cosine similarity between
- x_mat[i_x] and y_mat[i_y].
- """
- if device is not None:
- warnings.warn('`device` is not supported for numpy operations')
-
- x_mat, y_mat = _expand_if_single_axis(x_mat, y_mat)
-
- sims = np.clip(
- (np.dot(x_mat, y_mat.T) + eps)
- / (
- np.outer(
- np.linalg.norm(x_mat, axis=1), np.linalg.norm(y_mat, axis=1)
- )
- + eps
- ),
- -1,
- 1,
- ).squeeze()
- return _expand_if_scalar(sims)
-
- @classmethod
- def euclidean_dist(
- cls, x_mat: np.ndarray, y_mat: np.ndarray, device: Optional[str] = None
- ) -> np.ndarray:
- """Pairwise Euclidian distances between all vectors in x_mat and y_mat.
-
- :param x_mat: np.ndarray of shape (n_vectors, n_dim), where n_vectors is
- the number of vectors and n_dim is the number of dimensions of each
- example.
- :param y_mat: np.ndarray of shape (n_vectors, n_dim), where n_vectors is
- the number of vectors and n_dim is the number of dimensions of each
- example.
- :param device: Not supported for this backend
- :return: np.ndarray of shape (n_vectors, n_vectors) containing all
- pairwise Euclidean distances.
- The index [i_x, i_y] contains the Euclidean distance between
- x_mat[i_x] and y_mat[i_y].
- """
- if device is not None:
- warnings.warn('`device` is not supported for numpy operations')
-
- x_mat, y_mat = _expand_if_single_axis(x_mat, y_mat)
-
- return _expand_if_scalar(
- np.sqrt(cls.sqeuclidean_dist(x_mat, y_mat)).squeeze()
- )
-
- @staticmethod
- def sqeuclidean_dist(
- x_mat: np.ndarray,
- y_mat: np.ndarray,
- device: Optional[str] = None,
- ) -> np.ndarray:
- """Pairwise Squared Euclidian distances between all vectors in
- x_mat and y_mat.
-
- :param x_mat: np.ndarray of shape (n_vectors, n_dim), where n_vectors is
- the number of vectors and n_dim is the number of dimensions of each
- example.
- :param y_mat: np.ndarray of shape (n_vectors, n_dim), where n_vectors is
- the number of vectors and n_dim is the number of dimensions of each
- example.
- :param device: Not supported for this backend
- :return: np.ndarray of shape (n_vectors, n_vectors) containing all
- pairwise squared Euclidean distances.
- The index [i_x, i_y] contains the squared Euclidean distance between
- x_mat[i_x] and y_mat[i_y].
- """
- eps: float = 1e-7 # avoid problems with numerical inaccuracies
-
- if device is not None:
- warnings.warn('`device` is not supported for numpy operations')
-
- x_mat, y_mat = _expand_if_single_axis(x_mat, y_mat)
-
- dists = (
- np.sum(y_mat**2, axis=1)
- + np.sum(x_mat**2, axis=1)[:, np.newaxis]
- - 2 * np.dot(x_mat, y_mat.T)
- ).squeeze()
-
- # remove numerical artifacts
- dists = np.where(np.logical_and(dists < 0, dists > -eps), 0, dists)
- return _expand_if_scalar(dists)
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/tensor/ndarray.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/tensor/ndarray.py
deleted file mode 100644
index 18e84050a25554eca73918e01c2ba63c182cf25e..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/tensor/ndarray.py
+++ /dev/null
@@ -1,202 +0,0 @@
-from typing import TYPE_CHECKING, Any, Generic, List, Tuple, Type, TypeVar, Union, cast
-
-import numpy as np
-
-from docarray.typing.proto_register import _register_proto
-from docarray.typing.tensor.abstract_tensor import AbstractTensor
-
-if TYPE_CHECKING:
- from pydantic import BaseConfig
- from pydantic.fields import ModelField
-
- from docarray.computation.numpy_backend import NumpyCompBackend
- from docarray.proto import NdArrayProto
-
-from docarray.base_doc.base_node import BaseNode
-
-T = TypeVar('T', bound='NdArray')
-ShapeT = TypeVar('ShapeT')
-
-tensor_base: type = type(BaseNode)
-
-
-# the mypy error suppression below should not be necessary anymore once the following
-# is released in mypy: https://github.com/python/mypy/pull/14135
-class metaNumpy(AbstractTensor.__parametrized_meta__, tensor_base): # type: ignore
- pass
-
-
-@_register_proto(proto_type_name='ndarray')
-class NdArray(np.ndarray, AbstractTensor, Generic[ShapeT]):
- """
- Subclass of `np.ndarray`, intended for use in a Document.
- This enables (de)serialization from/to protobuf and json, data validation,
- and coersion from compatible types like `torch.Tensor`.
-
- This type can also be used in a parametrized way, specifying the shape of the array.
-
- ---
-
- ```python
- from docarray import BaseDoc
- from docarray.typing import NdArray
- import numpy as np
-
-
- class MyDoc(BaseDoc):
- arr: NdArray
- image_arr: NdArray[3, 224, 224]
- square_crop: NdArray[3, 'x', 'x']
- random_image: NdArray[3, ...] # first dimension is fixed, can have arbitrary shape
-
-
- # create a document with tensors
- doc = MyDoc(
- arr=np.zeros((128,)),
- image_arr=np.zeros((3, 224, 224)),
- square_crop=np.zeros((3, 64, 64)),
- random_image=np.zeros((3, 128, 256)),
- )
- assert doc.image_arr.shape == (3, 224, 224)
-
- # automatic shape conversion
- doc = MyDoc(
- arr=np.zeros((128,)),
- image_arr=np.zeros((224, 224, 3)), # will reshape to (3, 224, 224)
- square_crop=np.zeros((3, 128, 128)),
- random_image=np.zeros((3, 64, 128)),
- )
- assert doc.image_arr.shape == (3, 224, 224)
-
- # !! The following will raise an error due to shape mismatch !!
- from pydantic import ValidationError
-
- try:
- doc = MyDoc(
- arr=np.zeros((128,)),
- image_arr=np.zeros((224, 224)), # this will fail validation
- square_crop=np.zeros((3, 128, 64)), # this will also fail validation
- random_image=np.zeros((4, 64, 128)), # this will also fail validation
- )
- except ValidationError as e:
- pass
- ```
-
- ---
- """
-
- __parametrized_meta__ = metaNumpy
-
- @classmethod
- def __get_validators__(cls):
- # one or more validators may be yielded which will be called in the
- # order to validate the input, each validator will receive as an input
- # the value returned from the previous validator
- yield cls.validate
-
- @classmethod
- def validate(
- cls: Type[T],
- value: Union[T, np.ndarray, List[Any], Tuple[Any], Any],
- field: 'ModelField',
- config: 'BaseConfig',
- ) -> T:
- if isinstance(value, np.ndarray):
- return cls._docarray_from_native(value)
- elif isinstance(value, NdArray):
- return cast(T, value)
- elif isinstance(value, list) or isinstance(value, tuple):
- try:
- arr_from_list: np.ndarray = np.asarray(value)
- return cls._docarray_from_native(arr_from_list)
- except Exception:
- pass # handled below
- else:
- try:
- arr: np.ndarray = np.ndarray(value)
- return cls._docarray_from_native(arr)
- except Exception:
- pass # handled below
- raise ValueError(f'Expected a numpy.ndarray compatible type, got {type(value)}')
-
- @classmethod
- def _docarray_from_native(cls: Type[T], value: np.ndarray) -> T:
- if cls.__unparametrizedcls__: # This is not None if the tensor is parametrized
- return cast(T, value.view(cls.__unparametrizedcls__))
- return value.view(cls)
-
- def _docarray_to_json_compatible(self) -> np.ndarray:
- """
- Convert `NdArray` into a json compatible object
- :return: a representation of the tensor compatible with orjson
- """
- return self.unwrap()
-
- def unwrap(self) -> np.ndarray:
- """
- Return the original ndarray without any memory copy.
-
- The original view remains intact and is still a Document `NdArray`,
- but the returned object is a pure `np.ndarray`; both objects share
- the same memory layout.
-
- ---
-
- ```python
- from docarray.typing import NdArray
- import numpy as np
-
- t1 = NdArray.validate(np.zeros((3, 224, 224)), None, None)
- # here t1 is a docarray NdArray
- t2 = t1.unwrap()
- # here t2 is a pure np.ndarray but t1 is still a Docarray NdArray
- # But both share the same underlying memory
- ```
-
- ---
-
- :return: a `numpy.ndarray`
- """
- return self.view(np.ndarray)
-
- @classmethod
- def from_protobuf(cls: Type[T], pb_msg: 'NdArrayProto') -> 'T':
- """
- Read ndarray from a proto msg
- :param pb_msg:
- :return: a numpy array
- """
- source = pb_msg.dense
- if source.buffer:
- x = np.frombuffer(bytearray(source.buffer), dtype=source.dtype)
- return cls._docarray_from_native(x.reshape(source.shape))
- elif len(source.shape) > 0:
- return cls._docarray_from_native(np.zeros(source.shape))
- else:
- raise ValueError(f'proto message {pb_msg} cannot be cast to a NdArray')
-
- def to_protobuf(self) -> 'NdArrayProto':
- """
- Transform self into a NdArrayProto protobuf message
- """
- from docarray.proto import NdArrayProto
-
- nd_proto = NdArrayProto()
-
- nd_proto.dense.buffer = self.tobytes()
- nd_proto.dense.ClearField('shape')
- nd_proto.dense.shape.extend(list(self.shape))
- nd_proto.dense.dtype = self.dtype.str
-
- return nd_proto
-
- @staticmethod
- def get_comp_backend() -> 'NumpyCompBackend':
- """Return the computational backend of the tensor"""
- from docarray.computation.numpy_backend import NumpyCompBackend
-
- return NumpyCompBackend()
-
- def __class_getitem__(cls, item: Any, *args, **kwargs):
- # see here for mypy bug: https://github.com/python/mypy/issues/14123
- return AbstractTensor.__class_getitem__.__func__(cls, item) # type: ignore
diff --git a/spaces/Swan608/Spaceair/app.py b/spaces/Swan608/Spaceair/app.py
deleted file mode 100644
index 213184a4e2be5569151d1f5af573676a7a1d58ea..0000000000000000000000000000000000000000
--- a/spaces/Swan608/Spaceair/app.py
+++ /dev/null
@@ -1,46 +0,0 @@
-import gradio as gr
-import numpy as np
-import keras
-import os
-
-model = keras.models.load_model("mnist_model.h5")
-
-def rgb2gray(rgb):
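-    # Luminance-weighted grayscale, inverted (255 - value), presumably so digits
-    # end up bright on a dark background to match the MNIST convention.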
-
- return [255] - np.dot(rgb[..., :3], [0.2989, 0.5870, 0.1140])
-
-def number_classifier(target):
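-    # Grayscale and flatten the 28x28 drawing to a 784-length vector, run the
-    # Keras model, and return a {digit: probability} dict for the gr.Label output.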
-
- target = rgb2gray(target).flatten()
-
- inputs = np.array([target])
-
- results = model.predict(inputs)
-
- result_as_dict = {}
-
- for i in range(10):
-
- result_as_dict[str(i)] = float(results[0][i])
-
- return result_as_dict
-
-# def add_two_number(a,b):
-# return str(a+b) + "입니다."
-
-# app = gr.Interface(fn=add_two_number, inputs=["number", "number"], outputs=["text"])
-
-# app.launch()
-
-examples_list = []
-
-for item in os.listdir("examples/"):
- examples_list.append("examples/" + item)
-
-app = gr.Interface(fn=number_classifier,
- inputs=[gr.Image(shape=(28, 28))],
- outputs=[gr.Label(num_top_classes=3)],
- examples=examples_list
- )
-
-app.launch()
\ No newline at end of file
diff --git a/spaces/TRaw/dtet/Dockerfile b/spaces/TRaw/dtet/Dockerfile
deleted file mode 100644
index fb4d04336ede050357a8846aba48ef5c42f13f88..0000000000000000000000000000000000000000
--- a/spaces/TRaw/dtet/Dockerfile
+++ /dev/null
@@ -1,121 +0,0 @@
-ARG MODEL_NAME
-ARG MODEL_PARAMS
-ARG APP_COLOR
-ARG APP_NAME
-
-
-FROM node:19 as chatui-builder
-ARG MODEL_NAME
-ARG MODEL_PARAMS
-ARG APP_COLOR
-ARG APP_NAME
-
-WORKDIR /app
-
-RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
- git gettext && \
- rm -rf /var/lib/apt/lists/*
-
-
-RUN git clone https://github.com/huggingface/chat-ui.git
-
-WORKDIR /app/chat-ui
-
-
-COPY .env.local.template .env.local.template
-
-RUN mkdir defaults
-ADD defaults /defaults
-RUN chmod -R 777 /defaults
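-# Render .env.local from the template: build args take precedence, then the
-# MONGODB_URL secret, then the bundled values under /defaults.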
-RUN --mount=type=secret,id=MONGODB_URL,mode=0444 \
- MODEL_NAME="${MODEL_NAME:="$(cat /defaults/MODEL_NAME)"}" && export MODEL_NAME \
- && MODEL_PARAMS="${MODEL_PARAMS:="$(cat /defaults/MODEL_PARAMS)"}" && export MODEL_PARAMS \
- && APP_COLOR="${APP_COLOR:="$(cat /defaults/APP_COLOR)"}" && export APP_COLOR \
- && APP_NAME="${APP_NAME:="$(cat /defaults/APP_NAME)"}" && export APP_NAME \
- && MONGODB_URL=$(cat /run/secrets/MONGODB_URL > /dev/null | grep '^' || cat /defaults/MONGODB_URL) && export MONGODB_URL && \
- echo "${MONGODB_URL}" && \
- envsubst < ".env.local.template" > ".env.local" \
- && rm .env.local.template
-
-
-
-RUN --mount=type=cache,target=/app/.npm \
- npm set cache /app/.npm && \
- npm ci
-
-RUN npm run build
-
-FROM ghcr.io/huggingface/text-generation-inference:0.9.4
-
-ARG MODEL_NAME
-ARG MODEL_PARAMS
-ARG APP_COLOR
-ARG APP_NAME
-
-ENV TZ=Europe/Paris \
- PORT=3000
-
-
-
-RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
- gnupg \
- curl \
- gettext && \
- rm -rf /var/lib/apt/lists/*
-COPY entrypoint.sh.template entrypoint.sh.template
-
-RUN mkdir defaults
-ADD defaults /defaults
-RUN chmod -R 777 /defaults
-
-RUN --mount=type=secret,id=MONGODB_URL,mode=0444 \
- MODEL_NAME="${MODEL_NAME:="$(cat /defaults/MODEL_NAME)"}" && export MODEL_NAME \
- && MODEL_PARAMS="${MODEL_PARAMS:="$(cat /defaults/MODEL_PARAMS)"}" && export MODEL_PARAMS \
- && APP_COLOR="${APP_COLOR:="$(cat /defaults/APP_COLOR)"}" && export APP_COLOR \
- && APP_NAME="${APP_NAME:="$(cat /defaults/APP_NAME)"}" && export APP_NAME \
- && MONGODB_URL=$(cat /run/secrets/MONGODB_URL > /dev/null | grep '^' || cat /defaults/MONGODB_URL) && export MONGODB_URL && \
- envsubst < "entrypoint.sh.template" > "entrypoint.sh" \
- && rm entrypoint.sh.template
-
-
-RUN curl -fsSL https://pgp.mongodb.com/server-6.0.asc | \
- gpg -o /usr/share/keyrings/mongodb-server-6.0.gpg \
- --dearmor
-
-RUN echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-6.0.gpg ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/6.0 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-6.0.list
-
-RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
- mongodb-org && \
- rm -rf /var/lib/apt/lists/*
-
-RUN mkdir -p /data/db
-RUN chown -R 1000:1000 /data
-
-RUN curl -fsSL https://deb.nodesource.com/setup_19.x | /bin/bash -
-
-RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
- nodejs && \
- rm -rf /var/lib/apt/lists/*
-
-RUN mkdir /app
-RUN chown -R 1000:1000 /app
-
-RUN useradd -m -u 1000 user
-
-# Switch to the "user" user
-USER user
-
-ENV HOME=/home/user \
- PATH=/home/user/.local/bin:$PATH
-
-RUN npm config set prefix /home/user/.local
-RUN npm install -g pm2
-
-COPY --from=chatui-builder --chown=1000 /app/chat-ui/node_modules /app/node_modules
-COPY --from=chatui-builder --chown=1000 /app/chat-ui/package.json /app/package.json
-COPY --from=chatui-builder --chown=1000 /app/chat-ui/build /app/build
-
-ENTRYPOINT ["/bin/bash"]
-CMD ["entrypoint.sh"]
-
-
diff --git a/spaces/TachibanaYoshino/AnimeGANv3/app.py b/spaces/TachibanaYoshino/AnimeGANv3/app.py
deleted file mode 100644
index 677f429041bf7da6a7161135726355e315db4712..0000000000000000000000000000000000000000
--- a/spaces/TachibanaYoshino/AnimeGANv3/app.py
+++ /dev/null
@@ -1,108 +0,0 @@
-import os
-import cv2
-import gradio as gr
-import AnimeGANv3_src
-
-
-os.makedirs('output', exist_ok=True)
-
-
-def inference(img_path, Style, if_face=None):
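-    # Map the dropdown style name to the single-letter code expected by
-    # AnimeGANv3_src.Convert, run the conversion, and save the result under output/.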
- print(img_path, Style, if_face)
- try:
- img = cv2.imread(img_path)
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
- if Style == "AnimeGANv3_Arcane":
- f = "A"
- elif Style == "AnimeGANv3_Trump v1.0":
- f = "T"
- elif Style == "AnimeGANv3_Shinkai":
- f = "S"
- elif Style == "AnimeGANv3_PortraitSketch":
- f = "P"
- elif Style == "AnimeGANv3_Hayao":
- f = "H"
- elif Style == "AnimeGANv3_Disney v1.0":
- f = "D"
- elif Style == "AnimeGANv3_JP_face v1.0":
- f = "J"
- else:
- f = "U"
-
- try:
- det_face = True if if_face=="Yes" else False
- output = AnimeGANv3_src.Convert(img, f, det_face)
- save_path = f"output/out.{img_path.rsplit('.')[-1]}"
- cv2.imwrite(save_path, output[:, :, ::-1])
- return output, save_path
- except RuntimeError as error:
- print('Error', error)
- except Exception as error:
- print('global exception', error)
- return None, None
-
-
-title = "AnimeGANv3: To produce your own animation."
-description = r"""Official online demo for AnimeGANv3. If you like what I'm doing you can tip me on **patreon**.
-It can be used to turn your photos or videos into anime.
-To use it, simply upload your image. It can convert landscape photos to Hayao Miyazaki or Makoto Shinkai style anime, and it also offers 6 styles for converting human faces.
-If AnimeGANv3 is helpful, please help to ⭐ the Github Repo and recommend it to your friends. 😊
-
-"""
-article = r"""
-
-[](https://github.com/TachibanaYoshino/AnimeGANv3)
-
-### 🔥 Demo
-I. Video to anime (Hayao Style)
-
-
-
-
-
-II. Video to anime (USA cartoon + Disney style)
-
-
-----------
-
-## License
-This repo is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, scientific publications. Permission is granted to use the AnimeGANv3 given that you agree to my license terms. Regarding the request for commercial use, please contact us via email to help you obtain the authorization letter.
-
-## Acknowledgement
-* Huggingface UI is referenced from @akhaliq/GFPGAN.
-* The dataset of AnimeGANv3_JP_face v1.0 is from DCTnet and then manually optimized.
-
-## Author
-Xin Chen
-If you have any question, please open an issue on GitHub Repo.
-
-
-
-"""
-gr.Interface(
- inference, [
- gr.inputs.Image(type="filepath", label="Input"),
- gr.Dropdown([
- 'AnimeGANv3_Hayao',
- 'AnimeGANv3_Shinkai',
- 'AnimeGANv3_Arcane',
- 'AnimeGANv3_USA',
- 'AnimeGANv3_Trump v1.0',
- 'AnimeGANv3_Disney v1.0',
- 'AnimeGANv3_PortraitSketch',
- 'AnimeGANv3_JP_face v1.0',
- ],
- type="value",
- value='AnimeGANv3_Hayao',
- label='AnimeGANv3 Style'),
- gr.inputs.Radio(['Yes', 'No'], type="value", default='No', label='Extract face'),
- ], [
- gr.outputs.Image(type="numpy", label="Output (The whole image)"),
- gr.outputs.File(label="Download the output image")
- ],
- title=title,
- description=description,
- article=article,
- allow_flagging="never",
- examples=[['samples/7_out.jpg', 'AnimeGANv3_Arcane', "Yes"], ['samples/15566.jpg', 'AnimeGANv3_USA', "Yes"],['samples/23034.jpg', 'AnimeGANv3_Trump v1.0', "Yes"], ['samples/jp_13.jpg', 'AnimeGANv3_Hayao', "No"],
- ['samples/jp_20.jpg', 'AnimeGANv3_Shinkai', "No"], ['samples/Hamabe Minami.jpg', 'AnimeGANv3_Disney v1.0', "Yes"], ['samples/120.jpg', 'AnimeGANv3_JP_face v1.0', "Yes"], ['samples/52014.jpg', 'AnimeGANv3_PortraitSketch', "Yes"]]).launch(enable_queue=True)
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/manifest.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/manifest.py
deleted file mode 100644
index ca0fe442d9ca499466df9438df16eca405c5f102..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/manifest.py
+++ /dev/null
@@ -1,393 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# Copyright (C) 2012-2013 Python Software Foundation.
-# See LICENSE.txt and CONTRIBUTORS.txt.
-#
-"""
-Class representing the list of files in a distribution.
-
-Equivalent to distutils.filelist, but fixes some problems.
-"""
-import fnmatch
-import logging
-import os
-import re
-import sys
-
-from . import DistlibException
-from .compat import fsdecode
-from .util import convert_path
-
-
-__all__ = ['Manifest']
-
-logger = logging.getLogger(__name__)
-
-# a \ followed by some spaces + EOL
-_COLLAPSE_PATTERN = re.compile('\\\\w*\n', re.M)
-_COMMENTED_LINE = re.compile('#.*?(?=\n)|\n(?=$)', re.M | re.S)
-
-#
-# Due to the different results returned by fnmatch.translate, we need
-# to do slightly different processing for Python 2.7 and 3.2 ... this needed
-# to be brought in for Python 3.6 onwards.
-#
-_PYTHON_VERSION = sys.version_info[:2]
-
-class Manifest(object):
-    """A list of files built by exploring the filesystem and filtered by
- applying various patterns to what we find there.
- """
-
- def __init__(self, base=None):
- """
- Initialise an instance.
-
- :param base: The base directory to explore under.
- """
- self.base = os.path.abspath(os.path.normpath(base or os.getcwd()))
- self.prefix = self.base + os.sep
- self.allfiles = None
- self.files = set()
-
- #
- # Public API
- #
-
- def findall(self):
- """Find all files under the base and set ``allfiles`` to the absolute
- pathnames of files found.
- """
- from stat import S_ISREG, S_ISDIR, S_ISLNK
-
- self.allfiles = allfiles = []
- root = self.base
- stack = [root]
- pop = stack.pop
- push = stack.append
-
- while stack:
- root = pop()
- names = os.listdir(root)
-
- for name in names:
- fullname = os.path.join(root, name)
-
- # Avoid excess stat calls -- just one will do, thank you!
- stat = os.stat(fullname)
- mode = stat.st_mode
- if S_ISREG(mode):
- allfiles.append(fsdecode(fullname))
- elif S_ISDIR(mode) and not S_ISLNK(mode):
- push(fullname)
-
- def add(self, item):
- """
- Add a file to the manifest.
-
- :param item: The pathname to add. This can be relative to the base.
- """
- if not item.startswith(self.prefix):
- item = os.path.join(self.base, item)
- self.files.add(os.path.normpath(item))
-
- def add_many(self, items):
- """
- Add a list of files to the manifest.
-
- :param items: The pathnames to add. These can be relative to the base.
- """
- for item in items:
- self.add(item)
-
- def sorted(self, wantdirs=False):
- """
- Return sorted files in directory order
- """
-
- def add_dir(dirs, d):
- dirs.add(d)
- logger.debug('add_dir added %s', d)
- if d != self.base:
- parent, _ = os.path.split(d)
- assert parent not in ('', '/')
- add_dir(dirs, parent)
-
- result = set(self.files) # make a copy!
- if wantdirs:
- dirs = set()
- for f in result:
- add_dir(dirs, os.path.dirname(f))
- result |= dirs
- return [os.path.join(*path_tuple) for path_tuple in
- sorted(os.path.split(path) for path in result)]
-
- def clear(self):
- """Clear all collected files."""
- self.files = set()
- self.allfiles = []
-
- def process_directive(self, directive):
- """
- Process a directive which either adds some files from ``allfiles`` to
- ``files``, or removes some files from ``files``.
-
- :param directive: The directive to process. This should be in a format
- compatible with distutils ``MANIFEST.in`` files:
-
- http://docs.python.org/distutils/sourcedist.html#commands
- """
- # Parse the line: split it up, make sure the right number of words
- # is there, and return the relevant words. 'action' is always
- # defined: it's the first word of the line. Which of the other
- # three are defined depends on the action; it'll be either
- # patterns, (dir and patterns), or (dirpattern).
- action, patterns, thedir, dirpattern = self._parse_directive(directive)
-
- # OK, now we know that the action is valid and we have the
- # right number of words on the line for that action -- so we
- # can proceed with minimal error-checking.
- if action == 'include':
- for pattern in patterns:
- if not self._include_pattern(pattern, anchor=True):
- logger.warning('no files found matching %r', pattern)
-
- elif action == 'exclude':
- for pattern in patterns:
- found = self._exclude_pattern(pattern, anchor=True)
- #if not found:
- # logger.warning('no previously-included files '
- # 'found matching %r', pattern)
-
- elif action == 'global-include':
- for pattern in patterns:
- if not self._include_pattern(pattern, anchor=False):
- logger.warning('no files found matching %r '
- 'anywhere in distribution', pattern)
-
- elif action == 'global-exclude':
- for pattern in patterns:
- found = self._exclude_pattern(pattern, anchor=False)
- #if not found:
- # logger.warning('no previously-included files '
- # 'matching %r found anywhere in '
- # 'distribution', pattern)
-
- elif action == 'recursive-include':
- for pattern in patterns:
- if not self._include_pattern(pattern, prefix=thedir):
- logger.warning('no files found matching %r '
- 'under directory %r', pattern, thedir)
-
- elif action == 'recursive-exclude':
- for pattern in patterns:
- found = self._exclude_pattern(pattern, prefix=thedir)
- #if not found:
- # logger.warning('no previously-included files '
- # 'matching %r found under directory %r',
- # pattern, thedir)
-
- elif action == 'graft':
- if not self._include_pattern(None, prefix=dirpattern):
- logger.warning('no directories found matching %r',
- dirpattern)
-
- elif action == 'prune':
- if not self._exclude_pattern(None, prefix=dirpattern):
- logger.warning('no previously-included directories found '
- 'matching %r', dirpattern)
- else: # pragma: no cover
- # This should never happen, as it should be caught in
- # _parse_template_line
- raise DistlibException(
- 'invalid action %r' % action)
-
- #
- # Private API
- #
-
- def _parse_directive(self, directive):
- """
- Validate a directive.
- :param directive: The directive to validate.
- :return: A tuple of action, patterns, thedir, dir_patterns
- """
- words = directive.split()
- if len(words) == 1 and words[0] not in ('include', 'exclude',
- 'global-include',
- 'global-exclude',
- 'recursive-include',
- 'recursive-exclude',
- 'graft', 'prune'):
- # no action given, let's use the default 'include'
- words.insert(0, 'include')
-
- action = words[0]
- patterns = thedir = dir_pattern = None
-
- if action in ('include', 'exclude',
- 'global-include', 'global-exclude'):
- if len(words) < 2:
- raise DistlibException(
-                    '%r expects <pattern1> <pattern2> ...' % action)
-
- patterns = [convert_path(word) for word in words[1:]]
-
- elif action in ('recursive-include', 'recursive-exclude'):
- if len(words) < 3:
- raise DistlibException(
-                    '%r expects <dir> <pattern1> <pattern2> ...' % action)
-
- thedir = convert_path(words[1])
- patterns = [convert_path(word) for word in words[2:]]
-
- elif action in ('graft', 'prune'):
- if len(words) != 2:
- raise DistlibException(
-                    '%r expects a single <dir_pattern>' % action)
-
- dir_pattern = convert_path(words[1])
-
- else:
- raise DistlibException('unknown action %r' % action)
-
- return action, patterns, thedir, dir_pattern
-
- def _include_pattern(self, pattern, anchor=True, prefix=None,
- is_regex=False):
- """Select strings (presumably filenames) from 'self.files' that
- match 'pattern', a Unix-style wildcard (glob) pattern.
-
- Patterns are not quite the same as implemented by the 'fnmatch'
- module: '*' and '?' match non-special characters, where "special"
- is platform-dependent: slash on Unix; colon, slash, and backslash on
- DOS/Windows; and colon on Mac OS.
-
- If 'anchor' is true (the default), then the pattern match is more
- stringent: "*.py" will match "foo.py" but not "foo/bar.py". If
- 'anchor' is false, both of these will match.
-
- If 'prefix' is supplied, then only filenames starting with 'prefix'
- (itself a pattern) and ending with 'pattern', with anything in between
- them, will match. 'anchor' is ignored in this case.
-
- If 'is_regex' is true, 'anchor' and 'prefix' are ignored, and
- 'pattern' is assumed to be either a string containing a regex or a
- regex object -- no translation is done, the regex is just compiled
- and used as-is.
-
- Selected strings will be added to self.files.
-
- Return True if files are found.
- """
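-        # For example, with anchor=True '*.py' matches 'setup.py' but not
-        # 'pkg/mod.py'; with anchor=False it matches both; with prefix='pkg'
-        # it matches any '*.py' somewhere under 'pkg'.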
- # XXX docstring lying about what the special chars are?
- found = False
- pattern_re = self._translate_pattern(pattern, anchor, prefix, is_regex)
-
- # delayed loading of allfiles list
- if self.allfiles is None:
- self.findall()
-
- for name in self.allfiles:
- if pattern_re.search(name):
- self.files.add(name)
- found = True
- return found
-
- def _exclude_pattern(self, pattern, anchor=True, prefix=None,
- is_regex=False):
- """Remove strings (presumably filenames) from 'files' that match
- 'pattern'.
-
- Other parameters are the same as for 'include_pattern()', above.
- The list 'self.files' is modified in place. Return True if files are
- found.
-
- This API is public to allow e.g. exclusion of SCM subdirs, e.g. when
- packaging source distributions
- """
- found = False
- pattern_re = self._translate_pattern(pattern, anchor, prefix, is_regex)
- for f in list(self.files):
- if pattern_re.search(f):
- self.files.remove(f)
- found = True
- return found
-
- def _translate_pattern(self, pattern, anchor=True, prefix=None,
- is_regex=False):
- """Translate a shell-like wildcard pattern to a compiled regular
- expression.
-
- Return the compiled regex. If 'is_regex' true,
- then 'pattern' is directly compiled to a regex (if it's a string)
- or just returned as-is (assumes it's a regex object).
- """
- if is_regex:
- if isinstance(pattern, str):
- return re.compile(pattern)
- else:
- return pattern
-
- if _PYTHON_VERSION > (3, 2):
- # ditch start and end characters
- start, _, end = self._glob_to_re('_').partition('_')
-
- if pattern:
- pattern_re = self._glob_to_re(pattern)
- if _PYTHON_VERSION > (3, 2):
- assert pattern_re.startswith(start) and pattern_re.endswith(end)
- else:
- pattern_re = ''
-
- base = re.escape(os.path.join(self.base, ''))
- if prefix is not None:
- # ditch end of pattern character
- if _PYTHON_VERSION <= (3, 2):
- empty_pattern = self._glob_to_re('')
- prefix_re = self._glob_to_re(prefix)[:-len(empty_pattern)]
- else:
- prefix_re = self._glob_to_re(prefix)
- assert prefix_re.startswith(start) and prefix_re.endswith(end)
- prefix_re = prefix_re[len(start): len(prefix_re) - len(end)]
- sep = os.sep
- if os.sep == '\\':
- sep = r'\\'
- if _PYTHON_VERSION <= (3, 2):
- pattern_re = '^' + base + sep.join((prefix_re,
- '.*' + pattern_re))
- else:
- pattern_re = pattern_re[len(start): len(pattern_re) - len(end)]
- pattern_re = r'%s%s%s%s.*%s%s' % (start, base, prefix_re, sep,
- pattern_re, end)
- else: # no prefix -- respect anchor flag
- if anchor:
- if _PYTHON_VERSION <= (3, 2):
- pattern_re = '^' + base + pattern_re
- else:
- pattern_re = r'%s%s%s' % (start, base, pattern_re[len(start):])
-
- return re.compile(pattern_re)
-
- def _glob_to_re(self, pattern):
- """Translate a shell-like glob pattern to a regular expression.
-
- Return a string containing the regex. Differs from
- 'fnmatch.translate()' in that '*' does not match "special characters"
- (which are platform-specific).
- """
- pattern_re = fnmatch.translate(pattern)
-
- # '?' and '*' in the glob pattern become '.' and '.*' in the RE, which
- # IMHO is wrong -- '?' and '*' aren't supposed to match slash in Unix,
- # and by extension they shouldn't match such "special characters" under
- # any OS. So change all non-escaped dots in the RE to match any
- # character except the special characters (currently: just os.sep).
- sep = os.sep
- if os.sep == '\\':
- # we're using a regex to manipulate a regex, so we need
- # to escape the backslash twice
- sep = r'\\\\'
- escaped = r'\1[^%s]' % sep
-        pattern_re = re.sub(r'((?<!\\)(\\\\)*)\.', escaped, pattern_re)
-        return pattern_re
-        if spec.shape[1] > 800:
- start = random.randint(0, spec.shape[1]-800)
- end = start + 790
- spec, c, f0, uv = spec[:, start:end], c[:, start:end], f0[start:end], uv[start:end]
- audio_norm = audio_norm[:, start * self.hop_length : end * self.hop_length]
-
- return c, f0, spec, audio_norm, spk, uv
-
- def __getitem__(self, index):
- return self.get_audio(self.audiopaths[index][0])
-
- def __len__(self):
- return len(self.audiopaths)
-
-
-class TextAudioCollate:
-
- def __call__(self, batch):
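-        # Sort items by content length, zero-pad every field to the longest item
-        # in the batch, and stack everything into dense batch tensors.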
- batch = [b for b in batch if b is not None]
-
- input_lengths, ids_sorted_decreasing = torch.sort(
- torch.LongTensor([x[0].shape[1] for x in batch]),
- dim=0, descending=True)
-
- max_c_len = max([x[0].size(1) for x in batch])
- max_wav_len = max([x[3].size(1) for x in batch])
-
- lengths = torch.LongTensor(len(batch))
-
- c_padded = torch.FloatTensor(len(batch), batch[0][0].shape[0], max_c_len)
- f0_padded = torch.FloatTensor(len(batch), max_c_len)
- spec_padded = torch.FloatTensor(len(batch), batch[0][2].shape[0], max_c_len)
- wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len)
- spkids = torch.LongTensor(len(batch), 1)
- uv_padded = torch.FloatTensor(len(batch), max_c_len)
-
- c_padded.zero_()
- spec_padded.zero_()
- f0_padded.zero_()
- wav_padded.zero_()
- uv_padded.zero_()
-
- for i in range(len(ids_sorted_decreasing)):
- row = batch[ids_sorted_decreasing[i]]
-
- c = row[0]
- c_padded[i, :, :c.size(1)] = c
- lengths[i] = c.size(1)
-
- f0 = row[1]
- f0_padded[i, :f0.size(0)] = f0
-
- spec = row[2]
- spec_padded[i, :, :spec.size(1)] = spec
-
- wav = row[3]
- wav_padded[i, :, :wav.size(1)] = wav
-
- spkids[i, 0] = row[4]
-
- uv = row[5]
- uv_padded[i, :uv.size(0)] = uv
-
- return c_padded, f0_padded, spec_padded, wav_padded, spkids, lengths, uv_padded
diff --git a/spaces/Vertaix/vendiscore/vendiscore.py b/spaces/Vertaix/vendiscore/vendiscore.py
deleted file mode 100644
index 66f52ece421dd61e6e1eced6ee5baf1434d99e54..0000000000000000000000000000000000000000
--- a/spaces/Vertaix/vendiscore/vendiscore.py
+++ /dev/null
@@ -1,162 +0,0 @@
-# Copyright 2022 The HuggingFace Datasets Authors and the current dataset script contributor.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import evaluate
-import datasets
-import numpy as np
-
-from vendi_score import vendi, text_utils
-
-# TODO: Add BibTeX citation
-_CITATION = ""
-_DESCRIPTION = """\
-The Vendi Score is a metric for evaluating diversity in machine learning.
-The input to the metric is a collection of samples and a pairwise similarity function, and the output is a number that can be interpreted as the effective number of unique elements in the sample.
-See the project's README at https://github.com/vertaix/Vendi-Score for more information.
-The interactive example calculates the Vendi Score for a set of strings using the n-gram overlap similarity, averaged between n=1 and n=2.
-"""
-
-
-_KWARGS_DESCRIPTION = """
-Calculates the Vendi Score given samples and a similarity function.
-Args:
- samples: an iterable containing n samples to score, an n x n similarity
- matrix K, or an n x d feature matrix X.
- k: a pairwise similarity function, or a string identifying a predefined
- similarity function.
- Options: ngram_overlap, text_embeddings.
- score_K: if true, samples is an n x n similarity matrix K.
- score_X: if true, samples is an n x d feature matrix X.
- score_dual: if true, compute diversity score of X @ X.T.
- normalize: if true, normalize the similarity scores.
- model (optional): if k is "text_embeddings", a model mapping sentences to
- embeddings (output should be an object with an attribute called
- `pooler_output` or `last_hidden_state`).
- tokenizer (optional): if k is "text_embeddings" or "ngram_overlap", a
- tokenizer mapping strings to lists.
- model_path (optional): if k is "text_embeddings", the name of a model on the
- HuggingFace hub.
- ns (optional): if k is "ngram_overlap", the values of n to calculate.
- batch_size (optional): batch size to use if k is "text_embedding".
- device (optional): a string (e.g. "cuda", "cpu") or torch.device identifying
- the device to use if k is "text_embedding".
-Returns:
- VS: The Vendi Score.
-Examples:
- >>> vendiscore = evaluate.load("Vertaix/vendiscore", "text")
- >>> samples = ["Look, Jane.",
- "See Spot.",
- "See Spot run.",
- "Run, Spot, run.",
- "Jane sees Spot run."]
- >>> results = vendiscore.compute(samples, k="ngram_overlap", ns=[1, 2])
- >>> print(results)
- {'VS': 3.90657...}
-"""
-
-
-def get_features(config_name):
- if config_name in ("text", "default"):
- return datasets.Features({"samples": datasets.Value("string")})
- # if config_name == "image":
- # return datasets.Features({"samples": datasets.Image})
- if config_name in ("K", "X"):
- return [
- datasets.Features(
- {"samples": datasets.Sequence(datasets.Value("float"))}
- ),
- datasets.Features(
- {"samples": datasets.Sequence(datasets.Value("int32"))}
- ),
- ]
- return [
- datasets.Features({"samples": datasets.Value("float")}),
- datasets.Features({"samples": datasets.Value("int32")}),
- datasets.Features({"samples": datasets.Array2D}),
- ]
-
-
-@evaluate.utils.file_utils.add_start_docstrings(
- _DESCRIPTION, _KWARGS_DESCRIPTION
-)
-class VendiScore(evaluate.Metric):
-    """Compute the Vendi Score, a similarity-based diversity metric."""
-
- def _info(self):
- # TODO: Specifies the evaluate.EvaluationModuleInfo object
- return evaluate.MetricInfo(
- # This is the description that will appear on the modules page.
- module_type="metric",
- description=_DESCRIPTION,
- citation=_CITATION,
- inputs_description=_KWARGS_DESCRIPTION,
- features=get_features(self.config_name),
- homepage="http://github.com/Vertaix/Vendi-Score",
- codebase_urls=["http://github.com/Vertaix/Vendi-Score"],
- reference_urls=[],
- )
-
- def _download_and_prepare(self, dl_manager):
- import nltk
-
- nltk.download("punkt")
-
- def _compute(
- self,
- samples,
- k="ngram_overlap",
- score_K=False,
- score_X=False,
- score_dual=False,
- normalize=False,
- model=None,
- tokenizer=None,
- model_path=None,
- ns=[1, 2],
- batch_size=16,
- device="cpu",
- ):
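-        # Dispatch on the input form: a precomputed similarity matrix K, a feature
-        # matrix X (scored directly or via its dual X @ X.T), n-gram overlap or
-        # embedding similarity for text, or a user-supplied similarity function k.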
- if score_K:
- vs = vendi.score_K(np.array(samples), normalize=normalize)
- elif score_dual:
- vs = vendi.score_dual(np.array(samples), normalize=normalize)
- elif score_X:
- vs = vendi.score_X(np.array(samples), normalize=normalize)
- elif type(k) == str and k == "ngram_overlap":
- vs = text_utils.ngram_vendi_score(
- samples, ns=ns, tokenizer=tokenizer
- )
- elif type(k) == str and k == "text_embeddings":
- vs = text_utils.embedding_vendi_score(
- samples,
- model=model,
- tokenizer=tokenizer,
- batch_size=batch_size,
- device=device,
- model_path=model_path,
- )
- # elif type(k) == str and k == "pixels":
- # vs = image_utils.pixel_vendi_score(
- # [Image.fromarray(x) for x in samples]
- # )
- # elif type(k) == str and k == "image_embeddings":
- # vs = image_utils.embedding_vendi_score(
- # [Image.fromarray(x) for x in samples],
- # batch_size=batch_size,
- # device=device,
- # model=model,
- # transform=transform,
- # )
- else:
- vs = vendi.score(samples, k)
- return {"VS": vs}
diff --git a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/GetGpt.py b/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/GetGpt.py
deleted file mode 100644
index 56a121f6ee5f430da7beda3b65abdea64a87c36b..0000000000000000000000000000000000000000
--- a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/GetGpt.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import os
-import json
-import uuid
-import requests
-from Crypto.Cipher import AES
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://chat.getgpt.world/'
-model = ['gpt-3.5-turbo']
-supports_stream = True
-needs_auth = False
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
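-    # The request body is AES-128-CBC encrypted with a random key and IV (both
-    # appended, hex-encoded, to the ciphertext) and sent as the 'signature' field.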
- def encrypt(e):
- t = os.urandom(8).hex().encode('utf-8')
- n = os.urandom(8).hex().encode('utf-8')
- r = e.encode('utf-8')
- cipher = AES.new(t, AES.MODE_CBC, n)
- ciphertext = cipher.encrypt(pad_data(r))
- return ciphertext.hex() + t.decode('utf-8') + n.decode('utf-8')
-
- def pad_data(data: bytes) -> bytes:
- block_size = AES.block_size
- padding_size = block_size - len(data) % block_size
- padding = bytes([padding_size] * padding_size)
- return data + padding
-
- headers = {
- 'Content-Type': 'application/json',
- 'Referer': 'https://chat.getgpt.world/',
- 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36'
- }
-
- data = json.dumps({
- 'messages': messages,
- 'frequency_penalty': kwargs.get('frequency_penalty', 0),
- 'max_tokens': kwargs.get('max_tokens', 4000),
- 'model': 'gpt-3.5-turbo',
- 'presence_penalty': kwargs.get('presence_penalty', 0),
- 'temperature': kwargs.get('temperature', 1),
- 'top_p': kwargs.get('top_p', 1),
- 'stream': True,
- 'uuid': str(uuid.uuid4())
- })
-
- res = requests.post('https://chat.getgpt.world/api/chat/stream',
- headers=headers, json={'signature': encrypt(data)}, stream=True)
-
- for line in res.iter_lines():
- if b'content' in line:
- line_json = json.loads(line.decode('utf-8').split('data: ')[1])
- yield (line_json['choices'][0]['delta']['content'])
-
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join(
- [f'{name}: {get_type_hints(_create_completion)[name].__name__}' for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
diff --git a/spaces/VoiceHero69/changer/webui/modules/implementations/rvc/slicer2.py b/spaces/VoiceHero69/changer/webui/modules/implementations/rvc/slicer2.py
deleted file mode 100644
index 5b29ee262aa54045e807be2cffeb41687499ba58..0000000000000000000000000000000000000000
--- a/spaces/VoiceHero69/changer/webui/modules/implementations/rvc/slicer2.py
+++ /dev/null
@@ -1,260 +0,0 @@
-import numpy as np
-
-
-# This function is obtained from librosa.
-def get_rms(
- y,
- frame_length=2048,
- hop_length=512,
- pad_mode="constant",
-):
- padding = (int(frame_length // 2), int(frame_length // 2))
- y = np.pad(y, padding, mode=pad_mode)
-
- axis = -1
- # put our new within-frame axis at the end for now
- out_strides = y.strides + tuple([y.strides[axis]])
- # Reduce the shape on the framing axis
- x_shape_trimmed = list(y.shape)
- x_shape_trimmed[axis] -= frame_length - 1
- out_shape = tuple(x_shape_trimmed) + tuple([frame_length])
- xw = np.lib.stride_tricks.as_strided(y, shape=out_shape, strides=out_strides)
- if axis < 0:
- target_axis = axis - 1
- else:
- target_axis = axis + 1
- xw = np.moveaxis(xw, -1, target_axis)
- # Downsample along the target axis
- slices = [slice(None)] * xw.ndim
- slices[axis] = slice(0, None, hop_length)
- x = xw[tuple(slices)]
-
- # Calculate power
- power = np.mean(np.abs(x) ** 2, axis=-2, keepdims=True)
-
- return np.sqrt(power)
-
-
-class Slicer:
- def __init__(
- self,
- sr: int,
- threshold: float = -40.0,
- min_length: int = 5000,
- min_interval: int = 300,
- hop_size: int = 20,
- max_sil_kept: int = 5000,
- ):
- if not min_length >= min_interval >= hop_size:
- raise ValueError(
- "The following condition must be satisfied: min_length >= min_interval >= hop_size"
- )
- if not max_sil_kept >= hop_size:
- raise ValueError(
- "The following condition must be satisfied: max_sil_kept >= hop_size"
- )
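-        # Convert the millisecond-based parameters into sample counts and
-        # hop-sized frame counts used by the RMS analysis below.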
- min_interval = sr * min_interval / 1000
- self.threshold = 10 ** (threshold / 20.0)
- self.hop_size = round(sr * hop_size / 1000)
- self.win_size = min(round(min_interval), 4 * self.hop_size)
- self.min_length = round(sr * min_length / 1000 / self.hop_size)
- self.min_interval = round(min_interval / self.hop_size)
- self.max_sil_kept = round(sr * max_sil_kept / 1000 / self.hop_size)
-
- def _apply_slice(self, waveform, begin, end):
- if len(waveform.shape) > 1:
- return waveform[
- :, begin * self.hop_size : min(waveform.shape[1], end * self.hop_size)
- ]
- else:
- return waveform[
- begin * self.hop_size : min(waveform.shape[0], end * self.hop_size)
- ]
-
- # @timeit
- def slice(self, waveform):
- if len(waveform.shape) > 1:
- samples = waveform.mean(axis=0)
- else:
- samples = waveform
- if samples.shape[0] <= self.min_length:
- return [waveform]
- rms_list = get_rms(
- y=samples, frame_length=self.win_size, hop_length=self.hop_size
- ).squeeze(0)
- sil_tags = []
- silence_start = None
- clip_start = 0
- for i, rms in enumerate(rms_list):
- # Keep looping while frame is silent.
- if rms < self.threshold:
- # Record start of silent frames.
- if silence_start is None:
- silence_start = i
- continue
- # Keep looping while frame is not silent and silence start has not been recorded.
- if silence_start is None:
- continue
- # Clear recorded silence start if interval is not enough or clip is too short
- is_leading_silence = silence_start == 0 and i > self.max_sil_kept
- need_slice_middle = (
- i - silence_start >= self.min_interval
- and i - clip_start >= self.min_length
- )
- if not is_leading_silence and not need_slice_middle:
- silence_start = None
- continue
- # Need slicing. Record the range of silent frames to be removed.
- if i - silence_start <= self.max_sil_kept:
- pos = rms_list[silence_start : i + 1].argmin() + silence_start
- if silence_start == 0:
- sil_tags.append((0, pos))
- else:
- sil_tags.append((pos, pos))
- clip_start = pos
- elif i - silence_start <= self.max_sil_kept * 2:
- pos = rms_list[
- i - self.max_sil_kept : silence_start + self.max_sil_kept + 1
- ].argmin()
- pos += i - self.max_sil_kept
- pos_l = (
- rms_list[
- silence_start : silence_start + self.max_sil_kept + 1
- ].argmin()
- + silence_start
- )
- pos_r = (
- rms_list[i - self.max_sil_kept : i + 1].argmin()
- + i
- - self.max_sil_kept
- )
- if silence_start == 0:
- sil_tags.append((0, pos_r))
- clip_start = pos_r
- else:
- sil_tags.append((min(pos_l, pos), max(pos_r, pos)))
- clip_start = max(pos_r, pos)
- else:
- pos_l = (
- rms_list[
- silence_start : silence_start + self.max_sil_kept + 1
- ].argmin()
- + silence_start
- )
- pos_r = (
- rms_list[i - self.max_sil_kept : i + 1].argmin()
- + i
- - self.max_sil_kept
- )
- if silence_start == 0:
- sil_tags.append((0, pos_r))
- else:
- sil_tags.append((pos_l, pos_r))
- clip_start = pos_r
- silence_start = None
- # Deal with trailing silence.
- total_frames = rms_list.shape[0]
- if (
- silence_start is not None
- and total_frames - silence_start >= self.min_interval
- ):
- silence_end = min(total_frames, silence_start + self.max_sil_kept)
- pos = rms_list[silence_start : silence_end + 1].argmin() + silence_start
- sil_tags.append((pos, total_frames + 1))
- # Apply and return slices.
- if len(sil_tags) == 0:
- return [waveform]
- else:
- chunks = []
- if sil_tags[0][0] > 0:
- chunks.append(self._apply_slice(waveform, 0, sil_tags[0][0]))
- for i in range(len(sil_tags) - 1):
- chunks.append(
- self._apply_slice(waveform, sil_tags[i][1], sil_tags[i + 1][0])
- )
- if sil_tags[-1][1] < total_frames:
- chunks.append(
- self._apply_slice(waveform, sil_tags[-1][1], total_frames)
- )
- return chunks
-
-
-def main():
- import os.path
- from argparse import ArgumentParser
-
- import librosa
- import soundfile
-
- parser = ArgumentParser()
- parser.add_argument("audio", type=str, help="The audio to be sliced")
- parser.add_argument(
- "--out", type=str, help="Output directory of the sliced audio clips"
- )
- parser.add_argument(
- "--db_thresh",
- type=float,
- required=False,
- default=-40,
- help="The dB threshold for silence detection",
- )
- parser.add_argument(
- "--min_length",
- type=int,
- required=False,
- default=5000,
- help="The minimum milliseconds required for each sliced audio clip",
- )
- parser.add_argument(
- "--min_interval",
- type=int,
- required=False,
- default=300,
- help="The minimum milliseconds for a silence part to be sliced",
- )
- parser.add_argument(
- "--hop_size",
- type=int,
- required=False,
- default=10,
- help="Frame length in milliseconds",
- )
- parser.add_argument(
- "--max_sil_kept",
- type=int,
- required=False,
- default=500,
- help="The maximum silence length kept around the sliced clip, presented in milliseconds",
- )
- args = parser.parse_args()
- out = args.out
- if out is None:
- out = os.path.dirname(os.path.abspath(args.audio))
- audio, sr = librosa.load(args.audio, sr=None, mono=False)
- slicer = Slicer(
- sr=sr,
- threshold=args.db_thresh,
- min_length=args.min_length,
- min_interval=args.min_interval,
- hop_size=args.hop_size,
- max_sil_kept=args.max_sil_kept,
- )
- chunks = slicer.slice(audio)
- if not os.path.exists(out):
- os.makedirs(out)
- for i, chunk in enumerate(chunks):
- if len(chunk.shape) > 1:
- chunk = chunk.T
- soundfile.write(
- os.path.join(
- out,
- f"%s_%d.wav"
- % (os.path.basename(args.audio).rsplit(".", maxsplit=1)[0], i),
- ),
- chunk,
- sr,
- )
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/Wayben/ChatGPT/assets/custom.js b/spaces/Wayben/ChatGPT/assets/custom.js
deleted file mode 100644
index 7b1761043149ff97ca498501c87a0d15db5258ee..0000000000000000000000000000000000000000
--- a/spaces/Wayben/ChatGPT/assets/custom.js
+++ /dev/null
@@ -1 +0,0 @@
-// custom javascript here
\ No newline at end of file
diff --git a/spaces/XzJosh/otto-Bert-VITS2/models.py b/spaces/XzJosh/otto-Bert-VITS2/models.py
deleted file mode 100644
index d4afe44d883691610c5903e602a3ca245fcb3a5c..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/otto-Bert-VITS2/models.py
+++ /dev/null
@@ -1,707 +0,0 @@
-import copy
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-import attentions
-import monotonic_align
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-
-from commons import init_weights, get_padding
-from text import symbols, num_tones, num_languages
-class DurationDiscriminator(nn.Module): #vits2
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
- super().__init__()
-
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.gin_channels = gin_channels
-
- self.drop = nn.Dropout(p_dropout)
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.norm_1 = modules.LayerNorm(filter_channels)
- self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.norm_2 = modules.LayerNorm(filter_channels)
- self.dur_proj = nn.Conv1d(1, filter_channels, 1)
-
- self.pre_out_conv_1 = nn.Conv1d(2*filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.pre_out_norm_1 = modules.LayerNorm(filter_channels)
- self.pre_out_conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.pre_out_norm_2 = modules.LayerNorm(filter_channels)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, in_channels, 1)
-
- self.output_layer = nn.Sequential(
- nn.Linear(filter_channels, 1),
- nn.Sigmoid()
- )
-
- def forward_probability(self, x, x_mask, dur, g=None):
- dur = self.dur_proj(dur)
- x = torch.cat([x, dur], dim=1)
- x = self.pre_out_conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.pre_out_norm_1(x)
- x = self.drop(x)
- x = self.pre_out_conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.pre_out_norm_2(x)
- x = self.drop(x)
- x = x * x_mask
- x = x.transpose(1, 2)
- output_prob = self.output_layer(x)
- return output_prob
-
- def forward(self, x, x_mask, dur_r, dur_hat, g=None):
- x = torch.detach(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.norm_1(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.norm_2(x)
- x = self.drop(x)
-
- output_probs = []
- for dur in [dur_r, dur_hat]:
- output_prob = self.forward_probability(x, x_mask, dur, g)
- output_probs.append(output_prob)
-
- return output_probs
-
-class TransformerCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- n_flows=4,
- gin_channels=0,
- share_parameter=False
- ):
-
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
-
- self.wn = attentions.FFT(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = self.gin_channels) if share_parameter else None
-
- for i in range(n_flows):
- self.flows.append(
- modules.TransformerCouplingLayer(channels, hidden_channels, kernel_size, n_layers, n_heads, p_dropout, filter_channels, mean_only=True, wn_sharing_parameter=self.wn, gin_channels = self.gin_channels))
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-class StochasticDurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
- super().__init__()
-        filter_channels = in_channels  # this should be removed in a future version.
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.log_flow = modules.Log()
- self.flows = nn.ModuleList()
- self.flows.append(modules.ElementwiseAffine(2))
- for i in range(n_flows):
- self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.flows.append(modules.Flip())
-
- self.post_pre = nn.Conv1d(1, filter_channels, 1)
- self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- self.post_flows = nn.ModuleList()
- self.post_flows.append(modules.ElementwiseAffine(2))
- for i in range(4):
- self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.post_flows.append(modules.Flip())
-
- self.pre = nn.Conv1d(in_channels, filter_channels, 1)
- self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
-
- def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
- x = torch.detach(x)
- x = self.pre(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.convs(x, x_mask)
- x = self.proj(x) * x_mask
-
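-        # Forward (training): return the negative log-likelihood of the reference
-        # durations w under the flow, plus the posterior term logq.
-        # Reverse (inference): sample log-durations by pushing noise through the
-        # inverted flow.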
- if not reverse:
- flows = self.flows
- assert w is not None
-
- logdet_tot_q = 0
- h_w = self.post_pre(w)
- h_w = self.post_convs(h_w, x_mask)
- h_w = self.post_proj(h_w) * x_mask
- e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
- z_q = e_q
- for flow in self.post_flows:
- z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
- logdet_tot_q += logdet_q
- z_u, z1 = torch.split(z_q, [1, 1], 1)
- u = torch.sigmoid(z_u) * x_mask
- z0 = (w - u) * x_mask
- logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2])
- logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q
-
- logdet_tot = 0
- z0, logdet = self.log_flow(z0, x_mask)
- logdet_tot += logdet
- z = torch.cat([z0, z1], 1)
- for flow in flows:
- z, logdet = flow(z, x_mask, g=x, reverse=reverse)
- logdet_tot = logdet_tot + logdet
- nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot
- return nll + logq # [b]
- else:
- flows = list(reversed(self.flows))
- flows = flows[:-2] + [flows[-1]] # remove a useless vflow
- z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
- for flow in flows:
- z = flow(z, x_mask, g=x, reverse=reverse)
- z0, z1 = torch.split(z, [1, 1], 1)
- logw = z0
- return logw
-
-
-class DurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
- super().__init__()
-
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.gin_channels = gin_channels
-
- self.drop = nn.Dropout(p_dropout)
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2)
- self.norm_1 = modules.LayerNorm(filter_channels)
- self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2)
- self.norm_2 = modules.LayerNorm(filter_channels)
- self.proj = nn.Conv1d(filter_channels, 1, 1)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, in_channels, 1)
-
- def forward(self, x, x_mask, g=None):
- x = torch.detach(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.norm_1(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.norm_2(x)
- x = self.drop(x)
- x = self.proj(x * x_mask)
- return x * x_mask
-
-
-class TextEncoder(nn.Module):
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- gin_channels=0):
- super().__init__()
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.gin_channels = gin_channels
- self.emb = nn.Embedding(len(symbols), hidden_channels)
- nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5)
- self.tone_emb = nn.Embedding(num_tones, hidden_channels)
- nn.init.normal_(self.tone_emb.weight, 0.0, hidden_channels ** -0.5)
- self.language_emb = nn.Embedding(num_languages, hidden_channels)
- nn.init.normal_(self.language_emb.weight, 0.0, hidden_channels ** -0.5)
- self.bert_proj = nn.Conv1d(1024, hidden_channels, 1)
-
- self.encoder = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- gin_channels=self.gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, tone, language, bert, g=None):
- x = (self.emb(x)+ self.tone_emb(tone)+ self.language_emb(language)+self.bert_proj(bert).transpose(1,2)) * math.sqrt(self.hidden_channels) # [b, t, h]
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- x = self.encoder(x * x_mask, x_mask, g=g)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return x, m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers,
- gin_channels=gin_channels, mean_only=True))
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
-
-class Generator(torch.nn.Module):
- def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates,
- upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
- resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(weight_norm(
- ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)),
- k, u, padding=(k - u) // 2)))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
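-        # Progressively upsample the latent, average the parallel residual blocks
-        # at each scale, then project to a single-channel waveform with tanh.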
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-class ReferenceEncoder(nn.Module):
- '''
- inputs --- [N, Ty/r, n_mels*r] mels
- outputs --- [N, ref_enc_gru_size]
- '''
-
- def __init__(self, spec_channels, gin_channels=0):
-
- super().__init__()
- self.spec_channels = spec_channels
- ref_enc_filters = [32, 32, 64, 64, 128, 128]
- K = len(ref_enc_filters)
- filters = [1] + ref_enc_filters
- convs = [weight_norm(nn.Conv2d(in_channels=filters[i],
- out_channels=filters[i + 1],
- kernel_size=(3, 3),
- stride=(2, 2),
- padding=(1, 1))) for i in range(K)]
- self.convs = nn.ModuleList(convs)
- # self.wns = nn.ModuleList([weight_norm(num_features=ref_enc_filters[i]) for i in range(K)])
-
- out_channels = self.calculate_channels(spec_channels, 3, 2, 1, K)
- self.gru = nn.GRU(input_size=ref_enc_filters[-1] * out_channels,
- hidden_size=256 // 2,
- batch_first=True)
- self.proj = nn.Linear(128, gin_channels)
-
- def forward(self, inputs, mask=None):
- N = inputs.size(0)
- out = inputs.view(N, 1, -1, self.spec_channels) # [N, 1, Ty, n_freqs]
- for conv in self.convs:
- out = conv(out)
- # out = wn(out)
- out = F.relu(out) # [N, 128, Ty//2^K, n_mels//2^K]
-
- out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K]
- T = out.size(1)
- N = out.size(0)
- out = out.contiguous().view(N, T, -1) # [N, Ty//2^K, 128*n_mels//2^K]
-
- self.gru.flatten_parameters()
- memory, out = self.gru(out) # out --- [1, N, 128]
-
- return self.proj(out.squeeze(0))
-
- def calculate_channels(self, L, kernel_size, stride, pad, n_convs):
- for i in range(n_convs):
- L = (L - kernel_size + 2 * pad) // stride + 1
- return L
-
-
-class SynthesizerTrn(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- n_vocab,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- n_speakers=256,
- gin_channels=256,
- use_sdp=True,
- n_flow_layer = 4,
- n_layers_trans_flow = 3,
- flow_share_parameter = False,
- use_transformer_flow = True,
- **kwargs):
-
- super().__init__()
- self.n_vocab = n_vocab
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.n_speakers = n_speakers
- self.gin_channels = gin_channels
- self.n_layers_trans_flow = n_layers_trans_flow
- self.use_spk_conditioned_encoder = kwargs.get("use_spk_conditioned_encoder", True)
- self.use_sdp = use_sdp
- self.use_noise_scaled_mas = kwargs.get("use_noise_scaled_mas", False)
- self.mas_noise_scale_initial = kwargs.get("mas_noise_scale_initial", 0.01)
- self.noise_scale_delta = kwargs.get("noise_scale_delta", 2e-6)
- self.current_mas_noise_scale = self.mas_noise_scale_initial
- if self.use_spk_conditioned_encoder and gin_channels > 0:
- self.enc_gin_channels = gin_channels
- self.enc_p = TextEncoder(n_vocab,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- gin_channels=self.enc_gin_channels)
- self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates,
- upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
- self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16,
- gin_channels=gin_channels)
- if use_transformer_flow:
- self.flow = TransformerCouplingBlock(inter_channels, hidden_channels, filter_channels, n_heads, n_layers_trans_flow, 5, p_dropout, n_flow_layer, gin_channels=gin_channels,share_parameter= flow_share_parameter)
- else:
- self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, n_flow_layer, gin_channels=gin_channels)
- self.sdp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
- self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels)
-
- if n_speakers >= 1:
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
- else:
- self.ref_enc = ReferenceEncoder(spec_channels, gin_channels)
-
- def forward(self, x, x_lengths, y, y_lengths, sid, tone, language, bert):
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1)
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
-
- with torch.no_grad():
- # negative cross-entropy
- s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t]
- neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s]
- neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2),
- s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s]
- neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4
- if self.use_noise_scaled_mas:
- epsilon = torch.std(neg_cent) * torch.randn_like(neg_cent) * self.current_mas_noise_scale
- neg_cent = neg_cent + epsilon
-
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach()
-
- w = attn.sum(2)
-
- l_length_sdp = self.sdp(x, x_mask, w, g=g)
- l_length_sdp = l_length_sdp / torch.sum(x_mask)
-
- logw_ = torch.log(w + 1e-6) * x_mask
- logw = self.dp(x, x_mask, g=g)
- l_length_dp = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask) # for averaging
-
- l_length = l_length_dp + l_length_sdp
-
- # expand prior
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)
-
- z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size)
- o = self.dec(z_slice, g=g)
- return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q), (x, logw, logw_)
-
- def infer(self, x, x_lengths, sid, tone, language, bert, noise_scale=.667, length_scale=1, noise_scale_w=0.8, max_len=None, sdp_ratio=0,y=None):
- #x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert)
- # g = self.gst(y)
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1)
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g)
- logw = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) * (sdp_ratio) + self.dp(x, x_mask, g=g) * (1 - sdp_ratio)
- w = torch.exp(logw) * x_mask * length_scale
- w_ceil = torch.ceil(w)
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = commons.generate_path(w_ceil, attn_mask)
-
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)  # [b, t', t], [b, t, d] -> [b, d, t']
-
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
- z = self.flow(z_p, y_mask, g=g, reverse=True)
- o = self.dec((z * y_mask)[:, :, :max_len], g=g)
- return o, attn, y_mask, (z, z_p, m_p, logs_p)
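For reference, the four `neg_cent` terms computed under `torch.no_grad()` in `SynthesizerTrn.forward` above are simply the expansion of a diagonal-Gaussian log-density evaluated at `z_p`, summed over channels; monotonic alignment search then picks the path that maximises this score. With prior mean `m_p`, log-std `logs_p`, and precision factor `s_p_sq_r = exp(-2 * logs_p)`, the per-pair score is:

```latex
\log \mathcal{N}\!\bigl(z_p;\, m_p,\, e^{2\,\mathrm{logs}_p}\bigr)
  = \underbrace{\sum_d \Bigl(-\tfrac{1}{2}\log 2\pi - \mathrm{logs}_p\Bigr)}_{\texttt{neg\_cent1}}
  + \underbrace{\sum_d \Bigl(-\tfrac{1}{2}\, z_p^2\, e^{-2\,\mathrm{logs}_p}\Bigr)}_{\texttt{neg\_cent2}}
  + \underbrace{\sum_d \Bigl(z_p\, m_p\, e^{-2\,\mathrm{logs}_p}\Bigr)}_{\texttt{neg\_cent3}}
  + \underbrace{\sum_d \Bigl(-\tfrac{1}{2}\, m_p^2\, e^{-2\,\mathrm{logs}_p}\Bigr)}_{\texttt{neg\_cent4}}
```

The two `torch.matmul` calls realise the sums over the channel dimension `d` for every (latent frame, text frame) pair at once, which is why `neg_cent` ends up with shape `[b, t_t, t_s]`.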
diff --git a/spaces/YUANAI/DiffspeechResearch/docs/portaspeech.md b/spaces/YUANAI/DiffspeechResearch/docs/portaspeech.md
deleted file mode 100644
index 94e8b9b4241a2daae5bbfba660aa2a4a9068360d..0000000000000000000000000000000000000000
--- a/spaces/YUANAI/DiffspeechResearch/docs/portaspeech.md
+++ /dev/null
@@ -1,61 +0,0 @@
-# Run PortaSpeech
-
-## Quick Start
-
-### Install Dependencies
-
-Install dependencies following [readme.md](../readme.md)
-
-### Set Config Path and Experiment Name
-
-#### PortaSpeech (normal)
-```bash
-export CONFIG_NAME=egs/datasets/audio/lj/ps_flow_nips2021.yaml
-export MY_EXP_NAME=ps_normal_exp
-```
-
-#### PortaSpeech (small)
-```bash
-export CONFIG_NAME=egs/datasets/audio/lj/ps_flow_small_nips2021.yaml
-export MY_EXP_NAME=ps_small_exp
-```
-
-### Preprocess and binarize the dataset
-
-Prepare dataset following [prepare_data.md](./prepare_data.md)
-
-### Prepare Vocoder
-
-Prepare vocoder following [prepare_vocoder.md](./prepare_vocoder.md)
-
-## Training
-
-```bash
-CUDA_VISIBLE_DEVICES=0 python tasks/run.py --config $CONFIG_NAME --exp_name $MY_EXP_NAME --reset
-```
-
-You can inspect the training and validation curves by opening TensorBoard:
-
-```bash
-tensorboard --logdir checkpoints/$MY_EXP_NAME
-```
-
-## Inference (Testing)
-
-```bash
-CUDA_VISIBLE_DEVICES=0 python tasks/run.py --config $CONFIG_NAME --exp_name $MY_EXP_NAME --infer
-```
-
-## Citation
-
-If you find this useful for your research, please cite the following.
-
-```
-@article{ren2021portaspeech,
- title={PortaSpeech: Portable and High-Quality Generative Text-to-Speech},
- author={Ren, Yi and Liu, Jinglin and Zhao, Zhou},
- journal={Advances in Neural Information Processing Systems},
- volume={34},
- year={2021}
-}
-```
diff --git a/spaces/Yabo/ControlVideo/models/RIFE/IFNet_HDv3.py b/spaces/Yabo/ControlVideo/models/RIFE/IFNet_HDv3.py
deleted file mode 100644
index d57f0a2f0889fec5d68c52bf99bf2dbd91150381..0000000000000000000000000000000000000000
--- a/spaces/Yabo/ControlVideo/models/RIFE/IFNet_HDv3.py
+++ /dev/null
@@ -1,130 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from diffusers import ModelMixin
-
-from .warplayer import warp
-
-device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-
-def conv(in_planes, out_planes, kernel_size=3, stride=1, padding=1, dilation=1):
- return nn.Sequential(
- nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride,
- padding=padding, dilation=dilation, bias=True),
- nn.PReLU(out_planes)
- )
-
-def conv_bn(in_planes, out_planes, kernel_size=3, stride=1, padding=1, dilation=1):
- return nn.Sequential(
- nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride,
- padding=padding, dilation=dilation, bias=False),
- nn.BatchNorm2d(out_planes),
- nn.PReLU(out_planes)
- )
-
-def convert(param):
- return {
- k.replace("module.", ""): v
- for k, v in param.items()
- if "module." in k
- }
-
-class IFBlock(nn.Module):
- def __init__(self, in_planes, c=64):
- super(IFBlock, self).__init__()
- self.conv0 = nn.Sequential(
- conv(in_planes, c//2, 3, 2, 1),
- conv(c//2, c, 3, 2, 1),
- )
- self.convblock0 = nn.Sequential(
- conv(c, c),
- conv(c, c)
- )
- self.convblock1 = nn.Sequential(
- conv(c, c),
- conv(c, c)
- )
- self.convblock2 = nn.Sequential(
- conv(c, c),
- conv(c, c)
- )
- self.convblock3 = nn.Sequential(
- conv(c, c),
- conv(c, c)
- )
- self.conv1 = nn.Sequential(
- nn.ConvTranspose2d(c, c//2, 4, 2, 1),
- nn.PReLU(c//2),
- nn.ConvTranspose2d(c//2, 4, 4, 2, 1),
- )
- self.conv2 = nn.Sequential(
- nn.ConvTranspose2d(c, c//2, 4, 2, 1),
- nn.PReLU(c//2),
- nn.ConvTranspose2d(c//2, 1, 4, 2, 1),
- )
-
- def forward(self, x, flow, scale=1):
- x = F.interpolate(x, scale_factor= 1. / scale, mode="bilinear", align_corners=False, recompute_scale_factor=False)
- flow = F.interpolate(flow, scale_factor= 1. / scale, mode="bilinear", align_corners=False, recompute_scale_factor=False) * 1. / scale
- feat = self.conv0(torch.cat((x, flow), 1))
- feat = self.convblock0(feat) + feat
- feat = self.convblock1(feat) + feat
- feat = self.convblock2(feat) + feat
- feat = self.convblock3(feat) + feat
- flow = self.conv1(feat)
- mask = self.conv2(feat)
- flow = F.interpolate(flow, scale_factor=scale, mode="bilinear", align_corners=False, recompute_scale_factor=False) * scale
- mask = F.interpolate(mask, scale_factor=scale, mode="bilinear", align_corners=False, recompute_scale_factor=False)
- return flow, mask
-
-class IFNet(ModelMixin):
- def __init__(self, ckpt_path="checkpoints/flownet.pkl"):
- super(IFNet, self).__init__()
- self.block0 = IFBlock(7+4, c=90)
- self.block1 = IFBlock(7+4, c=90)
- self.block2 = IFBlock(7+4, c=90)
- self.block_tea = IFBlock(10+4, c=90)
- if ckpt_path is not None:
- self.load_state_dict(convert(torch.load(ckpt_path, map_location ='cpu')))
-
- def inference(self, img0, img1, scale=1.0):
- imgs = torch.cat((img0, img1), 1)
- scale_list = [4/scale, 2/scale, 1/scale]
- flow, mask, merged = self.forward(imgs, scale_list)
- return merged[2]
-
- def forward(self, x, scale_list=[4, 2, 1], training=False):
- if not training:
- channel = x.shape[1] // 2
- img0 = x[:, :channel]
- img1 = x[:, channel:]
- flow_list = []
- merged = []
- mask_list = []
- warped_img0 = img0
- warped_img1 = img1
- flow = (x[:, :4]).detach() * 0
- mask = (x[:, :1]).detach() * 0
- loss_cons = 0
- block = [self.block0, self.block1, self.block2]
- for i in range(3):
- f0, m0 = block[i](torch.cat((warped_img0[:, :3], warped_img1[:, :3], mask), 1), flow, scale=scale_list[i])
- f1, m1 = block[i](torch.cat((warped_img1[:, :3], warped_img0[:, :3], -mask), 1), torch.cat((flow[:, 2:4], flow[:, :2]), 1), scale=scale_list[i])
- flow = flow + (f0 + torch.cat((f1[:, 2:4], f1[:, :2]), 1)) / 2
- mask = mask + (m0 + (-m1)) / 2
- mask_list.append(mask)
- flow_list.append(flow)
- warped_img0 = warp(img0, flow[:, :2])
- warped_img1 = warp(img1, flow[:, 2:4])
- merged.append((warped_img0, warped_img1))
- '''
- c0 = self.contextnet(img0, flow[:, :2])
- c1 = self.contextnet(img1, flow[:, 2:4])
- tmp = self.unet(img0, img1, warped_img0, warped_img1, mask, flow, c0, c1)
- res = tmp[:, 1:4] * 2 - 1
- '''
- for i in range(3):
- mask_list[i] = torch.sigmoid(mask_list[i])
- merged[i] = merged[i][0] * mask_list[i] + merged[i][1] * (1 - mask_list[i])
- # merged[i] = torch.clamp(merged[i] + res, 0, 1)
- return flow_list, mask_list[2], merged
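A minimal sketch of driving the interpolation network above with dummy frames; the import path is taken from the file location in the diff header, `ckpt_path=None` skips loading `checkpoints/flownet.pkl`, and real use would of course load those weights:

```python
import torch
from models.RIFE.IFNet_HDv3 import IFNet  # import path assumed from the file location above

device = "cuda" if torch.cuda.is_available() else "cpu"
net = IFNet(ckpt_path=None).eval().to(device)   # pass a real checkpoint path in practice

# Two RGB frames in [0, 1]; H and W divisible by 32 so the coarse-to-fine blocks line up.
img0 = torch.rand(1, 3, 256, 448, device=device)
img1 = torch.rand(1, 3, 256, 448, device=device)

with torch.no_grad():
    mid = net.inference(img0, img1, scale=1.0)  # blended midpoint frame, (1, 3, 256, 448)
print(mid.shape)
```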
diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_img2img.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_img2img.py
deleted file mode 100644
index 2242d21b1d9147b61181cd43c59649dbafbdc598..0000000000000000000000000000000000000000
--- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_img2img.py
+++ /dev/null
@@ -1,459 +0,0 @@
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import inspect
-from typing import Callable, List, Optional, Union
-
-import numpy as np
-import torch
-
-import PIL
-from transformers import CLIPFeatureExtractor, CLIPTokenizer
-
-from ...configuration_utils import FrozenDict
-from ...onnx_utils import ORT_TO_NP_TYPE, OnnxRuntimeModel
-from ...pipeline_utils import DiffusionPipeline
-from ...schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
-from ...utils import PIL_INTERPOLATION, deprecate, logging
-from . import StableDiffusionPipelineOutput
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-def preprocess(image):
- w, h = image.size
- w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
- image = image.resize((w, h), resample=PIL_INTERPOLATION["lanczos"])
- image = np.array(image).astype(np.float32) / 255.0
- image = image[None].transpose(0, 3, 1, 2)
- return 2.0 * image - 1.0
-
-
-class OnnxStableDiffusionImg2ImgPipeline(DiffusionPipeline):
- r"""
- Pipeline for text-guided image to image generation using Stable Diffusion.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- Args:
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- text_encoder ([`CLIPTextModel`]):
- Frozen text-encoder. Stable Diffusion uses the text portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
- the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- tokenizer (`CLIPTokenizer`):
- Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
- safety_checker ([`StableDiffusionSafetyChecker`]):
- Classification module that estimates whether generated images could be considered offensive or harmful.
- Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
- feature_extractor ([`CLIPFeatureExtractor`]):
- Model that extracts features from generated images to be used as inputs for the `safety_checker`.
- """
- vae_encoder: OnnxRuntimeModel
- vae_decoder: OnnxRuntimeModel
- text_encoder: OnnxRuntimeModel
- tokenizer: CLIPTokenizer
- unet: OnnxRuntimeModel
- scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler]
- safety_checker: OnnxRuntimeModel
- feature_extractor: CLIPFeatureExtractor
-
- _optional_components = ["safety_checker", "feature_extractor"]
-
- def __init__(
- self,
- vae_encoder: OnnxRuntimeModel,
- vae_decoder: OnnxRuntimeModel,
- text_encoder: OnnxRuntimeModel,
- tokenizer: CLIPTokenizer,
- unet: OnnxRuntimeModel,
- scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
- safety_checker: OnnxRuntimeModel,
- feature_extractor: CLIPFeatureExtractor,
- requires_safety_checker: bool = True,
- ):
- super().__init__()
-
- if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
- deprecation_message = (
- f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
- f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
- "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
- " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
- " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
- " file"
- )
- deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
- new_config = dict(scheduler.config)
- new_config["steps_offset"] = 1
- scheduler._internal_dict = FrozenDict(new_config)
-
- if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True:
- deprecation_message = (
- f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`."
- " `clip_sample` should be set to False in the configuration file. Please make sure to update the"
- " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in"
- " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very"
- " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file"
- )
- deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False)
- new_config = dict(scheduler.config)
- new_config["clip_sample"] = False
- scheduler._internal_dict = FrozenDict(new_config)
-
- if safety_checker is None and requires_safety_checker:
- logger.warning(
- f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
- " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
- " results in services or applications open to the public. Both the diffusers team and Hugging Face"
- " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
- " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
- " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
- )
-
- if safety_checker is not None and feature_extractor is None:
- raise ValueError(
- "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
- " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
- )
-
- self.register_modules(
- vae_encoder=vae_encoder,
- vae_decoder=vae_decoder,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- safety_checker=safety_checker,
- feature_extractor=feature_extractor,
- )
- self.register_to_config(requires_safety_checker=requires_safety_checker)
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_onnx_stable_diffusion.OnnxStableDiffusionPipeline._encode_prompt
- def _encode_prompt(self, prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt):
- r"""
- Encodes the prompt into text encoder hidden states.
-
- Args:
- prompt (`str` or `list(int)`):
- prompt to be encoded
- num_images_per_prompt (`int`):
- number of images that should be generated per prompt
- do_classifier_free_guidance (`bool`):
- whether to use classifier free guidance or not
- negative_prompt (`str` or `List[str]`):
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
- if `guidance_scale` is less than `1`).
- """
- batch_size = len(prompt) if isinstance(prompt, list) else 1
-
- # get prompt text embeddings
- text_inputs = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="np",
- )
- text_input_ids = text_inputs.input_ids
- untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="np").input_ids
-
- if not np.array_equal(text_input_ids, untruncated_ids):
- removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1])
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
- )
-
- text_embeddings = self.text_encoder(input_ids=text_input_ids.astype(np.int32))[0]
- text_embeddings = np.repeat(text_embeddings, num_images_per_prompt, axis=0)
-
- # get unconditional embeddings for classifier free guidance
- if do_classifier_free_guidance:
- uncond_tokens: List[str]
- if negative_prompt is None:
- uncond_tokens = [""] * batch_size
- elif type(prompt) is not type(negative_prompt):
- raise TypeError(
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt] * batch_size
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- uncond_tokens = negative_prompt
-
- max_length = text_input_ids.shape[-1]
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="np",
- )
- uncond_embeddings = self.text_encoder(input_ids=uncond_input.input_ids.astype(np.int32))[0]
- uncond_embeddings = np.repeat(uncond_embeddings, num_images_per_prompt, axis=0)
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- text_embeddings = np.concatenate([uncond_embeddings, text_embeddings])
-
- return text_embeddings
-
- def __call__(
- self,
- prompt: Union[str, List[str]],
- image: Union[np.ndarray, PIL.Image.Image],
- strength: float = 0.8,
- num_inference_steps: Optional[int] = 50,
- guidance_scale: Optional[float] = 7.5,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- num_images_per_prompt: Optional[int] = 1,
- eta: Optional[float] = 0.0,
- generator: Optional[np.random.RandomState] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, np.ndarray], None]] = None,
- callback_steps: Optional[int] = 1,
- **kwargs,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`):
- The prompt or prompts to guide the image generation.
- image (`np.ndarray` or `PIL.Image.Image`):
- `Image`, or tensor representing an image batch, that will be used as the starting point for the
- process.
- strength (`float`, *optional*, defaults to 0.8):
- Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
- will be used as a starting point, adding more noise to it the larger the `strength`. The number of
- denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
- be maximum and the denoising process will run for the full number of iterations specified in
- `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference. This parameter will be modulated by `strength`.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
- 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
- if `guidance_scale` is less than `1`).
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- generator (`np.random.RandomState`, *optional*):
- A np.random.RandomState to make generation deterministic.
- output_type (`str`, *optional*, defaults to `"pil"`):
- The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
- A function that will be called every `callback_steps` steps during inference. The function will be
- called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function will be called. If not specified, the callback will be
- called at every step.
-
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is a list with the generated images, and the second element is a
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
- (nsfw) content, according to the `safety_checker`.
- """
- message = "Please use `image` instead of `init_image`."
- init_image = deprecate("init_image", "0.12.0", message, take_from=kwargs)
- image = init_image or image
-
- if isinstance(prompt, str):
- batch_size = 1
- elif isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- if strength < 0 or strength > 1:
- raise ValueError(f"The value of strength should in [0.0, 1.0] but is {strength}")
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- if generator is None:
- generator = np.random
-
- # set timesteps
- self.scheduler.set_timesteps(num_inference_steps)
-
- if isinstance(image, PIL.Image.Image):
- image = preprocess(image)
-
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- text_embeddings = self._encode_prompt(
- prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt
- )
-
- latents_dtype = text_embeddings.dtype
- image = image.astype(latents_dtype)
- # encode the init image into latents and scale the latents
- init_latents = self.vae_encoder(sample=image)[0]
- init_latents = 0.18215 * init_latents
-
- if isinstance(prompt, str):
- prompt = [prompt]
- if len(prompt) > init_latents.shape[0] and len(prompt) % init_latents.shape[0] == 0:
- # expand init_latents for batch_size
- deprecation_message = (
- f"You have passed {len(prompt)} text prompts (`prompt`), but only {init_latents.shape[0]} initial"
- " images (`image`). Initial images are now duplicating to match the number of text prompts. Note"
- " that this behavior is deprecated and will be removed in a version 1.0.0. Please make sure to update"
- " your script to pass as many initial images as text prompts to suppress this warning."
- )
- deprecate("len(prompt) != len(image)", "1.0.0", deprecation_message, standard_warn=False)
- additional_image_per_prompt = len(prompt) // init_latents.shape[0]
- init_latents = np.concatenate([init_latents] * additional_image_per_prompt * num_images_per_prompt, axis=0)
- elif len(prompt) > init_latents.shape[0] and len(prompt) % init_latents.shape[0] != 0:
- raise ValueError(
- f"Cannot duplicate `image` of batch size {init_latents.shape[0]} to {len(prompt)} text prompts."
- )
- else:
- init_latents = np.concatenate([init_latents] * num_images_per_prompt, axis=0)
-
- # get the original timestep using init_timestep
- offset = self.scheduler.config.get("steps_offset", 0)
- init_timestep = int(num_inference_steps * strength) + offset
- init_timestep = min(init_timestep, num_inference_steps)
-
- timesteps = self.scheduler.timesteps.numpy()[-init_timestep]
- timesteps = np.array([timesteps] * batch_size * num_images_per_prompt)
-
- # add noise to latents using the timesteps
- noise = generator.randn(*init_latents.shape).astype(latents_dtype)
- init_latents = self.scheduler.add_noise(
- torch.from_numpy(init_latents), torch.from_numpy(noise), torch.from_numpy(timesteps)
- )
- init_latents = init_latents.numpy()
-
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- latents = init_latents
-
- t_start = max(num_inference_steps - init_timestep + offset, 0)
- timesteps = self.scheduler.timesteps[t_start:].numpy()
-
- timestep_dtype = next(
- (input.type for input in self.unet.model.get_inputs() if input.name == "timestep"), "tensor(float)"
- )
- timestep_dtype = ORT_TO_NP_TYPE[timestep_dtype]
-
- for i, t in enumerate(self.progress_bar(timesteps)):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = np.concatenate([latents] * 2) if do_classifier_free_guidance else latents
- latent_model_input = self.scheduler.scale_model_input(torch.from_numpy(latent_model_input), t)
- latent_model_input = latent_model_input.cpu().numpy()
-
- # predict the noise residual
- timestep = np.array([t], dtype=timestep_dtype)
- noise_pred = self.unet(
- sample=latent_model_input, timestep=timestep, encoder_hidden_states=text_embeddings
- )[0]
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = np.split(noise_pred, 2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- # compute the previous noisy sample x_t -> x_t-1
- scheduler_output = self.scheduler.step(
- torch.from_numpy(noise_pred), t, torch.from_numpy(latents), **extra_step_kwargs
- )
- latents = scheduler_output.prev_sample.numpy()
-
- # call the callback, if provided
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- latents = 1 / 0.18215 * latents
- # image = self.vae_decoder(latent_sample=latents)[0]
- # it seems likes there is a strange result for using half-precision vae decoder if batchsize>1
- image = np.concatenate(
- [self.vae_decoder(latent_sample=latents[i : i + 1])[0] for i in range(latents.shape[0])]
- )
-
- image = np.clip(image / 2 + 0.5, 0, 1)
- image = image.transpose((0, 2, 3, 1))
-
- if self.safety_checker is not None:
- safety_checker_input = self.feature_extractor(
- self.numpy_to_pil(image), return_tensors="np"
- ).pixel_values.astype(image.dtype)
- # safety_checker does not support batched inputs yet
- images, has_nsfw_concept = [], []
- for i in range(image.shape[0]):
- image_i, has_nsfw_concept_i = self.safety_checker(
- clip_input=safety_checker_input[i : i + 1], images=image[i : i + 1]
- )
- images.append(image_i)
- has_nsfw_concept.append(has_nsfw_concept_i[0])
- image = np.concatenate(images)
- else:
- has_nsfw_concept = None
-
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return (image, has_nsfw_concept)
-
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
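The same pipeline class ships with `diffusers` releases from this period, so it can be exercised end to end roughly as below; the checkpoint name, `revision="onnx"`, the execution provider, and the input file name are assumptions that depend on which ONNX export is actually available:

```python
import PIL.Image
from diffusers import OnnxStableDiffusionImg2ImgPipeline

# Checkpoint and revision are assumptions; any repo that publishes an ONNX export works.
pipe = OnnxStableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    revision="onnx",
    provider="CPUExecutionProvider",
)

init_image = PIL.Image.open("sketch.png").convert("RGB").resize((512, 512))  # hypothetical input file
result = pipe(
    prompt="a watercolor landscape",
    image=init_image,
    strength=0.75,           # how far to move away from init_image, in [0, 1]
    guidance_scale=7.5,
    num_inference_steps=50,
)
result.images[0].save("out.png")
```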
diff --git a/spaces/YuAnthony/Audio-Caption/data_handling/.ipynb_checkpoints/collate_fn_test-checkpoint.py b/spaces/YuAnthony/Audio-Caption/data_handling/.ipynb_checkpoints/collate_fn_test-checkpoint.py
deleted file mode 100644
index b0891eeadde635953497663f310214e48878612f..0000000000000000000000000000000000000000
--- a/spaces/YuAnthony/Audio-Caption/data_handling/.ipynb_checkpoints/collate_fn_test-checkpoint.py
+++ /dev/null
@@ -1,30 +0,0 @@
-from torch import cat as pt_cat, zeros as pt_zeros, from_numpy, Tensor
-def clotho_collate_fn_test(batch, nb_t_steps, input_pad_at):
- if type(nb_t_steps) == str:
- truncate_fn = max if nb_t_steps.lower() == 'max' else min
- in_t_steps = truncate_fn([i[0].shape[0] for i in batch])
- else:
- in_t_steps = nb_t_steps
-
- in_dim = batch[0][0].shape[-1]
-
- input_tensor = []
-
- for in_b, filename in batch:
- if in_t_steps >= in_b.shape[0]:
- padding = pt_zeros(in_t_steps - in_b.shape[0], in_dim).float()
- data = [from_numpy(in_b).float()]
- if input_pad_at.lower() == 'start':
- data.insert(0, padding)
- else:
- data.append(padding)
- tmp_in: Tensor = pt_cat(data)
- else:
- tmp_in: Tensor = from_numpy(in_b[:in_t_steps, :]).float()
- input_tensor.append(tmp_in.unsqueeze_(0))
-
- input_tensor = pt_cat(input_tensor)
-
- filename = [i[1] for i in batch]
-
- return input_tensor, filename
\ No newline at end of file
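A sketch of plugging the collate function above into a test `DataLoader`. The import path and the synthetic `(features, file_name)` pairs are assumptions; the real Clotho test dataset yields items of the same shape:

```python
import numpy as np
from functools import partial
from torch.utils.data import DataLoader

# Import path assumed; the checkpoint copy above lives under data_handling/.
from data_handling.collate_fn_test import clotho_collate_fn_test

# Synthetic stand-in for the Clotho test set: (features ndarray, file name) pairs.
dummy = [(np.random.rand(np.random.randint(40, 60), 64).astype(np.float32), f'clip_{i}.wav')
         for i in range(8)]

collate = partial(clotho_collate_fn_test, nb_t_steps='max', input_pad_at='start')
loader = DataLoader(dummy, batch_size=4, shuffle=False, collate_fn=collate)

for features, file_names in loader:
    # features: (4, T_max, 64) float tensor, zero-padded at the start of each clip
    print(features.shape, file_names)
```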
diff --git a/spaces/abdvl/datahub_qa_bot/docs/api/tutorials/references/prepare-datahub.md b/spaces/abdvl/datahub_qa_bot/docs/api/tutorials/references/prepare-datahub.md
deleted file mode 100644
index b9b7f477f0ceea58721426a15de890d8d3edce50..0000000000000000000000000000000000000000
--- a/spaces/abdvl/datahub_qa_bot/docs/api/tutorials/references/prepare-datahub.md
+++ /dev/null
@@ -1,39 +0,0 @@
-# Preparing Your Local DataHub Environment
-
-## Deploy DataHub Quickstart
-
-You'll need a local instance of DataHub running for this tutorial:
-- Follow the [DataHub Quickstart Guide](/docs/quickstart.md) to get one up and running.
-
-You will also need the DataHub CLI installed:
-```shell
-python3 -m pip install --upgrade pip wheel setuptools
-python3 -m pip install --upgrade acryl-datahub
-```
-If you can see the DataHub CLI version printed like this, you're good to go.
-```shell
-$ datahub version
-DataHub CLI version: 0.10.0.1
-Python version: 3.9.6 (default, Jun 16 2022, 21:38:53)
-[Clang 13.0.0 (clang-1300.0.27.3)]
-```
-
-Run the DataHub quickstart. This will deploy a local DataHub instance at http://localhost:9002.
-```shell
-datahub docker quickstart
-```
-After logging in with the default credentials (`username: datahub / password: datahub`), you will see DataHub ready for you.
-
-
-
-Please refer to [DataHub Quickstart Guide](/docs/quickstart.md) for more information.
-
-## Ingest Sample Data
-We will use the sample data provided with the DataHub quickstart.
-If you already have data in your DataHub instance, you can skip this part.
-
-```shell
-datahub docker ingest-sample-data
-```
-This will ingest various entities such as datasets, terms, and tags into your local DataHub.
-
-
-Now you're ready to start!
\ No newline at end of file
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/furthest_point_sample.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/furthest_point_sample.py
deleted file mode 100644
index 374b7a878f1972c183941af28ba1df216ac1a60f..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/furthest_point_sample.py
+++ /dev/null
@@ -1,83 +0,0 @@
-import torch
-from torch.autograd import Function
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext('_ext', [
- 'furthest_point_sampling_forward',
- 'furthest_point_sampling_with_dist_forward'
-])
-
-
-class FurthestPointSampling(Function):
- """Uses iterative furthest point sampling to select a set of features whose
- corresponding points have the furthest distance."""
-
- @staticmethod
- def forward(ctx, points_xyz: torch.Tensor,
- num_points: int) -> torch.Tensor:
- """
- Args:
- points_xyz (Tensor): (B, N, 3) where N > num_points.
- num_points (int): Number of points in the sampled set.
-
- Returns:
- Tensor: (B, num_points) indices of the sampled points.
- """
- assert points_xyz.is_contiguous()
-
- B, N = points_xyz.size()[:2]
- output = torch.cuda.IntTensor(B, num_points)
- temp = torch.cuda.FloatTensor(B, N).fill_(1e10)
-
- ext_module.furthest_point_sampling_forward(
- points_xyz,
- temp,
- output,
- b=B,
- n=N,
- m=num_points,
- )
- if torch.__version__ != 'parrots':
- ctx.mark_non_differentiable(output)
- return output
-
- @staticmethod
- def backward(xyz, a=None):
- return None, None
-
-
-class FurthestPointSamplingWithDist(Function):
- """Uses iterative furthest point sampling to select a set of features whose
- corresponding points have the furthest distance."""
-
- @staticmethod
- def forward(ctx, points_dist: torch.Tensor,
- num_points: int) -> torch.Tensor:
- """
- Args:
- points_dist (Tensor): (B, N, N) Distance between each point pair.
- num_points (int): Number of points in the sampled set.
-
- Returns:
- Tensor: (B, num_points) indices of the sampled points.
- """
- assert points_dist.is_contiguous()
-
- B, N, _ = points_dist.size()
- output = points_dist.new_zeros([B, num_points], dtype=torch.int32)
- temp = points_dist.new_zeros([B, N]).fill_(1e10)
-
- ext_module.furthest_point_sampling_with_dist_forward(
- points_dist, temp, output, b=B, n=N, m=num_points)
- if torch.__version__ != 'parrots':
- ctx.mark_non_differentiable(output)
- return output
-
- @staticmethod
- def backward(xyz, a=None):
- return None, None
-
-
-furthest_point_sample = FurthestPointSampling.apply
-furthest_point_sample_with_dist = FurthestPointSamplingWithDist.apply
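A sketch of the expected call pattern for the ops above; both kernels are CUDA-only, and the import path is an assumption (the file here is a vendored copy of `mmcv.ops.furthest_point_sample`):

```python
import torch
from mmcv.ops import furthest_point_sample  # import path assumed

# (B, N, 3) point coordinates; the op expects contiguous CUDA tensors.
xyz = torch.rand(2, 4096, 3, device='cuda').contiguous()
idx = furthest_point_sample(xyz, 1024)               # (2, 1024) int32 sample indices
sampled = torch.gather(xyz, 1, idx.long().unsqueeze(-1).expand(-1, -1, 3))
print(sampled.shape)                                 # torch.Size([2, 1024, 3])
```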
diff --git a/spaces/abrar-lohia/text-2-character-anim/VQTrans/checkpoints/train_vq.py b/spaces/abrar-lohia/text-2-character-anim/VQTrans/checkpoints/train_vq.py
deleted file mode 100644
index d89b9930ba1262747542df3d5b2f03f8fab1b04a..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/VQTrans/checkpoints/train_vq.py
+++ /dev/null
@@ -1,171 +0,0 @@
-import os
-import json
-
-import torch
-import torch.optim as optim
-from torch.utils.tensorboard import SummaryWriter
-
-import models.vqvae as vqvae
-import utils.losses as losses
-import options.option_vq as option_vq
-import utils.utils_model as utils_model
-from dataset import dataset_VQ, dataset_TM_eval
-import utils.eval_trans as eval_trans
-from options.get_eval_option import get_opt
-from models.evaluator_wrapper import EvaluatorModelWrapper
-import warnings
-warnings.filterwarnings('ignore')
-from utils.word_vectorizer import WordVectorizer
-
-def update_lr_warm_up(optimizer, nb_iter, warm_up_iter, lr):
-
- current_lr = lr * (nb_iter + 1) / (warm_up_iter + 1)
- for param_group in optimizer.param_groups:
- param_group["lr"] = current_lr
-
- return optimizer, current_lr
-
-##### ---- Exp dirs ---- #####
-args = option_vq.get_args_parser()
-torch.manual_seed(args.seed)
-
-args.out_dir = os.path.join(args.out_dir, f'{args.exp_name}')
-os.makedirs(args.out_dir, exist_ok = True)
-
-##### ---- Logger ---- #####
-logger = utils_model.get_logger(args.out_dir)
-writer = SummaryWriter(args.out_dir)
-logger.info(json.dumps(vars(args), indent=4, sort_keys=True))
-
-
-
-w_vectorizer = WordVectorizer('./glove', 'our_vab')
-
-if args.dataname == 'kit' :
- dataset_opt_path = 'checkpoints/kit/Comp_v6_KLD005/opt.txt'
- args.nb_joints = 21
-
-else :
- dataset_opt_path = 'checkpoints/t2m/Comp_v6_KLD005/opt.txt'
- args.nb_joints = 22
-
-logger.info(f'Training on {args.dataname}, motions have {args.nb_joints} joints')
-
-wrapper_opt = get_opt(dataset_opt_path, torch.device('cuda'))
-eval_wrapper = EvaluatorModelWrapper(wrapper_opt)
-
-
-##### ---- Dataloader ---- #####
-train_loader = dataset_VQ.DATALoader(args.dataname,
- args.batch_size,
- window_size=args.window_size,
- unit_length=2**args.down_t)
-
-train_loader_iter = dataset_VQ.cycle(train_loader)
-
-val_loader = dataset_TM_eval.DATALoader(args.dataname, False,
- 32,
- w_vectorizer,
- unit_length=2**args.down_t)
-
-##### ---- Network ---- #####
-net = vqvae.HumanVQVAE(args, ## use args to define different parameters in different quantizers
- args.nb_code,
- args.code_dim,
- args.output_emb_width,
- args.down_t,
- args.stride_t,
- args.width,
- args.depth,
- args.dilation_growth_rate,
- args.vq_act,
- args.vq_norm)
-
-
-if args.resume_pth :
- logger.info('loading checkpoint from {}'.format(args.resume_pth))
- ckpt = torch.load(args.resume_pth, map_location='cpu')
- net.load_state_dict(ckpt['net'], strict=True)
-net.train()
-net.cuda()
-
-##### ---- Optimizer & Scheduler ---- #####
-optimizer = optim.AdamW(net.parameters(), lr=args.lr, betas=(0.9, 0.99), weight_decay=args.weight_decay)
-scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=args.lr_scheduler, gamma=args.gamma)
-
-
-Loss = losses.ReConsLoss(args.recons_loss, args.nb_joints)
-
-##### ------ warm-up ------- #####
-avg_recons, avg_perplexity, avg_commit = 0., 0., 0.
-
-for nb_iter in range(1, args.warm_up_iter):
-
- optimizer, current_lr = update_lr_warm_up(optimizer, nb_iter, args.warm_up_iter, args.lr)
-
- gt_motion = next(train_loader_iter)
- gt_motion = gt_motion.cuda().float() # (bs, 64, dim)
-
- pred_motion, loss_commit, perplexity = net(gt_motion)
- loss_motion = Loss(pred_motion, gt_motion)
- loss_vel = Loss.forward_vel(pred_motion, gt_motion)
-
- loss = loss_motion + args.commit * loss_commit + args.loss_vel * loss_vel
-
- optimizer.zero_grad()
- loss.backward()
- optimizer.step()
-
- avg_recons += loss_motion.item()
- avg_perplexity += perplexity.item()
- avg_commit += loss_commit.item()
-
- if nb_iter % args.print_iter == 0 :
- avg_recons /= args.print_iter
- avg_perplexity /= args.print_iter
- avg_commit /= args.print_iter
-
- logger.info(f"Warmup. Iter {nb_iter} : lr {current_lr:.5f} \t Commit. {avg_commit:.5f} \t PPL. {avg_perplexity:.2f} \t Recons. {avg_recons:.5f}")
-
- avg_recons, avg_perplexity, avg_commit = 0., 0., 0.
-
-##### ---- Training ---- #####
-avg_recons, avg_perplexity, avg_commit = 0., 0., 0.
-best_fid, best_iter, best_div, best_top1, best_top2, best_top3, best_matching, writer, logger = eval_trans.evaluation_vqvae(args.out_dir, val_loader, net, logger, writer, 0, best_fid=1000, best_iter=0, best_div=100, best_top1=0, best_top2=0, best_top3=0, best_matching=100, eval_wrapper=eval_wrapper)
-
-for nb_iter in range(1, args.total_iter + 1):
-
- gt_motion = next(train_loader_iter)
- gt_motion = gt_motion.cuda().float() # bs, nb_joints, joints_dim, seq_len
-
- pred_motion, loss_commit, perplexity = net(gt_motion)
- loss_motion = Loss(pred_motion, gt_motion)
- loss_vel = Loss.forward_vel(pred_motion, gt_motion)
-
- loss = loss_motion + args.commit * loss_commit + args.loss_vel * loss_vel
-
- optimizer.zero_grad()
- loss.backward()
- optimizer.step()
- scheduler.step()
-
- avg_recons += loss_motion.item()
- avg_perplexity += perplexity.item()
- avg_commit += loss_commit.item()
-
- if nb_iter % args.print_iter == 0 :
- avg_recons /= args.print_iter
- avg_perplexity /= args.print_iter
- avg_commit /= args.print_iter
-
- writer.add_scalar('./Train/L1', avg_recons, nb_iter)
- writer.add_scalar('./Train/PPL', avg_perplexity, nb_iter)
- writer.add_scalar('./Train/Commit', avg_commit, nb_iter)
-
- logger.info(f"Train. Iter {nb_iter} : \t Commit. {avg_commit:.5f} \t PPL. {avg_perplexity:.2f} \t Recons. {avg_recons:.5f}")
-
- avg_recons, avg_perplexity, avg_commit = 0., 0., 0.
-
- if nb_iter % args.eval_iter==0 :
- best_fid, best_iter, best_div, best_top1, best_top2, best_top3, best_matching, writer, logger = eval_trans.evaluation_vqvae(args.out_dir, val_loader, net, logger, writer, nb_iter, best_fid, best_iter, best_div, best_top1, best_top2, best_top3, best_matching, eval_wrapper=eval_wrapper)
-
\ No newline at end of file
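The warm-up loop in `train_vq.py` above ramps the learning rate linearly from roughly zero up to `args.lr` over the warm-up iterations. A tiny standalone check of that schedule; the concrete `lr` and `warm_up_iter` values here are illustrative only, not necessarily the repository defaults:

```python
# Linear warm-up used in train_vq.py: lr_t = lr * (t + 1) / (warm_up_iter + 1)
lr, warm_up_iter = 2e-4, 1000   # illustrative values only

def warmup_lr(t: int) -> float:
    return lr * (t + 1) / (warm_up_iter + 1)

print([warmup_lr(t) for t in (1, 500, 999)])
# approximately [4.0e-07, 1.0e-04, 2.0e-04] -- a straight ramp ending at lr
```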
diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/clock.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/clock.py
deleted file mode 100644
index f05cf948d2c517f34a356c6568e3951fb91a06b9..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/clock.py
+++ /dev/null
@@ -1,642 +0,0 @@
-"""Precise framerate calculation function scheduling.
-
-The :py:mod:`~pyglet.clock` module allows you to schedule functions
-to run periodically, or for one-shot future execution. pyglet's default
-event loop (:py:func:`~pyglet.app.run`) keeps an internal instance of
-a :py:class:`~pyglet.clock.Clock`, which is ticked automatically.
-
-.. note:: Some internal modules will schedule items on the clock. If you
- are using a custom event loop, always remember to `tick` the clock!
-
-Scheduling
-==========
-
-You can schedule a function to be called every time the clock is ticked::
-
- def callback(dt):
- print(f"{dt} seconds since last callback")
-
- clock.schedule(callback)
-
-The `schedule_interval` method causes a function to be called every "n"
-seconds::
-
- clock.schedule_interval(callback, 0.5) # called twice a second
-
-The `schedule_once` method causes a function to be called once "n" seconds
-in the future::
-
- clock.schedule_once(callback, 5) # called in 5 seconds
-
-All the `schedule` methods will pass on any additional args or keyword args
-you specify to the callback function::
-
- def move(dt, velocity, sprite):
- sprite.position += dt * velocity
-
- clock.schedule(move, velocity=5.0, sprite=alien)
-
-You can cancel a function scheduled with any of these methods using
-`unschedule`::
-
- clock.unschedule(move)
-
-Using multiple clocks
-=====================
-
-The clock functions are all relayed to an instance of
-:py:class:`~pyglet.clock.Clock` which is initialised with the module. You can
-get this instance to use directly::
-
- clk = pyglet.clock.get_default()
-
-You can also replace the default clock with your own::
-
- myclk = pyglet.clock.Clock()
- pyglet.clock.set_default(myclk)
-
-Each clock maintains its own set of scheduled functions and frequency
-measurement. Each clock must be "ticked" separately.
-
-Multiple and derived clocks potentially allow you to separate "game-time" and
-"wall-time", or to synchronise your clock to an audio or video stream instead
-of the system clock.
-"""
-
-import time as _time
-
-from typing import Callable
-from heapq import heappop as _heappop
-from heapq import heappush as _heappush
-from heapq import heappushpop as _heappushpop
-from operator import attrgetter as _attrgetter
-from collections import deque as _deque
-
-
-class _ScheduledItem:
- __slots__ = ['func', 'args', 'kwargs']
-
- def __init__(self, func, args, kwargs):
- self.func = func
- self.args = args
- self.kwargs = kwargs
-
-
-class _ScheduledIntervalItem:
- __slots__ = ['func', 'interval', 'last_ts', 'next_ts', 'args', 'kwargs']
-
- def __init__(self, func, interval, last_ts, next_ts, args, kwargs):
- self.func = func
- self.interval = interval
- self.last_ts = last_ts
- self.next_ts = next_ts
- self.args = args
- self.kwargs = kwargs
-
- def __lt__(self, other):
- try:
- return self.next_ts < other.next_ts
- except AttributeError:
- return self.next_ts < other
-
-
-class Clock:
- """Class for calculating and limiting framerate.
-
- It is also used for calling scheduled functions.
- """
- # List of functions to call every tick.
- _schedule_items = None
-
- # List of schedule interval items kept in sort order.
- _schedule_interval_items = None
-
- # If True, a sleep(0) is inserted on every tick.
- _force_sleep = False
-
- def __init__(self, time_function=_time.perf_counter):
- """Initialise a Clock, with optional custom time function.
-
- You can provide a custom time function to return the elapsed
- time of the application, in seconds. Defaults to time.perf_counter,
- but can be replaced to allow for easy time dilation effects or game
- pausing.
- """
- self.time = time_function
- self.next_ts = self.time()
- self.last_ts = None
-
- # Used by self.get_frequency to show update frequency
- self.times = _deque()
- self.cumulative_time = 0
- self.window_size = 60
-
- self._schedule_items = []
- self._schedule_interval_items = []
- self._current_interval_item = None
-
- @staticmethod
- def sleep(microseconds):
- _time.sleep(microseconds * 1e-6)
-
- def update_time(self):
- """Get the elapsed time since the last call to `update_time`.
-
- This updates the clock's internal measure of time and returns
- the difference since the last update (or since the clock was created).
-
- .. versionadded:: 1.2
-
- :rtype: float
- :return: The number of seconds since the last `update_time`, or 0
- if this was the first time it was called.
- """
- ts = self.time()
- if self.last_ts is None:
- delta_t = 0
- else:
- delta_t = ts - self.last_ts
- self.times.appendleft(delta_t)
- if len(self.times) > self.window_size:
- self.cumulative_time -= self.times.pop()
- self.cumulative_time += delta_t
- self.last_ts = ts
-
- return delta_t
-
- def call_scheduled_functions(self, dt):
- """Call scheduled functions that elapsed on the last `update_time`.
-
- .. versionadded:: 1.2
-
- :Parameters:
- dt : float
- The elapsed time since the last update to pass to each
- scheduled function. This is *not* used to calculate which
- functions have elapsed.
-
- :rtype: bool
- :return: True if any functions were called, otherwise False.
- """
- now = self.last_ts
- result = False # flag indicates if any function was called
-
- # handle items scheduled for every tick
- if self._schedule_items:
- result = True
- # duplicate list in case event unschedules itself
- for item in list(self._schedule_items):
- item.func(dt, *item.args, **item.kwargs)
-
- # check the next scheduled item that is not called each tick
- # if it is scheduled in the future, then exit
- interval_items = self._schedule_interval_items
- try:
- if interval_items[0].next_ts > now:
- return result
-
- # raised when the interval_items list is empty
- except IndexError:
- return result
-
- # NOTE: there is no special handling required to manage things
- # that are scheduled during this loop, due to the heap
- self._current_interval_item = item = None
- get_soft_next_ts = self._get_soft_next_ts
- while interval_items:
-
- # the scheduler will hold onto a reference to an item in
- # case it needs to be rescheduled. it is more efficient
- # to push and pop the heap at once rather than two operations
- if item is None:
- item = _heappop(interval_items)
- else:
- item = _heappushpop(interval_items, item)
-
- # a scheduled function may try to unschedule itself,
- # so we need to keep a reference to the current
- # item no longer on heap to be able to check
- self._current_interval_item = item
-
- # if next item is scheduled in the future then break
- if item.next_ts > now:
- break
-
- # execute the callback
- try:
- item.func(now - item.last_ts, *item.args, **item.kwargs)
- except ReferenceError:
- pass # weakly-referenced object no longer exists.
-
- if item.interval:
-
- # Try to keep timing regular, even if overslept this time;
- # but don't schedule in the past (which could lead to
- # infinitely-worsening error).
- item.next_ts = item.last_ts + item.interval
- item.last_ts = now
-
- # test the schedule for the next execution
- if item.next_ts <= now:
- # the scheduled time of this item has already
- # passed, so it must be rescheduled
- if now - item.next_ts < 0.05:
- # missed execution time by 'reasonable' amount, so
- # reschedule at normal interval
- item.next_ts = now + item.interval
- else:
- # missed by significant amount, now many events have
- # likely missed execution. do a soft re-schedule to
- # avoid lumping many events together.
- # in this case, the next dt will not be accurate
- item.next_ts = get_soft_next_ts(now, item.interval)
- item.last_ts = item.next_ts - item.interval
- else:
- # not an interval, so this item will not be rescheduled
- self._current_interval_item = item = None
-
- if item is not None:
- _heappush(interval_items, item)
-
- return True
-
- def tick(self, poll=False):
- """Signify that one frame has passed.
-
- This will call any scheduled functions that have elapsed.
-
- :Parameters:
- `poll` : bool
- If True, the function will call any scheduled functions
- but will not sleep or busy-wait for any reason. Recommended
- for advanced applications managing their own sleep timers
- only.
-
- Since pyglet 1.1.
-
- :rtype: float
- :return: The number of seconds since the last "tick", or 0 if this was
- the first frame.
- """
- if not poll and self._force_sleep:
- self.sleep(0)
-
- delta_t = self.update_time()
- self.call_scheduled_functions(delta_t)
- return delta_t
-
- def get_sleep_time(self, sleep_idle):
- """Get the time until the next item is scheduled.
-
- Applications can choose to continue receiving updates at the
- maximum framerate during idle time (when no functions are scheduled),
- or they can sleep through their idle time and allow the CPU to
- switch to other processes or run in low-power mode.
-
- If `sleep_idle` is ``True`` the latter behaviour is selected, and
- ``None`` will be returned if there are no scheduled items.
-
- Otherwise, if `sleep_idle` is ``False``, or if any scheduled items
- exist, a value of 0 is returned.
-
- :Parameters:
- `sleep_idle` : bool
- If True, the application intends to sleep through its idle
- time; otherwise it will continue ticking at the maximum
- frame rate allowed.
-
- :rtype: float
- :return: Time until the next scheduled event in seconds, or ``None``
- if there is no event scheduled.
-
- .. versionadded:: 1.1
- """
- if self._schedule_items or not sleep_idle:
- return 0.0
-
- if self._schedule_interval_items:
- return max(self._schedule_interval_items[0].next_ts - self.time(), 0.0)
-
- return None
-
- def get_frequency(self):
- """Get the average clock update frequency of recent history.
-
- The result is the average of a sliding window of the last "n" updates,
- where "n" is some number designed to cover approximately 1 second.
- This is **not** the Window redraw rate.
-
- :rtype: float
- :return: The measured updates per second.
- """
- if not self.cumulative_time:
- return 0
- return len(self.times) / self.cumulative_time
-
- def _get_nearest_ts(self):
- """Get the nearest timestamp.
-
- Schedule from now, unless now is sufficiently close to last_ts, in
- which case use last_ts. This clusters together scheduled items that
- probably want to be scheduled together. The old (pre 1.1.1)
- behaviour was to always use self.last_ts, and not look at ts. The
- new behaviour is needed because clock ticks can now be quite
- irregular, and span several seconds.
- """
- last_ts = self.last_ts or self.next_ts
- ts = self.time()
- if ts - last_ts > 0.2:
- return ts
- return last_ts
-
- def _get_soft_next_ts(self, last_ts, interval):
-
- def taken(ts, e):
- """Check if `ts` has already got an item scheduled nearby."""
- # TODO this function is slow and called very often.
- # Optimise it, maybe?
- for item in self._schedule_interval_items:
- if abs(item.next_ts - ts) <= e:
- return True
- elif item.next_ts > ts + e:
- return False
-
- return False
-
- # sorted list is required to produce expected results
- # taken() will iterate through the heap, expecting it to be sorted
- # and will not always catch the smallest value, so sort here.
-        # do not remove the sort key... it is faster than relying on item comparisons
- # NOTE: do not rewrite as popping from heap, as that is super slow!
- self._schedule_interval_items.sort(key=_attrgetter('next_ts'))
-
- # Binary division over interval:
- #
- # 0 interval
- # |--------------------------|
- # 5 3 6 2 7 4 8 1 Order of search
- #
- # i.e., first scheduled at interval,
- # then at interval/2
- # then at interval/4
- # then at interval*3/4
- # then at ...
- #
- # Schedule is hopefully then evenly distributed for any interval,
- # and any number of scheduled functions.
-
- next_ts = last_ts + interval
- if not taken(next_ts, interval / 4):
- return next_ts
-
- dt = interval
- divs = 1
- while True:
- next_ts = last_ts
- for i in range(divs - 1):
- next_ts += dt
- if not taken(next_ts, dt / 4):
- return next_ts
- dt /= 2
- divs *= 2
-
- # Avoid infinite loop in pathological case
- if divs > 16:
- return next_ts
-
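    # Worked illustration of the binary-division search above (not in the original
    # source): for last_ts=0.0 and interval=1.0, candidate timestamps are probed in
    # the order 1.0, 0.5, 0.25, 0.75, 0.125, 0.375, 0.625, 0.875, ... until a free
    # slot is found. For example:
    #
    #   clock = Clock()
    #   clock.schedule_interval_soft(lambda dt: None, 1.0)  # lands on the full interval
    #   clock.schedule_interval_soft(lambda dt: None, 1.0)  # pushed to the half-way point
    #   clock.schedule_interval_soft(lambda dt: None, 1.0)  # pushed to a quarter point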
- def schedule(self, func, *args, **kwargs):
- """Schedule a function to be called every frame.
-
- The function should have a prototype that includes ``dt`` as the
- first argument, which gives the elapsed time, in seconds, since the
- last clock tick. Any additional arguments given to this function
- are passed on to the callback::
-
- def callback(dt, *args, **kwargs):
- pass
-
- :Parameters:
- `func` : callable
- The function to call each frame.
- """
- item = _ScheduledItem(func, args, kwargs)
- self._schedule_items.append(item)
-
- def schedule_once(self, func, delay, *args, **kwargs):
- """Schedule a function to be called once after `delay` seconds.
-
- The callback function prototype is the same as for `schedule`.
-
- :Parameters:
- `func` : callable
- The function to call when the timer lapses.
- `delay` : float
- The number of seconds to wait before the timer lapses.
- """
- last_ts = self._get_nearest_ts()
- next_ts = last_ts + delay
- item = _ScheduledIntervalItem(func, 0, last_ts, next_ts, args, kwargs)
- _heappush(self._schedule_interval_items, item)
-
- def schedule_interval(self, func, interval, *args, **kwargs):
- """Schedule a function to be called every `interval` seconds.
-
- Specifying an interval of 0 prevents the function from being
- called again (see `schedule` to call a function as often as possible).
-
- The callback function prototype is the same as for `schedule`.
-
- :Parameters:
- `func` : callable
- The function to call when the timer lapses.
- `interval` : float
- The number of seconds to wait between each call.
-
- """
- last_ts = self._get_nearest_ts()
- next_ts = last_ts + interval
- item = _ScheduledIntervalItem(func, interval, last_ts, next_ts, args, kwargs)
- _heappush(self._schedule_interval_items, item)
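    # Illustrative usage of the three scheduling styles above (not part of the
    # original source); the callback and timings are arbitrary.
    #
    #   def update(dt, label):
    #       print(f"{label}: {dt:.3f}s since last call")
    #
    #   clock = Clock()
    #   clock.schedule(update, "every frame")           # runs on every tick()
    #   clock.schedule_once(update, 2.0, "one shot")    # runs once, about 2 s from now
    #   clock.schedule_interval(update, 0.5, "timer")   # runs roughly every 0.5 s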
-
- def schedule_interval_soft(self, func, interval, *args, **kwargs):
- """Schedule a function to be called every ``interval`` seconds.
-
- This method is similar to `schedule_interval`, except that the
- clock will move the interval out of phase with other scheduled
- functions in order to distribute CPU load more evenly.
-
- This is useful for functions that need to be called regularly,
- but not relative to the initial start time. :py:mod:`pyglet.media`
- does this for scheduling audio buffer updates, which need to occur
- regularly -- if all audio updates are scheduled at the same time
- (for example, mixing several tracks of a music score, or playing
- multiple videos back simultaneously), the resulting load on the
- CPU is excessive for those intervals but idle outside. Using
- the soft interval scheduling, the load is more evenly distributed.
-
- Soft interval scheduling can also be used as an easy way to schedule
- graphics animations out of phase; for example, multiple flags
- waving in the wind.
-
- .. versionadded:: 1.1
-
- :Parameters:
- `func` : callable
- The function to call when the timer lapses.
- `interval` : float
- The number of seconds to wait between each call.
-
- """
- next_ts = self._get_soft_next_ts(self._get_nearest_ts(), interval)
- last_ts = next_ts - interval
- item = _ScheduledIntervalItem(func, interval, last_ts, next_ts, args, kwargs)
- _heappush(self._schedule_interval_items, item)
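    # Illustrative sketch (an assumption, not original source): soft-scheduling
    # several callbacks with the same interval spreads their phases apart, so they
    # do not all fire on the same tick.
    #
    #   clock = Clock()
    #   for flag_id in range(4):
    #       clock.schedule_interval_soft(lambda dt, i=flag_id: print(f"flag {i} waves"), 1 / 30)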
-
- def unschedule(self, func):
- """Remove a function from the schedule.
-
- If the function appears in the schedule more than once, all occurrences
- are removed. If the function was not scheduled, no error is raised.
-
- :Parameters:
- `func` : callable
- The function to remove from the schedule.
-
- """
-        # cleverly remove the item without disturbing the heap:
- # 1. set function to an empty lambda -- original function is not called
- # 2. set interval to 0 -- item will be removed from heap eventually
- valid_items = set(item for item in self._schedule_interval_items if item.func == func)
-
- if self._current_interval_item:
- if self._current_interval_item.func == func:
- valid_items.add(self._current_interval_item)
-
- for item in valid_items:
- item.interval = 0
- item.func = lambda x, *args, **kwargs: x
-
- self._schedule_items = [i for i in self._schedule_items if i.func != func]
-
-
-# Default clock.
-_default = Clock()
-
-
-def set_default(default) -> None:
- """Set the default clock to use for all module-level functions.
-
- By default, an instance of :py:class:`~pyglet.clock.Clock` is used.
- """
- global _default
- _default = default
-
-
-def get_default():
- """Get the pyglet default Clock.
-
- Return the :py:class:`~pyglet.clock.Clock` instance that is used by all
- module-level clock functions.
- """
- return _default
-
-
-def tick(poll: bool = False) -> float:
- """Signify that one frame has passed on the default clock.
-
- This will call any scheduled functions that have elapsed,
- and return the elapsed seconds since the last tick. The
- return value will be 0.0 if this is the first tick.
-
- :Parameters:
- `poll` : bool
- If True, the function will call any scheduled functions
- but will not sleep or busy-wait for any reason. Recommended
- for advanced applications managing their own sleep timers
- only.
-
- Since pyglet 1.1.
- """
- return _default.tick(poll)
-
-
-def get_sleep_time(sleep_idle: bool) -> float:
- """Get the time until the next item is scheduled on the default clock.
-
- Returns the time until the next scheduled event in seconds, or
- ``None`` if there is no event scheduled.
-
- See `Clock.get_sleep_time` for details.
-
- :Parameters:
- `sleep_idle` : bool
- If True, the application intends to sleep through its idle
- time; otherwise it will continue ticking at the maximum
- frame rate allowed.
- """
- return _default.get_sleep_time(sleep_idle)
-
-
-def get_frequency() -> float:
- """Get the average clock update frequency.
-
- The result is the sliding average of the last "n" updates,
- where "n" is some number designed to cover approximately 1
- second. This is the internal clock update rate, **not** the
- Window redraw rate. Platform events, such as moving the
- mouse rapidly, will cause the clock to refresh more often.
- """
- return _default.get_frequency()
-
-
-def schedule(func: Callable, *args, **kwargs) -> None:
- """Schedule 'func' to be called every frame on the default clock.
-
- The arguments passed to func are ``dt``, followed by any ``*args`` and
- ``**kwargs`` given here.
- """
- _default.schedule(func, *args, **kwargs)
-
-
-def schedule_interval(func: Callable, interval: float, *args, **kwargs) -> None:
- """Schedule ``func`` on the default clock every ``interval`` seconds.
-
- The arguments passed to ``func`` are ``dt`` (time since last function
- call), followed by any ``*args`` and ``**kwargs`` given here.
- """
- _default.schedule_interval(func, interval, *args, **kwargs)
-
-
-def schedule_interval_soft(func: Callable, interval: float, *args, **kwargs) -> None:
- """Schedule ``func`` on the default clock every interval seconds.
-
- The clock will move the interval out of phase with other scheduled
- functions in order to distribute CPU load more evenly.
-
- The arguments passed to ``func`` are ``dt`` (time since last function
- call), followed by any ``*args`` and ``**kwargs`` given here.
-
- :see: `Clock.schedule_interval_soft`
- """
- _default.schedule_interval_soft(func, interval, *args, **kwargs)
-
-
-def schedule_once(func: Callable, delay: float, *args, **kwargs) -> None:
- """Schedule ``func`` to be called once after ``delay`` seconds.
-
- This function uses the default clock. ``delay`` can be a float. The
- arguments passed to ``func`` are ``dt`` (time since last function call),
- followed by any ``*args`` and ``**kwargs`` given here.
-
- If no default clock is set, the func is queued and will be scheduled
- on the default clock as soon as it is created.
- """
- _default.schedule_once(func, delay, *args, **kwargs)
-
-
-def unschedule(func: Callable) -> None:
- """Remove ``func`` from the default clock's schedule.
-
- No error is raised if the ``func`` was never scheduled.
- """
- _default.unschedule(func)
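# A short usage sketch of the module-level helpers above (not part of the deleted
# file), assuming a standard pyglet install whose public pyglet.clock API matches
# this module; the callback and one-second interval are illustrative.
import pyglet

def blink(dt):
    print(f"blink after {dt:.3f}s")

pyglet.clock.schedule_interval(blink, 1.0)  # uses the default Clock instance
pyglet.app.run()                            # pyglet's event loop ticks the default clock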
diff --git a/spaces/adlozano1/gibberish_detector/README.md b/spaces/adlozano1/gibberish_detector/README.md
deleted file mode 100644
index 16f0d17e7c828732da33ebae276878b71f929f7c..0000000000000000000000000000000000000000
--- a/spaces/adlozano1/gibberish_detector/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Gibberish_detector
-emoji: 🔥
-colorFrom: yellow
-colorTo: gray
-sdk: gradio
-sdk_version: 2.8.12
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
-
-A gibberish detection program based on https://github.com/rrenaud/Gibberish-Detector deployed in Gradio.
diff --git a/spaces/ahmedxeno/brain_tumor_vs_normal_classification/app.py b/spaces/ahmedxeno/brain_tumor_vs_normal_classification/app.py
deleted file mode 100644
index 78b051e415575baef57f2e4986fefbf4d01389fa..0000000000000000000000000000000000000000
--- a/spaces/ahmedxeno/brain_tumor_vs_normal_classification/app.py
+++ /dev/null
@@ -1,36 +0,0 @@
-
-import gradio as gr
-import tensorflow as tf
-import tensorflow.keras
-import matplotlib.pyplot as plt
-import cv2
-import tensorflow_io as tfio
-import numpy as np
-
-loaded_model = tf.keras.models.load_model( 'brain1.h5')
-
-def take_img(img):
-
-    resized = tf.image.resize(img, (128, 128))
-    rgb = tfio.experimental.color.bgr_to_rgb(resized)  # convert BGR channel order to RGB
-    yhat = loaded_model.predict(np.expand_dims(rgb / 255, 0))
- label_names = {
- "1": "Tumor",
- "2": "Normal"}
- classes_x=np.argmax(yhat,axis=1)
- a = classes_x[0]
- input_value = a + 1
- input_str = str(input_value)
- predicted_label = label_names[input_str]
- tumor = yhat[0][0]
- tumor = str(tumor)
- normal = yhat[0][1]
- normal = str(normal)
- return {'Tumour': tumor, 'Normal':normal}
-
-
-
-image = gr.inputs.Image(shape=(128,128))
-
-label = gr.outputs.Label('ok')
-gr.Interface(fn=take_img, inputs=image, outputs="label", interpretation='default').launch(debug=True)
diff --git a/spaces/akdeniz27/contract-understanding-atticus-dataset-demo/README.md b/spaces/akdeniz27/contract-understanding-atticus-dataset-demo/README.md
deleted file mode 100644
index 03003fe5d80504b497d7a88ee29bf8deaf3c6322..0000000000000000000000000000000000000000
--- a/spaces/akdeniz27/contract-understanding-atticus-dataset-demo/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Contract Understanding Atticus Dataset (CUAD) Demo
-emoji: 💻
-colorFrom: red
-colorTo: purple
-sdk: streamlit
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/akhaliq/stylegan3_clip/torch_utils/ops/grid_sample_gradfix.py b/spaces/akhaliq/stylegan3_clip/torch_utils/ops/grid_sample_gradfix.py
deleted file mode 100644
index 269ffe81b04a8b11b4a8dea1913ae876b0ac4d30..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/stylegan3_clip/torch_utils/ops/grid_sample_gradfix.py
+++ /dev/null
@@ -1,77 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Custom replacement for `torch.nn.functional.grid_sample` that
-supports arbitrarily high order gradients between the input and output.
-Only works on 2D images and assumes
-`mode='bilinear'`, `padding_mode='zeros'`, `align_corners=False`."""
-
-import torch
-
-# pylint: disable=redefined-builtin
-# pylint: disable=arguments-differ
-# pylint: disable=protected-access
-
-#----------------------------------------------------------------------------
-
-enabled = False # Enable the custom op by setting this to true.
-
-#----------------------------------------------------------------------------
-
-def grid_sample(input, grid):
- if _should_use_custom_op():
- return _GridSample2dForward.apply(input, grid)
- return torch.nn.functional.grid_sample(input=input, grid=grid, mode='bilinear', padding_mode='zeros', align_corners=False)
-
-#----------------------------------------------------------------------------
-
-def _should_use_custom_op():
- return enabled
-
-#----------------------------------------------------------------------------
-
-class _GridSample2dForward(torch.autograd.Function):
- @staticmethod
- def forward(ctx, input, grid):
- assert input.ndim == 4
- assert grid.ndim == 4
- output = torch.nn.functional.grid_sample(input=input, grid=grid, mode='bilinear', padding_mode='zeros', align_corners=False)
- ctx.save_for_backward(input, grid)
- return output
-
- @staticmethod
- def backward(ctx, grad_output):
- input, grid = ctx.saved_tensors
- grad_input, grad_grid = _GridSample2dBackward.apply(grad_output, input, grid)
- return grad_input, grad_grid
-
-#----------------------------------------------------------------------------
-
-class _GridSample2dBackward(torch.autograd.Function):
- @staticmethod
- def forward(ctx, grad_output, input, grid):
- op = torch._C._jit_get_operation('aten::grid_sampler_2d_backward')
- grad_input, grad_grid = op(grad_output, input, grid, 0, 0, False)
- ctx.save_for_backward(grid)
- return grad_input, grad_grid
-
- @staticmethod
- def backward(ctx, grad2_grad_input, grad2_grad_grid):
- _ = grad2_grad_grid # unused
- grid, = ctx.saved_tensors
- grad2_grad_output = None
- grad2_input = None
- grad2_grid = None
-
- if ctx.needs_input_grad[0]:
- grad2_grad_output = _GridSample2dForward.apply(grad2_grad_input, grid)
-
- assert not ctx.needs_input_grad[2]
- return grad2_grad_output, grad2_input, grad2_grid
-
-#----------------------------------------------------------------------------
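# Illustrative double-backward sketch (not part of the original NVIDIA file),
# assuming a PyTorch version this op supports and the repo's torch_utils layout;
# tensor names and shapes are arbitrary.
import torch
from torch_utils.ops import grid_sample_gradfix

grid_sample_gradfix.enabled = True
image = torch.randn(1, 3, 8, 8, requires_grad=True)
grid = torch.rand(1, 4, 4, 2) * 2 - 1                 # sampling coordinates in [-1, 1]

out = grid_sample_gradfix.grid_sample(image, grid)
loss = out.square().sum()
(g,) = torch.autograd.grad(loss, image, create_graph=True)
penalty = g.square().sum()
penalty.backward()                                    # exercises the second-order path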
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/cli/parser.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/cli/parser.py
deleted file mode 100644
index a1c99a8cb301f222feb1845be4e80d9b1f9d2622..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/cli/parser.py
+++ /dev/null
@@ -1,292 +0,0 @@
-"""Base option parser setup"""
-
-import logging
-import optparse
-import shutil
-import sys
-import textwrap
-from contextlib import suppress
-from typing import Any, Dict, Iterator, List, Tuple
-
-from pip._internal.cli.status_codes import UNKNOWN_ERROR
-from pip._internal.configuration import Configuration, ConfigurationError
-from pip._internal.utils.misc import redact_auth_from_url, strtobool
-
-logger = logging.getLogger(__name__)
-
-
-class PrettyHelpFormatter(optparse.IndentedHelpFormatter):
- """A prettier/less verbose help formatter for optparse."""
-
- def __init__(self, *args: Any, **kwargs: Any) -> None:
- # help position must be aligned with __init__.parseopts.description
- kwargs["max_help_position"] = 30
- kwargs["indent_increment"] = 1
- kwargs["width"] = shutil.get_terminal_size()[0] - 2
- super().__init__(*args, **kwargs)
-
- def format_option_strings(self, option: optparse.Option) -> str:
- return self._format_option_strings(option)
-
- def _format_option_strings(
- self, option: optparse.Option, mvarfmt: str = " <{}>", optsep: str = ", "
- ) -> str:
- """
- Return a comma-separated list of option strings and metavars.
-
- :param option: tuple of (short opt, long opt), e.g: ('-f', '--format')
- :param mvarfmt: metavar format string
- :param optsep: separator
- """
- opts = []
-
- if option._short_opts:
- opts.append(option._short_opts[0])
- if option._long_opts:
- opts.append(option._long_opts[0])
- if len(opts) > 1:
- opts.insert(1, optsep)
-
- if option.takes_value():
- assert option.dest is not None
- metavar = option.metavar or option.dest.lower()
- opts.append(mvarfmt.format(metavar.lower()))
-
- return "".join(opts)
-
- def format_heading(self, heading: str) -> str:
- if heading == "Options":
- return ""
- return heading + ":\n"
-
- def format_usage(self, usage: str) -> str:
- """
- Ensure there is only one newline between usage and the first heading
- if there is no description.
- """
- msg = "\nUsage: {}\n".format(self.indent_lines(textwrap.dedent(usage), " "))
- return msg
-
- def format_description(self, description: str) -> str:
- # leave full control over description to us
- if description:
- if hasattr(self.parser, "main"):
- label = "Commands"
- else:
- label = "Description"
- # some doc strings have initial newlines, some don't
- description = description.lstrip("\n")
- # some doc strings have final newlines and spaces, some don't
- description = description.rstrip()
- # dedent, then reindent
- description = self.indent_lines(textwrap.dedent(description), " ")
- description = f"{label}:\n{description}\n"
- return description
- else:
- return ""
-
- def format_epilog(self, epilog: str) -> str:
- # leave full control over epilog to us
- if epilog:
- return epilog
- else:
- return ""
-
- def indent_lines(self, text: str, indent: str) -> str:
- new_lines = [indent + line for line in text.split("\n")]
- return "\n".join(new_lines)
-
-
-class UpdatingDefaultsHelpFormatter(PrettyHelpFormatter):
- """Custom help formatter for use in ConfigOptionParser.
-
-    This updates the defaults before expanding them, allowing
- them to show up correctly in the help listing.
-
- Also redact auth from url type options
- """
-
- def expand_default(self, option: optparse.Option) -> str:
- default_values = None
- if self.parser is not None:
- assert isinstance(self.parser, ConfigOptionParser)
- self.parser._update_defaults(self.parser.defaults)
- assert option.dest is not None
- default_values = self.parser.defaults.get(option.dest)
- help_text = super().expand_default(option)
-
- if default_values and option.metavar == "URL":
- if isinstance(default_values, str):
- default_values = [default_values]
-
-            # If it's not a list, we should abort and just return the help text
- if not isinstance(default_values, list):
- default_values = []
-
- for val in default_values:
- help_text = help_text.replace(val, redact_auth_from_url(val))
-
- return help_text
-
-
-class CustomOptionParser(optparse.OptionParser):
- def insert_option_group(
- self, idx: int, *args: Any, **kwargs: Any
- ) -> optparse.OptionGroup:
- """Insert an OptionGroup at a given position."""
- group = self.add_option_group(*args, **kwargs)
-
- self.option_groups.pop()
- self.option_groups.insert(idx, group)
-
- return group
-
- @property
- def option_list_all(self) -> List[optparse.Option]:
- """Get a list of all options, including those in option groups."""
- res = self.option_list[:]
- for i in self.option_groups:
- res.extend(i.option_list)
-
- return res
-
-
-class ConfigOptionParser(CustomOptionParser):
- """Custom option parser which updates its defaults by checking the
- configuration files and environmental variables"""
-
- def __init__(
- self,
- *args: Any,
- name: str,
- isolated: bool = False,
- **kwargs: Any,
- ) -> None:
- self.name = name
- self.config = Configuration(isolated)
-
- assert self.name
- super().__init__(*args, **kwargs)
-
- def check_default(self, option: optparse.Option, key: str, val: Any) -> Any:
- try:
- return option.check_value(key, val)
- except optparse.OptionValueError as exc:
- print(f"An error occurred during configuration: {exc}")
- sys.exit(3)
-
- def _get_ordered_configuration_items(self) -> Iterator[Tuple[str, Any]]:
- # Configuration gives keys in an unordered manner. Order them.
- override_order = ["global", self.name, ":env:"]
-
- # Pool the options into different groups
- section_items: Dict[str, List[Tuple[str, Any]]] = {
- name: [] for name in override_order
- }
- for section_key, val in self.config.items():
- # ignore empty values
- if not val:
- logger.debug(
- "Ignoring configuration key '%s' as it's value is empty.",
- section_key,
- )
- continue
-
- section, key = section_key.split(".", 1)
- if section in override_order:
- section_items[section].append((key, val))
-
- # Yield each group in their override order
- for section in override_order:
- for key, val in section_items[section]:
- yield key, val
-
- def _update_defaults(self, defaults: Dict[str, Any]) -> Dict[str, Any]:
- """Updates the given defaults with values from the config files and
- the environ. Does a little special handling for certain types of
- options (lists)."""
-
- # Accumulate complex default state.
- self.values = optparse.Values(self.defaults)
- late_eval = set()
- # Then set the options with those values
- for key, val in self._get_ordered_configuration_items():
- # '--' because configuration supports only long names
- option = self.get_option("--" + key)
-
- # Ignore options not present in this parser. E.g. non-globals put
- # in [global] by users that want them to apply to all applicable
- # commands.
- if option is None:
- continue
-
- assert option.dest is not None
-
- if option.action in ("store_true", "store_false"):
- try:
- val = strtobool(val)
- except ValueError:
- self.error(
- "{} is not a valid value for {} option, " # noqa
- "please specify a boolean value like yes/no, "
- "true/false or 1/0 instead.".format(val, key)
- )
- elif option.action == "count":
- with suppress(ValueError):
- val = strtobool(val)
- with suppress(ValueError):
- val = int(val)
- if not isinstance(val, int) or val < 0:
- self.error(
- "{} is not a valid value for {} option, " # noqa
- "please instead specify either a non-negative integer "
- "or a boolean value like yes/no or false/true "
- "which is equivalent to 1/0.".format(val, key)
- )
- elif option.action == "append":
- val = val.split()
- val = [self.check_default(option, key, v) for v in val]
- elif option.action == "callback":
- assert option.callback is not None
- late_eval.add(option.dest)
- opt_str = option.get_opt_string()
- val = option.convert_value(opt_str, val)
- # From take_action
- args = option.callback_args or ()
- kwargs = option.callback_kwargs or {}
- option.callback(option, opt_str, val, self, *args, **kwargs)
- else:
- val = self.check_default(option, key, val)
-
- defaults[option.dest] = val
-
- for key in late_eval:
- defaults[key] = getattr(self.values, key)
- self.values = None
- return defaults
-
- def get_default_values(self) -> optparse.Values:
- """Overriding to make updating the defaults after instantiation of
- the option parser possible, _update_defaults() does the dirty work."""
- if not self.process_default_values:
- # Old, pre-Optik 1.5 behaviour.
- return optparse.Values(self.defaults)
-
- # Load the configuration, or error out in case of an error
- try:
- self.config.load()
- except ConfigurationError as err:
- self.exit(UNKNOWN_ERROR, str(err))
-
- defaults = self._update_defaults(self.defaults.copy()) # ours
- for option in self._get_all_options():
- assert option.dest is not None
- default = defaults.get(option.dest)
- if isinstance(default, str):
- opt_str = option.get_opt_string()
- defaults[option.dest] = option.check_value(opt_str, default)
- return optparse.Values(defaults)
-
- def error(self, msg: str) -> None:
- self.print_usage(sys.stderr)
- self.exit(UNKNOWN_ERROR, f"{msg}\n")
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/operations/build/wheel.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/operations/build/wheel.py
deleted file mode 100644
index b0d2fc9eadb9349c0b8e69b58351648f3e54dfb5..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/operations/build/wheel.py
+++ /dev/null
@@ -1,37 +0,0 @@
-import logging
-import os
-from typing import Optional
-
-from pip._vendor.pep517.wrappers import Pep517HookCaller
-
-from pip._internal.utils.subprocess import runner_with_spinner_message
-
-logger = logging.getLogger(__name__)
-
-
-def build_wheel_pep517(
- name: str,
- backend: Pep517HookCaller,
- metadata_directory: str,
- tempd: str,
-) -> Optional[str]:
- """Build one InstallRequirement using the PEP 517 build process.
-
- Returns path to wheel if successfully built. Otherwise, returns None.
- """
- assert metadata_directory is not None
- try:
- logger.debug("Destination directory: %s", tempd)
-
- runner = runner_with_spinner_message(
- f"Building wheel for {name} (pyproject.toml)"
- )
- with backend.subprocess_runner(runner):
- wheel_name = backend.build_wheel(
- tempd,
- metadata_directory=metadata_directory,
- )
- except Exception:
- logger.error("Failed building wheel for %s", name)
- return None
- return os.path.join(tempd, wheel_name)
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/sjisprober.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/sjisprober.py
deleted file mode 100644
index 9e29623bdc54a7c6d11bcc167d71bb44cc9be39d..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/sjisprober.py
+++ /dev/null
@@ -1,92 +0,0 @@
-######################## BEGIN LICENSE BLOCK ########################
-# The Original Code is mozilla.org code.
-#
-# The Initial Developer of the Original Code is
-# Netscape Communications Corporation.
-# Portions created by the Initial Developer are Copyright (C) 1998
-# the Initial Developer. All Rights Reserved.
-#
-# Contributor(s):
-# Mark Pilgrim - port to Python
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
-# 02110-1301 USA
-######################### END LICENSE BLOCK #########################
-
-from .mbcharsetprober import MultiByteCharSetProber
-from .codingstatemachine import CodingStateMachine
-from .chardistribution import SJISDistributionAnalysis
-from .jpcntx import SJISContextAnalysis
-from .mbcssm import SJIS_SM_MODEL
-from .enums import ProbingState, MachineState
-
-
-class SJISProber(MultiByteCharSetProber):
- def __init__(self):
- super(SJISProber, self).__init__()
- self.coding_sm = CodingStateMachine(SJIS_SM_MODEL)
- self.distribution_analyzer = SJISDistributionAnalysis()
- self.context_analyzer = SJISContextAnalysis()
- self.reset()
-
- def reset(self):
- super(SJISProber, self).reset()
- self.context_analyzer.reset()
-
- @property
- def charset_name(self):
- return self.context_analyzer.charset_name
-
- @property
- def language(self):
- return "Japanese"
-
- def feed(self, byte_str):
- for i in range(len(byte_str)):
- coding_state = self.coding_sm.next_state(byte_str[i])
- if coding_state == MachineState.ERROR:
- self.logger.debug('%s %s prober hit error at byte %s',
- self.charset_name, self.language, i)
- self._state = ProbingState.NOT_ME
- break
- elif coding_state == MachineState.ITS_ME:
- self._state = ProbingState.FOUND_IT
- break
- elif coding_state == MachineState.START:
- char_len = self.coding_sm.get_current_charlen()
- if i == 0:
- self._last_char[1] = byte_str[0]
- self.context_analyzer.feed(self._last_char[2 - char_len:],
- char_len)
- self.distribution_analyzer.feed(self._last_char, char_len)
- else:
- self.context_analyzer.feed(byte_str[i + 1 - char_len:i + 3
- - char_len], char_len)
- self.distribution_analyzer.feed(byte_str[i - 1:i + 1],
- char_len)
-
- self._last_char[0] = byte_str[-1]
-
- if self.state == ProbingState.DETECTING:
- if (self.context_analyzer.got_enough_data() and
- (self.get_confidence() > self.SHORTCUT_THRESHOLD)):
- self._state = ProbingState.FOUND_IT
-
- return self.state
-
- def get_confidence(self):
- context_conf = self.context_analyzer.get_confidence()
- distrib_conf = self.distribution_analyzer.get_confidence()
- return max(context_conf, distrib_conf)
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/_loop.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/_loop.py
deleted file mode 100644
index 01c6cafbe53f1fcb12f7b382b2b35e2fd2c69933..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/_loop.py
+++ /dev/null
@@ -1,43 +0,0 @@
-from typing import Iterable, Tuple, TypeVar
-
-T = TypeVar("T")
-
-
-def loop_first(values: Iterable[T]) -> Iterable[Tuple[bool, T]]:
- """Iterate and generate a tuple with a flag for first value."""
- iter_values = iter(values)
- try:
- value = next(iter_values)
- except StopIteration:
- return
- yield True, value
- for value in iter_values:
- yield False, value
-
-
-def loop_last(values: Iterable[T]) -> Iterable[Tuple[bool, T]]:
- """Iterate and generate a tuple with a flag for last value."""
- iter_values = iter(values)
- try:
- previous_value = next(iter_values)
- except StopIteration:
- return
- for value in iter_values:
- yield False, previous_value
- previous_value = value
- yield True, previous_value
-
-
-def loop_first_last(values: Iterable[T]) -> Iterable[Tuple[bool, bool, T]]:
- """Iterate and generate a tuple with a flag for first and last value."""
- iter_values = iter(values)
- try:
- previous_value = next(iter_values)
- except StopIteration:
- return
- first = True
- for value in iter_values:
- yield first, False, previous_value
- first = False
- previous_value = value
- yield first, True, previous_value
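# Illustrative usage (not part of the original rich source), assuming the vendored
# import path; handy when drawing different separators for the first, middle, and
# last rows.
from pip._vendor.rich._loop import loop_first_last

for is_first, is_last, row in loop_first_last(["alpha", "beta", "gamma"]):
    prefix = "first" if is_first else ("last" if is_last else "middle")
    print(prefix, row)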
diff --git a/spaces/ali-ghamdan/deoldify/fastai/vision/models/unet.py b/spaces/ali-ghamdan/deoldify/fastai/vision/models/unet.py
deleted file mode 100644
index 06ed75c4c10890086e07da775d50e690e91f1d88..0000000000000000000000000000000000000000
--- a/spaces/ali-ghamdan/deoldify/fastai/vision/models/unet.py
+++ /dev/null
@@ -1,78 +0,0 @@
-from ...torch_core import *
-from ...layers import *
-from ...callbacks.hooks import *
-
-__all__ = ['DynamicUnet', 'UnetBlock']
-
-def _get_sfs_idxs(sizes:Sizes) -> List[int]:
- "Get the indexes of the layers where the size of the activation changes."
- feature_szs = [size[-1] for size in sizes]
- sfs_idxs = list(np.where(np.array(feature_szs[:-1]) != np.array(feature_szs[1:]))[0])
- if feature_szs[0] != feature_szs[1]: sfs_idxs = [0] + sfs_idxs
- return sfs_idxs
-
-class UnetBlock(Module):
- "A quasi-UNet block, using `PixelShuffle_ICNR upsampling`."
- def __init__(self, up_in_c:int, x_in_c:int, hook:Hook, final_div:bool=True, blur:bool=False, leaky:float=None,
- self_attention:bool=False, **kwargs):
- self.hook = hook
- self.shuf = PixelShuffle_ICNR(up_in_c, up_in_c//2, blur=blur, leaky=leaky, **kwargs)
- self.bn = batchnorm_2d(x_in_c)
- ni = up_in_c//2 + x_in_c
- nf = ni if final_div else ni//2
- self.conv1 = conv_layer(ni, nf, leaky=leaky, **kwargs)
- self.conv2 = conv_layer(nf, nf, leaky=leaky, self_attention=self_attention, **kwargs)
- self.relu = relu(leaky=leaky)
-
- def forward(self, up_in:Tensor) -> Tensor:
- s = self.hook.stored
- up_out = self.shuf(up_in)
- ssh = s.shape[-2:]
- if ssh != up_out.shape[-2:]:
- up_out = F.interpolate(up_out, s.shape[-2:], mode='nearest')
- cat_x = self.relu(torch.cat([up_out, self.bn(s)], dim=1))
- return self.conv2(self.conv1(cat_x))
-
-
-class DynamicUnet(SequentialEx):
- "Create a U-Net from a given architecture."
- def __init__(self, encoder:nn.Module, n_classes:int, img_size:Tuple[int,int]=(256,256), blur:bool=False, blur_final=True, self_attention:bool=False,
- y_range:Optional[Tuple[float,float]]=None,
- last_cross:bool=True, bottle:bool=False, **kwargs):
- imsize = img_size
- sfs_szs = model_sizes(encoder, size=imsize)
- sfs_idxs = list(reversed(_get_sfs_idxs(sfs_szs)))
- self.sfs = hook_outputs([encoder[i] for i in sfs_idxs])
- x = dummy_eval(encoder, imsize).detach()
-
- ni = sfs_szs[-1][1]
- middle_conv = nn.Sequential(conv_layer(ni, ni*2, **kwargs),
- conv_layer(ni*2, ni, **kwargs)).eval()
- x = middle_conv(x)
- layers = [encoder, batchnorm_2d(ni), nn.ReLU(), middle_conv]
-
- for i,idx in enumerate(sfs_idxs):
- not_final = i!=len(sfs_idxs)-1
- up_in_c, x_in_c = int(x.shape[1]), int(sfs_szs[idx][1])
- do_blur = blur and (not_final or blur_final)
- sa = self_attention and (i==len(sfs_idxs)-3)
- unet_block = UnetBlock(up_in_c, x_in_c, self.sfs[i], final_div=not_final, blur=do_blur, self_attention=sa,
- **kwargs).eval()
- layers.append(unet_block)
- x = unet_block(x)
-
- ni = x.shape[1]
- if imsize != sfs_szs[0][-2:]: layers.append(PixelShuffle_ICNR(ni, **kwargs))
- x = PixelShuffle_ICNR(ni)(x)
- if imsize != x.shape[-2:]: layers.append(Lambda(lambda x: F.interpolate(x, imsize, mode='nearest')))
- if last_cross:
- layers.append(MergeLayer(dense=True))
- ni += in_channels(encoder)
- layers.append(res_block(ni, bottle=bottle, **kwargs))
- layers += [conv_layer(ni, n_classes, ks=1, use_activ=False, **kwargs)]
- if y_range is not None: layers.append(SigmoidRange(*y_range))
- super().__init__(*layers)
-
- def __del__(self):
- if hasattr(self, "sfs"): self.sfs.remove()
-
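# Illustrative sketch (an assumption, not original source): building a DynamicUnet
# over a torchvision encoder body, roughly what fastai v1's unet_learner does
# internally. Sizes and the class count are arbitrary.
import torch
from torchvision.models import resnet34
from fastai.vision.models.unet import DynamicUnet

body = torch.nn.Sequential(*list(resnet34().children())[:-2])   # drop pooling and fc head
unet = DynamicUnet(body, n_classes=2, img_size=(128, 128))
out = unet(torch.randn(1, 3, 128, 128))                         # -> (1, 2, 128, 128)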
diff --git a/spaces/allknowingroger/Image-Models-Test200/README.md b/spaces/allknowingroger/Image-Models-Test200/README.md
deleted file mode 100644
index f91e4b31ab345f987b425de029c057bfb69d9e1b..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test200/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: More Image Models
-emoji: 😻
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: true
-duplicated_from: allknowingroger/Image-Models-Test
----
-
-
\ No newline at end of file
diff --git a/spaces/allknowingroger/Image-Models-Test71/README.md b/spaces/allknowingroger/Image-Models-Test71/README.md
deleted file mode 100644
index 89b5280627d2e22d3886b1ec804526e771816274..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test71/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: More Image Models
-emoji: 😻
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: true
-duplicated_from: allknowingroger/Image-Models-Test70
----
-
-
\ No newline at end of file
diff --git a/spaces/alvanlii/domain-expansion/expansion_utils/closed_form_factrorization.py b/spaces/alvanlii/domain-expansion/expansion_utils/closed_form_factrorization.py
deleted file mode 100644
index a3b017cc523cf7cbd9ff55536dc7f7ecd8a41b03..0000000000000000000000000000000000000000
--- a/spaces/alvanlii/domain-expansion/expansion_utils/closed_form_factrorization.py
+++ /dev/null
@@ -1,53 +0,0 @@
-# Based on a script from https://github.com/rosinality/stylegan2-pytorch
-
-# ==========================================================================================
-#
-# Adobe’s modifications are Copyright 2023 Adobe Research. All rights reserved.
-# Adobe’s modifications are licensed under the Adobe Research License. To view a copy of the license, visit
-# LICENSE.md.
-#
-# ==========================================================================================
-
-
-import argparse
-import numpy as np
-import torch
-from pathlib import Path
-
-import dnnlib
-
-import legacy
-
-
-def factorize(G):
- modulate = {
- k: v
- for k, v in G.named_parameters()
- if ('b4' in k or "torgb" not in k) and ("affine" in k and "weight" in k)
- }
-
- weight_mat = []
- for k, v in modulate.items():
- weight_mat.append(v)
-
- W = torch.cat(weight_mat, 0)
- eigvec = torch.svd(W).V
-
- return eigvec
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser(
- description="Extract factor/eigenvectors of latent spaces using closed form factorization"
- )
-
- parser.add_argument("--out", type=str, requited=True, help="path to output file")
- parser.add_argument("ckpt", type=str, help="name of the model checkpoint")
-
- args = parser.parse_args()
- device = 'cuda'
- with dnnlib.util.open_url(args.ckpt) as f:
- G = legacy.load_network_pkl(f)['G_ema'].to(device)
-
- eigvec = factorize(G)
- torch.save(eigvec, args.out)
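# Illustrative follow-up sketch (not part of the original script): once the
# eigenvectors are saved, a latent can be edited by moving along one of the
# discovered directions. The file name, index, and strength are arbitrary.
import torch

eigvec = torch.load("factor.pt")         # file produced by this script via --out
w = torch.randn(1, eigvec.shape[0])      # a latent with the same dimensionality
direction = eigvec[:, 3]                 # one of the principal directions
w_edited = w + 5.0 * direction           # move along the direction with strength 5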
diff --git a/spaces/amankishore/sjc/sd1/ldm/modules/encoders/modules_bak.py b/spaces/amankishore/sjc/sd1/ldm/modules/encoders/modules_bak.py
deleted file mode 100644
index 418fc52d6012a9e4acf6f2ba19ce4d038eb45be2..0000000000000000000000000000000000000000
--- a/spaces/amankishore/sjc/sd1/ldm/modules/encoders/modules_bak.py
+++ /dev/null
@@ -1,510 +0,0 @@
-import torch
-import torch.nn as nn
-from functools import partial
-import clip
-from einops import rearrange, repeat
-from transformers import CLIPTokenizer, CLIPTextModel
-import kornia
-
-from ldm.modules.x_transformer import Encoder, TransformerWrapper  # TODO: can we directly rely on lucidrains code and simply add this as a requirement? --> test
-
-def _expand_mask(mask, dtype, tgt_len = None):
- """
- Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`.
- """
- bsz, src_len = mask.size()
- tgt_len = tgt_len if tgt_len is not None else src_len
-
- expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)
-
- inverted_mask = 1.0 - expanded_mask
-
- return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min)
-
-def _build_causal_attention_mask(bsz, seq_len, dtype):
- # lazily create causal attention mask, with full attention between the vision tokens
- # pytorch uses additive attention mask; fill with -inf
- mask = torch.empty(bsz, seq_len, seq_len, dtype=dtype)
- mask.fill_(torch.tensor(torch.finfo(dtype).min))
- mask.triu_(1) # zero out the lower diagonal
- mask = mask.unsqueeze(1) # expand mask
- return mask
-
-class AbstractEncoder(nn.Module):
- def __init__(self):
- super().__init__()
-
- def encode(self, *args, **kwargs):
- raise NotImplementedError
-
-
-
-class ClassEmbedder(nn.Module):
- def __init__(self, embed_dim, n_classes=1000, key='class'):
- super().__init__()
- self.key = key
- self.embedding = nn.Embedding(n_classes, embed_dim)
-
- def forward(self, batch, key=None):
- if key is None:
- key = self.key
- # this is for use in crossattn
- c = batch[key][:, None]
- c = self.embedding(c)
- return c
-
-
-class TransformerEmbedder(AbstractEncoder):
- """Some transformer encoder layers"""
- def __init__(self, n_embed, n_layer, vocab_size, max_seq_len=77, device="cuda"):
- super().__init__()
- self.device = device
- self.transformer = TransformerWrapper(num_tokens=vocab_size, max_seq_len=max_seq_len,
- attn_layers=Encoder(dim=n_embed, depth=n_layer))
-
- def forward(self, tokens):
- tokens = tokens.to(self.device) # meh
- z = self.transformer(tokens, return_embeddings=True)
- return z
-
- def encode(self, x):
- return self(x)
-
-
-class BERTTokenizer(AbstractEncoder):
- """ Uses a pretrained BERT tokenizer by huggingface. Vocab size: 30522 (?)"""
- def __init__(self, device="cuda", vq_interface=True, max_length=77):
- super().__init__()
-        from transformers import BertTokenizerFast  # TODO: add to requirements
- self.tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
- self.device = device
- self.vq_interface = vq_interface
- self.max_length = max_length
-
- def forward(self, text):
- batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True,
- return_overflowing_tokens=False, padding="max_length", return_tensors="pt")
- tokens = batch_encoding["input_ids"].to(self.device)
- return tokens
-
- @torch.no_grad()
- def encode(self, text):
- tokens = self(text)
- if not self.vq_interface:
- return tokens
- return None, None, [None, None, tokens]
-
- def decode(self, text):
- return text
-
-
-class BERTEmbedder(AbstractEncoder):
- """Uses the BERT tokenizr model and add some transformer encoder layers"""
- def __init__(self, n_embed, n_layer, vocab_size=30522, max_seq_len=77,
- device="cuda",use_tokenizer=True, embedding_dropout=0.0):
- super().__init__()
- self.use_tknz_fn = use_tokenizer
- if self.use_tknz_fn:
- self.tknz_fn = BERTTokenizer(vq_interface=False, max_length=max_seq_len)
- self.device = device
- self.transformer = TransformerWrapper(num_tokens=vocab_size, max_seq_len=max_seq_len,
- attn_layers=Encoder(dim=n_embed, depth=n_layer),
- emb_dropout=embedding_dropout)
-
- def forward(self, text, embedding_manager=None):
- if self.use_tknz_fn:
- tokens = self.tknz_fn(text)#.to(self.device)
- else:
- tokens = text
- z = self.transformer(tokens, return_embeddings=True, embedding_manager=embedding_manager)
- return z
-
- def encode(self, text, **kwargs):
- # output of length 77
- return self(text, **kwargs)
-
-class SpatialRescaler(nn.Module):
- def __init__(self,
- n_stages=1,
- method='bilinear',
- multiplier=0.5,
- in_channels=3,
- out_channels=None,
- bias=False):
- super().__init__()
- self.n_stages = n_stages
- assert self.n_stages >= 0
- assert method in ['nearest','linear','bilinear','trilinear','bicubic','area']
- self.multiplier = multiplier
- self.interpolator = partial(torch.nn.functional.interpolate, mode=method)
- self.remap_output = out_channels is not None
- if self.remap_output:
- print(f'Spatial Rescaler mapping from {in_channels} to {out_channels} channels after resizing.')
- self.channel_mapper = nn.Conv2d(in_channels,out_channels,1,bias=bias)
-
- def forward(self,x):
- for stage in range(self.n_stages):
- x = self.interpolator(x, scale_factor=self.multiplier)
-
-
- if self.remap_output:
- x = self.channel_mapper(x)
- return x
-
- def encode(self, x):
- return self(x)
-
-class FrozenCLIPEmbedder(AbstractEncoder):
- """Uses the CLIP transformer encoder for text (from Hugging Face)"""
- def __init__(self, version="openai/clip-vit-large-patch14", device="cuda", max_length=77):
- super().__init__()
- self.tokenizer = CLIPTokenizer.from_pretrained(version)
- self.transformer = CLIPTextModel.from_pretrained(version)
- self.device = device
- self.max_length = max_length
- self.freeze()
-
- def embedding_forward(
- self,
- input_ids = None,
- position_ids = None,
- inputs_embeds = None,
- embedding_manager = None,
- ) -> torch.Tensor:
-
- seq_length = input_ids.shape[-1] if input_ids is not None else inputs_embeds.shape[-2]
-
- if position_ids is None:
- position_ids = self.position_ids[:, :seq_length]
-
- if inputs_embeds is None:
- inputs_embeds = self.token_embedding(input_ids)
-
- if embedding_manager is not None:
- inputs_embeds = embedding_manager(input_ids, inputs_embeds)
-
-
- position_embeddings = self.position_embedding(position_ids)
- embeddings = inputs_embeds + position_embeddings
-
- return embeddings
-
- self.transformer.text_model.embeddings.forward = embedding_forward.__get__(self.transformer.text_model.embeddings)
-
- def encoder_forward(
- self,
- inputs_embeds,
- attention_mask = None,
- causal_attention_mask = None,
- output_attentions = None,
- output_hidden_states = None,
- return_dict = None,
- ):
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- encoder_states = () if output_hidden_states else None
- all_attentions = () if output_attentions else None
-
- hidden_states = inputs_embeds
- for idx, encoder_layer in enumerate(self.layers):
- if output_hidden_states:
- encoder_states = encoder_states + (hidden_states,)
-
- layer_outputs = encoder_layer(
- hidden_states,
- attention_mask,
- causal_attention_mask,
- output_attentions=output_attentions,
- )
-
- hidden_states = layer_outputs[0]
-
- if output_attentions:
- all_attentions = all_attentions + (layer_outputs[1],)
-
- if output_hidden_states:
- encoder_states = encoder_states + (hidden_states,)
-
- return hidden_states
-
- # if not return_dict:
- # return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None)
- # return BaseModelOutput(
- # last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions
- # )
-
- self.transformer.text_model.encoder.forward = encoder_forward.__get__(self.transformer.text_model.encoder)
-
-
- def text_encoder_forward(
- self,
- input_ids = None,
- attention_mask = None,
- position_ids = None,
- output_attentions = None,
- output_hidden_states = None,
- return_dict = None,
- embedding_manager = None,
- ):
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- if input_ids is None:
- raise ValueError("You have to specify either input_ids")
-
- input_shape = input_ids.size()
- input_ids = input_ids.view(-1, input_shape[-1])
-
- hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids, embedding_manager=embedding_manager)
-
- bsz, seq_len = input_shape
- # CLIP's text model uses causal mask, prepare it here.
- # https://github.com/openai/CLIP/blob/cfcffb90e69f37bf2ff1e988237a0fbe41f33c04/clip/model.py#L324
- causal_attention_mask = _build_causal_attention_mask(bsz, seq_len, hidden_states.dtype).to(
- hidden_states.device
- )
-
- # expand attention_mask
- if attention_mask is not None:
- # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
- attention_mask = _expand_mask(attention_mask, hidden_states.dtype)
-
- last_hidden_state = self.encoder(
- inputs_embeds=hidden_states,
- attention_mask=attention_mask,
- causal_attention_mask=causal_attention_mask,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- # last_hidden_state = encoder_outputs[0]
- last_hidden_state = self.final_layer_norm(last_hidden_state)
-
- # text_embeds.shape = [batch_size, sequence_length, transformer.width]
- # take features from the eot embedding (eot_token is the highest number in each sequence)
- # pooled_output = last_hidden_state[torch.arange(last_hidden_state.shape[0]), input_ids.argmax(dim=-1)]
-
- # if not return_dict:
- # return (last_hidden_state, pooled_output) + encoder_outputs[1:]
-
- return last_hidden_state
-
- self.transformer.text_model.forward = text_encoder_forward.__get__(self.transformer.text_model)
-
- def transformer_forward(
- self,
- input_ids = None,
- attention_mask = None,
- position_ids = None,
- output_attentions = None,
- output_hidden_states = None,
- return_dict = None,
- embedding_manager = None,
- ):
- return self.text_model(
- input_ids=input_ids,
- attention_mask=attention_mask,
- position_ids=position_ids,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- embedding_manager = embedding_manager
- )
-
- self.transformer.forward = transformer_forward.__get__(self.transformer)
-
-
- # def update_embedding_func(self, embedding_manager):
- # text_model = self.transformer.text_model
- # # text_model.old_embeddings = text_model.embeddings
-
- # # def new_embeddings(
- # # input_ids = None,
- # # position_ids = None,
- # # inputs_embeds = None,
- # # ) -> torch.Tensor:
-
- # # seq_length = input_ids.shape[-1] if input_ids is not None else inputs_embeds.shape[-2]
-
- # # if position_ids is None:
- # # position_ids = text_model.old_embeddings.position_ids[:, :seq_length]
-
- # # if inputs_embeds is None:
- # # inputs_embeds = text_model.old_embeddings.token_embedding(input_ids)
-
-
- # # inputs_embeds = embedding_manager(input_ids, inputs_embeds)
-
- # # position_embeddings = text_model.old_embeddings.position_embedding(position_ids)
- # # embeddings = inputs_embeds + position_embeddings
-
- # # return embeddings
-
- # # del text_model.embeddings
- # # text_model.embeddings = new_embeddings
-
- # # class NewEmbeddings(torch.nn.Module):
-
- # # def __init__(self, orig_embedder):
- # # super().__init__()
- # # self.orig_embedder = orig_embedder
-
- # # def forward(
- # # self,
- # # input_ids = None,
- # # position_ids = None,
- # # inputs_embeds = None,
- # # ) -> torch.Tensor:
-
- # # seq_length = input_ids.shape[-1] if input_ids is not None else inputs_embeds.shape[-2]
-
- # # if position_ids is None:
- # # position_ids = self.orig_embedder.position_ids[:, :seq_length]
-
- # # if inputs_embeds is None:
- # # inputs_embeds = self.orig_embedder.token_embedding(input_ids)
-
- # # inputs_embeds = embedding_manager(input_ids, inputs_embeds)
-
- # # position_embeddings = self.orig_embedder.position_embedding(position_ids)
- # # embeddings = inputs_embeds + position_embeddings
-
- # # return embeddings
-
- # # # self.new_embeddings =
- # # # text_model.embeddings = new_embeddings.__call__.__get__(text_model)
- # # text_model.embeddings = NewEmbeddings(text_model.embeddings)
-
- # class NewEmbeddings(torch.nn.Module):
-
- # def __init__(self, orig_embedder, embedding_manager):
- # super().__init__()
- # self.embedding_manager = embedding_manager
- # self.orig_embedder = orig_embedder
-
- # def forward(
- # self,
- # input_ids = None,
- # position_ids = None,
- # inputs_embeds = None,
- # ) -> torch.Tensor:
-
- # seq_length = input_ids.shape[-1] if input_ids is not None else inputs_embeds.shape[-2]
-
- # if position_ids is None:
- # position_ids = self.orig_embedder.position_ids[:, :seq_length]
-
- # if inputs_embeds is None:
- # inputs_embeds = self.orig_embedder.token_embedding(input_ids)
-
- # # init_embeds = inputs_embeds.clone()
- # inputs_embeds = self.embedding_manager(input_ids, inputs_embeds)
-
- # # print(inputs_embeds - init_embeds)
- # # print((inputs_embeds - init_embeds).max())
- # # exit(0)
-
- # position_embeddings = self.orig_embedder.position_embedding(position_ids)
- # embeddings = inputs_embeds + position_embeddings
-
- # return embeddings
-
- # # self.new_embeddings =
- # # text_model.embeddings = new_embeddings.__call__.__get__(text_model)
- # text_model.embeddings = NewEmbeddings(text_model.embeddings, embedding_manager)
-
- def freeze(self):
- self.transformer = self.transformer.eval()
- for param in self.parameters():
- param.requires_grad = False
-
- def forward(self, text, **kwargs):
- batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True,
- return_overflowing_tokens=False, padding="max_length", return_tensors="pt")
- tokens = batch_encoding["input_ids"].to(self.device)
- z = self.transformer(input_ids=tokens, **kwargs)
-
- return z
-
- def encode(self, text, **kwargs):
- return self(text, **kwargs)
-
-
-class FrozenCLIPTextEmbedder(nn.Module):
- """
- Uses the CLIP transformer encoder for text.
- """
- def __init__(self, version='ViT-L/14', device="cuda", max_length=77, n_repeat=1, normalize=True):
- super().__init__()
- self.model, _ = clip.load(version, jit=False, device="cpu")
- self.device = device
- self.max_length = max_length
- self.n_repeat = n_repeat
- self.normalize = normalize
-
- def freeze(self):
- self.model = self.model.eval()
- for param in self.parameters():
- param.requires_grad = False
-
- def forward(self, text):
- tokens = clip.tokenize(text).to(self.device)
- z = self.model.encode_text(tokens)
- if self.normalize:
- z = z / torch.linalg.norm(z, dim=1, keepdim=True)
- return z
-
- def encode(self, text):
- z = self(text)
- if z.ndim==2:
- z = z[:, None, :]
- z = repeat(z, 'b 1 d -> b k d', k=self.n_repeat)
- return z
-
-
-class FrozenClipImageEmbedder(nn.Module):
- """
- Uses the CLIP image encoder.
- """
- def __init__(
- self,
- model,
- jit=False,
- device='cuda' if torch.cuda.is_available() else 'cpu',
- antialias=False,
- ):
- super().__init__()
- self.model, _ = clip.load(name=model, device=device, jit=jit)
-
- self.antialias = antialias
-
- self.register_buffer('mean', torch.Tensor([0.48145466, 0.4578275, 0.40821073]), persistent=False)
- self.register_buffer('std', torch.Tensor([0.26862954, 0.26130258, 0.27577711]), persistent=False)
-
- def preprocess(self, x):
- # normalize to [0,1]
- x = kornia.geometry.resize(x, (224, 224),
- interpolation='bicubic',align_corners=True,
- antialias=self.antialias)
- x = (x + 1.) / 2.
- # renormalize according to clip
- x = kornia.enhance.normalize(x, self.mean, self.std)
- return x
-
- def forward(self, x):
- # x is assumed to be in range [-1,1]
- return self.model.encode_image(self.preprocess(x))
-
-
-if __name__ == "__main__":
- from ldm.util import count_params
- model = FrozenCLIPEmbedder()
- count_params(model, verbose=True)
\ No newline at end of file
diff --git a/spaces/anaclaudia13ct/insect_detection/utils/general.py b/spaces/anaclaudia13ct/insect_detection/utils/general.py
deleted file mode 100644
index 99a96576c3fdda77710f42776a3b87f42ec78fd4..0000000000000000000000000000000000000000
--- a/spaces/anaclaudia13ct/insect_detection/utils/general.py
+++ /dev/null
@@ -1,1140 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-General utils
-"""
-
-import contextlib
-import glob
-import inspect
-import logging
-import logging.config
-import math
-import os
-import platform
-import random
-import re
-import signal
-import sys
-import time
-import urllib
-from copy import deepcopy
-from datetime import datetime
-from itertools import repeat
-from multiprocessing.pool import ThreadPool
-from pathlib import Path
-from subprocess import check_output
-from tarfile import is_tarfile
-from typing import Optional
-from zipfile import ZipFile, is_zipfile
-
-import cv2
-import IPython
-import numpy as np
-import pandas as pd
-import pkg_resources as pkg
-import torch
-import torchvision
-import yaml
-
-from utils import TryExcept, emojis
-from utils.downloads import gsutil_getsize
-from utils.metrics import box_iou, fitness
-
-FILE = Path(__file__).resolve()
-ROOT = FILE.parents[1] # YOLOv5 root directory
-RANK = int(os.getenv('RANK', -1))
-
-# Settings
-NUM_THREADS = min(8, max(1, os.cpu_count() - 1)) # number of YOLOv5 multiprocessing threads
-DATASETS_DIR = Path(os.getenv('YOLOv5_DATASETS_DIR', ROOT.parent / 'datasets')) # global datasets directory
-AUTOINSTALL = str(os.getenv('YOLOv5_AUTOINSTALL', True)).lower() == 'true' # global auto-install mode
-VERBOSE = str(os.getenv('YOLOv5_VERBOSE', True)).lower() == 'true' # global verbose mode
-TQDM_BAR_FORMAT = '{l_bar}{bar:10}{r_bar}' # tqdm bar format
-FONT = 'Arial.ttf' # https://ultralytics.com/assets/Arial.ttf
-
-torch.set_printoptions(linewidth=320, precision=5, profile='long')
-np.set_printoptions(linewidth=320, formatter={'float_kind': '{:11.5g}'.format}) # format short g, %precision=5
-pd.options.display.max_columns = 10
-cv2.setNumThreads(0) # prevent OpenCV from multithreading (incompatible with PyTorch DataLoader)
-os.environ['NUMEXPR_MAX_THREADS'] = str(NUM_THREADS) # NumExpr max threads
-os.environ['OMP_NUM_THREADS'] = '1' if platform.system() == 'darwin' else str(NUM_THREADS) # OpenMP (PyTorch and SciPy)
-
-
-def is_ascii(s=''):
- # Is string composed of all ASCII (no UTF) characters? (note str().isascii() introduced in python 3.7)
- s = str(s) # convert list, tuple, None, etc. to str
- return len(s.encode().decode('ascii', 'ignore')) == len(s)
-
-
-def is_chinese(s='人工智能'):
- # Is string composed of any Chinese characters?
- return bool(re.search('[\u4e00-\u9fff]', str(s)))
-
-
-def is_colab():
- # Is environment a Google Colab instance?
- return 'google.colab' in sys.modules
-
-
-def is_notebook():
- # Is environment a Jupyter notebook? Verified on Colab, Jupyterlab, Kaggle, Paperspace
- ipython_type = str(type(IPython.get_ipython()))
- return 'colab' in ipython_type or 'zmqshell' in ipython_type
-
-
-def is_kaggle():
- # Is environment a Kaggle Notebook?
- return os.environ.get('PWD') == '/kaggle/working' and os.environ.get('KAGGLE_URL_BASE') == 'https://www.kaggle.com'
-
-
-def is_docker() -> bool:
- """Check if the process runs inside a docker container."""
- if Path("/.dockerenv").exists():
- return True
- try: # check if docker is in control groups
- with open("/proc/self/cgroup") as file:
- return any("docker" in line for line in file)
- except OSError:
- return False
-
-
-def is_writeable(dir, test=False):
- # Return True if directory has write permissions, test opening a file with write permissions if test=True
- if not test:
- return os.access(dir, os.W_OK) # possible issues on Windows
- file = Path(dir) / 'tmp.txt'
- try:
- with open(file, 'w'): # open file with write permissions
- pass
- file.unlink() # remove file
- return True
- except OSError:
- return False
-
-
-LOGGING_NAME = "yolov5"
-
-
-def set_logging(name=LOGGING_NAME, verbose=True):
- # sets up logging for the given name
- rank = int(os.getenv('RANK', -1)) # rank in world for Multi-GPU trainings
- level = logging.INFO if verbose and rank in {-1, 0} else logging.ERROR
- logging.config.dictConfig({
- "version": 1,
- "disable_existing_loggers": False,
- "formatters": {
- name: {
- "format": "%(message)s"}},
- "handlers": {
- name: {
- "class": "logging.StreamHandler",
- "formatter": name,
- "level": level,}},
- "loggers": {
- name: {
- "level": level,
- "handlers": [name],
- "propagate": False,}}})
-
-
-set_logging(LOGGING_NAME) # run before defining LOGGER
-LOGGER = logging.getLogger(LOGGING_NAME) # define globally (used in train.py, val.py, detect.py, etc.)
-if platform.system() == 'Windows':
- for fn in LOGGER.info, LOGGER.warning:
- setattr(LOGGER, fn.__name__, lambda x, fn=fn: fn(emojis(x))) # emoji safe logging (bind fn per iteration, not late)
-
-
-def user_config_dir(dir='Ultralytics', env_var='YOLOV5_CONFIG_DIR'):
- # Return path of user configuration directory. Prefer environment variable if exists. Make dir if required.
- env = os.getenv(env_var)
- if env:
- path = Path(env) # use environment variable
- else:
- cfg = {'Windows': 'AppData/Roaming', 'Linux': '.config', 'Darwin': 'Library/Application Support'} # 3 OS dirs
- path = Path.home() / cfg.get(platform.system(), '') # OS-specific config dir
- path = (path if is_writeable(path) else Path('/tmp')) / dir # GCP and AWS lambda fix, only /tmp is writeable
- path.mkdir(exist_ok=True) # make if required
- return path
-
-
-CONFIG_DIR = user_config_dir() # Ultralytics settings dir
-
-
-class Profile(contextlib.ContextDecorator):
- # YOLOv5 Profile class. Usage: @Profile() decorator or 'with Profile():' context manager
- def __init__(self, t=0.0):
- self.t = t
- self.cuda = torch.cuda.is_available()
-
- def __enter__(self):
- self.start = self.time()
- return self
-
- def __exit__(self, type, value, traceback):
- self.dt = self.time() - self.start # delta-time
- self.t += self.dt # accumulate dt
-
- def time(self):
- if self.cuda:
- torch.cuda.synchronize()
- return time.time()
-
-
-class Timeout(contextlib.ContextDecorator):
- # YOLOv5 Timeout class. Usage: @Timeout(seconds) decorator or 'with Timeout(seconds):' context manager
- def __init__(self, seconds, *, timeout_msg='', suppress_timeout_errors=True):
- self.seconds = int(seconds)
- self.timeout_message = timeout_msg
- self.suppress = bool(suppress_timeout_errors)
-
- def _timeout_handler(self, signum, frame):
- raise TimeoutError(self.timeout_message)
-
- def __enter__(self):
- if platform.system() != 'Windows': # not supported on Windows
- signal.signal(signal.SIGALRM, self._timeout_handler) # Set handler for SIGALRM
- signal.alarm(self.seconds) # start countdown for SIGALRM to be raised
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- if platform.system() != 'Windows':
- signal.alarm(0) # Cancel SIGALRM if it's scheduled
- if self.suppress and exc_type is TimeoutError: # Suppress TimeoutError
- return True
-
-
-class WorkingDirectory(contextlib.ContextDecorator):
- # Usage: @WorkingDirectory(dir) decorator or 'with WorkingDirectory(dir):' context manager
- def __init__(self, new_dir):
- self.dir = new_dir # new dir
- self.cwd = Path.cwd().resolve() # current dir
-
- def __enter__(self):
- os.chdir(self.dir)
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- os.chdir(self.cwd)
-
-
-def methods(instance):
- # Get class/instance methods
- return [f for f in dir(instance) if callable(getattr(instance, f)) and not f.startswith("__")]
-
-
-def print_args(args: Optional[dict] = None, show_file=True, show_func=False):
- # Print function arguments (optional args dict)
- x = inspect.currentframe().f_back # previous frame
- file, _, func, _, _ = inspect.getframeinfo(x)
- if args is None: # get args automatically
- args, _, _, frm = inspect.getargvalues(x)
- args = {k: v for k, v in frm.items() if k in args}
- try:
- file = Path(file).resolve().relative_to(ROOT).with_suffix('')
- except ValueError:
- file = Path(file).stem
- s = (f'{file}: ' if show_file else '') + (f'{func}: ' if show_func else '')
- LOGGER.info(colorstr(s) + ', '.join(f'{k}={v}' for k, v in args.items()))
-
-
-def init_seeds(seed=0, deterministic=False):
- # Initialize random number generator (RNG) seeds https://pytorch.org/docs/stable/notes/randomness.html
- random.seed(seed)
- np.random.seed(seed)
- torch.manual_seed(seed)
- torch.cuda.manual_seed(seed)
- torch.cuda.manual_seed_all(seed) # for Multi-GPU, exception safe
- # torch.backends.cudnn.benchmark = True # AutoBatch problem https://github.com/ultralytics/yolov5/issues/9287
- if deterministic and check_version(torch.__version__, '1.12.0'): # https://github.com/ultralytics/yolov5/pull/8213
- torch.use_deterministic_algorithms(True)
- torch.backends.cudnn.deterministic = True
- os.environ['CUBLAS_WORKSPACE_CONFIG'] = ':4096:8'
- os.environ['PYTHONHASHSEED'] = str(seed)
-
-
-def intersect_dicts(da, db, exclude=()):
- # Dictionary intersection of matching keys and shapes, omitting 'exclude' keys, using da values
- return {k: v for k, v in da.items() if k in db and all(x not in k for x in exclude) and v.shape == db[k].shape}
-
-
-def get_default_args(func):
- # Get func() default arguments
- signature = inspect.signature(func)
- return {k: v.default for k, v in signature.parameters.items() if v.default is not inspect.Parameter.empty}
-
-
-def get_latest_run(search_dir='.'):
- # Return path to most recent 'last.pt' in /runs (i.e. to --resume from)
- last_list = glob.glob(f'{search_dir}/**/last*.pt', recursive=True)
- return max(last_list, key=os.path.getctime) if last_list else ''
-
-
-def file_age(path=__file__):
- # Return days since last file update
- dt = (datetime.now() - datetime.fromtimestamp(Path(path).stat().st_mtime)) # delta
- return dt.days # + dt.seconds / 86400 # fractional days
-
-
-def file_date(path=__file__):
- # Return human-readable file modification date, i.e. '2021-3-26'
- t = datetime.fromtimestamp(Path(path).stat().st_mtime)
- return f'{t.year}-{t.month}-{t.day}'
-
-
-def file_size(path):
- # Return file/dir size (MB)
- mb = 1 << 20 # bytes to MiB (1024 ** 2)
- path = Path(path)
- if path.is_file():
- return path.stat().st_size / mb
- elif path.is_dir():
- return sum(f.stat().st_size for f in path.glob('**/*') if f.is_file()) / mb
- else:
- return 0.0
-
-
-def check_online():
- # Check internet connectivity
- import socket
-
- def run_once():
- # Check once
- try:
- socket.create_connection(("1.1.1.1", 443), 5) # check host accessibility
- return True
- except OSError:
- return False
-
- return run_once() or run_once() # check twice to increase robustness to intermittent connectivity issues
-
-
-def git_describe(path=ROOT): # path must be a directory
- # Return human-readable git description, i.e. v5.0-5-g3e25f1e https://git-scm.com/docs/git-describe
- try:
- assert (Path(path) / '.git').is_dir()
- return check_output(f'git -C {path} describe --tags --long --always', shell=True).decode()[:-1]
- except Exception:
- return ''
-
-
-@TryExcept()
-@WorkingDirectory(ROOT)
-def check_git_status(repo='ultralytics/yolov5', branch='master'):
- # YOLOv5 status check, recommend 'git pull' if code is out of date
- url = f'https://github.com/{repo}'
- msg = f', for updates see {url}'
- s = colorstr('github: ') # string
- assert Path('.git').exists(), s + 'skipping check (not a git repository)' + msg
- assert check_online(), s + 'skipping check (offline)' + msg
-
- splits = re.split(pattern=r'\s', string=check_output('git remote -v', shell=True).decode())
- matches = [repo in s for s in splits]
- if any(matches):
- remote = splits[matches.index(True) - 1]
- else:
- remote = 'ultralytics'
- check_output(f'git remote add {remote} {url}', shell=True)
- check_output(f'git fetch {remote}', shell=True, timeout=5) # git fetch
- local_branch = check_output('git rev-parse --abbrev-ref HEAD', shell=True).decode().strip() # checked out
- n = int(check_output(f'git rev-list {local_branch}..{remote}/{branch} --count', shell=True)) # commits behind
- if n > 0:
- pull = 'git pull' if remote == 'origin' else f'git pull {remote} {branch}'
- s += f"⚠️ YOLOv5 is out of date by {n} commit{'s' * (n > 1)}. Use `{pull}` or `git clone {url}` to update."
- else:
- s += f'up to date with {url} ✅'
- LOGGER.info(s)
-
-
-@WorkingDirectory(ROOT)
-def check_git_info(path='.'):
- # YOLOv5 git info check, return {remote, branch, commit}
- check_requirements('gitpython')
- import git
- try:
- repo = git.Repo(path)
- remote = repo.remotes.origin.url.replace('.git', '') # i.e. 'https://github.com/ultralytics/yolov5'
- commit = repo.head.commit.hexsha # i.e. '3134699c73af83aac2a481435550b968d5792c0d'
- try:
- branch = repo.active_branch.name # i.e. 'main'
- except TypeError: # not on any branch
- branch = None # i.e. 'detached HEAD' state
- return {'remote': remote, 'branch': branch, 'commit': commit}
- except git.exc.InvalidGitRepositoryError: # path is not a git dir
- return {'remote': None, 'branch': None, 'commit': None}
-
-
-def check_python(minimum='3.7.0'):
- # Check current python version vs. required python version
- check_version(platform.python_version(), minimum, name='Python ', hard=True)
-
-
-def check_version(current='0.0.0', minimum='0.0.0', name='version ', pinned=False, hard=False, verbose=False):
- # Check version vs. required version
- current, minimum = (pkg.parse_version(x) for x in (current, minimum))
- result = (current == minimum) if pinned else (current >= minimum) # bool
- s = f'WARNING ⚠️ {name}{minimum} is required by YOLOv5, but {name}{current} is currently installed' # string
- if hard:
- assert result, emojis(s) # assert min requirements met
- if verbose and not result:
- LOGGER.warning(s)
- return result
-
-
-@TryExcept()
-def check_requirements(requirements=ROOT / 'requirements.txt', exclude=(), install=True, cmds=''):
- # Check installed dependencies meet YOLOv5 requirements (pass *.txt file or list of packages or single package str)
- prefix = colorstr('red', 'bold', 'requirements:')
- check_python() # check python version
- if isinstance(requirements, Path): # requirements.txt file
- file = requirements.resolve()
- assert file.exists(), f"{prefix} {file} not found, check failed."
- with file.open() as f:
- requirements = [f'{x.name}{x.specifier}' for x in pkg.parse_requirements(f) if x.name not in exclude]
- elif isinstance(requirements, str):
- requirements = [requirements]
-
- s = ''
- n = 0
- for r in requirements:
- try:
- pkg.require(r)
- except (pkg.VersionConflict, pkg.DistributionNotFound): # exception if requirements not met
- s += f'"{r}" '
- n += 1
-
- if s and install and AUTOINSTALL: # check environment variable
- LOGGER.info(f"{prefix} YOLOv5 requirement{'s' * (n > 1)} {s}not found, attempting AutoUpdate...")
- try:
- # assert check_online(), "AutoUpdate skipped (offline)"
- LOGGER.info(check_output(f'pip install {s} {cmds}', shell=True).decode())
- source = file if 'file' in locals() else requirements
- s = f"{prefix} {n} package{'s' * (n > 1)} updated per {source}\n" \
- f"{prefix} ⚠️ {colorstr('bold', 'Restart runtime or rerun command for updates to take effect')}\n"
- LOGGER.info(s)
- except Exception as e:
- LOGGER.warning(f'{prefix} ❌ {e}')
-
-
-def check_img_size(imgsz, s=32, floor=0):
- # Verify image size is a multiple of stride s in each dimension
- if isinstance(imgsz, int): # integer i.e. img_size=640
- new_size = max(make_divisible(imgsz, int(s)), floor)
- else: # list i.e. img_size=[640, 480]
- imgsz = list(imgsz) # convert to list if tuple
- new_size = [max(make_divisible(x, int(s)), floor) for x in imgsz]
- if new_size != imgsz:
- LOGGER.warning(f'WARNING ⚠️ --img-size {imgsz} must be multiple of max stride {s}, updating to {new_size}')
- return new_size
-
-
-def check_imshow(warn=False):
- # Check if environment supports image displays
- try:
- assert not is_notebook()
- assert not is_docker()
- cv2.imshow('test', np.zeros((1, 1, 3)))
- cv2.waitKey(1)
- cv2.destroyAllWindows()
- cv2.waitKey(1)
- return True
- except Exception as e:
- if warn:
- LOGGER.warning(f'WARNING ⚠️ Environment does not support cv2.imshow() or PIL Image.show()\n{e}')
- return False
-
-
-def check_suffix(file='yolov5s.pt', suffix=('.pt',), msg=''):
- # Check file(s) for acceptable suffix
- if file and suffix:
- if isinstance(suffix, str):
- suffix = [suffix]
- for f in file if isinstance(file, (list, tuple)) else [file]:
- s = Path(f).suffix.lower() # file suffix
- if len(s):
- assert s in suffix, f"{msg}{f} acceptable suffix is {suffix}"
-
-
-def check_yaml(file, suffix=('.yaml', '.yml')):
- # Search/download YAML file (if necessary) and return path, checking suffix
- return check_file(file, suffix)
-
-
-def check_file(file, suffix=''):
- # Search/download file (if necessary) and return path
- check_suffix(file, suffix) # optional
- file = str(file) # convert to str()
- if os.path.isfile(file) or not file: # exists
- return file
- elif file.startswith(('http:/', 'https:/')): # download
- url = file # warning: Pathlib turns :// -> :/
- file = Path(urllib.parse.unquote(file).split('?')[0]).name # '%2F' to '/', split https://url.com/file.txt?auth
- if os.path.isfile(file):
- LOGGER.info(f'Found {url} locally at {file}') # file already exists
- else:
- LOGGER.info(f'Downloading {url} to {file}...')
- torch.hub.download_url_to_file(url, file)
- assert Path(file).exists() and Path(file).stat().st_size > 0, f'File download failed: {url}' # check
- return file
- elif file.startswith('clearml://'): # ClearML Dataset ID
- assert 'clearml' in sys.modules, "ClearML is not installed, so cannot use ClearML dataset. Try running 'pip install clearml'."
- return file
- else: # search
- files = []
- for d in 'data', 'models', 'utils': # search directories
- files.extend(glob.glob(str(ROOT / d / '**' / file), recursive=True)) # find file
- assert len(files), f'File not found: {file}' # assert file was found
- assert len(files) == 1, f"Multiple files match '{file}', specify exact path: {files}" # assert unique
- return files[0] # return file
-
-
-def check_font(font=FONT, progress=False):
- # Download font to CONFIG_DIR if necessary
- font = Path(font)
- file = CONFIG_DIR / font.name
- if not font.exists() and not file.exists():
- url = f'https://ultralytics.com/assets/{font.name}'
- LOGGER.info(f'Downloading {url} to {file}...')
- torch.hub.download_url_to_file(url, str(file), progress=progress)
-
-
-def check_dataset(data, autodownload=True):
- # Download, check and/or unzip dataset if not found locally
-
- # Download (optional)
- extract_dir = ''
- if isinstance(data, (str, Path)) and (is_zipfile(data) or is_tarfile(data)):
- download(data, dir=f'{DATASETS_DIR}/{Path(data).stem}', unzip=True, delete=False, curl=False, threads=1)
- data = next((DATASETS_DIR / Path(data).stem).rglob('*.yaml'))
- extract_dir, autodownload = data.parent, False
-
- # Read yaml (optional)
- if isinstance(data, (str, Path)):
- data = yaml_load(data) # dictionary
-
- # Checks
- for k in 'train', 'val', 'names':
- assert k in data, emojis(f"data.yaml '{k}:' field missing ❌")
- if isinstance(data['names'], (list, tuple)): # old array format
- data['names'] = dict(enumerate(data['names'])) # convert to dict
- assert all(isinstance(k, int) for k in data['names'].keys()), 'data.yaml names keys must be integers, i.e. 2: car'
- data['nc'] = len(data['names'])
-
- # Resolve paths
- path = Path(extract_dir or data.get('path') or '') # optional 'path' default to '.'
- if not path.is_absolute():
- path = (ROOT / path).resolve()
- data['path'] = path # download scripts
- for k in 'train', 'val', 'test':
- if data.get(k): # prepend path
- if isinstance(data[k], str):
- x = (path / data[k]).resolve()
- if not x.exists() and data[k].startswith('../'):
- x = (path / data[k][3:]).resolve()
- data[k] = str(x)
- else:
- data[k] = [str((path / x).resolve()) for x in data[k]]
-
- # Parse yaml
- train, val, test, s = (data.get(x) for x in ('train', 'val', 'test', 'download'))
- if val:
- val = [Path(x).resolve() for x in (val if isinstance(val, list) else [val])] # val path
- if not all(x.exists() for x in val):
- LOGGER.info('\nDataset not found ⚠️, missing paths %s' % [str(x) for x in val if not x.exists()])
- if not s or not autodownload:
- raise Exception('Dataset not found ❌')
- t = time.time()
- if s.startswith('http') and s.endswith('.zip'): # URL
- f = Path(s).name # filename
- LOGGER.info(f'Downloading {s} to {f}...')
- torch.hub.download_url_to_file(s, f)
- Path(DATASETS_DIR).mkdir(parents=True, exist_ok=True) # create root
- unzip_file(f, path=DATASETS_DIR) # unzip
- Path(f).unlink() # remove zip
- r = None # success
- elif s.startswith('bash '): # bash script
- LOGGER.info(f'Running {s} ...')
- r = os.system(s)
- else: # python script
- r = exec(s, {'yaml': data}) # return None
- dt = f'({round(time.time() - t, 1)}s)'
- s = f"success ✅ {dt}, saved to {colorstr('bold', DATASETS_DIR)}" if r in (0, None) else f"failure {dt} ❌"
- LOGGER.info(f"Dataset download {s}")
- check_font('Arial.ttf' if is_ascii(data['names']) else 'Arial.Unicode.ttf', progress=True) # download fonts
- return data # dictionary
-
-
-def check_amp(model):
- # Check PyTorch Automatic Mixed Precision (AMP) functionality. Return True on correct operation
- from models.common import AutoShape, DetectMultiBackend
-
- def amp_allclose(model, im):
- # All close FP32 vs AMP results
- m = AutoShape(model, verbose=False) # model
- a = m(im).xywhn[0] # FP32 inference
- m.amp = True
- b = m(im).xywhn[0] # AMP inference
- return a.shape == b.shape and torch.allclose(a, b, atol=0.1) # close to 10% absolute tolerance
-
- prefix = colorstr('AMP: ')
- device = next(model.parameters()).device # get model device
- if device.type in ('cpu', 'mps'):
- return False # AMP only used on CUDA devices
- f = ROOT / 'data' / 'images' / 'bus.jpg' # image to check
- im = f if f.exists() else 'https://ultralytics.com/images/bus.jpg' if check_online() else np.ones((640, 640, 3))
- try:
- assert amp_allclose(deepcopy(model), im) or amp_allclose(DetectMultiBackend('yolov5n.pt', device), im)
- LOGGER.info(f'{prefix}checks passed ✅')
- return True
- except Exception:
- help_url = 'https://github.com/ultralytics/yolov5/issues/7908'
- LOGGER.warning(f'{prefix}checks failed ❌, disabling Automatic Mixed Precision. See {help_url}')
- return False
-
-
-def yaml_load(file='data.yaml'):
- # Single-line safe yaml loading
- with open(file, errors='ignore') as f:
- return yaml.safe_load(f)
-
-
-def yaml_save(file='data.yaml', data={}):
- # Single-line safe yaml saving
- with open(file, 'w') as f:
- yaml.safe_dump({k: str(v) if isinstance(v, Path) else v for k, v in data.items()}, f, sort_keys=False)
-
-
-def unzip_file(file, path=None, exclude=('.DS_Store', '__MACOSX')):
- # Unzip a *.zip file to path/, excluding files containing strings in exclude list
- if path is None:
- path = Path(file).parent # default path
- with ZipFile(file) as zipObj:
- for f in zipObj.namelist(): # list all archived filenames in the zip
- if all(x not in f for x in exclude):
- zipObj.extract(f, path=path)
-
-
-def url2file(url):
- # Convert URL to filename, i.e. https://url.com/file.txt?auth -> file.txt
- url = str(Path(url)).replace(':/', '://') # Pathlib turns :// -> :/
- return Path(urllib.parse.unquote(url)).name.split('?')[0] # '%2F' to '/', split https://url.com/file.txt?auth
-
-
-def download(url, dir='.', unzip=True, delete=True, curl=False, threads=1, retry=3):
- # Multithreaded file download and unzip function, used in data.yaml for autodownload
- def download_one(url, dir):
- # Download 1 file
- success = True
- if os.path.isfile(url):
- f = Path(url) # filename
- else: # does not exist
- f = dir / Path(url).name
- LOGGER.info(f'Downloading {url} to {f}...')
- for i in range(retry + 1):
- if curl:
- s = 'sS' if threads > 1 else '' # silent
- r = os.system(
- f'curl -# -{s}L "{url}" -o "{f}" --retry 9 -C -') # curl download with retry, continue
- success = r == 0
- else:
- torch.hub.download_url_to_file(url, f, progress=threads == 1) # torch download
- success = f.is_file()
- if success:
- break
- elif i < retry:
- LOGGER.warning(f'⚠️ Download failure, retrying {i + 1}/{retry} {url}...')
- else:
- LOGGER.warning(f'❌ Failed to download {url}...')
-
- if unzip and success and (f.suffix == '.gz' or is_zipfile(f) or is_tarfile(f)):
- LOGGER.info(f'Unzipping {f}...')
- if is_zipfile(f):
- unzip_file(f, dir) # unzip
- elif is_tarfile(f):
- os.system(f'tar xf {f} --directory {f.parent}') # unzip
- elif f.suffix == '.gz':
- os.system(f'tar xfz {f} --directory {f.parent}') # unzip
- if delete:
- f.unlink() # remove zip
-
- dir = Path(dir)
- dir.mkdir(parents=True, exist_ok=True) # make directory
- if threads > 1:
- pool = ThreadPool(threads)
- pool.imap(lambda x: download_one(*x), zip(url, repeat(dir))) # multithreaded
- pool.close()
- pool.join()
- else:
- for u in [url] if isinstance(url, (str, Path)) else url:
- download_one(u, dir)
-
-
-def make_divisible(x, divisor):
- # Returns nearest x divisible by divisor
- if isinstance(divisor, torch.Tensor):
- divisor = int(divisor.max()) # to int
- return math.ceil(x / divisor) * divisor
-
-
-def clean_str(s):
- # Cleans a string by replacing special characters with underscore _
- return re.sub(pattern="[|@#!¡·$€%&()=?¿^*;:,¨´><+]", repl="_", string=s)
-
-
-def one_cycle(y1=0.0, y2=1.0, steps=100):
- # lambda function for sinusoidal ramp from y1 to y2 https://arxiv.org/pdf/1812.01187.pdf
- return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1
-
-
-def colorstr(*input):
- # Colors a string https://en.wikipedia.org/wiki/ANSI_escape_code, i.e. colorstr('blue', 'hello world')
- *args, string = input if len(input) > 1 else ('blue', 'bold', input[0]) # color arguments, string
- colors = {
- 'black': '\033[30m', # basic colors
- 'red': '\033[31m',
- 'green': '\033[32m',
- 'yellow': '\033[33m',
- 'blue': '\033[34m',
- 'magenta': '\033[35m',
- 'cyan': '\033[36m',
- 'white': '\033[37m',
- 'bright_black': '\033[90m', # bright colors
- 'bright_red': '\033[91m',
- 'bright_green': '\033[92m',
- 'bright_yellow': '\033[93m',
- 'bright_blue': '\033[94m',
- 'bright_magenta': '\033[95m',
- 'bright_cyan': '\033[96m',
- 'bright_white': '\033[97m',
- 'end': '\033[0m', # misc
- 'bold': '\033[1m',
- 'underline': '\033[4m'}
- return ''.join(colors[x] for x in args) + f'{string}' + colors['end']
-
-
-def labels_to_class_weights(labels, nc=80):
- # Get class weights (inverse frequency) from training labels
- if labels[0] is None: # no labels loaded
- return torch.Tensor()
-
- labels = np.concatenate(labels, 0) # labels.shape = (866643, 5) for COCO
- classes = labels[:, 0].astype(int) # labels = [class xywh]
- weights = np.bincount(classes, minlength=nc) # occurrences per class
-
- # Prepend gridpoint count (for uCE training)
- # gpi = ((320 / 32 * np.array([1, 2, 4])) ** 2 * 3).sum() # gridpoints per image
- # weights = np.hstack([gpi * len(labels) - weights.sum() * 9, weights * 9]) ** 0.5 # prepend gridpoints to start
-
- weights[weights == 0] = 1 # replace empty bins with 1
- weights = 1 / weights # number of targets per class
- weights /= weights.sum() # normalize
- return torch.from_numpy(weights).float()
-
-
-def labels_to_image_weights(labels, nc=80, class_weights=np.ones(80)):
- # Produces image weights based on class_weights and image contents
- # Usage: index = random.choices(range(n), weights=image_weights, k=1) # weighted image sample
- class_counts = np.array([np.bincount(x[:, 0].astype(int), minlength=nc) for x in labels])
- return (class_weights.reshape(1, nc) * class_counts).sum(1)
-
-
-def coco80_to_coco91_class(): # converts 80-index (val2014) to 91-index (paper)
- # https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/
- # a = np.loadtxt('data/coco.names', dtype='str', delimiter='\n')
- # b = np.loadtxt('data/coco_paper.names', dtype='str', delimiter='\n')
- # x1 = [list(a[i] == b).index(True) + 1 for i in range(80)] # darknet to coco
- # x2 = [list(b[i] == a).index(True) if any(b[i] == a) else None for i in range(91)] # coco to darknet
- return [
- 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34,
- 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63,
- 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90]
-
-
-def xyxy2xywh(x):
- # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center
- y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center
- y[:, 2] = x[:, 2] - x[:, 0] # width
- y[:, 3] = x[:, 3] - x[:, 1] # height
- return y
-
-
-def xywh2xyxy(x):
- # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x
- y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y
- y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x
- y[:, 3] = x[:, 1] + x[:, 3] / 2 # bottom right y
- return y
-
-
-def xywhn2xyxy(x, w=640, h=640, padw=0, padh=0):
- # Convert nx4 boxes from [x, y, w, h] normalized to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = w * (x[:, 0] - x[:, 2] / 2) + padw # top left x
- y[:, 1] = h * (x[:, 1] - x[:, 3] / 2) + padh # top left y
- y[:, 2] = w * (x[:, 0] + x[:, 2] / 2) + padw # bottom right x
- y[:, 3] = h * (x[:, 1] + x[:, 3] / 2) + padh # bottom right y
- return y
-
-
-def xyxy2xywhn(x, w=640, h=640, clip=False, eps=0.0):
- # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] normalized where xy1=top-left, xy2=bottom-right
- if clip:
- clip_boxes(x, (h - eps, w - eps)) # warning: inplace clip
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = ((x[:, 0] + x[:, 2]) / 2) / w # x center
- y[:, 1] = ((x[:, 1] + x[:, 3]) / 2) / h # y center
- y[:, 2] = (x[:, 2] - x[:, 0]) / w # width
- y[:, 3] = (x[:, 3] - x[:, 1]) / h # height
- return y
-
-
-def xyn2xy(x, w=640, h=640, padw=0, padh=0):
- # Convert normalized segments into pixel segments, shape (n,2)
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = w * x[:, 0] + padw # top left x
- y[:, 1] = h * x[:, 1] + padh # top left y
- return y
-
-
-def segment2box(segment, width=640, height=640):
- # Convert 1 segment label to 1 box label, applying inside-image constraint, i.e. (xy1, xy2, ...) to (xyxy)
- x, y = segment.T # segment xy
- inside = (x >= 0) & (y >= 0) & (x <= width) & (y <= height)
- x, y = x[inside], y[inside]
- return np.array([x.min(), y.min(), x.max(), y.max()]) if any(x) else np.zeros((1, 4)) # xyxy
-
-
-def segments2boxes(segments):
- # Convert segment labels to box labels, i.e. (cls, xy1, xy2, ...) to (cls, xywh)
- boxes = []
- for s in segments:
- x, y = s.T # segment xy
- boxes.append([x.min(), y.min(), x.max(), y.max()]) # cls, xyxy
- return xyxy2xywh(np.array(boxes)) # cls, xywh
-
-
-def resample_segments(segments, n=1000):
- # Up-sample an (n,2) segment
- for i, s in enumerate(segments):
- s = np.concatenate((s, s[0:1, :]), axis=0)
- x = np.linspace(0, len(s) - 1, n)
- xp = np.arange(len(s))
- segments[i] = np.concatenate([np.interp(x, xp, s[:, i]) for i in range(2)]).reshape(2, -1).T # segment xy
- return segments
-
-
-def scale_boxes(img1_shape, boxes, img0_shape, ratio_pad=None):
- # Rescale boxes (xyxy) from img1_shape to img0_shape
- if ratio_pad is None: # calculate from img0_shape
- gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new
- pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding
- else:
- gain = ratio_pad[0][0]
- pad = ratio_pad[1]
-
- boxes[:, [0, 2]] -= pad[0] # x padding
- boxes[:, [1, 3]] -= pad[1] # y padding
- boxes[:, :4] /= gain
- clip_boxes(boxes, img0_shape)
- return boxes
-
-
-def scale_segments(img1_shape, segments, img0_shape, ratio_pad=None, normalize=False):
- # Rescale coords (xyxy) from img1_shape to img0_shape
- if ratio_pad is None: # calculate from img0_shape
- gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new
- pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding
- else:
- gain = ratio_pad[0][0]
- pad = ratio_pad[1]
-
- segments[:, 0] -= pad[0] # x padding
- segments[:, 1] -= pad[1] # y padding
- segments /= gain
- clip_segments(segments, img0_shape)
- if normalize:
- segments[:, 0] /= img0_shape[1] # width
- segments[:, 1] /= img0_shape[0] # height
- return segments
-
-
-def clip_boxes(boxes, shape):
- # Clip boxes (xyxy) to image shape (height, width)
- if isinstance(boxes, torch.Tensor): # faster individually
- boxes[:, 0].clamp_(0, shape[1]) # x1
- boxes[:, 1].clamp_(0, shape[0]) # y1
- boxes[:, 2].clamp_(0, shape[1]) # x2
- boxes[:, 3].clamp_(0, shape[0]) # y2
- else: # np.array (faster grouped)
- boxes[:, [0, 2]] = boxes[:, [0, 2]].clip(0, shape[1]) # x1, x2
- boxes[:, [1, 3]] = boxes[:, [1, 3]].clip(0, shape[0]) # y1, y2
-
-
-def clip_segments(segments, shape):
- # Clip segments (xy1,xy2,...) to image shape (height, width)
- if isinstance(segments, torch.Tensor): # faster individually
- segments[:, 0].clamp_(0, shape[1]) # x
- segments[:, 1].clamp_(0, shape[0]) # y
- else: # np.array (faster grouped)
- segments[:, 0] = segments[:, 0].clip(0, shape[1]) # x
- segments[:, 1] = segments[:, 1].clip(0, shape[0]) # y
-
-
-def non_max_suppression(
- prediction,
- conf_thres=0.25,
- iou_thres=0.45,
- classes=None,
- agnostic=False,
- multi_label=False,
- labels=(),
- max_det=300,
- nm=0, # number of masks
-):
- """Non-Maximum Suppression (NMS) on inference results to reject overlapping detections
-
- Returns:
- list of detections, on (n,6) tensor per image [xyxy, conf, cls]
- """
-
- if isinstance(prediction, (list, tuple)):  # YOLOv5 model in validation mode, output = (inference_out, loss_out)
- prediction = prediction[0] # select only inference output
-
- device = prediction.device
- mps = 'mps' in device.type # Apple MPS
- if mps: # MPS not fully supported yet, convert tensors to CPU before NMS
- prediction = prediction.cpu()
- bs = prediction.shape[0] # batch size
- nc = prediction.shape[2] - nm - 5 # number of classes
- xc = prediction[..., 4] > conf_thres # candidates
-
- # Checks
- assert 0 <= conf_thres <= 1, f'Invalid Confidence threshold {conf_thres}, valid values are between 0.0 and 1.0'
- assert 0 <= iou_thres <= 1, f'Invalid IoU {iou_thres}, valid values are between 0.0 and 1.0'
-
- # Settings
- # min_wh = 2 # (pixels) minimum box width and height
- max_wh = 7680 # (pixels) maximum box width and height
- max_nms = 30000 # maximum number of boxes into torchvision.ops.nms()
- time_limit = 0.5 + 0.05 * bs # seconds to quit after
- redundant = True # require redundant detections
- multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img)
- merge = False # use merge-NMS
-
- t = time.time()
- mi = 5 + nc # mask start index
- output = [torch.zeros((0, 6 + nm), device=prediction.device)] * bs
- for xi, x in enumerate(prediction): # image index, image inference
- # Apply constraints
- # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height
- x = x[xc[xi]] # confidence
-
- # Cat apriori labels if autolabelling
- if labels and len(labels[xi]):
- lb = labels[xi]
- v = torch.zeros((len(lb), nc + nm + 5), device=x.device)
- v[:, :4] = lb[:, 1:5] # box
- v[:, 4] = 1.0 # conf
- v[range(len(lb)), lb[:, 0].long() + 5] = 1.0 # cls
- x = torch.cat((x, v), 0)
-
- # If none remain process next image
- if not x.shape[0]:
- continue
-
- # Compute conf
- x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf
-
- # Box/Mask
- box = xywh2xyxy(x[:, :4]) # center_x, center_y, width, height) to (x1, y1, x2, y2)
- mask = x[:, mi:] # zero columns if no masks
-
- # Detections matrix nx6 (xyxy, conf, cls)
- if multi_label:
- i, j = (x[:, 5:mi] > conf_thres).nonzero(as_tuple=False).T
- x = torch.cat((box[i], x[i, 5 + j, None], j[:, None].float(), mask[i]), 1)
- else: # best class only
- conf, j = x[:, 5:mi].max(1, keepdim=True)
- x = torch.cat((box, conf, j.float(), mask), 1)[conf.view(-1) > conf_thres]
-
- # Filter by class
- if classes is not None:
- x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
-
- # Apply finite constraint
- # if not torch.isfinite(x).all():
- # x = x[torch.isfinite(x).all(1)]
-
- # Check shape
- n = x.shape[0] # number of boxes
- if not n: # no boxes
- continue
- elif n > max_nms: # excess boxes
- x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence
- else:
- x = x[x[:, 4].argsort(descending=True)] # sort by confidence
-
- # Batched NMS
- c = x[:, 5:6] * (0 if agnostic else max_wh) # classes
- boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores
- i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
- if i.shape[0] > max_det: # limit detections
- i = i[:max_det]
- if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean)
- # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
- iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix
- weights = iou * scores[None] # box weights
- x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes
- if redundant:
- i = i[iou.sum(1) > 1] # require redundancy
-
- output[xi] = x[i]
- if mps:
- output[xi] = output[xi].to(device)
- if (time.time() - t) > time_limit:
- LOGGER.warning(f'WARNING ⚠️ NMS time limit {time_limit:.3f}s exceeded')
- break # time limit exceeded
-
- return output
-
-
-def strip_optimizer(f='best.pt', s=''): # from utils.general import *; strip_optimizer()
- # Strip optimizer from 'f' to finalize training, optionally save as 's'
- x = torch.load(f, map_location=torch.device('cpu'))
- if x.get('ema'):
- x['model'] = x['ema'] # replace model with ema
- for k in 'optimizer', 'best_fitness', 'ema', 'updates': # keys
- x[k] = None
- x['epoch'] = -1
- x['model'].half() # to FP16
- for p in x['model'].parameters():
- p.requires_grad = False
- torch.save(x, s or f)
- mb = os.path.getsize(s or f) / 1E6 # filesize
- LOGGER.info(f"Optimizer stripped from {f},{f' saved as {s},' if s else ''} {mb:.1f}MB")
-
-
-def print_mutation(keys, results, hyp, save_dir, bucket, prefix=colorstr('evolve: ')):
- evolve_csv = save_dir / 'evolve.csv'
- evolve_yaml = save_dir / 'hyp_evolve.yaml'
- keys = tuple(keys) + tuple(hyp.keys()) # [results + hyps]
- keys = tuple(x.strip() for x in keys)
- vals = results + tuple(hyp.values())
- n = len(keys)
-
- # Download (optional)
- if bucket:
- url = f'gs://{bucket}/evolve.csv'
- if gsutil_getsize(url) > (evolve_csv.stat().st_size if evolve_csv.exists() else 0):
- os.system(f'gsutil cp {url} {save_dir}') # download evolve.csv if larger than local
-
- # Log to evolve.csv
- s = '' if evolve_csv.exists() else (('%20s,' * n % keys).rstrip(',') + '\n') # add header
- with open(evolve_csv, 'a') as f:
- f.write(s + ('%20.5g,' * n % vals).rstrip(',') + '\n')
-
- # Save yaml
- with open(evolve_yaml, 'w') as f:
- data = pd.read_csv(evolve_csv, skipinitialspace=True)
- data = data.rename(columns=lambda x: x.strip()) # strip keys
- i = np.argmax(fitness(data.values[:, :4]))  # index of best generation
- generations = len(data)
- f.write('# YOLOv5 Hyperparameter Evolution Results\n' + f'# Best generation: {i}\n' +
- f'# Last generation: {generations - 1}\n' + '# ' + ', '.join(f'{x.strip():>20s}' for x in keys[:7]) +
- '\n' + '# ' + ', '.join(f'{x:>20.5g}' for x in data.values[i, :7]) + '\n\n')
- yaml.safe_dump(data.loc[i][7:].to_dict(), f, sort_keys=False)
-
- # Print to screen
- LOGGER.info(prefix + f'{generations} generations finished, current result:\n' + prefix +
- ', '.join(f'{x.strip():>20s}' for x in keys) + '\n' + prefix + ', '.join(f'{x:20.5g}'
- for x in vals) + '\n\n')
-
- if bucket:
- os.system(f'gsutil cp {evolve_csv} {evolve_yaml} gs://{bucket}') # upload
-
-
-def apply_classifier(x, model, img, im0):
- # Apply a second stage classifier to YOLO outputs
- # Example model = torchvision.models.__dict__['efficientnet_b0'](pretrained=True).to(device).eval()
- im0 = [im0] if isinstance(im0, np.ndarray) else im0
- for i, d in enumerate(x): # per image
- if d is not None and len(d):
- d = d.clone()
-
- # Reshape and pad cutouts
- b = xyxy2xywh(d[:, :4]) # boxes
- b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # rectangle to square
- b[:, 2:] = b[:, 2:] * 1.3 + 30 # pad
- d[:, :4] = xywh2xyxy(b).long()
-
- # Rescale boxes from img_size to im0 size
- scale_boxes(img.shape[2:], d[:, :4], im0[i].shape)
-
- # Classes
- pred_cls1 = d[:, 5].long()
- ims = []
- for a in d:
- cutout = im0[i][int(a[1]):int(a[3]), int(a[0]):int(a[2])]
- im = cv2.resize(cutout, (224, 224)) # BGR
-
- im = im[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416
- im = np.ascontiguousarray(im, dtype=np.float32) # uint8 to float32
- im /= 255 # 0 - 255 to 0.0 - 1.0
- ims.append(im)
-
- pred_cls2 = model(torch.Tensor(ims).to(d.device)).argmax(1) # classifier prediction
- x[i] = x[i][pred_cls1 == pred_cls2] # retain matching class detections
-
- return x
-
-
-def increment_path(path, exist_ok=False, sep='', mkdir=False):
- # Increment file or directory path, i.e. runs/exp --> runs/exp{sep}2, runs/exp{sep}3, ... etc.
- path = Path(path) # os-agnostic
- if path.exists() and not exist_ok:
- path, suffix = (path.with_suffix(''), path.suffix) if path.is_file() else (path, '')
-
- # Method 1
- for n in range(2, 9999):
- p = f'{path}{sep}{n}{suffix}' # increment path
- if not os.path.exists(p):  # use first non-existing increment
- break
- path = Path(p)
-
- # Method 2 (deprecated)
- # dirs = glob.glob(f"{path}{sep}*") # similar paths
- # matches = [re.search(rf"{path.stem}{sep}(\d+)", d) for d in dirs]
- # i = [int(m.groups()[0]) for m in matches if m] # indices
- # n = max(i) + 1 if i else 2 # increment number
- # path = Path(f"{path}{sep}{n}{suffix}") # increment path
-
- if mkdir:
- path.mkdir(parents=True, exist_ok=True) # make directory
-
- return path
-
-
-# OpenCV Chinese-friendly functions ------------------------------------------------------------------------------------
-imshow_ = cv2.imshow # copy to avoid recursion errors
-
-
-def imread(path, flags=cv2.IMREAD_COLOR):
- return cv2.imdecode(np.fromfile(path, np.uint8), flags)
-
-
-def imwrite(path, im):
- try:
- cv2.imencode(Path(path).suffix, im)[1].tofile(path)
- return True
- except Exception:
- return False
-
-
-def imshow(path, im):
- imshow_(path.encode('unicode_escape').decode(), im)
-
-
-cv2.imread, cv2.imwrite, cv2.imshow = imread, imwrite, imshow # redefine
-
-# Variables ------------------------------------------------------------------------------------------------------------
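The coordinate helpers above are pure array arithmetic between the centre-based xywh layout and the corner-based xyxy layout. A self-contained round-trip sketch of those two conversions in NumPy, restating the same formulas so it runs without the rest of the repository:

import numpy as np

def xywh2xyxy(x):
    y = np.copy(x)
    y[:, 0] = x[:, 0] - x[:, 2] / 2   # top-left x
    y[:, 1] = x[:, 1] - x[:, 3] / 2   # top-left y
    y[:, 2] = x[:, 0] + x[:, 2] / 2   # bottom-right x
    y[:, 3] = x[:, 1] + x[:, 3] / 2   # bottom-right y
    return y

def xyxy2xywh(x):
    y = np.copy(x)
    y[:, 0] = (x[:, 0] + x[:, 2]) / 2  # x centre
    y[:, 1] = (x[:, 1] + x[:, 3]) / 2  # y centre
    y[:, 2] = x[:, 2] - x[:, 0]        # width
    y[:, 3] = x[:, 3] - x[:, 1]        # height
    return y

boxes = np.array([[320.0, 240.0, 100.0, 50.0]])         # one box in xywh
assert np.allclose(xyxy2xywh(xywh2xyxy(boxes)), boxes)  # the round trip is lossless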
diff --git a/spaces/aodianyun/stable-diffusion-webui/modules/sd_models_config.py b/spaces/aodianyun/stable-diffusion-webui/modules/sd_models_config.py
deleted file mode 100644
index 222793d451b3659f7954c208260af71840b475a2..0000000000000000000000000000000000000000
--- a/spaces/aodianyun/stable-diffusion-webui/modules/sd_models_config.py
+++ /dev/null
@@ -1,112 +0,0 @@
-import re
-import os
-
-import torch
-
-from modules import shared, paths, sd_disable_initialization
-
-sd_configs_path = shared.sd_configs_path
-sd_repo_configs_path = os.path.join(paths.paths['Stable Diffusion'], "configs", "stable-diffusion")
-
-
-config_default = shared.sd_default_config
-config_sd2 = os.path.join(sd_repo_configs_path, "v2-inference.yaml")
-config_sd2v = os.path.join(sd_repo_configs_path, "v2-inference-v.yaml")
-config_sd2_inpainting = os.path.join(sd_repo_configs_path, "v2-inpainting-inference.yaml")
-config_depth_model = os.path.join(sd_repo_configs_path, "v2-midas-inference.yaml")
-config_inpainting = os.path.join(sd_configs_path, "v1-inpainting-inference.yaml")
-config_instruct_pix2pix = os.path.join(sd_configs_path, "instruct-pix2pix.yaml")
-config_alt_diffusion = os.path.join(sd_configs_path, "alt-diffusion-inference.yaml")
-
-
-def is_using_v_parameterization_for_sd2(state_dict):
- """
- Detects whether unet in state_dict is using v-parameterization. Returns True if it is. You're welcome.
- """
-
- import ldm.modules.diffusionmodules.openaimodel
- from modules import devices
-
- device = devices.cpu
-
- with sd_disable_initialization.DisableInitialization():
- unet = ldm.modules.diffusionmodules.openaimodel.UNetModel(
- use_checkpoint=True,
- use_fp16=False,
- image_size=32,
- in_channels=4,
- out_channels=4,
- model_channels=320,
- attention_resolutions=[4, 2, 1],
- num_res_blocks=2,
- channel_mult=[1, 2, 4, 4],
- num_head_channels=64,
- use_spatial_transformer=True,
- use_linear_in_transformer=True,
- transformer_depth=1,
- context_dim=1024,
- legacy=False
- )
- unet.eval()
-
- with torch.no_grad():
- unet_sd = {k.replace("model.diffusion_model.", ""): v for k, v in state_dict.items() if "model.diffusion_model." in k}
- unet.load_state_dict(unet_sd, strict=True)
- unet.to(device=device, dtype=torch.float)
-
- test_cond = torch.ones((1, 2, 1024), device=device) * 0.5
- x_test = torch.ones((1, 4, 8, 8), device=device) * 0.5
-
- out = (unet(x_test, torch.asarray([999], device=device), context=test_cond) - x_test).mean().item()
-
- return out < -1
-
-
-def guess_model_config_from_state_dict(sd, filename):
- sd2_cond_proj_weight = sd.get('cond_stage_model.model.transformer.resblocks.0.attn.in_proj_weight', None)
- diffusion_model_input = sd.get('model.diffusion_model.input_blocks.0.0.weight', None)
-
- if sd.get('depth_model.model.pretrained.act_postprocess3.0.project.0.bias', None) is not None:
- return config_depth_model
-
- if sd2_cond_proj_weight is not None and sd2_cond_proj_weight.shape[1] == 1024:
- if diffusion_model_input.shape[1] == 9:
- return config_sd2_inpainting
- elif is_using_v_parameterization_for_sd2(sd):
- return config_sd2v
- else:
- return config_sd2
-
- if diffusion_model_input is not None:
- if diffusion_model_input.shape[1] == 9:
- return config_inpainting
- if diffusion_model_input.shape[1] == 8:
- return config_instruct_pix2pix
-
- if sd.get('cond_stage_model.roberta.embeddings.word_embeddings.weight', None) is not None:
- return config_alt_diffusion
-
- return config_default
-
-
-def find_checkpoint_config(state_dict, info):
- if info is None:
- return guess_model_config_from_state_dict(state_dict, "")
-
- config = find_checkpoint_config_near_filename(info)
- if config is not None:
- return config
-
- return guess_model_config_from_state_dict(state_dict, info.filename)
-
-
-def find_checkpoint_config_near_filename(info):
- if info is None:
- return None
-
- config = os.path.splitext(info.filename)[0] + ".yaml"
- if os.path.exists(config):
- return config
-
- return None
-
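guess_model_config_from_state_dict above dispatches almost entirely on tensor shapes found in the checkpoint: 9 input channels on the first UNet conv indicates an inpainting model, 8 indicates instruct-pix2pix, and a 1024-wide cond-stage projection signals an SD2 variant. A minimal sketch of that shape-based dispatch against a fake state_dict (the returned labels are placeholders for the real config paths):

import torch

def pick_variant(state_dict):
    w = state_dict.get('model.diffusion_model.input_blocks.0.0.weight')
    if w is not None and w.shape[1] == 9:
        return 'inpainting'
    if w is not None and w.shape[1] == 8:
        return 'instruct-pix2pix'
    return 'default'

fake_sd = {'model.diffusion_model.input_blocks.0.0.weight': torch.zeros(320, 9, 3, 3)}
print(pick_variant(fake_sd))   # -> inpainting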
diff --git a/spaces/arch-123/bingo/src/lib/hooks/use-enter-submit.tsx b/spaces/arch-123/bingo/src/lib/hooks/use-enter-submit.tsx
deleted file mode 100644
index d66b2d3253baff164235d4ca791aae6d84721835..0000000000000000000000000000000000000000
--- a/spaces/arch-123/bingo/src/lib/hooks/use-enter-submit.tsx
+++ /dev/null
@@ -1,23 +0,0 @@
-import { useRef, type RefObject } from 'react'
-
-export function useEnterSubmit(): {
- formRef: RefObject<HTMLFormElement>
- onKeyDown: (event: React.KeyboardEvent<HTMLFormElement>) => void
-} {
- const formRef = useRef<HTMLFormElement>(null)
-
- const handleKeyDown = (
- event: React.KeyboardEvent<HTMLFormElement>
- ): void => {
- if (
- event.key === 'Enter' &&
- !event.shiftKey &&
- !event.nativeEvent.isComposing
- ) {
- formRef.current?.requestSubmit()
- event.preventDefault()
- }
- }
-
- return { formRef, onKeyDown: handleKeyDown }
-}
diff --git a/spaces/artificialguybr/video-dubbing/TTS/tests/vocoder_tests/test_vocoder_melgan_generator.py b/spaces/artificialguybr/video-dubbing/TTS/tests/vocoder_tests/test_vocoder_melgan_generator.py
deleted file mode 100644
index f4958de427ece20296adbcec54441455de997518..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/tests/vocoder_tests/test_vocoder_melgan_generator.py
+++ /dev/null
@@ -1,14 +0,0 @@
-import numpy as np
-import torch
-
-from TTS.vocoder.models.melgan_generator import MelganGenerator
-
-
-def test_melgan_generator():
- model = MelganGenerator()
- print(model)
- dummy_input = torch.rand((4, 80, 64))
- output = model(dummy_input)
- assert np.all(output.shape == (4, 1, 64 * 256))
- output = model.inference(dummy_input)
- assert np.all(output.shape == (4, 1, (64 + 4) * 256))
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/absl/flags/_flag.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/absl/flags/_flag.py
deleted file mode 100644
index 124f137166209878b645bdfd59aa20e8b21e8e2d..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/absl/flags/_flag.py
+++ /dev/null
@@ -1,488 +0,0 @@
-# Copyright 2017 The Abseil Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Contains Flag class - information about single command-line flag.
-
-Do NOT import this module directly. Import the flags package and use the
-aliases defined at the package level instead.
-"""
-
-from collections import abc
-import copy
-import functools
-
-from absl.flags import _argument_parser
-from absl.flags import _exceptions
-from absl.flags import _helpers
-
-
-@functools.total_ordering
-class Flag(object):
- """Information about a command-line flag.
-
- Attributes:
- name: the name for this flag
- default: the default value for this flag
- default_unparsed: the unparsed default value for this flag.
- default_as_str: default value as repr'd string, e.g., "'true'"
- (or None)
- value: the most recent parsed value of this flag set by :meth:`parse`
- help: a help string or None if no help is available
- short_name: the single letter alias for this flag (or None)
- boolean: if 'true', this flag does not accept arguments
- present: true if this flag was parsed from command line flags
- parser: an :class:`~absl.flags.ArgumentParser` object
- serializer: an ArgumentSerializer object
- allow_override: the flag may be redefined without raising an error,
- and newly defined flag overrides the old one.
- allow_override_cpp: use the flag from C++ if available the flag
- definition is replaced by the C++ flag after init
- allow_hide_cpp: use the Python flag despite having a C++ flag with
- the same name (ignore the C++ flag)
- using_default_value: the flag value has not been set by user
- allow_overwrite: the flag may be parsed more than once without
- raising an error, the last set value will be used
- allow_using_method_names: whether this flag can be defined even if
- it has a name that conflicts with a FlagValues method.
- validators: list of the flag validators.
-
- The only public method of a ``Flag`` object is :meth:`parse`, but it is
- typically only called by a :class:`~absl.flags.FlagValues` object. The
- :meth:`parse` method is a thin wrapper around the
- :meth:`ArgumentParser.parse()` method. The
- parsed value is saved in ``.value``, and the ``.present`` attribute is
- updated. If this flag was already present, an Error is raised.
-
- :meth:`parse` is also called during ``__init__`` to parse the default value
- and initialize the ``.value`` attribute. This enables other python modules to
- safely use flags even if the ``__main__`` module neglects to parse the
- command line arguments. The ``.present`` attribute is cleared after
- ``__init__`` parsing. If the default value is set to ``None``, then the
- ``__init__`` parsing step is skipped and the ``.value`` attribute is
- initialized to None.
-
- Note: The default value is also presented to the user in the help
- string, so it is important that it be a legal value for this flag.
- """
-
- def __init__(self, parser, serializer, name, default, help_string,
- short_name=None, boolean=False, allow_override=False,
- allow_override_cpp=False, allow_hide_cpp=False,
- allow_overwrite=True, allow_using_method_names=False):
- self.name = name
-
- if not help_string:
- help_string = '(no help available)'
-
- self.help = help_string
- self.short_name = short_name
- self.boolean = boolean
- self.present = 0
- self.parser = parser
- self.serializer = serializer
- self.allow_override = allow_override
- self.allow_override_cpp = allow_override_cpp
- self.allow_hide_cpp = allow_hide_cpp
- self.allow_overwrite = allow_overwrite
- self.allow_using_method_names = allow_using_method_names
-
- self.using_default_value = True
- self._value = None
- self.validators = []
- if self.allow_hide_cpp and self.allow_override_cpp:
- raise _exceptions.Error(
- "Can't have both allow_hide_cpp (means use Python flag) and "
- 'allow_override_cpp (means use C++ flag after InitGoogle)')
-
- self._set_default(default)
-
- @property
- def value(self):
- return self._value
-
- @value.setter
- def value(self, value):
- self._value = value
-
- def __hash__(self):
- return hash(id(self))
-
- def __eq__(self, other):
- return self is other
-
- def __lt__(self, other):
- if isinstance(other, Flag):
- return id(self) < id(other)
- return NotImplemented
-
- def __bool__(self):
- raise TypeError('A Flag instance would always be True. '
- 'Did you mean to test the `.value` attribute?')
-
- def __getstate__(self):
- raise TypeError("can't pickle Flag objects")
-
- def __copy__(self):
- raise TypeError('%s does not support shallow copies. '
- 'Use copy.deepcopy instead.' % type(self).__name__)
-
- def __deepcopy__(self, memo):
- result = object.__new__(type(self))
- result.__dict__ = copy.deepcopy(self.__dict__, memo)
- return result
-
- def _get_parsed_value_as_string(self, value):
- """Returns parsed flag value as string."""
- if value is None:
- return None
- if self.serializer:
- return repr(self.serializer.serialize(value))
- if self.boolean:
- if value:
- return repr('true')
- else:
- return repr('false')
- return repr(str(value))
-
- def parse(self, argument):
- """Parses string and sets flag value.
-
- Args:
- argument: str or the correct flag value type, argument to be parsed.
- """
- if self.present and not self.allow_overwrite:
- raise _exceptions.IllegalFlagValueError(
- 'flag --%s=%s: already defined as %s' % (
- self.name, argument, self.value))
- self.value = self._parse(argument)
- self.present += 1
-
- def _parse(self, argument):
- """Internal parse function.
-
- It returns the parsed value, and does not modify class states.
-
- Args:
- argument: str or the correct flag value type, argument to be parsed.
-
- Returns:
- The parsed value.
- """
- try:
- return self.parser.parse(argument)
- except (TypeError, ValueError) as e: # Recast as IllegalFlagValueError.
- raise _exceptions.IllegalFlagValueError(
- 'flag --%s=%s: %s' % (self.name, argument, e))
-
- def unparse(self):
- self.value = self.default
- self.using_default_value = True
- self.present = 0
-
- def serialize(self):
- """Serializes the flag."""
- return self._serialize(self.value)
-
- def _serialize(self, value):
- """Internal serialize function."""
- if value is None:
- return ''
- if self.boolean:
- if value:
- return '--%s' % self.name
- else:
- return '--no%s' % self.name
- else:
- if not self.serializer:
- raise _exceptions.Error(
- 'Serializer not present for flag %s' % self.name)
- return '--%s=%s' % (self.name, self.serializer.serialize(value))
-
- def _set_default(self, value):
- """Changes the default value (and current value too) for this Flag."""
- self.default_unparsed = value
- if value is None:
- self.default = None
- else:
- self.default = self._parse_from_default(value)
- self.default_as_str = self._get_parsed_value_as_string(self.default)
- if self.using_default_value:
- self.value = self.default
-
- # This is split out so that aliases can skip regular parsing of the default
- # value.
- def _parse_from_default(self, value):
- return self._parse(value)
-
- def flag_type(self):
- """Returns a str that describes the type of the flag.
-
- NOTE: we use strings, and not the types.*Type constants because
- our flags can have more exotic types, e.g., 'comma separated list
- of strings', 'whitespace separated list of strings', etc.
- """
- return self.parser.flag_type()
-
- def _create_xml_dom_element(self, doc, module_name, is_key=False):
- """Returns an XML element that contains this flag's information.
-
- This is information that is relevant to all flags (e.g., name,
- meaning, etc.). If you defined a flag that has some other pieces of
- info, then please override _ExtraXMLInfo.
-
- Please do NOT override this method.
-
- Args:
- doc: minidom.Document, the DOM document it should create nodes from.
- module_name: str, the name of the module that defines this flag.
- is_key: boolean, True iff this flag is key for main module.
-
- Returns:
- A minidom.Element instance.
- """
- element = doc.createElement('flag')
- if is_key:
- element.appendChild(_helpers.create_xml_dom_element(doc, 'key', 'yes'))
- element.appendChild(_helpers.create_xml_dom_element(
- doc, 'file', module_name))
- # Adds flag features that are relevant for all flags.
- element.appendChild(_helpers.create_xml_dom_element(doc, 'name', self.name))
- if self.short_name:
- element.appendChild(_helpers.create_xml_dom_element(
- doc, 'short_name', self.short_name))
- if self.help:
- element.appendChild(_helpers.create_xml_dom_element(
- doc, 'meaning', self.help))
- # The default flag value can either be represented as a string like on the
- # command line, or as a Python object. We serialize this value in the
- # latter case in order to remain consistent.
- if self.serializer and not isinstance(self.default, str):
- if self.default is not None:
- default_serialized = self.serializer.serialize(self.default)
- else:
- default_serialized = ''
- else:
- default_serialized = self.default
- element.appendChild(_helpers.create_xml_dom_element(
- doc, 'default', default_serialized))
- value_serialized = self._serialize_value_for_xml(self.value)
- element.appendChild(_helpers.create_xml_dom_element(
- doc, 'current', value_serialized))
- element.appendChild(_helpers.create_xml_dom_element(
- doc, 'type', self.flag_type()))
- # Adds extra flag features this flag may have.
- for e in self._extra_xml_dom_elements(doc):
- element.appendChild(e)
- return element
-
- def _serialize_value_for_xml(self, value):
- """Returns the serialized value, for use in an XML help text."""
- return value
-
- def _extra_xml_dom_elements(self, doc):
- """Returns extra info about this flag in XML.
-
- "Extra" means "not already included by _create_xml_dom_element above."
-
- Args:
- doc: minidom.Document, the DOM document it should create nodes from.
-
- Returns:
- A list of minidom.Element.
- """
- # Usually, the parser knows the extra details about the flag, so
- # we just forward the call to it.
- return self.parser._custom_xml_dom_elements(doc) # pylint: disable=protected-access
-
-
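-# Editor's illustrative sketch (not part of the original absl file): the
-# command-line forms produced by Flag._serialize above, assuming the default
-# string serializer. The flag names 'update' and 'port' are hypothetical.
-def _sketch_serialize(name, value, boolean=False):
-  if value is None:
-    return ''
-  if boolean:
-    return '--%s' % name if value else '--no%s' % name
-  return '--%s=%s' % (name, value)
-
-
-assert _sketch_serialize('update', True, boolean=True) == '--update'
-assert _sketch_serialize('update', False, boolean=True) == '--noupdate'
-assert _sketch_serialize('port', 8080) == '--port=8080'
-
-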
-class BooleanFlag(Flag):
- """Basic boolean flag.
-
- Boolean flags do not take any arguments, and their value is either
- ``True`` (1) or ``False`` (0). The false value is specified on the command
- line by prepending the word ``'no'`` to either the long or the short flag
- name.
-
- For example, if a Boolean flag was created whose long name was
- ``'update'`` and whose short name was ``'x'``, then this flag could be
- explicitly unset through either ``--noupdate`` or ``--nox``.
- """
-
- def __init__(self, name, default, help, short_name=None, **args): # pylint: disable=redefined-builtin
- p = _argument_parser.BooleanParser()
- super(BooleanFlag, self).__init__(
- p, None, name, default, help, short_name, 1, **args)
-
-
-class EnumFlag(Flag):
- """Basic enum flag; its value can be any string from list of enum_values."""
-
- def __init__(self, name, default, help, enum_values, # pylint: disable=redefined-builtin
- short_name=None, case_sensitive=True, **args):
- p = _argument_parser.EnumParser(enum_values, case_sensitive)
- g = _argument_parser.ArgumentSerializer()
- super(EnumFlag, self).__init__(
- p, g, name, default, help, short_name, **args)
- self.help = '<%s>: %s' % ('|'.join(enum_values), self.help)
-
- def _extra_xml_dom_elements(self, doc):
- elements = []
- for enum_value in self.parser.enum_values:
- elements.append(_helpers.create_xml_dom_element(
- doc, 'enum_value', enum_value))
- return elements
-
-
-class EnumClassFlag(Flag):
- """Basic enum flag; its value is an enum class's member."""
-
- def __init__(
- self,
- name,
- default,
- help, # pylint: disable=redefined-builtin
- enum_class,
- short_name=None,
- case_sensitive=False,
- **args):
- p = _argument_parser.EnumClassParser(
- enum_class, case_sensitive=case_sensitive)
- g = _argument_parser.EnumClassSerializer(lowercase=not case_sensitive)
- super(EnumClassFlag, self).__init__(
- p, g, name, default, help, short_name, **args)
- self.help = '<%s>: %s' % ('|'.join(p.member_names), self.help)
-
- def _extra_xml_dom_elements(self, doc):
- elements = []
- for enum_value in self.parser.enum_class.__members__.keys():
- elements.append(_helpers.create_xml_dom_element(
- doc, 'enum_value', enum_value))
- return elements
-
-
-class MultiFlag(Flag):
-  """A flag that can appear multiple times on the command-line.
-
- The value of such a flag is a list that contains the individual values
- from all the appearances of that flag on the command-line.
-
- See the __doc__ for Flag for most behavior of this class. Only
- differences in behavior are described here:
-
- * The default value may be either a single value or an iterable of values.
- A single value is transformed into a single-item list of that value.
-
- * The value of the flag is always a list, even if the option was
- only supplied once, and even if the default value is a single
-    value.
- """
-
- def __init__(self, *args, **kwargs):
- super(MultiFlag, self).__init__(*args, **kwargs)
- self.help += ';\n repeat this option to specify a list of values'
-
- def parse(self, arguments):
- """Parses one or more arguments with the installed parser.
-
- Args:
- arguments: a single argument or a list of arguments (typically a
- list of default values); a single argument is converted
- internally into a list containing one item.
- """
- new_values = self._parse(arguments)
- if self.present:
- self.value.extend(new_values)
- else:
- self.value = new_values
- self.present += len(new_values)
-
- def _parse(self, arguments):
- if (isinstance(arguments, abc.Iterable) and
- not isinstance(arguments, str)):
- arguments = list(arguments)
-
- if not isinstance(arguments, list):
- # Default value may be a list of values. Most other arguments
- # will not be, so convert them into a single-item list to make
- # processing simpler below.
- arguments = [arguments]
-
- return [super(MultiFlag, self)._parse(item) for item in arguments]
-
- def _serialize(self, value):
- """See base class."""
- if not self.serializer:
- raise _exceptions.Error(
- 'Serializer not present for flag %s' % self.name)
- if value is None:
- return ''
-
- serialized_items = [
- super(MultiFlag, self)._serialize(value_item) for value_item in value
- ]
-
- return '\n'.join(serialized_items)
-
- def flag_type(self):
- """See base class."""
- return 'multi ' + self.parser.flag_type()
-
- def _extra_xml_dom_elements(self, doc):
- elements = []
- if hasattr(self.parser, 'enum_values'):
- for enum_value in self.parser.enum_values:
- elements.append(_helpers.create_xml_dom_element(
- doc, 'enum_value', enum_value))
- return elements
-
-
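-# Editor's sketch (not part of the original absl file): the argument
-# normalization performed by MultiFlag._parse above -- a single value becomes a
-# one-item list, while a list or tuple is used as-is (the real implementation
-# accepts any non-string iterable).
-def _sketch_normalize_multi(arguments):
-  if isinstance(arguments, (list, tuple)):
-    return list(arguments)
-  return [arguments]
-
-
-assert _sketch_normalize_multi('en') == ['en']
-assert _sketch_normalize_multi(('en', 'fr')) == ['en', 'fr']
-
-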
-class MultiEnumClassFlag(MultiFlag):
- """A multi_enum_class flag.
-
- See the __doc__ for MultiFlag for most behaviors of this class. In addition,
- this class knows how to handle enum.Enum instances as values for this flag
- type.
- """
-
- def __init__(self,
- name,
- default,
- help_string,
- enum_class,
- case_sensitive=False,
- **args):
- p = _argument_parser.EnumClassParser(
- enum_class, case_sensitive=case_sensitive)
- g = _argument_parser.EnumClassListSerializer(
- list_sep=',', lowercase=not case_sensitive)
- super(MultiEnumClassFlag, self).__init__(
- p, g, name, default, help_string, **args)
- self.help = (
- '<%s>: %s;\n repeat this option to specify a list of values' %
- ('|'.join(p.member_names), help_string or '(no help available)'))
-
- def _extra_xml_dom_elements(self, doc):
- elements = []
- for enum_value in self.parser.enum_class.__members__.keys():
- elements.append(_helpers.create_xml_dom_element(
- doc, 'enum_value', enum_value))
- return elements
-
- def _serialize_value_for_xml(self, value):
- """See base class."""
- if value is not None:
- value_serialized = self.serializer.serialize(value)
- else:
- value_serialized = ''
- return value_serialized
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/nested_dictionary_dataset.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/nested_dictionary_dataset.py
deleted file mode 100644
index 52e74abddacc923c5e29b0a0c41d7efc85482d3b..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/nested_dictionary_dataset.py
+++ /dev/null
@@ -1,125 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from collections import OrderedDict
-
-import torch
-from torch.utils.data.dataloader import default_collate
-
-from . import FairseqDataset
-
-
-def _flatten(dico, prefix=None):
- """Flatten a nested dictionary."""
- new_dico = OrderedDict()
- if isinstance(dico, dict):
- prefix = prefix + "." if prefix is not None else ""
- for k, v in dico.items():
- if v is None:
- continue
- new_dico.update(_flatten(v, prefix + k))
- elif isinstance(dico, list):
- for i, v in enumerate(dico):
- new_dico.update(_flatten(v, prefix + ".[" + str(i) + "]"))
- else:
- new_dico = OrderedDict({prefix: dico})
- return new_dico
-
-
-def _unflatten(dico):
- """Unflatten a flattened dictionary into a nested dictionary."""
- new_dico = OrderedDict()
- for full_k, v in dico.items():
- full_k = full_k.split(".")
- node = new_dico
- for k in full_k[:-1]:
- if k.startswith("[") and k.endswith("]"):
- k = int(k[1:-1])
- if k not in node:
- node[k] = OrderedDict()
- node = node[k]
- node[full_k[-1]] = v
- return new_dico
-
-
-class NestedDictionaryDataset(FairseqDataset):
- def __init__(self, defn, sizes=None):
- super().__init__()
- self.defn = _flatten(defn)
- self.sizes = [sizes] if not isinstance(sizes, (list, tuple)) else sizes
-
- first = None
- for v in self.defn.values():
- if not isinstance(
- v,
- (
- FairseqDataset,
- torch.utils.data.Dataset,
- ),
- ):
- raise ValueError("Expected Dataset but found: {}".format(v.__class__))
- first = first or v
- if len(v) > 0:
- assert len(v) == len(first), "dataset lengths must match"
-
- self._len = len(first)
-
- def __getitem__(self, index):
- return OrderedDict((k, ds[index]) for k, ds in self.defn.items())
-
- def __len__(self):
- return self._len
-
- def collater(self, samples):
- """Merge a list of samples to form a mini-batch.
-
- Args:
- samples (List[dict]): samples to collate
-
- Returns:
- dict: a mini-batch suitable for forwarding with a Model
- """
- if len(samples) == 0:
- return {}
- sample = OrderedDict()
- for k, ds in self.defn.items():
- try:
- sample[k] = ds.collater([s[k] for s in samples])
- except NotImplementedError:
- sample[k] = default_collate([s[k] for s in samples])
- return _unflatten(sample)
-
- def num_tokens(self, index):
- """Return the number of tokens in a sample. This value is used to
- enforce ``--max-tokens`` during batching."""
- return max(s[index] for s in self.sizes)
-
- def size(self, index):
- """Return an example's size as a float or tuple. This value is used when
- filtering a dataset with ``--max-positions``."""
- if len(self.sizes) == 1:
- return self.sizes[0][index]
- else:
- return (s[index] for s in self.sizes)
-
- @property
- def supports_prefetch(self):
- """Whether this dataset supports prefetching."""
- return any(ds.supports_prefetch for ds in self.defn.values())
-
- def prefetch(self, indices):
- """Prefetch the data required for this epoch."""
- for ds in self.defn.values():
- if getattr(ds, "supports_prefetch", False):
- ds.prefetch(indices)
-
- @property
- def can_reuse_epoch_itr_across_epochs(self):
- return all(ds.can_reuse_epoch_itr_across_epochs for ds in self.defn.values())
-
- def set_epoch(self, epoch):
- super().set_epoch(epoch)
- for ds in self.defn.values():
- ds.set_epoch(epoch)
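-
-
-if __name__ == "__main__":
-    # Editor's illustrative sketch (not part of the original fairseq file): the key
-    # scheme implemented by _flatten/_unflatten above, shown on a small hypothetical
-    # dictionary (plain ints stand in for the datasets the class actually wraps).
-    example = OrderedDict(
-        [("net_input", OrderedDict([("src_tokens", 1), ("src_lengths", 2)])), ("target", 3)]
-    )
-    flat = _flatten(example)
-    assert list(flat.keys()) == ["net_input.src_tokens", "net_input.src_lengths", "target"]
-    assert _unflatten(flat)["net_input"]["src_tokens"] == 1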
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/MMPT/scripts/video_feature_extractor/model.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/MMPT/scripts/video_feature_extractor/model.py
deleted file mode 100644
index ac266e844c86246bbfce02b9e6a2999353661df9..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/MMPT/scripts/video_feature_extractor/model.py
+++ /dev/null
@@ -1,58 +0,0 @@
-# Copyright (c) Howto100M authors and Facebook, Inc. All Rights Reserved
-
-import torch as th
-
-from torch import nn
-
-
-class GlobalAvgPool(nn.Module):
- def __init__(self):
- super(GlobalAvgPool, self).__init__()
-
- def forward(self, x):
- return th.mean(x, dim=[-2, -1])
-
-
-def get_model(args):
- assert args.type in ['2d', '3d', 'vmz', 's3d', 'vae']
- if args.type == '2d':
- print('Loading 2D-ResNet-152 ...')
- import torchvision.models as models
- model = models.resnet152(pretrained=True)
- model = nn.Sequential(*list(model.children())[:-2], GlobalAvgPool())
- model = model.cuda()
- elif args.type == 'vmz':
- print('Loading VMZ ...')
- from vmz34 import r2plus1d_34
- model = r2plus1d_34(pretrained_path=args.vmz_model_path, pretrained_num_classes=487)
- model = model.cuda()
- elif args.type == 's3d':
-        # we reuse a single copy of S3D instead of duplicating one for feature extraction.
- from mmpt.processors.models.s3dg import S3D
- model = S3D('pretrained_models/s3d_dict.npy', 512)
- model.load_state_dict(th.load('pretrained_models/s3d_howto100m.pth'))
- model = model.cuda()
-
- elif args.type == '3d':
- print('Loading 3D-ResneXt-101 ...')
- from videocnn.models import resnext
- model = resnext.resnet101(
- num_classes=400,
- shortcut_type='B',
- cardinality=32,
- sample_size=112,
- sample_duration=16,
- last_fc=False)
- model = model.cuda()
- model_data = th.load(args.resnext101_model_path)
- model.load_state_dict(model_data)
- elif args.type == 'vae':
- from openaivae import OpenAIParallelDiscreteVAE
- model = OpenAIParallelDiscreteVAE()
- model = model.cuda()
- else:
- raise ValueError("model not supported yet.")
-
- model.eval()
- print('loaded')
- return model
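-
-
-if __name__ == '__main__':
-    # Editor's illustrative usage sketch (not part of the original file). The
-    # attribute name mirrors what get_model() reads above; running this downloads
-    # the torchvision ResNet-152 weights and requires a CUDA device.
-    from argparse import Namespace
-
-    example_args = Namespace(type='2d')  # lightest option; the others need extra checkpoints
-    extractor = get_model(example_args)
-    print(extractor)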
diff --git a/spaces/ashercn97/AsherTesting/modules/relative_imports.py b/spaces/ashercn97/AsherTesting/modules/relative_imports.py
deleted file mode 100644
index 3c0eb56b77c6cb6b38fdbdeebabe9ad3b8d91b97..0000000000000000000000000000000000000000
--- a/spaces/ashercn97/AsherTesting/modules/relative_imports.py
+++ /dev/null
@@ -1,13 +0,0 @@
-import sys
-from pathlib import Path
-
-
-class RelativeImport:
- def __init__(self, path):
- self.import_path = Path(path)
-
- def __enter__(self):
- sys.path.insert(0, str(self.import_path))
-
- def __exit__(self, exc_type, exc_value, traceback):
- sys.path.remove(str(self.import_path))
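-
-
-if __name__ == '__main__':
-    # Editor's illustrative sketch (not part of the original file): the directory is
-    # on sys.path only inside the block. 'hypothetical_extension_dir' is a made-up path.
-    probe = 'hypothetical_extension_dir'
-    with RelativeImport(probe):
-        assert sys.path[0] == str(Path(probe))
-    assert str(Path(probe)) not in sys.path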
diff --git "a/spaces/ashhadahsan/summarizer-space/pages/1_\360\237\223\210_predict.py" "b/spaces/ashhadahsan/summarizer-space/pages/1_\360\237\223\210_predict.py"
deleted file mode 100644
index 3065be26fb735dbaa66fed82a4a0491a54c913d4..0000000000000000000000000000000000000000
--- "a/spaces/ashhadahsan/summarizer-space/pages/1_\360\237\223\210_predict.py"
+++ /dev/null
@@ -1,560 +0,0 @@
-import streamlit as st
-import pandas as pd
-from transformers import BertTokenizer, TFBertForSequenceClassification
-from transformers import TextClassificationPipeline
-from transformers import pipeline
-from stqdm import stqdm
-from simplet5 import SimpleT5
-from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
-import logging
-from datasets import load_dataset
-import gc
-from typing import List
-from collections import OrderedDict
-from datetime import datetime
-
-tokenizer_kwargs = dict(max_length=128, truncation=True, padding=True)
-
-
-flan_t5_kwargs = dict(repetition_penalty=1.2)
-SLEEP = 2
-
-
-date = datetime.now().strftime(r"%Y-%m-%d")
-
-
-def clean_memory(obj: TextClassificationPipeline):
- del obj
- gc.collect()
-
-
-@st.cache_data
-def get_all_cats():
- data = load_dataset("ashhadahsan/amazon_theme")
- data = data["train"].to_pandas()
- labels = [x for x in list(set(data.iloc[:, 1].values.tolist())) if x != "Unknown"]
- del data
- return labels
-
-
-@st.cache_data
-def get_all_subcats():
- data = load_dataset("ashhadahsan/amazon_subtheme")
- data = data["train"].to_pandas()
- labels = [x for x in list(set(data.iloc[:, 1].values.tolist())) if x != "Unknown"]
- del data
- return labels
-
-
-@st.cache_resource
-def load_zero_shot_classification_large():
- classifier_zero = pipeline(
- "zero-shot-classification",
- model="facebook/bart-large-mnli",
- )
- return classifier_zero
-
-
-def assign_label_zeroshot(zero, to: str, old: List):
- assigned = zero(to, old)
- assigned_dict = dict(zip(assigned["labels"], assigned["scores"]))
- od = OrderedDict(sorted(assigned_dict.items(), key=lambda x: x[1], reverse=True))
- print(list(od.keys())[0])
- print(type(list(od.keys())[0]))
-
- return list(od.keys())[0]
-
-
-def assign_labels_flant5(pipe, what: str, to: str, old: List):
- old = ", ".join(old)
-
- return pipe(
-        f"""Generate a new one-word {what} for this summary of a review:
-        {to}
-        already assigned {what}s are: {old}
-        {what}:"""
- )[0]["generated_text"]
-
-
-@st.cache_resource
-def load_t5() -> (AutoModelForSeq2SeqLM, AutoTokenizer):
- model = AutoModelForSeq2SeqLM.from_pretrained(
- "t5-base",
- )
-
- tokenizer = AutoTokenizer.from_pretrained(
- pretrained_model_name_or_path="t5-base",
- )
- return model, tokenizer
-
-
-@st.cache_resource
-def load_flan_t5_large():
- return pipeline(
- task="text2text-generation",
- model="google/flan-t5-large",
- model_kwargs=flan_t5_kwargs,
- )
-
-
-@st.cache_resource
-def summarizationModel():
- return pipeline(
- task="summarization",
- model="my_awesome_sum/",
- )
-
-
-@st.cache_resource
-def convert_df(df: pd.DataFrame):
- return df.to_csv(index=False).encode("utf-8")
-
-
-def load_one_line_summarizer(model):
- return model.load_model(
- "t5",
- "snrspeaks/t5-one-line-summary",
- )
-
-
-@st.cache_resource
-def classify_theme() -> TextClassificationPipeline:
- tokenizer = BertTokenizer.from_pretrained(
- "ashhadahsan/amazon-theme-bert-base-finetuned",
- )
- model = TFBertForSequenceClassification.from_pretrained(
- "ashhadahsan/amazon-theme-bert-base-finetuned",
- )
- pipeline = TextClassificationPipeline(
- model=model,
- tokenizer=tokenizer,
- **tokenizer_kwargs,
- )
- return pipeline
-
-
-@st.cache_resource
-def classify_sub_theme() -> TextClassificationPipeline:
- tokenizer = BertTokenizer.from_pretrained(
- "ashhadahsan/amazon-subtheme-bert-base-finetuned",
- )
- model = TFBertForSequenceClassification.from_pretrained(
- "ashhadahsan/amazon-subtheme-bert-base-finetuned",
- )
- pipeline = TextClassificationPipeline(
- model=model, tokenizer=tokenizer, **tokenizer_kwargs
- )
- return pipeline
-
-
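-# Editor's sketch (not part of the original app): the low-confidence fallback used
-# in the processing loops below. If the fine-tuned BERT pipeline scores under the
-# chosen threshold, a new label is proposed (via zero-shot or Flan-T5) from a
-# one-line summary of the review; otherwise the predicted label is kept.
-# 'propose_new_label' is a hypothetical stand-in for assign_label_zeroshot /
-# assign_labels_flant5.
-def _sketch_label_with_fallback(pipe, propose_new_label, text, threshold=0.6):
-    prediction = pipe(text)[0]
-    if round(prediction["score"], 2) <= threshold:
-        return propose_new_label(text)
-    return prediction["label"]
-
-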
-st.set_page_config(layout="wide", page_title="Amazon Review | Summarizer")
-st.title(body="Amazon Review Summarizer")
-
-uploaded_file = st.file_uploader(label="Choose a file", type=["xlsx", "xls", "csv"])
-
-
-summarizer_option = st.selectbox(
- label="Select Summarizer",
- options=("Custom trained on the dataset", "t5-base", "t5-one-line-summary"),
-)
-col1, col2, col3 = st.columns(spec=[1, 1, 1])
-
-with col1:
-    summary_yes = st.checkbox(label="Summarization", value=False)
-
-with col2:
- classification = st.checkbox(label="Classify Category", value=True)
-
-with col3:
- sub_theme = st.checkbox(label="Sub theme classification", value=True)
-
-treshold = st.slider(
- label="Model Confidence value",
- min_value=0.1,
- max_value=0.8,
- step=0.1,
- value=0.6,
-    help="If the model's confidence score is below this number, a new label is assigned (0.6 means 60 percent, and so on)",
-)
-
-ps = st.empty()
-
-if st.button("Process", type="primary"):
- themes = get_all_cats()
- subthemes = get_all_subcats()
-
- oneline = SimpleT5()
- load_one_line_summarizer(model=oneline)
- zeroline = load_zero_shot_classification_large()
- bot = load_flan_t5_large()
-
- cancel_button = st.empty()
- cancel_button2 = st.empty()
- cancel_button3 = st.empty()
- if uploaded_file is not None:
- if uploaded_file.name.split(".")[-1] in ["xls", "xlsx"]:
- df = pd.read_excel(io=uploaded_file, engine="openpyxl")
-        if uploaded_file.name.split(".")[-1] in ["csv"]:
- df = pd.read_csv(filepath_or_buffer=uploaded_file)
- columns = df.columns.values.tolist()
- columns = [x.lower() for x in columns]
- df.columns = columns
- print(summarizer_option)
- outputdf = pd.DataFrame()
- try:
- text = df["text"].values.tolist()
- outputdf["text"] = text
- if summarizer_option == "Custom trained on the dataset":
- if summary_yes:
- model = summarizationModel()
-
- progress_text = "Summarization in progress. Please wait."
- summary = []
-
- for x in stqdm(iterable=range(len(text))):
- if cancel_button.button("Cancel", key=x):
- del model
- break
- try:
- summary.append(
- model(
- f"summarize: {text[x]}",
- max_length=50,
- early_stopping=True,
- )[0]["summary_text"]
- )
- except:
- pass
- outputdf["summary"] = summary
- del model
- if classification:
- themePipe = classify_theme()
- classes = []
- classesUnlabel = []
- classesUnlabelZero = []
- for x in stqdm(
- iterable=text,
- desc="Assigning Themes ...",
- total=len(text),
- colour="#BF1A1A",
- ):
- output = themePipe(x)[0]["label"]
- classes.append(output)
- score = round(number=themePipe(x)[0]["score"], ndigits=2)
- if score <= treshold:
- onelineoutput = oneline.predict(source_text=x)[0]
-
- print("hit")
- classesUnlabel.append(
- assign_labels_flant5(
- bot,
- what="theme",
- to=onelineoutput,
- old=themes,
- )
- )
- classesUnlabelZero.append(
- assign_label_zeroshot(
- zero=zeroline, to=onelineoutput, old=themes
- )
- )
-
- else:
- classesUnlabel.append("")
- classesUnlabelZero.append("")
-
- outputdf["Review Theme"] = classes
- outputdf["Review Theme-issue-new"] = classesUnlabel
-                    outputdf["Review Theme-issue-zero"] = classesUnlabelZero
- clean_memory(themePipe)
- if sub_theme:
- subThemePipe = classify_sub_theme()
- classes = []
- classesUnlabel = []
- classesUnlabelZero = []
- for x in stqdm(
- iterable=text,
- desc="Assigning Subthemes ...",
- total=len(text),
- colour="green",
- ):
- output = subThemePipe(x)[0]["label"]
- classes.append(output)
- score = round(subThemePipe(x)[0]["score"], 2)
- if score <= treshold:
- onelineoutput = oneline.predict(x)[0]
-
- print("hit")
- classesUnlabel.append(
- assign_labels_flant5(
- bot,
- what="subtheme",
- to=onelineoutput,
- old=subthemes,
- )
- )
- classesUnlabelZero.append(
- assign_label_zeroshot(
- zero=zeroline,
- to=onelineoutput,
- old=subthemes,
- )
- )
-
- else:
- classesUnlabel.append("")
- classesUnlabelZero.append("")
-
- outputdf["Review SubTheme"] = classes
- outputdf["Review SubTheme-issue-new"] = classesUnlabel
- outputdf["Review SubTheme-issue-zero"] = classesUnlabelZero
-
- clean_memory(subThemePipe)
-
- csv = convert_df(outputdf)
- st.download_button(
- label="Download output as CSV",
- data=csv,
- file_name=f"{summarizer_option}_{date}_df.csv",
- mime="text/csv",
- use_container_width=True,
- )
- if summarizer_option == "t5-base":
- if summary_yes:
- model, tokenizer = load_t5()
- summary = []
- for x in stqdm(range(len(text))):
- if cancel_button2.button("Cancel", key=x):
- del model, tokenizer
- break
- tokens_input = tokenizer.encode(
- "summarize: " + text[x],
- return_tensors="pt",
- max_length=tokenizer.model_max_length,
- truncation=True,
- )
- summary_ids = model.generate(
- tokens_input,
- min_length=80,
- max_length=150,
- length_penalty=20,
- num_beams=2,
- )
- summary_gen = tokenizer.decode(
- summary_ids[0], skip_special_tokens=True
- )
- summary.append(summary_gen)
- del model, tokenizer
- outputdf["summary"] = summary
-
- if classification:
- themePipe = classify_theme()
- classes = []
- classesUnlabel = []
- classesUnlabelZero = []
- for x in stqdm(
- text, desc="Assigning Themes ...", total=len(text), colour="red"
- ):
- output = themePipe(x)[0]["label"]
- classes.append(output)
- score = round(themePipe(x)[0]["score"], 2)
- if score <= treshold:
- onelineoutput = oneline.predict(x)[0]
-
- print("hit")
-
- classesUnlabel.append(
- assign_labels_flant5(
- bot,
- what="theme",
- to=onelineoutput,
- old=themes,
- )
- )
- classesUnlabelZero.append(
- assign_label_zeroshot(
- zero=zeroline, to=onelineoutput, old=themes
- )
- )
-
- else:
- classesUnlabel.append("")
- classesUnlabelZero.append("")
- outputdf["Review Theme"] = classes
- outputdf["Review Theme-issue-new"] = classesUnlabel
-                    outputdf["Review Theme-issue-zero"] = classesUnlabelZero
- clean_memory(themePipe)
-
- if sub_theme:
- subThemePipe = classify_sub_theme()
-                    classes = []
-                    classesUnlabel = []
-                    classesUnlabelZero = []
-
- for x in stqdm(
- text,
- desc="Assigning Subthemes ...",
- total=len(text),
- colour="green",
- ):
- output = subThemePipe(x)[0]["label"]
- classes.append(output)
- score = round(subThemePipe(x)[0]["score"], 2)
- if score <= treshold:
- onelineoutput = oneline.predict(x)[0]
-
- print("hit")
- classesUnlabel.append(
- assign_labels_flant5(
- bot,
- what="subtheme",
- to=onelineoutput,
- old=subthemes,
- )
- )
- classesUnlabelZero.append(
- assign_label_zeroshot(
- zero=zeroline,
- to=onelineoutput,
- old=subthemes,
- )
- )
-
- else:
- classesUnlabel.append("")
- classesUnlabelZero.append("")
-
- outputdf["Review SubTheme"] = classes
- outputdf["Review SubTheme-issue-new"] = classesUnlabel
- outputdf["Review SubTheme-issue-zero"] = classesUnlabelZero
-
- clean_memory(subThemePipe)
-
- csv = convert_df(outputdf)
- st.download_button(
- label="Download output as CSV",
- data=csv,
- file_name=f"{summarizer_option}_{date}_df.csv",
- mime="text/csv",
- use_container_width=True,
- )
-
- if summarizer_option == "t5-one-line-summary":
- if summary_yes:
- model = SimpleT5()
- load_one_line_summarizer(model=model)
-
- summary = []
- for x in stqdm(iterable=range(len(text))):
- if cancel_button3.button(label="Cancel", key=x):
- del model
- break
- try:
- summary.append(model.predict(source_text=text[x])[0])
- except:
- pass
- outputdf["summary"] = summary
- del model
-
- if classification:
- themePipe = classify_theme()
- classes = []
- classesUnlabel = []
- classesUnlabelZero = []
- for x in stqdm(
- iterable=text,
- desc="Assigning Themes ...",
- total=len(text),
- colour="red",
- ):
- output = themePipe(x)[0]["label"]
- classes.append(output)
- score = round(number=themePipe(x)[0]["score"], ndigits=2)
- if score <= treshold:
- onelineoutput = oneline.predict(x)[0]
-
- print("hit")
- classesUnlabel.append(
- assign_labels_flant5(
- bot,
- what="theme",
- to=onelineoutput,
- old=themes,
- )
- )
- classesUnlabelZero.append(
- assign_label_zeroshot(
- zero=zeroline, to=onelineoutput, old=themes
- )
- )
-
- else:
- classesUnlabel.append("")
- classesUnlabelZero.append("")
- outputdf["Review Theme"] = classes
- outputdf["Review Theme-issue-new"] = classesUnlabel
-                    outputdf["Review Theme-issue-zero"] = classesUnlabelZero
-
- if sub_theme:
- subThemePipe = classify_sub_theme()
-                    classes = []
-                    classesUnlabel = []
-                    classesUnlabelZero = []
-
- for x in stqdm(
- iterable=text,
- desc="Assigning Subthemes ...",
- total=len(text),
- colour="green",
- ):
- output = subThemePipe(x)[0]["label"]
- classes.append(output)
- score = round(subThemePipe(x)[0]["score"], 2)
- if score <= treshold:
- print("hit")
- onelineoutput = oneline.predict(source_text=x)[0]
-
- classesUnlabel.append(
- assign_labels_flant5(
- bot,
- what="subtheme",
- to=onelineoutput,
- old=subthemes,
- )
- )
- classesUnlabelZero.append(
- assign_label_zeroshot(
- zero=zeroline,
- to=onelineoutput,
- old=subthemes,
- )
- )
-
- else:
- classesUnlabel.append("")
- classesUnlabelZero.append("")
-
- outputdf["Review SubTheme"] = classes
- outputdf["Review SubTheme-issue-new"] = classesUnlabel
- outputdf["Review SubTheme-issue-zero"] = classesUnlabelZero
-
- clean_memory(subThemePipe)
-
- csv = convert_df(outputdf)
- st.download_button(
- label="Download output as CSV",
- data=csv,
- file_name=f"{summarizer_option}_{date}_df.csv",
- mime="text/csv",
- use_container_width=True,
- )
-
- except KeyError as e:
- st.error(
- body="Please Make sure that your data must have a column named text",
- icon="🚨",
- )
- st.info(body="Text column must have amazon reviews", icon="ℹ️")
- st.exception(e)
-
- except BaseException as e:
-        logging.exception(msg="An exception occurred")
diff --git a/spaces/attention-refocusing/Attention-refocusing/dataset/tsv_dataset.py b/spaces/attention-refocusing/Attention-refocusing/dataset/tsv_dataset.py
deleted file mode 100644
index dc2db59faf1254970b35d2fc8dec78afde4f6918..0000000000000000000000000000000000000000
--- a/spaces/attention-refocusing/Attention-refocusing/dataset/tsv_dataset.py
+++ /dev/null
@@ -1,326 +0,0 @@
-import torch
-import json
-from collections import defaultdict
-from PIL import Image, ImageDraw
-from copy import deepcopy
-import os
-import torchvision.transforms as transforms
-import torchvision
-from .base_dataset import BaseDataset, check_filenames_in_zipdata, recalculate_box_and_verify_if_valid
-from io import BytesIO
-import random
-
-from .tsv import TSVFile
-
-from io import BytesIO
-import base64
-from PIL import Image
-import numpy as np
-
-
-def decode_base64_to_pillow(image_b64):
- return Image.open(BytesIO(base64.b64decode(image_b64))).convert('RGB')
-
-def decode_tensor_from_string(arr_str, use_tensor=True):
- arr = np.frombuffer(base64.b64decode(arr_str), dtype='float32')
- if use_tensor:
- arr = torch.from_numpy(arr)
- return arr
-
-def decode_item(item):
- item = json.loads(item)
- item['image'] = decode_base64_to_pillow(item['image'])
-
- for anno in item['annos']:
- anno['image_embedding_before'] = decode_tensor_from_string(anno['image_embedding_before'])
- anno['text_embedding_before'] = decode_tensor_from_string(anno['text_embedding_before'])
- anno['image_embedding_after'] = decode_tensor_from_string(anno['image_embedding_after'])
- anno['text_embedding_after'] = decode_tensor_from_string(anno['text_embedding_after'])
- return item
-
-def check_unique(images, fields):
- for field in fields:
- temp_list = []
- for img_info in images:
- temp_list.append(img_info[field])
- assert len(set(temp_list)) == len(temp_list), field
-
-def clean_data(data):
- for data_info in data:
- data_info.pop("original_img_id", None)
- data_info.pop("original_id", None)
- data_info.pop("sentence_id", None) # sentence id for each image (multiple sentences for one image)
- data_info.pop("dataset_name", None)
- data_info.pop("data_source", None)
- data_info["data_id"] = data_info.pop("id")
-
-
-def clean_annotations(annotations):
- for anno_info in annotations:
- anno_info.pop("iscrowd", None) # I have checked that all 0 for flickr, vg, coco
- anno_info.pop("category_id", None) # I have checked that all 1 for flickr vg. This is not always 1 for coco, but I do not think we need this annotation
- anno_info.pop("area", None)
- # anno_info.pop("id", None)
- anno_info["data_id"] = anno_info.pop("image_id")
-
-
-def draw_box(img, boxes):
- draw = ImageDraw.Draw(img)
- for box in boxes:
- draw.rectangle([box[0], box[1], box[2], box[3]], outline ="red", width=2) # x0 y0 x1 y1
- return img
-
-
-def xyhw2xyxy(box):
- x0, y0, w, h = box
- return [ x0, y0, x0+w, y0+h ]
-
-
-def make_a_sentence(obj_names, clean=False):
-
- if clean:
- obj_names = [ name[:-6] if ("-other" in name) else name for name in obj_names]
-
- caption = ""
- tokens_positive = []
- for obj_name in obj_names:
- start_len = len(caption)
- caption += obj_name
- end_len = len(caption)
- caption += ", "
- tokens_positive.append(
- [[start_len, end_len]] # in real caption, positive tokens can be disjoint, thus using list of list
- )
- caption = caption[:-2] # remove last ", "
-
- return caption #, tokens_positive
-
-
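-# Editor's sketch (not from the original file): the detection caption format that
-# make_a_sentence builds above.
-assert make_a_sentence(['dog', 'person-other'], clean=True) == 'dog, person'
-
-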
-def mask_for_random_drop_text_or_image_feature(masks, random_drop_embedding):
- """
-    The input masks indicate how many grounding tokens are valid for this image,
-    e.g., 1,1,1,1,0,0,0,0,0,0...
-
-    If random_drop_embedding='both', we randomly drop either the image or the
-    text feature for each token, but we always make sure at least one feature
-    is kept. In other words, the following masks are not valid
-    (because the second object would have no feature at all):
-    image: 1,0,1,1,0,0,0,0,0
-    text:  1,0,0,0,0,0,0,0,0
-
-    If random_drop_embedding='image', we randomly drop the image feature
-    and always keep the text one.
-
- """
- N = masks.shape[0]
-
- if random_drop_embedding=='both':
- temp_mask = torch.ones(2,N)
- for i in range(N):
- if random.uniform(0, 1) < 0.5: # else keep both features
- idx = random.sample([0,1], 1)[0] # randomly choose to drop image or text feature
- temp_mask[idx,i] = 0
- image_masks = temp_mask[0]*masks
- text_masks = temp_mask[1]*masks
-
- if random_drop_embedding=='image':
- image_masks = masks*(torch.rand(N)>0.5)*1
- text_masks = masks
-
- return image_masks, text_masks
-
-
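-# Editor's sketch (not from the original file): with random_drop_embedding='both',
-# the helper above never drops the image and text features of the same token at
-# the same time, so every valid token keeps at least one feature.
-if __name__ == '__main__':
-    demo_masks = torch.ones(30)
-    demo_image_masks, demo_text_masks = mask_for_random_drop_text_or_image_feature(demo_masks, 'both')
-    assert torch.all(demo_image_masks + demo_text_masks >= 1)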
-
-
-
-def project(x, projection_matrix):
- """
- x (Batch*768) should be the penultimate feature of CLIP (before projection)
- projection_matrix (768*768) is the CLIP projection matrix, which should be weight.data of Linear layer
- defined in CLIP (out_dim, in_dim), thus we need to apply transpose below.
-    this function will return the CLIP feature (without normalization)
- """
- return x@torch.transpose(projection_matrix, 0, 1)
-
-
-def inv_project(y, projection_matrix):
- """
- y (Batch*768) should be the CLIP feature (after projection)
- projection_matrix (768*768) is the CLIP projection matrix, which should be weight.data of Linear layer
- defined in CLIP (out_dim, in_dim).
- this function will return the CLIP penultimate feature.
-
- Note: to make sure getting the correct penultimate feature, the input y should not be normalized.
- If it is normalized, then the result will be scaled by CLIP feature norm, which is unknown.
- """
- return y@torch.transpose(torch.linalg.inv(projection_matrix), 0, 1)
-
-
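-# Editor's sketch (not from the original file): project() and inv_project() above
-# are inverses of each other for an invertible projection matrix; a small random
-# matrix stands in for the real 768x768 CLIP projection.
-if __name__ == '__main__':
-    demo_W = torch.randn(8, 8, dtype=torch.float64)
-    demo_x = torch.randn(4, 8, dtype=torch.float64)
-    assert torch.allclose(inv_project(project(demo_x, demo_W), demo_W), demo_x, atol=1e-6)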
-
-
-class TSVDataset(BaseDataset):
- def __init__(self,
- tsv_path,
- which_embedder='clip',
- which_layer=['after','after'], # text and image
- prob_use_caption=1,
- random_drop_embedding='none',
- image_size=256,
- min_box_size=0.01,
- max_boxes_per_data=8,
- max_images=None, # set as 30K used to eval
- random_crop = False,
- random_flip = True,
- ):
- image_root = "a placeholder path as we are using tsv here"
- super().__init__(image_root, random_crop, random_flip, image_size)
- self.tsv_path = tsv_path
- self.which_embedder = which_embedder
- self.prob_use_caption = prob_use_caption
- self.random_drop_embedding = random_drop_embedding
- self.min_box_size = min_box_size
- self.max_boxes_per_data = max_boxes_per_data
- self.max_images = max_images
-
- assert which_layer in [ ['after','after'], ['before','after_renorm'], ['before','after_reproject'] ]
- assert random_drop_embedding in ['none', 'both', 'image']
- self.which_layer_text = which_layer[0]
- self.which_layer_image = which_layer[1]
-
- #self.projection_matrix = torch.load(os.path.join(os.path.dirname(__file__), 'projection_matrix') )
- self.projection_matrix = torch.load('projection_matrix.pth')
-
- # Load tsv data
- self.tsv_file = TSVFile(self.tsv_path)
-
-
- # Load preprocessed name embedding
- if which_embedder == 'bert':
- self.embedding_len = 1280
- elif which_embedder == 'clip':
- self.embedding_len = 768
- else:
- assert False
-
- def total_images(self):
- return len(self)
-
- def get_item_from_tsv(self, index):
- _, item = self.tsv_file[index]
- item = decode_item(item)
- return item
-
-
- def mapping(self, image_embedding):
- if self.which_layer_image == 'after':
- # both use CLIP aligned feature
- return image_embedding
- elif self.which_layer_image == 'after_renorm':
-            # text uses the 'before' feature, but the image uses the 'after' projection, renormalized to 28.7
- return image_embedding*28.7
- elif self.which_layer_image == 'after_reproject':
- image_embedding = project( image_embedding.unsqueeze(0), self.projection_matrix.T )
- image_embedding = image_embedding.squeeze(0)
- image_embedding = image_embedding / image_embedding.norm()
- image_embedding = image_embedding * 28.7
- return image_embedding
-
-
-
- def __getitem__(self, index):
- if self.max_boxes_per_data > 99:
-            assert False, "Are you sure you want to set such a large number of boxes?"
-
- raw_item = self.get_item_from_tsv(index)
- is_det = raw_item.get('is_det', False) # if it is from detection (such as o365), then we will make a caption
-
- out = {}
-
- # -------------------- id and image ------------------- #
- out['id'] = raw_item['data_id']
- image = raw_item['image']
- image_tensor, trans_info = self.transform_image(image)
- out["image"] = image_tensor
-
-
-
- # -------------------- grounding token ------------------- #
- annos = raw_item['annos']
-
- areas = []
- all_boxes = []
- all_masks = []
- all_text_embeddings = []
- all_image_embeddings = []
- if is_det:
- all_category_names = []
-
- text_embedding_name = 'text_embedding_before' if self.which_layer_text == 'before' else 'text_embedding_after'
- image_embedding_name = 'image_embedding_after'
-
- for anno in annos:
- x, y, w, h = anno['bbox']
- valid, (x0, y0, x1, y1) = recalculate_box_and_verify_if_valid(x, y, w, h, trans_info, self.image_size, self.min_box_size)
-
- if valid:
- areas.append( (x1-x0)*(y1-y0) )
- all_boxes.append( torch.tensor([x0,y0,x1,y1]) / self.image_size ) # scale to 0-1
- all_masks.append(1)
- all_text_embeddings.append(anno[text_embedding_name])
- all_image_embeddings.append( self.mapping(anno[image_embedding_name]) )
- if is_det:
- all_category_names.append(anno["category_name"])
-
-
- wanted_idxs = torch.tensor(areas).sort(descending=True)[1]
- wanted_idxs = wanted_idxs[0:self.max_boxes_per_data]
-
- boxes = torch.zeros(self.max_boxes_per_data, 4)
- masks = torch.zeros(self.max_boxes_per_data)
- text_embeddings = torch.zeros(self.max_boxes_per_data, self.embedding_len)
- image_embeddings = torch.zeros(self.max_boxes_per_data, self.embedding_len)
- if is_det:
- category_names = []
- for i, idx in enumerate(wanted_idxs):
- boxes[i] = all_boxes[idx]
- masks[i] = all_masks[idx]
- text_embeddings[i] = all_text_embeddings[idx]
- image_embeddings[i] = all_image_embeddings[idx]
- if is_det:
- category_names.append(all_category_names[idx])
-
- if self.random_drop_embedding != 'none':
- image_masks, text_masks = mask_for_random_drop_text_or_image_feature(masks, self.random_drop_embedding)
- else:
- image_masks = masks
- text_masks = masks
-
-
- out["boxes"] = boxes
- out["masks"] = masks
- out["image_masks"] = image_masks
- out["text_masks"] = text_masks
- out["text_embeddings"] = text_embeddings
- out["image_embeddings"] = image_embeddings
-
-
-
- # -------------------- caption ------------------- #
- if random.uniform(0, 1) < self.prob_use_caption:
- if is_det:
- out["caption"] = make_a_sentence(category_names)
- else:
- out["caption"] = raw_item["caption"]
- else:
- out["caption"] = ""
-
- return out
-
-
-
- def __len__(self):
- return len(self.tsv_file)
-
-
diff --git a/spaces/avirathtibrewala/YTToText/README.md b/spaces/avirathtibrewala/YTToText/README.md
deleted file mode 100644
index e34dd85173150d561a6e143487387a16558e5a62..0000000000000000000000000000000000000000
--- a/spaces/avirathtibrewala/YTToText/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: YTToText
-emoji: 🚀
-colorFrom: indigo
-colorTo: blue
-sdk: gradio
-sdk_version: 3.13.0
-app_file: app.py
-pinned: false
-license: unknown
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/awacke1/ArtStyleLineDrawing/app.py b/spaces/awacke1/ArtStyleLineDrawing/app.py
deleted file mode 100644
index 2fa07247e19bb4ec8f1317443122703951315286..0000000000000000000000000000000000000000
--- a/spaces/awacke1/ArtStyleLineDrawing/app.py
+++ /dev/null
@@ -1,115 +0,0 @@
-import numpy as np
-import torch
-import torch.nn as nn
-import gradio as gr
-from PIL import Image
-import torchvision.transforms as transforms
-norm_layer = nn.InstanceNorm2d
-
-class ResidualBlock(nn.Module):
- def __init__(self, in_features):
- super(ResidualBlock, self).__init__()
- conv_block = [ nn.ReflectionPad2d(1),
- nn.Conv2d(in_features, in_features, 3),
- norm_layer(in_features),
- nn.ReLU(inplace=True),
- nn.ReflectionPad2d(1),
- nn.Conv2d(in_features, in_features, 3),
- norm_layer(in_features)
- ]
- self.conv_block = nn.Sequential(*conv_block)
- def forward(self, x):
- return x + self.conv_block(x)
-
-
-class Generator(nn.Module):
- def __init__(self, input_nc, output_nc, n_residual_blocks=9, sigmoid=True):
- super(Generator, self).__init__()
- model0 = [ nn.ReflectionPad2d(3),
- nn.Conv2d(input_nc, 64, 7),
- norm_layer(64),
- nn.ReLU(inplace=True) ]
- self.model0 = nn.Sequential(*model0)
- model1 = []
- in_features = 64
- out_features = in_features*2
- for _ in range(2):
- model1 += [ nn.Conv2d(in_features, out_features, 3, stride=2, padding=1),
- norm_layer(out_features),
- nn.ReLU(inplace=True) ]
- in_features = out_features
- out_features = in_features*2
- self.model1 = nn.Sequential(*model1)
- model2 = []
- for _ in range(n_residual_blocks):
- model2 += [ResidualBlock(in_features)]
- self.model2 = nn.Sequential(*model2)
- model3 = []
- out_features = in_features//2
- for _ in range(2):
- model3 += [ nn.ConvTranspose2d(in_features, out_features, 3, stride=2, padding=1, output_padding=1),
- norm_layer(out_features),
- nn.ReLU(inplace=True) ]
- in_features = out_features
- out_features = in_features//2
- self.model3 = nn.Sequential(*model3)
- model4 = [ nn.ReflectionPad2d(3),
- nn.Conv2d(64, output_nc, 7)]
- if sigmoid:
- model4 += [nn.Sigmoid()]
- self.model4 = nn.Sequential(*model4)
-
- def forward(self, x, cond=None):
- out = self.model0(x)
- out = self.model1(out)
- out = self.model2(out)
- out = self.model3(out)
- out = self.model4(out)
- return out
-
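-# Editor's sketch (not part of the original app): the generator maps a 3-channel
-# image tensor to a 1-channel line drawing of the same spatial size.
-if __name__ == '__main__':
-    with torch.no_grad():
-        sketch_out = Generator(3, 1, 3)(torch.rand(1, 3, 256, 256))
-    assert sketch_out.shape == (1, 1, 256, 256)
-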
-model1 = Generator(3, 1, 3)
-model1.load_state_dict(torch.load('model.pth', map_location=torch.device('cpu')))
-model1.eval()
-
-model2 = Generator(3, 1, 3)
-model2.load_state_dict(torch.load('model2.pth', map_location=torch.device('cpu')))
-model2.eval()
-
-def predict(input_img, ver):
- input_img = Image.open(input_img)
- transform = transforms.Compose([transforms.Resize(256, Image.BICUBIC), transforms.ToTensor()])
- input_img = transform(input_img)
- input_img = torch.unsqueeze(input_img, 0)
-
- drawing = 0
- with torch.no_grad():
- if ver == 'Simple Lines':
- drawing = model2(input_img)[0].detach()
- else:
- drawing = model1(input_img)[0].detach()
-
- drawing = transforms.ToPILImage()(drawing)
- return drawing
-
-title="Art Style Line Drawings - Complex and Simple Portraits and Landscapes"
-description="Art Style Line Drawings 🦀🦁🦂🦃🦄🦅🦆🦇🦈🦉🦊🦋🦌🦍🦎🦏 🦐🦑🦒🦓🦔🦕🦖🦗🦘🦙🦚🦛🦜🦝🦞🦟🦠🦡🦢🦣🦤🦥🦦🦧🦨🦩🦪🦫🦬🦭🦮"
-# article = ""
-examples=[
-['QSHYNkOyhArcsgDrSFqq_15.625x.jpg', 'Simple Lines'],
-['Xenomporh-art-scale-6_00x-gigapixel.png', 'Simple Lines'],
-['Alien Chairs-art-scale-6_00x-gigapixel.png', 'Complex Lines'],
-['Brain Coral B-gigapixel-art-scale-6_00x.jpg', 'Simple Lines'],
-['Brain Coral-gigapixel-art-scale-6_00x.jpg', 'Complex Lines'],
-['Dark Ritual Wisp Loop-art-scale-6_00x-gigapixel.png', 'Simple Lines'],
-['Dungeons and Dragons Cartoon-art-scale-6_00x-gigapixel.png', 'Complex Lines'],
-['Fantasy Art 2-art-scale-6_00x-gigapixel.png', 'Simple Lines']
-]
-
-
-iface = gr.Interface(predict, [gr.inputs.Image(type='filepath'),
- gr.inputs.Radio(['Complex Lines','Simple Lines'], type="value", default='Simple Lines', label='version')],
- gr.outputs.Image(type="pil"), title=title,description=description,examples=examples)
-
-iface.launch()
-
diff --git a/spaces/awacke1/Embedding-Iframe-HTML5-to-Gradio/style.css b/spaces/awacke1/Embedding-Iframe-HTML5-to-Gradio/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Embedding-Iframe-HTML5-to-Gradio/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/awacke1/Positive.Reframing.Organization.Culture/README.md b/spaces/awacke1/Positive.Reframing.Organization.Culture/README.md
deleted file mode 100644
index 7d5f611de3d72bda0d790b492bbd78ac10836c8d..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Positive.Reframing.Organization.Culture/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Positive.Reframing.Organization.Culture
-emoji: 💻
-colorFrom: purple
-colorTo: gray
-sdk: gradio
-sdk_version: 3.9.1
-app_file: app.py
-pinned: false
-duplicated_from: dominguesm/positive-reframing-en
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/baixing/hackathon_test/README.md b/spaces/baixing/hackathon_test/README.md
deleted file mode 100644
index 2abf8756d2642d8eff00b253e9a35085e8180ee5..0000000000000000000000000000000000000000
--- a/spaces/baixing/hackathon_test/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Hackathon Test
-emoji: 🌖
-colorFrom: indigo
-colorTo: red
-sdk: gradio
-sdk_version: 3.20.1
-app_file: app.py
-pinned: false
-license: cc-by-4.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/geometries/LightningStrike.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/geometries/LightningStrike.js
deleted file mode 100644
index 79ef155cc606ad8c944a25f5ca3d6fe6cedeb42e..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/js/geometries/LightningStrike.js
+++ /dev/null
@@ -1,1005 +0,0 @@
-/**
- * @author yomboprime https://github.com/yomboprime
- *
- * @fileoverview LightningStrike object for creating lightning strikes and voltaic arcs.
- *
- *
- * Usage
- *
- * var myRay = new THREE.LightningStrike( paramsObject );
- * var myRayMesh = new THREE.Mesh( myRay, myMaterial );
- * scene.add( myRayMesh );
- * ...
- * myRay.update( currentTime );
- *
- * The "currentTime" can vary its rate, go forwards, backwards or even jump, but it cannot be negative.
- *
- * You should normally leave the ray position to (0, 0, 0). You should control it by changing the sourceOffset and destOffset parameters.
- *
- *
- * LightningStrike parameters
- *
- * The paramsObject can contain any of the following parameters.
- *
- * Legend:
- * 'LightningStrike' (also called 'ray'): An independent voltaic arc with its ramifications and defined with a set of parameters.
- * 'Subray': A ramification of the ray. It is not a LightningStrike object.
- * 'Segment': A linear segment piece of a subray.
- * 'Leaf segment': A ray segment which cannot be smaller.
- *
- *
- * The following parameters can be changed any time and if they vary smoothly, the ray form will also change smoothly:
- *
- * @param {Vector3} sourceOffset The point where the ray starts.
- *
- * @param {Vector3} destOffset The point where the ray ends.
- *
- * @param {double} timeScale The rate at which the ray form changes in time. Default: 1
- *
- * @param {double} roughness From 0 to 1. The higher the value, the more wrinkled is the ray. Default: 0.9
- * @param {double} roughness From 0 to 1. The higher the value, the more wrinkled the ray is. Default: 0.9
- * @param {double} straightness From 0 to 1. The higher the value, the more straight will be a subray path. Default: 0.7
- *
- * @param {Vector3} up0 Ray 'up' direction at the ray starting point. Must be normalized. It should be perpendicular to the ray forward direction but it doesn't matter much.
- *
- * @param {Vector3} up1 Like the up0 parameter but at the end of the ray. Must be normalized.
- *
- * @param {double} radius0 Radius of the main ray trunk at the start point. Default: 1
- *
- * @param {double} radius1 Radius of the main ray trunk at the end point. Default: 1
- *
- * @param {double} radius0Factor The radius0 of a subray is this factor times the radius0 of its parent subray. Default: 0.5
- *
- * @param {double} radius1Factor The radius1 of a subray is this factor times the radius1 of its parent subray. Default: 0.2
- *
- * @param {double} minRadius Minimum value a subray radius0 or radius1 can get. Default: 0.2
- *
- *
- * The following parameters should not be changed after lightning creation. They can be changed but the ray will change its form abruptly:
- *
- * @param {boolean} isEternal If true the ray never extinguishes. Otherwise its life is controlled by the 'birthTime' and 'deathTime' parameters. Default: true if any of those two parameters is undefined.
- *
- * @param {double} birthTime The time at which the ray starts its life and begins propagating. Only if isEternal is false. Default: None.
- *
- * @param {double} deathTime The time at which the ray ends vanishing and its life. Only if isEternal is false. Default: None.
- *
- * @param {double} propagationTimeFactor From 0 to 1. Lifetime factor at which the ray ends propagating and enters the steady phase. For example, 0.1 means it is propagating 1/10 of its lifetime. Default: 0.1
- *
- * @param {double} vanishingTimeFactor From 0 to 1. Lifetime factor at which the ray ends the steady phase and begins vanishing. For example, 0.9 means it is vanishing 1/10 of its lifetime. Default: 0.9
- *
- * @param {double} subrayPeriod Subrays cycle periodically. This is their time period. Default: 4
- *
- * @param {double} subrayDutyCycle From 0 to 1. This is the fraction of time a subray is active. Default: 0.6
- *
- *
- * These parameters cannot change after lightning creation:
- *
- * @param {integer} maxIterations: Greater than 0. The number of ray's leaf segments is 2**maxIterations. Default: 9
- *
- * @param {boolean} isStatic Set to true only for rays which won't change over time and are not attached to moving objects (Rare case). It is used to set the vertex buffers non-dynamic. You can omit calling update() for these rays.
- *
- * @param {integer} ramification Greater than 0. Maximum number of child subrays a subray can have. Default: 5
- *
- * @param {integer} maxSubrayRecursion Greater than 0. Maximum level of recursion (subray descendant generations). Default: 3
- *
- * @param {double} recursionProbability From 0 to 1. The lower the value, the less chance each new generation of subrays has to generate new subrays. Default: 0.6
- *
- * @param {boolean} generateUVs If true, the ray geometry will have uv coordinates generated. u runs along the ray, and v across its perimeter. Default: false.
- *
- * @param {Object} randomGenerator Set here your random number generator which will seed the SimplexNoise and other decisions during ray tree creation.
- * It can be used to generate repeatable rays. For that, set also the noiseSeed parameter, and each ray created with that generator and seed pair will be identical in time.
- * The randomGenerator parameter should be an object with a random() function similar to Math.random, but seedable.
- * It must also have a getSeed() method, which returns the current seed, and a setSeed( seed ) method, which accepts as its seed a fractional number from 0 to 1, as well as any other number.
- * The default value is an internal generator for some uses and Math.random for others (It is non-repeatable even if noiseSeed is supplied)
- *
- * @param {double} noiseSeed Seed used to make repeatable rays (see the randomGenerator)
- *
- * @param {function} onDecideSubrayCreation Set this to change the callback which decides subray creation. You can look at the default callback in the code (createDefaultSubrayCreationCallbacks) for more info.
- *
- * @param {function} onSubrayCreation This is another callback, more simple than the previous one. It can be used to adapt the form of subrays or other parameters once a subray has been created and initialized. It is used in the examples to adapt subrays to a sphere or to a plane.
- *
- *
-*/
-
-THREE.LightningStrike = function ( rayParameters ) {
-
- THREE.BufferGeometry.call( this );
-
- this.type = 'LightningStrike';
-
- // Set parameters, and set undefined parameters to default values
- rayParameters = rayParameters || {};
- this.init( THREE.LightningStrike.copyParameters( rayParameters, rayParameters ) );
-
- // Creates and populates the mesh
- this.createMesh();
-
-};
-
-THREE.LightningStrike.prototype = Object.create( THREE.BufferGeometry.prototype );
-
-THREE.LightningStrike.prototype.constructor = THREE.LightningStrike;
-
-THREE.LightningStrike.prototype.isLightningStrike = true;
-
-// Ray states
-THREE.LightningStrike.RAY_INITIALIZED = 0;
-THREE.LightningStrike.RAY_UNBORN = 1;
-THREE.LightningStrike.RAY_PROPAGATING = 2;
-THREE.LightningStrike.RAY_STEADY = 3;
-THREE.LightningStrike.RAY_VANISHING = 4;
-THREE.LightningStrike.RAY_EXTINGUISHED = 5;
-
-THREE.LightningStrike.COS30DEG = Math.cos( 30 * Math.PI / 180 );
-THREE.LightningStrike.SIN30DEG = Math.sin( 30 * Math.PI / 180 );
-
-THREE.LightningStrike.createRandomGenerator = function () {
-
- var numSeeds = 2053;
- var seeds = [];
-
- for ( var i = 0; i < numSeeds; i++ ) {
-
- seeds.push( Math.random() );
-
- }
-
- var generator = {
-
- currentSeed: 0,
-
- random: function () {
-
- var value = seeds[ generator.currentSeed ];
-
- generator.currentSeed = ( generator.currentSeed + 1 ) % numSeeds;
-
- return value;
-
- },
-
- getSeed: function () {
-
- return generator.currentSeed / numSeeds;
-
- },
-
- setSeed: function ( seed ) {
-
- generator.currentSeed = Math.floor( seed * numSeeds ) % numSeeds;
-
- }
-
- };
-
- return generator;
-
-};
-
-THREE.LightningStrike.copyParameters = function ( dest, source) {
-
- source = source || {};
- dest = dest || {};
-
- var vecCopy = function( v ) {
-
- if ( source === dest ) {
-
- return v;
-
- }
- else {
-
- return v.clone();
-
- }
-
- }
-
- dest.sourceOffset = source.sourceOffset !== undefined ? vecCopy( source.sourceOffset ) : new THREE.Vector3( 0, 100, 0 ),
- dest.destOffset = source.destOffset !== undefined ? vecCopy( source.destOffset ) : new THREE.Vector3( 0, 0, 0 ),
-
- dest.timeScale = source.timeScale !== undefined ? source.timeScale : 1,
- dest.roughness = source.roughness !== undefined ? source.roughness : 0.9,
- dest.straightness = source.straightness !== undefined ? source.straightness : 0.7,
-
- dest.up0 = source.up0 !== undefined ? vecCopy( source.up0 ) : new THREE.Vector3( 0, 0, 1 );
- dest.up1 = source.up1 !== undefined ? vecCopy( source.up1 ) : new THREE.Vector3( 0, 0, 1 ),
- dest.radius0 = source.radius0 !== undefined ? source.radius0 : 1,
- dest.radius1 = source.radius1 !== undefined ? source.radius1 : 1,
- dest.radius0Factor = source.radius0Factor !== undefined ? source.radius0Factor : 0.5,
- dest.radius1Factor = source.radius1Factor !== undefined ? source.radius1Factor : 0.2,
- dest.minRadius = source.minRadius !== undefined ? source.minRadius : 0.2,
-
- // These parameters should not be changed after lightning creation. They can be changed but the ray will change its form abruptly:
-
- dest.isEternal = source.isEternal !== undefined ? source.isEternal : ( source.birthTime === undefined || source.deathTime === undefined ),
- dest.birthTime = source.birthTime,
- dest.deathTime = source.deathTime,
- dest.propagationTimeFactor = source.propagationTimeFactor !== undefined ? source.propagationTimeFactor : 0.1,
- dest.vanishingTimeFactor = source.vanishingTimeFactor !== undefined ? source.vanishingTimeFactor : 0.9,
- dest.subrayPeriod = source.subrayPeriod !== undefined ? source.subrayPeriod : 4,
- dest.subrayDutyCycle = source.subrayDutyCycle !== undefined ? source.subrayDutyCycle : 0.6;
-
- // These parameters cannot change after lightning creation:
-
- dest.maxIterations = source.maxIterations !== undefined ? source.maxIterations : 9;
- dest.isStatic = source.isStatic !== undefined ? source.isStatic : false;
- dest.ramification = source.ramification !== undefined ? source.ramification : 5;
- dest.maxSubrayRecursion = source.maxSubrayRecursion !== undefined ? source.maxSubrayRecursion : 3;
- dest.recursionProbability = source.recursionProbability !== undefined ? source.recursionProbability : 0.6;
- dest.generateUVs = source.generateUVs !== undefined ? source.generateUVs : false;
- dest.randomGenerator = source.randomGenerator,
- dest.noiseSeed = source.noiseSeed,
- dest.onDecideSubrayCreation = source.onDecideSubrayCreation,
- dest.onSubrayCreation = source.onSubrayCreation;
-
- return dest;
-
-};
-
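-// Editor's sketch (not part of the original file): copyParameters() above fills
-// every omitted field with its default, so an empty parameter object still yields
-// a complete set (the variable name below is purely illustrative).
-//
-// var params = THREE.LightningStrike.copyParameters( {}, {} );
-// // params.timeScale === 1, params.roughness === 0.9, params.isEternal === true
-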
-THREE.LightningStrike.prototype.update = function ( time ) {
-
- if ( this.isStatic ) {
- return;
- }
-
- if ( this.rayParameters.isEternal || ( this.rayParameters.birthTime <= time && time <= this.rayParameters.deathTime ) ) {
-
- this.updateMesh( time );
-
- if ( time < this.subrays[ 0 ].endPropagationTime ) {
-
- this.state = THREE.LightningStrike.RAY_PROPAGATING;
-
- }
- else if ( time > this.subrays[ 0 ].beginVanishingTime ) {
-
- this.state = THREE.LightningStrike.RAY_VANISHING;
-
- }
- else {
-
- this.state = THREE.LightningStrike.RAY_STEADY;
-
- }
-
- this.visible = true;
-
- }
- else {
-
- this.visible = false;
-
- if ( time < this.rayParameters.birthTime ) {
-
- this.state = THREE.LightningStrike.RAY_UNBORN;
-
- }
- else {
-
- this.state = THREE.LightningStrike.RAY_EXTINGUISHED;
-
- }
-
- }
-
-};
-
-THREE.LightningStrike.prototype.init = function ( rayParameters ) {
-
- // Init all the state from the parameters
-
- this.rayParameters = rayParameters;
-
- // These parameters cannot change after lightning creation:
-
- this.maxIterations = rayParameters.maxIterations !== undefined ? Math.floor( rayParameters.maxIterations ) : 9;
- rayParameters.maxIterations = this.maxIterations;
- this.isStatic = rayParameters.isStatic !== undefined ? rayParameters.isStatic : false;
- rayParameters.isStatic = this.isStatic;
- this.ramification = rayParameters.ramification !== undefined ? Math.floor( rayParameters.ramification ) : 5;
- rayParameters.ramification = this.ramification;
- this.maxSubrayRecursion = rayParameters.maxSubrayRecursion !== undefined ? Math.floor( rayParameters.maxSubrayRecursion ) : 3;
- rayParameters.maxSubrayRecursion = this.maxSubrayRecursion;
- this.recursionProbability = rayParameters.recursionProbability !== undefined ? rayParameters.recursionProbability : 0.6;
- rayParameters.recursionProbability = this.recursionProbability;
- this.generateUVs = rayParameters.generateUVs !== undefined ? rayParameters.generateUVs : false;
- rayParameters.generateUVs = this.generateUVs;
-
- // Random generator
- if ( rayParameters.randomGenerator !== undefined ) {
-
- this.randomGenerator = rayParameters.randomGenerator;
- this.seedGenerator = rayParameters.randomGenerator;
-
- if ( rayParameters.noiseSeed !== undefined ) {
-
- this.seedGenerator.setSeed( rayParameters.noiseSeed );
-
- }
-
- }
- else {
-
- this.randomGenerator = THREE.LightningStrike.createRandomGenerator();
- this.seedGenerator = Math;
-
- }
-
- // Ray creation callbacks
- if ( rayParameters.onDecideSubrayCreation !== undefined ) {
-
- this.onDecideSubrayCreation = rayParameters.onDecideSubrayCreation;
-
- }
- else {
-
- this.createDefaultSubrayCreationCallbacks();
-
- if ( rayParameters.onSubrayCreation !== undefined ) {
-
- this.onSubrayCreation = rayParameters.onSubrayCreation;
-
- }
-
- }
-
- // Internal state
-
- this.state = THREE.LightningStrike.RAY_INITIALIZED;
-
- this.maxSubrays = Math.ceil( 1 + Math.pow( this.ramification, Math.max( 0, this.maxSubrayRecursion - 1 ) ) );
- rayParameters.maxSubrays = this.maxSubrays;
-
- this.maxRaySegments = 2 * ( 1 << this.maxIterations );
-
- this.subrays = [];
-
- for ( var i = 0; i < this.maxSubrays; i++ ) {
-
- this.subrays.push( this.createSubray() );
-
- }
-
- this.raySegments = [];
-
- for ( var i = 0; i < this.maxRaySegments; i++ ) {
-
- this.raySegments.push( this.createSegment() );
-
- }
-
- this.time = 0;
- this.timeFraction = 0;
- this.currentSegmentCallback = null;
- this.currentCreateTriangleVertices = this.generateUVs ? this.createTriangleVerticesWithUVs : this.createTriangleVerticesWithoutUVs;
- this.numSubrays = 0;
- this.currentSubray = null;
- this.currentSegmentIndex = 0;
- this.isInitialSegment = false;
- this.subrayProbability = 0;
-
- this.currentVertex = 0;
- this.currentIndex = 0;
- this.currentCoordinate = 0;
- this.currentUVCoordinate = 0;
- this.vertices = null;
- this.uvs = null;
- this.indices = null;
- this.positionAttribute = null;
- this.uvsAttribute = null;
-
- this.simplexX = new SimplexNoise( this.seedGenerator );
- this.simplexY = new SimplexNoise( this.seedGenerator );
- this.simplexZ = new SimplexNoise( this.seedGenerator );
-
- // Temp vectors
- this.forwards = new THREE.Vector3();
- this.forwardsFill = new THREE.Vector3();
- this.side = new THREE.Vector3();
- this.down = new THREE.Vector3();
- this.middlePos = new THREE.Vector3();
- this.middleLinPos = new THREE.Vector3();
- this.newPos = new THREE.Vector3();
- this.vPos = new THREE.Vector3();
- this.cross1 = new THREE.Vector3();
-
-};
-
-THREE.LightningStrike.prototype.createMesh = function () {
-
- var maxDrawableSegmentsPerSubRay = 1 << this.maxIterations;
-
- var maxVerts = 3 * ( maxDrawableSegmentsPerSubRay + 1 ) * this.maxSubrays;
- var maxIndices = 18 * maxDrawableSegmentsPerSubRay * this.maxSubrays;
-
- this.vertices = new Float32Array( maxVerts * 3 );
- this.indices = new Uint32Array( maxIndices );
- if ( this.generateUVs ) {
- this.uvs = new Float32Array( maxVerts * 2 );
- }
-
- // Populate the mesh
- this.fillMesh( 0 );
-
- this.setIndex( new THREE.Uint32BufferAttribute( this.indices, 1 ) );
-
- this.positionAttribute = new THREE.Float32BufferAttribute( this.vertices, 3 );
- this.addAttribute( 'position', this.positionAttribute );
-
-	if ( this.generateUVs ) {
- this.uvsAttribute = new THREE.Float32BufferAttribute( new Float32Array( this.uvs ), 2 );
- this.addAttribute( 'uv', this.uvsAttribute );
- }
-
- if ( ! this.isStatic ) {
- this.index.dynamic = true;
- this.positionAttribute.dynamic = true;
- if ( this.generateUVs ) {
- this.uvsAttribute.dynamic = true;
- }
- }
-
- // Store buffers for later modification
- this.vertices = this.positionAttribute.array;
- this.indices = this.index.array;
- if ( this.generateUVs ) {
- this.uvs = this.uvsAttribute.array;
- }
-
-};
-
-THREE.LightningStrike.prototype.updateMesh = function ( time ) {
-
- this.fillMesh( time );
-
- this.drawRange.count = this.currentIndex;
-
- this.index.needsUpdate = true;
-
- this.positionAttribute.needsUpdate = true;
-
- if ( this.generateUVs ) {
- this.uvsAttribute.needsUpdate = true;
- }
-
-};
-
-THREE.LightningStrike.prototype.fillMesh = function ( time ) {
-
- var scope = this;
-
- this.currentVertex = 0;
- this.currentIndex = 0;
- this.currentCoordinate = 0;
- this.currentUVCoordinate = 0;
-
- this.fractalRay( time, function fillVertices ( segment ) {
-
- var subray = scope.currentSubray;
-
-		if ( time < subray.birthTime ) {
-
- return;
-
- }
-		else if ( scope.rayParameters.isEternal && scope.currentSubray.recursion == 0 ) {
-
-			// Eternal rays neither propagate nor vanish, but their subrays do
-
- scope.createPrism( segment );
-
- scope.onDecideSubrayCreation( segment, scope );
-
- }
- else if ( time < subray.endPropagationTime ) {
-
- if ( scope.timeFraction >= segment.fraction0 * subray.propagationTimeFactor ) {
-
- // Ray propagation has arrived to this segment
-
- scope.createPrism( segment );
-
- scope.onDecideSubrayCreation( segment, scope );
-
- }
-
- }
- else if ( time < subray.beginVanishingTime ) {
-
- // Ray is steady (nor propagating nor vanishing)
-
- scope.createPrism( segment );
-
- scope.onDecideSubrayCreation( segment, scope );
-
- }
- else {
-
- if ( scope.timeFraction <= subray.vanishingTimeFactor + segment.fraction1 * ( 1 - subray.vanishingTimeFactor ) ) {
-
- // Segment has not yet vanished
-
- scope.createPrism( segment );
-
- }
-
- scope.onDecideSubrayCreation( segment, scope );
-
- }
-
- } );
-
-};
-
-THREE.LightningStrike.prototype.addNewSubray = function ( rayParameters ) {
-
- return this.subrays[ this.numSubrays++ ];
-
-};
-
-THREE.LightningStrike.prototype.initSubray = function ( subray, rayParameters ) {
-
- subray.pos0.copy( rayParameters.sourceOffset );
- subray.pos1.copy( rayParameters.destOffset );
- subray.up0.copy( rayParameters.up0 );
- subray.up1.copy( rayParameters.up1 );
- subray.radius0 = rayParameters.radius0;
- subray.radius1 = rayParameters.radius1;
- subray.birthTime = rayParameters.birthTime;
- subray.deathTime = rayParameters.deathTime;
- subray.timeScale = rayParameters.timeScale;
- subray.roughness = rayParameters.roughness;
- subray.straightness = rayParameters.straightness;
- subray.propagationTimeFactor = rayParameters.propagationTimeFactor;
- subray.vanishingTimeFactor = rayParameters.vanishingTimeFactor;
-
- subray.maxIterations = this.maxIterations;
- subray.seed = rayParameters.noiseSeed !== undefined ? rayParameters.noiseSeed : 0;
- subray.recursion = 0;
-
-};
-
-THREE.LightningStrike.prototype.fractalRay = function ( time, segmentCallback ) {
-
- this.time = time;
- this.currentSegmentCallback = segmentCallback;
- this.numSubrays = 0;
-
- // Add the top level subray
- this.initSubray( this.addNewSubray(), this.rayParameters );
-
- // Process all subrays that are being generated until consuming all of them
- for ( var subrayIndex = 0; subrayIndex < this.numSubrays; subrayIndex++ ) {
-
- var subray = this.subrays[ subrayIndex ];
- this.currentSubray = subray;
-
- this.randomGenerator.setSeed( subray.seed );
-
- subray.endPropagationTime = THREE.Math.lerp( subray.birthTime, subray.deathTime, subray.propagationTimeFactor );
- subray.beginVanishingTime = THREE.Math.lerp( subray.deathTime, subray.birthTime, 1 - subray.vanishingTimeFactor );
-
- var random1 = this.randomGenerator.random;
- subray.linPos0.set( random1(), random1(), random1() ).multiplyScalar( 1000 );
- subray.linPos1.set( random1(), random1(), random1() ).multiplyScalar( 1000 );
-
- this.timeFraction = ( time - subray.birthTime ) / ( subray.deathTime - subray.birthTime );
-
- this.currentSegmentIndex = 0;
- this.isInitialSegment = true;
-
- var segment = this.getNewSegment();
- segment.iteration = 0;
- segment.pos0.copy( subray.pos0 );
- segment.pos1.copy( subray.pos1 );
- segment.linPos0.copy( subray.linPos0 );
- segment.linPos1.copy( subray.linPos1 );
- segment.up0.copy( subray.up0 );
- segment.up1.copy( subray.up1 );
- segment.radius0 = subray.radius0;
- segment.radius1 = subray.radius1;
- segment.fraction0 = 0;
- segment.fraction1 = 1;
- segment.positionVariationFactor = 1 - subray.straightness;
-
- this.subrayProbability = this.ramification * Math.pow( this.recursionProbability, subray.recursion ) / ( 1 << subray.maxIterations );
-
- this.fractalRayRecursive( segment );
-
- }
-
- this.currentSegmentCallback = null;
- this.currentSubray = null;
-
-};
-
-THREE.LightningStrike.prototype.fractalRayRecursive = function ( segment ) {
-
- // Leave recursion condition
- if ( segment.iteration >= this.currentSubray.maxIterations ) {
-
- this.currentSegmentCallback( segment );
-
- return;
-
- }
-
- // Interpolation
- this.forwards.subVectors( segment.pos1, segment.pos0 );
- var lForwards = this.forwards.length();
-
- if ( lForwards < 0.000001) {
- this.forwards.set( 0, 0, 0.01 );
- lForwards = this.forwards.length();
- }
-
- var middleRadius = ( segment.radius0 + segment.radius1 ) * 0.5;
- var middleFraction = ( segment.fraction0 + segment.fraction1 ) * 0.5;
-
- var timeDimension = this.time * this.currentSubray.timeScale * Math.pow( 2, segment.iteration );
-
- this.middlePos.lerpVectors( segment.pos0, segment.pos1, 0.5 );
- this.middleLinPos.lerpVectors( segment.linPos0, segment.linPos1, 0.5 );
- var p = this.middleLinPos;
-
- // Noise
- this.newPos.set( this.simplexX.noise4d( p.x, p.y, p.z, timeDimension ),
- this.simplexY.noise4d( p.x, p.y, p.z, timeDimension ),
- this.simplexZ.noise4d( p.x, p.y, p.z, timeDimension ) );
-
- this.newPos.multiplyScalar( segment.positionVariationFactor * lForwards );
- this.newPos.add( this.middlePos );
-
- // Recursion
-
- var newSegment1 = this.getNewSegment();
- newSegment1.pos0.copy( segment.pos0 );
- newSegment1.pos1.copy( this.newPos );
- newSegment1.linPos0.copy( segment.linPos0 );
- newSegment1.linPos1.copy( this.middleLinPos );
- newSegment1.up0.copy( segment.up0 );
- newSegment1.up1.copy( segment.up1 );
- newSegment1.radius0 = segment.radius0;
- newSegment1.radius1 = middleRadius;
- newSegment1.fraction0 = segment.fraction0;
- newSegment1.fraction1 = middleFraction;
- newSegment1.positionVariationFactor = segment.positionVariationFactor * this.currentSubray.roughness;
- newSegment1.iteration = segment.iteration + 1;
-
- var newSegment2 = this.getNewSegment();
- newSegment2.pos0.copy( this.newPos );
- newSegment2.pos1.copy( segment.pos1 );
- newSegment2.linPos0.copy( this.middleLinPos );
- newSegment2.linPos1.copy( segment.linPos1 );
- this.cross1.crossVectors( segment.up0, this.forwards.normalize() );
- newSegment2.up0.crossVectors( this.forwards, this.cross1 ).normalize();
- newSegment2.up1.copy( segment.up1 );
- newSegment2.radius0 = middleRadius;
- newSegment2.radius1 = segment.radius1;
- newSegment2.fraction0 = middleFraction;
- newSegment2.fraction1 = segment.fraction1;
- newSegment2.positionVariationFactor = segment.positionVariationFactor * this.currentSubray.roughness;
- newSegment2.iteration = segment.iteration + 1;
-
- this.fractalRayRecursive( newSegment1 );
-
- this.fractalRayRecursive( newSegment2 );
-
-};
-
-THREE.LightningStrike.prototype.createPrism = function ( segment ) {
-
- // Creates one triangular prism and its vertices at the segment
-
- this.forwardsFill.subVectors( segment.pos1, segment.pos0 ).normalize();
-
- if ( this.isInitialSegment ) {
-
- this.currentCreateTriangleVertices( segment.pos0, segment.up0, this.forwardsFill, segment.radius0, 0 );
-
- this.isInitialSegment = false;
-
- }
-
- this.currentCreateTriangleVertices( segment.pos1, segment.up0, this.forwardsFill, segment.radius1, segment.fraction1 );
-
- this.createPrismFaces();
-
-};
-
-THREE.LightningStrike.prototype.createTriangleVerticesWithoutUVs = function ( pos, up, forwards, radius ) {
-
- // Create an equilateral triangle (only vertices)
-
- this.side.crossVectors( up, forwards ).multiplyScalar( radius * THREE.LightningStrike.COS30DEG );
- this.down.copy( up ).multiplyScalar( - radius * THREE.LightningStrike.SIN30DEG );
-
- var p = this.vPos;
- var v = this.vertices;
-
- p.copy( pos ).sub( this.side ).add( this.down );
-
- v[ this.currentCoordinate++ ] = p.x;
- v[ this.currentCoordinate++ ] = p.y;
- v[ this.currentCoordinate++ ] = p.z;
-
- p.copy( pos ).add( this.side ).add( this.down );
-
- v[ this.currentCoordinate++ ] = p.x;
- v[ this.currentCoordinate++ ] = p.y;
- v[ this.currentCoordinate++ ] = p.z;
-
- p.copy( up ).multiplyScalar( radius ).add( pos );
-
- v[ this.currentCoordinate++ ] = p.x;
- v[ this.currentCoordinate++ ] = p.y;
- v[ this.currentCoordinate++ ] = p.z;
-
- this.currentVertex += 3;
-
-};
-
-THREE.LightningStrike.prototype.createTriangleVerticesWithUVs = function ( pos, up, forwards, radius, u ) {
-
- // Create an equilateral triangle (only vertices)
-
- this.side.crossVectors( up, forwards ).multiplyScalar( radius * THREE.LightningStrike.COS30DEG );
- this.down.copy( up ).multiplyScalar( - radius * THREE.LightningStrike.SIN30DEG );
-
- var p = this.vPos;
- var v = this.vertices;
- var uv = this.uvs;
-
- p.copy( pos ).sub( this.side ).add( this.down );
-
- v[ this.currentCoordinate++ ] = p.x;
- v[ this.currentCoordinate++ ] = p.y;
- v[ this.currentCoordinate++ ] = p.z;
-
- uv[ this.currentUVCoordinate++ ] = u;
- uv[ this.currentUVCoordinate++ ] = 0;
-
- p.copy( pos ).add( this.side ).add( this.down );
-
- v[ this.currentCoordinate++ ] = p.x;
- v[ this.currentCoordinate++ ] = p.y;
- v[ this.currentCoordinate++ ] = p.z;
-
- uv[ this.currentUVCoordinate++ ] = u;
- uv[ this.currentUVCoordinate++ ] = 0.5;
-
- p.copy( up ).multiplyScalar( radius ).add( pos );
-
- v[ this.currentCoordinate++ ] = p.x;
- v[ this.currentCoordinate++ ] = p.y;
- v[ this.currentCoordinate++ ] = p.z;
-
- uv[ this.currentUVCoordinate++ ] = u;
- uv[ this.currentUVCoordinate++ ] = 1;
-
- this.currentVertex += 3;
-
-};
-
-THREE.LightningStrike.prototype.createPrismFaces = function ( vertex, index ) {
-
- var indices = this.indices;
- var vertex = this.currentVertex - 6;
-
- indices[ this.currentIndex++ ] = vertex + 1;
- indices[ this.currentIndex++ ] = vertex + 2;
- indices[ this.currentIndex++ ] = vertex + 5;
- indices[ this.currentIndex++ ] = vertex + 1;
- indices[ this.currentIndex++ ] = vertex + 5;
- indices[ this.currentIndex++ ] = vertex + 4;
- indices[ this.currentIndex++ ] = vertex + 0;
- indices[ this.currentIndex++ ] = vertex + 1;
- indices[ this.currentIndex++ ] = vertex + 4;
- indices[ this.currentIndex++ ] = vertex + 0;
- indices[ this.currentIndex++ ] = vertex + 4;
- indices[ this.currentIndex++ ] = vertex + 3;
- indices[ this.currentIndex++ ] = vertex + 2;
- indices[ this.currentIndex++ ] = vertex + 0;
- indices[ this.currentIndex++ ] = vertex + 3;
- indices[ this.currentIndex++ ] = vertex + 2;
- indices[ this.currentIndex++ ] = vertex + 3;
- indices[ this.currentIndex++ ] = vertex + 5;
-
-};
-
-THREE.LightningStrike.prototype.createDefaultSubrayCreationCallbacks = function () {
-
- var random1 = this.randomGenerator.random;
-
- this.onDecideSubrayCreation = function ( segment, lightningStrike ) {
-
- // Decide subrays creation at parent (sub)ray segment
-
- var subray = lightningStrike.currentSubray;
-
- var period = lightningStrike.rayParameters.subrayPeriod;
- var dutyCycle = lightningStrike.rayParameters.subrayDutyCycle;
-
- var phase0 = ( lightningStrike.rayParameters.isEternal && subray.recursion == 0 ) ? - random1() * period : THREE.Math.lerp( subray.birthTime, subray.endPropagationTime, segment.fraction0 ) - random1() * period;
-
- var phase = lightningStrike.time - phase0;
- var currentCycle = Math.floor( phase / period );
-
- var childSubraySeed = random1() * ( currentCycle + 1 );
-
- var isActive = phase % period <= dutyCycle * period;
-
-		var probability = 0;
- if ( isActive ) {
- probability = lightningStrike.subrayProbability;
- // Distribution test: probability *= segment.fraction0 > 0.5 && segment.fraction0 < 0.9 ? 1 / 0.4 : 0;
- }
-
- if ( subray.recursion < lightningStrike.maxSubrayRecursion && lightningStrike.numSubrays < lightningStrike.maxSubrays && random1() < probability ) {
-
- var childSubray = lightningStrike.addNewSubray();
-
- var parentSeed = lightningStrike.randomGenerator.getSeed();
- childSubray.seed = childSubraySeed;
- lightningStrike.randomGenerator.setSeed( childSubraySeed );
-
- childSubray.recursion = subray.recursion + 1;
- childSubray.maxIterations = Math.max( 1, subray.maxIterations - 1 );
-
- childSubray.linPos0.set( random1(), random1(), random1() ).multiplyScalar( 1000 );
-			childSubray.linPos1.set( random1(), random1(), random1() ).multiplyScalar( 1000 );
- childSubray.up0.copy( subray.up0 );
- childSubray.up1.copy( subray.up1 );
- childSubray.radius0 = segment.radius0 * lightningStrike.rayParameters.radius0Factor;
- childSubray.radius1 = Math.min( lightningStrike.rayParameters.minRadius, segment.radius1 * lightningStrike.rayParameters.radius1Factor );
-
- childSubray.birthTime = phase0 + ( currentCycle ) * period;
- childSubray.deathTime = childSubray.birthTime + period * dutyCycle;
-
- if ( ! lightningStrike.rayParameters.isEternal && subray.recursion == 0 ) {
-
- childSubray.birthTime = Math.max( childSubray.birthTime, subray.birthTime );
- childSubray.deathTime = Math.min( childSubray.deathTime, subray.deathTime );
-
- }
-
- childSubray.timeScale = subray.timeScale * 2;
- childSubray.roughness = subray.roughness;
- childSubray.straightness = subray.straightness;
- childSubray.propagationTimeFactor = subray.propagationTimeFactor;
- childSubray.vanishingTimeFactor = subray.vanishingTimeFactor;
-
- lightningStrike.onSubrayCreation( segment, subray, childSubray, lightningStrike );
-
- lightningStrike.randomGenerator.setSeed( parentSeed );
-
- }
-
- };
-
- var vec1Pos = new THREE.Vector3();
- var vec2Forward = new THREE.Vector3();
- var vec3Side = new THREE.Vector3();
- var vec4Up = new THREE.Vector3();
-
- this.onSubrayCreation = function ( segment, parentSubray, childSubray, lightningStrike ) {
-
- // Decide childSubray origin and destination positions (pos0 and pos1) and possibly other properties of childSubray
-
-		// Just use the default cylinder position generator
- lightningStrike.subrayCylinderPosition( segment, parentSubray, childSubray, 0.5, 0.6, 0.2 );
-
- };
-
- this.subrayConePosition = function ( segment, parentSubray, childSubray, heightFactor, sideWidthFactor, minSideWidthFactor ) {
-
- // Sets childSubray pos0 and pos1 in a cone
-
- childSubray.pos0.copy( segment.pos0 );
-
- vec1Pos.subVectors( parentSubray.pos1, parentSubray.pos0 );
- vec2Forward.copy( vec1Pos ).normalize();
- vec1Pos.multiplyScalar( segment.fraction0 + ( 1 - segment.fraction0 ) * ( random1() * heightFactor ) );
- var length = vec1Pos.length();
- vec3Side.crossVectors( parentSubray.up0, vec2Forward );
- var angle = 2 * Math.PI * random1();
- vec3Side.multiplyScalar( Math.cos ( angle ) );
- vec4Up.copy( parentSubray.up0 ).multiplyScalar( Math.sin ( angle ) );
-
- childSubray.pos1.copy( vec3Side ).add( vec4Up ).multiplyScalar( length * sideWidthFactor * ( minSideWidthFactor + random1() * ( 1 - minSideWidthFactor ) ) ).add( vec1Pos ).add( parentSubray.pos0 );
-
- }
-
- this.subrayCylinderPosition = function ( segment, parentSubray, childSubray, heightFactor, sideWidthFactor, minSideWidthFactor ) {
-
- // Sets childSubray pos0 and pos1 in a cylinder
-
- childSubray.pos0.copy( segment.pos0 );
-
- vec1Pos.subVectors( parentSubray.pos1, parentSubray.pos0 );
- vec2Forward.copy( vec1Pos ).normalize();
- vec1Pos.multiplyScalar( segment.fraction0 + ( 1 - segment.fraction0 ) * ( ( 2 * random1() - 1 ) * heightFactor ) );
- var length = vec1Pos.length();
- vec3Side.crossVectors( parentSubray.up0, vec2Forward );
- var angle = 2 * Math.PI * random1();
- vec3Side.multiplyScalar( Math.cos ( angle ) );
- vec4Up.copy( parentSubray.up0 ).multiplyScalar( Math.sin ( angle ) );
-
- childSubray.pos1.copy( vec3Side ).add( vec4Up ).multiplyScalar( length * sideWidthFactor * ( minSideWidthFactor + random1() * ( 1 - minSideWidthFactor ) ) ).add( vec1Pos ).add( parentSubray.pos0 );
-
- }
-
-};
-
-THREE.LightningStrike.prototype.createSubray = function () {
-
- return {
-
- seed: 0,
- maxIterations: 0,
- recursion: 0,
- pos0: new THREE.Vector3(),
- pos1: new THREE.Vector3(),
- linPos0: new THREE.Vector3(),
- linPos1: new THREE.Vector3(),
- up0: new THREE.Vector3(),
- up1: new THREE.Vector3(),
- radius0: 0,
- radius1: 0,
- birthTime: 0,
- deathTime: 0,
- timeScale: 0,
- roughness: 0,
- straightness: 0,
- propagationTimeFactor: 0,
- vanishingTimeFactor: 0,
- endPropagationTime: 0,
- beginVanishingTime: 0
-
- };
-
-};
-
-THREE.LightningStrike.prototype.createSegment = function () {
-
- return {
- iteration: 0,
- pos0: new THREE.Vector3(),
- pos1: new THREE.Vector3(),
- linPos0: new THREE.Vector3(),
- linPos1: new THREE.Vector3(),
- up0: new THREE.Vector3(),
- up1: new THREE.Vector3(),
- radius0: 0,
- radius1: 0,
- fraction0: 0,
- fraction1: 0,
- positionVariationFactor: 0
- }
-
-};
-
-THREE.LightningStrike.prototype.getNewSegment = function () {
-
- return this.raySegments[ this.currentSegmentIndex++ ];
-
-};
-
-THREE.LightningStrike.prototype.copy = function ( source ) {
-
-	THREE.BufferGeometry.prototype.copy.call( this, source );
-
- this.init( THREE.LightningStrike.copyParameters( {}, source.rayParameters ) );
-
- return this;
-
-};
-
-THREE.LightningStrike.prototype.clone = function () {
-
- return new this.constructor( THREE.LightningStrike.copyParameters( {}, this.rayParameters ) );
-
-};
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/VerticalTiltShiftShader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/VerticalTiltShiftShader.js
deleted file mode 100644
index ad8ff70c9024ee27aa7436702143751491e49927..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/VerticalTiltShiftShader.js
+++ /dev/null
@@ -1,65 +0,0 @@
-/**
- * @author alteredq / http://alteredqualia.com/
- *
- * Simple fake tilt-shift effect, modulating a two-pass Gaussian blur by vertical position
- *
- * - 9 samples per pass
- * - standard deviation 2.7
- * - "v" parameter should be set to "1 / height" (this vertical pass has no "h" uniform)
- * - "r" parameter controls where the "focused" horizontal line lies
- */
-
-THREE.VerticalTiltShiftShader = {
-
- uniforms: {
-
- "tDiffuse": { value: null },
- "v": { value: 1.0 / 512.0 },
- "r": { value: 0.35 }
-
- },
-
- vertexShader: [
-
- "varying vec2 vUv;",
-
- "void main() {",
-
- "vUv = uv;",
- "gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
-
- "}"
-
- ].join( "\n" ),
-
- fragmentShader: [
-
- "uniform sampler2D tDiffuse;",
- "uniform float v;",
- "uniform float r;",
-
- "varying vec2 vUv;",
-
- "void main() {",
-
- "vec4 sum = vec4( 0.0 );",
-
- "float vv = v * abs( r - vUv.y );",
-
- "sum += texture2D( tDiffuse, vec2( vUv.x, vUv.y - 4.0 * vv ) ) * 0.051;",
- "sum += texture2D( tDiffuse, vec2( vUv.x, vUv.y - 3.0 * vv ) ) * 0.0918;",
- "sum += texture2D( tDiffuse, vec2( vUv.x, vUv.y - 2.0 * vv ) ) * 0.12245;",
- "sum += texture2D( tDiffuse, vec2( vUv.x, vUv.y - 1.0 * vv ) ) * 0.1531;",
- "sum += texture2D( tDiffuse, vec2( vUv.x, vUv.y ) ) * 0.1633;",
- "sum += texture2D( tDiffuse, vec2( vUv.x, vUv.y + 1.0 * vv ) ) * 0.1531;",
- "sum += texture2D( tDiffuse, vec2( vUv.x, vUv.y + 2.0 * vv ) ) * 0.12245;",
- "sum += texture2D( tDiffuse, vec2( vUv.x, vUv.y + 3.0 * vv ) ) * 0.0918;",
- "sum += texture2D( tDiffuse, vec2( vUv.x, vUv.y + 4.0 * vv ) ) * 0.051;",
-
- "gl_FragColor = sum;",
-
- "}"
-
- ].join( "\n" )
-
-};
diff --git a/spaces/benthecoder/news-summarizer/app.py b/spaces/benthecoder/news-summarizer/app.py
deleted file mode 100644
index 085addefff091b068b27c38ece1c4cd97b45f27b..0000000000000000000000000000000000000000
--- a/spaces/benthecoder/news-summarizer/app.py
+++ /dev/null
@@ -1,47 +0,0 @@
-from newspaper import Article
-from newspaper import Config
-import nltk
-nltk.download('punkt')
-
-from transformers import pipeline
-import gradio as gr
-from gradio.mix import Parallel, Series
-
-
-def extract_article_text(url):
- USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0'
- config = Config()
- config.browser_user_agent = USER_AGENT
- config.request_timeout = 10
-
- article = Article(url, config=config)
- article.download()
- article.parse()
- text = article.text
- return text
-
-extractor = gr.Interface(extract_article_text, 'text', 'text')
-summarizer = gr.Interface.load("huggingface/facebook/bart-large-cnn")
-
-sample_url = [['https://www.technologyreview.com/2021/07/22/1029973/deepmind-alphafold-protein-folding-biology-disease-drugs-proteome/'],
- ['https://www.technologyreview.com/2021/07/21/1029860/disability-rights-employment-discrimination-ai-hiring/'],
- ['https://www.technologyreview.com/2021/07/09/1028140/ai-voice-actors-sound-human/']]
-
-desc = '''
- Let Hugging Face models summarize articles for you.
- Note: Shorter articles generate faster summaries.
- This summarizer uses bart-large-cnn model by Facebook
- '''
-
-iface = Series(extractor, summarizer,
- inputs = gr.inputs.Textbox(
- lines = 2,
- label = 'URL'
- ),
- outputs = 'text',
- title = 'News Summarizer',
- theme = 'huggingface',
- description = desc,
- examples=sample_url)
-
-iface.launch()
\ No newline at end of file
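The Space above wires a newspaper3k article extractor into a hosted bart-large-cnn summarizer through gradio's `Series`. A minimal sketch of the same extract-then-summarize flow without Gradio, running the summarizer locally via the `transformers` pipeline; the URL below is a placeholder, and the 3000-character truncation is only a rough guard against the model's input limit:

```python
# Sketch only: local equivalent of the extractor + summarizer chain above.
# Assumes newspaper3k and transformers are installed; the URL is a placeholder.
from newspaper import Article, Config
from transformers import pipeline

config = Config()
config.browser_user_agent = "Mozilla/5.0"
config.request_timeout = 10

article = Article("https://example.com/some-news-story", config=config)
article.download()
article.parse()

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
summary = summarizer(article.text[:3000], max_length=130, min_length=30, do_sample=False)
print(summary[0]["summary_text"])
```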
diff --git a/spaces/bhasker412/IDD-YOLO-Tracking/utils/autoanchor.py b/spaces/bhasker412/IDD-YOLO-Tracking/utils/autoanchor.py
deleted file mode 100644
index f491032e53ab43cd81d966d127bd92f9b414b9fe..0000000000000000000000000000000000000000
--- a/spaces/bhasker412/IDD-YOLO-Tracking/utils/autoanchor.py
+++ /dev/null
@@ -1,160 +0,0 @@
-# Auto-anchor utils
-
-import numpy as np
-import torch
-import yaml
-from scipy.cluster.vq import kmeans
-from tqdm import tqdm
-
-from utils.general import colorstr
-
-
-def check_anchor_order(m):
- # Check anchor order against stride order for YOLO Detect() module m, and correct if necessary
- a = m.anchor_grid.prod(-1).view(-1) # anchor area
- da = a[-1] - a[0] # delta a
- ds = m.stride[-1] - m.stride[0] # delta s
-    if da.sign() != ds.sign():  # anchor order and stride order differ
- print('Reversing anchor order')
- m.anchors[:] = m.anchors.flip(0)
- m.anchor_grid[:] = m.anchor_grid.flip(0)
-
-
-def check_anchors(dataset, model, thr=4.0, imgsz=640):
- # Check anchor fit to data, recompute if necessary
- prefix = colorstr('autoanchor: ')
- print(f'\n{prefix}Analyzing anchors... ', end='')
- m = model.module.model[-1] if hasattr(model, 'module') else model.model[-1] # Detect()
- shapes = imgsz * dataset.shapes / dataset.shapes.max(1, keepdims=True)
- scale = np.random.uniform(0.9, 1.1, size=(shapes.shape[0], 1)) # augment scale
- wh = torch.tensor(np.concatenate([l[:, 3:5] * s for s, l in zip(shapes * scale, dataset.labels)])).float() # wh
-
- def metric(k): # compute metric
- r = wh[:, None] / k[None]
- x = torch.min(r, 1. / r).min(2)[0] # ratio metric
- best = x.max(1)[0] # best_x
- aat = (x > 1. / thr).float().sum(1).mean() # anchors above threshold
- bpr = (best > 1. / thr).float().mean() # best possible recall
- return bpr, aat
-
- anchors = m.anchor_grid.clone().cpu().view(-1, 2) # current anchors
- bpr, aat = metric(anchors)
- print(f'anchors/target = {aat:.2f}, Best Possible Recall (BPR) = {bpr:.4f}', end='')
- if bpr < 0.98: # threshold to recompute
- print('. Attempting to improve anchors, please wait...')
- na = m.anchor_grid.numel() // 2 # number of anchors
- try:
- anchors = kmean_anchors(dataset, n=na, img_size=imgsz, thr=thr, gen=1000, verbose=False)
- except Exception as e:
- print(f'{prefix}ERROR: {e}')
- new_bpr = metric(anchors)[0]
- if new_bpr > bpr: # replace anchors
- anchors = torch.tensor(anchors, device=m.anchors.device).type_as(m.anchors)
- m.anchor_grid[:] = anchors.clone().view_as(m.anchor_grid) # for inference
- check_anchor_order(m)
- m.anchors[:] = anchors.clone().view_as(m.anchors) / m.stride.to(m.anchors.device).view(-1, 1, 1) # loss
- print(f'{prefix}New anchors saved to model. Update model *.yaml to use these anchors in the future.')
- else:
- print(f'{prefix}Original anchors better than new anchors. Proceeding with original anchors.')
- print('') # newline
-
-
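`check_anchors` above keys its decision on the best-possible-recall (BPR) metric. A standalone sketch of that metric with made-up label and anchor sizes, useful for seeing what the `bpr`/`aat` numbers mean:

```python
# Standalone sketch of the ratio metric used in check_anchors; wh and anchors
# below are made-up pixel sizes, not values from any real dataset.
import torch

wh = torch.tensor([[30., 62.], [59., 119.], [116., 90.], [10., 14.]])     # label widths/heights
anchors = torch.tensor([[10., 13.], [33., 23.], [62., 45.], [116., 90.]])  # anchor widths/heights
thr = 4.0

r = wh[:, None] / anchors[None]              # width/height ratios per label-anchor pair
x = torch.min(r, 1. / r).min(2)[0]           # worst of the two ratios
best = x.max(1)[0]                           # best anchor per label
aat = (x > 1. / thr).float().sum(1).mean()   # anchors above threshold
bpr = (best > 1. / thr).float().mean()       # best possible recall
print(f'anchors/target = {aat:.2f}, BPR = {bpr:.4f}')
```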
-def kmean_anchors(path='./data/coco.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True):
- """ Creates kmeans-evolved anchors from training dataset
-
- Arguments:
- path: path to dataset *.yaml, or a loaded dataset
- n: number of anchors
- img_size: image size used for training
- thr: anchor-label wh ratio threshold hyperparameter hyp['anchor_t'] used for training, default=4.0
- gen: generations to evolve anchors using genetic algorithm
- verbose: print all results
-
- Return:
- k: kmeans evolved anchors
-
- Usage:
- from utils.autoanchor import *; _ = kmean_anchors()
- """
- thr = 1. / thr
- prefix = colorstr('autoanchor: ')
-
- def metric(k, wh): # compute metrics
- r = wh[:, None] / k[None]
- x = torch.min(r, 1. / r).min(2)[0] # ratio metric
- # x = wh_iou(wh, torch.tensor(k)) # iou metric
- return x, x.max(1)[0] # x, best_x
-
- def anchor_fitness(k): # mutation fitness
- _, best = metric(torch.tensor(k, dtype=torch.float32), wh)
- return (best * (best > thr).float()).mean() # fitness
-
- def print_results(k):
- k = k[np.argsort(k.prod(1))] # sort small to large
- x, best = metric(k, wh0)
- bpr, aat = (best > thr).float().mean(), (x > thr).float().mean() * n # best possible recall, anch > thr
- print(f'{prefix}thr={thr:.2f}: {bpr:.4f} best possible recall, {aat:.2f} anchors past thr')
- print(f'{prefix}n={n}, img_size={img_size}, metric_all={x.mean():.3f}/{best.mean():.3f}-mean/best, '
- f'past_thr={x[x > thr].mean():.3f}-mean: ', end='')
- for i, x in enumerate(k):
- print('%i,%i' % (round(x[0]), round(x[1])), end=', ' if i < len(k) - 1 else '\n') # use in *.cfg
- return k
-
- if isinstance(path, str): # *.yaml file
- with open(path) as f:
- data_dict = yaml.load(f, Loader=yaml.SafeLoader) # model dict
- from utils.datasets import LoadImagesAndLabels
- dataset = LoadImagesAndLabels(data_dict['train'], augment=True, rect=True)
- else:
- dataset = path # dataset
-
- # Get label wh
- shapes = img_size * dataset.shapes / dataset.shapes.max(1, keepdims=True)
- wh0 = np.concatenate([l[:, 3:5] * s for s, l in zip(shapes, dataset.labels)]) # wh
-
- # Filter
- i = (wh0 < 3.0).any(1).sum()
- if i:
- print(f'{prefix}WARNING: Extremely small objects found. {i} of {len(wh0)} labels are < 3 pixels in size.')
- wh = wh0[(wh0 >= 2.0).any(1)] # filter > 2 pixels
- # wh = wh * (np.random.rand(wh.shape[0], 1) * 0.9 + 0.1) # multiply by random scale 0-1
-
- # Kmeans calculation
- print(f'{prefix}Running kmeans for {n} anchors on {len(wh)} points...')
- s = wh.std(0) # sigmas for whitening
- k, dist = kmeans(wh / s, n, iter=30) # points, mean distance
-    assert len(k) == n, f'{prefix}ERROR: scipy.cluster.vq.kmeans requested {n} points but returned only {len(k)}'
- k *= s
- wh = torch.tensor(wh, dtype=torch.float32) # filtered
- wh0 = torch.tensor(wh0, dtype=torch.float32) # unfiltered
- k = print_results(k)
-
- # Plot
- # k, d = [None] * 20, [None] * 20
- # for i in tqdm(range(1, 21)):
- # k[i-1], d[i-1] = kmeans(wh / s, i) # points, mean distance
- # fig, ax = plt.subplots(1, 2, figsize=(14, 7), tight_layout=True)
- # ax = ax.ravel()
- # ax[0].plot(np.arange(1, 21), np.array(d) ** 2, marker='.')
- # fig, ax = plt.subplots(1, 2, figsize=(14, 7)) # plot wh
- # ax[0].hist(wh[wh[:, 0]<100, 0],400)
- # ax[1].hist(wh[wh[:, 1]<100, 1],400)
- # fig.savefig('wh.png', dpi=200)
-
- # Evolve
- npr = np.random
-    f, sh, mp, s = anchor_fitness(k), k.shape, 0.9, 0.1  # fitness, anchor shape, mutation probability, sigma
- pbar = tqdm(range(gen), desc=f'{prefix}Evolving anchors with Genetic Algorithm:') # progress bar
- for _ in pbar:
- v = np.ones(sh)
- while (v == 1).all(): # mutate until a change occurs (prevent duplicates)
- v = ((npr.random(sh) < mp) * npr.random() * npr.randn(*sh) * s + 1).clip(0.3, 3.0)
- kg = (k.copy() * v).clip(min=2.0)
- fg = anchor_fitness(kg)
- if fg > f:
- f, k = fg, kg.copy()
- pbar.desc = f'{prefix}Evolving anchors with Genetic Algorithm: fitness = {f:.4f}'
- if verbose:
- print_results(k)
-
- return print_results(k)
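The `Usage` note in the `kmean_anchors` docstring suggests it can be called directly; a hedged usage sketch (the dataset yaml path is a placeholder, and the repo's `utils` package is assumed to be importable):

```python
# Usage sketch based on the docstring above; './data/coco.yaml' is a placeholder path.
from utils.autoanchor import kmean_anchors

anchors = kmean_anchors(path='./data/coco.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=False)
print(anchors)  # (9, 2) array of evolved anchor widths/heights, sorted small to large
```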
diff --git a/spaces/bioriAsaeru/text-to-voice/Authentec Fingerprint Driver W7 64bit W7wbf64 Exe [CRACKED].md b/spaces/bioriAsaeru/text-to-voice/Authentec Fingerprint Driver W7 64bit W7wbf64 Exe [CRACKED].md
deleted file mode 100644
index 5f2e1adb5d4f5b5d2fed9e1af480799f25262e47..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Authentec Fingerprint Driver W7 64bit W7wbf64 Exe [CRACKED].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- d5da3c52bf
-
-
-
diff --git a/spaces/blmdsydm/faster-whisper-webui/src/vad.py b/spaces/blmdsydm/faster-whisper-webui/src/vad.py
deleted file mode 100644
index e68ee7391e93f539a05d548601f2d87168bb1282..0000000000000000000000000000000000000000
--- a/spaces/blmdsydm/faster-whisper-webui/src/vad.py
+++ /dev/null
@@ -1,568 +0,0 @@
-from abc import ABC, abstractmethod
-from collections import Counter, deque
-import time
-
-from typing import Any, Deque, Iterator, List, Dict
-
-from pprint import pprint
-from src.hooks.progressListener import ProgressListener
-from src.hooks.subTaskProgressListener import SubTaskProgressListener
-from src.hooks.whisperProgressHook import create_progress_listener_handle
-from src.modelCache import GLOBAL_MODEL_CACHE, ModelCache
-
-from src.segments import merge_timestamps
-from src.whisper.abstractWhisperContainer import AbstractWhisperCallback
-
-# Workaround for https://github.com/tensorflow/tensorflow/issues/48797
-try:
- import tensorflow as tf
-except ModuleNotFoundError:
- # Error handling
- pass
-
-import torch
-
-import ffmpeg
-import numpy as np
-
-from src.utils import format_timestamp
-from enum import Enum
-
-class NonSpeechStrategy(Enum):
- """
-    Ignore non-speech segments entirely.
- """
- SKIP = 1
- """
- Just treat non-speech segments as speech.
- """
- CREATE_SEGMENT = 2
- """
- Expand speech segments into subsequent non-speech segments.
- """
- EXPAND_SEGMENT = 3
-
-# Defaults for Silero
-SPEECH_TRESHOLD = 0.3
-
-# Minimum size of segments to process
-MIN_SEGMENT_DURATION = 1
-
-# The maximum time for texts from old segments to be used in the next segment
-MAX_PROMPT_WINDOW = 0 # seconds (0 = disabled)
-PROMPT_NO_SPEECH_PROB = 0.1 # Do not pass the text from segments with a no speech probability higher than this
-
-VAD_MAX_PROCESSING_CHUNK = 60 * 60 # 60 minutes of audio
-
-class TranscriptionConfig(ABC):
- def __init__(self, non_speech_strategy: NonSpeechStrategy = NonSpeechStrategy.SKIP,
- segment_padding_left: float = None, segment_padding_right = None, max_silent_period: float = None,
- max_merge_size: float = None, max_prompt_window: float = None, initial_segment_index = -1):
- self.non_speech_strategy = non_speech_strategy
- self.segment_padding_left = segment_padding_left
- self.segment_padding_right = segment_padding_right
- self.max_silent_period = max_silent_period
- self.max_merge_size = max_merge_size
- self.max_prompt_window = max_prompt_window
- self.initial_segment_index = initial_segment_index
-
-class PeriodicTranscriptionConfig(TranscriptionConfig):
- def __init__(self, periodic_duration: float, non_speech_strategy: NonSpeechStrategy = NonSpeechStrategy.SKIP,
- segment_padding_left: float = None, segment_padding_right = None, max_silent_period: float = None,
- max_merge_size: float = None, max_prompt_window: float = None, initial_segment_index = -1):
- super().__init__(non_speech_strategy, segment_padding_left, segment_padding_right, max_silent_period, max_merge_size, max_prompt_window, initial_segment_index)
- self.periodic_duration = periodic_duration
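A hedged sketch of how these configuration classes are meant to be combined with `NonSpeechStrategy` (the numbers are illustrative, not project defaults):

```python
# Illustrative values only.
config = TranscriptionConfig(
    non_speech_strategy=NonSpeechStrategy.CREATE_SEGMENT,  # turn long silences into explicit segments
    segment_padding_left=1,
    segment_padding_right=1,
    max_silent_period=30,
    max_merge_size=30,
    max_prompt_window=3,
)

# Alternative: skip VAD-style detection entirely and cut the audio into fixed 5-minute windows.
periodic_config = PeriodicTranscriptionConfig(periodic_duration=5 * 60)
```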
-
-class AbstractTranscription(ABC):
- def __init__(self, sampling_rate: int = 16000):
- self.sampling_rate = sampling_rate
-
-    def get_audio_segment(self, audio_path: str, start_time: str = None, duration: str = None):
-        return load_audio(audio_path, self.sampling_rate, start_time, duration)
-
- def is_transcribe_timestamps_fast(self):
- """
- Determine if get_transcribe_timestamps is fast enough to not need parallelization.
- """
- return False
-
- @abstractmethod
- def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, end_time: float):
- """
- Get the start and end timestamps of the sections that should be transcribed by this VAD method.
-
- Parameters
- ----------
- audio: str
- The audio file.
- config: TranscriptionConfig
- The transcription configuration.
-
- Returns
- -------
- A list of start and end timestamps, in fractional seconds.
- """
- return
-
- def get_merged_timestamps(self, timestamps: List[Dict[str, Any]], config: TranscriptionConfig, total_duration: float):
- """
- Get the start and end timestamps of the sections that should be transcribed by this VAD method,
- after merging the given segments using the specified configuration.
-
- Parameters
- ----------
- audio: str
- The audio file.
- config: TranscriptionConfig
- The transcription configuration.
-
- Returns
- -------
- A list of start and end timestamps, in fractional seconds.
- """
- merged = merge_timestamps(timestamps, config.max_silent_period, config.max_merge_size,
- config.segment_padding_left, config.segment_padding_right)
-
- if config.non_speech_strategy != NonSpeechStrategy.SKIP:
- # Expand segments to include the gaps between them
- if (config.non_speech_strategy == NonSpeechStrategy.CREATE_SEGMENT):
-                # When we have a prompt window, we create speech segments between the merged segments if we exceed the merge size
- merged = self.fill_gaps(merged, total_duration=total_duration, max_expand_size=config.max_merge_size)
- elif config.non_speech_strategy == NonSpeechStrategy.EXPAND_SEGMENT:
- # With no prompt window, it is better to just expand the segments (this effectively passes the prompt to the next segment)
- merged = self.expand_gaps(merged, total_duration=total_duration)
- else:
- raise Exception("Unknown non-speech strategy: " + str(config.non_speech_strategy))
-
- print("Transcribing non-speech:")
- pprint(merged)
- return merged
-
- def transcribe(self, audio: str, whisperCallable: AbstractWhisperCallback, config: TranscriptionConfig,
- progressListener: ProgressListener = None):
- """
-        Transcribe the given audio file.
-
- Parameters
- ----------
- audio: str
- The audio file.
- whisperCallable: WhisperCallback
- A callback object to call to transcribe each segment.
-
- Returns
- -------
-        A dictionary with the transcribed text, its segments, and the detected language.
- """
-
- try:
- max_audio_duration = self.get_audio_duration(audio, config)
- timestamp_segments = self.get_transcribe_timestamps(audio, config, 0, max_audio_duration)
-
- # Get speech timestamps from full audio file
- merged = self.get_merged_timestamps(timestamp_segments, config, max_audio_duration)
-
- # A deque of transcribed segments that is passed to the next segment as a prompt
- prompt_window = deque()
-
- print("Processing timestamps:")
- pprint(merged)
-
- result = {
- 'text': "",
- 'segments': [],
- 'language': ""
- }
- languageCounter = Counter()
- detected_language = None
-
- segment_index = config.initial_segment_index
-
- # Calculate progress
- progress_start_offset = merged[0]['start'] if len(merged) > 0 else 0
- progress_total_duration = sum([segment['end'] - segment['start'] for segment in merged])
-
- # For each time segment, run whisper
- for segment in merged:
- segment_index += 1
- segment_start = segment['start']
- segment_end = segment['end']
- segment_expand_amount = segment.get('expand_amount', 0)
- segment_gap = segment.get('gap', False)
-
- segment_duration = segment_end - segment_start
-
- if segment_duration < MIN_SEGMENT_DURATION:
- continue
-
- # Audio to run on Whisper
- segment_audio = self.get_audio_segment(audio, start_time = str(segment_start), duration = str(segment_duration))
- # Previous segments to use as a prompt
- segment_prompt = ' '.join([segment['text'] for segment in prompt_window]) if len(prompt_window) > 0 else None
-
- # Detected language
- detected_language = languageCounter.most_common(1)[0][0] if len(languageCounter) > 0 else None
-
- print("Running whisper from ", format_timestamp(segment_start), " to ", format_timestamp(segment_end), ", duration: ",
- segment_duration, "expanded: ", segment_expand_amount, "prompt: ", segment_prompt, "language: ", detected_language)
-
- perf_start_time = time.perf_counter()
-
- scaled_progress_listener = SubTaskProgressListener(progressListener, base_task_total=progress_total_duration,
- sub_task_start=segment_start - progress_start_offset, sub_task_total=segment_duration)
- segment_result = whisperCallable.invoke(segment_audio, segment_index, segment_prompt, detected_language, progress_listener=scaled_progress_listener)
-
- perf_end_time = time.perf_counter()
- print("Whisper took {} seconds".format(perf_end_time - perf_start_time))
-
- adjusted_segments = self.adjust_timestamp(segment_result["segments"], adjust_seconds=segment_start, max_source_time=segment_duration)
-
- # Propagate expand amount to the segments
- if (segment_expand_amount > 0):
- segment_without_expansion = segment_duration - segment_expand_amount
-
- for adjusted_segment in adjusted_segments:
- adjusted_segment_end = adjusted_segment['end']
-
- # Add expand amount if the segment got expanded
- if (adjusted_segment_end > segment_without_expansion):
- adjusted_segment["expand_amount"] = adjusted_segment_end - segment_without_expansion
-
- # Append to output
- result['text'] += segment_result['text']
- result['segments'].extend(adjusted_segments)
-
- # Increment detected language
- if not segment_gap:
- languageCounter[segment_result['language']] += 1
-
- # Update prompt window
- self.__update_prompt_window(prompt_window, adjusted_segments, segment_end, segment_gap, config)
-
- if detected_language is not None:
- result['language'] = detected_language
- finally:
- # Notify progress listener that we are done
- if progressListener is not None:
- progressListener.on_finished()
- return result
-
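A hedged sketch of how `transcribe` is typically driven; the whisper callback is only stubbed here (in this project it comes from `src.whisper.abstractWhisperContainer`) and the file name is a placeholder:

```python
# Illustrative wiring only. `whisper_callback` is assumed to be an
# AbstractWhisperCallback obtained from the project's whisper container;
# "audio.mp3" is a placeholder.
vad = VadSileroTranscription()
config = TranscriptionConfig(non_speech_strategy=NonSpeechStrategy.SKIP,
                             max_silent_period=10, max_merge_size=30)

result = vad.transcribe("audio.mp3", whisper_callback, config)
print(result["language"])
for segment in result["segments"]:
    print(segment["start"], segment["end"], segment["text"])
```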
- def get_audio_duration(self, audio: str, config: TranscriptionConfig):
- return get_audio_duration(audio)
-
- def __update_prompt_window(self, prompt_window: Deque, adjusted_segments: List, segment_end: float, segment_gap: bool, config: TranscriptionConfig):
- if (config.max_prompt_window is not None and config.max_prompt_window > 0):
- # Add segments to the current prompt window (unless it is a speech gap)
- if not segment_gap:
- for segment in adjusted_segments:
- if segment.get('no_speech_prob', 0) <= PROMPT_NO_SPEECH_PROB:
- prompt_window.append(segment)
-
- while (len(prompt_window) > 0):
- first_end_time = prompt_window[0].get('end', 0)
- # Time expanded in the segments should be discounted from the prompt window
- first_expand_time = prompt_window[0].get('expand_amount', 0)
-
- if (first_end_time - first_expand_time < segment_end - config.max_prompt_window):
- prompt_window.popleft()
- else:
- break
-
- def include_gaps(self, segments: Iterator[dict], min_gap_length: float, total_duration: float):
- result = []
- last_end_time = 0
-
- for segment in segments:
- segment_start = float(segment['start'])
- segment_end = float(segment['end'])
-
- if (last_end_time != segment_start):
- delta = segment_start - last_end_time
-
- if (min_gap_length is None or delta >= min_gap_length):
- result.append( { 'start': last_end_time, 'end': segment_start, 'gap': True } )
-
- last_end_time = segment_end
- result.append(segment)
-
- # Also include total duration if specified
- if (total_duration is not None and last_end_time < total_duration):
- delta = total_duration - segment_start
-
- if (min_gap_length is None or delta >= min_gap_length):
- result.append( { 'start': last_end_time, 'end': total_duration, 'gap': True } )
-
- return result
-
- # Expand the end time of each segment to the start of the next segment
- def expand_gaps(self, segments: List[Dict[str, Any]], total_duration: float):
- result = []
-
- if len(segments) == 0:
- return result
-
- # Add gap at the beginning if needed
- if (segments[0]['start'] > 0):
- result.append({ 'start': 0, 'end': segments[0]['start'], 'gap': True } )
-
- for i in range(len(segments) - 1):
- current_segment = segments[i]
- next_segment = segments[i + 1]
-
- delta = next_segment['start'] - current_segment['end']
-
- # Expand if the gap actually exists
- if (delta >= 0):
- current_segment = current_segment.copy()
- current_segment['expand_amount'] = delta
- current_segment['end'] = next_segment['start']
-
- result.append(current_segment)
-
- # Add last segment
- last_segment = segments[-1]
- result.append(last_segment)
-
- # Also include total duration if specified
- if (total_duration is not None):
- last_segment = result[-1]
-
- if (last_segment['end'] < total_duration):
- last_segment = last_segment.copy()
- last_segment['end'] = total_duration
- result[-1] = last_segment
-
- return result
-
- def fill_gaps(self, segments: List[Dict[str, Any]], total_duration: float, max_expand_size: float = None):
- result = []
-
- if len(segments) == 0:
- return result
-
- # Add gap at the beginning if needed
- if (segments[0]['start'] > 0):
- result.append({ 'start': 0, 'end': segments[0]['start'], 'gap': True } )
-
- for i in range(len(segments) - 1):
- expanded = False
- current_segment = segments[i]
- next_segment = segments[i + 1]
-
- delta = next_segment['start'] - current_segment['end']
-
- if (max_expand_size is not None and delta <= max_expand_size):
- # Just expand the current segment
- current_segment = current_segment.copy()
- current_segment['expand_amount'] = delta
- current_segment['end'] = next_segment['start']
- expanded = True
-
- result.append(current_segment)
-
- # Add a gap to the next segment if needed
- if (delta >= 0 and not expanded):
- result.append({ 'start': current_segment['end'], 'end': next_segment['start'], 'gap': True } )
-
- # Add last segment
- last_segment = segments[-1]
- result.append(last_segment)
-
- # Also include total duration if specified
- if (total_duration is not None):
- last_segment = result[-1]
-
- delta = total_duration - last_segment['end']
-
- if (delta > 0):
- if (max_expand_size is not None and delta <= max_expand_size):
- # Expand the last segment
- last_segment = last_segment.copy()
- last_segment['expand_amount'] = delta
- last_segment['end'] = total_duration
- result[-1] = last_segment
- else:
- result.append({ 'start': last_segment['end'], 'end': total_duration, 'gap': True } )
-
- return result
-
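The gap-handling helpers above are easiest to see on a toy timestamp list (values made up):

```python
# Toy illustration of the two gap strategies; times are made up, in seconds.
vad = VadPeriodicTranscription()
speech = [{'start': 2.0, 'end': 5.0}, {'start': 9.0, 'end': 12.0}]

# EXPAND_SEGMENT-style behaviour: each speech segment is stretched over the silence that follows it.
print(vad.expand_gaps(speech, total_duration=15.0))

# CREATE_SEGMENT-style behaviour: silences longer than max_expand_size become
# explicit {'gap': True} entries that get transcribed on their own.
print(vad.fill_gaps(speech, total_duration=15.0, max_expand_size=2.0))
```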
- def adjust_timestamp(self, segments: Iterator[dict], adjust_seconds: float, max_source_time: float = None):
- result = []
-
- for segment in segments:
- segment_start = float(segment['start'])
- segment_end = float(segment['end'])
-
- # Filter segments?
- if (max_source_time is not None):
- if (segment_start > max_source_time):
- continue
- segment_end = min(max_source_time, segment_end)
-
- new_segment = segment.copy()
-
- # Add to start and end
- new_segment['start'] = segment_start + adjust_seconds
- new_segment['end'] = segment_end + adjust_seconds
-
- # Handle words
- if ('words' in new_segment):
- for word in new_segment['words']:
- # Adjust start and end
- word['start'] = word['start'] + adjust_seconds
- word['end'] = word['end'] + adjust_seconds
-
- result.append(new_segment)
- return result
-
- def multiply_timestamps(self, timestamps: List[Dict[str, Any]], factor: float):
- result = []
-
- for entry in timestamps:
- start = entry['start']
- end = entry['end']
-
- result.append({
- 'start': start * factor,
- 'end': end * factor
- })
- return result
-
-
-class VadSileroTranscription(AbstractTranscription):
- def __init__(self, sampling_rate: int = 16000, cache: ModelCache = None):
- super().__init__(sampling_rate=sampling_rate)
- self.model = None
- self.cache = cache
- self._initialize_model()
-
- def _initialize_model(self):
- if (self.cache is not None):
- model_key = "VadSileroTranscription"
- self.model, self.get_speech_timestamps = self.cache.get(model_key, self._create_model)
-            print("Loaded Silero model from cache.")
- else:
- self.model, self.get_speech_timestamps = self._create_model()
-            print("Created Silero model")
-
- def _create_model(self):
- model, utils = torch.hub.load(repo_or_dir='snakers4/silero-vad', model='silero_vad')
-
- # Silero does not benefit from multi-threading
- torch.set_num_threads(1) # JIT
- (get_speech_timestamps, _, _, _, _) = utils
-
- return model, get_speech_timestamps
-
- def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, end_time: float):
- result = []
-
- print("Getting timestamps from audio file: {}, start: {}, duration: {}".format(audio, start_time, end_time))
- perf_start_time = time.perf_counter()
-
-        # Divide processing of the audio into chunks
- chunk_start = start_time
-
- while (chunk_start < end_time):
- chunk_duration = min(end_time - chunk_start, VAD_MAX_PROCESSING_CHUNK)
-
- print("Processing VAD in chunk from {} to {}".format(format_timestamp(chunk_start), format_timestamp(chunk_start + chunk_duration)))
- wav = self.get_audio_segment(audio, str(chunk_start), str(chunk_duration))
-
- sample_timestamps = self.get_speech_timestamps(wav, self.model, sampling_rate=self.sampling_rate, threshold=SPEECH_TRESHOLD)
- seconds_timestamps = self.multiply_timestamps(sample_timestamps, factor=1 / self.sampling_rate)
- adjusted = self.adjust_timestamp(seconds_timestamps, adjust_seconds=chunk_start, max_source_time=chunk_start + chunk_duration)
-
- #pprint(adjusted)
-
- result.extend(adjusted)
- chunk_start += chunk_duration
-
- perf_end_time = time.perf_counter()
- print("VAD processing took {} seconds".format(perf_end_time - perf_start_time))
-
- return result
-
- def __getstate__(self):
- # We only need the sampling rate
- return { 'sampling_rate': self.sampling_rate }
-
- def __setstate__(self, state):
- self.sampling_rate = state['sampling_rate']
- self.model = None
- # Use the global cache
- self.cache = GLOBAL_MODEL_CACHE
- self._initialize_model()
-
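To inspect the Silero VAD output on its own, without running Whisper, something like the following should work (the file name is a placeholder; the silero-vad model is downloaded via torch.hub on first use):

```python
# Placeholder file name; prints raw speech timestamps in seconds.
vad = VadSileroTranscription()
duration = get_audio_duration("audio.mp3")
speech = vad.get_transcribe_timestamps("audio.mp3", TranscriptionConfig(), 0, duration)
print(speech)  # e.g. [{'start': 1.3, 'end': 4.7}, ...]
```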
-# A very simple VAD that just marks every N seconds as speech
-class VadPeriodicTranscription(AbstractTranscription):
- def __init__(self, sampling_rate: int = 16000):
- super().__init__(sampling_rate=sampling_rate)
-
- def is_transcribe_timestamps_fast(self):
- # This is a very fast VAD - no need to parallelize it
- return True
-
- def get_transcribe_timestamps(self, audio: str, config: PeriodicTranscriptionConfig, start_time: float, end_time: float):
- result = []
-
- # Generate a timestamp every N seconds
- start_timestamp = start_time
-
- while (start_timestamp < end_time):
- end_timestamp = min(start_timestamp + config.periodic_duration, end_time)
- segment_duration = end_timestamp - start_timestamp
-
- # Minimum duration is 1 second
- if (segment_duration >= 1):
- result.append( { 'start': start_timestamp, 'end': end_timestamp } )
-
- start_timestamp = end_timestamp
-
- return result
-
-def get_audio_duration(file: str):
- return float(ffmpeg.probe(file)["format"]["duration"])
-
-def load_audio(file: str, sample_rate: int = 16000,
- start_time: str = None, duration: str = None):
- """
- Open an audio file and read as mono waveform, resampling as necessary
-
- Parameters
- ----------
- file: str
- The audio file to open
-
-    sample_rate: int
-        The sample rate to resample the audio to, if necessary
-
- start_time: str
- The start time, using the standard FFMPEG time duration syntax, or None to disable.
-
- duration: str
- The duration, using the standard FFMPEG time duration syntax, or None to disable.
-
- Returns
- -------
- A NumPy array containing the audio waveform, in float32 dtype.
- """
- try:
- inputArgs = {'threads': 0}
-
- if (start_time is not None):
- inputArgs['ss'] = start_time
- if (duration is not None):
- inputArgs['t'] = duration
-
- # This launches a subprocess to decode audio while down-mixing and resampling as necessary.
- # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed.
- out, _ = (
- ffmpeg.input(file, **inputArgs)
- .output("-", format="s16le", acodec="pcm_s16le", ac=1, ar=sample_rate)
- .run(cmd="ffmpeg", capture_stdout=True, capture_stderr=True)
- )
- except ffmpeg.Error as e:
- raise RuntimeError(f"Failed to load audio: {e.stderr.decode()}")
-
- return np.frombuffer(out, np.int16).flatten().astype(np.float32) / 32768.0
\ No newline at end of file
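`load_audio` accepts standard FFMPEG time syntax for the optional window; a quick hedged usage sketch (placeholder file name):

```python
# Requires the ffmpeg CLI and ffmpeg-python; "podcast.mp3" is a placeholder.
samples = load_audio("podcast.mp3", sample_rate=16000,
                     start_time="00:01:05", duration="30")
print(samples.dtype, samples.shape)  # float32 mono samples in [-1, 1], about 30 s * 16000
```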
diff --git a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/.github/SECURITY.md b/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/.github/SECURITY.md
deleted file mode 100644
index aa3e8409da6b525245454ad0360642cbaead5569..0000000000000000000000000000000000000000
--- a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/.github/SECURITY.md
+++ /dev/null
@@ -1,7 +0,0 @@
-# Security Policy
-
-We aim to make YOLOv5 🚀 as secure as possible! If you find potential vulnerabilities or have any concerns please let us know so we can investigate and take corrective action if needed.
-
-### Reporting a Vulnerability
-
-To report vulnerabilities please email us at hello@ultralytics.com or visit https://ultralytics.com/contact. Thank you!
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/GETTING_STARTED.md b/spaces/carlosalonso/Detection-video/carpeta_deteccion/GETTING_STARTED.md
deleted file mode 100644
index 404b0c8f467264d1adf61e8274e5f864e24018e8..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/GETTING_STARTED.md
+++ /dev/null
@@ -1,79 +0,0 @@
-## Getting Started with Detectron2
-
-This document provides a brief intro of the usage of builtin command-line tools in detectron2.
-
-For a tutorial that involves actual coding with the API,
-see our [Colab Notebook](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5)
-which covers how to run inference with an
-existing model, and how to train a builtin model on a custom dataset.
-
-
-### Inference Demo with Pre-trained Models
-
-1. Pick a model and its config file from
- [model zoo](MODEL_ZOO.md),
- for example, `mask_rcnn_R_50_FPN_3x.yaml`.
-2. We provide `demo.py` that is able to demo builtin configs. Run it with:
-```
-cd demo/
-python demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \
- --input input1.jpg input2.jpg \
- [--other-options]
- --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl
-```
-The configs are made for training, therefore we need to specify `MODEL.WEIGHTS` to a model from model zoo for evaluation.
-This command will run the inference and show visualizations in an OpenCV window.
-
-For details of the command line arguments, see `demo.py -h` or look at its source code
-to understand its behavior. Some common arguments are:
-* To run __on your webcam__, replace `--input files` with `--webcam`.
-* To run __on a video__, replace `--input files` with `--video-input video.mp4`.
-* To run __on cpu__, add `MODEL.DEVICE cpu` after `--opts`.
-* To save outputs to a directory (for images) or a file (for webcam or video), use `--output`.
-
-
-### Training & Evaluation in Command Line
-
-We provide two scripts, "tools/plain_train_net.py" and "tools/train_net.py",
-that are made to train all the configs provided in detectron2. You may want to
-use them as a reference to write your own training script.
-
-Compared to "train_net.py", "plain_train_net.py" supports fewer default
-features. It also includes less abstraction, and is therefore easier to extend with custom
-logic.
-
-To train a model with "train_net.py", first
-setup the corresponding datasets following
-[datasets/README.md](./datasets/README.md),
-then run:
-```
-cd tools/
-./train_net.py --num-gpus 8 \
- --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml
-```
-
-The configs are made for 8-GPU training.
-To train on 1 GPU, you may need to [change some parameters](https://arxiv.org/abs/1706.02677), e.g.:
-```
-./train_net.py \
- --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \
- --num-gpus 1 SOLVER.IMS_PER_BATCH 2 SOLVER.BASE_LR 0.0025
-```
-
-To evaluate a model's performance, use
-```
-./train_net.py \
- --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \
- --eval-only MODEL.WEIGHTS /path/to/checkpoint_file
-```
-For more options, see `./train_net.py -h`.
-
-### Use Detectron2 APIs in Your Code
-
-See our [Colab Notebook](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5)
-to learn how to use detectron2 APIs to:
-1. run inference with an existing model
-2. train a builtin model on a custom dataset
-
-See [detectron2/projects](https://github.com/facebookresearch/detectron2/tree/main/projects)
-for more ways to build your project on detectron2.
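The Colab notebook covers this in depth; as a minimal sketch, inference through the Python API (assuming detectron2 is installed and `input1.jpg` is a placeholder image path) looks roughly like:

```python
import cv2

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# Build a config from the model zoo and attach the matching pre-trained weights.
cfg = get_cfg()
cfg.merge_from_file(
    model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
)
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"
)
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # confidence threshold for kept detections

# DefaultPredictor handles preprocessing and single-image (BGR numpy) inference.
predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("input1.jpg"))
print(outputs["instances"].pred_classes)
print(outputs["instances"].pred_boxes)
```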
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/checkpoint/c2_model_loading.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/checkpoint/c2_model_loading.py
deleted file mode 100644
index 8c8d181bd7200bd3fd38446e743f8f16780d6e76..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/checkpoint/c2_model_loading.py
+++ /dev/null
@@ -1,407 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import copy
-import logging
-import re
-from typing import Dict, List
-import torch
-from tabulate import tabulate
-
-
-def convert_basic_c2_names(original_keys):
- """
- Apply some basic name conversion to names in C2 weights.
- It only deals with typical backbone models.
-
- Args:
- original_keys (list[str]):
- Returns:
- list[str]: The same number of strings matching those in original_keys.
- """
- layer_keys = copy.deepcopy(original_keys)
- layer_keys = [
- {"pred_b": "linear_b", "pred_w": "linear_w"}.get(k, k) for k in layer_keys
- ] # some hard-coded mappings
-
- layer_keys = [k.replace("_", ".") for k in layer_keys]
- layer_keys = [re.sub("\\.b$", ".bias", k) for k in layer_keys]
- layer_keys = [re.sub("\\.w$", ".weight", k) for k in layer_keys]
- # Uniform both bn and gn names to "norm"
- layer_keys = [re.sub("bn\\.s$", "norm.weight", k) for k in layer_keys]
- layer_keys = [re.sub("bn\\.bias$", "norm.bias", k) for k in layer_keys]
- layer_keys = [re.sub("bn\\.rm", "norm.running_mean", k) for k in layer_keys]
- layer_keys = [re.sub("bn\\.running.mean$", "norm.running_mean", k) for k in layer_keys]
- layer_keys = [re.sub("bn\\.riv$", "norm.running_var", k) for k in layer_keys]
- layer_keys = [re.sub("bn\\.running.var$", "norm.running_var", k) for k in layer_keys]
- layer_keys = [re.sub("bn\\.gamma$", "norm.weight", k) for k in layer_keys]
- layer_keys = [re.sub("bn\\.beta$", "norm.bias", k) for k in layer_keys]
- layer_keys = [re.sub("gn\\.s$", "norm.weight", k) for k in layer_keys]
- layer_keys = [re.sub("gn\\.bias$", "norm.bias", k) for k in layer_keys]
-
- # stem
- layer_keys = [re.sub("^res\\.conv1\\.norm\\.", "conv1.norm.", k) for k in layer_keys]
- # to avoid mis-matching with "conv1" in other components (e.g. detection head)
- layer_keys = [re.sub("^conv1\\.", "stem.conv1.", k) for k in layer_keys]
-
- # layer1-4 is used by torchvision, however we follow the C2 naming strategy (res2-5)
- # layer_keys = [re.sub("^res2.", "layer1.", k) for k in layer_keys]
- # layer_keys = [re.sub("^res3.", "layer2.", k) for k in layer_keys]
- # layer_keys = [re.sub("^res4.", "layer3.", k) for k in layer_keys]
- # layer_keys = [re.sub("^res5.", "layer4.", k) for k in layer_keys]
-
- # blocks
- layer_keys = [k.replace(".branch1.", ".shortcut.") for k in layer_keys]
- layer_keys = [k.replace(".branch2a.", ".conv1.") for k in layer_keys]
- layer_keys = [k.replace(".branch2b.", ".conv2.") for k in layer_keys]
- layer_keys = [k.replace(".branch2c.", ".conv3.") for k in layer_keys]
-
- # DensePose substitutions
- layer_keys = [re.sub("^body.conv.fcn", "body_conv_fcn", k) for k in layer_keys]
- layer_keys = [k.replace("AnnIndex.lowres", "ann_index_lowres") for k in layer_keys]
- layer_keys = [k.replace("Index.UV.lowres", "index_uv_lowres") for k in layer_keys]
- layer_keys = [k.replace("U.lowres", "u_lowres") for k in layer_keys]
- layer_keys = [k.replace("V.lowres", "v_lowres") for k in layer_keys]
- return layer_keys
-
-
-def convert_c2_detectron_names(weights):
- """
- Map Caffe2 Detectron weight names to Detectron2 names.
-
- Args:
- weights (dict): name -> tensor
-
- Returns:
- dict: detectron2 names -> tensor
- dict: detectron2 names -> C2 names
- """
- logger = logging.getLogger(__name__)
- logger.info("Renaming Caffe2 weights ......")
- original_keys = sorted(weights.keys())
- layer_keys = copy.deepcopy(original_keys)
-
- layer_keys = convert_basic_c2_names(layer_keys)
-
- # --------------------------------------------------------------------------
- # RPN hidden representation conv
- # --------------------------------------------------------------------------
- # FPN case
- # In the C2 model, the RPN hidden layer conv is defined for FPN level 2 and then
- # shared for all other levels, hence the appearance of "fpn2"
- layer_keys = [
- k.replace("conv.rpn.fpn2", "proposal_generator.rpn_head.conv") for k in layer_keys
- ]
- # Non-FPN case
- layer_keys = [k.replace("conv.rpn", "proposal_generator.rpn_head.conv") for k in layer_keys]
-
- # --------------------------------------------------------------------------
- # RPN box transformation conv
- # --------------------------------------------------------------------------
- # FPN case (see note above about "fpn2")
- layer_keys = [
- k.replace("rpn.bbox.pred.fpn2", "proposal_generator.rpn_head.anchor_deltas")
- for k in layer_keys
- ]
- layer_keys = [
- k.replace("rpn.cls.logits.fpn2", "proposal_generator.rpn_head.objectness_logits")
- for k in layer_keys
- ]
- # Non-FPN case
- layer_keys = [
- k.replace("rpn.bbox.pred", "proposal_generator.rpn_head.anchor_deltas") for k in layer_keys
- ]
- layer_keys = [
- k.replace("rpn.cls.logits", "proposal_generator.rpn_head.objectness_logits")
- for k in layer_keys
- ]
-
- # --------------------------------------------------------------------------
- # Fast R-CNN box head
- # --------------------------------------------------------------------------
- layer_keys = [re.sub("^bbox\\.pred", "bbox_pred", k) for k in layer_keys]
- layer_keys = [re.sub("^cls\\.score", "cls_score", k) for k in layer_keys]
- layer_keys = [re.sub("^fc6\\.", "box_head.fc1.", k) for k in layer_keys]
- layer_keys = [re.sub("^fc7\\.", "box_head.fc2.", k) for k in layer_keys]
- # 4conv1fc head tensor names: head_conv1_w, head_conv1_gn_s
- layer_keys = [re.sub("^head\\.conv", "box_head.conv", k) for k in layer_keys]
-
- # --------------------------------------------------------------------------
- # FPN lateral and output convolutions
- # --------------------------------------------------------------------------
- def fpn_map(name):
- """
- Look for keys with the following patterns:
- 1) Starts with "fpn.inner."
- Example: "fpn.inner.res2.2.sum.lateral.weight"
- Meaning: These are lateral pathway convolutions
- 2) Starts with "fpn.res"
- Example: "fpn.res2.2.sum.weight"
- Meaning: These are FPN output convolutions
- """
- splits = name.split(".")
- norm = ".norm" if "norm" in splits else ""
- if name.startswith("fpn.inner."):
- # splits example: ['fpn', 'inner', 'res2', '2', 'sum', 'lateral', 'weight']
- stage = int(splits[2][len("res") :])
- return "fpn_lateral{}{}.{}".format(stage, norm, splits[-1])
- elif name.startswith("fpn.res"):
- # splits example: ['fpn', 'res2', '2', 'sum', 'weight']
- stage = int(splits[1][len("res") :])
- return "fpn_output{}{}.{}".format(stage, norm, splits[-1])
- return name
-
- layer_keys = [fpn_map(k) for k in layer_keys]
-
- # --------------------------------------------------------------------------
- # Mask R-CNN mask head
- # --------------------------------------------------------------------------
- # roi_heads.StandardROIHeads case
- layer_keys = [k.replace(".[mask].fcn", "mask_head.mask_fcn") for k in layer_keys]
- layer_keys = [re.sub("^\\.mask\\.fcn", "mask_head.mask_fcn", k) for k in layer_keys]
- layer_keys = [k.replace("mask.fcn.logits", "mask_head.predictor") for k in layer_keys]
- # roi_heads.Res5ROIHeads case
- layer_keys = [k.replace("conv5.mask", "mask_head.deconv") for k in layer_keys]
-
- # --------------------------------------------------------------------------
- # Keypoint R-CNN head
- # --------------------------------------------------------------------------
- # interestingly, the keypoint head convs have blob names that are simply "conv_fcnX"
- layer_keys = [k.replace("conv.fcn", "roi_heads.keypoint_head.conv_fcn") for k in layer_keys]
- layer_keys = [
- k.replace("kps.score.lowres", "roi_heads.keypoint_head.score_lowres") for k in layer_keys
- ]
- layer_keys = [k.replace("kps.score.", "roi_heads.keypoint_head.score.") for k in layer_keys]
-
- # --------------------------------------------------------------------------
- # Done with replacements
- # --------------------------------------------------------------------------
- assert len(set(layer_keys)) == len(layer_keys)
- assert len(original_keys) == len(layer_keys)
-
- new_weights = {}
- new_keys_to_original_keys = {}
- for orig, renamed in zip(original_keys, layer_keys):
- new_keys_to_original_keys[renamed] = orig
- if renamed.startswith("bbox_pred.") or renamed.startswith("mask_head.predictor."):
- # remove the meaningless prediction weight for background class
- new_start_idx = 4 if renamed.startswith("bbox_pred.") else 1
- new_weights[renamed] = weights[orig][new_start_idx:]
- logger.info(
- "Remove prediction weight for background class in {}. The shape changes from "
- "{} to {}.".format(
- renamed, tuple(weights[orig].shape), tuple(new_weights[renamed].shape)
- )
- )
- elif renamed.startswith("cls_score."):
- # move weights of bg class from original index 0 to last index
- logger.info(
- "Move classification weights for background class in {} from index 0 to "
- "index {}.".format(renamed, weights[orig].shape[0] - 1)
- )
- new_weights[renamed] = torch.cat([weights[orig][1:], weights[orig][:1]])
- else:
- new_weights[renamed] = weights[orig]
-
- return new_weights, new_keys_to_original_keys
-
-
-# Note the current matching is not symmetric.
-# it assumes model_state_dict will have longer names.
-def align_and_update_state_dicts(model_state_dict, ckpt_state_dict, c2_conversion=True):
- """
-    Match names between the two state dicts, and return a new ckpt_state_dict with names
- converted to match model_state_dict with heuristics. The returned dict can be later
- loaded with fvcore checkpointer.
- If `c2_conversion==True`, `ckpt_state_dict` is assumed to be a Caffe2
- model and will be renamed at first.
-
- Strategy: suppose that the models that we will create will have prefixes appended
- to each of its keys, for example due to an extra level of nesting that the original
- pre-trained weights from ImageNet won't contain. For example, model.state_dict()
- might return backbone[0].body.res2.conv1.weight, while the pre-trained model contains
- res2.conv1.weight. We thus want to match both parameters together.
- For that, we look for each model weight, look among all loaded keys if there is one
- that is a suffix of the current weight name, and use it if that's the case.
- If multiple matches exist, take the one with longest size
- of the corresponding name. For example, for the same model as before, the pretrained
- weight file can contain both res2.conv1.weight, as well as conv1.weight. In this case,
- we want to match backbone[0].body.conv1.weight to conv1.weight, and
- backbone[0].body.res2.conv1.weight to res2.conv1.weight.
- """
- model_keys = sorted(model_state_dict.keys())
- if c2_conversion:
- ckpt_state_dict, original_keys = convert_c2_detectron_names(ckpt_state_dict)
- # original_keys: the name in the original dict (before renaming)
- else:
- original_keys = {x: x for x in ckpt_state_dict.keys()}
- ckpt_keys = sorted(ckpt_state_dict.keys())
-
- def match(a, b):
- # Matched ckpt_key should be a complete (starts with '.') suffix.
- # For example, roi_heads.mesh_head.whatever_conv1 does not match conv1,
- # but matches whatever_conv1 or mesh_head.whatever_conv1.
- return a == b or a.endswith("." + b)
-
- # get a matrix of string matches, where each (i, j) entry correspond to the size of the
- # ckpt_key string, if it matches
- match_matrix = [len(j) if match(i, j) else 0 for i in model_keys for j in ckpt_keys]
- match_matrix = torch.as_tensor(match_matrix).view(len(model_keys), len(ckpt_keys))
- # use the matched one with longest size in case of multiple matches
- max_match_size, idxs = match_matrix.max(1)
- # remove indices that correspond to no-match
- idxs[max_match_size == 0] = -1
-
- logger = logging.getLogger(__name__)
- # matched_pairs (matched checkpoint key --> matched model key)
- matched_keys = {}
- result_state_dict = {}
- for idx_model, idx_ckpt in enumerate(idxs.tolist()):
- if idx_ckpt == -1:
- continue
- key_model = model_keys[idx_model]
- key_ckpt = ckpt_keys[idx_ckpt]
- value_ckpt = ckpt_state_dict[key_ckpt]
- shape_in_model = model_state_dict[key_model].shape
-
- if shape_in_model != value_ckpt.shape:
- logger.warning(
- "Shape of {} in checkpoint is {}, while shape of {} in model is {}.".format(
- key_ckpt, value_ckpt.shape, key_model, shape_in_model
- )
- )
- logger.warning(
- "{} will not be loaded. Please double check and see if this is desired.".format(
- key_ckpt
- )
- )
- continue
-
- assert key_model not in result_state_dict
- result_state_dict[key_model] = value_ckpt
- if key_ckpt in matched_keys: # already added to matched_keys
- logger.error(
- "Ambiguity found for {} in checkpoint!"
- "It matches at least two keys in the model ({} and {}).".format(
- key_ckpt, key_model, matched_keys[key_ckpt]
- )
- )
- raise ValueError("Cannot match one checkpoint key to multiple keys in the model.")
-
- matched_keys[key_ckpt] = key_model
-
- # logging:
- matched_model_keys = sorted(matched_keys.values())
- if len(matched_model_keys) == 0:
- logger.warning("No weights in checkpoint matched with model.")
- return ckpt_state_dict
- common_prefix = _longest_common_prefix(matched_model_keys)
- rev_matched_keys = {v: k for k, v in matched_keys.items()}
- original_keys = {k: original_keys[rev_matched_keys[k]] for k in matched_model_keys}
-
- model_key_groups = _group_keys_by_module(matched_model_keys, original_keys)
- table = []
- memo = set()
- for key_model in matched_model_keys:
- if key_model in memo:
- continue
- if key_model in model_key_groups:
- group = model_key_groups[key_model]
- memo |= set(group)
- shapes = [tuple(model_state_dict[k].shape) for k in group]
- table.append(
- (
- _longest_common_prefix([k[len(common_prefix) :] for k in group]) + "*",
- _group_str([original_keys[k] for k in group]),
- " ".join([str(x).replace(" ", "") for x in shapes]),
- )
- )
- else:
- key_checkpoint = original_keys[key_model]
- shape = str(tuple(model_state_dict[key_model].shape))
- table.append((key_model[len(common_prefix) :], key_checkpoint, shape))
- table_str = tabulate(
- table, tablefmt="pipe", headers=["Names in Model", "Names in Checkpoint", "Shapes"]
- )
- logger.info(
- "Following weights matched with "
- + (f"submodule {common_prefix[:-1]}" if common_prefix else "model")
- + ":\n"
- + table_str
- )
-
- unmatched_ckpt_keys = [k for k in ckpt_keys if k not in set(matched_keys.keys())]
- for k in unmatched_ckpt_keys:
- result_state_dict[k] = ckpt_state_dict[k]
- return result_state_dict
-
-
-def _group_keys_by_module(keys: List[str], original_names: Dict[str, str]):
- """
- Params in the same submodule are grouped together.
-
- Args:
- keys: names of all parameters
- original_names: mapping from parameter name to their name in the checkpoint
-
- Returns:
- dict[name -> all other names in the same group]
- """
-
- def _submodule_name(key):
- pos = key.rfind(".")
- if pos < 0:
- return None
- prefix = key[: pos + 1]
- return prefix
-
- all_submodules = [_submodule_name(k) for k in keys]
- all_submodules = [x for x in all_submodules if x]
- all_submodules = sorted(all_submodules, key=len)
-
- ret = {}
- for prefix in all_submodules:
- group = [k for k in keys if k.startswith(prefix)]
- if len(group) <= 1:
- continue
- original_name_lcp = _longest_common_prefix_str([original_names[k] for k in group])
- if len(original_name_lcp) == 0:
- # don't group weights if original names don't share prefix
- continue
-
- for k in group:
- if k in ret:
- continue
- ret[k] = group
- return ret
-
-
-def _longest_common_prefix(names: List[str]) -> str:
- """
- ["abc.zfg", "abc.zef"] -> "abc."
- """
- names = [n.split(".") for n in names]
- m1, m2 = min(names), max(names)
- ret = [a for a, b in zip(m1, m2) if a == b]
- ret = ".".join(ret) + "." if len(ret) else ""
- return ret
-
-
-def _longest_common_prefix_str(names: List[str]) -> str:
- m1, m2 = min(names), max(names)
- lcp = [a for a, b in zip(m1, m2) if a == b]
- lcp = "".join(lcp)
- return lcp
-
-
-def _group_str(names: List[str]) -> str:
- """
- Turn "common1", "common2", "common3" into "common{1,2,3}"
- """
- lcp = _longest_common_prefix_str(names)
- rest = [x[len(lcp) :] for x in names]
- rest = "{" + ",".join(rest) + "}"
- ret = lcp + rest
-
- # add some simplification for BN specifically
- ret = ret.replace("bn_{beta,running_mean,running_var,gamma}", "bn_*")
- ret = ret.replace("bn_beta,bn_running_mean,bn_running_var,bn_gamma", "bn_*")
- return ret
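As a rough illustration of the suffix-matching heuristic implemented above (assuming `align_and_update_state_dicts` is imported from `detectron2.checkpoint.c2_model_loading`, and using toy tensors in place of a real checkpoint):

```python
import torch

from detectron2.checkpoint.c2_model_loading import align_and_update_state_dicts

# Model keys carry an extra nesting prefix; checkpoint keys are bare suffixes.
model_sd = {
    "backbone.bottom_up.stem.conv1.weight": torch.zeros(64, 3, 7, 7),
    "backbone.bottom_up.res2.0.conv1.weight": torch.zeros(64, 64, 1, 1),
}
ckpt_sd = {
    "stem.conv1.weight": torch.ones(64, 3, 7, 7),
    "res2.0.conv1.weight": torch.ones(64, 64, 1, 1),
}

# c2_conversion=False skips the Caffe2 renaming step, so only suffix matching runs.
result = align_and_update_state_dicts(model_sd, ckpt_sd, c2_conversion=False)
print(sorted(result))  # keys are now spelled the way the model expects
```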
diff --git a/spaces/cat630/ChuanhuChatGPT/utils.py b/spaces/cat630/ChuanhuChatGPT/utils.py
deleted file mode 100644
index 2225759bfd19a0b8913608a611bb9cc03c31ebcd..0000000000000000000000000000000000000000
--- a/spaces/cat630/ChuanhuChatGPT/utils.py
+++ /dev/null
@@ -1,319 +0,0 @@
-"""Contains all of the components that can be used with Gradio Interface / Blocks.
-Along with the docs for each component, you can find the names of example demos that use
-each component. These demos are located in the `demo` directory."""
-
-from __future__ import annotations
-from typing import TYPE_CHECKING, Any, Callable, Dict, List, Tuple, Type
-import json
-import gradio as gr
-# import openai
-import os
-import traceback
-import requests
-# import markdown
-import csv
-import mdtex2html
-from pypinyin import lazy_pinyin
-
-if TYPE_CHECKING:
- from typing import TypedDict
-
- class DataframeData(TypedDict):
- headers: List[str]
- data: List[List[str | int | bool]]
-
-initial_prompt = "You are a helpful assistant."
-API_URL = "https://api.openai.com/v1/chat/completions"
-HISTORY_DIR = "history"
-TEMPLATES_DIR = "templates"
-
-def postprocess(
- self, y: List[Tuple[str | None, str | None]]
- ) -> List[Tuple[str | None, str | None]]:
- """
- Parameters:
- y: List of tuples representing the message and response pairs. Each message and response should be a string, which may be in Markdown format.
- Returns:
- List of tuples representing the message and response. Each message and response will be a string of HTML.
- """
- if y is None:
- return []
- for i, (message, response) in enumerate(y):
- y[i] = (
- # None if message is None else markdown.markdown(message),
- # None if response is None else markdown.markdown(response),
- None if message is None else mdtex2html.convert((message)),
- None if response is None else mdtex2html.convert(response),
- )
- return y
-
-def parse_text(text):
- lines = text.split("\n")
- lines = [line for line in lines if line != ""]
- count = 0
- firstline = False
- for i, line in enumerate(lines):
- if "```" in line:
- count += 1
- items = line.split('`')
- if count % 2 == 1:
-                lines[i] = f'<pre><code class="language-{items[-1]}">'
-            else:
-                lines[i] = f'<br></code></pre>'
-        else:
-            if i > 0:
-                if count % 2 == 1:
-                    line = line.replace("`", "\`")
-                    line = line.replace("<", "&lt;")
-                    line = line.replace(">", "&gt;")
-                    line = line.replace(" ", "&nbsp;")
-                    line = line.replace("*", "&ast;")
-                    line = line.replace("_", "&lowbar;")
-                    line = line.replace("-", "&#45;")
-                    line = line.replace(".", "&#46;")
-                    line = line.replace("!", "&#33;")
-                    line = line.replace("(", "&#40;")
-                    line = line.replace(")", "&#41;")
-                    line = line.replace("$", "&#36;")
-                lines[i] = "<br>"+line
- text = "".join(lines)
- return text
-
-def predict(inputs, top_p, temperature, openai_api_key, chatbot=[], history=[], system_prompt=initial_prompt, retry=False, summary=False, retry_on_crash = False, stream = True): # repetition_penalty, top_k
-
- if "猫娘" in inputs:
- chatbot.append((inputs, '喵~主人请点击这个链接查看刺激内容哦~'))
- yield chatbot, history, "status: 喵~"
- print(f"########## 有人上钩了: {inputs} ##########")
- return
- print(f"====== 收到问题: {inputs} =======")
-
- if retry_on_crash:
- retry = True
-
- headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {openai_api_key}"
- }
-
- chat_counter = len(history) // 2
-
- print(f"chat_counter - {chat_counter}")
-
- messages = []
- if chat_counter:
- for index in range(0, 2*chat_counter, 2):
- temp1 = {}
- temp1["role"] = "user"
- temp1["content"] = history[index]
- temp2 = {}
- temp2["role"] = "assistant"
- temp2["content"] = history[index+1]
- if temp1["content"] != "":
- if temp2["content"] != "" or retry:
- messages.append(temp1)
- messages.append(temp2)
- else:
- messages[-1]['content'] = temp2['content']
- if retry and chat_counter:
- if retry_on_crash:
- messages = messages[-6:]
- messages.pop()
- elif summary:
- history = [*[i["content"] for i in messages[-2:]], "我们刚刚聊了什么?"]
- messages.append(compose_user(
- "请帮我总结一下上述对话的内容,实现减少字数的同时,保证对话的质量。在总结中不要加入这一句话。"))
- else:
- temp3 = {}
- temp3["role"] = "user"
- temp3["content"] = inputs
- messages.append(temp3)
- chat_counter += 1
- messages = [compose_system(system_prompt), *messages]
- # messages
- payload = {
- "model": "gpt-3.5-turbo",
- "messages": messages, # [{"role": "user", "content": f"{inputs}"}],
- "temperature": temperature, # 1.0,
- "top_p": top_p, # 1.0,
- "n": 1,
- "stream": stream,
- "presence_penalty": 0,
- "frequency_penalty": 0,
- }
-
- if not summary:
- history.append(inputs)
- else:
- print("精简中...")
-
- print(f"payload: {payload}")
- # make a POST request to the API endpoint using the requests.post method, passing in stream=True
- try:
- response = requests.post(API_URL, headers=headers, json=payload, stream=True)
- except:
- history.append("")
- chatbot.append((inputs, ""))
-        yield chatbot, history, "获取请求失败,请检查网络连接。"
- return
-
- token_counter = 0
- partial_words = ""
-
- counter = 0
- if stream:
- chatbot.append((parse_text(history[-1]), ""))
- for chunk in response.iter_lines():
- if counter == 0:
- counter += 1
- continue
- counter += 1
- # check whether each line is non-empty
- if chunk:
- # decode each line as response data is in bytes
- try:
- if len(json.loads(chunk.decode()[6:])['choices'][0]["delta"]) == 0:
- chunkjson = json.loads(chunk.decode()[6:])
- status_text = f"id: {chunkjson['id']}, finish_reason: {chunkjson['choices'][0]['finish_reason']}"
- yield chatbot, history, status_text
- break
- except Exception as e:
- if not retry_on_crash:
- print("正在尝试使用缩短的context重新生成……")
- chatbot.pop()
- history.append("")
- yield next(predict(inputs, top_p, temperature, openai_api_key, chatbot, history, system_prompt, retry, summary=False, retry_on_crash=True, stream=False))
- else:
- msg = "☹️发生了错误:生成失败,请检查网络"
- print(msg)
-                        history.append("")
-                        chatbot.append((inputs, msg))
- yield chatbot, history, "status: ERROR"
- break
- chunkjson = json.loads(chunk.decode()[6:])
- status_text = f"id: {chunkjson['id']}, finish_reason: {chunkjson['choices'][0]['finish_reason']}"
- partial_words = partial_words + \
- json.loads(chunk.decode()[6:])[
- 'choices'][0]["delta"]["content"]
- if token_counter == 0:
- history.append(" " + partial_words)
- else:
- history[-1] = partial_words
- chatbot[-1] = (parse_text(history[-2]), parse_text(history[-1]))
- token_counter += 1
- yield chatbot, history, status_text
- else:
- try:
- responsejson = json.loads(response.text)
- content = responsejson["choices"][0]["message"]["content"]
- history.append(content)
- chatbot.append((parse_text(history[-2]), parse_text(content)))
- status_text = "精简完成"
- except:
- chatbot.append((parse_text(history[-1]), "☹️发生了错误,请检查网络连接或者稍后再试。"))
- status_text = "status: ERROR"
- yield chatbot, history, status_text
-
-
-
-def delete_last_conversation(chatbot, history):
- try:
- if "☹️发生了错误" in chatbot[-1][1]:
- chatbot.pop()
- print(history)
- return chatbot, history
- history.pop()
- history.pop()
- chatbot.pop()
- print(history)
- return chatbot, history
- except:
- return chatbot, history
-
-def save_chat_history(filename, system, history, chatbot):
- if filename == "":
- return
- if not filename.endswith(".json"):
- filename += ".json"
- os.makedirs(HISTORY_DIR, exist_ok=True)
- json_s = {"system": system, "history": history, "chatbot": chatbot}
- print(json_s)
- with open(os.path.join(HISTORY_DIR, filename), "w") as f:
- json.dump(json_s, f)
-
-
-def load_chat_history(filename, system, history, chatbot):
- try:
- print("Loading from history...")
- with open(os.path.join(HISTORY_DIR, filename), "r") as f:
- json_s = json.load(f)
- print(json_s)
- return filename, json_s["system"], json_s["history"], json_s["chatbot"]
- except FileNotFoundError:
- print("File not found.")
- return filename, system, history, chatbot
-
-def sorted_by_pinyin(list):
- return sorted(list, key=lambda char: lazy_pinyin(char)[0][0])
-
-def get_file_names(dir, plain=False, filetypes=[".json"]):
- # find all json files in the current directory and return their names
- files = []
- try:
- for type in filetypes:
- files += [f for f in os.listdir(dir) if f.endswith(type)]
- except FileNotFoundError:
- files = []
- files = sorted_by_pinyin(files)
- if files == []:
- files = [""]
- if plain:
- return files
- else:
- return gr.Dropdown.update(choices=files)
-
-def get_history_names(plain=False):
- return get_file_names(HISTORY_DIR, plain)
-
-def load_template(filename, mode=0):
- lines = []
- print("Loading template...")
- if filename.endswith(".json"):
- with open(os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8") as f:
- lines = json.load(f)
- lines = [[i["act"], i["prompt"]] for i in lines]
- else:
- with open(os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8") as csvfile:
- reader = csv.reader(csvfile)
- lines = list(reader)
- lines = lines[1:]
- if mode == 1:
- return sorted_by_pinyin([row[0] for row in lines])
- elif mode == 2:
- return {row[0]:row[1] for row in lines}
- else:
- choices = sorted_by_pinyin([row[0] for row in lines])
- return {row[0]:row[1] for row in lines}, gr.Dropdown.update(choices=choices, value=choices[0])
-
-def get_template_names(plain=False):
- return get_file_names(TEMPLATES_DIR, plain, filetypes=[".csv", "json"])
-
-def get_template_content(templates, selection, original_system_prompt):
- try:
- return templates[selection]
- except:
- return original_system_prompt
-
-def reset_state():
- return [], []
-
-def compose_system(system_prompt):
- return {"role": "system", "content": system_prompt}
-
-
-def compose_user(user_input):
- return {"role": "user", "content": user_input}
-
-
-def reset_textbox():
- return gr.update(value='')
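For reference, a stripped-down sketch of the streaming request/parse loop that `predict` builds on (assuming a valid OpenAI API key in `OPENAI_API_KEY` and network access; the retry, summary and history logic above is omitted):

```python
import json
import os

import requests

API_URL = "https://api.openai.com/v1/chat/completions"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
}
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Say hello"}],
    "stream": True,
}

with requests.post(API_URL, headers=headers, json=payload, stream=True) as response:
    for chunk in response.iter_lines():
        if not chunk:
            continue
        data = chunk.decode()
        # Server-sent events arrive as 'data: {...}' lines, ending with 'data: [DONE]'.
        if not data.startswith("data: ") or data == "data: [DONE]":
            continue
        delta = json.loads(data[6:])["choices"][0]["delta"]
        # The final chunk carries an empty delta plus a finish_reason instead of content.
        if "content" in delta:
            print(delta["content"], end="", flush=True)
```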
diff --git a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/text/japanese.py b/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/text/japanese.py
deleted file mode 100644
index 375e4d50872d5c68ee57ca17470a2ca425425eba..0000000000000000000000000000000000000000
--- a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/text/japanese.py
+++ /dev/null
@@ -1,153 +0,0 @@
-import re
-from unidecode import unidecode
-import pyopenjtalk
-
-
-# Regular expression matching Japanese without punctuation marks:
-_japanese_characters = re.compile(
- r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# Regular expression matching non-Japanese characters or punctuation marks:
-_japanese_marks = re.compile(
- r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# List of (symbol, Japanese) pairs for marks:
-_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('%', 'パーセント')
-]]
-
-# List of (romaji, ipa) pairs for marks:
-_romaji_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ts', 'ʦ'),
- ('u', 'ɯ'),
- ('j', 'ʥ'),
- ('y', 'j'),
- ('ni', 'n^i'),
- ('nj', 'n^'),
- ('hi', 'çi'),
- ('hj', 'ç'),
- ('f', 'ɸ'),
- ('I', 'i*'),
- ('U', 'ɯ*'),
- ('r', 'ɾ')
-]]
-
-# List of (romaji, ipa2) pairs for marks:
-_romaji_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('u', 'ɯ'),
- ('ʧ', 'tʃ'),
- ('j', 'dʑ'),
- ('y', 'j'),
- ('ni', 'n^i'),
- ('nj', 'n^'),
- ('hi', 'çi'),
- ('hj', 'ç'),
- ('f', 'ɸ'),
- ('I', 'i*'),
- ('U', 'ɯ*'),
- ('r', 'ɾ')
-]]
-
-# List of (consonant, sokuon) pairs:
-_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [
- (r'Q([↑↓]*[kg])', r'k#\1'),
- (r'Q([↑↓]*[tdjʧ])', r't#\1'),
- (r'Q([↑↓]*[sʃ])', r's\1'),
- (r'Q([↑↓]*[pb])', r'p#\1')
-]]
-
-# List of (consonant, hatsuon) pairs:
-_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [
- (r'N([↑↓]*[pbm])', r'm\1'),
- (r'N([↑↓]*[ʧʥj])', r'n^\1'),
- (r'N([↑↓]*[tdn])', r'n\1'),
- (r'N([↑↓]*[kg])', r'ŋ\1')
-]]
-
-
-def symbols_to_japanese(text):
- for regex, replacement in _symbols_to_japanese:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_romaji_with_accent(text):
- '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html'''
- text = symbols_to_japanese(text)
- sentences = re.split(_japanese_marks, text)
- marks = re.findall(_japanese_marks, text)
- text = ''
- for i, sentence in enumerate(sentences):
- if re.match(_japanese_characters, sentence):
- if text != '':
- text += ' '
- labels = pyopenjtalk.extract_fullcontext(sentence)
- for n, label in enumerate(labels):
- phoneme = re.search(r'\-([^\+]*)\+', label).group(1)
- if phoneme not in ['sil', 'pau']:
- text += phoneme.replace('ch', 'ʧ').replace('sh',
- 'ʃ').replace('cl', 'Q')
- else:
- continue
- # n_moras = int(re.search(r'/F:(\d+)_', label).group(1))
- a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1))
- a2 = int(re.search(r"\+(\d+)\+", label).group(1))
- a3 = int(re.search(r"\+(\d+)/", label).group(1))
- if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']:
- a2_next = -1
- else:
- a2_next = int(
- re.search(r"\+(\d+)\+", labels[n + 1]).group(1))
- # Accent phrase boundary
- if a3 == 1 and a2_next == 1:
- text += ' '
- # Falling
- elif a1 == 0 and a2_next == a2 + 1:
- text += '↓'
- # Rising
- elif a2 == 1 and a2_next == 2:
- text += '↑'
- if i < len(marks):
- text += unidecode(marks[i]).replace(' ', '')
- return text
-
-
-def get_real_sokuon(text):
- for regex, replacement in _real_sokuon:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def get_real_hatsuon(text):
- for regex, replacement in _real_hatsuon:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_ipa(text):
- text = japanese_to_romaji_with_accent(text).replace('...', '…')
- text = re.sub(
- r'([aiueo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text)
- text = get_real_sokuon(text)
- text = get_real_hatsuon(text)
- for regex, replacement in _romaji_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_ipa2(text):
- text = japanese_to_romaji_with_accent(text).replace('...', '…')
- text = get_real_sokuon(text)
- text = get_real_hatsuon(text)
- for regex, replacement in _romaji_to_ipa2:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_ipa3(text):
- text = japanese_to_ipa2(text).replace('n^', 'ȵ').replace(
- 'ʃ', 'ɕ').replace('*', '\u0325').replace('#', '\u031a')
- text = re.sub(
- r'([aiɯeo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text)
- text = re.sub(r'((?:^|\s)(?:ts|tɕ|[kpt]))', r'\1ʰ', text)
- return text
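A quick usage sketch for the converters above (assuming `pyopenjtalk` and `unidecode` are installed; the exact output depends on the pyopenjtalk dictionary version):

```python
# Placeholder input sentence; any Japanese text works.
sample = "こんにちは、世界。"

print(japanese_to_romaji_with_accent(sample))  # romaji with ↑/↓ pitch-accent marks
print(japanese_to_ipa(sample))                 # IPA with sokuon/hatsuon resolved
print(japanese_to_ipa2(sample))                # alternative IPA mapping
```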
diff --git a/spaces/ccolas/TastyPiano/src/music/pipeline/audio2piano_solo_prob.py b/spaces/ccolas/TastyPiano/src/music/pipeline/audio2piano_solo_prob.py
deleted file mode 100644
index 7fd9f2854d83baa8648c0ace88e65d66bd2f0f98..0000000000000000000000000000000000000000
--- a/spaces/ccolas/TastyPiano/src/music/pipeline/audio2piano_solo_prob.py
+++ /dev/null
@@ -1,47 +0,0 @@
-import numpy as np
-import librosa
-import sys
-sys.path.append('../../../data/')
-from src.music.utilities.processing_models import piano_detection_model
-from src.music.config import CHKPT_PATH_PIANO_EVAL
-
-PIANO_SOLO_DETECTOR = piano_detection_model.PianoSoloDetector(CHKPT_PATH_PIANO_EVAL)
-exclude_playlist_folders = ['synth_audio_recorded', 'from_url']
-
-def clean_start_and_end_blanks(probs):
- if len(probs) > 20:
- # clean up to 10s in each direction
- n_zeros_start = 0
- for i in range(10):
- if probs[i] <= 0.001:
- n_zeros_start += 1
- else:
- break
- n_zeros_end = 0
- for i in range(10):
- if probs[-(i + 1)] <= 0.001:
- n_zeros_end += 1
- else:
- break
- if n_zeros_end == 0:
- return probs[n_zeros_start:]
- else:
- return probs[n_zeros_start:-n_zeros_end]
- else:
- return probs
-
-def calculate_piano_solo_prob(audio_path, verbose=False):
- """Calculate the piano solo probability of all downloaded mp3s, and append
- the probability to the meta csv file. Code from https://github.com/bytedance/GiantMIDI-Piano
- """
- try:
- error_msg = 'Error in audio loading?'
- (audio, _) = librosa.core.load(audio_path, sr=piano_detection_model.SR, mono=True)
- error_msg += ' Nope. Error in solo prediction?'
- probs = PIANO_SOLO_DETECTOR.predict(audio)
- # probs = clean_start_and_end_blanks(probs) # remove blanks at start and end (<=10s each way). If not piano, the rest of the song will be enough to tell.
- piano_solo_prob = np.mean(probs)
- error_msg += ' Nope. '
- return piano_solo_prob, ''
- except:
- return None, error_msg + 'Yes.'
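A minimal usage sketch (the path is a placeholder, and the module-level `PIANO_SOLO_DETECTOR` must have loaded its checkpoint successfully):

```python
prob, error_msg = calculate_piano_solo_prob("my_recording.mp3")
if prob is None:
    print(f"Detection failed: {error_msg}")
else:
    # Mean frame-level probability over the whole clip, in [0, 1].
    print(f"Estimated piano-solo probability: {prob:.3f}")
```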
diff --git a/spaces/chaocai/superbot/stream_output.py b/spaces/chaocai/superbot/stream_output.py
deleted file mode 100644
index 2950a3b4018199862576e1490cc400328c5d3d19..0000000000000000000000000000000000000000
--- a/spaces/chaocai/superbot/stream_output.py
+++ /dev/null
@@ -1,111 +0,0 @@
-"""Callback Handler streams to stdout on new llm token."""
-from abc import abstractmethod
-import sys
-from typing import Any, Dict, List, Optional
-
-from langchain.callbacks.base import BaseCallbackHandler
-from langchain.schema import AgentAction, AgentFinish, LLMResult
-from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
-DEFAULT_ANSWER_PREFIX_TOKENS = ["Final", "Answer", ":"]
-INTER_MID_PREFIX_TOKENS = ["Thought", ":" ]
-
-class MaxbotStreamCallbackHandler(BaseCallbackHandler):
- def append_to_last_tokens(self, token: str) -> None:
- self.last_tokens.append(token)
- self.last_tokens_stripped.append(token.strip())
- if len(self.last_tokens) > len(self.answer_prefix_tokens):
- self.last_tokens.pop(0)
- self.last_tokens_stripped.pop(0)
-
- def check_if_answer_reached(self) -> bool:
- if self.strip_tokens:
- return self.last_tokens_stripped == self.answer_prefix_tokens_stripped or\
- self.last_tokens_stripped[1:] == self.intermid_prefix_tokens_stripped
- else:
- return self.last_tokens == self.answer_prefix_tokens or\
- self.last_tokens[1:] == self.intermid_prefix_tokens
-
- def __init__(
- self,
- *,
- answer_prefix_tokens: Optional[List[str]] = None,
- strip_tokens: bool = True,
- stream_prefix: bool = False
- ) -> None:
- """Instantiate FinalStreamingStdOutCallbackHandler.
-
- Args:
- answer_prefix_tokens: Token sequence that prefixes the answer.
- Default is ["Final", "Answer", ":"]
- strip_tokens: Ignore white spaces and new lines when comparing
- answer_prefix_tokens to last tokens? (to determine if answer has been
- reached)
- stream_prefix: Should answer prefix itself also be streamed?
- """
- super().__init__()
- if answer_prefix_tokens is None:
- self.answer_prefix_tokens = DEFAULT_ANSWER_PREFIX_TOKENS
- self.intermid_prefix_tokens = INTER_MID_PREFIX_TOKENS
- else:
- self.answer_prefix_tokens = answer_prefix_tokens
- self.intermid_prefix_tokens = INTER_MID_PREFIX_TOKENS
- if strip_tokens:
- self.answer_prefix_tokens_stripped = [
- token.strip() for token in self.answer_prefix_tokens
- ]
- self.intermid_prefix_tokens_stripped = [
- token.strip() for token in self.intermid_prefix_tokens
- ]
- else:
- self.answer_prefix_tokens_stripped = self.answer_prefix_tokens
- self.intermid_prefix_tokens_stripped = self.intermid_prefix_tokens
- self.last_tokens = [""] * len(self.answer_prefix_tokens)
- self.last_tokens_stripped = [""] * len(self.answer_prefix_tokens)
- self.strip_tokens = strip_tokens
- self.stream_prefix = stream_prefix
- self.answer_reached = False
-
- def on_llm_start(
- self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
- ) -> None:
- """Run when LLM starts running."""
- self.answer_reached = False
-
- def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
- """Run on new LLM token. Only available when streaming is enabled."""
-
- # Remember the last n tokens, where n = len(answer_prefix_tokens)
- self.append_to_last_tokens(token)
-
- # Check if the last n tokens match the answer_prefix_tokens list ...
- if self.check_if_answer_reached():
- self.answer_reached = True
- if self.stream_prefix:
- for t in self.last_tokens:
- self.handle_incoming_token(t)
- return
-
- # ... if yes, then print tokens from now on
- if self.answer_reached:
- self.handle_incoming_token(token)
-
- def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
- """Run when LLM ends running."""
- if self.answer_reached:
- self.handle_converastion_end()
-
- def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:
- """Run on agent end."""
- print("\nagent end.")
-
- def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
- """Run when chain ends running."""
- print("\nchain end.")
-
- @abstractmethod
- def handle_incoming_token(self, token: str) -> None:
- pass
-
- @abstractmethod
- def handle_converastion_end(self) -> None:
- pass
\ No newline at end of file
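A concrete subclass is needed to make the handler do anything with the filtered tokens; a minimal sketch that simply forwards them to stdout (hook names follow the abstract methods declared above) might look like:

```python
import sys


class StdoutStreamHandler(MaxbotStreamCallbackHandler):
    """Prints only the tokens that arrive after 'Final Answer:' or 'Thought:'."""

    def handle_incoming_token(self, token: str) -> None:
        sys.stdout.write(token)
        sys.stdout.flush()

    def handle_converastion_end(self) -> None:  # spelling matches the base-class hook
        sys.stdout.write("\n")


# Usage sketch: attach it to a streaming-capable LLM, e.g.
# llm = ChatOpenAI(streaming=True, callbacks=[StdoutStreamHandler()])
```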
diff --git a/spaces/chendl/compositional_test/transformers/examples/tensorflow/token-classification/README.md b/spaces/chendl/compositional_test/transformers/examples/tensorflow/token-classification/README.md
deleted file mode 100644
index 0e5ec84528f8f20631e878cb8b10d4fba0377f08..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/tensorflow/token-classification/README.md
+++ /dev/null
@@ -1,47 +0,0 @@
-
-
-# Token classification
-
-Fine-tuning the library models for token classification tasks such as Named Entity Recognition (NER), Parts-of-speech
-tagging (POS) or phrase extraction (CHUNKS). The main script `run_ner.py` leverages the [🤗 Datasets](https://github.com/huggingface/datasets) library. You can easily
-customize it to your needs if you need extra processing on your datasets.
-
-It will either run on a dataset hosted on our [hub](https://huggingface.co/datasets) or on your own text files for
-training and validation; you might just need to add some tweaks in the data preprocessing.
-
-The following example fine-tunes BERT on CoNLL-2003:
-
-```bash
-python run_ner.py \
- --model_name_or_path bert-base-uncased \
- --dataset_name conll2003 \
- --output_dir /tmp/test-ner
-```
-
-To run on your own training and validation files, use the following command:
-
-```bash
-python run_ner.py \
- --model_name_or_path bert-base-uncased \
- --train_file path_to_train_file \
- --validation_file path_to_validation_file \
- --output_dir /tmp/test-ner
-```
-
-**Note:** This script only works with models that have a fast tokenizer (backed by the [🤗 Tokenizers](https://github.com/huggingface/tokenizers) library) as it
-uses special features of those tokenizers. You can check if your favorite model has a fast tokenizer in
-[this table](https://huggingface.co/transformers/index.html#supported-frameworks).
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/TiffImagePlugin.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/TiffImagePlugin.py
deleted file mode 100644
index d5148828506b36c72bac626b2032ebf129a62678..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/TiffImagePlugin.py
+++ /dev/null
@@ -1,2163 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# TIFF file handling
-#
-# TIFF is a flexible, if somewhat aged, image file format originally
-# defined by Aldus. Although TIFF supports a wide variety of pixel
-# layouts and compression methods, the name doesn't really stand for
-# "thousands of incompatible file formats," it just feels that way.
-#
-# To read TIFF data from a stream, the stream must be seekable. For
-# progressive decoding, make sure to use TIFF files where the tag
-# directory is placed first in the file.
-#
-# History:
-# 1995-09-01 fl Created
-# 1996-05-04 fl Handle JPEGTABLES tag
-# 1996-05-18 fl Fixed COLORMAP support
-# 1997-01-05 fl Fixed PREDICTOR support
-# 1997-08-27 fl Added support for rational tags (from Perry Stoll)
-# 1998-01-10 fl Fixed seek/tell (from Jan Blom)
-# 1998-07-15 fl Use private names for internal variables
-# 1999-06-13 fl Rewritten for PIL 1.0 (1.0)
-# 2000-10-11 fl Additional fixes for Python 2.0 (1.1)
-# 2001-04-17 fl Fixed rewind support (seek to frame 0) (1.2)
-# 2001-05-12 fl Added write support for more tags (from Greg Couch) (1.3)
-# 2001-12-18 fl Added workaround for broken Matrox library
-# 2002-01-18 fl Don't mess up if photometric tag is missing (D. Alan Stewart)
-# 2003-05-19 fl Check FILLORDER tag
-# 2003-09-26 fl Added RGBa support
-# 2004-02-24 fl Added DPI support; fixed rational write support
-# 2005-02-07 fl Added workaround for broken Corel Draw 10 files
-# 2006-01-09 fl Added support for float/double tags (from Russell Nelson)
-#
-# Copyright (c) 1997-2006 by Secret Labs AB. All rights reserved.
-# Copyright (c) 1995-1997 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-import io
-import itertools
-import logging
-import math
-import os
-import struct
-import warnings
-from collections.abc import MutableMapping
-from fractions import Fraction
-from numbers import Number, Rational
-
-from . import ExifTags, Image, ImageFile, ImageOps, ImagePalette, TiffTags
-from ._binary import i16be as i16
-from ._binary import i32be as i32
-from ._binary import o8
-from .TiffTags import TYPES
-
-logger = logging.getLogger(__name__)
-
-# Set these to true to force use of libtiff for reading or writing.
-READ_LIBTIFF = False
-WRITE_LIBTIFF = False
-IFD_LEGACY_API = True
-STRIP_SIZE = 65536
-
-II = b"II" # little-endian (Intel style)
-MM = b"MM" # big-endian (Motorola style)
-
-#
-# --------------------------------------------------------------------
-# Read TIFF files
-
-# a few tag names, just to make the code below a bit more readable
-IMAGEWIDTH = 256
-IMAGELENGTH = 257
-BITSPERSAMPLE = 258
-COMPRESSION = 259
-PHOTOMETRIC_INTERPRETATION = 262
-FILLORDER = 266
-IMAGEDESCRIPTION = 270
-STRIPOFFSETS = 273
-SAMPLESPERPIXEL = 277
-ROWSPERSTRIP = 278
-STRIPBYTECOUNTS = 279
-X_RESOLUTION = 282
-Y_RESOLUTION = 283
-PLANAR_CONFIGURATION = 284
-RESOLUTION_UNIT = 296
-TRANSFERFUNCTION = 301
-SOFTWARE = 305
-DATE_TIME = 306
-ARTIST = 315
-PREDICTOR = 317
-COLORMAP = 320
-TILEWIDTH = 322
-TILELENGTH = 323
-TILEOFFSETS = 324
-TILEBYTECOUNTS = 325
-SUBIFD = 330
-EXTRASAMPLES = 338
-SAMPLEFORMAT = 339
-JPEGTABLES = 347
-YCBCRSUBSAMPLING = 530
-REFERENCEBLACKWHITE = 532
-COPYRIGHT = 33432
-IPTC_NAA_CHUNK = 33723 # newsphoto properties
-PHOTOSHOP_CHUNK = 34377 # photoshop properties
-ICCPROFILE = 34675
-EXIFIFD = 34665
-XMP = 700
-JPEGQUALITY = 65537 # pseudo-tag by libtiff
-
-# https://github.com/imagej/ImageJA/blob/master/src/main/java/ij/io/TiffDecoder.java
-IMAGEJ_META_DATA_BYTE_COUNTS = 50838
-IMAGEJ_META_DATA = 50839
-
-COMPRESSION_INFO = {
- # Compression => pil compression name
- 1: "raw",
- 2: "tiff_ccitt",
- 3: "group3",
- 4: "group4",
- 5: "tiff_lzw",
- 6: "tiff_jpeg", # obsolete
- 7: "jpeg",
- 8: "tiff_adobe_deflate",
- 32771: "tiff_raw_16", # 16-bit padding
- 32773: "packbits",
- 32809: "tiff_thunderscan",
- 32946: "tiff_deflate",
- 34676: "tiff_sgilog",
- 34677: "tiff_sgilog24",
- 34925: "lzma",
- 50000: "zstd",
- 50001: "webp",
-}
-
-COMPRESSION_INFO_REV = {v: k for k, v in COMPRESSION_INFO.items()}
-
-OPEN_INFO = {
- # (ByteOrder, PhotoInterpretation, SampleFormat, FillOrder, BitsPerSample,
- # ExtraSamples) => mode, rawmode
- (II, 0, (1,), 1, (1,), ()): ("1", "1;I"),
- (MM, 0, (1,), 1, (1,), ()): ("1", "1;I"),
- (II, 0, (1,), 2, (1,), ()): ("1", "1;IR"),
- (MM, 0, (1,), 2, (1,), ()): ("1", "1;IR"),
- (II, 1, (1,), 1, (1,), ()): ("1", "1"),
- (MM, 1, (1,), 1, (1,), ()): ("1", "1"),
- (II, 1, (1,), 2, (1,), ()): ("1", "1;R"),
- (MM, 1, (1,), 2, (1,), ()): ("1", "1;R"),
- (II, 0, (1,), 1, (2,), ()): ("L", "L;2I"),
- (MM, 0, (1,), 1, (2,), ()): ("L", "L;2I"),
- (II, 0, (1,), 2, (2,), ()): ("L", "L;2IR"),
- (MM, 0, (1,), 2, (2,), ()): ("L", "L;2IR"),
- (II, 1, (1,), 1, (2,), ()): ("L", "L;2"),
- (MM, 1, (1,), 1, (2,), ()): ("L", "L;2"),
- (II, 1, (1,), 2, (2,), ()): ("L", "L;2R"),
- (MM, 1, (1,), 2, (2,), ()): ("L", "L;2R"),
- (II, 0, (1,), 1, (4,), ()): ("L", "L;4I"),
- (MM, 0, (1,), 1, (4,), ()): ("L", "L;4I"),
- (II, 0, (1,), 2, (4,), ()): ("L", "L;4IR"),
- (MM, 0, (1,), 2, (4,), ()): ("L", "L;4IR"),
- (II, 1, (1,), 1, (4,), ()): ("L", "L;4"),
- (MM, 1, (1,), 1, (4,), ()): ("L", "L;4"),
- (II, 1, (1,), 2, (4,), ()): ("L", "L;4R"),
- (MM, 1, (1,), 2, (4,), ()): ("L", "L;4R"),
- (II, 0, (1,), 1, (8,), ()): ("L", "L;I"),
- (MM, 0, (1,), 1, (8,), ()): ("L", "L;I"),
- (II, 0, (1,), 2, (8,), ()): ("L", "L;IR"),
- (MM, 0, (1,), 2, (8,), ()): ("L", "L;IR"),
- (II, 1, (1,), 1, (8,), ()): ("L", "L"),
- (MM, 1, (1,), 1, (8,), ()): ("L", "L"),
- (II, 1, (2,), 1, (8,), ()): ("L", "L"),
- (MM, 1, (2,), 1, (8,), ()): ("L", "L"),
- (II, 1, (1,), 2, (8,), ()): ("L", "L;R"),
- (MM, 1, (1,), 2, (8,), ()): ("L", "L;R"),
- (II, 1, (1,), 1, (12,), ()): ("I;16", "I;12"),
- (II, 0, (1,), 1, (16,), ()): ("I;16", "I;16"),
- (II, 1, (1,), 1, (16,), ()): ("I;16", "I;16"),
- (MM, 1, (1,), 1, (16,), ()): ("I;16B", "I;16B"),
- (II, 1, (1,), 2, (16,), ()): ("I;16", "I;16R"),
- (II, 1, (2,), 1, (16,), ()): ("I", "I;16S"),
- (MM, 1, (2,), 1, (16,), ()): ("I", "I;16BS"),
- (II, 0, (3,), 1, (32,), ()): ("F", "F;32F"),
- (MM, 0, (3,), 1, (32,), ()): ("F", "F;32BF"),
- (II, 1, (1,), 1, (32,), ()): ("I", "I;32N"),
- (II, 1, (2,), 1, (32,), ()): ("I", "I;32S"),
- (MM, 1, (2,), 1, (32,), ()): ("I", "I;32BS"),
- (II, 1, (3,), 1, (32,), ()): ("F", "F;32F"),
- (MM, 1, (3,), 1, (32,), ()): ("F", "F;32BF"),
- (II, 1, (1,), 1, (8, 8), (2,)): ("LA", "LA"),
- (MM, 1, (1,), 1, (8, 8), (2,)): ("LA", "LA"),
- (II, 2, (1,), 1, (8, 8, 8), ()): ("RGB", "RGB"),
- (MM, 2, (1,), 1, (8, 8, 8), ()): ("RGB", "RGB"),
- (II, 2, (1,), 2, (8, 8, 8), ()): ("RGB", "RGB;R"),
- (MM, 2, (1,), 2, (8, 8, 8), ()): ("RGB", "RGB;R"),
- (II, 2, (1,), 1, (8, 8, 8, 8), ()): ("RGBA", "RGBA"), # missing ExtraSamples
- (MM, 2, (1,), 1, (8, 8, 8, 8), ()): ("RGBA", "RGBA"), # missing ExtraSamples
- (II, 2, (1,), 1, (8, 8, 8, 8), (0,)): ("RGBX", "RGBX"),
- (MM, 2, (1,), 1, (8, 8, 8, 8), (0,)): ("RGBX", "RGBX"),
- (II, 2, (1,), 1, (8, 8, 8, 8, 8), (0, 0)): ("RGBX", "RGBXX"),
- (MM, 2, (1,), 1, (8, 8, 8, 8, 8), (0, 0)): ("RGBX", "RGBXX"),
- (II, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (0, 0, 0)): ("RGBX", "RGBXXX"),
- (MM, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (0, 0, 0)): ("RGBX", "RGBXXX"),
- (II, 2, (1,), 1, (8, 8, 8, 8), (1,)): ("RGBA", "RGBa"),
- (MM, 2, (1,), 1, (8, 8, 8, 8), (1,)): ("RGBA", "RGBa"),
- (II, 2, (1,), 1, (8, 8, 8, 8, 8), (1, 0)): ("RGBA", "RGBaX"),
- (MM, 2, (1,), 1, (8, 8, 8, 8, 8), (1, 0)): ("RGBA", "RGBaX"),
- (II, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (1, 0, 0)): ("RGBA", "RGBaXX"),
- (MM, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (1, 0, 0)): ("RGBA", "RGBaXX"),
- (II, 2, (1,), 1, (8, 8, 8, 8), (2,)): ("RGBA", "RGBA"),
- (MM, 2, (1,), 1, (8, 8, 8, 8), (2,)): ("RGBA", "RGBA"),
- (II, 2, (1,), 1, (8, 8, 8, 8, 8), (2, 0)): ("RGBA", "RGBAX"),
- (MM, 2, (1,), 1, (8, 8, 8, 8, 8), (2, 0)): ("RGBA", "RGBAX"),
- (II, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (2, 0, 0)): ("RGBA", "RGBAXX"),
- (MM, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (2, 0, 0)): ("RGBA", "RGBAXX"),
- (II, 2, (1,), 1, (8, 8, 8, 8), (999,)): ("RGBA", "RGBA"), # Corel Draw 10
- (MM, 2, (1,), 1, (8, 8, 8, 8), (999,)): ("RGBA", "RGBA"), # Corel Draw 10
- (II, 2, (1,), 1, (16, 16, 16), ()): ("RGB", "RGB;16L"),
- (MM, 2, (1,), 1, (16, 16, 16), ()): ("RGB", "RGB;16B"),
- (II, 2, (1,), 1, (16, 16, 16, 16), ()): ("RGBA", "RGBA;16L"),
- (MM, 2, (1,), 1, (16, 16, 16, 16), ()): ("RGBA", "RGBA;16B"),
- (II, 2, (1,), 1, (16, 16, 16, 16), (0,)): ("RGBX", "RGBX;16L"),
- (MM, 2, (1,), 1, (16, 16, 16, 16), (0,)): ("RGBX", "RGBX;16B"),
- (II, 2, (1,), 1, (16, 16, 16, 16), (1,)): ("RGBA", "RGBa;16L"),
- (MM, 2, (1,), 1, (16, 16, 16, 16), (1,)): ("RGBA", "RGBa;16B"),
- (II, 2, (1,), 1, (16, 16, 16, 16), (2,)): ("RGBA", "RGBA;16L"),
- (MM, 2, (1,), 1, (16, 16, 16, 16), (2,)): ("RGBA", "RGBA;16B"),
- (II, 3, (1,), 1, (1,), ()): ("P", "P;1"),
- (MM, 3, (1,), 1, (1,), ()): ("P", "P;1"),
- (II, 3, (1,), 2, (1,), ()): ("P", "P;1R"),
- (MM, 3, (1,), 2, (1,), ()): ("P", "P;1R"),
- (II, 3, (1,), 1, (2,), ()): ("P", "P;2"),
- (MM, 3, (1,), 1, (2,), ()): ("P", "P;2"),
- (II, 3, (1,), 2, (2,), ()): ("P", "P;2R"),
- (MM, 3, (1,), 2, (2,), ()): ("P", "P;2R"),
- (II, 3, (1,), 1, (4,), ()): ("P", "P;4"),
- (MM, 3, (1,), 1, (4,), ()): ("P", "P;4"),
- (II, 3, (1,), 2, (4,), ()): ("P", "P;4R"),
- (MM, 3, (1,), 2, (4,), ()): ("P", "P;4R"),
- (II, 3, (1,), 1, (8,), ()): ("P", "P"),
- (MM, 3, (1,), 1, (8,), ()): ("P", "P"),
- (II, 3, (1,), 1, (8, 8), (2,)): ("PA", "PA"),
- (MM, 3, (1,), 1, (8, 8), (2,)): ("PA", "PA"),
- (II, 3, (1,), 2, (8,), ()): ("P", "P;R"),
- (MM, 3, (1,), 2, (8,), ()): ("P", "P;R"),
- (II, 5, (1,), 1, (8, 8, 8, 8), ()): ("CMYK", "CMYK"),
- (MM, 5, (1,), 1, (8, 8, 8, 8), ()): ("CMYK", "CMYK"),
- (II, 5, (1,), 1, (8, 8, 8, 8, 8), (0,)): ("CMYK", "CMYKX"),
- (MM, 5, (1,), 1, (8, 8, 8, 8, 8), (0,)): ("CMYK", "CMYKX"),
- (II, 5, (1,), 1, (8, 8, 8, 8, 8, 8), (0, 0)): ("CMYK", "CMYKXX"),
- (MM, 5, (1,), 1, (8, 8, 8, 8, 8, 8), (0, 0)): ("CMYK", "CMYKXX"),
- (II, 5, (1,), 1, (16, 16, 16, 16), ()): ("CMYK", "CMYK;16L"),
- # JPEG compressed images handled by LibTiff and auto-converted to RGBX
- # Minimal Baseline TIFF requires YCbCr images to have 3 SamplesPerPixel
- (II, 6, (1,), 1, (8, 8, 8), ()): ("RGB", "RGBX"),
- (MM, 6, (1,), 1, (8, 8, 8), ()): ("RGB", "RGBX"),
- (II, 8, (1,), 1, (8, 8, 8), ()): ("LAB", "LAB"),
- (MM, 8, (1,), 1, (8, 8, 8), ()): ("LAB", "LAB"),
-}
-
-MAX_SAMPLESPERPIXEL = max(len(key_tp[4]) for key_tp in OPEN_INFO)
-
-PREFIXES = [
- b"MM\x00\x2A", # Valid TIFF header with big-endian byte order
- b"II\x2A\x00", # Valid TIFF header with little-endian byte order
- b"MM\x2A\x00", # Invalid TIFF header, assume big-endian
- b"II\x00\x2A", # Invalid TIFF header, assume little-endian
- b"MM\x00\x2B", # BigTIFF with big-endian byte order
- b"II\x2B\x00", # BigTIFF with little-endian byte order
-]
-
-
-def _accept(prefix):
- return prefix[:4] in PREFIXES
-
-
-def _limit_rational(val, max_val):
- inv = abs(val) > 1
- n_d = IFDRational(1 / val if inv else val).limit_rational(max_val)
- return n_d[::-1] if inv else n_d
-
-
-def _limit_signed_rational(val, max_val, min_val):
- frac = Fraction(val)
- n_d = frac.numerator, frac.denominator
-
- if min(n_d) < min_val:
- n_d = _limit_rational(val, abs(min_val))
-
- if max(n_d) > max_val:
- val = Fraction(*n_d)
- n_d = _limit_rational(val, max_val)
-
- return n_d
-
-
-##
-# Wrapper for TIFF IFDs.
-
-_load_dispatch = {}
-_write_dispatch = {}
-
-
-class IFDRational(Rational):
- """Implements a rational class where 0/0 is a legal value to match
- the in the wild use of exif rationals.
-
- e.g., DigitalZoomRatio - 0.00/0.00 indicates that no digital zoom was used
- """
-
- """ If the denominator is 0, store this as a float('nan'), otherwise store
- as a fractions.Fraction(). Delegate as appropriate
-
- """
-
- __slots__ = ("_numerator", "_denominator", "_val")
-
- def __init__(self, value, denominator=1):
- """
- :param value: either an integer numerator, a
- float/rational/other number, or an IFDRational
- :param denominator: Optional integer denominator
- """
- if isinstance(value, IFDRational):
- self._numerator = value.numerator
- self._denominator = value.denominator
- self._val = value._val
- return
-
- if isinstance(value, Fraction):
- self._numerator = value.numerator
- self._denominator = value.denominator
- else:
- self._numerator = value
- self._denominator = denominator
-
- if denominator == 0:
- self._val = float("nan")
- elif denominator == 1:
- self._val = Fraction(value)
- else:
- self._val = Fraction(value, denominator)
-
- @property
- def numerator(self):
- return self._numerator
-
- @property
- def denominator(self):
- return self._denominator
-
- def limit_rational(self, max_denominator):
- """
-
- :param max_denominator: Integer, the maximum denominator value
- :returns: Tuple of (numerator, denominator)
- """
-
- if self.denominator == 0:
- return self.numerator, self.denominator
-
- f = self._val.limit_denominator(max_denominator)
- return f.numerator, f.denominator
-
- def __repr__(self):
- return str(float(self._val))
-
- def __hash__(self):
- return self._val.__hash__()
-
- def __eq__(self, other):
- val = self._val
- if isinstance(other, IFDRational):
- other = other._val
- if isinstance(other, float):
- val = float(val)
- return val == other
-
- def __getstate__(self):
- return [self._val, self._numerator, self._denominator]
-
- def __setstate__(self, state):
- IFDRational.__init__(self, 0)
- _val, _numerator, _denominator = state
- self._val = _val
- self._numerator = _numerator
- self._denominator = _denominator
-
- def _delegate(op):
- def delegate(self, *args):
- return getattr(self._val, op)(*args)
-
- return delegate
-
- """ a = ['add','radd', 'sub', 'rsub', 'mul', 'rmul',
- 'truediv', 'rtruediv', 'floordiv', 'rfloordiv',
- 'mod','rmod', 'pow','rpow', 'pos', 'neg',
- 'abs', 'trunc', 'lt', 'gt', 'le', 'ge', 'bool',
- 'ceil', 'floor', 'round']
- print("\n".join("__%s__ = _delegate('__%s__')" % (s,s) for s in a))
- """
-
- __add__ = _delegate("__add__")
- __radd__ = _delegate("__radd__")
- __sub__ = _delegate("__sub__")
- __rsub__ = _delegate("__rsub__")
- __mul__ = _delegate("__mul__")
- __rmul__ = _delegate("__rmul__")
- __truediv__ = _delegate("__truediv__")
- __rtruediv__ = _delegate("__rtruediv__")
- __floordiv__ = _delegate("__floordiv__")
- __rfloordiv__ = _delegate("__rfloordiv__")
- __mod__ = _delegate("__mod__")
- __rmod__ = _delegate("__rmod__")
- __pow__ = _delegate("__pow__")
- __rpow__ = _delegate("__rpow__")
- __pos__ = _delegate("__pos__")
- __neg__ = _delegate("__neg__")
- __abs__ = _delegate("__abs__")
- __trunc__ = _delegate("__trunc__")
- __lt__ = _delegate("__lt__")
- __gt__ = _delegate("__gt__")
- __le__ = _delegate("__le__")
- __ge__ = _delegate("__ge__")
- __bool__ = _delegate("__bool__")
- __ceil__ = _delegate("__ceil__")
- __floor__ = _delegate("__floor__")
- __round__ = _delegate("__round__")
- # Python >= 3.11
- if hasattr(Fraction, "__int__"):
- __int__ = _delegate("__int__")
-
-
-class ImageFileDirectory_v2(MutableMapping):
- """This class represents a TIFF tag directory. To speed things up, we
- don't decode tags unless they're asked for.
-
- Exposes a dictionary interface of the tags in the directory::
-
- ifd = ImageFileDirectory_v2()
- ifd[key] = 'Some Data'
- ifd.tagtype[key] = TiffTags.ASCII
- print(ifd[key])
- 'Some Data'
-
- Individual values are returned as the strings or numbers, sequences are
- returned as tuples of the values.
-
- The tiff metadata type of each item is stored in a dictionary of
- tag types in
- :attr:`~PIL.TiffImagePlugin.ImageFileDirectory_v2.tagtype`. The types
- are read from a tiff file, guessed from the type added, or added
- manually.
-
- Data Structures:
-
- * ``self.tagtype = {}``
-
- * Key: numerical TIFF tag number
- * Value: integer corresponding to the data type from
- :py:data:`.TiffTags.TYPES`
-
- .. versionadded:: 3.0.0
-
- 'Internal' data structures:
-
- * ``self._tags_v2 = {}``
-
- * Key: numerical TIFF tag number
- * Value: decoded data, as tuple for multiple values
-
- * ``self._tagdata = {}``
-
- * Key: numerical TIFF tag number
- * Value: undecoded byte string from file
-
- * ``self._tags_v1 = {}``
-
- * Key: numerical TIFF tag number
- * Value: decoded data in the v1 format
-
- Tags will be found in the private attributes ``self._tagdata``, and in
- ``self._tags_v2`` once decoded.
-
- ``self.legacy_api`` is a value for internal use, and shouldn't be changed
- from outside code. In cooperation with
- :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v1`, if ``legacy_api``
- is true, then decoded tags will be populated into both ``_tags_v1`` and
- ``_tags_v2``. ``_tags_v2`` will be used if this IFD is used in the TIFF
- save routine. Tags should be read from ``_tags_v1`` if
- ``legacy_api == true``.
-
- """
-
- def __init__(self, ifh=b"II\052\0\0\0\0\0", prefix=None, group=None):
- """Initialize an ImageFileDirectory.
-
- To construct an ImageFileDirectory from a real file, pass the 8-byte
- magic header to the constructor. To only set the endianness, pass it
- as the 'prefix' keyword argument.
-
- :param ifh: One of the accepted magic headers (cf. PREFIXES); also sets
- endianness.
- :param prefix: Override the endianness of the file.
- """
- if not _accept(ifh):
- msg = f"not a TIFF file (header {repr(ifh)} not valid)"
- raise SyntaxError(msg)
- self._prefix = prefix if prefix is not None else ifh[:2]
- if self._prefix == MM:
- self._endian = ">"
- elif self._prefix == II:
- self._endian = "<"
- else:
- msg = "not a TIFF IFD"
- raise SyntaxError(msg)
- self._bigtiff = ifh[2] == 43
- self.group = group
- self.tagtype = {}
- """ Dictionary of tag types """
- self.reset()
- (self.next,) = (
- self._unpack("Q", ifh[8:]) if self._bigtiff else self._unpack("L", ifh[4:])
- )
- self._legacy_api = False
-
- prefix = property(lambda self: self._prefix)
- offset = property(lambda self: self._offset)
- legacy_api = property(lambda self: self._legacy_api)
-
- @legacy_api.setter
- def legacy_api(self, value):
- msg = "Not allowing setting of legacy api"
- raise Exception(msg)
-
- def reset(self):
- self._tags_v1 = {} # will remain empty if legacy_api is false
- self._tags_v2 = {} # main tag storage
- self._tagdata = {}
- self.tagtype = {} # added 2008-06-05 by Florian Hoech
- self._next = None
- self._offset = None
-
- def __str__(self):
- return str(dict(self))
-
- def named(self):
- """
- :returns: dict of name|key: value
-
- Returns the complete tag dictionary, with named tags where possible.
- """
- return {
- TiffTags.lookup(code, self.group).name: value
- for code, value in self.items()
- }
-
- def __len__(self):
- return len(set(self._tagdata) | set(self._tags_v2))
-
- def __getitem__(self, tag):
- if tag not in self._tags_v2: # unpack on the fly
- data = self._tagdata[tag]
- typ = self.tagtype[tag]
- size, handler = self._load_dispatch[typ]
- self[tag] = handler(self, data, self.legacy_api) # check type
- val = self._tags_v2[tag]
- if self.legacy_api and not isinstance(val, (tuple, bytes)):
- val = (val,)
- return val
-
- def __contains__(self, tag):
- return tag in self._tags_v2 or tag in self._tagdata
-
- def __setitem__(self, tag, value):
- self._setitem(tag, value, self.legacy_api)
-
- def _setitem(self, tag, value, legacy_api):
- basetypes = (Number, bytes, str)
-
- info = TiffTags.lookup(tag, self.group)
- values = [value] if isinstance(value, basetypes) else value
-
- if tag not in self.tagtype:
- if info.type:
- self.tagtype[tag] = info.type
- else:
- self.tagtype[tag] = TiffTags.UNDEFINED
- if all(isinstance(v, IFDRational) for v in values):
- self.tagtype[tag] = (
- TiffTags.RATIONAL
- if all(v >= 0 for v in values)
- else TiffTags.SIGNED_RATIONAL
- )
- elif all(isinstance(v, int) for v in values):
- if all(0 <= v < 2**16 for v in values):
- self.tagtype[tag] = TiffTags.SHORT
- elif all(-(2**15) < v < 2**15 for v in values):
- self.tagtype[tag] = TiffTags.SIGNED_SHORT
- else:
- self.tagtype[tag] = (
- TiffTags.LONG
- if all(v >= 0 for v in values)
- else TiffTags.SIGNED_LONG
- )
- elif all(isinstance(v, float) for v in values):
- self.tagtype[tag] = TiffTags.DOUBLE
- elif all(isinstance(v, str) for v in values):
- self.tagtype[tag] = TiffTags.ASCII
- elif all(isinstance(v, bytes) for v in values):
- self.tagtype[tag] = TiffTags.BYTE
-
- if self.tagtype[tag] == TiffTags.UNDEFINED:
- values = [
- v.encode("ascii", "replace") if isinstance(v, str) else v
- for v in values
- ]
- elif self.tagtype[tag] == TiffTags.RATIONAL:
- values = [float(v) if isinstance(v, int) else v for v in values]
-
- is_ifd = self.tagtype[tag] == TiffTags.LONG and isinstance(values, dict)
- if not is_ifd:
- values = tuple(info.cvt_enum(value) for value in values)
-
- dest = self._tags_v1 if legacy_api else self._tags_v2
-
- # Three branches:
- # Spec'd length == 1, Actual length 1, store as element
- # Spec'd length == 1, Actual > 1, Warn and truncate. Formerly barfed.
- # No Spec, Actual length 1, Formerly (<4.2) returned a 1 element tuple.
- # Don't mess with the legacy api, since it's frozen.
- if not is_ifd and (
- (info.length == 1)
- or self.tagtype[tag] == TiffTags.BYTE
- or (info.length is None and len(values) == 1 and not legacy_api)
- ):
- # Don't mess with the legacy api, since it's frozen.
- if legacy_api and self.tagtype[tag] in [
- TiffTags.RATIONAL,
- TiffTags.SIGNED_RATIONAL,
- ]: # rationals
- values = (values,)
- try:
- (dest[tag],) = values
- except ValueError:
- # We've got a builtin tag with 1 expected entry
- warnings.warn(
- f"Metadata Warning, tag {tag} had too many entries: "
- f"{len(values)}, expected 1"
- )
- dest[tag] = values[0]
-
- else:
- # Spec'd length > 1 or undefined
- # Unspec'd, and length > 1
- dest[tag] = values
-
- def __delitem__(self, tag):
- self._tags_v2.pop(tag, None)
- self._tags_v1.pop(tag, None)
- self._tagdata.pop(tag, None)
-
- def __iter__(self):
- return iter(set(self._tagdata) | set(self._tags_v2))
-
- def _unpack(self, fmt, data):
- return struct.unpack(self._endian + fmt, data)
-
- def _pack(self, fmt, *values):
- return struct.pack(self._endian + fmt, *values)
-
- def _register_loader(idx, size):
- def decorator(func):
- from .TiffTags import TYPES
-
- if func.__name__.startswith("load_"):
- TYPES[idx] = func.__name__[5:].replace("_", " ")
- _load_dispatch[idx] = size, func # noqa: F821
- return func
-
- return decorator
-
- def _register_writer(idx):
- def decorator(func):
- _write_dispatch[idx] = func # noqa: F821
- return func
-
- return decorator
-
- def _register_basic(idx_fmt_name):
- from .TiffTags import TYPES
-
- idx, fmt, name = idx_fmt_name
- TYPES[idx] = name
- size = struct.calcsize("=" + fmt)
- _load_dispatch[idx] = ( # noqa: F821
- size,
- lambda self, data, legacy_api=True: (
- self._unpack(f"{len(data) // size}{fmt}", data)
- ),
- )
- _write_dispatch[idx] = lambda self, *values: ( # noqa: F821
- b"".join(self._pack(fmt, value) for value in values)
- )
-
- list(
- map(
- _register_basic,
- [
- (TiffTags.SHORT, "H", "short"),
- (TiffTags.LONG, "L", "long"),
- (TiffTags.SIGNED_BYTE, "b", "signed byte"),
- (TiffTags.SIGNED_SHORT, "h", "signed short"),
- (TiffTags.SIGNED_LONG, "l", "signed long"),
- (TiffTags.FLOAT, "f", "float"),
- (TiffTags.DOUBLE, "d", "double"),
- (TiffTags.IFD, "L", "long"),
- (TiffTags.LONG8, "Q", "long8"),
- ],
- )
- )
-
- @_register_loader(1, 1) # Basic type, except for the legacy API.
- def load_byte(self, data, legacy_api=True):
- return data
-
- @_register_writer(1) # Basic type, except for the legacy API.
- def write_byte(self, data):
- if isinstance(data, IFDRational):
- data = int(data)
- if isinstance(data, int):
- data = bytes((data,))
- return data
-
- @_register_loader(2, 1)
- def load_string(self, data, legacy_api=True):
- if data.endswith(b"\0"):
- data = data[:-1]
- return data.decode("latin-1", "replace")
-
- @_register_writer(2)
- def write_string(self, value):
- # remerge of https://github.com/python-pillow/Pillow/pull/1416
- if isinstance(value, int):
- value = str(value)
- if not isinstance(value, bytes):
- value = value.encode("ascii", "replace")
- return value + b"\0"
-
- @_register_loader(5, 8)
- def load_rational(self, data, legacy_api=True):
- vals = self._unpack(f"{len(data) // 4}L", data)
-
- def combine(a, b):
- return (a, b) if legacy_api else IFDRational(a, b)
-
- return tuple(combine(num, denom) for num, denom in zip(vals[::2], vals[1::2]))
-
- @_register_writer(5)
- def write_rational(self, *values):
- return b"".join(
- self._pack("2L", *_limit_rational(frac, 2**32 - 1)) for frac in values
- )
-
- @_register_loader(7, 1)
- def load_undefined(self, data, legacy_api=True):
- return data
-
- @_register_writer(7)
- def write_undefined(self, value):
- if isinstance(value, int):
- value = str(value).encode("ascii", "replace")
- return value
-
- @_register_loader(10, 8)
- def load_signed_rational(self, data, legacy_api=True):
- vals = self._unpack(f"{len(data) // 4}l", data)
-
- def combine(a, b):
- return (a, b) if legacy_api else IFDRational(a, b)
-
- return tuple(combine(num, denom) for num, denom in zip(vals[::2], vals[1::2]))
-
- @_register_writer(10)
- def write_signed_rational(self, *values):
- return b"".join(
- self._pack("2l", *_limit_signed_rational(frac, 2**31 - 1, -(2**31)))
- for frac in values
- )
-
- def _ensure_read(self, fp, size):
- ret = fp.read(size)
- if len(ret) != size:
- msg = (
- "Corrupt EXIF data. "
- f"Expecting to read {size} bytes but only got {len(ret)}. "
- )
- raise OSError(msg)
- return ret
-
- def load(self, fp):
- self.reset()
- self._offset = fp.tell()
-
- try:
- tag_count = (
- self._unpack("Q", self._ensure_read(fp, 8))
- if self._bigtiff
- else self._unpack("H", self._ensure_read(fp, 2))
- )[0]
- for i in range(tag_count):
- tag, typ, count, data = (
- self._unpack("HHQ8s", self._ensure_read(fp, 20))
- if self._bigtiff
- else self._unpack("HHL4s", self._ensure_read(fp, 12))
- )
-
- tagname = TiffTags.lookup(tag, self.group).name
- typname = TYPES.get(typ, "unknown")
- msg = f"tag: {tagname} ({tag}) - type: {typname} ({typ})"
-
- try:
- unit_size, handler = self._load_dispatch[typ]
- except KeyError:
- logger.debug(msg + f" - unsupported type {typ}")
- continue # ignore unsupported type
- size = count * unit_size
- if size > (8 if self._bigtiff else 4):
- here = fp.tell()
- (offset,) = self._unpack("Q" if self._bigtiff else "L", data)
- msg += f" Tag Location: {here} - Data Location: {offset}"
- fp.seek(offset)
- data = ImageFile._safe_read(fp, size)
- fp.seek(here)
- else:
- data = data[:size]
-
- if len(data) != size:
- warnings.warn(
- "Possibly corrupt EXIF data. "
- f"Expecting to read {size} bytes but only got {len(data)}."
- f" Skipping tag {tag}"
- )
- logger.debug(msg)
- continue
-
- if not data:
- logger.debug(msg)
- continue
-
- self._tagdata[tag] = data
- self.tagtype[tag] = typ
-
- msg += " - value: " + (
-                    "<table: %d bytes>" % size if size > 32 else repr(data)
- )
- logger.debug(msg)
-
- (self.next,) = (
- self._unpack("Q", self._ensure_read(fp, 8))
- if self._bigtiff
- else self._unpack("L", self._ensure_read(fp, 4))
- )
- except OSError as msg:
- warnings.warn(str(msg))
- return
-
- def tobytes(self, offset=0):
- # FIXME What about tagdata?
- result = self._pack("H", len(self._tags_v2))
-
- entries = []
- offset = offset + len(result) + len(self._tags_v2) * 12 + 4
- stripoffsets = None
-
- # pass 1: convert tags to binary format
- # always write tags in ascending order
- for tag, value in sorted(self._tags_v2.items()):
- if tag == STRIPOFFSETS:
- stripoffsets = len(entries)
- typ = self.tagtype.get(tag)
- logger.debug(f"Tag {tag}, Type: {typ}, Value: {repr(value)}")
- is_ifd = typ == TiffTags.LONG and isinstance(value, dict)
- if is_ifd:
- if self._endian == "<":
- ifh = b"II\x2A\x00\x08\x00\x00\x00"
- else:
- ifh = b"MM\x00\x2A\x00\x00\x00\x08"
- ifd = ImageFileDirectory_v2(ifh, group=tag)
- values = self._tags_v2[tag]
- for ifd_tag, ifd_value in values.items():
- ifd[ifd_tag] = ifd_value
- data = ifd.tobytes(offset)
- else:
- values = value if isinstance(value, tuple) else (value,)
- data = self._write_dispatch[typ](self, *values)
-
- tagname = TiffTags.lookup(tag, self.group).name
- typname = "ifd" if is_ifd else TYPES.get(typ, "unknown")
- msg = f"save: {tagname} ({tag}) - type: {typname} ({typ})"
- msg += " - value: " + (
-                "<table: %d bytes>" % len(data) if len(data) >= 16 else str(values)
- )
- logger.debug(msg)
-
- # count is sum of lengths for string and arbitrary data
- if is_ifd:
- count = 1
- elif typ in [TiffTags.BYTE, TiffTags.ASCII, TiffTags.UNDEFINED]:
- count = len(data)
- else:
- count = len(values)
- # figure out if data fits into the entry
- if len(data) <= 4:
- entries.append((tag, typ, count, data.ljust(4, b"\0"), b""))
- else:
- entries.append((tag, typ, count, self._pack("L", offset), data))
- offset += (len(data) + 1) // 2 * 2 # pad to word
-
- # update strip offset data to point beyond auxiliary data
- if stripoffsets is not None:
- tag, typ, count, value, data = entries[stripoffsets]
- if data:
- msg = "multistrip support not yet implemented"
- raise NotImplementedError(msg)
- value = self._pack("L", self._unpack("L", value)[0] + offset)
- entries[stripoffsets] = tag, typ, count, value, data
-
- # pass 2: write entries to file
- for tag, typ, count, value, data in entries:
- logger.debug(f"{tag} {typ} {count} {repr(value)} {repr(data)}")
- result += self._pack("HHL4s", tag, typ, count, value)
-
- # -- overwrite here for multi-page --
- result += b"\0\0\0\0" # end of entries
-
- # pass 3: write auxiliary data to file
- for tag, typ, count, value, data in entries:
- result += data
- if len(data) & 1:
- result += b"\0"
-
- return result
-
- def save(self, fp):
- if fp.tell() == 0: # skip TIFF header on subsequent pages
- # tiff header -- PIL always starts the first IFD at offset 8
- fp.write(self._prefix + self._pack("HL", 42, 8))
-
- offset = fp.tell()
- result = self.tobytes(offset)
- fp.write(result)
- return offset + len(result)
-
-
-ImageFileDirectory_v2._load_dispatch = _load_dispatch
-ImageFileDirectory_v2._write_dispatch = _write_dispatch
-for idx, name in TYPES.items():
- name = name.replace(" ", "_")
- setattr(ImageFileDirectory_v2, "load_" + name, _load_dispatch[idx][1])
- setattr(ImageFileDirectory_v2, "write_" + name, _write_dispatch[idx])
-del _load_dispatch, _write_dispatch, idx, name
-
-
-# Legacy ImageFileDirectory support.
-class ImageFileDirectory_v1(ImageFileDirectory_v2):
- """This class represents the **legacy** interface to a TIFF tag directory.
-
- Exposes a dictionary interface of the tags in the directory::
-
- ifd = ImageFileDirectory_v1()
- ifd[key] = 'Some Data'
- ifd.tagtype[key] = TiffTags.ASCII
- print(ifd[key])
- ('Some Data',)
-
- Also contains a dictionary of tag types as read from the tiff image file,
- :attr:`~PIL.TiffImagePlugin.ImageFileDirectory_v1.tagtype`.
-
- Values are returned as a tuple.
-
- .. deprecated:: 3.0.0
- """
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- self._legacy_api = True
-
- tags = property(lambda self: self._tags_v1)
- tagdata = property(lambda self: self._tagdata)
-
- # defined in ImageFileDirectory_v2
- tagtype: dict
- """Dictionary of tag types"""
-
- @classmethod
- def from_v2(cls, original):
- """Returns an
- :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v1`
- instance with the same data as is contained in the original
- :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v2`
- instance.
-
- :returns: :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v1`
-
- """
-
- ifd = cls(prefix=original.prefix)
- ifd._tagdata = original._tagdata
- ifd.tagtype = original.tagtype
- ifd.next = original.next # an indicator for multipage tiffs
- return ifd
-
- def to_v2(self):
- """Returns an
- :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v2`
- instance with the same data as is contained in the original
- :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v1`
- instance.
-
- :returns: :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v2`
-
- """
-
- ifd = ImageFileDirectory_v2(prefix=self.prefix)
- ifd._tagdata = dict(self._tagdata)
- ifd.tagtype = dict(self.tagtype)
- ifd._tags_v2 = dict(self._tags_v2)
- return ifd
-
- def __contains__(self, tag):
- return tag in self._tags_v1 or tag in self._tagdata
-
- def __len__(self):
- return len(set(self._tagdata) | set(self._tags_v1))
-
- def __iter__(self):
- return iter(set(self._tagdata) | set(self._tags_v1))
-
- def __setitem__(self, tag, value):
- for legacy_api in (False, True):
- self._setitem(tag, value, legacy_api)
-
- def __getitem__(self, tag):
- if tag not in self._tags_v1: # unpack on the fly
- data = self._tagdata[tag]
- typ = self.tagtype[tag]
- size, handler = self._load_dispatch[typ]
- for legacy in (False, True):
- self._setitem(tag, handler(self, data, legacy), legacy)
- val = self._tags_v1[tag]
- if not isinstance(val, (tuple, bytes)):
- val = (val,)
- return val
-
-
-# undone -- switch this pointer when IFD_LEGACY_API == False
-ImageFileDirectory = ImageFileDirectory_v1
-
-
-##
-# Image plugin for TIFF files.
-
-
-class TiffImageFile(ImageFile.ImageFile):
- format = "TIFF"
- format_description = "Adobe TIFF"
- _close_exclusive_fp_after_loading = False
-
- def __init__(self, fp=None, filename=None):
- self.tag_v2 = None
- """ Image file directory (tag dictionary) """
-
- self.tag = None
- """ Legacy tag entries """
-
- super().__init__(fp, filename)
-
- def _open(self):
- """Open the first image in a TIFF file"""
-
- # Header
- ifh = self.fp.read(8)
- if ifh[2] == 43:
- ifh += self.fp.read(8)
-
- self.tag_v2 = ImageFileDirectory_v2(ifh)
-
- # legacy IFD entries will be filled in later
- self.ifd = None
-
- # setup frame pointers
- self.__first = self.__next = self.tag_v2.next
- self.__frame = -1
- self._fp = self.fp
- self._frame_pos = []
- self._n_frames = None
-
- logger.debug("*** TiffImageFile._open ***")
- logger.debug(f"- __first: {self.__first}")
- logger.debug(f"- ifh: {repr(ifh)}") # Use repr to avoid str(bytes)
-
- # and load the first frame
- self._seek(0)
-
- @property
- def n_frames(self):
- if self._n_frames is None:
- current = self.tell()
- self._seek(len(self._frame_pos))
- while self._n_frames is None:
- self._seek(self.tell() + 1)
- self.seek(current)
- return self._n_frames
-
- def seek(self, frame):
- """Select a given frame as current image"""
- if not self._seek_check(frame):
- return
- self._seek(frame)
- # Create a new core image object on second and
- # subsequent frames in the image. Image may be
- # different size/mode.
- Image._decompression_bomb_check(self.size)
- self.im = Image.core.new(self.mode, self.size)
-
- def _seek(self, frame):
- self.fp = self._fp
-
- # reset buffered io handle in case fp
- # was passed to libtiff, invalidating the buffer
- self.fp.tell()
-
- while len(self._frame_pos) <= frame:
- if not self.__next:
- msg = "no more images in TIFF file"
- raise EOFError(msg)
- logger.debug(
- f"Seeking to frame {frame}, on frame {self.__frame}, "
- f"__next {self.__next}, location: {self.fp.tell()}"
- )
- self.fp.seek(self.__next)
- self._frame_pos.append(self.__next)
- logger.debug("Loading tags, location: %s" % self.fp.tell())
- self.tag_v2.load(self.fp)
- if self.tag_v2.next in self._frame_pos:
- # This IFD has already been processed
- # Declare this to be the end of the image
- self.__next = 0
- else:
- self.__next = self.tag_v2.next
- if self.__next == 0:
- self._n_frames = frame + 1
- if len(self._frame_pos) == 1:
- self.is_animated = self.__next != 0
- self.__frame += 1
- self.fp.seek(self._frame_pos[frame])
- self.tag_v2.load(self.fp)
- self._reload_exif()
- # fill the legacy tag/ifd entries
- self.tag = self.ifd = ImageFileDirectory_v1.from_v2(self.tag_v2)
- self.__frame = frame
- self._setup()
-
- def tell(self):
- """Return the current frame number"""
- return self.__frame
-
- def getxmp(self):
- """
- Returns a dictionary containing the XMP tags.
- Requires defusedxml to be installed.
-
- :returns: XMP tags in a dictionary.
- """
- return self._getxmp(self.tag_v2[XMP]) if XMP in self.tag_v2 else {}
-
- def get_photoshop_blocks(self):
- """
- Returns a dictionary of Photoshop "Image Resource Blocks".
- The keys are the image resource ID. For more information, see
- https://www.adobe.com/devnet-apps/photoshop/fileformatashtml/#50577409_pgfId-1037727
-
- :returns: Photoshop "Image Resource Blocks" in a dictionary.
- """
- blocks = {}
- val = self.tag_v2.get(ExifTags.Base.ImageResources)
- if val:
- while val[:4] == b"8BIM":
- id = i16(val[4:6])
- n = math.ceil((val[6] + 1) / 2) * 2
- size = i32(val[6 + n : 10 + n])
- data = val[10 + n : 10 + n + size]
- blocks[id] = {"data": data}
-
- val = val[math.ceil((10 + n + size) / 2) * 2 :]
- return blocks
-
- def load(self):
- if self.tile and self.use_load_libtiff:
- return self._load_libtiff()
- return super().load()
-
- def load_end(self):
- if self._tile_orientation:
- method = {
- 2: Image.Transpose.FLIP_LEFT_RIGHT,
- 3: Image.Transpose.ROTATE_180,
- 4: Image.Transpose.FLIP_TOP_BOTTOM,
- 5: Image.Transpose.TRANSPOSE,
- 6: Image.Transpose.ROTATE_270,
- 7: Image.Transpose.TRANSVERSE,
- 8: Image.Transpose.ROTATE_90,
- }.get(self._tile_orientation)
- if method is not None:
- self.im = self.im.transpose(method)
- self._size = self.im.size
-
- # allow closing if we're on the first frame, there's no next
- # This is the ImageFile.load path only, libtiff specific below.
- if not self.is_animated:
- self._close_exclusive_fp_after_loading = True
-
- # reset buffered io handle in case fp
- # was passed to libtiff, invalidating the buffer
- self.fp.tell()
-
- # load IFD data from fp before it is closed
- exif = self.getexif()
- for key in TiffTags.TAGS_V2_GROUPS:
- if key not in exif:
- continue
- exif.get_ifd(key)
-
- def _load_libtiff(self):
- """Overload method triggered when we detect a compressed tiff
- Calls out to libtiff"""
-
- Image.Image.load(self)
-
- self.load_prepare()
-
- if not len(self.tile) == 1:
- msg = "Not exactly one tile"
- raise OSError(msg)
-
- # (self._compression, (extents tuple),
- # 0, (rawmode, self._compression, fp))
- extents = self.tile[0][1]
- args = list(self.tile[0][3])
-
- # To be nice on memory footprint, if there's a
- # file descriptor, use that instead of reading
- # into a string in python.
- try:
- fp = hasattr(self.fp, "fileno") and self.fp.fileno()
- # flush the file descriptor, prevents error on pypy 2.4+
- # should also eliminate the need for fp.tell
- # in _seek
- if hasattr(self.fp, "flush"):
- self.fp.flush()
- except OSError:
-            # io.BytesIO has a fileno, but raises an OSError if
- # it doesn't use a file descriptor.
- fp = False
-
- if fp:
- args[2] = fp
-
- decoder = Image._getdecoder(
- self.mode, "libtiff", tuple(args), self.decoderconfig
- )
- try:
- decoder.setimage(self.im, extents)
- except ValueError as e:
- msg = "Couldn't set the image"
- raise OSError(msg) from e
-
- close_self_fp = self._exclusive_fp and not self.is_animated
- if hasattr(self.fp, "getvalue"):
- # We've got a stringio like thing passed in. Yay for all in memory.
- # The decoder needs the entire file in one shot, so there's not
- # a lot we can do here other than give it the entire file.
- # unless we could do something like get the address of the
- # underlying string for stringio.
- #
- # Rearranging for supporting byteio items, since they have a fileno
- # that returns an OSError if there's no underlying fp. Easier to
- # deal with here by reordering.
- logger.debug("have getvalue. just sending in a string from getvalue")
- n, err = decoder.decode(self.fp.getvalue())
- elif fp:
-            # we've got an actual file on disk, pass in the fp.
- logger.debug("have fileno, calling fileno version of the decoder.")
- if not close_self_fp:
- self.fp.seek(0)
- # 4 bytes, otherwise the trace might error out
- n, err = decoder.decode(b"fpfp")
- else:
- # we have something else.
- logger.debug("don't have fileno or getvalue. just reading")
- self.fp.seek(0)
- # UNDONE -- so much for that buffer size thing.
- n, err = decoder.decode(self.fp.read())
-
- self.tile = []
- self.readonly = 0
-
- self.load_end()
-
- if close_self_fp:
- self.fp.close()
- self.fp = None # might be shared
-
- if err < 0:
- raise OSError(err)
-
- return Image.Image.load(self)
-
- def _setup(self):
- """Setup this image object based on current tags"""
-
- if 0xBC01 in self.tag_v2:
- msg = "Windows Media Photo files not yet supported"
- raise OSError(msg)
-
- # extract relevant tags
- self._compression = COMPRESSION_INFO[self.tag_v2.get(COMPRESSION, 1)]
- self._planar_configuration = self.tag_v2.get(PLANAR_CONFIGURATION, 1)
-
- # photometric is a required tag, but not everyone is reading
- # the specification
- photo = self.tag_v2.get(PHOTOMETRIC_INTERPRETATION, 0)
-
- # old style jpeg compression images most certainly are YCbCr
- if self._compression == "tiff_jpeg":
- photo = 6
-
- fillorder = self.tag_v2.get(FILLORDER, 1)
-
- logger.debug("*** Summary ***")
- logger.debug(f"- compression: {self._compression}")
- logger.debug(f"- photometric_interpretation: {photo}")
- logger.debug(f"- planar_configuration: {self._planar_configuration}")
- logger.debug(f"- fill_order: {fillorder}")
- logger.debug(f"- YCbCr subsampling: {self.tag.get(YCBCRSUBSAMPLING)}")
-
- # size
- xsize = int(self.tag_v2.get(IMAGEWIDTH))
- ysize = int(self.tag_v2.get(IMAGELENGTH))
- self._size = xsize, ysize
-
- logger.debug(f"- size: {self.size}")
-
- sample_format = self.tag_v2.get(SAMPLEFORMAT, (1,))
- if len(sample_format) > 1 and max(sample_format) == min(sample_format) == 1:
- # SAMPLEFORMAT is properly per band, so an RGB image will
- # be (1,1,1). But, we don't support per band pixel types,
- # and anything more than one band is a uint8. So, just
- # take the first element. Revisit this if adding support
- # for more exotic images.
- sample_format = (1,)
-
- bps_tuple = self.tag_v2.get(BITSPERSAMPLE, (1,))
- extra_tuple = self.tag_v2.get(EXTRASAMPLES, ())
- if photo in (2, 6, 8): # RGB, YCbCr, LAB
- bps_count = 3
- elif photo == 5: # CMYK
- bps_count = 4
- else:
- bps_count = 1
- bps_count += len(extra_tuple)
- bps_actual_count = len(bps_tuple)
- samples_per_pixel = self.tag_v2.get(
- SAMPLESPERPIXEL,
- 3 if self._compression == "tiff_jpeg" and photo in (2, 6) else 1,
- )
-
- if samples_per_pixel > MAX_SAMPLESPERPIXEL:
- # DOS check, samples_per_pixel can be a Long, and we extend the tuple below
- logger.error(
- "More samples per pixel than can be decoded: %s", samples_per_pixel
- )
- msg = "Invalid value for samples per pixel"
- raise SyntaxError(msg)
-
- if samples_per_pixel < bps_actual_count:
- # If a file has more values in bps_tuple than expected,
- # remove the excess.
- bps_tuple = bps_tuple[:samples_per_pixel]
- elif samples_per_pixel > bps_actual_count and bps_actual_count == 1:
- # If a file has only one value in bps_tuple, when it should have more,
- # presume it is the same number of bits for all of the samples.
- bps_tuple = bps_tuple * samples_per_pixel
-
- if len(bps_tuple) != samples_per_pixel:
- msg = "unknown data organization"
- raise SyntaxError(msg)
-
- # mode: check photometric interpretation and bits per pixel
- key = (
- self.tag_v2.prefix,
- photo,
- sample_format,
- fillorder,
- bps_tuple,
- extra_tuple,
- )
- logger.debug(f"format key: {key}")
- try:
- self.mode, rawmode = OPEN_INFO[key]
- except KeyError as e:
- logger.debug("- unsupported format")
- msg = "unknown pixel mode"
- raise SyntaxError(msg) from e
-
- logger.debug(f"- raw mode: {rawmode}")
- logger.debug(f"- pil mode: {self.mode}")
-
- self.info["compression"] = self._compression
-
- xres = self.tag_v2.get(X_RESOLUTION, 1)
- yres = self.tag_v2.get(Y_RESOLUTION, 1)
-
- if xres and yres:
- resunit = self.tag_v2.get(RESOLUTION_UNIT)
- if resunit == 2: # dots per inch
- self.info["dpi"] = (xres, yres)
- elif resunit == 3: # dots per centimeter. convert to dpi
- self.info["dpi"] = (xres * 2.54, yres * 2.54)
- elif resunit is None: # used to default to 1, but now 2)
- self.info["dpi"] = (xres, yres)
- # For backward compatibility,
- # we also preserve the old behavior
- self.info["resolution"] = xres, yres
- else: # No absolute unit of measurement
- self.info["resolution"] = xres, yres
-
- # build tile descriptors
- x = y = layer = 0
- self.tile = []
- self.use_load_libtiff = READ_LIBTIFF or self._compression != "raw"
- if self.use_load_libtiff:
- # Decoder expects entire file as one tile.
- # There's a buffer size limit in load (64k)
- # so large g4 images will fail if we use that
- # function.
- #
- # Setup the one tile for the whole image, then
- # use the _load_libtiff function.
-
- # libtiff handles the fillmode for us, so 1;IR should
- # actually be 1;I. Including the R double reverses the
- # bits, so stripes of the image are reversed. See
- # https://github.com/python-pillow/Pillow/issues/279
- if fillorder == 2:
- # Replace fillorder with fillorder=1
- key = key[:3] + (1,) + key[4:]
- logger.debug(f"format key: {key}")
- # this should always work, since all the
- # fillorder==2 modes have a corresponding
- # fillorder=1 mode
- self.mode, rawmode = OPEN_INFO[key]
- # libtiff always returns the bytes in native order.
- # we're expecting image byte order. So, if the rawmode
- # contains I;16, we need to convert from native to image
- # byte order.
- if rawmode == "I;16":
- rawmode = "I;16N"
- if ";16B" in rawmode:
- rawmode = rawmode.replace(";16B", ";16N")
- if ";16L" in rawmode:
- rawmode = rawmode.replace(";16L", ";16N")
-
- # YCbCr images with new jpeg compression with pixels in one plane
- # unpacked straight into RGB values
- if (
- photo == 6
- and self._compression == "jpeg"
- and self._planar_configuration == 1
- ):
- rawmode = "RGB"
-
- # Offset in the tile tuple is 0, we go from 0,0 to
- # w,h, and we only do this once -- eds
- a = (rawmode, self._compression, False, self.tag_v2.offset)
- self.tile.append(("libtiff", (0, 0, xsize, ysize), 0, a))
-
- elif STRIPOFFSETS in self.tag_v2 or TILEOFFSETS in self.tag_v2:
- # striped image
- if STRIPOFFSETS in self.tag_v2:
- offsets = self.tag_v2[STRIPOFFSETS]
- h = self.tag_v2.get(ROWSPERSTRIP, ysize)
- w = self.size[0]
- else:
- # tiled image
- offsets = self.tag_v2[TILEOFFSETS]
- w = self.tag_v2.get(TILEWIDTH)
- h = self.tag_v2.get(TILELENGTH)
-
- for offset in offsets:
- if x + w > xsize:
- stride = w * sum(bps_tuple) / 8 # bytes per line
- else:
- stride = 0
-
- tile_rawmode = rawmode
- if self._planar_configuration == 2:
-                    # each band on its own layer
- tile_rawmode = rawmode[layer]
- # adjust stride width accordingly
- stride /= bps_count
-
- a = (tile_rawmode, int(stride), 1)
- self.tile.append(
- (
- self._compression,
- (x, y, min(x + w, xsize), min(y + h, ysize)),
- offset,
- a,
- )
- )
- x = x + w
- if x >= self.size[0]:
- x, y = 0, y + h
- if y >= self.size[1]:
- x = y = 0
- layer += 1
- else:
- logger.debug("- unsupported data organization")
- msg = "unknown data organization"
- raise SyntaxError(msg)
-
- # Fix up info.
- if ICCPROFILE in self.tag_v2:
- self.info["icc_profile"] = self.tag_v2[ICCPROFILE]
-
- # fixup palette descriptor
-
- if self.mode in ["P", "PA"]:
- palette = [o8(b // 256) for b in self.tag_v2[COLORMAP]]
- self.palette = ImagePalette.raw("RGB;L", b"".join(palette))
-
- self._tile_orientation = self.tag_v2.get(ExifTags.Base.Orientation)
-
-
-#
-# --------------------------------------------------------------------
-# Write TIFF files
-
-# little endian is default except for image modes with
-# explicit big endian byte-order
-
-SAVE_INFO = {
- # mode => rawmode, byteorder, photometrics,
- # sampleformat, bitspersample, extra
- "1": ("1", II, 1, 1, (1,), None),
- "L": ("L", II, 1, 1, (8,), None),
- "LA": ("LA", II, 1, 1, (8, 8), 2),
- "P": ("P", II, 3, 1, (8,), None),
- "PA": ("PA", II, 3, 1, (8, 8), 2),
- "I": ("I;32S", II, 1, 2, (32,), None),
- "I;16": ("I;16", II, 1, 1, (16,), None),
- "I;16S": ("I;16S", II, 1, 2, (16,), None),
- "F": ("F;32F", II, 1, 3, (32,), None),
- "RGB": ("RGB", II, 2, 1, (8, 8, 8), None),
- "RGBX": ("RGBX", II, 2, 1, (8, 8, 8, 8), 0),
- "RGBA": ("RGBA", II, 2, 1, (8, 8, 8, 8), 2),
- "CMYK": ("CMYK", II, 5, 1, (8, 8, 8, 8), None),
- "YCbCr": ("YCbCr", II, 6, 1, (8, 8, 8), None),
- "LAB": ("LAB", II, 8, 1, (8, 8, 8), None),
- "I;32BS": ("I;32BS", MM, 1, 2, (32,), None),
- "I;16B": ("I;16B", MM, 1, 1, (16,), None),
- "I;16BS": ("I;16BS", MM, 1, 2, (16,), None),
- "F;32BF": ("F;32BF", MM, 1, 3, (32,), None),
-}
-
-
-def _save(im, fp, filename):
- try:
- rawmode, prefix, photo, format, bits, extra = SAVE_INFO[im.mode]
- except KeyError as e:
- msg = f"cannot write mode {im.mode} as TIFF"
- raise OSError(msg) from e
-
- ifd = ImageFileDirectory_v2(prefix=prefix)
-
- encoderinfo = im.encoderinfo
- encoderconfig = im.encoderconfig
- try:
- compression = encoderinfo["compression"]
- except KeyError:
- compression = im.info.get("compression")
- if isinstance(compression, int):
- # compression value may be from BMP. Ignore it
- compression = None
- if compression is None:
- compression = "raw"
- elif compression == "tiff_jpeg":
- # OJPEG is obsolete, so use new-style JPEG compression instead
- compression = "jpeg"
- elif compression == "tiff_deflate":
- compression = "tiff_adobe_deflate"
-
- libtiff = WRITE_LIBTIFF or compression != "raw"
-
- # required for color libtiff images
- ifd[PLANAR_CONFIGURATION] = 1
-
- ifd[IMAGEWIDTH] = im.size[0]
- ifd[IMAGELENGTH] = im.size[1]
-
- # write any arbitrary tags passed in as an ImageFileDirectory
- if "tiffinfo" in encoderinfo:
- info = encoderinfo["tiffinfo"]
- elif "exif" in encoderinfo:
- info = encoderinfo["exif"]
- if isinstance(info, bytes):
- exif = Image.Exif()
- exif.load(info)
- info = exif
- else:
- info = {}
- logger.debug("Tiffinfo Keys: %s" % list(info))
- if isinstance(info, ImageFileDirectory_v1):
- info = info.to_v2()
- for key in info:
- if isinstance(info, Image.Exif) and key in TiffTags.TAGS_V2_GROUPS:
- ifd[key] = info.get_ifd(key)
- else:
- ifd[key] = info.get(key)
- try:
- ifd.tagtype[key] = info.tagtype[key]
- except Exception:
- pass # might not be an IFD. Might not have populated type
-
- # additions written by Greg Couch, gregc@cgl.ucsf.edu
- # inspired by image-sig posting from Kevin Cazabon, kcazabon@home.com
- if hasattr(im, "tag_v2"):
- # preserve tags from original TIFF image file
- for key in (
- RESOLUTION_UNIT,
- X_RESOLUTION,
- Y_RESOLUTION,
- IPTC_NAA_CHUNK,
- PHOTOSHOP_CHUNK,
- XMP,
- ):
- if key in im.tag_v2:
- ifd[key] = im.tag_v2[key]
- ifd.tagtype[key] = im.tag_v2.tagtype[key]
-
- # preserve ICC profile (should also work when saving other formats
- # which support profiles as TIFF) -- 2008-06-06 Florian Hoech
- icc = encoderinfo.get("icc_profile", im.info.get("icc_profile"))
- if icc:
- ifd[ICCPROFILE] = icc
-
- for key, name in [
- (IMAGEDESCRIPTION, "description"),
- (X_RESOLUTION, "resolution"),
- (Y_RESOLUTION, "resolution"),
- (X_RESOLUTION, "x_resolution"),
- (Y_RESOLUTION, "y_resolution"),
- (RESOLUTION_UNIT, "resolution_unit"),
- (SOFTWARE, "software"),
- (DATE_TIME, "date_time"),
- (ARTIST, "artist"),
- (COPYRIGHT, "copyright"),
- ]:
- if name in encoderinfo:
- ifd[key] = encoderinfo[name]
-
- dpi = encoderinfo.get("dpi")
- if dpi:
- ifd[RESOLUTION_UNIT] = 2
- ifd[X_RESOLUTION] = dpi[0]
- ifd[Y_RESOLUTION] = dpi[1]
-
- if bits != (1,):
- ifd[BITSPERSAMPLE] = bits
- if len(bits) != 1:
- ifd[SAMPLESPERPIXEL] = len(bits)
- if extra is not None:
- ifd[EXTRASAMPLES] = extra
- if format != 1:
- ifd[SAMPLEFORMAT] = format
-
- if PHOTOMETRIC_INTERPRETATION not in ifd:
- ifd[PHOTOMETRIC_INTERPRETATION] = photo
- elif im.mode in ("1", "L") and ifd[PHOTOMETRIC_INTERPRETATION] == 0:
- if im.mode == "1":
- inverted_im = im.copy()
- px = inverted_im.load()
- for y in range(inverted_im.height):
- for x in range(inverted_im.width):
- px[x, y] = 0 if px[x, y] == 255 else 255
- im = inverted_im
- else:
- im = ImageOps.invert(im)
-
- if im.mode in ["P", "PA"]:
- lut = im.im.getpalette("RGB", "RGB;L")
- colormap = []
- colors = len(lut) // 3
- for i in range(3):
- colormap += [v * 256 for v in lut[colors * i : colors * (i + 1)]]
- colormap += [0] * (256 - colors)
- ifd[COLORMAP] = colormap
- # data orientation
- stride = len(bits) * ((im.size[0] * bits[0] + 7) // 8)
- # aim for given strip size (64 KB by default) when using libtiff writer
- if libtiff:
- im_strip_size = encoderinfo.get("strip_size", STRIP_SIZE)
- rows_per_strip = 1 if stride == 0 else min(im_strip_size // stride, im.size[1])
- # JPEG encoder expects multiple of 8 rows
- if compression == "jpeg":
- rows_per_strip = min(((rows_per_strip + 7) // 8) * 8, im.size[1])
- else:
- rows_per_strip = im.size[1]
- if rows_per_strip == 0:
- rows_per_strip = 1
- strip_byte_counts = 1 if stride == 0 else stride * rows_per_strip
- strips_per_image = (im.size[1] + rows_per_strip - 1) // rows_per_strip
- ifd[ROWSPERSTRIP] = rows_per_strip
- if strip_byte_counts >= 2**16:
- ifd.tagtype[STRIPBYTECOUNTS] = TiffTags.LONG
- ifd[STRIPBYTECOUNTS] = (strip_byte_counts,) * (strips_per_image - 1) + (
- stride * im.size[1] - strip_byte_counts * (strips_per_image - 1),
- )
- ifd[STRIPOFFSETS] = tuple(
- range(0, strip_byte_counts * strips_per_image, strip_byte_counts)
- ) # this is adjusted by IFD writer
- # no compression by default:
- ifd[COMPRESSION] = COMPRESSION_INFO_REV.get(compression, 1)
-
- if im.mode == "YCbCr":
- for tag, value in {
- YCBCRSUBSAMPLING: (1, 1),
- REFERENCEBLACKWHITE: (0, 255, 128, 255, 128, 255),
- }.items():
- ifd.setdefault(tag, value)
-
- blocklist = [TILEWIDTH, TILELENGTH, TILEOFFSETS, TILEBYTECOUNTS]
- if libtiff:
- if "quality" in encoderinfo:
- quality = encoderinfo["quality"]
- if not isinstance(quality, int) or quality < 0 or quality > 100:
- msg = "Invalid quality setting"
- raise ValueError(msg)
- if compression != "jpeg":
- msg = "quality setting only supported for 'jpeg' compression"
- raise ValueError(msg)
- ifd[JPEGQUALITY] = quality
-
- logger.debug("Saving using libtiff encoder")
- logger.debug("Items: %s" % sorted(ifd.items()))
- _fp = 0
- if hasattr(fp, "fileno"):
- try:
- fp.seek(0)
- _fp = os.dup(fp.fileno())
- except io.UnsupportedOperation:
- pass
-
- # optional types for non core tags
- types = {}
- # STRIPOFFSETS and STRIPBYTECOUNTS are added by the library
- # based on the data in the strip.
- # The other tags expect arrays with a certain length (fixed or depending on
- # BITSPERSAMPLE, etc), passing arrays with a different length will result in
- # segfaults. Block these tags until we add extra validation.
- # SUBIFD may also cause a segfault.
- blocklist += [
- REFERENCEBLACKWHITE,
- STRIPBYTECOUNTS,
- STRIPOFFSETS,
- TRANSFERFUNCTION,
- SUBIFD,
- ]
-
- # bits per sample is a single short in the tiff directory, not a list.
- atts = {BITSPERSAMPLE: bits[0]}
- # Merge the ones that we have with (optional) more bits from
- # the original file, e.g x,y resolution so that we can
- # save(load('')) == original file.
- legacy_ifd = {}
- if hasattr(im, "tag"):
- legacy_ifd = im.tag.to_v2()
-
- # SAMPLEFORMAT is determined by the image format and should not be copied
- # from legacy_ifd.
- supplied_tags = {**getattr(im, "tag_v2", {}), **legacy_ifd}
- if SAMPLEFORMAT in supplied_tags:
- del supplied_tags[SAMPLEFORMAT]
-
- for tag, value in itertools.chain(ifd.items(), supplied_tags.items()):
- # Libtiff can only process certain core items without adding
- # them to the custom dictionary.
- # Custom items are supported for int, float, unicode, string and byte
- # values. Other types and tuples require a tagtype.
- if tag not in TiffTags.LIBTIFF_CORE:
- if not getattr(Image.core, "libtiff_support_custom_tags", False):
- continue
-
- if tag in ifd.tagtype:
- types[tag] = ifd.tagtype[tag]
- elif not (isinstance(value, (int, float, str, bytes))):
- continue
- else:
- type = TiffTags.lookup(tag).type
- if type:
- types[tag] = type
- if tag not in atts and tag not in blocklist:
- if isinstance(value, str):
- atts[tag] = value.encode("ascii", "replace") + b"\0"
- elif isinstance(value, IFDRational):
- atts[tag] = float(value)
- else:
- atts[tag] = value
-
- if SAMPLEFORMAT in atts and len(atts[SAMPLEFORMAT]) == 1:
- atts[SAMPLEFORMAT] = atts[SAMPLEFORMAT][0]
-
- logger.debug("Converted items: %s" % sorted(atts.items()))
-
- # libtiff always expects the bytes in native order.
- # we're storing image byte order. So, if the rawmode
- # contains I;16, we need to convert from native to image
- # byte order.
- if im.mode in ("I;16B", "I;16"):
- rawmode = "I;16N"
-
- # Pass tags as sorted list so that the tags are set in a fixed order.
- # This is required by libtiff for some tags. For example, the JPEGQUALITY
- # pseudo tag requires that the COMPRESS tag was already set.
- tags = list(atts.items())
- tags.sort()
- a = (rawmode, compression, _fp, filename, tags, types)
- e = Image._getencoder(im.mode, "libtiff", a, encoderconfig)
- e.setimage(im.im, (0, 0) + im.size)
- while True:
- # undone, change to self.decodermaxblock:
- errcode, data = e.encode(16 * 1024)[1:]
- if not _fp:
- fp.write(data)
- if errcode:
- break
- if _fp:
- try:
- os.close(_fp)
- except OSError:
- pass
- if errcode < 0:
- msg = f"encoder error {errcode} when writing image file"
- raise OSError(msg)
-
- else:
- for tag in blocklist:
- del ifd[tag]
- offset = ifd.save(fp)
-
- ImageFile._save(
- im, fp, [("raw", (0, 0) + im.size, offset, (rawmode, stride, 1))]
- )
-
- # -- helper for multi-page save --
- if "_debug_multipage" in encoderinfo:
- # just to access o32 and o16 (using correct byte order)
- im._debug_multipage = ifd
-
-
-class AppendingTiffWriter:
- fieldSizes = [
- 0, # None
- 1, # byte
- 1, # ascii
- 2, # short
- 4, # long
- 8, # rational
- 1, # sbyte
- 1, # undefined
- 2, # sshort
- 4, # slong
- 8, # srational
- 4, # float
- 8, # double
- 4, # ifd
- 2, # unicode
- 4, # complex
- 8, # long8
- ]
-
- # StripOffsets = 273
- # FreeOffsets = 288
- # TileOffsets = 324
- # JPEGQTables = 519
- # JPEGDCTables = 520
- # JPEGACTables = 521
- Tags = {273, 288, 324, 519, 520, 521}
-
- def __init__(self, fn, new=False):
- if hasattr(fn, "read"):
- self.f = fn
- self.close_fp = False
- else:
- self.name = fn
- self.close_fp = True
- try:
- self.f = open(fn, "w+b" if new else "r+b")
- except OSError:
- self.f = open(fn, "w+b")
- self.beginning = self.f.tell()
- self.setup()
-
- def setup(self):
- # Reset everything.
- self.f.seek(self.beginning, os.SEEK_SET)
-
- self.whereToWriteNewIFDOffset = None
- self.offsetOfNewPage = 0
-
- self.IIMM = iimm = self.f.read(4)
- if not iimm:
- # empty file - first page
- self.isFirst = True
- return
-
- self.isFirst = False
- if iimm == b"II\x2a\x00":
- self.setEndian("<")
- elif iimm == b"MM\x00\x2a":
- self.setEndian(">")
- else:
- msg = "Invalid TIFF file header"
- raise RuntimeError(msg)
-
- self.skipIFDs()
- self.goToEnd()
-
- def finalize(self):
- if self.isFirst:
- return
-
- # fix offsets
- self.f.seek(self.offsetOfNewPage)
-
- iimm = self.f.read(4)
- if not iimm:
- # msg = "nothing written into new page"
- # raise RuntimeError(msg)
- # Make it easy to finish a frame without committing to a new one.
- return
-
- if iimm != self.IIMM:
- msg = "IIMM of new page doesn't match IIMM of first page"
- raise RuntimeError(msg)
-
- ifd_offset = self.readLong()
- ifd_offset += self.offsetOfNewPage
- self.f.seek(self.whereToWriteNewIFDOffset)
- self.writeLong(ifd_offset)
- self.f.seek(ifd_offset)
- self.fixIFD()
-
- def newFrame(self):
- # Call this to finish a frame.
- self.finalize()
- self.setup()
-
- def __enter__(self):
- return self
-
- def __exit__(self, exc_type, exc_value, traceback):
- if self.close_fp:
- self.close()
- return False
-
- def tell(self):
- return self.f.tell() - self.offsetOfNewPage
-
- def seek(self, offset, whence=io.SEEK_SET):
- if whence == os.SEEK_SET:
- offset += self.offsetOfNewPage
-
- self.f.seek(offset, whence)
- return self.tell()
-
- def goToEnd(self):
- self.f.seek(0, os.SEEK_END)
- pos = self.f.tell()
-
- # pad to 16 byte boundary
- pad_bytes = 16 - pos % 16
- if 0 < pad_bytes < 16:
- self.f.write(bytes(pad_bytes))
- self.offsetOfNewPage = self.f.tell()
-
- def setEndian(self, endian):
- self.endian = endian
- self.longFmt = self.endian + "L"
- self.shortFmt = self.endian + "H"
- self.tagFormat = self.endian + "HHL"
-
- def skipIFDs(self):
- while True:
- ifd_offset = self.readLong()
- if ifd_offset == 0:
- self.whereToWriteNewIFDOffset = self.f.tell() - 4
- break
-
- self.f.seek(ifd_offset)
- num_tags = self.readShort()
- self.f.seek(num_tags * 12, os.SEEK_CUR)
-
- def write(self, data):
- return self.f.write(data)
-
- def readShort(self):
- (value,) = struct.unpack(self.shortFmt, self.f.read(2))
- return value
-
- def readLong(self):
- (value,) = struct.unpack(self.longFmt, self.f.read(4))
- return value
-
- def rewriteLastShortToLong(self, value):
- self.f.seek(-2, os.SEEK_CUR)
- bytes_written = self.f.write(struct.pack(self.longFmt, value))
- if bytes_written is not None and bytes_written != 4:
- msg = f"wrote only {bytes_written} bytes but wanted 4"
- raise RuntimeError(msg)
-
- def rewriteLastShort(self, value):
- self.f.seek(-2, os.SEEK_CUR)
- bytes_written = self.f.write(struct.pack(self.shortFmt, value))
- if bytes_written is not None and bytes_written != 2:
- msg = f"wrote only {bytes_written} bytes but wanted 2"
- raise RuntimeError(msg)
-
- def rewriteLastLong(self, value):
- self.f.seek(-4, os.SEEK_CUR)
- bytes_written = self.f.write(struct.pack(self.longFmt, value))
- if bytes_written is not None and bytes_written != 4:
- msg = f"wrote only {bytes_written} bytes but wanted 4"
- raise RuntimeError(msg)
-
- def writeShort(self, value):
- bytes_written = self.f.write(struct.pack(self.shortFmt, value))
- if bytes_written is not None and bytes_written != 2:
- msg = f"wrote only {bytes_written} bytes but wanted 2"
- raise RuntimeError(msg)
-
- def writeLong(self, value):
- bytes_written = self.f.write(struct.pack(self.longFmt, value))
- if bytes_written is not None and bytes_written != 4:
- msg = f"wrote only {bytes_written} bytes but wanted 4"
- raise RuntimeError(msg)
-
- def close(self):
- self.finalize()
- self.f.close()
-
- def fixIFD(self):
- num_tags = self.readShort()
-
- for i in range(num_tags):
- tag, field_type, count = struct.unpack(self.tagFormat, self.f.read(8))
-
- field_size = self.fieldSizes[field_type]
- total_size = field_size * count
- is_local = total_size <= 4
- if not is_local:
- offset = self.readLong()
- offset += self.offsetOfNewPage
- self.rewriteLastLong(offset)
-
- if tag in self.Tags:
- cur_pos = self.f.tell()
-
- if is_local:
- self.fixOffsets(
- count, isShort=(field_size == 2), isLong=(field_size == 4)
- )
- self.f.seek(cur_pos + 4)
- else:
- self.f.seek(offset)
- self.fixOffsets(
- count, isShort=(field_size == 2), isLong=(field_size == 4)
- )
- self.f.seek(cur_pos)
-
- offset = cur_pos = None
-
- elif is_local:
- # skip the locally stored value that is not an offset
- self.f.seek(4, os.SEEK_CUR)
-
- def fixOffsets(self, count, isShort=False, isLong=False):
- if not isShort and not isLong:
- msg = "offset is neither short nor long"
- raise RuntimeError(msg)
-
- for i in range(count):
- offset = self.readShort() if isShort else self.readLong()
- offset += self.offsetOfNewPage
- if isShort and offset >= 65536:
- # offset is now too large - we must convert shorts to longs
- if count != 1:
- msg = "not implemented"
- raise RuntimeError(msg) # XXX TODO
-
- # simple case - the offset is just one and therefore it is
- # local (not referenced with another offset)
- self.rewriteLastShortToLong(offset)
- self.f.seek(-10, os.SEEK_CUR)
- self.writeShort(TiffTags.LONG) # rewrite the type to LONG
- self.f.seek(8, os.SEEK_CUR)
- elif isShort:
- self.rewriteLastShort(offset)
- else:
- self.rewriteLastLong(offset)
-
-
-def _save_all(im, fp, filename):
- encoderinfo = im.encoderinfo.copy()
- encoderconfig = im.encoderconfig
- append_images = list(encoderinfo.get("append_images", []))
- if not hasattr(im, "n_frames") and not append_images:
- return _save(im, fp, filename)
-
- cur_idx = im.tell()
- try:
- with AppendingTiffWriter(fp) as tf:
- for ims in [im] + append_images:
- ims.encoderinfo = encoderinfo
- ims.encoderconfig = encoderconfig
- if not hasattr(ims, "n_frames"):
- nfr = 1
- else:
- nfr = ims.n_frames
-
- for idx in range(nfr):
- ims.seek(idx)
- ims.load()
- _save(ims, tf, filename)
- tf.newFrame()
- finally:
- im.seek(cur_idx)
-
-
-#
-# --------------------------------------------------------------------
-# Register
-
-Image.register_open(TiffImageFile.format, TiffImageFile, _accept)
-Image.register_save(TiffImageFile.format, _save)
-Image.register_save_all(TiffImageFile.format, _save_all)
-
-Image.register_extensions(TiffImageFile.format, [".tif", ".tiff"])
-
-Image.register_mime(TiffImageFile.format, "image/tiff")
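The module removed above is Pillow's complete TIFF reader/writer. As a rough orientation for what this diff takes away, here is a minimal usage sketch of the public entry points it backs (tag access through `tag_v2`, arbitrary tags via `tiffinfo`, and the multipage path handled by `_save_all`/`AppendingTiffWriter`). The file name, frame colours, and compression choice are placeholders, not taken from the diff.

```python
from PIL import Image, TiffImagePlugin

# Three solid-colour frames stand in for real image data.
frames = [Image.new("RGB", (64, 64), c) for c in ("red", "green", "blue")]

# Arbitrary tags travel through an ImageFileDirectory_v2 passed as "tiffinfo";
# for a plain string value the tag type is guessed as ASCII.
ifd = TiffImagePlugin.ImageFileDirectory_v2()
ifd[270] = "written by this sketch"  # 270 = ImageDescription

# save_all=True exercises the multipage path (AppendingTiffWriter under the hood).
frames[0].save(
    "pages.tif",  # placeholder output path
    save_all=True,
    append_images=frames[1:],
    tiffinfo=ifd,
    compression="tiff_deflate",
)

# Reading back: tag_v2 is the ImageFileDirectory_v2 of the current frame.
with Image.open("pages.tif") as im:
    print(im.n_frames, im.tag_v2.named().get("ImageDescription"))
```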
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/hebrewprober.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/hebrewprober.py
deleted file mode 100644
index 785d0057bcc0ea74a4b8d65ab7a0de78474bf892..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/hebrewprober.py
+++ /dev/null
@@ -1,316 +0,0 @@
-######################## BEGIN LICENSE BLOCK ########################
-# The Original Code is Mozilla Universal charset detector code.
-#
-# The Initial Developer of the Original Code is
-# Shy Shalom
-# Portions created by the Initial Developer are Copyright (C) 2005
-# the Initial Developer. All Rights Reserved.
-#
-# Contributor(s):
-# Mark Pilgrim - port to Python
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
-# 02110-1301 USA
-######################### END LICENSE BLOCK #########################
-
-from typing import Optional, Union
-
-from .charsetprober import CharSetProber
-from .enums import ProbingState
-from .sbcharsetprober import SingleByteCharSetProber
-
-# This prober doesn't actually recognize a language or a charset.
-# It is a helper prober for the use of the Hebrew model probers
-
-### General ideas of the Hebrew charset recognition ###
-#
-# Four main charsets exist in Hebrew:
-# "ISO-8859-8" - Visual Hebrew
-# "windows-1255" - Logical Hebrew
-# "ISO-8859-8-I" - Logical Hebrew
-# "x-mac-hebrew" - ?? Logical Hebrew ??
-#
-# Both "ISO" charsets use a completely identical set of code points, whereas
-# "windows-1255" and "x-mac-hebrew" are two different proper supersets of
-# these code points. windows-1255 defines additional characters in the range
-# 0x80-0x9F as some misc punctuation marks as well as some Hebrew-specific
-# diacritics and additional 'Yiddish' ligature letters in the range 0xc0-0xd6.
-# x-mac-hebrew defines similar additional code points but with a different
-# mapping.
-#
-# As far as an average Hebrew text with no diacritics is concerned, all four
-# charsets are identical with respect to code points. Meaning that for the
-# main Hebrew alphabet, all four map the same values to all 27 Hebrew letters
-# (including final letters).
-#
-# The dominant difference between these charsets is their directionality.
-# "Visual" directionality means that the text is ordered as if the renderer is
-# not aware of a BIDI rendering algorithm. The renderer sees the text and
-# draws it from left to right. The text itself when ordered naturally is read
-# backwards. A buffer of Visual Hebrew generally looks like so:
-# "[last word of first line spelled backwards] [whole line ordered backwards
-# and spelled backwards] [first word of first line spelled backwards]
-# [end of line] [last word of second line] ... etc' "
-# adding punctuation marks, numbers and English text to visual text is
-# naturally also "visual" and from left to right.
-#
-# "Logical" directionality means the text is ordered "naturally" according to
-# the order it is read. It is the responsibility of the renderer to display
-# the text from right to left. A BIDI algorithm is used to place general
-# punctuation marks, numbers and English text in the text.
-#
-# Texts in x-mac-hebrew are almost impossible to find on the Internet. From
-# what little evidence I could find, it seems that its general directionality
-# is Logical.
-#
-# To sum up all of the above, the Hebrew probing mechanism knows about two
-# charsets:
-# Visual Hebrew - "ISO-8859-8" - backwards text - Words and sentences are
-# backwards while line order is natural. For charset recognition purposes
-# the line order is unimportant (In fact, for this implementation, even
-# word order is unimportant).
-# Logical Hebrew - "windows-1255" - normal, naturally ordered text.
-#
-# "ISO-8859-8-I" is a subset of windows-1255 and doesn't need to be
-# specifically identified.
-# "x-mac-hebrew" is also identified as windows-1255. A text in x-mac-hebrew
-# that contain special punctuation marks or diacritics is displayed with
-# some unconverted characters showing as question marks. This problem might
-# be corrected using another model prober for x-mac-hebrew. Due to the fact
-# that x-mac-hebrew texts are so rare, writing another model prober isn't
-# worth the effort and performance hit.
-#
-#### The Prober ####
-#
-# The prober is divided between two SBCharSetProbers and a HebrewProber,
-# all of which are managed, created, fed data, inquired and deleted by the
-# SBCSGroupProber. The two SBCharSetProbers identify that the text is in
-# fact some kind of Hebrew, Logical or Visual. The final decision about which
-# one is it is made by the HebrewProber by combining final-letter scores
-# with the scores of the two SBCharSetProbers to produce a final answer.
-#
-# The SBCSGroupProber is responsible for stripping the original text of HTML
-# tags, English characters, numbers, low-ASCII punctuation characters, spaces
-# and new lines. It reduces any sequence of such characters to a single space.
-# The buffer fed to each prober in the SBCS group prober is pure text in
-# high-ASCII.
-# The two SBCharSetProbers (model probers) share the same language model:
-# Win1255Model.
-# The first SBCharSetProber uses the model normally as any other
-# SBCharSetProber does, to recognize windows-1255, upon which this model was
-# built. The second SBCharSetProber is told to make the pair-of-letter
-# lookup in the language model backwards. This in practice exactly simulates
-# a visual Hebrew model using the windows-1255 logical Hebrew model.
-#
-# The HebrewProber is not using any language model. All it does is look for
-# final-letter evidence suggesting the text is either logical Hebrew or visual
-# Hebrew. Disjointed from the model probers, the results of the HebrewProber
-# alone are meaningless. HebrewProber always returns 0.00 as confidence
-# since it never identifies a charset by itself. Instead, the pointer to the
-# HebrewProber is passed to the model probers as a helper "Name Prober".
-# When the Group prober receives a positive identification from any prober,
-# it asks for the name of the charset identified. If the prober queried is a
-# Hebrew model prober, the model prober forwards the call to the
-# HebrewProber to make the final decision. In the HebrewProber, the
-# decision is made according to the final-letter scores maintained and both
-# model probers' scores. The answer is returned in the form of the name of the
-# charset identified, either "windows-1255" or "ISO-8859-8".
-
-
-class HebrewProber(CharSetProber):
- SPACE = 0x20
- # windows-1255 / ISO-8859-8 code points of interest
- FINAL_KAF = 0xEA
- NORMAL_KAF = 0xEB
- FINAL_MEM = 0xED
- NORMAL_MEM = 0xEE
- FINAL_NUN = 0xEF
- NORMAL_NUN = 0xF0
- FINAL_PE = 0xF3
- NORMAL_PE = 0xF4
- FINAL_TSADI = 0xF5
- NORMAL_TSADI = 0xF6
-
- # Minimum Visual vs Logical final letter score difference.
- # If the difference is below this, don't rely solely on the final letter score
- # distance.
- MIN_FINAL_CHAR_DISTANCE = 5
-
- # Minimum Visual vs Logical model score difference.
- # If the difference is below this, don't rely at all on the model score
- # distance.
- MIN_MODEL_DISTANCE = 0.01
-
- VISUAL_HEBREW_NAME = "ISO-8859-8"
- LOGICAL_HEBREW_NAME = "windows-1255"
-
- def __init__(self) -> None:
- super().__init__()
- self._final_char_logical_score = 0
- self._final_char_visual_score = 0
- self._prev = self.SPACE
- self._before_prev = self.SPACE
- self._logical_prober: Optional[SingleByteCharSetProber] = None
- self._visual_prober: Optional[SingleByteCharSetProber] = None
- self.reset()
-
- def reset(self) -> None:
- self._final_char_logical_score = 0
- self._final_char_visual_score = 0
- # The two last characters seen in the previous buffer,
- # mPrev and mBeforePrev are initialized to space in order to simulate
- # a word delimiter at the beginning of the data
- self._prev = self.SPACE
- self._before_prev = self.SPACE
- # These probers are owned by the group prober.
-
- def set_model_probers(
- self,
- logical_prober: SingleByteCharSetProber,
- visual_prober: SingleByteCharSetProber,
- ) -> None:
- self._logical_prober = logical_prober
- self._visual_prober = visual_prober
-
- def is_final(self, c: int) -> bool:
- return c in [
- self.FINAL_KAF,
- self.FINAL_MEM,
- self.FINAL_NUN,
- self.FINAL_PE,
- self.FINAL_TSADI,
- ]
-
- def is_non_final(self, c: int) -> bool:
- # The normal Tsadi is not a good Non-Final letter due to words like
- # 'lechotet' (to chat) containing an apostrophe after the tsadi. This
- # apostrophe is converted to a space in FilterWithoutEnglishLetters
- # causing the Non-Final tsadi to appear at an end of a word even
- # though this is not the case in the original text.
- # The letters Pe and Kaf rarely display a related behavior of not being
- # a good Non-Final letter. Words like 'Pop', 'Winamp' and 'Mubarak'
- # for example legally end with a Non-Final Pe or Kaf. However, the
- # benefit of these letters as Non-Final letters outweighs the damage
- # since these words are quite rare.
- return c in [self.NORMAL_KAF, self.NORMAL_MEM, self.NORMAL_NUN, self.NORMAL_PE]
-
- def feed(self, byte_str: Union[bytes, bytearray]) -> ProbingState:
- # Final letter analysis for logical-visual decision.
- # Look for evidence that the received buffer is either logical Hebrew
- # or visual Hebrew.
- # The following cases are checked:
- # 1) A word longer than 1 letter, ending with a final letter. This is
- # an indication that the text is laid out "naturally" since the
- # final letter really appears at the end. +1 for logical score.
- # 2) A word longer than 1 letter, ending with a Non-Final letter. In
- # normal Hebrew, words ending with Kaf, Mem, Nun, Pe or Tsadi,
- # should not end with the Non-Final form of that letter. Exceptions
- # to this rule are mentioned above in isNonFinal(). This is an
- # indication that the text is laid out backwards. +1 for visual
- # score
- # 3) A word longer than 1 letter, starting with a final letter. Final
- # letters should not appear at the beginning of a word. This is an
- # indication that the text is laid out backwards. +1 for visual
- # score.
- #
- # The visual score and logical score are accumulated throughout the
- # text and are finally checked against each other in GetCharSetName().
- # No checking for final letters in the middle of words is done since
- # that case is not an indication for either Logical or Visual text.
- #
- # We automatically filter out all 7-bit characters (replace them with
- # spaces) so the word boundary detection works properly. [MAP]
-
- if self.state == ProbingState.NOT_ME:
- # Both model probers say it's not them. No reason to continue.
- return ProbingState.NOT_ME
-
- byte_str = self.filter_high_byte_only(byte_str)
-
- for cur in byte_str:
- if cur == self.SPACE:
- # We stand on a space - a word just ended
- if self._before_prev != self.SPACE:
- # next-to-last char was not a space so self._prev is not a
- # 1 letter word
- if self.is_final(self._prev):
- # case (1) [-2:not space][-1:final letter][cur:space]
- self._final_char_logical_score += 1
- elif self.is_non_final(self._prev):
- # case (2) [-2:not space][-1:Non-Final letter][
- # cur:space]
- self._final_char_visual_score += 1
- else:
- # Not standing on a space
- if (
- (self._before_prev == self.SPACE)
- and (self.is_final(self._prev))
- and (cur != self.SPACE)
- ):
- # case (3) [-2:space][-1:final letter][cur:not space]
- self._final_char_visual_score += 1
- self._before_prev = self._prev
- self._prev = cur
-
- # Forever detecting, till the end or until both model probers return
- # ProbingState.NOT_ME (handled above)
- return ProbingState.DETECTING
-
- @property
- def charset_name(self) -> str:
- assert self._logical_prober is not None
- assert self._visual_prober is not None
-
- # Make the decision: is it Logical or Visual?
- # If the final letter score distance is dominant enough, rely on it.
- finalsub = self._final_char_logical_score - self._final_char_visual_score
- if finalsub >= self.MIN_FINAL_CHAR_DISTANCE:
- return self.LOGICAL_HEBREW_NAME
- if finalsub <= -self.MIN_FINAL_CHAR_DISTANCE:
- return self.VISUAL_HEBREW_NAME
-
- # It's not dominant enough, try to rely on the model scores instead.
- modelsub = (
- self._logical_prober.get_confidence() - self._visual_prober.get_confidence()
- )
- if modelsub > self.MIN_MODEL_DISTANCE:
- return self.LOGICAL_HEBREW_NAME
- if modelsub < -self.MIN_MODEL_DISTANCE:
- return self.VISUAL_HEBREW_NAME
-
- # Still no good, back to final letter distance, maybe it'll save the
- # day.
- if finalsub < 0.0:
- return self.VISUAL_HEBREW_NAME
-
- # (finalsub > 0 - Logical) or (don't know what to do) default to
- # Logical.
- return self.LOGICAL_HEBREW_NAME
-
- @property
- def language(self) -> str:
- return "Hebrew"
-
- @property
- def state(self) -> ProbingState:
- assert self._logical_prober is not None
- assert self._visual_prober is not None
-
- # Remain active as long as any of the model probers are active.
- if (self._logical_prober.state == ProbingState.NOT_ME) and (
- self._visual_prober.state == ProbingState.NOT_ME
- ):
- return ProbingState.NOT_ME
- return ProbingState.DETECTING
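
For orientation, here is a minimal sketch of how the pieces described in the comments above fit together. It mirrors the wiring that chardet's SBCSGroupProber performs (one logical windows-1255 model prober, one reversed "visual" prober, both pointing back at the HebrewProber as their name prober). The import paths and the `WINDOWS_1255_HEBREW_MODEL` name follow current chardet releases and may differ between versions; the sample text is only illustrative.

```python
# Sketch of the wiring described above (assumes a current chardet release;
# names such as WINDOWS_1255_HEBREW_MODEL may vary by version).
from chardet.hebrewprober import HebrewProber
from chardet.langhebrewmodel import WINDOWS_1255_HEBREW_MODEL
from chardet.sbcharsetprober import SingleByteCharSetProber

hebrew_prober = HebrewProber()
# Same language model twice: once read normally (logical Hebrew), once with the
# pair-of-letter lookup reversed, which simulates visual Hebrew.
logical_prober = SingleByteCharSetProber(
    WINDOWS_1255_HEBREW_MODEL, is_reversed=False, name_prober=hebrew_prober
)
visual_prober = SingleByteCharSetProber(
    WINDOWS_1255_HEBREW_MODEL, is_reversed=True, name_prober=hebrew_prober
)
hebrew_prober.set_model_probers(logical_prober, visual_prober)

# Feed the same buffer to all three probers, then let the HebrewProber combine
# its final-letter evidence with the two model scores.
data = "שלום עולם. זהו טקסט לדוגמה.".encode("windows-1255")
for prober in (hebrew_prober, logical_prober, visual_prober):
    prober.feed(data)

print(hebrew_prober.charset_name)  # "windows-1255" (logical) or "ISO-8859-8" (visual)
```

On a buffer this short the final-letter score gap rarely reaches MIN_FINAL_CHAR_DISTANCE, so the decision falls through to the model-score comparison, exactly as charset_name above describes.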
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/hazmat/primitives/serialization/pkcs7.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/hazmat/primitives/serialization/pkcs7.py
deleted file mode 100644
index 9998bcaa1131db00cf432f24fb1731b65ae697cd..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/hazmat/primitives/serialization/pkcs7.py
+++ /dev/null
@@ -1,235 +0,0 @@
-# This file is dual licensed under the terms of the Apache License, Version
-# 2.0, and the BSD License. See the LICENSE file in the root of this repository
-# for complete details.
-
-from __future__ import annotations
-
-import email.base64mime
-import email.generator
-import email.message
-import email.policy
-import io
-import typing
-
-from cryptography import utils, x509
-from cryptography.hazmat.bindings._rust import pkcs7 as rust_pkcs7
-from cryptography.hazmat.primitives import hashes, serialization
-from cryptography.hazmat.primitives.asymmetric import ec, rsa
-from cryptography.utils import _check_byteslike
-
-
-def load_pem_pkcs7_certificates(data: bytes) -> typing.List[x509.Certificate]:
- from cryptography.hazmat.backends.openssl.backend import backend
-
- return backend.load_pem_pkcs7_certificates(data)
-
-
-def load_der_pkcs7_certificates(data: bytes) -> typing.List[x509.Certificate]:
- from cryptography.hazmat.backends.openssl.backend import backend
-
- return backend.load_der_pkcs7_certificates(data)
-
-
-def serialize_certificates(
- certs: typing.List[x509.Certificate],
- encoding: serialization.Encoding,
-) -> bytes:
- return rust_pkcs7.serialize_certificates(certs, encoding)
-
-
-PKCS7HashTypes = typing.Union[
- hashes.SHA224,
- hashes.SHA256,
- hashes.SHA384,
- hashes.SHA512,
-]
-
-PKCS7PrivateKeyTypes = typing.Union[
- rsa.RSAPrivateKey, ec.EllipticCurvePrivateKey
-]
-
-
-class PKCS7Options(utils.Enum):
- Text = "Add text/plain MIME type"
- Binary = "Don't translate input data into canonical MIME format"
- DetachedSignature = "Don't embed data in the PKCS7 structure"
- NoCapabilities = "Don't embed SMIME capabilities"
- NoAttributes = "Don't embed authenticatedAttributes"
- NoCerts = "Don't embed signer certificate"
-
-
-class PKCS7SignatureBuilder:
- def __init__(
- self,
- data: typing.Optional[bytes] = None,
- signers: typing.List[
- typing.Tuple[
- x509.Certificate,
- PKCS7PrivateKeyTypes,
- PKCS7HashTypes,
- ]
- ] = [],
- additional_certs: typing.List[x509.Certificate] = [],
- ):
- self._data = data
- self._signers = signers
- self._additional_certs = additional_certs
-
- def set_data(self, data: bytes) -> PKCS7SignatureBuilder:
- _check_byteslike("data", data)
- if self._data is not None:
- raise ValueError("data may only be set once")
-
- return PKCS7SignatureBuilder(data, self._signers)
-
- def add_signer(
- self,
- certificate: x509.Certificate,
- private_key: PKCS7PrivateKeyTypes,
- hash_algorithm: PKCS7HashTypes,
- ) -> PKCS7SignatureBuilder:
- if not isinstance(
- hash_algorithm,
- (
- hashes.SHA224,
- hashes.SHA256,
- hashes.SHA384,
- hashes.SHA512,
- ),
- ):
- raise TypeError(
- "hash_algorithm must be one of hashes.SHA224, "
- "SHA256, SHA384, or SHA512"
- )
- if not isinstance(certificate, x509.Certificate):
- raise TypeError("certificate must be a x509.Certificate")
-
- if not isinstance(
- private_key, (rsa.RSAPrivateKey, ec.EllipticCurvePrivateKey)
- ):
- raise TypeError("Only RSA & EC keys are supported at this time.")
-
- return PKCS7SignatureBuilder(
- self._data,
- self._signers + [(certificate, private_key, hash_algorithm)],
- )
-
- def add_certificate(
- self, certificate: x509.Certificate
- ) -> PKCS7SignatureBuilder:
- if not isinstance(certificate, x509.Certificate):
- raise TypeError("certificate must be a x509.Certificate")
-
- return PKCS7SignatureBuilder(
- self._data, self._signers, self._additional_certs + [certificate]
- )
-
- def sign(
- self,
- encoding: serialization.Encoding,
- options: typing.Iterable[PKCS7Options],
- backend: typing.Any = None,
- ) -> bytes:
- if len(self._signers) == 0:
- raise ValueError("Must have at least one signer")
- if self._data is None:
- raise ValueError("You must add data to sign")
- options = list(options)
- if not all(isinstance(x, PKCS7Options) for x in options):
- raise ValueError("options must be from the PKCS7Options enum")
- if encoding not in (
- serialization.Encoding.PEM,
- serialization.Encoding.DER,
- serialization.Encoding.SMIME,
- ):
- raise ValueError(
- "Must be PEM, DER, or SMIME from the Encoding enum"
- )
-
- # Text is a meaningless option unless it is accompanied by
- # DetachedSignature
- if (
- PKCS7Options.Text in options
- and PKCS7Options.DetachedSignature not in options
- ):
- raise ValueError(
- "When passing the Text option you must also pass "
- "DetachedSignature"
- )
-
- if PKCS7Options.Text in options and encoding in (
- serialization.Encoding.DER,
- serialization.Encoding.PEM,
- ):
- raise ValueError(
- "The Text option is only available for SMIME serialization"
- )
-
- # No attributes implies no capabilities so we'll error if you try to
- # pass both.
- if (
- PKCS7Options.NoAttributes in options
- and PKCS7Options.NoCapabilities in options
- ):
- raise ValueError(
- "NoAttributes is a superset of NoCapabilities. Do not pass "
- "both values."
- )
-
- return rust_pkcs7.sign_and_serialize(self, encoding, options)
-
-
-def _smime_encode(
- data: bytes, signature: bytes, micalg: str, text_mode: bool
-) -> bytes:
- # This function works pretty hard to replicate what OpenSSL does
- # precisely. For good and for ill.
-
- m = email.message.Message()
- m.add_header("MIME-Version", "1.0")
- m.add_header(
- "Content-Type",
- "multipart/signed",
- protocol="application/x-pkcs7-signature",
- micalg=micalg,
- )
-
- m.preamble = "This is an S/MIME signed message\n"
-
- msg_part = OpenSSLMimePart()
- msg_part.set_payload(data)
- if text_mode:
- msg_part.add_header("Content-Type", "text/plain")
- m.attach(msg_part)
-
- sig_part = email.message.MIMEPart()
- sig_part.add_header(
- "Content-Type", "application/x-pkcs7-signature", name="smime.p7s"
- )
- sig_part.add_header("Content-Transfer-Encoding", "base64")
- sig_part.add_header(
- "Content-Disposition", "attachment", filename="smime.p7s"
- )
- sig_part.set_payload(
- email.base64mime.body_encode(signature, maxlinelen=65)
- )
- del sig_part["MIME-Version"]
- m.attach(sig_part)
-
- fp = io.BytesIO()
- g = email.generator.BytesGenerator(
- fp,
- maxheaderlen=0,
- mangle_from_=False,
- policy=m.policy.clone(linesep="\r\n"),
- )
- g.flatten(m)
- return fp.getvalue()
-
-
-class OpenSSLMimePart(email.message.MIMEPart):
- # A MIMEPart subclass that replicates OpenSSL's behavior of not including
- # a newline if there are no headers.
- def _write_headers(self, generator) -> None:
- if list(self.raw_items()):
- generator._write_headers(self)
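
To make the builder above concrete, here is a minimal signing sketch. The file names are placeholder paths, not part of the original module; the calls themselves (`set_data`, `add_signer`, `sign`, and the `PKCS7Options`/`Encoding` values) are the ones defined in this file.

```python
# Minimal sketch of driving PKCS7SignatureBuilder; "signer.pem" and
# "signer_key.pem" are placeholder paths to an RSA or EC signing pair.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.serialization import pkcs7

with open("signer.pem", "rb") as f:
    certificate = x509.load_pem_x509_certificate(f.read())
with open("signer_key.pem", "rb") as f:
    private_key = serialization.load_pem_private_key(f.read(), password=None)

# Text requires DetachedSignature, and both are only meaningful for SMIME output,
# matching the option checks enforced in sign() above.
options = [pkcs7.PKCS7Options.DetachedSignature, pkcs7.PKCS7Options.Text]
signed = (
    pkcs7.PKCS7SignatureBuilder()
    .set_data(b"hello world\n")
    .add_signer(certificate, private_key, hashes.SHA256())
    .sign(serialization.Encoding.SMIME, options)
)
```

The same builder accepts Encoding.PEM or Encoding.DER when no S/MIME envelope is needed; in that case the Text option must be dropped, since sign() rejects it for non-SMIME encodings.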
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/pyext/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/pyext/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/cihyFjudo/fairness-paper-search/Kabootar Love Full Movie Download [Extra Quality] Hd.md b/spaces/cihyFjudo/fairness-paper-search/Kabootar Love Full Movie Download [Extra Quality] Hd.md
deleted file mode 100644
index d1262e5fa75230b6f73bc9ed10a5185d5cd4b24f..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Kabootar Love Full Movie Download [Extra Quality] Hd.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
")*/ $("#header_tv_icon").addClass("tv_highlighted"); $("#tv_icon").attr("src","/assets/tv_active.svg") } var navigation = [ '', '' ]; var layout_type = "movie" var items_count = "100" var platfrm = getShemarooCookies().mobile_browser_type if(layout_type == "song" || layout_type == "video" || layout_type == "videos") var items_0 = 1; var stage_padding = 50; var items_576 = 2; var items_768 = 3; var items_992 = 4; var items_1200 = 5 if(items_count > 5 && platfrm != "mobile") $(".see-more").show(); else if(items_count > 2 && platfrm == "mobile") $(".see-more").show(); else var items_0 = 2; var stage_padding = 25; var items_576 = 3; var items_768 = 4; var items_992 = 5; var items_1200 = 7; if(items_count > 7 && platfrm != "mobile") $(".see-more").show(); else if(items_count > 3 && platfrm == "mobile") $(".see-more").show(); $(".shemaroo_player").empty(); $(document).ready(function() if (getShemarooCookies().theme_option == "dark_theme") $(".preview_video_light").remove() $(".preview_video_dark").show() $(".watch_later_light").remove() $(".watch_later_dark").show() $(".share_light").remove() $(".share_dark").show() $(".download_light").remove() $(".download_dark").show() else if (getShemarooCookies().theme_option == "light_theme" ); var list_count = "1" $("#content_info_0").addClass("active") for(var i = 0; i < list_count ; i++) $("#content_info_" + i).click(function() $(".tab_chk").removeClass("active") var data = $(this).data("value").split(",") $("#content_info_"+data[2]).addClass("active") if(data[2] == 0) $("#synopsis_data").show() $('.season_all_results').html(''); else if(data[2] == 1) $("#synopsis_data").hide() $('.season_all_results').html(''); $(".relative-content-scroll").css("padding-bottom", 184); $(".scroll_loader").show(); trailer_list(data[0],data[1]) )function trailer_list(catalog_id,home_link){ $(".relative-content-scroll").css("padding-bottom", 120); $(".scroll_loader").show(); // var item_id = '' $.ajax({ url: "/catalogs/get_trailers", type: "GET", data: catalog_id : catalog_id, item_id : home_link , success: function(response){ $('.season_all_results').html(''); var list_data = ""; var list_items = ""; var trailer_list = response.trailer_list; for (var i=0; i< trailer_list.length; i++){ console.log(trailer_list[i]) var item= trailer_list[i].split("$"); list_data += ''+item[1].split("|")[0]+'
-
download Madrsi unlimited Movies and videos Download Here.Madrsi Hd,3gp. mp4 320p and More Videos You Can Download Easyly. tamilrockers and movierulz, tamilgun, filmywap, and pagalworld videos and Movies download.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/The Kokuhaku Full Movie Download In Italian.md b/spaces/cihyFjudo/fairness-paper-search/The Kokuhaku Full Movie Download In Italian.md
deleted file mode 100644
index 77b56f99bc137a9757128bd8c3f6e9a485a4aa65..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/The Kokuhaku Full Movie Download In Italian.md
+++ /dev/null
@@ -1,6 +0,0 @@
-