diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Corel Draw X5 Download For Pc 64 Bit With Crack.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Corel Draw X5 Download For Pc 64 Bit With Crack.md
deleted file mode 100644
index e5ee1a1221718c0903fee05a0fd69d4b2edc6099..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Corel Draw X5 Download For Pc 64 Bit With Crack.md
+++ /dev/null
@@ -1,23 +0,0 @@
-
-
How to Download and Install Corel Draw X5 for PC 64 Bit
-
Corel Draw X5 is a powerful vector graphics application that can help you create stunning designs for logos, flyers, posters, banners, and more. If you want to download and install Corel Draw X5 for PC 64 bit, here are the steps you need to follow:
-
-
Visit the official website of Corel Draw and click on the "Download Trial" button.
-
Select the "CorelDRAW Graphics Suite X5" option and enter your email address to get the download link.
-
Open the link in your email and click on the "Download Now" button to start downloading the setup file.
-
Once the download is complete, run the setup file and follow the instructions on the screen to install Corel Draw X5 on your PC.
-
You can use Corel Draw X5 for free for 15 days with full features. After that, you need to purchase a license key to activate the software.
-
-
Congratulations! You have successfully downloaded and installed Corel Draw X5 for PC 64 bit. Now you can enjoy creating amazing graphics with this software.
Corel Draw X5 has many features and tools that can help you create professional-looking graphics. Some of the features include:
-
-
A redesigned user interface that is more intuitive and customizable.
-
A new Corel Connect tool that lets you access online content and resources from within the software.
-
A new Corel PowerTRACE tool that lets you convert bitmap images into vector graphics with ease.
-
A new Corel Photo-Paint tool that lets you edit and enhance photos with filters, effects, and adjustments.
-
A new Corel Website Creator tool that lets you design and publish websites with drag-and-drop functionality.
-
-
With Corel Draw X5, you can export your graphics to various formats, such as PDF, JPG, PNG, SVG, and EPS, and optimize them for web or print by adjusting the resolution, color mode, and compression settings. The built-in templates and clipart also help you get started quickly.
If you want to learn more about Corel Draw X5, visit the official website or watch the tutorials and videos available online. You can also join the Corel community for tips and feedback from other users. Corel Draw X5 is versatile, powerful software that can help you unleash your creativity and express your ideas visually.
Are you ready to try Corel Draw X5 for yourself? Download the free trial today. Whether you are a beginner or a professional, Corel Draw X5 has something for everyone.
- ddb901b051
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Counter Strike 1.6 Orange Box Download The Ultimate Collection of Valve Games.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Counter Strike 1.6 Orange Box Download The Ultimate Collection of Valve Games.md
deleted file mode 100644
index 32ce373d61aa44d53244fbffd7bf7cbc00706507..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Counter Strike 1.6 Orange Box Download The Ultimate Collection of Valve Games.md
+++ /dev/null
@@ -1,92 +0,0 @@
-
-
Counter Strike 1.6 Orange Box Download
-
If you are a fan of first-person shooter games, you have probably heard of Counter Strike 1.6, one of the most popular and influential games of all time. Counter Strike 1.6 is a multiplayer game that pits a team of terrorists against a team of counter-terrorists across a variety of scenarios and maps. The game is known for its fast-paced action, realistic physics, tactical gameplay, and competitive community.
-
But did you know that you can also play Counter Strike 1.6 with some extra features and benefits? That's right, there is a version of the game called Counter Strike 1.6 Orange Box, which is based on the famous Orange Box bundle released by Valve in 2007. The Orange Box is a set of five games that use the Source engine: Half-Life 2, Half-Life 2 Episode One, Half-Life 2 Episode Two, Portal, and Team Fortress 2.
In this article, we will show you how to download Counter Strike 1.6 Orange Box for free, what its features are, and how to install and play it on your computer. So, if you are ready to experience one of the best versions of Counter Strike 1.6 ever made, read on!
-
Features of Counter Strike 1.6 Orange Box
-
Counter Strike 1.6 Orange Box is not just a regular version of the game. It has some unique features that make it stand out from other versions. Here are some of them:
-
-
Original design and models: The game has the original graphics, sounds, weapons, maps, and characters from Counter Strike 1.6, which give it a classic and nostalgic feel.
-
English language and standard config: The game is fully translated into English and has a standard configuration file (cfg) that optimizes the game settings for better performance.
-
Bots and server search: The game has built-in bots (zbots) that you can control with the "H" button. You can also use the online server search function to find servers that suit your preferences.
-
Protection and performance: The game has a strong protection mechanism that prevents hacking, cheating, or modifying the game files. The game also runs smoothly on any Windows operating system (XP, Vista, 7, 8, or 10).
-
-
How to install and play Counter Strike 1.6 Orange Box
-
Installing and playing Counter Strike 1.6 Orange Box is very easy and fast. Just follow these simple steps:
-
-
Download the setup or torrent file: You can download the game from our website using either a direct link or a torrent link. The file size is about 184 MB.
-
Run the installer and choose the destination folder: After downloading the file, run the installer and follow the instructions on the screen. You can choose any folder where you want to install the game.
-
Launch the game and adjust the settings: After installing the game, launch it from your desktop or start menu shortcut. You can adjust your video, audio, keyboard, mouse, and other settings from the options menu.
-
Join a server or create your own: To play online with other players, you can join any server from the server list or use the "Find servers" button to search for servers by name, map, ping, or players. You can also create your own server by using the "Create server" button and choosing your desired map and game mode.
-
-
Conclusion
-
Counter Strike 1.6 Orange Box is a great way to enjoy one of the best games ever made with some extra features and benefits. It has original design and models, English language and standard config, bots and server search, protection and performance, and more.
-
If you want to download Counter Strike 1.6 Orange Box for free, you can do so from our website using either a direct link or a torrent link. The installation process is very simple and fast.
-
-
So what are you waiting for? Download Counter Strike 1.6 Orange Box today and have fun playing one of the most popular and influential games of all time!
What is Counter Strike?
-
Counter Strike is a first-person shooter game that was released in 1999 as a mod for Half-Life. It became one of the most popular multiplayer games of all time, with millions of players worldwide.
-
What is Orange Box?
-
Orange Box is a video game compilation that was released by Valve in 2007 for Windows and Xbox 360. It contains five games that use the Source engine: Half-Life 2, Half-Life 2 Episode One, Half-Life 2 Episode Two, Portal, and Team Fortress 2.
-
What is Counter Strike 1.6 Orange Box?
-
Counter Strike 1.6 Orange Box is a version of Counter Strike 1.6 that is based on the Orange Box bundle released by Valve in 2007. It has some extra features such as original design and models, English language and standard config, bots and server search, protection and performance.
-
How to download Counter Strike 1.6 Orange Box?
-
You can download Counter Strike 1.6 Orange Box from our website using either a direct link or a torrent link.
-
How to install Counter Strike 1.6 Orange Box?
-
You can install Counter Strike 1.6 Orange Box by running the installer file that you downloaded from our website and following the instructions on the screen.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/AUTODATA 8.89 Crack FULL 2018 64 Bit TOP.md b/spaces/1gistliPinn/ChatGPT4/Examples/AUTODATA 8.89 Crack FULL 2018 64 Bit TOP.md
deleted file mode 100644
index bef3205a0d450c4e64bbec2f975f66aa64b832c7..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/AUTODATA 8.89 Crack FULL 2018 64 Bit TOP.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
-January 20, 2018 — S/O Satyamurthy [2015] ... Jaaruko Full Song _ S/O Satyamurthy Full Video Song - Allu Arjun Dvxs6 ... S/O Satyamurthy Movie Audio Launch, Mumbai. 8a78ff9644
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Ayurved Sar Sangrah Book Zip.md b/spaces/1gistliPinn/ChatGPT4/Examples/Ayurved Sar Sangrah Book Zip.md
deleted file mode 100644
index b84a8d5f6fa1ef40fbb3af15c092aea64c1d9917..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Ayurved Sar Sangrah Book Zip.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-January 20, 2021 - Book page image (1 of 866). Identifier: ayurved-sara-sangraha.htm. Translation: www.ayurveda.org.uk/speakingwithassam. 8a78ff9644
-
-
-
diff --git a/spaces/1phancelerku/anime-remove-background/Dream Live APK Mod - The Best App for Live Streaming Fans (No Top Up Required).md b/spaces/1phancelerku/anime-remove-background/Dream Live APK Mod - The Best App for Live Streaming Fans (No Top Up Required).md
deleted file mode 100644
index 6ede707e48bba5773cac2daf29eadd2f305b7311..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Dream Live APK Mod - The Best App for Live Streaming Fans (No Top Up Required).md
+++ /dev/null
@@ -1,103 +0,0 @@
-
-
Download Dream Live APK Mod: A Live Streaming Platform for Entertainment Lovers
-
Do you love watching live streams of talented and charming hosts? Do you want to interact with them and send them virtual gifts? Do you want to start your own live stream and share your passion with the world? If you answered yes to any of these questions, then you should download Dream Live APK Mod, a live streaming platform that focuses on the entertainment lifestyle.
-
What is Dream Live APK Mod?
-
Dream Live APK Mod is a modified version of the original Dream Live app, a live streaming platform that features talented hosts who broadcast and interact with viewers in real time. You can watch live streams of singing, dancing, talking, gaming, and more, and chat with the hosts and other viewers. You can also send and receive virtual gifts, such as flowers, hearts, diamonds, and cars, to show your support and appreciation, or start your own live stream and showcase your talent to the world.
Dream Live APK Mod has many features that make it more enjoyable and convenient than the original app. Here are some of them:
-
VIP Unlocked
-
With Dream Live APK Mod, you can enjoy all the benefits of being a VIP member without paying anything. You can access exclusive live streams, chat rooms, stickers, filters, and more. You can also get more attention from the hosts and other viewers.
-
-
No Ads
-
Dream Live APK Mod removes all the annoying ads that interrupt your viewing experience. You can watch live streams without any distractions or interruptions.
-
Unlimited Gifts
-
Dream Live APK Mod gives you unlimited coins and diamonds that you can use to send gifts to your favorite hosts. You can also receive gifts from other viewers and exchange them for real money.
-
Live Interaction
-
Dream Live APK Mod allows you to interact with the hosts and other viewers in real time. You can chat with them using text, voice, or video messages, join private chat rooms and group chats, and take part in events and activities such as contests, games, quizzes, and polls.
-
Variety of Content
-
Dream Live APK Mod has a variety of content that suits your preferences and interests. You can watch live streams of different categories, such as music, dance, comedy, beauty, fashion, sports, gaming, education, travel, and more. You can also discover new and popular hosts by browsing the recommended list or searching by keywords.
-
How to Download and Install Dream Live APK Mod?
-
If you want to download and install Dream Live APK Mod on your Android device, you need to follow these simple steps:
-
Download Link
-
You can download Dream Live APK Mod from this link:
This link will take you to a trusted website where you can download the latest version of the modded app.
Installation Steps
-
After downloading the APK file, you need to install it on your device. Here are the steps to do so:
-
-
Go to your device settings and enable the option to install apps from unknown sources. This will allow you to install apps that are not from the Google Play Store.
-
Locate the downloaded APK file on your device storage and tap on it to start the installation process.
-
Follow the instructions on the screen and wait for the installation to complete.
-
Launch the app and enjoy watching and streaming live videos.
-
-
How to Use Dream Live APK Mod?
-
Using Dream Live APK Mod is very easy and fun. Here are some tips on how to use it:
-
Create an Account
-
To use Dream Live APK Mod, you need to create an account first. You can do so by using your phone number, email address, or social media accounts. You can also choose a username, password, and profile picture for your account. You can also edit your personal information, such as your gender, age, location, and bio.
-
Browse and Watch Live Streams
-
To browse live streams, swipe left or right on the home screen to see different categories of content, tap the magnifying glass icon to search for specific hosts or keywords, or tap the heart icon to see and follow your favorite hosts. To watch a live stream, just tap on it and enjoy the show. You can chat with the host and other viewers by typing or by sending voice or video messages, and send gifts by tapping the gift icon and choosing from various options.
-
Send and Receive Gifts
-
To send gifts to your favorite hosts, you need to have coins or diamonds in your account. You can get coins or diamonds by watching ads, completing tasks, inviting friends, or buying them with real money. You can also receive gifts from other viewers if they like your live stream or chat messages. You can exchange the gifts you receive for real money by withdrawing them to your bank account or PayPal.
-
Start Your Own Live Stream
-
To start your own live stream, tap the camera icon at the bottom of the home screen, then choose a title, category, and cover image for your stream. You can use various filters, stickers, and effects to enhance your appearance and mood, and invite guests or co-hosts by tapping the invite icon. Once you are ready, tap the start button and go live. You can interact with your viewers by chatting with them or responding to their gifts, and end your live stream at any time by tapping the stop button.
-
Pros and Cons of Dream Live APK Mod
-
Dream Live APK Mod has many pros and cons that you should consider before using it. Here are some of them:
-
Pros
-
-
Free and Premium Features: Dream Live APK Mod gives you access to all the features of the original app, plus some extra features that are only available for VIP members or paid users. You can enjoy watching exclusive live streams, using premium stickers and filters, sending unlimited gifts, and more.
-
Easy and Fun to Use: Dream Live APK Mod has a simple and user-friendly interface that makes it easy and fun to use. You can easily navigate through different categories of content, search for hosts or keywords, chat with hosts and viewers, send gifts, start your own live stream, and more.
-
Meet New People and Make Friends: Dream Live APK Mod allows you to meet new people and make friends from different countries and cultures. You can chat with them using text, voice, or video messages, join private or group chats, participate in events and activities, follow them, send them gifts, and more.
-
-
Cons
-
-
Requires Internet Connection: Dream Live APK Mod requires a stable internet connection to work properly. If you have a slow or unstable internet connection, you may experience buffering, lagging, freezing, or crashing issues while watching or streaming live videos.
-
May Contain Inappropriate Content: Dream Live APK Mod may contain inappropriate content that is not suitable for minors or sensitive viewers. Some hosts may show nudity, violence, profanity, or other offensive content in their live streams. You should be careful when choosing what to watch and whom to interact with.
-
May Not Be Compatible with Some Devices: Dream Live APK Mod may not be compatible with some devices or operating systems. Some devices may not support the installation of apps from unknown sources or may have security or performance issues while running the app. You should check the compatibility of your device before downloading and installing the app.
-
-
Conclusion
-
Dream Live APK Mod is a live streaming platform that focuses on the entertainment lifestyle. You can watch live streams of various categories, chat with hosts and viewers, send and receive gifts, and start your own live stream. You can also enjoy all the premium features of the app for free, such as VIP unlocked, no ads, and unlimited gifts. However, you should also be aware of the cons of the app, such as requiring internet connection, containing inappropriate content, and not being compatible with some devices. If you are looking for a fun and interactive way to spend your time online, you should download Dream Live APK Mod and give it a try.
-
FAQs
-
Here are some frequently asked questions about Dream Live APK Mod:
-
-
Is Dream Live APK Mod safe to use?
-
Dream Live APK Mod is safe to use as long as you download it from a trusted website and scan it with an antivirus program before installing it. You should also avoid clicking on suspicious links or downloading unknown files while using the app.
-
Is Dream Live APK Mod legal to use?
-
Dream Live APK Mod is not legal to use as it violates the terms and conditions of the original app. You may face legal consequences if you use the app for illegal purposes or infringe on the rights of the original app developers or hosts. You should use the app at your own risk and responsibility.
-
How can I update Dream Live APK Mod?
-
Dream Live APK Mod does not update automatically as it is not from the Google Play Store. You need to check for updates manually by visiting the website where you downloaded the app or searching for other sources online. You should also uninstall the previous version of the app before installing the new one.
-
How can I contact Dream Live APK Mod support?
-
Dream Live APK Mod does not have an official support team as it is not from the original app developers. You can try to contact the modders who created the app or other users who have used the app for help or feedback. You can also check online forums or blogs for tips and tricks on how to use the app.
-
How can I delete Dream Live APK Mod?
-
If you want to delete Dream Live APK Mod from your device, you can do so by following these steps:
-
-
Go to your device settings and tap on apps or applications.
-
Find and tap on Dream Live APK Mod and tap on uninstall.
-
Confirm your action and wait for the uninstallation to complete.
-
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/ddim/pipeline_ddim.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/ddim/pipeline_ddim.py
deleted file mode 100644
index e797a45141adb41b65124aaa0da99c00980d7f99..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/ddim/pipeline_ddim.py
+++ /dev/null
@@ -1,116 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from typing import List, Optional, Tuple, Union
-
-import paddle
-
-from ...pipeline_utils import DiffusionPipeline, ImagePipelineOutput
-
-
-class DDIMPipeline(DiffusionPipeline):
- r"""
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- Parameters:
- unet ([`UNet2DModel`]): U-Net architecture to denoise the encoded image.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image. Can be one of
- [`DDPMScheduler`], or [`DDIMScheduler`].
- """
-
- def __init__(self, unet, scheduler):
- super().__init__()
- self.register_modules(unet=unet, scheduler=scheduler)
-
- @paddle.no_grad()
- def __call__(
- self,
- batch_size: int = 1,
- generator: Optional[Union[paddle.Generator, List[paddle.Generator]]] = None,
- eta: float = 0.0,
- num_inference_steps: int = 50,
- use_clipped_model_output: Optional[bool] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- ) -> Union[ImagePipelineOutput, Tuple]:
- r"""
- Args:
- batch_size (`int`, *optional*, defaults to 1):
- The number of images to generate.
- generator (`paddle.Generator`, *optional*):
- One or a list of paddle generator(s) to make generation deterministic.
- eta (`float`, *optional*, defaults to 0.0):
- The eta parameter which controls the scale of the variance (0 is DDIM and 1 is one type of DDPM).
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- use_clipped_model_output (`bool`, *optional*, defaults to `None`):
- if `True` or `False`, see documentation for `DDIMScheduler.step`. If `None`, nothing is passed
- downstream to the scheduler. So use `None` for schedulers which don't support this argument.
- output_type (`str`, *optional*, defaults to `"pil"`):
- The output format of the generate image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipeline_utils.ImagePipelineOutput`] instead of a plain tuple.
-
- Returns:
- [`~pipeline_utils.ImagePipelineOutput`] or `tuple`: [`~pipelines.utils.ImagePipelineOutput`] if
- `return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the
- generated images.
- """
- # Sample gaussian noise to begin loop
- if isinstance(self.unet.sample_size, int):
- image_shape = (batch_size, self.unet.in_channels, self.unet.sample_size, self.unet.sample_size)
- else:
- image_shape = (batch_size, self.unet.in_channels, *self.unet.sample_size)
-
- if isinstance(generator, list) and len(generator) != batch_size:
- raise ValueError(
- f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
- f" size of {batch_size}. Make sure the batch size matches the length of the generators."
- )
-
- if isinstance(generator, list):
- shape = (1,) + image_shape[1:]
- image = [paddle.randn(shape, generator=generator[i], dtype=self.unet.dtype) for i in range(batch_size)]
- image = paddle.concat(image, axis=0)
- else:
- image = paddle.randn(image_shape, generator=generator, dtype=self.unet.dtype)
-
- # set step values
- self.scheduler.set_timesteps(num_inference_steps)
-
- for t in self.progress_bar(self.scheduler.timesteps):
- # 1. predict noise model_output
- model_output = self.unet(image, t).sample
-
- # 2. predict previous mean of image x_t-1 and add variance depending on eta
- # eta corresponds to η in paper and should be between [0, 1]
- # do x_t -> x_t-1
- image = self.scheduler.step(
- model_output, t, image, eta=eta, use_clipped_model_output=use_clipped_model_output, generator=generator
- ).prev_sample
-
- image = (image / 2 + 0.5).clip(0, 1)
- image = image.transpose([0, 2, 3, 1]).cast("float32").numpy()
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return (image,)
-
- return ImagePipelineOutput(images=image)
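A minimal usage sketch for the DDIMPipeline above, assuming UNet2DModel and DDIMScheduler can be imported from ppdiffusers as in the upstream diffusers API (the import paths and model hyperparameters here are assumptions, not part of this file):

from ppdiffusers import DDIMPipeline, DDIMScheduler, UNet2DModel

# Build the pipeline from a denoising UNet and a DDIM scheduler
# (hyperparameters are illustrative only; a trained model would normally be loaded instead).
unet = UNet2DModel(sample_size=32, in_channels=3, out_channels=3)
scheduler = DDIMScheduler(num_train_timesteps=1000)
pipe = DDIMPipeline(unet=unet, scheduler=scheduler)

# Generate one image; eta=0.0 keeps sampling deterministic DDIM,
# and output_type="pil" returns PIL images via numpy_to_pil.
result = pipe(batch_size=1, num_inference_steps=50, eta=0.0, output_type="pil")
result.images[0].save("ddim_sample.png")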
diff --git a/spaces/801artistry/RVC801/Applio-RVC-Fork/utils/backups.py b/spaces/801artistry/RVC801/Applio-RVC-Fork/utils/backups.py
deleted file mode 100644
index b814f8184792e80e2324685436053d61487110b1..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/Applio-RVC-Fork/utils/backups.py
+++ /dev/null
@@ -1,141 +0,0 @@
-import os
-import shutil
-import hashlib
-import time
-import base64
-
-
-
-
-LOGS_FOLDER = '/content/Applio-RVC-Fork/logs'
-WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights'
-GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup'
-
-def import_google_drive_backup():
- print("Importing Google Drive backup...")
- weights_exist = False
- for root, dirs, files in os.walk(GOOGLE_DRIVE_PATH):
- for filename in files:
- filepath = os.path.join(root, filename)
- if os.path.isfile(filepath) and not filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')):
- backup_filepath = os.path.join(LOGS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH))
- backup_folderpath = os.path.dirname(backup_filepath)
- if not os.path.exists(backup_folderpath):
- os.makedirs(backup_folderpath)
- print(f'Created backup folder: {backup_folderpath}', flush=True)
- shutil.copy2(filepath, backup_filepath) # copy file with metadata
- print(f'Imported file from Google Drive backup: {filename}')
- elif filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')) and filename.endswith('.pth'):
- weights_exist = True
- weights_filepath = os.path.join(WEIGHTS_FOLDER, os.path.relpath(filepath, os.path.join(GOOGLE_DRIVE_PATH, 'weights')))
- weights_folderpath = os.path.dirname(weights_filepath)
- if not os.path.exists(weights_folderpath):
- os.makedirs(weights_folderpath)
- print(f'Created weights folder: {weights_folderpath}', flush=True)
- shutil.copy2(filepath, weights_filepath) # copy file with metadata
- print(f'Imported file from weights: {filename}')
- if weights_exist:
- print("Copied weights from Google Drive backup to local weights folder.")
- else:
- print("No weights found in Google Drive backup.")
- print("Google Drive backup import completed.")
-
-def get_md5_hash(file_path):
- hash_md5 = hashlib.md5()
- with open(file_path, "rb") as f:
- for chunk in iter(lambda: f.read(4096), b""):
- hash_md5.update(chunk)
- return hash_md5.hexdigest()
-
-def copy_weights_folder_to_drive():
- destination_folder = os.path.join(GOOGLE_DRIVE_PATH, 'weights')
- try:
- if not os.path.exists(destination_folder):
- os.makedirs(destination_folder)
-
- num_copied = 0
- for filename in os.listdir(WEIGHTS_FOLDER):
- if filename.endswith('.pth'):
- source_file = os.path.join(WEIGHTS_FOLDER, filename)
- destination_file = os.path.join(destination_folder, filename)
- if not os.path.exists(destination_file):
- shutil.copy2(source_file, destination_file)
- num_copied += 1
- print(f"Copied {filename} to Google Drive!")
-
- if num_copied == 0:
- print("No new finished models found for copying.")
- else:
- print(f"Finished copying {num_copied} files to Google Drive!")
-
- except Exception as e:
- print(f"An error occurred while copying weights: {str(e)}")
- # You can log the error or take appropriate actions here.
-
-def backup_files():
- print("\nStarting backup loop...")
- last_backup_timestamps_path = os.path.join(LOGS_FOLDER, 'last_backup_timestamps.txt')
- fully_updated = False # boolean to track if all files are up to date
-
- while True:
- try:
- updated = False # flag to check if any files were updated
- last_backup_timestamps = {}
-
- try:
- with open(last_backup_timestamps_path, 'r') as f:
- last_backup_timestamps = dict(line.strip().split(':') for line in f)
- except FileNotFoundError:
- pass # File does not exist yet, which is fine
-
- for root, dirs, files in os.walk(LOGS_FOLDER):
- for filename in files:
- if filename != 'last_backup_timestamps.txt':
- filepath = os.path.join(root, filename)
- if os.path.isfile(filepath):
- backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER))
- backup_folderpath = os.path.dirname(backup_filepath)
- if not os.path.exists(backup_folderpath):
- os.makedirs(backup_folderpath)
- print(f'Created backup folder: {backup_folderpath}', flush=True)
- # check if file has changed since last backup
- last_backup_timestamp = last_backup_timestamps.get(filepath)
- current_timestamp = os.path.getmtime(filepath)
- if last_backup_timestamp is None or float(last_backup_timestamp) < current_timestamp:
- shutil.copy2(filepath, backup_filepath) # copy file with metadata
- last_backup_timestamps[filepath] = str(current_timestamp) # update last backup timestamp
- if last_backup_timestamp is None:
- print(f'Backed up file: {filename}')
- else:
- print(f'Updating backed up file: {filename}')
- updated = True
- fully_updated = False # if a file is updated, all files are not up to date
-
- # check if any files were deleted in Colab and delete them from the backup drive
- for filepath in list(last_backup_timestamps.keys()):
- if not os.path.exists(filepath):
- backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER))
- if os.path.exists(backup_filepath):
- os.remove(backup_filepath)
- print(f'Deleted file: {filepath}')
- del last_backup_timestamps[filepath]
- updated = True
- fully_updated = False # if a file is deleted, all files are not up to date
-
- if not updated and not fully_updated:
- print("Files are up to date.")
- fully_updated = True # if all files are up to date, set the boolean to True
- copy_weights_folder_to_drive()
- sleep_time = 15
- else:
- sleep_time = 0.1
-
- with open(last_backup_timestamps_path, 'w') as f:
- for filepath, timestamp in last_backup_timestamps.items():
- f.write(f'{filepath}:{timestamp}\n')
-
- time.sleep(sleep_time) # wait for 15 seconds before checking again, or 0.1s if not fully up to date to speed up backups
-
- except Exception as e:
- print(f"An error occurred: {str(e)}")
- # You can log the error or take appropriate actions here.
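A minimal sketch of how these backup helpers might be driven from an Applio Colab session, assuming Google Drive is already mounted at /content/drive and that this file is importable as utils.backups (both assumptions, not shown in this module):

import threading
from utils.backups import import_google_drive_backup, backup_files

# Restore previously backed-up logs and weights from Google Drive,
# then run the continuous backup loop in a background thread so
# training can continue in the main thread.
import_google_drive_backup()
backup_thread = threading.Thread(target=backup_files, daemon=True)
backup_thread.start()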
diff --git a/spaces/A00001/bingothoo/src/components/chat-panel.tsx b/spaces/A00001/bingothoo/src/components/chat-panel.tsx
deleted file mode 100644
index 1fbc3c2bf05b914e0c229661832fbb560745f488..0000000000000000000000000000000000000000
--- a/spaces/A00001/bingothoo/src/components/chat-panel.tsx
+++ /dev/null
@@ -1,153 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import Image from 'next/image'
-import Textarea from 'react-textarea-autosize'
-import { useAtomValue } from 'jotai'
-import { useEnterSubmit } from '@/lib/hooks/use-enter-submit'
-import { cn } from '@/lib/utils'
-
-import BrushIcon from '@/assets/images/brush.svg'
-import ChatIcon from '@/assets/images/chat.svg'
-import VisualSearchIcon from '@/assets/images/visual-search.svg'
-import SendIcon from '@/assets/images/send.svg'
-import PinIcon from '@/assets/images/pin.svg'
-import PinFillIcon from '@/assets/images/pin-fill.svg'
-
-import { useBing } from '@/lib/hooks/use-bing'
-import { voiceListenAtom } from '@/state'
-import Voice from './voice'
-import { ChatImage } from './chat-image'
-import { ChatAttachments } from './chat-attachments'
-
-export interface ChatPanelProps
- extends Pick<
- ReturnType<typeof useBing>,
- | 'generating'
- | 'input'
- | 'setInput'
- | 'sendMessage'
- | 'resetConversation'
- | 'isSpeaking'
- | 'attachmentList'
- | 'uploadImage'
- | 'setAttachmentList'
- > {
- id?: string
- className?: string
-}
-
-export function ChatPanel({
- isSpeaking,
- generating,
- input,
- setInput,
- className,
- sendMessage,
- resetConversation,
- attachmentList,
- uploadImage,
- setAttachmentList
-}: ChatPanelProps) {
- const inputRef = React.useRef(null)
- const {formRef, onKeyDown} = useEnterSubmit()
- const [focused, setFocused] = React.useState(false)
- const [active, setActive] = React.useState(false)
- const [pin, setPin] = React.useState(false)
- const [tid, setTid] = React.useState()
- const voiceListening = useAtomValue(voiceListenAtom)
-
- const setBlur = React.useCallback(() => {
- clearTimeout(tid)
- setActive(false)
- const _tid = setTimeout(() => setFocused(false), 2000);
- setTid(_tid)
- }, [tid])
-
- const setFocus = React.useCallback(() => {
- setFocused(true)
- setActive(true)
- clearTimeout(tid)
- inputRef.current?.focus()
- }, [tid])
-
- React.useEffect(() => {
- if (input) {
- setFocus()
- }
- }, [input])
-
- return (
-
- )
-}
diff --git a/spaces/AIConsultant/MusicGen/CHANGELOG.md b/spaces/AIConsultant/MusicGen/CHANGELOG.md
deleted file mode 100644
index aabf9130b0a67aca9beaac9f2cb1a40237a4468d..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/CHANGELOG.md
+++ /dev/null
@@ -1,28 +0,0 @@
-# Changelog
-
-All notable changes to this project will be documented in this file.
-
-The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
-
-## [1.0.0] - 2023-08-02
-
-Major revision, added training code for EnCodec, AudioGen, MusicGen, and MultiBandDiffusion.
-Added pretrained model for AudioGen and MultiBandDiffusion.
-
-## [0.0.2] - 2023-08-01
-
-Improved demo, fixed top p (thanks @jnordberg).
-
-Compressor tanh on output to avoid clipping with some style (especially piano).
-Now repeating the conditioning periodically if it is too short.
-
-More options when launching Gradio app locally (thanks @ashleykleynhans).
-
-Testing out PyTorch 2.0 memory efficient attention.
-
-Added extended generation (infinite length) by slowly moving the windows.
-Note that other implementations exist: https://github.com/camenduru/MusicGen-colab.
-
-## [0.0.1] - 2023-06-09
-
-Initial release, with model evaluation only.
diff --git a/spaces/AIFILMS/generate_human_motion/VQ-Trans/dataset/prepare/download_extractor.sh b/spaces/AIFILMS/generate_human_motion/VQ-Trans/dataset/prepare/download_extractor.sh
deleted file mode 100644
index b1c456e8311a59a1c8d86e85da5ddd3aa7e1f9a4..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/generate_human_motion/VQ-Trans/dataset/prepare/download_extractor.sh
+++ /dev/null
@@ -1,15 +0,0 @@
-rm -rf checkpoints
-mkdir checkpoints
-cd checkpoints
-echo -e "Downloading extractors"
-gdown --fuzzy https://drive.google.com/file/d/1o7RTDQcToJjTm9_mNWTyzvZvjTWpZfug/view
-gdown --fuzzy https://drive.google.com/file/d/1tX79xk0fflp07EZ660Xz1RAFE33iEyJR/view
-
-
-unzip t2m.zip
-unzip kit.zip
-
-echo -e "Cleaning\n"
-rm t2m.zip
-rm kit.zip
-echo -e "Downloading done!"
\ No newline at end of file
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/diffusionmodules/model.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/diffusionmodules/model.py
deleted file mode 100644
index 2746d74c16cd9a7a418487599399cdea8dc1bbac..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/diffusionmodules/model.py
+++ /dev/null
@@ -1,835 +0,0 @@
-# pytorch_diffusion + derived encoder decoder
-import math
-import torch
-import torch.nn as nn
-import numpy as np
-from einops import rearrange
-
-from ldm.util import instantiate_from_config
-from ldm.modules.attention import LinearAttention
-
-
-def get_timestep_embedding(timesteps, embedding_dim):
- """
- This matches the implementation in Denoising Diffusion Probabilistic Models:
- From Fairseq.
- Build sinusoidal embeddings.
- This matches the implementation in tensor2tensor, but differs slightly
- from the description in Section 3.5 of "Attention Is All You Need".
- """
- assert len(timesteps.shape) == 1
-
- half_dim = embedding_dim // 2
- emb = math.log(10000) / (half_dim - 1)
- emb = torch.exp(torch.arange(half_dim, dtype=torch.float32) * -emb)
- emb = emb.to(device=timesteps.device)
- emb = timesteps.float()[:, None] * emb[None, :]
- emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1)
- if embedding_dim % 2 == 1: # zero pad
- emb = torch.nn.functional.pad(emb, (0,1,0,0))
- return emb
-
-
-def nonlinearity(x):
- # swish
- return x*torch.sigmoid(x)
-
-
-def Normalize(in_channels, num_groups=32):
- return torch.nn.GroupNorm(num_groups=num_groups, num_channels=in_channels, eps=1e-6, affine=True)
-
-
-class Upsample(nn.Module):
- def __init__(self, in_channels, with_conv):
- super().__init__()
- self.with_conv = with_conv
- if self.with_conv:
- self.conv = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=3,
- stride=1,
- padding=1)
-
- def forward(self, x):
- x = torch.nn.functional.interpolate(x, scale_factor=2.0, mode="nearest")
- if self.with_conv:
- x = self.conv(x)
- return x
-
-
-class Downsample(nn.Module):
- def __init__(self, in_channels, with_conv):
- super().__init__()
- self.with_conv = with_conv
- if self.with_conv:
- # no asymmetric padding in torch conv, must do it ourselves
- self.conv = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=3,
- stride=2,
- padding=0)
-
- def forward(self, x):
- if self.with_conv:
- pad = (0,1,0,1)
- x = torch.nn.functional.pad(x, pad, mode="constant", value=0)
- x = self.conv(x)
- else:
- x = torch.nn.functional.avg_pool2d(x, kernel_size=2, stride=2)
- return x
-
-
-class ResnetBlock(nn.Module):
- def __init__(self, *, in_channels, out_channels=None, conv_shortcut=False,
- dropout, temb_channels=512):
- super().__init__()
- self.in_channels = in_channels
- out_channels = in_channels if out_channels is None else out_channels
- self.out_channels = out_channels
- self.use_conv_shortcut = conv_shortcut
-
- self.norm1 = Normalize(in_channels)
- self.conv1 = torch.nn.Conv2d(in_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1)
- if temb_channels > 0:
- self.temb_proj = torch.nn.Linear(temb_channels,
- out_channels)
- self.norm2 = Normalize(out_channels)
- self.dropout = torch.nn.Dropout(dropout)
- self.conv2 = torch.nn.Conv2d(out_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1)
- if self.in_channels != self.out_channels:
- if self.use_conv_shortcut:
- self.conv_shortcut = torch.nn.Conv2d(in_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1)
- else:
- self.nin_shortcut = torch.nn.Conv2d(in_channels,
- out_channels,
- kernel_size=1,
- stride=1,
- padding=0)
-
- def forward(self, x, temb):
- h = x
- h = self.norm1(h)
- h = nonlinearity(h)
- h = self.conv1(h)
-
- if temb is not None:
- h = h + self.temb_proj(nonlinearity(temb))[:,:,None,None]
-
- h = self.norm2(h)
- h = nonlinearity(h)
- h = self.dropout(h)
- h = self.conv2(h)
-
- if self.in_channels != self.out_channels:
- if self.use_conv_shortcut:
- x = self.conv_shortcut(x)
- else:
- x = self.nin_shortcut(x)
-
- return x+h
-
-
-class LinAttnBlock(LinearAttention):
- """to match AttnBlock usage"""
- def __init__(self, in_channels):
- super().__init__(dim=in_channels, heads=1, dim_head=in_channels)
-
-
-class AttnBlock(nn.Module):
- def __init__(self, in_channels):
- super().__init__()
- self.in_channels = in_channels
-
- self.norm = Normalize(in_channels)
- self.q = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.k = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.v = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.proj_out = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
-
-
- def forward(self, x):
- h_ = x
- h_ = self.norm(h_)
- q = self.q(h_)
- k = self.k(h_)
- v = self.v(h_)
-
- # compute attention
- b,c,h,w = q.shape
- q = q.reshape(b,c,h*w)
- q = q.permute(0,2,1) # b,hw,c
- k = k.reshape(b,c,h*w) # b,c,hw
- w_ = torch.bmm(q,k) # b,hw,hw w[b,i,j]=sum_c q[b,i,c]k[b,c,j]
- w_ = w_ * (int(c)**(-0.5))
- w_ = torch.nn.functional.softmax(w_, dim=2)
-
- # attend to values
- v = v.reshape(b,c,h*w)
- w_ = w_.permute(0,2,1) # b,hw,hw (first hw of k, second of q)
- h_ = torch.bmm(v,w_) # b, c,hw (hw of q) h_[b,c,j] = sum_i v[b,c,i] w_[b,i,j]
- h_ = h_.reshape(b,c,h,w)
-
- h_ = self.proj_out(h_)
-
- return x+h_
-
-
-def make_attn(in_channels, attn_type="vanilla"):
- assert attn_type in ["vanilla", "linear", "none"], f'attn_type {attn_type} unknown'
- print(f"making attention of type '{attn_type}' with {in_channels} in_channels")
- if attn_type == "vanilla":
- return AttnBlock(in_channels)
- elif attn_type == "none":
- return nn.Identity(in_channels)
- else:
- return LinAttnBlock(in_channels)
-
-
-class Model(nn.Module):
- def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks,
- attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels,
- resolution, use_timestep=True, use_linear_attn=False, attn_type="vanilla"):
- super().__init__()
- if use_linear_attn: attn_type = "linear"
- self.ch = ch
- self.temb_ch = self.ch*4
- self.num_resolutions = len(ch_mult)
- self.num_res_blocks = num_res_blocks
- self.resolution = resolution
- self.in_channels = in_channels
-
- self.use_timestep = use_timestep
- if self.use_timestep:
- # timestep embedding
- self.temb = nn.Module()
- self.temb.dense = nn.ModuleList([
- torch.nn.Linear(self.ch,
- self.temb_ch),
- torch.nn.Linear(self.temb_ch,
- self.temb_ch),
- ])
-
- # downsampling
- self.conv_in = torch.nn.Conv2d(in_channels,
- self.ch,
- kernel_size=3,
- stride=1,
- padding=1)
-
- curr_res = resolution
- in_ch_mult = (1,)+tuple(ch_mult)
- self.down = nn.ModuleList()
- for i_level in range(self.num_resolutions):
- block = nn.ModuleList()
- attn = nn.ModuleList()
- block_in = ch*in_ch_mult[i_level]
- block_out = ch*ch_mult[i_level]
- for i_block in range(self.num_res_blocks):
- block.append(ResnetBlock(in_channels=block_in,
- out_channels=block_out,
- temb_channels=self.temb_ch,
- dropout=dropout))
- block_in = block_out
- if curr_res in attn_resolutions:
- attn.append(make_attn(block_in, attn_type=attn_type))
- down = nn.Module()
- down.block = block
- down.attn = attn
- if i_level != self.num_resolutions-1:
- down.downsample = Downsample(block_in, resamp_with_conv)
- curr_res = curr_res // 2
- self.down.append(down)
-
- # middle
- self.mid = nn.Module()
- self.mid.block_1 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=dropout)
- self.mid.attn_1 = make_attn(block_in, attn_type=attn_type)
- self.mid.block_2 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=dropout)
-
- # upsampling
- self.up = nn.ModuleList()
- for i_level in reversed(range(self.num_resolutions)):
- block = nn.ModuleList()
- attn = nn.ModuleList()
- block_out = ch*ch_mult[i_level]
- skip_in = ch*ch_mult[i_level]
- for i_block in range(self.num_res_blocks+1):
- if i_block == self.num_res_blocks:
- skip_in = ch*in_ch_mult[i_level]
- block.append(ResnetBlock(in_channels=block_in+skip_in,
- out_channels=block_out,
- temb_channels=self.temb_ch,
- dropout=dropout))
- block_in = block_out
- if curr_res in attn_resolutions:
- attn.append(make_attn(block_in, attn_type=attn_type))
- up = nn.Module()
- up.block = block
- up.attn = attn
- if i_level != 0:
- up.upsample = Upsample(block_in, resamp_with_conv)
- curr_res = curr_res * 2
- self.up.insert(0, up) # prepend to get consistent order
-
- # end
- self.norm_out = Normalize(block_in)
- self.conv_out = torch.nn.Conv2d(block_in,
- out_ch,
- kernel_size=3,
- stride=1,
- padding=1)
-
- def forward(self, x, t=None, context=None):
- #assert x.shape[2] == x.shape[3] == self.resolution
- if context is not None:
- # assume aligned context, cat along channel axis
- x = torch.cat((x, context), dim=1)
- if self.use_timestep:
- # timestep embedding
- assert t is not None
- temb = get_timestep_embedding(t, self.ch)
- temb = self.temb.dense[0](temb)
- temb = nonlinearity(temb)
- temb = self.temb.dense[1](temb)
- else:
- temb = None
-
- # downsampling
- hs = [self.conv_in(x)]
- for i_level in range(self.num_resolutions):
- for i_block in range(self.num_res_blocks):
- h = self.down[i_level].block[i_block](hs[-1], temb)
- if len(self.down[i_level].attn) > 0:
- h = self.down[i_level].attn[i_block](h)
- hs.append(h)
- if i_level != self.num_resolutions-1:
- hs.append(self.down[i_level].downsample(hs[-1]))
-
- # middle
- h = hs[-1]
- h = self.mid.block_1(h, temb)
- h = self.mid.attn_1(h)
- h = self.mid.block_2(h, temb)
-
- # upsampling
- for i_level in reversed(range(self.num_resolutions)):
- for i_block in range(self.num_res_blocks+1):
- h = self.up[i_level].block[i_block](
- torch.cat([h, hs.pop()], dim=1), temb)
- if len(self.up[i_level].attn) > 0:
- h = self.up[i_level].attn[i_block](h)
- if i_level != 0:
- h = self.up[i_level].upsample(h)
-
- # end
- h = self.norm_out(h)
- h = nonlinearity(h)
- h = self.conv_out(h)
- return h
-
- def get_last_layer(self):
- return self.conv_out.weight
-
-
-class Encoder(nn.Module):
- def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks,
- attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels,
- resolution, z_channels, double_z=True, use_linear_attn=False, attn_type="vanilla",
- **ignore_kwargs):
- super().__init__()
- if use_linear_attn: attn_type = "linear"
- self.ch = ch
- self.temb_ch = 0
- self.num_resolutions = len(ch_mult)
- self.num_res_blocks = num_res_blocks
- self.resolution = resolution
- self.in_channels = in_channels
-
- # downsampling
- self.conv_in = torch.nn.Conv2d(in_channels,
- self.ch,
- kernel_size=3,
- stride=1,
- padding=1)
-
- curr_res = resolution
- in_ch_mult = (1,)+tuple(ch_mult)
- self.in_ch_mult = in_ch_mult
- self.down = nn.ModuleList()
- for i_level in range(self.num_resolutions):
- block = nn.ModuleList()
- attn = nn.ModuleList()
- block_in = ch*in_ch_mult[i_level]
- block_out = ch*ch_mult[i_level]
- for i_block in range(self.num_res_blocks):
- block.append(ResnetBlock(in_channels=block_in,
- out_channels=block_out,
- temb_channels=self.temb_ch,
- dropout=dropout))
- block_in = block_out
- if curr_res in attn_resolutions:
- attn.append(make_attn(block_in, attn_type=attn_type))# vanilla attention
- down = nn.Module()
- down.block = block
- down.attn = attn
- if i_level != self.num_resolutions-1:
- down.downsample = Downsample(block_in, resamp_with_conv)
- curr_res = curr_res // 2
- self.down.append(down)
-
- # middle
- self.mid = nn.Module()
- self.mid.block_1 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=dropout)
- self.mid.attn_1 = make_attn(block_in, attn_type=attn_type)
- self.mid.block_2 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=dropout)
-
- # end
- self.norm_out = Normalize(block_in)# GroupNorm
- self.conv_out = torch.nn.Conv2d(block_in,
- 2*z_channels if double_z else z_channels,
- kernel_size=3,
- stride=1,
- padding=1)
-
- def forward(self, x):
- # timestep embedding
- temb = None
-
- # downsampling
- hs = [self.conv_in(x)]
- for i_level in range(self.num_resolutions):
- for i_block in range(self.num_res_blocks):
- h = self.down[i_level].block[i_block](hs[-1], temb)
- if len(self.down[i_level].attn) > 0:
- h = self.down[i_level].attn[i_block](h)
- hs.append(h)
- if i_level != self.num_resolutions-1:
- hs.append(self.down[i_level].downsample(hs[-1]))
-
- # middle
- h = hs[-1]
- h = self.mid.block_1(h, temb)
- h = self.mid.attn_1(h)
- h = self.mid.block_2(h, temb)
-
- # end
- h = self.norm_out(h)
- h = nonlinearity(h)
- h = self.conv_out(h)
- return h
-
-
-class Decoder(nn.Module):
- def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks,
- attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels,
- resolution, z_channels, give_pre_end=False, tanh_out=False, use_linear_attn=False,
- attn_type="vanilla", **ignorekwargs):
- super().__init__()
- if use_linear_attn: attn_type = "linear"
- self.ch = ch
- self.temb_ch = 0
- self.num_resolutions = len(ch_mult)
- self.num_res_blocks = num_res_blocks
- self.resolution = resolution
- self.in_channels = in_channels
- self.give_pre_end = give_pre_end
- self.tanh_out = tanh_out
-
- # compute in_ch_mult, block_in and curr_res at lowest res
- in_ch_mult = (1,)+tuple(ch_mult)
- block_in = ch*ch_mult[self.num_resolutions-1]
- curr_res = resolution // 2**(self.num_resolutions-1)
- self.z_shape = (1,z_channels,curr_res,curr_res)
- print("Working with z of shape {} = {} dimensions.".format(
- self.z_shape, np.prod(self.z_shape)))
-
- # z to block_in
- self.conv_in = torch.nn.Conv2d(z_channels,
- block_in,
- kernel_size=3,
- stride=1,
- padding=1)
-
- # middle
- self.mid = nn.Module()
- self.mid.block_1 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=dropout)
- self.mid.attn_1 = make_attn(block_in, attn_type=attn_type)
- self.mid.block_2 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=dropout)
-
- # upsampling
- self.up = nn.ModuleList()
- for i_level in reversed(range(self.num_resolutions)):
- block = nn.ModuleList()
- attn = nn.ModuleList()
- block_out = ch*ch_mult[i_level]
- for i_block in range(self.num_res_blocks+1):
- block.append(ResnetBlock(in_channels=block_in,
- out_channels=block_out,
- temb_channels=self.temb_ch,
- dropout=dropout))
- block_in = block_out
- if curr_res in attn_resolutions:
- attn.append(make_attn(block_in, attn_type=attn_type))
- up = nn.Module()
- up.block = block
- up.attn = attn
- if i_level != 0:
- up.upsample = Upsample(block_in, resamp_with_conv)
- curr_res = curr_res * 2
- self.up.insert(0, up) # prepend to get consistent order
-
- # end
- self.norm_out = Normalize(block_in)
- self.conv_out = torch.nn.Conv2d(block_in,
- out_ch,
- kernel_size=3,
- stride=1,
- padding=1)
-
- def forward(self, z):
- #assert z.shape[1:] == self.z_shape[1:]
- self.last_z_shape = z.shape
-
- # timestep embedding
- temb = None
-
- # z to block_in
- h = self.conv_in(z)
-
- # middle
- h = self.mid.block_1(h, temb)
- h = self.mid.attn_1(h)
- h = self.mid.block_2(h, temb)
-
- # upsampling
- for i_level in reversed(range(self.num_resolutions)):
- for i_block in range(self.num_res_blocks+1):
- h = self.up[i_level].block[i_block](h, temb)
- if len(self.up[i_level].attn) > 0:
- h = self.up[i_level].attn[i_block](h)
- if i_level != 0:
- h = self.up[i_level].upsample(h)
-
- # end
- if self.give_pre_end:
- return h
-
- h = self.norm_out(h)
- h = nonlinearity(h)
- h = self.conv_out(h)
- if self.tanh_out:
- h = torch.tanh(h)
- return h
-
-
-class SimpleDecoder(nn.Module):
- def __init__(self, in_channels, out_channels, *args, **kwargs):
- super().__init__()
- self.model = nn.ModuleList([nn.Conv2d(in_channels, in_channels, 1),
- ResnetBlock(in_channels=in_channels,
- out_channels=2 * in_channels,
- temb_channels=0, dropout=0.0),
- ResnetBlock(in_channels=2 * in_channels,
- out_channels=4 * in_channels,
- temb_channels=0, dropout=0.0),
- ResnetBlock(in_channels=4 * in_channels,
- out_channels=2 * in_channels,
- temb_channels=0, dropout=0.0),
- nn.Conv2d(2*in_channels, in_channels, 1),
- Upsample(in_channels, with_conv=True)])
- # end
- self.norm_out = Normalize(in_channels)
- self.conv_out = torch.nn.Conv2d(in_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1)
-
- def forward(self, x):
- for i, layer in enumerate(self.model):
- if i in [1,2,3]:
- x = layer(x, None)
- else:
- x = layer(x)
-
- h = self.norm_out(x)
- h = nonlinearity(h)
- x = self.conv_out(h)
- return x
-
-
-class UpsampleDecoder(nn.Module):
- def __init__(self, in_channels, out_channels, ch, num_res_blocks, resolution,
- ch_mult=(2,2), dropout=0.0):
- super().__init__()
- # upsampling
- self.temb_ch = 0
- self.num_resolutions = len(ch_mult)
- self.num_res_blocks = num_res_blocks
- block_in = in_channels
- curr_res = resolution // 2 ** (self.num_resolutions - 1)
- self.res_blocks = nn.ModuleList()
- self.upsample_blocks = nn.ModuleList()
- for i_level in range(self.num_resolutions):
- res_block = []
- block_out = ch * ch_mult[i_level]
- for i_block in range(self.num_res_blocks + 1):
- res_block.append(ResnetBlock(in_channels=block_in,
- out_channels=block_out,
- temb_channels=self.temb_ch,
- dropout=dropout))
- block_in = block_out
- self.res_blocks.append(nn.ModuleList(res_block))
- if i_level != self.num_resolutions - 1:
- self.upsample_blocks.append(Upsample(block_in, True))
- curr_res = curr_res * 2
-
- # end
- self.norm_out = Normalize(block_in)
- self.conv_out = torch.nn.Conv2d(block_in,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1)
-
- def forward(self, x):
- # upsampling
- h = x
- for k, i_level in enumerate(range(self.num_resolutions)):
- for i_block in range(self.num_res_blocks + 1):
- h = self.res_blocks[i_level][i_block](h, None)
- if i_level != self.num_resolutions - 1:
- h = self.upsample_blocks[k](h)
- h = self.norm_out(h)
- h = nonlinearity(h)
- h = self.conv_out(h)
- return h
-
-
-class LatentRescaler(nn.Module):
- def __init__(self, factor, in_channels, mid_channels, out_channels, depth=2):
- super().__init__()
- # residual block, interpolate, residual block
- self.factor = factor
- self.conv_in = nn.Conv2d(in_channels,
- mid_channels,
- kernel_size=3,
- stride=1,
- padding=1)
- self.res_block1 = nn.ModuleList([ResnetBlock(in_channels=mid_channels,
- out_channels=mid_channels,
- temb_channels=0,
- dropout=0.0) for _ in range(depth)])
- self.attn = AttnBlock(mid_channels)
- self.res_block2 = nn.ModuleList([ResnetBlock(in_channels=mid_channels,
- out_channels=mid_channels,
- temb_channels=0,
- dropout=0.0) for _ in range(depth)])
-
- self.conv_out = nn.Conv2d(mid_channels,
- out_channels,
- kernel_size=1,
- )
-
- def forward(self, x):
- x = self.conv_in(x)
- for block in self.res_block1:
- x = block(x, None)
- x = torch.nn.functional.interpolate(x, size=(int(round(x.shape[2]*self.factor)), int(round(x.shape[3]*self.factor))))
- x = self.attn(x)
- for block in self.res_block2:
- x = block(x, None)
- x = self.conv_out(x)
- return x
-
-
-class MergedRescaleEncoder(nn.Module):
- def __init__(self, in_channels, ch, resolution, out_ch, num_res_blocks,
- attn_resolutions, dropout=0.0, resamp_with_conv=True,
- ch_mult=(1,2,4,8), rescale_factor=1.0, rescale_module_depth=1):
- super().__init__()
- intermediate_chn = ch * ch_mult[-1]
- self.encoder = Encoder(in_channels=in_channels, num_res_blocks=num_res_blocks, ch=ch, ch_mult=ch_mult,
- z_channels=intermediate_chn, double_z=False, resolution=resolution,
- attn_resolutions=attn_resolutions, dropout=dropout, resamp_with_conv=resamp_with_conv,
- out_ch=None)
- self.rescaler = LatentRescaler(factor=rescale_factor, in_channels=intermediate_chn,
- mid_channels=intermediate_chn, out_channels=out_ch, depth=rescale_module_depth)
-
- def forward(self, x):
- x = self.encoder(x)
- x = self.rescaler(x)
- return x
-
-
-class MergedRescaleDecoder(nn.Module):
- def __init__(self, z_channels, out_ch, resolution, num_res_blocks, attn_resolutions, ch, ch_mult=(1,2,4,8),
- dropout=0.0, resamp_with_conv=True, rescale_factor=1.0, rescale_module_depth=1):
- super().__init__()
- tmp_chn = z_channels*ch_mult[-1]
- self.decoder = Decoder(out_ch=out_ch, z_channels=tmp_chn, attn_resolutions=attn_resolutions, dropout=dropout,
- resamp_with_conv=resamp_with_conv, in_channels=None, num_res_blocks=num_res_blocks,
- ch_mult=ch_mult, resolution=resolution, ch=ch)
- self.rescaler = LatentRescaler(factor=rescale_factor, in_channels=z_channels, mid_channels=tmp_chn,
- out_channels=tmp_chn, depth=rescale_module_depth)
-
- def forward(self, x):
- x = self.rescaler(x)
- x = self.decoder(x)
- return x
-
-
-class Upsampler(nn.Module):
- def __init__(self, in_size, out_size, in_channels, out_channels, ch_mult=2):
- super().__init__()
- assert out_size >= in_size
- num_blocks = int(np.log2(out_size//in_size))+1
- factor_up = 1.+ (out_size % in_size)
- print(f"Building {self.__class__.__name__} with in_size: {in_size} --> out_size {out_size} and factor {factor_up}")
- self.rescaler = LatentRescaler(factor=factor_up, in_channels=in_channels, mid_channels=2*in_channels,
- out_channels=in_channels)
- self.decoder = Decoder(out_ch=out_channels, resolution=out_size, z_channels=in_channels, num_res_blocks=2,
- attn_resolutions=[], in_channels=None, ch=in_channels,
- ch_mult=[ch_mult for _ in range(num_blocks)])
-
- def forward(self, x):
- x = self.rescaler(x)
- x = self.decoder(x)
- return x
-
-
-class Resize(nn.Module):
- def __init__(self, in_channels=None, learned=False, mode="bilinear"):
- super().__init__()
- self.with_conv = learned
- self.mode = mode
- if self.with_conv:
- print(f"Note: {self.__class__.__name} uses learned downsampling and will ignore the fixed {mode} mode")
- raise NotImplementedError()
- assert in_channels is not None
- # no asymmetric padding in torch conv, must do it ourselves
- self.conv = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=4,
- stride=2,
- padding=1)
-
- def forward(self, x, scale_factor=1.0):
- if scale_factor==1.0:
- return x
- else:
- x = torch.nn.functional.interpolate(x, mode=self.mode, align_corners=False, scale_factor=scale_factor)
- return x
-
-class FirstStagePostProcessor(nn.Module):
-
- def __init__(self, ch_mult:list, in_channels,
- pretrained_model:nn.Module=None,
- reshape=False,
- n_channels=None,
- dropout=0.,
- pretrained_config=None):
- super().__init__()
- if pretrained_config is None:
- assert pretrained_model is not None, 'Either "pretrained_model" or "pretrained_config" must not be None'
- self.pretrained_model = pretrained_model
- else:
- assert pretrained_config is not None, 'Either "pretrained_model" or "pretrained_config" must not be None'
- self.instantiate_pretrained(pretrained_config)
-
- self.do_reshape = reshape
-
- if n_channels is None:
- n_channels = self.pretrained_model.encoder.ch
-
- self.proj_norm = Normalize(in_channels,num_groups=in_channels//2)
- self.proj = nn.Conv2d(in_channels,n_channels,kernel_size=3,
- stride=1,padding=1)
-
- blocks = []
- downs = []
- ch_in = n_channels
- for m in ch_mult:
- blocks.append(ResnetBlock(in_channels=ch_in,out_channels=m*n_channels,dropout=dropout))
- ch_in = m * n_channels
- downs.append(Downsample(ch_in, with_conv=False))
-
- self.model = nn.ModuleList(blocks)
- self.downsampler = nn.ModuleList(downs)
-
-
- def instantiate_pretrained(self, config):
- model = instantiate_from_config(config)
- self.pretrained_model = model.eval()
- # self.pretrained_model.train = False
- for param in self.pretrained_model.parameters():
- param.requires_grad = False
-
-
- @torch.no_grad()
- def encode_with_pretrained(self,x):
- c = self.pretrained_model.encode(x)
- if isinstance(c, DiagonalGaussianDistribution):
- c = c.mode()
- return c
-
- def forward(self,x):
- z_fs = self.encode_with_pretrained(x)
- z = self.proj_norm(z_fs)
- z = self.proj(z)
- z = nonlinearity(z)
-
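-        # run each ResNet block and its downsampling stage in turn, as built in __init__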
- for submodel, downmodel in zip(self.model,self.downsampler):
- z = submodel(z,temb=None)
- z = downmodel(z)
-
- if self.do_reshape:
- z = rearrange(z,'b c h w -> b (h w) c')
- return z
-
diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/models/autoencoder_multi.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/models/autoencoder_multi.py
deleted file mode 100644
index cc4f830e24e99950f5ff412e8c5776e6a3489bf2..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/models/autoencoder_multi.py
+++ /dev/null
@@ -1,201 +0,0 @@
-"""
-与autoencoder.py的区别在于,autoencoder.py计算loss时只有一个discriminator,而此处又多了个multiwindowDiscriminator,所以优化器
-优化的参数改为:
-opt_disc = torch.optim.Adam(list(self.loss.discriminator.parameters()) + list(self.loss.discriminator_multi.parameters()),
- lr=lr, betas=(0.5, 0.9))
-"""
-
-import os
-import torch
-import pytorch_lightning as pl
-import torch.nn.functional as F
-from contextlib import contextmanager
-
-from packaging import version
-import numpy as np
-from ldm.modules.diffusionmodules.model import Encoder, Decoder
-from ldm.modules.distributions.distributions import DiagonalGaussianDistribution
-from torch.optim.lr_scheduler import LambdaLR
-from ldm.util import instantiate_from_config
-
-
-
-class AutoencoderKL(pl.LightningModule):
- def __init__(self,
- ddconfig,
- lossconfig,
- embed_dim,
- ckpt_path=None,
- ignore_keys=[],
- image_key="image",
- colorize_nlabels=None,
- monitor=None,
- ):
- super().__init__()
- self.image_key = image_key
- self.encoder = Encoder(**ddconfig)
- self.decoder = Decoder(**ddconfig)
- self.loss = instantiate_from_config(lossconfig)
- assert ddconfig["double_z"]
- self.quant_conv = torch.nn.Conv2d(2*ddconfig["z_channels"], 2*embed_dim, 1)
- self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1)
- self.embed_dim = embed_dim
- if colorize_nlabels is not None:
- assert type(colorize_nlabels)==int
- self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1))
- if monitor is not None:
- self.monitor = monitor
- if ckpt_path is not None:
- self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys)
-
- def init_from_ckpt(self, path, ignore_keys=list()):
- sd = torch.load(path, map_location="cpu")["state_dict"]
- keys = list(sd.keys())
- for k in keys:
- for ik in ignore_keys:
- if k.startswith(ik):
- print("Deleting key {} from state_dict.".format(k))
- del sd[k]
- self.load_state_dict(sd, strict=False)
- print(f"Restored from {path}")
-
- def encode(self, x):
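-        # the encoder emits 2*z_channels feature maps; quant_conv projects them to the mean and
-        # log-variance channels of the diagonal Gaussian posterior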
- h = self.encoder(x)
- moments = self.quant_conv(h)
- posterior = DiagonalGaussianDistribution(moments)
- return posterior
-
- def decode(self, z):
- z = self.post_quant_conv(z)
- dec = self.decoder(z)
- return dec
-
- def forward(self, input, sample_posterior=True):
- posterior = self.encode(input)
- if sample_posterior:
- z = posterior.sample()
- else:
- z = posterior.mode()
- dec = self.decode(z)
- return dec, posterior
-
- def get_input(self, batch, k):
- x = batch[k]
- if len(x.shape) == 3:
- x = x[..., None]
- x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float()
- return x
-
- def training_step(self, batch, batch_idx, optimizer_idx):
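-        # with two optimizers configured, Lightning calls this step once per optimizer;
-        # optimizer_idx selects the autoencoder (0) or the discriminators (1)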
- inputs = self.get_input(batch, self.image_key)
- reconstructions, posterior = self(inputs)
-
- if optimizer_idx == 0:
- # train encoder+decoder+logvar
- aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step,
- last_layer=self.get_last_layer(), split="train")
- self.log("aeloss", aeloss, prog_bar=True, logger=True, on_step=True, on_epoch=True)
- self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=False)
- return aeloss
-
- if optimizer_idx == 1:
- # train the discriminator
- discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step,
- last_layer=self.get_last_layer(), split="train")
-
- self.log("discloss", discloss, prog_bar=True, logger=True, on_step=True, on_epoch=True)
- self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=False)
- return discloss
-
- def validation_step(self, batch, batch_idx):
- inputs = self.get_input(batch, self.image_key)
- reconstructions, posterior = self(inputs)
- aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, 0, self.global_step,
- last_layer=self.get_last_layer(), split="val")
-
- discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, 1, self.global_step,
- last_layer=self.get_last_layer(), split="val")
-
- self.log("val/rec_loss", log_dict_ae["val/rec_loss"])
- self.log_dict(log_dict_ae)
- self.log_dict(log_dict_disc)
- return self.log_dict
-
- def test_step(self, batch, batch_idx):
- inputs = self.get_input(batch, self.image_key)# inputs shape:(b,c,mel_len,T) or (b,c,h,w)
- reconstructions, posterior = self(inputs)# reconstructions:(b,c,mel_len,T) or (b,c,h,w)
- reconstructions = (reconstructions + 1)/2 # to mel scale
- test_ckpt_path = os.path.basename(self.trainer.tested_ckpt_path)
- savedir = os.path.join(self.trainer.log_dir,f'output_imgs_{test_ckpt_path}','fake_class')
- if not os.path.exists(savedir):
- os.makedirs(savedir)
-
- file_names = batch['f_name']
- # print(f"reconstructions.shape:{reconstructions.shape}",file_names)
-        reconstructions = reconstructions.cpu().numpy().squeeze(1) # squeeze channel dim
- for b in range(reconstructions.shape[0]):
- vname_num_split_index = file_names[b].rfind('_')# file_names[b]:video_name+'_'+num
- v_n,num = file_names[b][:vname_num_split_index],file_names[b][vname_num_split_index+1:]
- save_img_path = os.path.join(savedir,f'{v_n}_sample_{num}.npy')
- np.save(save_img_path,reconstructions[b])
-
- return None
-
- def configure_optimizers(self):
- lr = self.learning_rate
- opt_ae = torch.optim.Adam(list(self.encoder.parameters())+
- list(self.decoder.parameters())+
- list(self.quant_conv.parameters())+
- list(self.post_quant_conv.parameters()),
- lr=lr, betas=(0.5, 0.9))
- opt_disc = torch.optim.Adam(list(self.loss.discriminator.parameters()) + list(self.loss.discriminator_multi.parameters()),
- lr=lr, betas=(0.5, 0.9))
- return [opt_ae, opt_disc], []
-
- def get_last_layer(self):
- return self.decoder.conv_out.weight
-
- @torch.no_grad()
- def log_images(self, batch, only_inputs=False, **kwargs):
- log = dict()
- x = self.get_input(batch, self.image_key)
- x = x.to(self.device)
- if not only_inputs:
- xrec, posterior = self(x)
- if x.shape[1] > 3:
- # colorize with random projection
- assert xrec.shape[1] > 3
- x = self.to_rgb(x)
- xrec = self.to_rgb(xrec)
- log["samples"] = self.decode(torch.randn_like(posterior.sample()))
- log["reconstructions"] = xrec
- log["inputs"] = x
- return log
-
- def to_rgb(self, x):
- assert self.image_key == "segmentation"
- if not hasattr(self, "colorize"):
- self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x))
- x = F.conv2d(x, weight=self.colorize)
- x = 2.*(x-x.min())/(x.max()-x.min()) - 1.
- return x
-
-
-class IdentityFirstStage(torch.nn.Module):
- def __init__(self, *args, vq_interface=False, **kwargs):
- self.vq_interface = vq_interface # TODO: Should be true by default but check to not break older stuff
- super().__init__()
-
- def encode(self, x, *args, **kwargs):
- return x
-
- def decode(self, x, *args, **kwargs):
- return x
-
- def quantize(self, x, *args, **kwargs):
- if self.vq_interface:
- return x, None, [None, None, None]
- return x
-
- def forward(self, x, *args, **kwargs):
- return x
\ No newline at end of file
diff --git a/spaces/AP123/dreamgaussian/grid_put.py b/spaces/AP123/dreamgaussian/grid_put.py
deleted file mode 100644
index 0086cc4efa7527b77b9e583642ca9dfa9ae467fe..0000000000000000000000000000000000000000
--- a/spaces/AP123/dreamgaussian/grid_put.py
+++ /dev/null
@@ -1,300 +0,0 @@
-import torch
-import torch.nn.functional as F
-
-def stride_from_shape(shape):
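-    # row-major strides for the given shape, e.g. (H, W) -> [W, 1]; used to flatten N-D indices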
- stride = [1]
- for x in reversed(shape[1:]):
- stride.append(stride[-1] * x)
- return list(reversed(stride))
-
-
-def scatter_add_nd(input, indices, values):
- # input: [..., C], D dimension + C channel
- # indices: [N, D], long
- # values: [N, C]
-
- D = indices.shape[-1]
- C = input.shape[-1]
- size = input.shape[:-1]
- stride = stride_from_shape(size)
-
- assert len(size) == D
-
- input = input.view(-1, C) # [HW, C]
- flatten_indices = (indices * torch.tensor(stride, dtype=torch.long, device=indices.device)).sum(-1) # [N]
-
- input.scatter_add_(0, flatten_indices.unsqueeze(1).repeat(1, C), values)
-
- return input.view(*size, C)
-
-
-def scatter_add_nd_with_count(input, count, indices, values, weights=None):
- # input: [..., C], D dimension + C channel
- # count: [..., 1], D dimension
- # indices: [N, D], long
- # values: [N, C]
-
- D = indices.shape[-1]
- C = input.shape[-1]
- size = input.shape[:-1]
- stride = stride_from_shape(size)
-
- assert len(size) == D
-
- input = input.view(-1, C) # [HW, C]
- count = count.view(-1, 1)
-
- flatten_indices = (indices * torch.tensor(stride, dtype=torch.long, device=indices.device)).sum(-1) # [N]
-
- if weights is None:
- weights = torch.ones_like(values[..., :1])
-
- input.scatter_add_(0, flatten_indices.unsqueeze(1).repeat(1, C), values)
- count.scatter_add_(0, flatten_indices.unsqueeze(1), weights)
-
- return input.view(*size, C), count.view(*size, 1)
-
-def nearest_grid_put_2d(H, W, coords, values, return_count=False):
- # coords: [N, 2], float in [-1, 1]
- # values: [N, C]
-
- C = values.shape[-1]
-
- indices = (coords * 0.5 + 0.5) * torch.tensor(
- [H - 1, W - 1], dtype=torch.float32, device=coords.device
- )
- indices = indices.round().long() # [N, 2]
-
- result = torch.zeros(H, W, C, device=values.device, dtype=values.dtype) # [H, W, C]
- count = torch.zeros(H, W, 1, device=values.device, dtype=values.dtype) # [H, W, 1]
- weights = torch.ones_like(values[..., :1]) # [N, 1]
-
- result, count = scatter_add_nd_with_count(result, count, indices, values, weights)
-
- if return_count:
- return result, count
-
- mask = (count.squeeze(-1) > 0)
- result[mask] = result[mask] / count[mask].repeat(1, C)
-
- return result
-
-
-def linear_grid_put_2d(H, W, coords, values, return_count=False):
- # coords: [N, 2], float in [-1, 1]
- # values: [N, C]
-
- C = values.shape[-1]
-
- indices = (coords * 0.5 + 0.5) * torch.tensor(
- [H - 1, W - 1], dtype=torch.float32, device=coords.device
- )
- indices_00 = indices.floor().long() # [N, 2]
- indices_00[:, 0].clamp_(0, H - 2)
- indices_00[:, 1].clamp_(0, W - 2)
- indices_01 = indices_00 + torch.tensor(
- [0, 1], dtype=torch.long, device=indices.device
- )
- indices_10 = indices_00 + torch.tensor(
- [1, 0], dtype=torch.long, device=indices.device
- )
- indices_11 = indices_00 + torch.tensor(
- [1, 1], dtype=torch.long, device=indices.device
- )
-
- h = indices[..., 0] - indices_00[..., 0].float()
- w = indices[..., 1] - indices_00[..., 1].float()
- w_00 = (1 - h) * (1 - w)
- w_01 = (1 - h) * w
- w_10 = h * (1 - w)
- w_11 = h * w
-
- result = torch.zeros(H, W, C, device=values.device, dtype=values.dtype) # [H, W, C]
- count = torch.zeros(H, W, 1, device=values.device, dtype=values.dtype) # [H, W, 1]
- weights = torch.ones_like(values[..., :1]) # [N, 1]
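-    # splat each value into its four neighbouring cells, weighted by the bilinear coefficients w_00..w_11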
-
- result, count = scatter_add_nd_with_count(result, count, indices_00, values * w_00.unsqueeze(1), weights* w_00.unsqueeze(1))
- result, count = scatter_add_nd_with_count(result, count, indices_01, values * w_01.unsqueeze(1), weights* w_01.unsqueeze(1))
- result, count = scatter_add_nd_with_count(result, count, indices_10, values * w_10.unsqueeze(1), weights* w_10.unsqueeze(1))
- result, count = scatter_add_nd_with_count(result, count, indices_11, values * w_11.unsqueeze(1), weights* w_11.unsqueeze(1))
-
- if return_count:
- return result, count
-
- mask = (count.squeeze(-1) > 0)
- result[mask] = result[mask] / count[mask].repeat(1, C)
-
- return result
-
-def mipmap_linear_grid_put_2d(H, W, coords, values, min_resolution=32, return_count=False):
- # coords: [N, 2], float in [-1, 1]
- # values: [N, C]
-
- C = values.shape[-1]
-
- result = torch.zeros(H, W, C, device=values.device, dtype=values.dtype) # [H, W, C]
- count = torch.zeros(H, W, 1, device=values.device, dtype=values.dtype) # [H, W, 1]
-
- cur_H, cur_W = H, W
-
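-    # fill holes left by sparse coords: splat at progressively coarser resolutions (starting from full
-    # resolution, halving each iteration) and upsample the result into cells that are still empty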
- while min(cur_H, cur_W) > min_resolution:
-
- # try to fill the holes
- mask = (count.squeeze(-1) == 0)
- if not mask.any():
- break
-
- cur_result, cur_count = linear_grid_put_2d(cur_H, cur_W, coords, values, return_count=True)
- result[mask] = result[mask] + F.interpolate(cur_result.permute(2,0,1).unsqueeze(0).contiguous(), (H, W), mode='bilinear', align_corners=False).squeeze(0).permute(1,2,0).contiguous()[mask]
- count[mask] = count[mask] + F.interpolate(cur_count.view(1, 1, cur_H, cur_W), (H, W), mode='bilinear', align_corners=False).view(H, W, 1)[mask]
- cur_H //= 2
- cur_W //= 2
-
- if return_count:
- return result, count
-
- mask = (count.squeeze(-1) > 0)
- result[mask] = result[mask] / count[mask].repeat(1, C)
-
- return result
-
-def nearest_grid_put_3d(H, W, D, coords, values, return_count=False):
- # coords: [N, 3], float in [-1, 1]
- # values: [N, C]
-
- C = values.shape[-1]
-
- indices = (coords * 0.5 + 0.5) * torch.tensor(
- [H - 1, W - 1, D - 1], dtype=torch.float32, device=coords.device
- )
-    indices = indices.round().long() # [N, 3]
-
-    result = torch.zeros(H, W, D, C, device=values.device, dtype=values.dtype) # [H, W, D, C]
-    count = torch.zeros(H, W, D, 1, device=values.device, dtype=values.dtype) # [H, W, D, 1]
- weights = torch.ones_like(values[..., :1]) # [N, 1]
-
- result, count = scatter_add_nd_with_count(result, count, indices, values, weights)
-
- if return_count:
- return result, count
-
- mask = (count.squeeze(-1) > 0)
- result[mask] = result[mask] / count[mask].repeat(1, C)
-
- return result
-
-
-def linear_grid_put_3d(H, W, D, coords, values, return_count=False):
- # coords: [N, 3], float in [-1, 1]
- # values: [N, C]
-
- C = values.shape[-1]
-
- indices = (coords * 0.5 + 0.5) * torch.tensor(
- [H - 1, W - 1, D - 1], dtype=torch.float32, device=coords.device
- )
- indices_000 = indices.floor().long() # [N, 3]
- indices_000[:, 0].clamp_(0, H - 2)
- indices_000[:, 1].clamp_(0, W - 2)
- indices_000[:, 2].clamp_(0, D - 2)
-
- indices_001 = indices_000 + torch.tensor([0, 0, 1], dtype=torch.long, device=indices.device)
- indices_010 = indices_000 + torch.tensor([0, 1, 0], dtype=torch.long, device=indices.device)
- indices_011 = indices_000 + torch.tensor([0, 1, 1], dtype=torch.long, device=indices.device)
- indices_100 = indices_000 + torch.tensor([1, 0, 0], dtype=torch.long, device=indices.device)
- indices_101 = indices_000 + torch.tensor([1, 0, 1], dtype=torch.long, device=indices.device)
- indices_110 = indices_000 + torch.tensor([1, 1, 0], dtype=torch.long, device=indices.device)
- indices_111 = indices_000 + torch.tensor([1, 1, 1], dtype=torch.long, device=indices.device)
-
- h = indices[..., 0] - indices_000[..., 0].float()
- w = indices[..., 1] - indices_000[..., 1].float()
- d = indices[..., 2] - indices_000[..., 2].float()
-
- w_000 = (1 - h) * (1 - w) * (1 - d)
- w_001 = (1 - h) * w * (1 - d)
- w_010 = h * (1 - w) * (1 - d)
- w_011 = h * w * (1 - d)
- w_100 = (1 - h) * (1 - w) * d
- w_101 = (1 - h) * w * d
- w_110 = h * (1 - w) * d
- w_111 = h * w * d
-
- result = torch.zeros(H, W, D, C, device=values.device, dtype=values.dtype) # [H, W, D, C]
- count = torch.zeros(H, W, D, 1, device=values.device, dtype=values.dtype) # [H, W, D, 1]
- weights = torch.ones_like(values[..., :1]) # [N, 1]
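-    # splat each value into its eight neighbouring cells, weighted by the trilinear coefficients above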
-
- result, count = scatter_add_nd_with_count(result, count, indices_000, values * w_000.unsqueeze(1), weights * w_000.unsqueeze(1))
- result, count = scatter_add_nd_with_count(result, count, indices_001, values * w_001.unsqueeze(1), weights * w_001.unsqueeze(1))
- result, count = scatter_add_nd_with_count(result, count, indices_010, values * w_010.unsqueeze(1), weights * w_010.unsqueeze(1))
- result, count = scatter_add_nd_with_count(result, count, indices_011, values * w_011.unsqueeze(1), weights * w_011.unsqueeze(1))
- result, count = scatter_add_nd_with_count(result, count, indices_100, values * w_100.unsqueeze(1), weights * w_100.unsqueeze(1))
- result, count = scatter_add_nd_with_count(result, count, indices_101, values * w_101.unsqueeze(1), weights * w_101.unsqueeze(1))
- result, count = scatter_add_nd_with_count(result, count, indices_110, values * w_110.unsqueeze(1), weights * w_110.unsqueeze(1))
- result, count = scatter_add_nd_with_count(result, count, indices_111, values * w_111.unsqueeze(1), weights * w_111.unsqueeze(1))
-
- if return_count:
- return result, count
-
- mask = (count.squeeze(-1) > 0)
- result[mask] = result[mask] / count[mask].repeat(1, C)
-
- return result
-
-def mipmap_linear_grid_put_3d(H, W, D, coords, values, min_resolution=32, return_count=False):
- # coords: [N, 3], float in [-1, 1]
- # values: [N, C]
-
- C = values.shape[-1]
-
- result = torch.zeros(H, W, D, C, device=values.device, dtype=values.dtype) # [H, W, D, C]
- count = torch.zeros(H, W, D, 1, device=values.device, dtype=values.dtype) # [H, W, D, 1]
- cur_H, cur_W, cur_D = H, W, D
-
- while min(min(cur_H, cur_W), cur_D) > min_resolution:
-
- # try to fill the holes
- mask = (count.squeeze(-1) == 0)
- if not mask.any():
- break
-
- cur_result, cur_count = linear_grid_put_3d(cur_H, cur_W, cur_D, coords, values, return_count=True)
- result[mask] = result[mask] + F.interpolate(cur_result.permute(3,0,1,2).unsqueeze(0).contiguous(), (H, W, D), mode='trilinear', align_corners=False).squeeze(0).permute(1,2,3,0).contiguous()[mask]
- count[mask] = count[mask] + F.interpolate(cur_count.view(1, 1, cur_H, cur_W, cur_D), (H, W, D), mode='trilinear', align_corners=False).view(H, W, D, 1)[mask]
- cur_H //= 2
- cur_W //= 2
- cur_D //= 2
-
- if return_count:
- return result, count
-
- mask = (count.squeeze(-1) > 0)
- result[mask] = result[mask] / count[mask].repeat(1, C)
-
- return result
-
-
-def grid_put(shape, coords, values, mode='linear-mipmap', min_resolution=32, return_raw=False):
- # shape: [D], list/tuple
- # coords: [N, D], float in [-1, 1]
- # values: [N, C]
-
- D = len(shape)
- assert D in [2, 3], f'only support D == 2 or 3, but got D == {D}'
-
- if mode == 'nearest':
- if D == 2:
- return nearest_grid_put_2d(*shape, coords, values, return_raw)
- else:
- return nearest_grid_put_3d(*shape, coords, values, return_raw)
- elif mode == 'linear':
- if D == 2:
- return linear_grid_put_2d(*shape, coords, values, return_raw)
- else:
- return linear_grid_put_3d(*shape, coords, values, return_raw)
- elif mode == 'linear-mipmap':
- if D == 2:
- return mipmap_linear_grid_put_2d(*shape, coords, values, min_resolution, return_raw)
- else:
- return mipmap_linear_grid_put_3d(*shape, coords, values, min_resolution, return_raw)
- else:
- raise NotImplementedError(f"got mode {mode}")
\ No newline at end of file
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/README.md b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/README.md
deleted file mode 100644
index 286b77381a57401607cc52568d1d81b8ba5b4d83..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/README.md
+++ /dev/null
@@ -1,140 +0,0 @@
-# ResNet
-
-> [Deep Residual Learning for Image Recognition](https://openaccess.thecvf.com/content_cvpr_2016/html/He_Deep_Residual_Learning_CVPR_2016_paper.html)
-
-
-
-## Introduction
-
-**Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs instead of
-learning unreferenced functions. In earlier mainstream architectures such as VGG, the network is a plain stack
-of layers, and every layer attempts to fit a desired underlying mapping directly. In ResNets, a few stacked
-layers are grouped into a block, and the layers in a block attempt to learn a residual mapping.
-
-Formally, denoting the desired underlying mapping of a block as $\mathcal{H}(x)$, the mapping is split into the
-sum of the identity and a residual term, $\mathcal{H}(x) = x + \mathcal{F}(x)$, and the stacked non-linear
-layers are left to fit the residual mapping $\mathcal{F}(x)$.
-
-Many works have shown that this formulation makes deep neural networks easier to optimize and lets them gain
-accuracy from considerably increased depth. The residual structure is now widely used in a variety of models.
-
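-The sketch below is only an illustration of this idea, not one of the configs in this folder: a minimal
-PyTorch residual block whose stacked layers learn $\mathcal{F}(x)$ and whose output adds the identity back,
-$\mathcal{H}(x) = x + \mathcal{F}(x)$. All names in it are made up for the example.
-
-```python
-import torch.nn as nn
-
-
-class ResidualBlock(nn.Module):
-    """Minimal residual block: the stacked layers fit F(x); the block returns x + F(x)."""
-
-    def __init__(self, channels: int):
-        super().__init__()
-        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
-        self.bn1 = nn.BatchNorm2d(channels)
-        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
-        self.bn2 = nn.BatchNorm2d(channels)
-        self.relu = nn.ReLU(inplace=True)
-
-    def forward(self, x):
-        out = self.relu(self.bn1(self.conv1(x)))  # first stacked layer
-        out = self.bn2(self.conv2(out))           # second stacked layer: together they model F(x)
-        return self.relu(x + out)                 # H(x) = x + F(x)
-```
-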
-
-
-
-
-## Abstract
-
-
-
-
-
-Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.
-
-The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
-
-
-
-
-## How to use it?
-
-
-
-**Predict image**
-
-```python
-from mmpretrain import inference_model
-
-predict = inference_model('resnet18_8xb16_cifar10', 'demo/bird.JPEG')
-print(predict['pred_class'])
-print(predict['pred_score'])
-```
-
-**Use the model**
-
-```python
-import torch
-from mmpretrain import get_model
-
-model = get_model('resnet18_8xb16_cifar10', pretrained=True)
-inputs = torch.rand(1, 3, 224, 224)
-out = model(inputs)
-print(type(out))
-# To extract features.
-feats = model.extract_feat(inputs)
-print(type(feats))
-```
-
-**Train/Test Command**
-
-Prepare your dataset according to the [docs](https://mmpretrain.readthedocs.io/en/latest/user_guides/dataset_prepare.html#prepare-dataset).
-
-Train:
-
-```shell
-python tools/train.py configs/resnet/resnet18_8xb16_cifar10.py
-```
-
-Test:
-
-```shell
-python tools/test.py configs/resnet/resnet18_8xb16_cifar10.py https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_b16x8_cifar10_20210528-bd6371c8.pth
-```
-
-
-
-## Models and results
-
-### Image Classification on ImageNet-1k
-
-| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download |
-| :--------------------------------- | :----------: | :--------: | :-------: | :-------: | :-------: | :-------------------------------------------: | :----------------------------------------------------------------------: |
-| `resnet18_8xb32_in1k` | From scratch | 11.69 | 1.82 | 69.90 | 89.43 | [config](resnet18_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_8xb32_in1k_20210831-fbbb1da6.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_8xb32_in1k_20210831-fbbb1da6.json) |
-| `resnet34_8xb32_in1k` | From scratch | 21.80 | 3.68 | 73.62 | 91.59 | [config](resnet34_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet34_8xb32_in1k_20210831-f257d4e6.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet34_8xb32_in1k_20210831-f257d4e6.json) |
-| `resnet50_8xb32_in1k` | From scratch | 25.56 | 4.12 | 76.55 | 93.06 | [config](resnet50_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_8xb32_in1k_20210831-ea4938fc.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_8xb32_in1k_20210831-ea4938fc.json) |
-| `resnet101_8xb32_in1k` | From scratch | 44.55 | 7.85 | 77.97 | 94.06 | [config](resnet101_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet101_8xb32_in1k_20210831-539c63f8.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet101_8xb32_in1k_20210831-539c63f8.json) |
-| `resnet152_8xb32_in1k` | From scratch | 60.19 | 11.58 | 78.48 | 94.13 | [config](resnet152_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet152_8xb32_in1k_20210901-4d7582fa.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet152_8xb32_in1k_20210901-4d7582fa.json) |
-| `resnetv1d50_8xb32_in1k` | From scratch | 25.58 | 4.36 | 77.54 | 93.57 | [config](resnetv1d50_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d50_b32x8_imagenet_20210531-db14775a.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d50_b32x8_imagenet_20210531-db14775a.json) |
-| `resnetv1d101_8xb32_in1k` | From scratch | 44.57 | 8.09 | 78.93 | 94.48 | [config](resnetv1d101_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d101_b32x8_imagenet_20210531-6e13bcd3.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d101_b32x8_imagenet_20210531-6e13bcd3.json) |
-| `resnetv1d152_8xb32_in1k` | From scratch | 60.21 | 11.82 | 79.41 | 94.70 | [config](resnetv1d152_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d152_b32x8_imagenet_20210531-278cf22a.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d152_b32x8_imagenet_20210531-278cf22a.json) |
-| `resnet50_8xb32-fp16_in1k` | From scratch | 25.56 | 4.12 | 76.30 | 93.07 | [config](resnet50_8xb32-fp16_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/fp16/resnet50_batch256_fp16_imagenet_20210320-b3964210.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/fp16/resnet50_batch256_fp16_imagenet_20210320-b3964210.json) |
-| `resnet50_8xb256-rsb-a1-600e_in1k` | From scratch | 25.56 | 4.12 | 80.12 | 94.78 | [config](resnet50_8xb256-rsb-a1-600e_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_8xb256-rsb-a1-600e_in1k_20211228-20e21305.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_8xb256-rsb-a1-600e_in1k_20211228-20e21305.json) |
-| `resnet50_8xb256-rsb-a2-300e_in1k` | From scratch | 25.56 | 4.12 | 79.55 | 94.37 | [config](resnet50_8xb256-rsb-a2-300e_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_8xb256-rsb-a2-300e_in1k_20211228-0fd8be6e.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_8xb256-rsb-a2-300e_in1k_20211228-0fd8be6e.json) |
-| `resnet50_8xb256-rsb-a3-100e_in1k` | From scratch | 25.56 | 4.12 | 78.30 | 93.80 | [config](resnet50_8xb256-rsb-a3-100e_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_8xb256-rsb-a3-100e_in1k_20211228-3493673c.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_8xb256-rsb-a3-100e_in1k_20211228-3493673c.json) |
-| `resnetv1c50_8xb32_in1k` | From scratch | 25.58 | 4.36 | 77.01 | 93.58 | [config](resnetv1c50_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1c50_8xb32_in1k_20220214-3343eccd.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1c50_8xb32_in1k_20220214-3343eccd.json) |
-| `resnetv1c101_8xb32_in1k` | From scratch | 44.57 | 8.09 | 78.30 | 94.27 | [config](resnetv1c101_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1c101_8xb32_in1k_20220214-434fe45f.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1c101_8xb32_in1k_20220214-434fe45f.json) |
-| `resnetv1c152_8xb32_in1k` | From scratch | 60.21 | 11.82 | 78.76 | 94.41 | [config](resnetv1c152_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1c152_8xb32_in1k_20220214-c013291f.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1c152_8xb32_in1k_20220214-c013291f.json) |
-
-### Image Classification on CIFAR-10
-
-| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Config | Download |
-| :------------------------ | :----------: | :--------: | :-------: | :-------: | :----------------------------------: | :-------------------------------------------------------------------------------------------------: |
-| `resnet18_8xb16_cifar10` | From scratch | 11.17 | 0.56 | 94.82 | [config](resnet18_8xb16_cifar10.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_b16x8_cifar10_20210528-bd6371c8.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_b16x8_cifar10_20210528-bd6371c8.json) |
-| `resnet34_8xb16_cifar10` | From scratch | 21.28 | 1.16 | 95.34 | [config](resnet34_8xb16_cifar10.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet34_b16x8_cifar10_20210528-a8aa36a6.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet34_b16x8_cifar10_20210528-a8aa36a6.json) |
-| `resnet50_8xb16_cifar10` | From scratch | 23.52 | 1.31 | 95.55 | [config](resnet50_8xb16_cifar10.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_b16x8_cifar10_20210528-f54bfad9.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_b16x8_cifar10_20210528-f54bfad9.json) |
-| `resnet101_8xb16_cifar10` | From scratch | 42.51 | 2.52 | 95.58 | [config](resnet101_8xb16_cifar10.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet101_b16x8_cifar10_20210528-2d29e936.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet101_b16x8_cifar10_20210528-2d29e936.json) |
-| `resnet152_8xb16_cifar10` | From scratch | 58.16 | 3.74 | 95.76 | [config](resnet152_8xb16_cifar10.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet152_b16x8_cifar10_20210528-3e8e9178.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet152_b16x8_cifar10_20210528-3e8e9178.json) |
-
-### Image Classification on CIFAR-100
-
-| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download |
-| :------------------------ | :----------: | :--------: | :-------: | :-------: | :-------: | :----------------------------------: | :----------------------------------------------------------------------------------------: |
-| `resnet50_8xb16_cifar100` | From scratch | 23.71 | 1.31 | 79.90 | 95.19 | [config](resnet50_8xb16_cifar100.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_b16x8_cifar100_20210528-67b58a1b.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_b16x8_cifar100_20210528-67b58a1b.json) |
-
-### Image Classification on CUB-200-2011
-
-| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Config | Download |
-| :------------------ | :----------: | :--------: | :-------: | :-------: | :----------------------------: | :-------------------------------------------------------------------------------------------------------------: |
-| `resnet50_8xb8_cub` | From scratch | 23.92 | 16.48 | 88.45 | [config](resnet50_8xb8_cub.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_8xb8_cub_20220307-57840e60.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_8xb8_cub_20220307-57840e60.json) |
-
-## Citation
-
-```bibtex
-@inproceedings{he2016deep,
- title={Deep residual learning for image recognition},
- author={He, Kaiming and Zhang, Xiangyu and Ren, Shaoqing and Sun, Jian},
- booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
- pages={770--778},
- year={2016}
-}
-```
diff --git a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/tester.py b/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/tester.py
deleted file mode 100644
index 50d622e7edb0ed989fcd3273d35e74d66f11ce75..0000000000000000000000000000000000000000
--- a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/tester.py
+++ /dev/null
@@ -1,63 +0,0 @@
-from .config_manager import ConfigManager
-import os
-from typing import Dict
-
-from torch import nn
-from tqdm import tqdm
-from tqdm import trange
-
-from dataset import load_iterators
-from trainer import GeneralTrainer
-
-
-class DiacritizationTester(GeneralTrainer):
- def __init__(self, config_path: str, model_kind: str) -> None:
- self.config_path = config_path
- self.model_kind = model_kind
- self.config_manager = ConfigManager(
- config_path=config_path, model_kind=model_kind
- )
- self.config = self.config_manager.config
- self.pad_idx = 0
- self.criterion = nn.CrossEntropyLoss(ignore_index=self.pad_idx)
- self.set_device()
-
- self.text_encoder = self.config_manager.text_encoder
- self.start_symbol_id = self.text_encoder.start_symbol_id
-
- self.model = self.config_manager.get_model()
-
- self.model = self.model.to(self.device)
-
- self.load_model(model_path=self.config["test_model_path"], load_optimizer=False)
- self.load_diacritizer()
- self.diacritizer.set_model(self.model)
-
- self.initialize_model()
-
- self.print_config()
-
- def run(self):
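-        # evaluate the loaded checkpoint on the test split only and report accuracy, loss and error rates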
- self.config_manager.config["load_training_data"] = False
- self.config_manager.config["load_validation_data"] = False
- self.config_manager.config["load_test_data"] = True
- _, test_iterator, _ = load_iterators(self.config_manager)
- tqdm_eval = trange(0, len(test_iterator), leave=True)
- tqdm_error_rates = trange(0, len(test_iterator), leave=True)
-
-        loss, acc = self.evaluate(test_iterator, tqdm_eval, log=False)
-        error_rates, _ = self.evaluate_with_error_rates(test_iterator, tqdm_error_rates, log=False)
-
- tqdm_eval.close()
- tqdm_error_rates.close()
-
- WER = error_rates["WER"]
- DER = error_rates["DER"]
- DER1 = error_rates["DER*"]
- WER1 = error_rates["WER*"]
-
- error_rates = f"DER: {DER}, WER: {WER}, DER*: {DER1}, WER*: {WER1}"
-
- print(f"global step : {self.global_step}")
- print(f"Evaluate {self.global_step}: accuracy, {acc}, loss: {loss}")
- print(f"WER/DER {self.global_step}: {error_rates}")
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Liaobots.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Liaobots.py
deleted file mode 100644
index 2ab96ce349f641d3e4afaf862169f27d749ca62b..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Liaobots.py
+++ /dev/null
@@ -1,106 +0,0 @@
-from __future__ import annotations
-
-import uuid
-
-from aiohttp import ClientSession
-
-from ..typing import AsyncGenerator
-from .base_provider import AsyncGeneratorProvider
-
-models = {
- "gpt-4": {
- "id": "gpt-4",
- "name": "GPT-4",
- "maxLength": 24000,
- "tokenLimit": 8000,
- },
- "gpt-3.5-turbo": {
- "id": "gpt-3.5-turbo",
- "name": "GPT-3.5",
- "maxLength": 12000,
- "tokenLimit": 4000,
- },
- "gpt-3.5-turbo-16k": {
- "id": "gpt-3.5-turbo-16k",
- "name": "GPT-3.5-16k",
- "maxLength": 48000,
- "tokenLimit": 16000,
- },
-}
-
-class Liaobots(AsyncGeneratorProvider):
- url = "https://liaobots.site"
- working = True
- supports_gpt_35_turbo = True
- supports_gpt_4 = True
- _auth_code = None
-
- @classmethod
- async def create_async_generator(
- cls,
- model: str,
- messages: list[dict[str, str]],
- auth: str = None,
- proxy: str = None,
- **kwargs
- ) -> AsyncGenerator:
- model = model if model in models else "gpt-3.5-turbo"
- headers = {
- "authority": "liaobots.com",
- "content-type": "application/json",
- "origin": cls.url,
- "referer": cls.url + "/",
- "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36",
- }
- async with ClientSession(
- headers=headers
- ) as session:
- cls._auth_code = auth if isinstance(auth, str) else cls._auth_code
- if not cls._auth_code:
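-                # no cached auth code yet: hit the login endpoint, then fetch a fresh authCode,
-                # which is sent later as the "x-auth-code" header of the chat request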
- async with session.post(
- "https://liaobots.work/recaptcha/api/login",
- proxy=proxy,
- data={"token": "abcdefghijklmnopqrst"},
- verify_ssl=False
- ) as response:
- response.raise_for_status()
- async with session.post(
- "https://liaobots.work/api/user",
- proxy=proxy,
- json={"authcode": ""},
- verify_ssl=False
- ) as response:
- response.raise_for_status()
- cls._auth_code = (await response.json(content_type=None))["authCode"]
- data = {
- "conversationId": str(uuid.uuid4()),
- "model": models[model],
- "messages": messages,
- "key": "",
- "prompt": "You are ChatGPT, a large language model trained by OpenAI. Follow the user's instructions carefully.",
- }
- async with session.post(
- "https://liaobots.work/api/chat",
- proxy=proxy,
- json=data,
- headers={"x-auth-code": cls._auth_code},
- verify_ssl=False
- ) as response:
- response.raise_for_status()
- async for stream in response.content.iter_any():
- if stream:
- yield stream.decode()
-
-
- @classmethod
- @property
- def params(cls):
- params = [
- ("model", "str"),
- ("messages", "list[dict[str, str]]"),
- ("stream", "bool"),
- ("proxy", "str"),
- ("auth", "str"),
- ]
- param = ", ".join([": ".join(p) for p in params])
- return f"g4f.provider.{cls.__name__} supports: ({param})"
diff --git a/spaces/Aditya757864/SentimentAnalysis/app.py b/spaces/Aditya757864/SentimentAnalysis/app.py
deleted file mode 100644
index c621163f70e6d4cfc1b9322a83b69a393c50b54f..0000000000000000000000000000000000000000
--- a/spaces/Aditya757864/SentimentAnalysis/app.py
+++ /dev/null
@@ -1,14 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-sentiment = pipeline('sentiment-analysis')
-def get_sentiment(input_text):
-    return sentiment(input_text)
-iface = gr.Interface(fn=get_sentiment,
-                     inputs='text',
-                     outputs=['text'],
-                     title='Sentiment Analysis',
-                     examples=['The movie was very bad', 'Every day is a new opportunity.'],
-                     article='This is a software engineering project by team members Aditya Jadhav, Sujal Kuthe, Sujal Wakalkar, and Adesh Ingle. We developed a web application for sentiment analysis that takes text data as input and classifies whether it is positive or negative.',
-                     thumbnail='/content/sentiment-analysis.png',
-                     theme=gr.themes.Soft())
-iface.launch()
\ No newline at end of file
diff --git a/spaces/Aditya9790/yolo7-object-tracking/train.py b/spaces/Aditya9790/yolo7-object-tracking/train.py
deleted file mode 100644
index 86c7e48d5ac214ad4a4c0a63b924d7ece429211c..0000000000000000000000000000000000000000
--- a/spaces/Aditya9790/yolo7-object-tracking/train.py
+++ /dev/null
@@ -1,705 +0,0 @@
-import argparse
-import logging
-import math
-import os
-import random
-import time
-from copy import deepcopy
-from pathlib import Path
-from threading import Thread
-
-import numpy as np
-import torch.distributed as dist
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.optim as optim
-import torch.optim.lr_scheduler as lr_scheduler
-import torch.utils.data
-import yaml
-from torch.cuda import amp
-from torch.nn.parallel import DistributedDataParallel as DDP
-from torch.utils.tensorboard import SummaryWriter
-from tqdm import tqdm
-
-import test # import test.py to get mAP after each epoch
-from models.experimental import attempt_load
-from models.yolo import Model
-from utils.autoanchor import check_anchors
-from utils.datasets import create_dataloader
-from utils.general import labels_to_class_weights, increment_path, labels_to_image_weights, init_seeds, \
- fitness, strip_optimizer, get_latest_run, check_dataset, check_file, check_git_status, check_img_size, \
- check_requirements, print_mutation, set_logging, one_cycle, colorstr
-from utils.google_utils import attempt_download
-from utils.loss import ComputeLoss, ComputeLossOTA
-from utils.plots import plot_images, plot_labels, plot_results, plot_evolution
-from utils.torch_utils import ModelEMA, select_device, intersect_dicts, torch_distributed_zero_first, is_parallel
-from utils.wandb_logging.wandb_utils import WandbLogger, check_wandb_resume
-
-logger = logging.getLogger(__name__)
-
-
-def train(hyp, opt, device, tb_writer=None):
- logger.info(colorstr('hyperparameters: ') + ', '.join(f'{k}={v}' for k, v in hyp.items()))
- save_dir, epochs, batch_size, total_batch_size, weights, rank, freeze = \
- Path(opt.save_dir), opt.epochs, opt.batch_size, opt.total_batch_size, opt.weights, opt.global_rank, opt.freeze
-
- # Directories
- wdir = save_dir / 'weights'
- wdir.mkdir(parents=True, exist_ok=True) # make dir
- last = wdir / 'last.pt'
- best = wdir / 'best.pt'
- results_file = save_dir / 'results.txt'
-
- # Save run settings
- with open(save_dir / 'hyp.yaml', 'w') as f:
- yaml.dump(hyp, f, sort_keys=False)
- with open(save_dir / 'opt.yaml', 'w') as f:
- yaml.dump(vars(opt), f, sort_keys=False)
-
- # Configure
- plots = not opt.evolve # create plots
- cuda = device.type != 'cpu'
- init_seeds(2 + rank)
- with open(opt.data) as f:
- data_dict = yaml.load(f, Loader=yaml.SafeLoader) # data dict
- is_coco = opt.data.endswith('coco.yaml')
-
- # Logging- Doing this before checking the dataset. Might update data_dict
- loggers = {'wandb': None} # loggers dict
- if rank in [-1, 0]:
- opt.hyp = hyp # add hyperparameters
- run_id = torch.load(weights, map_location=device).get('wandb_id') if weights.endswith('.pt') and os.path.isfile(weights) else None
- wandb_logger = WandbLogger(opt, Path(opt.save_dir).stem, run_id, data_dict)
- loggers['wandb'] = wandb_logger.wandb
- data_dict = wandb_logger.data_dict
- if wandb_logger.wandb:
- weights, epochs, hyp = opt.weights, opt.epochs, opt.hyp # WandbLogger might update weights, epochs if resuming
-
- nc = 1 if opt.single_cls else int(data_dict['nc']) # number of classes
- names = ['item'] if opt.single_cls and len(data_dict['names']) != 1 else data_dict['names'] # class names
- assert len(names) == nc, '%g names found for nc=%g dataset in %s' % (len(names), nc, opt.data) # check
-
- # Model
- pretrained = weights.endswith('.pt')
- if pretrained:
- with torch_distributed_zero_first(rank):
- attempt_download(weights) # download if not found locally
- ckpt = torch.load(weights, map_location=device) # load checkpoint
- model = Model(opt.cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create
- exclude = ['anchor'] if (opt.cfg or hyp.get('anchors')) and not opt.resume else [] # exclude keys
- state_dict = ckpt['model'].float().state_dict() # to FP32
- state_dict = intersect_dicts(state_dict, model.state_dict(), exclude=exclude) # intersect
- model.load_state_dict(state_dict, strict=False) # load
- logger.info('Transferred %g/%g items from %s' % (len(state_dict), len(model.state_dict()), weights)) # report
- else:
- model = Model(opt.cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create
- with torch_distributed_zero_first(rank):
- check_dataset(data_dict) # check
- train_path = data_dict['train']
- test_path = data_dict['val']
-
- # Freeze
- freeze = [f'model.{x}.' for x in (freeze if len(freeze) > 1 else range(freeze[0]))] # parameter names to freeze (full or partial)
- for k, v in model.named_parameters():
- v.requires_grad = True # train all layers
- if any(x in k for x in freeze):
- print('freezing %s' % k)
- v.requires_grad = False
-
- # Optimizer
- nbs = 64 # nominal batch size
- accumulate = max(round(nbs / total_batch_size), 1) # accumulate loss before optimizing
- hyp['weight_decay'] *= total_batch_size * accumulate / nbs # scale weight_decay
- logger.info(f"Scaled weight_decay = {hyp['weight_decay']}")
-
- pg0, pg1, pg2 = [], [], [] # optimizer parameter groups
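-    # pg0: parameters kept free of weight decay (BatchNorm weights, implicit/attention/re-param tensors);
-    # pg1: module weights (convs etc.) that receive weight decay; pg2: biases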
- for k, v in model.named_modules():
- if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):
- pg2.append(v.bias) # biases
- if isinstance(v, nn.BatchNorm2d):
- pg0.append(v.weight) # no decay
- elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):
- pg1.append(v.weight) # apply decay
- if hasattr(v, 'im'):
- if hasattr(v.im, 'implicit'):
- pg0.append(v.im.implicit)
- else:
- for iv in v.im:
- pg0.append(iv.implicit)
- if hasattr(v, 'imc'):
- if hasattr(v.imc, 'implicit'):
- pg0.append(v.imc.implicit)
- else:
- for iv in v.imc:
- pg0.append(iv.implicit)
- if hasattr(v, 'imb'):
- if hasattr(v.imb, 'implicit'):
- pg0.append(v.imb.implicit)
- else:
- for iv in v.imb:
- pg0.append(iv.implicit)
- if hasattr(v, 'imo'):
- if hasattr(v.imo, 'implicit'):
- pg0.append(v.imo.implicit)
- else:
- for iv in v.imo:
- pg0.append(iv.implicit)
- if hasattr(v, 'ia'):
- if hasattr(v.ia, 'implicit'):
- pg0.append(v.ia.implicit)
- else:
- for iv in v.ia:
- pg0.append(iv.implicit)
- if hasattr(v, 'attn'):
- if hasattr(v.attn, 'logit_scale'):
- pg0.append(v.attn.logit_scale)
- if hasattr(v.attn, 'q_bias'):
- pg0.append(v.attn.q_bias)
- if hasattr(v.attn, 'v_bias'):
- pg0.append(v.attn.v_bias)
- if hasattr(v.attn, 'relative_position_bias_table'):
- pg0.append(v.attn.relative_position_bias_table)
- if hasattr(v, 'rbr_dense'):
- if hasattr(v.rbr_dense, 'weight_rbr_origin'):
- pg0.append(v.rbr_dense.weight_rbr_origin)
- if hasattr(v.rbr_dense, 'weight_rbr_avg_conv'):
- pg0.append(v.rbr_dense.weight_rbr_avg_conv)
- if hasattr(v.rbr_dense, 'weight_rbr_pfir_conv'):
- pg0.append(v.rbr_dense.weight_rbr_pfir_conv)
- if hasattr(v.rbr_dense, 'weight_rbr_1x1_kxk_idconv1'):
- pg0.append(v.rbr_dense.weight_rbr_1x1_kxk_idconv1)
- if hasattr(v.rbr_dense, 'weight_rbr_1x1_kxk_conv2'):
- pg0.append(v.rbr_dense.weight_rbr_1x1_kxk_conv2)
- if hasattr(v.rbr_dense, 'weight_rbr_gconv_dw'):
- pg0.append(v.rbr_dense.weight_rbr_gconv_dw)
- if hasattr(v.rbr_dense, 'weight_rbr_gconv_pw'):
- pg0.append(v.rbr_dense.weight_rbr_gconv_pw)
- if hasattr(v.rbr_dense, 'vector'):
- pg0.append(v.rbr_dense.vector)
-
- if opt.adam:
- optimizer = optim.Adam(pg0, lr=hyp['lr0'], betas=(hyp['momentum'], 0.999)) # adjust beta1 to momentum
- else:
- optimizer = optim.SGD(pg0, lr=hyp['lr0'], momentum=hyp['momentum'], nesterov=True)
-
- optimizer.add_param_group({'params': pg1, 'weight_decay': hyp['weight_decay']}) # add pg1 with weight_decay
- optimizer.add_param_group({'params': pg2}) # add pg2 (biases)
- logger.info('Optimizer groups: %g .bias, %g conv.weight, %g other' % (len(pg2), len(pg1), len(pg0)))
- del pg0, pg1, pg2
-
- # Scheduler https://arxiv.org/pdf/1812.01187.pdf
- # https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#OneCycleLR
- if opt.linear_lr:
- lf = lambda x: (1 - x / (epochs - 1)) * (1.0 - hyp['lrf']) + hyp['lrf'] # linear
- else:
- lf = one_cycle(1, hyp['lrf'], epochs) # cosine 1->hyp['lrf']
- scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)
- # plot_lr_scheduler(optimizer, scheduler, epochs)
-
- # EMA
- ema = ModelEMA(model) if rank in [-1, 0] else None
-
- # Resume
- start_epoch, best_fitness = 0, 0.0
- if pretrained:
- # Optimizer
- if ckpt['optimizer'] is not None:
- optimizer.load_state_dict(ckpt['optimizer'])
- best_fitness = ckpt['best_fitness']
-
- # EMA
- if ema and ckpt.get('ema'):
- ema.ema.load_state_dict(ckpt['ema'].float().state_dict())
- ema.updates = ckpt['updates']
-
- # Results
- if ckpt.get('training_results') is not None:
- results_file.write_text(ckpt['training_results']) # write results.txt
-
- # Epochs
- start_epoch = ckpt['epoch'] + 1
- if opt.resume:
- assert start_epoch > 0, '%s training to %g epochs is finished, nothing to resume.' % (weights, epochs)
- if epochs < start_epoch:
- logger.info('%s has been trained for %g epochs. Fine-tuning for %g additional epochs.' %
- (weights, ckpt['epoch'], epochs))
- epochs += ckpt['epoch'] # finetune additional epochs
-
- del ckpt, state_dict
-
- # Image sizes
- gs = max(int(model.stride.max()), 32) # grid size (max stride)
- nl = model.model[-1].nl # number of detection layers (used for scaling hyp['obj'])
- imgsz, imgsz_test = [check_img_size(x, gs) for x in opt.img_size] # verify imgsz are gs-multiples
-
- # DP mode
- if cuda and rank == -1 and torch.cuda.device_count() > 1:
- model = torch.nn.DataParallel(model)
-
- # SyncBatchNorm
- if opt.sync_bn and cuda and rank != -1:
- model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device)
- logger.info('Using SyncBatchNorm()')
-
- # Trainloader
- dataloader, dataset = create_dataloader(train_path, imgsz, batch_size, gs, opt,
- hyp=hyp, augment=True, cache=opt.cache_images, rect=opt.rect, rank=rank,
- world_size=opt.world_size, workers=opt.workers,
- image_weights=opt.image_weights, quad=opt.quad, prefix=colorstr('train: '))
- mlc = np.concatenate(dataset.labels, 0)[:, 0].max() # max label class
- nb = len(dataloader) # number of batches
- assert mlc < nc, 'Label class %g exceeds nc=%g in %s. Possible class labels are 0-%g' % (mlc, nc, opt.data, nc - 1)
-
- # Process 0
- if rank in [-1, 0]:
- testloader = create_dataloader(test_path, imgsz_test, batch_size * 2, gs, opt, # testloader
- hyp=hyp, cache=opt.cache_images and not opt.notest, rect=True, rank=-1,
- world_size=opt.world_size, workers=opt.workers,
- pad=0.5, prefix=colorstr('val: '))[0]
-
- if not opt.resume:
- labels = np.concatenate(dataset.labels, 0)
- c = torch.tensor(labels[:, 0]) # classes
- # cf = torch.bincount(c.long(), minlength=nc) + 1. # frequency
- # model._initialize_biases(cf.to(device))
- if plots:
- #plot_labels(labels, names, save_dir, loggers)
- if tb_writer:
- tb_writer.add_histogram('classes', c, 0)
-
- # Anchors
- if not opt.noautoanchor:
- check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz)
- model.half().float() # pre-reduce anchor precision
-
- # DDP mode
- if cuda and rank != -1:
- model = DDP(model, device_ids=[opt.local_rank], output_device=opt.local_rank,
- # nn.MultiheadAttention incompatibility with DDP https://github.com/pytorch/pytorch/issues/26698
- find_unused_parameters=any(isinstance(layer, nn.MultiheadAttention) for layer in model.modules()))
-
- # Model parameters
- hyp['box'] *= 3. / nl # scale to layers
- hyp['cls'] *= nc / 80. * 3. / nl # scale to classes and layers
- hyp['obj'] *= (imgsz / 640) ** 2 * 3. / nl # scale to image size and layers
- hyp['label_smoothing'] = opt.label_smoothing
- model.nc = nc # attach number of classes to model
- model.hyp = hyp # attach hyperparameters to model
- model.gr = 1.0 # iou loss ratio (obj_loss = 1.0 or iou)
- model.class_weights = labels_to_class_weights(dataset.labels, nc).to(device) * nc # attach class weights
- model.names = names
-
- # Start training
- t0 = time.time()
- nw = max(round(hyp['warmup_epochs'] * nb), 1000) # number of warmup iterations, max(3 epochs, 1k iterations)
- # nw = min(nw, (epochs - start_epoch) / 2 * nb) # limit warmup to < 1/2 of training
- maps = np.zeros(nc) # mAP per class
- results = (0, 0, 0, 0, 0, 0, 0) # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls)
- scheduler.last_epoch = start_epoch - 1 # do not move
- scaler = amp.GradScaler(enabled=cuda)
- compute_loss_ota = ComputeLossOTA(model) # init loss class
- compute_loss = ComputeLoss(model) # init loss class
- logger.info(f'Image sizes {imgsz} train, {imgsz_test} test\n'
- f'Using {dataloader.num_workers} dataloader workers\n'
- f'Logging results to {save_dir}\n'
- f'Starting training for {epochs} epochs...')
- torch.save(model, wdir / 'init.pt')
- for epoch in range(start_epoch, epochs): # epoch ------------------------------------------------------------------
- model.train()
-
- # Update image weights (optional)
- if opt.image_weights:
- # Generate indices
- if rank in [-1, 0]:
- cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2 / nc # class weights
- iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw) # image weights
- dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n) # rand weighted idx
- # Broadcast if DDP
- if rank != -1:
- indices = (torch.tensor(dataset.indices) if rank == 0 else torch.zeros(dataset.n)).int()
- dist.broadcast(indices, 0)
- if rank != 0:
- dataset.indices = indices.cpu().numpy()
-
- # Update mosaic border
- # b = int(random.uniform(0.25 * imgsz, 0.75 * imgsz + gs) // gs * gs)
- # dataset.mosaic_border = [b - imgsz, -b] # height, width borders
-
- mloss = torch.zeros(4, device=device) # mean losses
- if rank != -1:
- dataloader.sampler.set_epoch(epoch)
- pbar = enumerate(dataloader)
- logger.info(('\n' + '%10s' * 8) % ('Epoch', 'gpu_mem', 'box', 'obj', 'cls', 'total', 'labels', 'img_size'))
- if rank in [-1, 0]:
- pbar = tqdm(pbar, total=nb) # progress bar
- optimizer.zero_grad()
- for i, (imgs, targets, paths, _) in pbar: # batch -------------------------------------------------------------
- ni = i + nb * epoch # number integrated batches (since train start)
- imgs = imgs.to(device, non_blocking=True).float() / 255.0 # uint8 to float32, 0-255 to 0.0-1.0
-
- # Warmup
- if ni <= nw:
- xi = [0, nw] # x interp
- # model.gr = np.interp(ni, xi, [0.0, 1.0]) # iou loss ratio (obj_loss = 1.0 or iou)
- accumulate = max(1, np.interp(ni, xi, [1, nbs / total_batch_size]).round())
- for j, x in enumerate(optimizer.param_groups):
- # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0
- x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 2 else 0.0, x['initial_lr'] * lf(epoch)])
- if 'momentum' in x:
- x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']])
-
- # Multi-scale
- if opt.multi_scale:
- sz = random.randrange(imgsz * 0.5, imgsz * 1.5 + gs) // gs * gs # size
- sf = sz / max(imgs.shape[2:]) # scale factor
- if sf != 1:
- ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]] # new shape (stretched to gs-multiple)
- imgs = F.interpolate(imgs, size=ns, mode='bilinear', align_corners=False)
-
- # Forward
- with amp.autocast(enabled=cuda):
- pred = model(imgs) # forward
- if 'loss_ota' not in hyp or hyp['loss_ota'] == 1:
- loss, loss_items = compute_loss_ota(pred, targets.to(device), imgs) # loss scaled by batch_size
- else:
- loss, loss_items = compute_loss(pred, targets.to(device)) # loss scaled by batch_size
- if rank != -1:
- loss *= opt.world_size # gradient averaged between devices in DDP mode
- if opt.quad:
- loss *= 4.
-
- # Backward
- scaler.scale(loss).backward()
-
- # Optimize
- if ni % accumulate == 0:
- scaler.step(optimizer) # optimizer.step
- scaler.update()
- optimizer.zero_grad()
- if ema:
- ema.update(model)
-
- # Print
- if rank in [-1, 0]:
- mloss = (mloss * i + loss_items) / (i + 1) # update mean losses
- mem = '%.3gG' % (torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0) # (GB)
- s = ('%10s' * 2 + '%10.4g' * 6) % (
- '%g/%g' % (epoch, epochs - 1), mem, *mloss, targets.shape[0], imgs.shape[-1])
- pbar.set_description(s)
-
- # Plot
- if plots and ni < 10:
- f = save_dir / f'train_batch{ni}.jpg' # filename
- Thread(target=plot_images, args=(imgs, targets, paths, f), daemon=True).start()
- # if tb_writer:
- # tb_writer.add_image(f, result, dataformats='HWC', global_step=epoch)
- # tb_writer.add_graph(torch.jit.trace(model, imgs, strict=False), []) # add model graph
- elif plots and ni == 10 and wandb_logger.wandb:
- wandb_logger.log({"Mosaics": [wandb_logger.wandb.Image(str(x), caption=x.name) for x in
- save_dir.glob('train*.jpg') if x.exists()]})
-
- # end batch ------------------------------------------------------------------------------------------------
- # end epoch ----------------------------------------------------------------------------------------------------
-
- # Scheduler
- lr = [x['lr'] for x in optimizer.param_groups] # for tensorboard
- scheduler.step()
-
- # DDP process 0 or single-GPU
- if rank in [-1, 0]:
- # mAP
- ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'gr', 'names', 'stride', 'class_weights'])
- final_epoch = epoch + 1 == epochs
- if not opt.notest or final_epoch: # Calculate mAP
- wandb_logger.current_epoch = epoch + 1
- results, maps, times = test.test(data_dict,
- batch_size=batch_size * 2,
- imgsz=imgsz_test,
- model=ema.ema,
- single_cls=opt.single_cls,
- dataloader=testloader,
- save_dir=save_dir,
- verbose=nc < 50 and final_epoch,
- plots=plots and final_epoch,
- wandb_logger=wandb_logger,
- compute_loss=compute_loss,
- is_coco=is_coco,
- v5_metric=opt.v5_metric)
-
- # Write
- with open(results_file, 'a') as f:
- f.write(s + '%10.4g' * 7 % results + '\n') # append metrics, val_loss
- if len(opt.name) and opt.bucket:
- os.system('gsutil cp %s gs://%s/results/results%s.txt' % (results_file, opt.bucket, opt.name))
-
- # Log
- tags = ['train/box_loss', 'train/obj_loss', 'train/cls_loss', # train loss
- 'metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95',
- 'val/box_loss', 'val/obj_loss', 'val/cls_loss', # val loss
- 'x/lr0', 'x/lr1', 'x/lr2'] # params
- for x, tag in zip(list(mloss[:-1]) + list(results) + lr, tags):
- if tb_writer:
- tb_writer.add_scalar(tag, x, epoch) # tensorboard
- if wandb_logger.wandb:
- wandb_logger.log({tag: x}) # W&B
-
- # Update best mAP
- fi = fitness(np.array(results).reshape(1, -1)) # weighted combination of [P, R, mAP@.5, mAP@.5-.95]
- if fi > best_fitness:
- best_fitness = fi
- wandb_logger.end_epoch(best_result=best_fitness == fi)
-
- # Save model
- if (not opt.nosave) or (final_epoch and not opt.evolve): # if save
- ckpt = {'epoch': epoch,
- 'best_fitness': best_fitness,
- 'training_results': results_file.read_text(),
- 'model': deepcopy(model.module if is_parallel(model) else model).half(),
- 'ema': deepcopy(ema.ema).half(),
- 'updates': ema.updates,
- 'optimizer': optimizer.state_dict(),
- 'wandb_id': wandb_logger.wandb_run.id if wandb_logger.wandb else None}
-
- # Save last, best and delete
- torch.save(ckpt, last)
- if best_fitness == fi:
- torch.save(ckpt, best)
- if (best_fitness == fi) and (epoch >= 200):
- torch.save(ckpt, wdir / 'best_{:03d}.pt'.format(epoch))
- if epoch == 0:
- torch.save(ckpt, wdir / 'epoch_{:03d}.pt'.format(epoch))
- elif ((epoch+1) % 25) == 0:
- torch.save(ckpt, wdir / 'epoch_{:03d}.pt'.format(epoch))
- elif epoch >= (epochs-5):
- torch.save(ckpt, wdir / 'epoch_{:03d}.pt'.format(epoch))
- if wandb_logger.wandb:
- if ((epoch + 1) % opt.save_period == 0 and not final_epoch) and opt.save_period != -1:
- wandb_logger.log_model(
- last.parent, opt, epoch, fi, best_model=best_fitness == fi)
- del ckpt
-
- # end epoch ----------------------------------------------------------------------------------------------------
- # end training
- if rank in [-1, 0]:
- # Plots
- if plots:
- plot_results(save_dir=save_dir) # save as results.png
- if wandb_logger.wandb:
- files = ['results.png', 'confusion_matrix.png', *[f'{x}_curve.png' for x in ('F1', 'PR', 'P', 'R')]]
- wandb_logger.log({"Results": [wandb_logger.wandb.Image(str(save_dir / f), caption=f) for f in files
- if (save_dir / f).exists()]})
- # Test best.pt
- logger.info('%g epochs completed in %.3f hours.\n' % (epoch - start_epoch + 1, (time.time() - t0) / 3600))
- if opt.data.endswith('coco.yaml') and nc == 80: # if COCO
-            for m in (last, best) if best.exists() else (last,):  # speed, mAP tests
- results, _, _ = test.test(opt.data,
- batch_size=batch_size * 2,
- imgsz=imgsz_test,
- conf_thres=0.001,
- iou_thres=0.7,
- model=attempt_load(m, device).half(),
- single_cls=opt.single_cls,
- dataloader=testloader,
- save_dir=save_dir,
- save_json=True,
- plots=False,
- is_coco=is_coco,
- v5_metric=opt.v5_metric)
-
- # Strip optimizers
- final = best if best.exists() else last # final model
- for f in last, best:
- if f.exists():
- strip_optimizer(f) # strip optimizers
- if opt.bucket:
- os.system(f'gsutil cp {final} gs://{opt.bucket}/weights') # upload
- if wandb_logger.wandb and not opt.evolve: # Log the stripped model
- wandb_logger.wandb.log_artifact(str(final), type='model',
- name='run_' + wandb_logger.wandb_run.id + '_model',
- aliases=['last', 'best', 'stripped'])
- wandb_logger.finish_run()
- else:
- dist.destroy_process_group()
- torch.cuda.empty_cache()
- return results
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--weights', type=str, default='yolo7.pt', help='initial weights path')
- parser.add_argument('--cfg', type=str, default='', help='model.yaml path')
- parser.add_argument('--data', type=str, default='data/coco.yaml', help='data.yaml path')
- parser.add_argument('--hyp', type=str, default='data/hyp.scratch.p5.yaml', help='hyperparameters path')
- parser.add_argument('--epochs', type=int, default=300)
- parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs')
- parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='[train, test] image sizes')
- parser.add_argument('--rect', action='store_true', help='rectangular training')
- parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
- parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
- parser.add_argument('--notest', action='store_true', help='only test final epoch')
- parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check')
- parser.add_argument('--evolve', action='store_true', help='evolve hyperparameters')
- parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
- parser.add_argument('--cache-images', action='store_true', help='cache images for faster training')
- parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
- parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
- parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
- parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer')
- parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
- parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify')
- parser.add_argument('--workers', type=int, default=8, help='maximum number of dataloader workers')
- parser.add_argument('--project', default='runs/train', help='save to project/name')
- parser.add_argument('--entity', default=None, help='W&B entity')
- parser.add_argument('--name', default='exp', help='save to project/name')
- parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
- parser.add_argument('--quad', action='store_true', help='quad dataloader')
- parser.add_argument('--linear-lr', action='store_true', help='linear LR')
- parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon')
- parser.add_argument('--upload_dataset', action='store_true', help='Upload dataset as W&B artifact table')
- parser.add_argument('--bbox_interval', type=int, default=-1, help='Set bounding-box image logging interval for W&B')
- parser.add_argument('--save_period', type=int, default=-1, help='Log model after every "save_period" epoch')
- parser.add_argument('--artifact_alias', type=str, default="latest", help='version of dataset artifact to be used')
- parser.add_argument('--freeze', nargs='+', type=int, default=[0], help='Freeze layers: backbone of yolov7=50, first3=0 1 2')
- parser.add_argument('--v5-metric', action='store_true', help='assume maximum recall as 1.0 in AP calculation')
- opt = parser.parse_args()
-
- # Set DDP variables
- opt.world_size = int(os.environ['WORLD_SIZE']) if 'WORLD_SIZE' in os.environ else 1
- opt.global_rank = int(os.environ['RANK']) if 'RANK' in os.environ else -1
- set_logging(opt.global_rank)
- #if opt.global_rank in [-1, 0]:
- # check_git_status()
- # check_requirements()
-
- # Resume
- wandb_run = check_wandb_resume(opt)
- if opt.resume and not wandb_run: # resume an interrupted run
- ckpt = opt.resume if isinstance(opt.resume, str) else get_latest_run() # specified or most recent path
- assert os.path.isfile(ckpt), 'ERROR: --resume checkpoint does not exist'
- apriori = opt.global_rank, opt.local_rank
- with open(Path(ckpt).parent.parent / 'opt.yaml') as f:
- opt = argparse.Namespace(**yaml.load(f, Loader=yaml.SafeLoader)) # replace
- opt.cfg, opt.weights, opt.resume, opt.batch_size, opt.global_rank, opt.local_rank = '', ckpt, True, opt.total_batch_size, *apriori # reinstate
- logger.info('Resuming training from %s' % ckpt)
- else:
- # opt.hyp = opt.hyp or ('hyp.finetune.yaml' if opt.weights else 'hyp.scratch.yaml')
- opt.data, opt.cfg, opt.hyp = check_file(opt.data), check_file(opt.cfg), check_file(opt.hyp) # check files
- assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified'
- opt.img_size.extend([opt.img_size[-1]] * (2 - len(opt.img_size))) # extend to 2 sizes (train, test)
- opt.name = 'evolve' if opt.evolve else opt.name
- opt.save_dir = increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok | opt.evolve) # increment run
-
- # DDP mode
- opt.total_batch_size = opt.batch_size
- device = select_device(opt.device, batch_size=opt.batch_size)
- if opt.local_rank != -1:
- assert torch.cuda.device_count() > opt.local_rank
- torch.cuda.set_device(opt.local_rank)
- device = torch.device('cuda', opt.local_rank)
- dist.init_process_group(backend='nccl', init_method='env://') # distributed backend
- assert opt.batch_size % opt.world_size == 0, '--batch-size must be multiple of CUDA device count'
- opt.batch_size = opt.total_batch_size // opt.world_size
-
- # Hyperparameters
- with open(opt.hyp) as f:
- hyp = yaml.load(f, Loader=yaml.SafeLoader) # load hyps
-
- # Train
- logger.info(opt)
- if not opt.evolve:
- tb_writer = None # init loggers
- if opt.global_rank in [-1, 0]:
- prefix = colorstr('tensorboard: ')
- logger.info(f"{prefix}Start with 'tensorboard --logdir {opt.project}', view at http://localhost:6006/")
- tb_writer = SummaryWriter(opt.save_dir) # Tensorboard
- train(hyp, opt, device, tb_writer)
-
- # Evolve hyperparameters (optional)
- else:
- # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit)
- meta = {'lr0': (1, 1e-5, 1e-1), # initial learning rate (SGD=1E-2, Adam=1E-3)
- 'lrf': (1, 0.01, 1.0), # final OneCycleLR learning rate (lr0 * lrf)
- 'momentum': (0.3, 0.6, 0.98), # SGD momentum/Adam beta1
- 'weight_decay': (1, 0.0, 0.001), # optimizer weight decay
- 'warmup_epochs': (1, 0.0, 5.0), # warmup epochs (fractions ok)
- 'warmup_momentum': (1, 0.0, 0.95), # warmup initial momentum
- 'warmup_bias_lr': (1, 0.0, 0.2), # warmup initial bias lr
- 'box': (1, 0.02, 0.2), # box loss gain
- 'cls': (1, 0.2, 4.0), # cls loss gain
- 'cls_pw': (1, 0.5, 2.0), # cls BCELoss positive_weight
- 'obj': (1, 0.2, 4.0), # obj loss gain (scale with pixels)
- 'obj_pw': (1, 0.5, 2.0), # obj BCELoss positive_weight
- 'iou_t': (0, 0.1, 0.7), # IoU training threshold
- 'anchor_t': (1, 2.0, 8.0), # anchor-multiple threshold
- 'anchors': (2, 2.0, 10.0), # anchors per output grid (0 to ignore)
- 'fl_gamma': (0, 0.0, 2.0), # focal loss gamma (efficientDet default gamma=1.5)
- 'hsv_h': (1, 0.0, 0.1), # image HSV-Hue augmentation (fraction)
- 'hsv_s': (1, 0.0, 0.9), # image HSV-Saturation augmentation (fraction)
- 'hsv_v': (1, 0.0, 0.9), # image HSV-Value augmentation (fraction)
- 'degrees': (1, 0.0, 45.0), # image rotation (+/- deg)
- 'translate': (1, 0.0, 0.9), # image translation (+/- fraction)
- 'scale': (1, 0.0, 0.9), # image scale (+/- gain)
- 'shear': (1, 0.0, 10.0), # image shear (+/- deg)
- 'perspective': (0, 0.0, 0.001), # image perspective (+/- fraction), range 0-0.001
- 'flipud': (1, 0.0, 1.0), # image flip up-down (probability)
- 'fliplr': (0, 0.0, 1.0), # image flip left-right (probability)
-                'mosaic': (1, 0.0, 1.0),  # image mosaic (probability)
- 'mixup': (1, 0.0, 1.0), # image mixup (probability)
- 'copy_paste': (1, 0.0, 1.0), # segment copy-paste (probability)
- 'paste_in': (1, 0.0, 1.0)} # segment copy-paste (probability)
-
- with open(opt.hyp, errors='ignore') as f:
- hyp = yaml.safe_load(f) # load hyps dict
- if 'anchors' not in hyp: # anchors commented in hyp.yaml
- hyp['anchors'] = 3
-
- assert opt.local_rank == -1, 'DDP mode not implemented for --evolve'
- opt.notest, opt.nosave = True, True # only test/save final epoch
- # ei = [isinstance(x, (int, float)) for x in hyp.values()] # evolvable indices
- yaml_file = Path(opt.save_dir) / 'hyp_evolved.yaml' # save best result here
- if opt.bucket:
- os.system('gsutil cp gs://%s/evolve.txt .' % opt.bucket) # download evolve.txt if exists
-
- for _ in range(300): # generations to evolve
- if Path('evolve.txt').exists(): # if evolve.txt exists: select best hyps and mutate
- # Select parent(s)
- parent = 'single' # parent selection method: 'single' or 'weighted'
- x = np.loadtxt('evolve.txt', ndmin=2)
- n = min(5, len(x)) # number of previous results to consider
- x = x[np.argsort(-fitness(x))][:n] # top n mutations
- w = fitness(x) - fitness(x).min() # weights
- if parent == 'single' or len(x) == 1:
- # x = x[random.randint(0, n - 1)] # random selection
- x = x[random.choices(range(n), weights=w)[0]] # weighted selection
- elif parent == 'weighted':
- x = (x * w.reshape(n, 1)).sum(0) / w.sum() # weighted combination
-
- # Mutate
- mp, s = 0.8, 0.2 # mutation probability, sigma
- npr = np.random
- npr.seed(int(time.time()))
- g = np.array([x[0] for x in meta.values()]) # gains 0-1
- ng = len(meta)
- v = np.ones(ng)
- while all(v == 1): # mutate until a change occurs (prevent duplicates)
- v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0)
- for i, k in enumerate(hyp.keys()): # plt.hist(v.ravel(), 300)
- hyp[k] = float(x[i + 7] * v[i]) # mutate
-
- # Constrain to limits
- for k, v in meta.items():
- hyp[k] = max(hyp[k], v[1]) # lower limit
- hyp[k] = min(hyp[k], v[2]) # upper limit
- hyp[k] = round(hyp[k], 5) # significant digits
-
- # Train mutation
- results = train(hyp.copy(), opt, device)
-
- # Write mutation results
- print_mutation(hyp.copy(), results, yaml_file, opt.bucket)
-
- # Plot results
- plot_evolution(yaml_file)
- print(f'Hyperparameter evolution complete. Best results saved as: {yaml_file}\n'
- f'Command to train a new model with these hyperparameters: $ python train.py --hyp {yaml_file}')
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/confirmdialog/methods/Methods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/confirmdialog/methods/Methods.js
deleted file mode 100644
index de92c508ece267fba20b4c9aa00d10dc17d75f3f..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/confirmdialog/methods/Methods.js
+++ /dev/null
@@ -1,9 +0,0 @@
-import ResetDisplayContent from './ResetDisplayContent.js';
-import Modal from './Modal.js';
-
-var Methods = {
- resetDisplayContent: ResetDisplayContent,
- modal: Modal,
-}
-
-export default Methods;
\ No newline at end of file
diff --git a/spaces/AlexWang/lama/bin/paper_runfiles/generate_test_paris.sh b/spaces/AlexWang/lama/bin/paper_runfiles/generate_test_paris.sh
deleted file mode 100644
index 66056017c3aa376ef0767a59583ab25a321b559b..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/bin/paper_runfiles/generate_test_paris.sh
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/usr/bin/env bash
-
-# paths to data are valid for mml-ws01
-OUT_DIR="/media/inpainting/paper_data/Paris_StreetView_Dataset_val"
-
-source "$(dirname $0)/env.sh"
-
-for datadir in paris_eval_gt
-do
- for conf in random_thin_256 random_medium_256 random_thick_256 segm_256
- do
- "$BINDIR/gen_mask_dataset_hydra.py" -cn $conf datadir=$datadir location=mml-ws01-paris \
-     location.out_dir=$OUT_DIR cropping.out_square_crop=False cropping.out_min_size=227
-
- "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats"
- done
-done
diff --git a/spaces/AlexWang/lama/models/ade20k/segm_lib/nn/modules/tests/test_numeric_batchnorm.py b/spaces/AlexWang/lama/models/ade20k/segm_lib/nn/modules/tests/test_numeric_batchnorm.py
deleted file mode 100644
index 8bd45a930d3dc84912e58659ee575be08e9038f0..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/models/ade20k/segm_lib/nn/modules/tests/test_numeric_batchnorm.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# -*- coding: utf-8 -*-
-# File : test_numeric_batchnorm.py
-# Author : Jiayuan Mao
-# Email : maojiayuan@gmail.com
-# Date : 27/01/2018
-#
-# This file is part of Synchronized-BatchNorm-PyTorch.
-
-import unittest
-
-import torch
-import torch.nn as nn
-from torch.autograd import Variable
-
-from sync_batchnorm.unittest import TorchTestCase
-
-
-def handy_var(a, unbias=True):
- n = a.size(0)
- asum = a.sum(dim=0)
- as_sum = (a ** 2).sum(dim=0) # a square sum
- sumvar = as_sum - asum * asum / n
- if unbias:
- return sumvar / (n - 1)
- else:
- return sumvar / n
-
-
-class NumericTestCase(TorchTestCase):
- def testNumericBatchNorm(self):
- a = torch.rand(16, 10)
-        bn = nn.BatchNorm1d(10, momentum=1, eps=1e-5, affine=False)  # BatchNorm1d for 2D (N, C) input
- bn.train()
-
- a_var1 = Variable(a, requires_grad=True)
- b_var1 = bn(a_var1)
- loss1 = b_var1.sum()
- loss1.backward()
-
- a_var2 = Variable(a, requires_grad=True)
- a_mean2 = a_var2.mean(dim=0, keepdim=True)
- a_std2 = torch.sqrt(handy_var(a_var2, unbias=False).clamp(min=1e-5))
- # a_std2 = torch.sqrt(a_var2.var(dim=0, keepdim=True, unbiased=False) + 1e-5)
- b_var2 = (a_var2 - a_mean2) / a_std2
- loss2 = b_var2.sum()
- loss2.backward()
-
- self.assertTensorClose(bn.running_mean, a.mean(dim=0))
- self.assertTensorClose(bn.running_var, handy_var(a))
- self.assertTensorClose(a_var1.data, a_var2.data)
- self.assertTensorClose(b_var1.data, b_var2.data)
- self.assertTensorClose(a_var1.grad, a_var2.grad)
-
-
-if __name__ == '__main__':
- unittest.main()
diff --git a/spaces/AlexWang/lama/models/ade20k/segm_lib/nn/modules/unittest.py b/spaces/AlexWang/lama/models/ade20k/segm_lib/nn/modules/unittest.py
deleted file mode 100644
index 0675c022e4ba85d38d1f813490f6740150909524..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/models/ade20k/segm_lib/nn/modules/unittest.py
+++ /dev/null
@@ -1,29 +0,0 @@
-# -*- coding: utf-8 -*-
-# File : unittest.py
-# Author : Jiayuan Mao
-# Email : maojiayuan@gmail.com
-# Date : 27/01/2018
-#
-# This file is part of Synchronized-BatchNorm-PyTorch.
-# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
-# Distributed under MIT License.
-
-import unittest
-
-import numpy as np
-from torch.autograd import Variable
-
-
-def as_numpy(v):
- if isinstance(v, Variable):
- v = v.data
- return v.cpu().numpy()
-
-
-class TorchTestCase(unittest.TestCase):
- def assertTensorClose(self, a, b, atol=1e-3, rtol=1e-3):
- npa, npb = as_numpy(a), as_numpy(b)
- self.assertTrue(
- np.allclose(npa, npb, atol=atol),
- 'Tensor close check failed\n{}\n{}\nadiff={}, rdiff={}'.format(a, b, np.abs(npa - npb).max(), np.abs((npa - npb) / np.fmax(npa, 1e-5)).max())
- )
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/unidiffuser.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/unidiffuser.md
deleted file mode 100644
index ff8f4e7c6ec9819a9505bf9aa8793220363cdee6..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/unidiffuser.md
+++ /dev/null
@@ -1,194 +0,0 @@
-
-
-# UniDiffuser
-
-The UniDiffuser model was proposed in [One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale](https://huggingface.co/papers/2303.06555) by Fan Bao, Shen Nie, Kaiwen Xue, Chongxuan Li, Shi Pu, Yaole Wang, Gang Yue, Yue Cao, Hang Su, Jun Zhu.
-
-The abstract from the [paper](https://arxiv.org/abs/2303.06555) is:
-
-*This paper proposes a unified diffusion framework (dubbed UniDiffuser) to fit all distributions relevant to a set of multi-modal data in one model. Our key insight is -- learning diffusion models for marginal, conditional, and joint distributions can be unified as predicting the noise in the perturbed data, where the perturbation levels (i.e. timesteps) can be different for different modalities. Inspired by the unified view, UniDiffuser learns all distributions simultaneously with a minimal modification to the original diffusion model -- perturbs data in all modalities instead of a single modality, inputs individual timesteps in different modalities, and predicts the noise of all modalities instead of a single modality. UniDiffuser is parameterized by a transformer for diffusion models to handle input types of different modalities. Implemented on large-scale paired image-text data, UniDiffuser is able to perform image, text, text-to-image, image-to-text, and image-text pair generation by setting proper timesteps without additional overhead. In particular, UniDiffuser is able to produce perceptually realistic samples in all tasks and its quantitative results (e.g., the FID and CLIP score) are not only superior to existing general-purpose models but also comparable to the bespoken models (e.g., Stable Diffusion and DALL-E 2) in representative tasks (e.g., text-to-image generation).*
-
-You can find the original codebase at [thu-ml/unidiffuser](https://github.com/thu-ml/unidiffuser) and additional checkpoints at [thu-ml](https://huggingface.co/thu-ml).
-
-This pipeline was contributed by [dg845](https://github.com/dg845). ❤️
-
-## Usage Examples
-
-Because the UniDiffuser model is trained to model the joint distribution of (image, text) pairs, it is capable of performing a diverse range of generation tasks:
-
-### Unconditional Image and Text Generation
-
-Unconditional generation (where we start from only latents sampled from a standard Gaussian prior) from a [`UniDiffuserPipeline`] will produce a (image, text) pair:
-
-```python
-import torch
-
-from diffusers import UniDiffuserPipeline
-
-device = "cuda"
-model_id_or_path = "thu-ml/unidiffuser-v1"
-pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
-pipe.to(device)
-
-# Unconditional image and text generation. The generation task is automatically inferred.
-sample = pipe(num_inference_steps=20, guidance_scale=8.0)
-image = sample.images[0]
-text = sample.text[0]
-image.save("unidiffuser_joint_sample_image.png")
-print(text)
-```
-
-This is also called "joint" generation in the UniDiffuser paper, since we are sampling from the joint image-text distribution.
-
-Note that the generation task is inferred from the inputs used when calling the pipeline.
-It is also possible to specify the unconditional generation task ("mode") manually with [`UniDiffuserPipeline.set_joint_mode`]:
-
-```python
-# Equivalent to the above.
-pipe.set_joint_mode()
-sample = pipe(num_inference_steps=20, guidance_scale=8.0)
-```
-
-When the mode is set manually, subsequent calls to the pipeline will use the set mode without attempting to infer the mode.
-You can reset the mode with [`UniDiffuserPipeline.reset_mode`], after which the pipeline will once again infer the mode.
-
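-For example, a small sketch (reusing the `pipe` object from the snippet above) that pins the mode and then clears it:
-
-```python
-pipe.set_joint_mode()
-sample = pipe(num_inference_steps=20, guidance_scale=8.0)  # always joint generation while pinned
-pipe.reset_mode()  # later calls infer the mode from their inputs again
-```
-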
-You can also generate only an image or only text (which the UniDiffuser paper calls "marginal" generation since we sample from the marginal distribution of images and text, respectively):
-
-```python
-# Unlike other generation tasks, image-only and text-only generation don't use classifier-free guidance
-# Image-only generation
-pipe.set_image_mode()
-sample_image = pipe(num_inference_steps=20).images[0]
-# Text-only generation
-pipe.set_text_mode()
-sample_text = pipe(num_inference_steps=20).text[0]
-```
-
-### Text-to-Image Generation
-
-UniDiffuser is also capable of sampling from conditional distributions; that is, the distribution of images conditioned on a text prompt or the distribution of texts conditioned on an image.
-Here is an example of sampling from the conditional image distribution (text-to-image generation or text-conditioned image generation):
-
-```python
-import torch
-
-from diffusers import UniDiffuserPipeline
-
-device = "cuda"
-model_id_or_path = "thu-ml/unidiffuser-v1"
-pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
-pipe.to(device)
-
-# Text-to-image generation
-prompt = "an elephant under the sea"
-
-sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0)
-t2i_image = sample.images[0]
-t2i_image.save("unidiffuser_text2img_sample_image.png")
-```
-
-The `text2img` mode requires that either an input `prompt` or `prompt_embeds` be supplied. You can set the `text2img` mode manually with [`UniDiffuserPipeline.set_text_to_image_mode`].
-
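-As a small sketch (reusing `pipe` and `prompt` from the snippet above), the mode can also be pinned explicitly before the call:
-
-```python
-pipe.set_text_to_image_mode()
-sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0)
-t2i_image = sample.images[0]
-```
-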
-### Image-to-Text Generation
-
-Similarly, UniDiffuser can also produce text samples given an image (image-to-text or image-conditioned text generation):
-
-```python
-import torch
-
-from diffusers import UniDiffuserPipeline
-from diffusers.utils import load_image
-
-device = "cuda"
-model_id_or_path = "thu-ml/unidiffuser-v1"
-pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
-pipe.to(device)
-
-# Image-to-text generation
-image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg"
-init_image = load_image(image_url).resize((512, 512))
-
-sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0)
-i2t_text = sample.text[0]
-print(i2t_text)
-```
-
-The `img2text` mode requires that an input `image` be supplied. You can set the `img2text` mode manually with [`UniDiffuserPipeline.set_image_to_text_mode`].
-
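-A corresponding sketch (reusing `pipe` and `init_image` from the snippet above):
-
-```python
-pipe.set_image_to_text_mode()
-sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0)
-i2t_text = sample.text[0]
-```
-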
-### Image Variation
-
-The UniDiffuser authors suggest performing image variation through a "round-trip" generation method, where given an input image, we first perform an image-to-text generation, and then perform a text-to-image generation on the outputs of the first generation.
-This produces a new image which is semantically similar to the input image:
-
-```python
-import torch
-
-from diffusers import UniDiffuserPipeline
-from diffusers.utils import load_image
-
-device = "cuda"
-model_id_or_path = "thu-ml/unidiffuser-v1"
-pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
-pipe.to(device)
-
-# Image variation can be performed with an image-to-text generation followed by a text-to-image generation:
-# 1. Image-to-text generation
-image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg"
-init_image = load_image(image_url).resize((512, 512))
-
-sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0)
-i2t_text = sample.text[0]
-print(i2t_text)
-
-# 2. Text-to-image generation
-sample = pipe(prompt=i2t_text, num_inference_steps=20, guidance_scale=8.0)
-final_image = sample.images[0]
-final_image.save("unidiffuser_image_variation_sample.png")
-```
-
-### Text Variation
-
-
-Similarly, text variation can be performed on an input prompt with a text-to-image generation followed by an image-to-text generation:
-
-```python
-import torch
-
-from diffusers import UniDiffuserPipeline
-
-device = "cuda"
-model_id_or_path = "thu-ml/unidiffuser-v1"
-pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
-pipe.to(device)
-
-# Text variation can be performed with a text-to-image generation followed by an image-to-text generation:
-# 1. Text-to-image generation
-prompt = "an elephant under the sea"
-
-sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0)
-t2i_image = sample.images[0]
-t2i_image.save("unidiffuser_text2img_sample_image.png")
-
-# 2. Image-to-text generation
-sample = pipe(image=t2i_image, num_inference_steps=20, guidance_scale=8.0)
-final_prompt = sample.text[0]
-print(final_prompt)
-```
-
-## UniDiffuserPipeline
-[[autodoc]] UniDiffuserPipeline
- - all
- - __call__
-
-## ImageTextPipelineOutput
-[[autodoc]] pipelines.ImageTextPipelineOutput
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/using-diffusers/using_safetensors.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/using-diffusers/using_safetensors.md
deleted file mode 100644
index 6972103bde102b37679fac338dd3cd8489ca2f40..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/using-diffusers/using_safetensors.md
+++ /dev/null
@@ -1,14 +0,0 @@
-# What are safetensors?
-
-[Safetensors](https://github.com/huggingface/safetensors) is a different format from the traditional `.bin` files used by PyTorch, which rely on pickle.
-
-Pickle is notoriously unsafe: a malicious file can execute arbitrary code.
-The Hub itself works to prevent such problems, but it is not a silver bullet.
-
-The most important goal of safetensors is to make loading machine learning models *safe*, in the sense that a model file cannot take over your computer.
-
-# Why use safetensors?
-
-**Safety** is one reason, if you want to use a model that is not well known and you are not sure where the file came from.
-
-The second reason is **loading speed**. Safetensors can load models much faster than regular pickle files. If you spend a lot of time switching between models, this can be a huge time saver.
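-
-As a minimal sketch (assuming the `diffusers` library and a checkpoint that publishes `.safetensors` weights; the model id below is only an example), you can opt into safetensors when loading a pipeline:
-
-```python
-import torch
-from diffusers import DiffusionPipeline
-
-# use_safetensors=True loads the .safetensors weights and errors out instead of
-# silently falling back to pickle-based .bin files
-pipe = DiffusionPipeline.from_pretrained(
-    "runwayml/stable-diffusion-v1-5",
-    torch_dtype=torch.float16,
-    use_safetensors=True,
-)
-```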
\ No newline at end of file
diff --git a/spaces/Andy1621/IAT_enhancement/model/IAT.py b/spaces/Andy1621/IAT_enhancement/model/IAT.py
deleted file mode 100644
index fc293f7ccb3a437cbc56ba3e918f77b4515ad094..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/IAT_enhancement/model/IAT.py
+++ /dev/null
@@ -1,126 +0,0 @@
-import torch
-import numpy as np
-from torch import nn
-import torch.nn.functional as F
-import os
-import math
-
-from timm.models.layers import trunc_normal_
-from .blocks import CBlock_ln, SwinTransformerBlock
-from .global_net import Global_pred
-
-
-class Local_pred(nn.Module):
- def __init__(self, dim=16, number=4, type='ccc'):
- super(Local_pred, self).__init__()
- # initial convolution
- self.conv1 = nn.Conv2d(3, dim, 3, padding=1, groups=1)
- self.relu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
- # main blocks
- block = CBlock_ln(dim)
- block_t = SwinTransformerBlock(dim) # head number
-        if type == 'ccc':
- #blocks1, blocks2 = [block for _ in range(number)], [block for _ in range(number)]
- blocks1 = [CBlock_ln(16, drop_path=0.01), CBlock_ln(16, drop_path=0.05), CBlock_ln(16, drop_path=0.1)]
- blocks2 = [CBlock_ln(16, drop_path=0.01), CBlock_ln(16, drop_path=0.05), CBlock_ln(16, drop_path=0.1)]
-        elif type == 'ttt':
- blocks1, blocks2 = [block_t for _ in range(number)], [block_t for _ in range(number)]
-        elif type == 'cct':
- blocks1, blocks2 = [block, block, block_t], [block, block, block_t]
- # block1 = [CBlock_ln(16), nn.Conv2d(16,24,3,1,1)]
- self.mul_blocks = nn.Sequential(*blocks1, nn.Conv2d(dim, 3, 3, 1, 1), nn.ReLU())
- self.add_blocks = nn.Sequential(*blocks2, nn.Conv2d(dim, 3, 3, 1, 1), nn.Tanh())
-
- def forward(self, img):
- img1 = self.relu(self.conv1(img))
- mul = self.mul_blocks(img1)
- add = self.add_blocks(img1)
- return mul, add
-
-
-# Short Cut Connection on Final Layer
-class Local_pred_S(nn.Module):
- def __init__(self, in_dim=3, dim=16, number=4, type='ccc'):
- super(Local_pred_S, self).__init__()
- # initial convolution
- self.conv1 = nn.Conv2d(in_dim, dim, 3, padding=1, groups=1)
- self.relu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
- # main blocks
- block = CBlock_ln(dim)
- block_t = SwinTransformerBlock(dim) # head number
-        if type == 'ccc':
- blocks1 = [CBlock_ln(16, drop_path=0.01), CBlock_ln(16, drop_path=0.05), CBlock_ln(16, drop_path=0.1)]
- blocks2 = [CBlock_ln(16, drop_path=0.01), CBlock_ln(16, drop_path=0.05), CBlock_ln(16, drop_path=0.1)]
-        elif type == 'ttt':
- blocks1, blocks2 = [block_t for _ in range(number)], [block_t for _ in range(number)]
-        elif type == 'cct':
- blocks1, blocks2 = [block, block, block_t], [block, block, block_t]
- # block1 = [CBlock_ln(16), nn.Conv2d(16,24,3,1,1)]
- self.mul_blocks = nn.Sequential(*blocks1)
- self.add_blocks = nn.Sequential(*blocks2)
-
- self.mul_end = nn.Sequential(nn.Conv2d(dim, 3, 3, 1, 1), nn.ReLU())
- self.add_end = nn.Sequential(nn.Conv2d(dim, 3, 3, 1, 1), nn.Tanh())
- self.apply(self._init_weights)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
- elif isinstance(m, nn.Conv2d):
- fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
- fan_out //= m.groups
- m.weight.data.normal_(0, math.sqrt(2.0 / fan_out))
- if m.bias is not None:
- m.bias.data.zero_()
-
- def forward(self, img):
- img1 = self.relu(self.conv1(img))
- # short cut connection
- mul = self.mul_blocks(img1) + img1
- add = self.add_blocks(img1) + img1
- mul = self.mul_end(mul)
- add = self.add_end(add)
- return mul, add
-
-
-class IAT(nn.Module):
- def __init__(self, in_dim=3, with_global=True, type='lol'):
- super(IAT, self).__init__()
- self.local_net = Local_pred_S(in_dim=in_dim)
- self.with_global = with_global
- if self.with_global:
- self.global_net = Global_pred(in_channels=in_dim, type=type)
-
- def apply_color(self, image, ccm):
- shape = image.shape
- image = image.view(-1, 3)
- image = torch.tensordot(image, ccm, dims=[[-1], [-1]])
- image = image.view(shape)
- return torch.clamp(image, 1e-8, 1.0)
-
- def forward(self, img_low):
- #print(self.with_global)
- mul, add = self.local_net(img_low)
- img_high = (img_low.mul(mul)).add(add)
-
- if not self.with_global:
- return img_high
- else:
- gamma, color = self.global_net(img_low)
- b = img_high.shape[0]
- img_high = img_high.permute(0, 2, 3, 1) # (B,C,H,W) -- (B,H,W,C)
- img_high = torch.stack([self.apply_color(img_high[i,:,:,:], color[i,:,:])**gamma[i,:] for i in range(b)], dim=0)
- img_high = img_high.permute(0, 3, 1, 2) # (B,H,W,C) -- (B,C,H,W)
- return img_high
-
-
-if __name__ == "__main__":
- img = torch.Tensor(1, 3, 400, 600)
- net = IAT()
- print('total parameters:', sum(param.numel() for param in net.parameters()))
- high = net(img)
\ No newline at end of file
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/dcn/faster_rcnn_r50_fpn_mdconv_c3-c5_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/dcn/faster_rcnn_r50_fpn_mdconv_c3-c5_1x_coco.py
deleted file mode 100644
index d1bcf3c102fb660641eda2a1398db3df520caa3a..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/dcn/faster_rcnn_r50_fpn_mdconv_c3-c5_1x_coco.py
+++ /dev/null
@@ -1,5 +0,0 @@
-_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
-model = dict(
- backbone=dict(
- dcn=dict(type='DCNv2', deform_groups=1, fallback_on_stride=False),
- stage_with_dcn=(False, True, True, True)))
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/samplers/instance_balanced_pos_sampler.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/samplers/instance_balanced_pos_sampler.py
deleted file mode 100644
index c735298487e14e4a0ec42913f25673cccb98a8a0..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/samplers/instance_balanced_pos_sampler.py
+++ /dev/null
@@ -1,55 +0,0 @@
-import numpy as np
-import torch
-
-from ..builder import BBOX_SAMPLERS
-from .random_sampler import RandomSampler
-
-
-@BBOX_SAMPLERS.register_module()
-class InstanceBalancedPosSampler(RandomSampler):
- """Instance balanced sampler that samples equal number of positive samples
- for each instance."""
-
- def _sample_pos(self, assign_result, num_expected, **kwargs):
- """Sample positive boxes.
-
- Args:
- assign_result (:obj:`AssignResult`): The assigned results of boxes.
- num_expected (int): The number of expected positive samples
-
- Returns:
- Tensor or ndarray: sampled indices.
- """
- pos_inds = torch.nonzero(assign_result.gt_inds > 0, as_tuple=False)
- if pos_inds.numel() != 0:
- pos_inds = pos_inds.squeeze(1)
- if pos_inds.numel() <= num_expected:
- return pos_inds
- else:
- unique_gt_inds = assign_result.gt_inds[pos_inds].unique()
- num_gts = len(unique_gt_inds)
- num_per_gt = int(round(num_expected / float(num_gts)) + 1)
- sampled_inds = []
- for i in unique_gt_inds:
- inds = torch.nonzero(
- assign_result.gt_inds == i.item(), as_tuple=False)
- if inds.numel() != 0:
- inds = inds.squeeze(1)
- else:
- continue
- if len(inds) > num_per_gt:
- inds = self.random_choice(inds, num_per_gt)
- sampled_inds.append(inds)
- sampled_inds = torch.cat(sampled_inds)
- if len(sampled_inds) < num_expected:
- num_extra = num_expected - len(sampled_inds)
- extra_inds = np.array(
- list(set(pos_inds.cpu()) - set(sampled_inds.cpu())))
- if len(extra_inds) > num_extra:
- extra_inds = self.random_choice(extra_inds, num_extra)
- extra_inds = torch.from_numpy(extra_inds).to(
- assign_result.gt_inds.device).long()
- sampled_inds = torch.cat([sampled_inds, extra_inds])
- elif len(sampled_inds) > num_expected:
- sampled_inds = self.random_choice(sampled_inds, num_expected)
- return sampled_inds
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/pspnet_r50-d8.py b/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/pspnet_r50-d8.py
deleted file mode 100644
index f451e08ad2eb0732dcb806b1851eb978d4acf136..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/pspnet_r50-d8.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='PSPHead',
- in_channels=2048,
- in_index=3,
- channels=512,
- pool_scales=(1, 2, 3, 6),
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_769x769_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_769x769_40k_cityscapes.py
deleted file mode 100644
index e6d58a67b3b4dddf3da42efca30fa599e623f183..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_769x769_40k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './deeplabv3_r50-d8_769x769_40k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/AndyCer/TehVenom-MPT-7b-Chat-Instruct-LongCTX-Merge/app.py b/spaces/AndyCer/TehVenom-MPT-7b-Chat-Instruct-LongCTX-Merge/app.py
deleted file mode 100644
index cd31e220114a75cdb8f8a860618ddac423ade9af..0000000000000000000000000000000000000000
--- a/spaces/AndyCer/TehVenom-MPT-7b-Chat-Instruct-LongCTX-Merge/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/TehVenom/MPT-7b-Chat-Instruct-LongCTX-Merge").launch()
\ No newline at end of file
diff --git a/spaces/AnishKumbhar/DogDiseasePredictor/Dockerfile b/spaces/AnishKumbhar/DogDiseasePredictor/Dockerfile
deleted file mode 100644
index 55b67bca9bb7304b8ab0898ff9d4c82002dcd53b..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/DogDiseasePredictor/Dockerfile
+++ /dev/null
@@ -1,27 +0,0 @@
-# Use the official Python 3.9 image
-FROM python:3.9
-
-# Set the working directory to /code
-WORKDIR /code
-
-# Copy the requirements file into the container at /code
-COPY ./requirements.txt /code/requirements.txt
-
-# Install requirements.txt
-RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
-
-# Set up a new user named "user" with user ID 1000
-RUN useradd -m -u 1000 user
-# Switch to the "user" user
-USER user
-# Set home to the user's home directory
-ENV HOME=/home/user \
- PATH=/home/user/.local/bin:$PATH
-
-# Set the working directory to the user's home directory
-WORKDIR $HOME/app
-
-# Copy the current directory contents into the container at $HOME/app setting the owner to the user
-COPY --chown=user . $HOME/app
-
-CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
\ No newline at end of file
diff --git a/spaces/AnjaneyuluChinni/AnjiChinniGenAIAvatar/README.md b/spaces/AnjaneyuluChinni/AnjiChinniGenAIAvatar/README.md
deleted file mode 100644
index 7a107ff60cbaf32fcaf6a132f30e2076138cebd4..0000000000000000000000000000000000000000
--- a/spaces/AnjaneyuluChinni/AnjiChinniGenAIAvatar/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: AnjiChinniGenAIAvatar
-emoji: 🦀
-colorFrom: blue
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Artrajz/vits-simple-api/config.py b/spaces/Artrajz/vits-simple-api/config.py
deleted file mode 100644
index 3fb65b82ceac7fe1f83a61b171bec7c06d7cd972..0000000000000000000000000000000000000000
--- a/spaces/Artrajz/vits-simple-api/config.py
+++ /dev/null
@@ -1,109 +0,0 @@
-import os
-import sys
-
-import torch
-
-JSON_AS_ASCII = False
-
-MAX_CONTENT_LENGTH = 5242880
-
-# Flask debug mode
-DEBUG = False
-
-# Server port
-PORT = 7860
-
-# Absolute path of vits-simple-api
-ABS_PATH = os.path.dirname(os.path.realpath(__file__))
-
-# Upload path
-UPLOAD_FOLDER = ABS_PATH + "/upload"
-
-# Cache path
-CACHE_PATH = ABS_PATH + "/cache"
-
-# Logs path
-LOGS_PATH = ABS_PATH + "/logs"
-
-# Set the number of backup log files to keep.
-LOGS_BACKUPCOUNT = 30
-
-# If CLEAN_INTERVAL_SECONDS <= 0, the cleaning task will not be executed.
-CLEAN_INTERVAL_SECONDS = 3600
-
-# save audio to CACHE_PATH
-SAVE_AUDIO = False
-
-# zh ja ko en... If it is empty, it will be read based on the text_cleaners specified in the config.json.
-LANGUAGE_AUTOMATIC_DETECT = []
-
-# Set to True to enable API Key authentication
-API_KEY_ENABLED = False
-
-# API_KEY is required for authentication
-API_KEY = "api-key"
-
-# logging_level:DEBUG/INFO/WARNING/ERROR/CRITICAL
-LOGGING_LEVEL = "DEBUG"
-
-# Language identification library. Options: fastlid, langid
-LANGUAGE_IDENTIFICATION_LIBRARY = "langid"
-
-# To use the english_cleaner, you need to install espeak and provide the path of libespeak-ng.dll as input here.
-# If ESPEAK_LIBRARY is set to empty, it will be read from the environment variable.
-# For windows : "C:/Program Files/eSpeak NG/libespeak-ng.dll"
-ESPEAK_LIBRARY = ""
-
-# Fill in the model path here
-MODEL_LIST = [
- # VITS
- [ABS_PATH + "/Model/Nene_Nanami_Rong_Tang/1374_epochs.pth", ABS_PATH + "/Model/Nene_Nanami_Rong_Tang/config.json"],
- [ABS_PATH + "/Model/vctk/pretrained_vctk.pth", ABS_PATH + "/Model/vctk/vctk_base.json"],
- [ABS_PATH + "/Model/paimon/paimon6k_390000.pth", ABS_PATH + "/Model/paimon/paimon6k.json"],
- [ABS_PATH + "/Model/vits_chinese/vits_bert_model.pth", ABS_PATH + "/Model/vits_chinese/bert_vits.json"],
- [ABS_PATH + "/Model/Bishojo_Mangekyo/generator_mangekyo.pth", ABS_PATH + "/Model/Bishojo_Mangekyo/config_mangekyo.json"],
- [ABS_PATH + "/Model/Cantonese/model.pth", ABS_PATH + "/Model/Cantonese/config.json"],
- [ABS_PATH + "/Model/shanghainese/2796_epochs.pth", ABS_PATH + "/Model/shanghainese/config.json"],
- [ABS_PATH + "/Model/genshin/G_953000.pth", ABS_PATH + "/Model/genshin/config.json"],
- # HuBert-VITS (Need to configure HUBERT_SOFT_MODEL)
- [ABS_PATH + "/Model/louise/360_epochs.pth", ABS_PATH + "/Model/louise/config.json"],
- # W2V2-VITS (Need to configure DIMENSIONAL_EMOTION_NPY)
- [ABS_PATH + "/Model/w2v2-vits/1026_epochs.pth", ABS_PATH + "/Model/w2v2-vits/config.json"],
-]
-
-# hubert-vits: hubert soft model
-HUBERT_SOFT_MODEL = ABS_PATH + "/Model/hubert-soft-0d54a1f4.pt"
-
-# w2v2-vits: Dimensional emotion npy file
-# load a single npy: ABS_PATH + "/all_emotions.npy"
-# load multiple npy: [ABS_PATH + "/emotions1.npy", ABS_PATH + "/emotions2.npy"]
-# load multiple npy from a folder: ABS_PATH + "/Model/npy"
-DIMENSIONAL_EMOTION_NPY = ABS_PATH + "/Model/npy"
-
-# w2v2-vits: Need to have both `model.onnx` and `model.yaml` files in the same path.
-# DIMENSIONAL_EMOTION_MODEL = ABS_PATH + "/Model/model.yaml"
-
-DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-
-"""
-Default parameter
-"""
-
-ID = 0
-
-FORMAT = "wav"
-
-LANG = "AUTO"
-
-LENGTH = 1
-
-NOISE = 0.33
-
-NOISEW = 0.4
-
-# Long-text segmentation threshold. Text will not be split into batches if max <= 0.
-MAX = 50
-
-# Bert_VITS2
-SDP_RATIO = 0.2
diff --git a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/util/logger.py b/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/util/logger.py
deleted file mode 100644
index 18145f54c927abd59b95f3fa6e6da8002bc2ce97..0000000000000000000000000000000000000000
--- a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/util/logger.py
+++ /dev/null
@@ -1,93 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import functools
-import logging
-import os
-import sys
-
-from termcolor import colored
-
-
-class _ColorfulFormatter(logging.Formatter):
- def __init__(self, *args, **kwargs):
- self._root_name = kwargs.pop("root_name") + "."
- self._abbrev_name = kwargs.pop("abbrev_name", "")
- if len(self._abbrev_name):
- self._abbrev_name = self._abbrev_name + "."
- super(_ColorfulFormatter, self).__init__(*args, **kwargs)
-
- def formatMessage(self, record):
- record.name = record.name.replace(self._root_name, self._abbrev_name)
- log = super(_ColorfulFormatter, self).formatMessage(record)
- if record.levelno == logging.WARNING:
- prefix = colored("WARNING", "red", attrs=["blink"])
- elif record.levelno == logging.ERROR or record.levelno == logging.CRITICAL:
- prefix = colored("ERROR", "red", attrs=["blink", "underline"])
- else:
- return log
- return prefix + " " + log
-
-
-# so that calling setup_logger multiple times won't add many handlers
-@functools.lru_cache()
-def setup_logger(output=None, distributed_rank=0, *, color=True, name="imagenet", abbrev_name=None):
- """
- Initialize the detectron2 logger and set its verbosity level to "INFO".
-
- Args:
- output (str): a file name or a directory to save log. If None, will not save log file.
- If ends with ".txt" or ".log", assumed to be a file name.
- Otherwise, logs will be saved to `output/log.txt`.
- name (str): the root module name of this logger
-
- Returns:
- logging.Logger: a logger
- """
- logger = logging.getLogger(name)
- logger.setLevel(logging.DEBUG)
- logger.propagate = False
-
- if abbrev_name is None:
- abbrev_name = name
-
- plain_formatter = logging.Formatter(
- "[%(asctime)s.%(msecs)03d]: %(message)s", datefmt="%m/%d %H:%M:%S"
- )
- # stdout logging: master only
- if distributed_rank == 0:
- ch = logging.StreamHandler(stream=sys.stdout)
- ch.setLevel(logging.DEBUG)
- if color:
- formatter = _ColorfulFormatter(
- colored("[%(asctime)s.%(msecs)03d]: ", "green") + "%(message)s",
- datefmt="%m/%d %H:%M:%S",
- root_name=name,
- abbrev_name=str(abbrev_name),
- )
- else:
- formatter = plain_formatter
- ch.setFormatter(formatter)
- logger.addHandler(ch)
-
- # file logging: all workers
- if output is not None:
- if output.endswith(".txt") or output.endswith(".log"):
- filename = output
- else:
- filename = os.path.join(output, "log.txt")
- if distributed_rank > 0:
- filename = filename + f".rank{distributed_rank}"
- os.makedirs(os.path.dirname(filename), exist_ok=True)
-
- fh = logging.StreamHandler(_cached_log_stream(filename))
- fh.setLevel(logging.DEBUG)
- fh.setFormatter(plain_formatter)
- logger.addHandler(fh)
-
- return logger
-
-
-# cache the opened file object, so that different calls to `setup_logger`
-# with the same file name can safely write to the same file.
-@functools.lru_cache(maxsize=None)
-def _cached_log_stream(filename):
- return open(filename, "a")
diff --git a/spaces/Asahi402/White-box-Cartoonization/README.md b/spaces/Asahi402/White-box-Cartoonization/README.md
deleted file mode 100644
index 9860239cf42c94e385faaaa75a85311e010d64f7..0000000000000000000000000000000000000000
--- a/spaces/Asahi402/White-box-Cartoonization/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-python_version: 3.7
-title: White Box Cartoonization
-emoji: 📚
-colorFrom: purple
-colorTo: green
-sdk: gradio
-sdk_version: 2.9.4
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: hylee/White-box-Cartoonization
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/gb2312freq.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/gb2312freq.py
deleted file mode 100644
index b32bfc74213d93d434f1f3a47cb5d7d0bf4863d3..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/gb2312freq.py
+++ /dev/null
@@ -1,284 +0,0 @@
-######################## BEGIN LICENSE BLOCK ########################
-# The Original Code is Mozilla Communicator client code.
-#
-# The Initial Developer of the Original Code is
-# Netscape Communications Corporation.
-# Portions created by the Initial Developer are Copyright (C) 1998
-# the Initial Developer. All Rights Reserved.
-#
-# Contributor(s):
-# Mark Pilgrim - port to Python
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
-# 02110-1301 USA
-######################### END LICENSE BLOCK #########################
-
-# GB2312 most frequently used character table
-#
-# Char to FreqOrder table, from hz6763
-
-# 512 --> 0.79 -- 0.79
-# 1024 --> 0.92 -- 0.13
-# 2048 --> 0.98 -- 0.06
-# 6768 --> 1.00 -- 0.02
-#
-# Ideal Distribution Ratio = 0.79135/(1-0.79135) = 3.79
-# Random Distribution Ratio = 512 / (3755 - 512) = 0.157
-#
-# Typical Distribution Ratio is about 25% of the ideal one, still much higher than RDR
-
-GB2312_TYPICAL_DISTRIBUTION_RATIO = 0.9
-
-GB2312_TABLE_SIZE = 3760
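-
-# Illustrative sketch (not chardet's actual API): given the frequency orders of
-# the GB2312 characters observed in a buffer, the share falling within the 512
-# most common characters approaches 0.79 for genuine GB2312 text, which is what
-# the typical distribution ratio above is measured against.
-#
-#   def common_char_share(freq_orders):
-#       common = sum(1 for order in freq_orders if 0 <= order < 512)
-#       return common / max(len(freq_orders), 1)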
-
-# fmt: off
-GB2312_CHAR_TO_FREQ_ORDER = (
-1671, 749,1443,2364,3924,3807,2330,3921,1704,3463,2691,1511,1515, 572,3191,2205,
-2361, 224,2558, 479,1711, 963,3162, 440,4060,1905,2966,2947,3580,2647,3961,3842,
-2204, 869,4207, 970,2678,5626,2944,2956,1479,4048, 514,3595, 588,1346,2820,3409,
- 249,4088,1746,1873,2047,1774, 581,1813, 358,1174,3590,1014,1561,4844,2245, 670,
-1636,3112, 889,1286, 953, 556,2327,3060,1290,3141, 613, 185,3477,1367, 850,3820,
-1715,2428,2642,2303,2732,3041,2562,2648,3566,3946,1349, 388,3098,2091,1360,3585,
- 152,1687,1539, 738,1559, 59,1232,2925,2267,1388,1249,1741,1679,2960, 151,1566,
-1125,1352,4271, 924,4296, 385,3166,4459, 310,1245,2850, 70,3285,2729,3534,3575,
-2398,3298,3466,1960,2265, 217,3647, 864,1909,2084,4401,2773,1010,3269,5152, 853,
-3051,3121,1244,4251,1895, 364,1499,1540,2313,1180,3655,2268, 562, 715,2417,3061,
- 544, 336,3768,2380,1752,4075, 950, 280,2425,4382, 183,2759,3272, 333,4297,2155,
-1688,2356,1444,1039,4540, 736,1177,3349,2443,2368,2144,2225, 565, 196,1482,3406,
- 927,1335,4147, 692, 878,1311,1653,3911,3622,1378,4200,1840,2969,3149,2126,1816,
-2534,1546,2393,2760, 737,2494, 13, 447, 245,2747, 38,2765,2129,2589,1079, 606,
- 360, 471,3755,2890, 404, 848, 699,1785,1236, 370,2221,1023,3746,2074,2026,2023,
-2388,1581,2119, 812,1141,3091,2536,1519, 804,2053, 406,1596,1090, 784, 548,4414,
-1806,2264,2936,1100, 343,4114,5096, 622,3358, 743,3668,1510,1626,5020,3567,2513,
-3195,4115,5627,2489,2991, 24,2065,2697,1087,2719, 48,1634, 315, 68, 985,2052,
- 198,2239,1347,1107,1439, 597,2366,2172, 871,3307, 919,2487,2790,1867, 236,2570,
-1413,3794, 906,3365,3381,1701,1982,1818,1524,2924,1205, 616,2586,2072,2004, 575,
- 253,3099, 32,1365,1182, 197,1714,2454,1201, 554,3388,3224,2748, 756,2587, 250,
-2567,1507,1517,3529,1922,2761,2337,3416,1961,1677,2452,2238,3153, 615, 911,1506,
-1474,2495,1265,1906,2749,3756,3280,2161, 898,2714,1759,3450,2243,2444, 563, 26,
-3286,2266,3769,3344,2707,3677, 611,1402, 531,1028,2871,4548,1375, 261,2948, 835,
-1190,4134, 353, 840,2684,1900,3082,1435,2109,1207,1674, 329,1872,2781,4055,2686,
-2104, 608,3318,2423,2957,2768,1108,3739,3512,3271,3985,2203,1771,3520,1418,2054,
-1681,1153, 225,1627,2929, 162,2050,2511,3687,1954, 124,1859,2431,1684,3032,2894,
- 585,4805,3969,2869,2704,2088,2032,2095,3656,2635,4362,2209, 256, 518,2042,2105,
-3777,3657, 643,2298,1148,1779, 190, 989,3544, 414, 11,2135,2063,2979,1471, 403,
-3678, 126, 770,1563, 671,2499,3216,2877, 600,1179, 307,2805,4937,1268,1297,2694,
- 252,4032,1448,1494,1331,1394, 127,2256, 222,1647,1035,1481,3056,1915,1048, 873,
-3651, 210, 33,1608,2516, 200,1520, 415, 102, 0,3389,1287, 817, 91,3299,2940,
- 836,1814, 549,2197,1396,1669,2987,3582,2297,2848,4528,1070, 687, 20,1819, 121,
-1552,1364,1461,1968,2617,3540,2824,2083, 177, 948,4938,2291, 110,4549,2066, 648,
-3359,1755,2110,2114,4642,4845,1693,3937,3308,1257,1869,2123, 208,1804,3159,2992,
-2531,2549,3361,2418,1350,2347,2800,2568,1291,2036,2680, 72, 842,1990, 212,1233,
-1154,1586, 75,2027,3410,4900,1823,1337,2710,2676, 728,2810,1522,3026,4995, 157,
- 755,1050,4022, 710, 785,1936,2194,2085,1406,2777,2400, 150,1250,4049,1206, 807,
-1910, 534, 529,3309,1721,1660, 274, 39,2827, 661,2670,1578, 925,3248,3815,1094,
-4278,4901,4252, 41,1150,3747,2572,2227,4501,3658,4902,3813,3357,3617,2884,2258,
- 887, 538,4187,3199,1294,2439,3042,2329,2343,2497,1255, 107, 543,1527, 521,3478,
-3568, 194,5062, 15, 961,3870,1241,1192,2664, 66,5215,3260,2111,1295,1127,2152,
-3805,4135, 901,1164,1976, 398,1278, 530,1460, 748, 904,1054,1966,1426, 53,2909,
- 509, 523,2279,1534, 536,1019, 239,1685, 460,2353, 673,1065,2401,3600,4298,2272,
-1272,2363, 284,1753,3679,4064,1695, 81, 815,2677,2757,2731,1386, 859, 500,4221,
-2190,2566, 757,1006,2519,2068,1166,1455, 337,2654,3203,1863,1682,1914,3025,1252,
-1409,1366, 847, 714,2834,2038,3209, 964,2970,1901, 885,2553,1078,1756,3049, 301,
-1572,3326, 688,2130,1996,2429,1805,1648,2930,3421,2750,3652,3088, 262,1158,1254,
- 389,1641,1812, 526,1719, 923,2073,1073,1902, 468, 489,4625,1140, 857,2375,3070,
-3319,2863, 380, 116,1328,2693,1161,2244, 273,1212,1884,2769,3011,1775,1142, 461,
-3066,1200,2147,2212, 790, 702,2695,4222,1601,1058, 434,2338,5153,3640, 67,2360,
-4099,2502, 618,3472,1329, 416,1132, 830,2782,1807,2653,3211,3510,1662, 192,2124,
- 296,3979,1739,1611,3684, 23, 118, 324, 446,1239,1225, 293,2520,3814,3795,2535,
-3116, 17,1074, 467,2692,2201, 387,2922, 45,1326,3055,1645,3659,2817, 958, 243,
-1903,2320,1339,2825,1784,3289, 356, 576, 865,2315,2381,3377,3916,1088,3122,1713,
-1655, 935, 628,4689,1034,1327, 441, 800, 720, 894,1979,2183,1528,5289,2702,1071,
-4046,3572,2399,1571,3281, 79, 761,1103, 327, 134, 758,1899,1371,1615, 879, 442,
- 215,2605,2579, 173,2048,2485,1057,2975,3317,1097,2253,3801,4263,1403,1650,2946,
- 814,4968,3487,1548,2644,1567,1285, 2, 295,2636, 97, 946,3576, 832, 141,4257,
-3273, 760,3821,3521,3156,2607, 949,1024,1733,1516,1803,1920,2125,2283,2665,3180,
-1501,2064,3560,2171,1592, 803,3518,1416, 732,3897,4258,1363,1362,2458, 119,1427,
- 602,1525,2608,1605,1639,3175, 694,3064, 10, 465, 76,2000,4846,4208, 444,3781,
-1619,3353,2206,1273,3796, 740,2483, 320,1723,2377,3660,2619,1359,1137,1762,1724,
-2345,2842,1850,1862, 912, 821,1866, 612,2625,1735,2573,3369,1093, 844, 89, 937,
- 930,1424,3564,2413,2972,1004,3046,3019,2011, 711,3171,1452,4178, 428, 801,1943,
- 432, 445,2811, 206,4136,1472, 730, 349, 73, 397,2802,2547, 998,1637,1167, 789,
- 396,3217, 154,1218, 716,1120,1780,2819,4826,1931,3334,3762,2139,1215,2627, 552,
-3664,3628,3232,1405,2383,3111,1356,2652,3577,3320,3101,1703, 640,1045,1370,1246,
-4996, 371,1575,2436,1621,2210, 984,4033,1734,2638, 16,4529, 663,2755,3255,1451,
-3917,2257,1253,1955,2234,1263,2951, 214,1229, 617, 485, 359,1831,1969, 473,2310,
- 750,2058, 165, 80,2864,2419, 361,4344,2416,2479,1134, 796,3726,1266,2943, 860,
-2715, 938, 390,2734,1313,1384, 248, 202, 877,1064,2854, 522,3907, 279,1602, 297,
-2357, 395,3740, 137,2075, 944,4089,2584,1267,3802, 62,1533,2285, 178, 176, 780,
-2440, 201,3707, 590, 478,1560,4354,2117,1075, 30, 74,4643,4004,1635,1441,2745,
- 776,2596, 238,1077,1692,1912,2844, 605, 499,1742,3947, 241,3053, 980,1749, 936,
-2640,4511,2582, 515,1543,2162,5322,2892,2993, 890,2148,1924, 665,1827,3581,1032,
- 968,3163, 339,1044,1896, 270, 583,1791,1720,4367,1194,3488,3669, 43,2523,1657,
- 163,2167, 290,1209,1622,3378, 550, 634,2508,2510, 695,2634,2384,2512,1476,1414,
- 220,1469,2341,2138,2852,3183,2900,4939,2865,3502,1211,3680, 854,3227,1299,2976,
-3172, 186,2998,1459, 443,1067,3251,1495, 321,1932,3054, 909, 753,1410,1828, 436,
-2441,1119,1587,3164,2186,1258, 227, 231,1425,1890,3200,3942, 247, 959, 725,5254,
-2741, 577,2158,2079, 929, 120, 174, 838,2813, 591,1115, 417,2024, 40,3240,1536,
-1037, 291,4151,2354, 632,1298,2406,2500,3535,1825,1846,3451, 205,1171, 345,4238,
- 18,1163, 811, 685,2208,1217, 425,1312,1508,1175,4308,2552,1033, 587,1381,3059,
-2984,3482, 340,1316,4023,3972, 792,3176, 519, 777,4690, 918, 933,4130,2981,3741,
- 90,3360,2911,2200,5184,4550, 609,3079,2030, 272,3379,2736, 363,3881,1130,1447,
- 286, 779, 357,1169,3350,3137,1630,1220,2687,2391, 747,1277,3688,2618,2682,2601,
-1156,3196,5290,4034,3102,1689,3596,3128, 874, 219,2783, 798, 508,1843,2461, 269,
-1658,1776,1392,1913,2983,3287,2866,2159,2372, 829,4076, 46,4253,2873,1889,1894,
- 915,1834,1631,2181,2318, 298, 664,2818,3555,2735, 954,3228,3117, 527,3511,2173,
- 681,2712,3033,2247,2346,3467,1652, 155,2164,3382, 113,1994, 450, 899, 494, 994,
-1237,2958,1875,2336,1926,3727, 545,1577,1550, 633,3473, 204,1305,3072,2410,1956,
-2471, 707,2134, 841,2195,2196,2663,3843,1026,4940, 990,3252,4997, 368,1092, 437,
-3212,3258,1933,1829, 675,2977,2893, 412, 943,3723,4644,3294,3283,2230,2373,5154,
-2389,2241,2661,2323,1404,2524, 593, 787, 677,3008,1275,2059, 438,2709,2609,2240,
-2269,2246,1446, 36,1568,1373,3892,1574,2301,1456,3962, 693,2276,5216,2035,1143,
-2720,1919,1797,1811,2763,4137,2597,1830,1699,1488,1198,2090, 424,1694, 312,3634,
-3390,4179,3335,2252,1214, 561,1059,3243,2295,2561, 975,5155,2321,2751,3772, 472,
-1537,3282,3398,1047,2077,2348,2878,1323,3340,3076, 690,2906, 51, 369, 170,3541,
-1060,2187,2688,3670,2541,1083,1683, 928,3918, 459, 109,4427, 599,3744,4286, 143,
-2101,2730,2490, 82,1588,3036,2121, 281,1860, 477,4035,1238,2812,3020,2716,3312,
-1530,2188,2055,1317, 843, 636,1808,1173,3495, 649, 181,1002, 147,3641,1159,2414,
-3750,2289,2795, 813,3123,2610,1136,4368, 5,3391,4541,2174, 420, 429,1728, 754,
-1228,2115,2219, 347,2223,2733, 735,1518,3003,2355,3134,1764,3948,3329,1888,2424,
-1001,1234,1972,3321,3363,1672,1021,1450,1584, 226, 765, 655,2526,3404,3244,2302,
-3665, 731, 594,2184, 319,1576, 621, 658,2656,4299,2099,3864,1279,2071,2598,2739,
- 795,3086,3699,3908,1707,2352,2402,1382,3136,2475,1465,4847,3496,3865,1085,3004,
-2591,1084, 213,2287,1963,3565,2250, 822, 793,4574,3187,1772,1789,3050, 595,1484,
-1959,2770,1080,2650, 456, 422,2996, 940,3322,4328,4345,3092,2742, 965,2784, 739,
-4124, 952,1358,2498,2949,2565, 332,2698,2378, 660,2260,2473,4194,3856,2919, 535,
-1260,2651,1208,1428,1300,1949,1303,2942, 433,2455,2450,1251,1946, 614,1269, 641,
-1306,1810,2737,3078,2912, 564,2365,1419,1415,1497,4460,2367,2185,1379,3005,1307,
-3218,2175,1897,3063, 682,1157,4040,4005,1712,1160,1941,1399, 394, 402,2952,1573,
-1151,2986,2404, 862, 299,2033,1489,3006, 346, 171,2886,3401,1726,2932, 168,2533,
- 47,2507,1030,3735,1145,3370,1395,1318,1579,3609,4560,2857,4116,1457,2529,1965,
- 504,1036,2690,2988,2405, 745,5871, 849,2397,2056,3081, 863,2359,3857,2096, 99,
-1397,1769,2300,4428,1643,3455,1978,1757,3718,1440, 35,4879,3742,1296,4228,2280,
- 160,5063,1599,2013, 166, 520,3479,1646,3345,3012, 490,1937,1545,1264,2182,2505,
-1096,1188,1369,1436,2421,1667,2792,2460,1270,2122, 727,3167,2143, 806,1706,1012,
-1800,3037, 960,2218,1882, 805, 139,2456,1139,1521, 851,1052,3093,3089, 342,2039,
- 744,5097,1468,1502,1585,2087, 223, 939, 326,2140,2577, 892,2481,1623,4077, 982,
-3708, 135,2131, 87,2503,3114,2326,1106, 876,1616, 547,2997,2831,2093,3441,4530,
-4314, 9,3256,4229,4148, 659,1462,1986,1710,2046,2913,2231,4090,4880,5255,3392,
-3274,1368,3689,4645,1477, 705,3384,3635,1068,1529,2941,1458,3782,1509, 100,1656,
-2548, 718,2339, 408,1590,2780,3548,1838,4117,3719,1345,3530, 717,3442,2778,3220,
-2898,1892,4590,3614,3371,2043,1998,1224,3483, 891, 635, 584,2559,3355, 733,1766,
-1729,1172,3789,1891,2307, 781,2982,2271,1957,1580,5773,2633,2005,4195,3097,1535,
-3213,1189,1934,5693,3262, 586,3118,1324,1598, 517,1564,2217,1868,1893,4445,3728,
-2703,3139,1526,1787,1992,3882,2875,1549,1199,1056,2224,1904,2711,5098,4287, 338,
-1993,3129,3489,2689,1809,2815,1997, 957,1855,3898,2550,3275,3057,1105,1319, 627,
-1505,1911,1883,3526, 698,3629,3456,1833,1431, 746, 77,1261,2017,2296,1977,1885,
- 125,1334,1600, 525,1798,1109,2222,1470,1945, 559,2236,1186,3443,2476,1929,1411,
-2411,3135,1777,3372,2621,1841,1613,3229, 668,1430,1839,2643,2916, 195,1989,2671,
-2358,1387, 629,3205,2293,5256,4439, 123,1310, 888,1879,4300,3021,3605,1003,1162,
-3192,2910,2010, 140,2395,2859, 55,1082,2012,2901, 662, 419,2081,1438, 680,2774,
-4654,3912,1620,1731,1625,5035,4065,2328, 512,1344, 802,5443,2163,2311,2537, 524,
-3399, 98,1155,2103,1918,2606,3925,2816,1393,2465,1504,3773,2177,3963,1478,4346,
- 180,1113,4655,3461,2028,1698, 833,2696,1235,1322,1594,4408,3623,3013,3225,2040,
-3022, 541,2881, 607,3632,2029,1665,1219, 639,1385,1686,1099,2803,3231,1938,3188,
-2858, 427, 676,2772,1168,2025, 454,3253,2486,3556, 230,1950, 580, 791,1991,1280,
-1086,1974,2034, 630, 257,3338,2788,4903,1017, 86,4790, 966,2789,1995,1696,1131,
- 259,3095,4188,1308, 179,1463,5257, 289,4107,1248, 42,3413,1725,2288, 896,1947,
- 774,4474,4254, 604,3430,4264, 392,2514,2588, 452, 237,1408,3018, 988,4531,1970,
-3034,3310, 540,2370,1562,1288,2990, 502,4765,1147, 4,1853,2708, 207, 294,2814,
-4078,2902,2509, 684, 34,3105,3532,2551, 644, 709,2801,2344, 573,1727,3573,3557,
-2021,1081,3100,4315,2100,3681, 199,2263,1837,2385, 146,3484,1195,2776,3949, 997,
-1939,3973,1008,1091,1202,1962,1847,1149,4209,5444,1076, 493, 117,5400,2521, 972,
-1490,2934,1796,4542,2374,1512,2933,2657, 413,2888,1135,2762,2314,2156,1355,2369,
- 766,2007,2527,2170,3124,2491,2593,2632,4757,2437, 234,3125,3591,1898,1750,1376,
-1942,3468,3138, 570,2127,2145,3276,4131, 962, 132,1445,4196, 19, 941,3624,3480,
-3366,1973,1374,4461,3431,2629, 283,2415,2275, 808,2887,3620,2112,2563,1353,3610,
- 955,1089,3103,1053, 96, 88,4097, 823,3808,1583, 399, 292,4091,3313, 421,1128,
- 642,4006, 903,2539,1877,2082, 596, 29,4066,1790, 722,2157, 130, 995,1569, 769,
-1485, 464, 513,2213, 288,1923,1101,2453,4316, 133, 486,2445, 50, 625, 487,2207,
- 57, 423, 481,2962, 159,3729,1558, 491, 303, 482, 501, 240,2837, 112,3648,2392,
-1783, 362, 8,3433,3422, 610,2793,3277,1390,1284,1654, 21,3823, 734, 367, 623,
- 193, 287, 374,1009,1483, 816, 476, 313,2255,2340,1262,2150,2899,1146,2581, 782,
-2116,1659,2018,1880, 255,3586,3314,1110,2867,2137,2564, 986,2767,5185,2006, 650,
- 158, 926, 762, 881,3157,2717,2362,3587, 306,3690,3245,1542,3077,2427,1691,2478,
-2118,2985,3490,2438, 539,2305, 983, 129,1754, 355,4201,2386, 827,2923, 104,1773,
-2838,2771, 411,2905,3919, 376, 767, 122,1114, 828,2422,1817,3506, 266,3460,1007,
-1609,4998, 945,2612,4429,2274, 726,1247,1964,2914,2199,2070,4002,4108, 657,3323,
-1422, 579, 455,2764,4737,1222,2895,1670, 824,1223,1487,2525, 558, 861,3080, 598,
-2659,2515,1967, 752,2583,2376,2214,4180, 977, 704,2464,4999,2622,4109,1210,2961,
- 819,1541, 142,2284, 44, 418, 457,1126,3730,4347,4626,1644,1876,3671,1864, 302,
-1063,5694, 624, 723,1984,3745,1314,1676,2488,1610,1449,3558,3569,2166,2098, 409,
-1011,2325,3704,2306, 818,1732,1383,1824,1844,3757, 999,2705,3497,1216,1423,2683,
-2426,2954,2501,2726,2229,1475,2554,5064,1971,1794,1666,2014,1343, 783, 724, 191,
-2434,1354,2220,5065,1763,2752,2472,4152, 131, 175,2885,3434, 92,1466,4920,2616,
-3871,3872,3866, 128,1551,1632, 669,1854,3682,4691,4125,1230, 188,2973,3290,1302,
-1213, 560,3266, 917, 763,3909,3249,1760, 868,1958, 764,1782,2097, 145,2277,3774,
-4462, 64,1491,3062, 971,2132,3606,2442, 221,1226,1617, 218, 323,1185,3207,3147,
- 571, 619,1473,1005,1744,2281, 449,1887,2396,3685, 275, 375,3816,1743,3844,3731,
- 845,1983,2350,4210,1377, 773, 967,3499,3052,3743,2725,4007,1697,1022,3943,1464,
-3264,2855,2722,1952,1029,2839,2467, 84,4383,2215, 820,1391,2015,2448,3672, 377,
-1948,2168, 797,2545,3536,2578,2645, 94,2874,1678, 405,1259,3071, 771, 546,1315,
- 470,1243,3083, 895,2468, 981, 969,2037, 846,4181, 653,1276,2928, 14,2594, 557,
-3007,2474, 156, 902,1338,1740,2574, 537,2518, 973,2282,2216,2433,1928, 138,2903,
-1293,2631,1612, 646,3457, 839,2935, 111, 496,2191,2847, 589,3186, 149,3994,2060,
-4031,2641,4067,3145,1870, 37,3597,2136,1025,2051,3009,3383,3549,1121,1016,3261,
-1301, 251,2446,2599,2153, 872,3246, 637, 334,3705, 831, 884, 921,3065,3140,4092,
-2198,1944, 246,2964, 108,2045,1152,1921,2308,1031, 203,3173,4170,1907,3890, 810,
-1401,2003,1690, 506, 647,1242,2828,1761,1649,3208,2249,1589,3709,2931,5156,1708,
- 498, 666,2613, 834,3817,1231, 184,2851,1124, 883,3197,2261,3710,1765,1553,2658,
-1178,2639,2351, 93,1193, 942,2538,2141,4402, 235,1821, 870,1591,2192,1709,1871,
-3341,1618,4126,2595,2334, 603, 651, 69, 701, 268,2662,3411,2555,1380,1606, 503,
- 448, 254,2371,2646, 574,1187,2309,1770, 322,2235,1292,1801, 305, 566,1133, 229,
-2067,2057, 706, 167, 483,2002,2672,3295,1820,3561,3067, 316, 378,2746,3452,1112,
- 136,1981, 507,1651,2917,1117, 285,4591, 182,2580,3522,1304, 335,3303,1835,2504,
-1795,1792,2248, 674,1018,2106,2449,1857,2292,2845, 976,3047,1781,2600,2727,1389,
-1281, 52,3152, 153, 265,3950, 672,3485,3951,4463, 430,1183, 365, 278,2169, 27,
-1407,1336,2304, 209,1340,1730,2202,1852,2403,2883, 979,1737,1062, 631,2829,2542,
-3876,2592, 825,2086,2226,3048,3625, 352,1417,3724, 542, 991, 431,1351,3938,1861,
-2294, 826,1361,2927,3142,3503,1738, 463,2462,2723, 582,1916,1595,2808, 400,3845,
-3891,2868,3621,2254, 58,2492,1123, 910,2160,2614,1372,1603,1196,1072,3385,1700,
-3267,1980, 696, 480,2430, 920, 799,1570,2920,1951,2041,4047,2540,1321,4223,2469,
-3562,2228,1271,2602, 401,2833,3351,2575,5157, 907,2312,1256, 410, 263,3507,1582,
- 996, 678,1849,2316,1480, 908,3545,2237, 703,2322, 667,1826,2849,1531,2604,2999,
-2407,3146,2151,2630,1786,3711, 469,3542, 497,3899,2409, 858, 837,4446,3393,1274,
- 786, 620,1845,2001,3311, 484, 308,3367,1204,1815,3691,2332,1532,2557,1842,2020,
-2724,1927,2333,4440, 567, 22,1673,2728,4475,1987,1858,1144,1597, 101,1832,3601,
- 12, 974,3783,4391, 951,1412, 1,3720, 453,4608,4041, 528,1041,1027,3230,2628,
-1129, 875,1051,3291,1203,2262,1069,2860,2799,2149,2615,3278, 144,1758,3040, 31,
- 475,1680, 366,2685,3184, 311,1642,4008,2466,5036,1593,1493,2809, 216,1420,1668,
- 233, 304,2128,3284, 232,1429,1768,1040,2008,3407,2740,2967,2543, 242,2133, 778,
-1565,2022,2620, 505,2189,2756,1098,2273, 372,1614, 708, 553,2846,2094,2278, 169,
-3626,2835,4161, 228,2674,3165, 809,1454,1309, 466,1705,1095, 900,3423, 880,2667,
-3751,5258,2317,3109,2571,4317,2766,1503,1342, 866,4447,1118, 63,2076, 314,1881,
-1348,1061, 172, 978,3515,1747, 532, 511,3970, 6, 601, 905,2699,3300,1751, 276,
-1467,3725,2668, 65,4239,2544,2779,2556,1604, 578,2451,1802, 992,2331,2624,1320,
-3446, 713,1513,1013, 103,2786,2447,1661, 886,1702, 916, 654,3574,2031,1556, 751,
-2178,2821,2179,1498,1538,2176, 271, 914,2251,2080,1325, 638,1953,2937,3877,2432,
-2754, 95,3265,1716, 260,1227,4083, 775, 106,1357,3254, 426,1607, 555,2480, 772,
-1985, 244,2546, 474, 495,1046,2611,1851,2061, 71,2089,1675,2590, 742,3758,2843,
-3222,1433, 267,2180,2576,2826,2233,2092,3913,2435, 956,1745,3075, 856,2113,1116,
- 451, 3,1988,2896,1398, 993,2463,1878,2049,1341,2718,2721,2870,2108, 712,2904,
-4363,2753,2324, 277,2872,2349,2649, 384, 987, 435, 691,3000, 922, 164,3939, 652,
-1500,1184,4153,2482,3373,2165,4848,2335,3775,3508,3154,2806,2830,1554,2102,1664,
-2530,1434,2408, 893,1547,2623,3447,2832,2242,2532,3169,2856,3223,2078, 49,3770,
-3469, 462, 318, 656,2259,3250,3069, 679,1629,2758, 344,1138,1104,3120,1836,1283,
-3115,2154,1437,4448, 934, 759,1999, 794,2862,1038, 533,2560,1722,2342, 855,2626,
-1197,1663,4476,3127, 85,4240,2528, 25,1111,1181,3673, 407,3470,4561,2679,2713,
- 768,1925,2841,3986,1544,1165, 932, 373,1240,2146,1930,2673, 721,4766, 354,4333,
- 391,2963, 187, 61,3364,1442,1102, 330,1940,1767, 341,3809,4118, 393,2496,2062,
-2211, 105, 331, 300, 439, 913,1332, 626, 379,3304,1557, 328, 689,3952, 309,1555,
- 931, 317,2517,3027, 325, 569, 686,2107,3084, 60,1042,1333,2794, 264,3177,4014,
-1628, 258,3712, 7,4464,1176,1043,1778, 683, 114,1975, 78,1492, 383,1886, 510,
- 386, 645,5291,2891,2069,3305,4138,3867,2939,2603,2493,1935,1066,1848,3588,1015,
-1282,1289,4609, 697,1453,3044,2666,3611,1856,2412, 54, 719,1330, 568,3778,2459,
-1748, 788, 492, 551,1191,1000, 488,3394,3763, 282,1799, 348,2016,1523,3155,2390,
-1049, 382,2019,1788,1170, 729,2968,3523, 897,3926,2785,2938,3292, 350,2319,3238,
-1718,1717,2655,3453,3143,4465, 161,2889,2980,2009,1421, 56,1908,1640,2387,2232,
-1917,1874,2477,4921, 148, 83,3438, 592,4245,2882,1822,1055, 741, 115,1496,1624,
- 381,1638,4592,1020, 516,3214, 458, 947,4575,1432, 211,1514,2926,1865,2142, 189,
- 852,1221,1400,1486, 882,2299,4036, 351, 28,1122, 700,6479,6480,6481,6482,6483, #last 512
-)
-# fmt: on
diff --git a/spaces/Atualli/yoloxTeste/configs/__init__.py b/spaces/Atualli/yoloxTeste/configs/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Benson/text-generation/Examples/Arena Breakout Beta Global Descargar.md b/spaces/Benson/text-generation/Examples/Arena Breakout Beta Global Descargar.md
deleted file mode 100644
index 77d38db818ebcafa265bf1e1e2ec9a06866efa1a..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Arena Breakout Beta Global Descargar.md
+++ /dev/null
@@ -1,73 +0,0 @@
-
-
Arena Breakout Global Beta Descargar: Cómo unirse al FPS táctico inmersivo de próxima generación en móviles
-
Si usted está buscando un nuevo y emocionante juego de disparos que desafía sus habilidades y recompensa sus riesgos, es posible que desee echar un vistazo a Arena Breakout. Este juego es un FPS táctico inmersivo de próxima generación que empuja los límites de la simulación de guerra en el móvil. También es el primer shooter de extracción de saqueadores que te permite disparar, saquear y escapar para ganar.
-
En este artículo, le diremos todo lo que necesita saber sobre Arena Breakout, cómo descargar y jugar la versión beta global, y lo que es nuevo en la última actualización. ¡Vamos a empezar!
Arena Breakout es un juego desarrollado por Level Infinite, un estudio que tiene como objetivo crear juegos innovadores e inmersivos para dispositivos móviles. Arena Breakout es su título insignia, y ha estado en desarrollo durante más de dos años. El juego ha sido elogiado por jugadores y críticos por sus gráficos realistas, efectos de sonido y mecánica de juego.
-
Un nuevo tipo de juego de disparos
-
Arena Breakout no es el típico juego de disparos. Es un juego que combina elementos de FPS tácticos, battle royale y géneros de disparos de saqueo. El juego tiene dos modos: solitario y escuadrón. En el modo solitario, juegas como un lobo solitario que tiene que sobrevivir contra otros jugadores y enemigos de la IA. En el modo de escuadrón, haces equipo con hasta otros tres jugadores y cooperas para eliminar la competencia.
-
El juego también tiene una característica única llamada breakout. Breakout es la única manera de ganar el juego. Tienes que escapar de la zona de combate vivo con su botín antes de que acabe el tiempo. Si mueres o no te escapas, pierdes todo lo que has recogido en el partido. Esto añade una capa de tensión y estrategia al juego, ya que tienes que decidir cuándo luchar, cuándo saquear y cuándo correr.
-
Una experiencia realista e inmersiva
-
-
El juego también tiene un sistema de disparos realista que simula la física y la mecánica de las armas reales. Usted tiene que manejar su retroceso, recargar sus revistas, parchear sus heridas, y utilizar la cubierta y el movimiento sabiamente. El juego también tiene diferentes condiciones climáticas, ciclos de día y de noche, y entornos destructibles que afectan a su juego.
-
Un sistema de alto riesgo y alta recompensa
-
Arena Breakout es un juego que recompensa tus riesgos con altas recompensas. El juego tiene un sistema de botín que te permite recoger valiosas armas, accesorios y suministros de la arena. También puedes saquear los cadáveres o las cajas de otros jugadores para obtener más botín. El botín que recojas se puede usar en la partida o guardar para su uso posterior.
-
El juego también tiene un sistema de divisas que le permite comprar o vender artículos en el mercado. Puedes usar la moneda para comprar mejores equipos o cosméticos para tu personaje. Sin embargo, usted tiene que tener cuidado con su dinero, ya que puede perderlo todo si usted muere o no escapa. El juego también tiene un sistema de clasificación que rastrea tu rendimiento y progreso en el juego.
-
-
¿Cómo descargar y jugar Arena Breakout beta global?
-
Si estás interesado en jugar la beta global de Arena Breakout, aquí hay algunas cosas que necesitas saber:
- Requisitos y disponibilidad
-
Arena Breakout beta global está disponible actualmente solo para dispositivos Android. Necesitas tener un dispositivo Android con al menos 4 GB de RAM y Android 8.0 o superior para jugar el juego. El juego también requiere una conexión a Internet estable y aproximadamente 2 GB de espacio de almacenamiento.
-
-
Pasos para descargar e instalar
-
Una vez que tenga un código de invitación beta, puede seguir estos pasos para descargar e instalar Arena Breakout beta global en su dispositivo Android:
-
-
Ir a la página web oficial de Arena Breakout y haga clic en el botón de descarga. Serás redirigido a una página donde podrás introducir tu código de invitación beta y tu dirección de correo electrónico. Después de verificar su código y correo electrónico, recibirá un enlace de descarga para el juego.
-
Alternativamente, puede ir a una fuente de terceros que proporciona el enlace de descarga para el juego, como APKPure o TapTap. Sin embargo, asegúrate de descargar el juego desde una fuente confiable y segura, ya que algunas fuentes pueden contener malware o virus.
-
Después de descargar el juego, es necesario habilitar la instalación de aplicaciones de fuentes desconocidas en el dispositivo. Para hacer esto, vaya a la configuración del dispositivo, luego a la seguridad, luego a fuentes desconocidas y conéctela.
-
Luego, busque el archivo descargado en su dispositivo y toque en él para instalarlo. Es posible que necesite conceder algunos permisos para que el juego se ejecute correctamente.
-
Después de instalar el juego, ejecútelo e ingrese su código de invitación beta nuevamente para iniciar sesión. También es posible que necesite crear una cuenta o vincular su cuenta de redes sociales para jugar el juego.
-
Disfruta jugando Arena Breakout beta global!
-
-
Consejos y trucos para principiantes
-
Si eres nuevo en Arena Breakout, aquí hay algunos consejos y trucos que pueden ayudarte a mejorar tus habilidades y ganar más partidos:
-
-
Aprende los conceptos básicos del juego, como cómo mover, apuntar, disparar, recargar, sanar, saquear y escapar. Puedes practicar estas habilidades en el modo de entrenamiento o en modo individual antes de unirte al modo escuadrón.
-
-
Usa la cubierta y el movimiento sabiamente. Puedes usar paredes, edificios, vehículos y otros objetos como cobertura del fuego enemigo. También puedes usar diferentes movimientos, como agacharte, inclinarte, deslizarte, saltar y rodar para esquivar balas y sorprender a tus enemigos.
-
Saquea inteligente y estratégicamente. Puedes saquear armas, accesorios y suministros de cajas, cadáveres o edificios en la arena. Sin embargo, ten cuidado de no exponerte demasiado mientras saqueas, ya que puedes atraer la atención no deseada de otros jugadores o enemigos de la IA. También, sea selectivo acerca de lo que saquea, ya que tiene espacio de inventario limitado y capacidad de peso.
-
Fuga en el momento adecuado. Fuga es la única manera de ganar el juego, pero también es arriesgado. Tienes que escapar de la zona de combate vivo con su botín antes de que acabe el tiempo. Sin embargo, también tienes que tener cuidado con otros jugadores que pueden intentar detenerte o robar tu botín. Por lo tanto, usted tiene que elegir cuándo romper cuidadosamente basado en su situación y estrategia.
-
-
¿Qué hay de nuevo en la actualización beta global?
-
Arena Breakout beta global se ha actualizado con nuevas características y mejoras que hacen que el juego sea más divertido y atractivo. Estos son algunos de los aspectos más destacados de la actualización:
-
Personajes femeninos y opciones de personalización
-
Ahora puedes elegir entre personajes masculinos y femeninos en Arena Breakout. También puedes personalizar la apariencia de tu personaje con diferentes peinados, tonos de piel, caras, trajes, accesorios y más. También puedes desbloquear más opciones de personalización completando misiones o comprándolas con moneda.
-
En el partido matar cam y equipamiento rápido característica
-
Ahora puedes ver cómo moriste o cómo mataste a alguien en el partido con la función kill cam. La cámara mortal te muestra una repetición de los últimos momentos de tu vida o la vida de tu enemigo desde su perspectiva. Puedes usar esta función para aprender de tus errores o para disfrutar de tus victorias.
-
-
Sistema de préstamo de equipos y de invitación de amigos
-
Ahora puede prestar su equipo a sus compañeros de escuadra o pedir prestado equipo de ellos en el partido con el sistema de préstamo de equipos. El sistema de préstamo de equipos le permite compartir sus armas, accesorios y suministros con los miembros de su equipo para ayudarlos o para optimizar su carga. También puede solicitar u ofrecer equipos a sus compañeros de escuadra con un simple toque.
-
También puedes invitar a tus amigos a jugar contigo en Arena Breakout con el sistema de invitación de amigos. El sistema de invitación de amigos te permite enviar o recibir invitaciones para unirte a un equipo con tus amigos u otros jugadores. También puedes chatear con tus amigos o compañeros de equipo en el lobby del juego o en el partido.
-
Sala de trofeos y soporte de idiomas
-
Ahora puedes mostrar tus logros y progreso en Arena Breakout con la función de sala de trofeos. La función de sala de trofeos le permite mostrar sus trofeos, medallas, insignias y estadísticas en una sala virtual que puede personalizar y decorar. También puede visitar las salas de trofeos de otros jugadores y comparar su rendimiento con ellos.
-
También puede jugar Arena Breakout en diferentes idiomas con la función de soporte de idioma. El juego actualmente es compatible con los idiomas inglés, chino, español, portugués, ruso, turco, árabe e indonesio. Puede cambiar el idioma del juego en el menú de configuración.
-
Conclusión
-
Arena Breakout es un juego que ofrece una nueva y emocionante manera de jugar juegos de disparos en dispositivos móviles. Es un juego que combina FPS tácticos, battle royale y elementos de disparos de saqueo en una experiencia realista e inmersiva. También es un juego que desafía tus habilidades y recompensa tus riesgos con un sistema de alto riesgo y alta recompensa.
-
-
Arena Breakout es un juego que vale la pena probar si estás buscando un FPS táctico de próxima generación en dispositivos móviles. Es un juego que te mantendrá al borde de tu asiento mientras disparas, saqueas y rompes para ganar.
-
Preguntas frecuentes
-
-
¿Qué es Arena Breakout?
-
Arena Breakout es un FPS táctico inmersivo de próxima generación que empuja los límites de la simulación de guerra en dispositivos móviles. También es el primer shooter de extracción de saqueadores que te permite disparar, saquear y escapar para ganar.
-
¿Cómo descargar y jugar Arena Breakout beta global?
-
Necesitas tener un dispositivo Android con al menos 4 GB de RAM y Android 8.0 o superior, una conexión a Internet estable y un código de invitación beta. Puedes descargar el juego desde el sitio web oficial o desde fuentes de terceros, y seguir los pasos para instalarlo y jugarlo.
-
¿Qué hay de nuevo en la actualización beta global?
-
La actualización beta global ha añadido nuevas características y mejoras, tales como personajes femeninos y opciones de personalización, cámara asesina en el partido y función de equipamiento rápido, sistema de préstamo de equipos e invitación a amigos, sala de trofeos y soporte de idioma, y más.
-
¿Cómo obtener un código de invitación beta?
-
Puedes obtener un código de invitación beta siguiendo las cuentas de redes sociales oficiales de Arena Breakout o uniéndote al servidor oficial de Discord del juego. También puedes obtener un código de invitación beta participando en sorteos o eventos organizados por los desarrolladores o influencers.
-
¿Cómo romper en Arena Breakout?
-
Breakout es la única manera de ganar el juego. Tienes que escapar de la zona de combate con tu botín antes de que se acabe el tiempo. Sin embargo, también tienes que tener cuidado con otros jugadores que pueden intentar detenerte o robar tu botín. Por lo tanto, usted tiene que elegir cuándo romper cuidadosamente basado en su situación y estrategia.
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Caso Penal Pacfico Baha Mod Men Apk.md b/spaces/Benson/text-generation/Examples/Caso Penal Pacfico Baha Mod Men Apk.md
deleted file mode 100644
index fc72dcd8a7cc1d91604aa005857df4c5a3c73678..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Caso Penal Pacfico Baha Mod Men Apk.md
+++ /dev/null
@@ -1,116 +0,0 @@
-
-
Caso Penal: Pacific Bay Mod Menu APK - Una guía para los solucionadores de delitos
-
¿Te encanta jugar juegos de detectives? ¿Te gusta encontrar pistas, interrogar sospechosos y resolver misterios? Si es así, entonces es posible que haya oído hablar de Criminal Case: Pacific Bay, uno de los juegos de objetos ocultos más populares en Android. Pero ¿sabías que hay una manera de hacer este juego aún más divertido y emocionante? Sí, estamos hablando de Caso Penal: Pacific Bay Mod Menu APK, una herramienta de hackeo que le da recursos ilimitados, compras gratis, sin anuncios, y más. En este artículo, le diremos todo lo que necesita saber acerca de este menú mod apk, incluyendo lo que es, cómo descargarlo e instalarlo, cómo usarlo, y cuáles son sus pros y sus contras. Así que, vamos a empezar!
Criminal Case: Pacific Bay es un juego de objetos ocultos desarrollado por Pretty Simple, un estudio francés especializado en juegos casuales. Es la segunda temporada de la serie Criminal Case, que tiene más de 100 millones de descargas en Google Play. En este juego, juegas como un detective que trabaja para el Departamento de Policía de Pacific Bay. Su trabajo es investigar varias escenas del crimen, encontrar pistas, analizar pruebas, interrogar sospechosos y arrestar a los asesinos. También puedes hacer equipo con otros jugadores en línea y competir por las mejores puntuaciones.
-
Una aventura emocionante en la Bahía del Pacífico
-
-
Una experiencia desafiante y gratificante
-
Criminal Case: Pacific Bay no es un juego fácil. Tendrás que usar tus habilidades de observación, deducción y lógica para resolver los puzzles y encontrar a los culpables. También tendrás que administrar tu tiempo y energía sabiamente, ya que son recursos limitados en el juego. Tendrás que ganar estrellas completando tareas en cada escena del crimen. Puedes usar estas estrellas para desbloquear nuevas escenas, comprar objetos o realizar acciones. También tendrás que recoger monedas y dinero en efectivo jugando minijuegos o viendo anuncios. Puedes usar estas monedas para personalizar tu avatar, comprar potenciadores o acceder a funciones premium. También tendrás que subir de nivel ganando puntos de experiencia (XP) y posicionarte ganando medallas. También tendrás que desbloquear logros y trofeos completando ciertos objetivos.
-
¿Qué es el caso penal: Pacific Bay Mod Menu APK?
-
Una versión modificada del juego
-
Caso Penal: Pacific Bay Mod Menu APK es una versión modificada del juego original que incluye algunas características de hackeo para ayudar a los jugadores a superar fácilmente los niveles más difíciles. No es una aplicación oficial de Pretty Simple, sino una aplicación de terceros creada por algunos fans o desarrolladores que quieren mejorar la experiencia de juego. No está disponible en Google Play , pero se puede descargar desde algunos sitios web que ofrecen aplicaciones y juegos modificados. Sin embargo, debe tener cuidado al descargar e instalar dichas aplicaciones, ya que pueden contener virus, malware o spyware que pueden dañar su dispositivo o robar su información personal.
-
-
Una herramienta de hackeo para recursos ilimitados
-
-
Una forma de disfrutar del juego sin anuncios
-
Caso Penal: Pacific Bay Mod Menu APK es una manera de disfrutar del juego sin anuncios. Los anuncios son molestos y distraen, especialmente cuando aparecen en medio del juego o cuando estás viendo un video. También consumen sus datos y la batería. Con este menú mod apk, puede eliminar todos los anuncios del juego y jugar sin ninguna interrupción. También puedes evitar ver anuncios para ganar monedas o dinero en el juego.
-
Cómo descargar e instalar Caso Penal: Pacific Bay Mod Menu APK?
-
Los requisitos y precauciones
-
Antes de descargar e instalar Caso Penal: Pacific Bay Mod Menu APK, es necesario asegurarse de que su dispositivo cumple con los siguientes requisitos y precauciones:
-
-
Tu dispositivo debe tener Android 4.1 o una versión superior.
-
El dispositivo debe tener suficiente espacio de almacenamiento para instalar la aplicación.
-
El dispositivo debe tener una conexión a Internet estable para descargar la aplicación.
-
Debe habilitar la instalación de aplicaciones de fuentes desconocidas en la configuración del dispositivo.
-
Debe desinstalar la versión original de Criminal Case: Pacific Bay desde su dispositivo.
-
Debe hacer una copia de seguridad de los datos del juego antes de instalar el menú mod apk.
-
Debe ser consciente de los riesgos de usar aplicaciones y juegos modificados, como prohibir, bloquear o perder su cuenta.
-
-
Los pasos a seguir
-
Después de haber comprobado los requisitos y precauciones, puede seguir estos pasos para descargar e instalar Caso Penal: Pacific Bay Mod Menu APK:
-
-
Ir a un sitio web que ofrece Caso Penal: Pacific Bay Mod Menu APK, tales como [APKPure], [APKDone], o [ModDroid].
-
Encontrar y descargar la última versión de Caso Penal: Pacific Bay Mod Menu APK en su dispositivo.
-
Localice y toque en el archivo descargado para iniciar el proceso de instalación.
-
Siga las instrucciones en la pantalla para completar la instalación.
-
-
Los beneficios y desventajas
-
Caso Penal: Pacific Bay Mod Menu APK tiene algunos beneficios y desventajas que usted debe considerar antes de usarlo. Estos son algunos de ellos:
-
-
-
Beneficios
-
Inconvenientes
-
-
-
Puedes disfrutar del juego con recursos ilimitados y sin anuncios.
-
Puede que te prohíban jugar o pierdas tu cuenta.
-
-
-
Puedes saltarte el tiempo de espera y jugar el juego cuando quieras.
-
Puedes perderte la diversión y el desafío del juego.
-
-
-
Puedes personalizar tu avatar y comprar potenciadores sin gastar dinero real.
-
Puedes encontrar algunos errores o errores en el juego.
-
-
-
Puedes posicionarte más rápido y desbloquear logros y trofeos fácilmente.
-
Usted puede perder sus datos de juego o el progreso si el menú mod apk no se actualiza.
-
-
-
Cómo utilizar Caso Penal: Pacific Bay Mod Menu APK?
-
Las características y funciones
-
Caso Penal: Pacific Bay Mod Menu APK tiene algunas características y funciones que puede utilizar para mejorar su experiencia de juego. Estos son algunos de ellos:
-
-
Estrellas ilimitadas: Puedes usar esta función para desbloquear nuevas escenas, comprar objetos o realizar acciones sin ganar estrellas en el juego.
-
Monedas ilimitadas: Puedes usar esta función para personalizar tu avatar, comprar boosters o acceder a funciones premium sin recoger monedas en el juego.
-
Dinero ilimitado: Puede utilizar esta función para obtener compras gratis en la tienda de juegos sin gastar dinero real.
-
Energía ilimitada: Puede utilizar esta función para jugar el juego sin esperar a recargar energía o ver anuncios.
-
Pistas ilimitadas: Puedes usar esta función para obtener pistas en cada escena del crimen sin usar estrellas o monedas.
-
Sin anuncios: Puede utilizar esta función para eliminar todos los anuncios del juego y jugar sin ninguna interrupción.
-
-
XP ilimitado: Puedes usar esta función para subir de nivel más rápido y ganar más puntos de experiencia en el juego.
-
Rank Hack: Puede utilizar esta función para clasificar más rápido y ganar más medallas en el juego.
-
Los consejos y trucos
-
Caso Penal: Pacific Bay Mod Menu APK tiene algunos consejos y trucos que puede utilizar para mejorar su juego y puntuación. Estos son algunos de ellos:
-
-
Utilice la función de sugerencias ilimitadas sabiamente. No confíe en él demasiado, ya que puede reducir la diversión y el desafío del juego. Trata de encontrar las pistas por ti mismo primero, y usa las pistas solo cuando estés atascado o te estés quedando sin tiempo.
-
Utilice la función de energía ilimitada con moderación. No juegue el juego durante demasiado tiempo, ya que puede causar fatiga ocular, o adicción. Tome descansos entre sesiones y limite su tiempo de reproducción diario.
-
Usa las monedas ilimitadas y la función de efectivo moderadamente. No compres todo en la tienda de juegos, ya que puede hacer que el juego sea demasiado fácil o aburrido. Guardar algunas monedas y dinero en efectivo para los niveles posteriores, o para los elementos que realmente necesita o quiere.
-
Usa cuidadosamente la función de estrellas ilimitadas. No desbloquear todas las escenas a la vez, ya que puede estropear la historia o el suspenso del juego. Sigue el orden de los casos y desbloquea las escenas a medida que avanzas.
-
Utilice la función de compras gratuitas selectivamente. No compres artículos que no sean compatibles con tu dispositivo, ya que puede causar fallos o errores en el juego. Compruebe la compatibilidad y las revisiones de los artículos antes de comprarlos.
-
Utilice la función ilimitada XP y rango hack con cautela. No suba de nivel ni suba de rango demasiado rápido, ya que puede aumentar la sospecha o la detección de los desarrolladores de juegos u otros jugadores. Mantén tu nivel y rango dentro de un rango razonable, y evita usar esta función en modo online.
-
-
Los riesgos y limitaciones
-
Caso Penal: Pacific Bay Mod Menu APK tiene algunos riesgos y limitaciones que usted debe ser consciente de antes de usarlo. Estos son algunos de ellos:
-
-
-
Usted puede encontrar algunos errores o errores en el juego si se utiliza este menú mod apk. El menú mod apk puede no ser compatible con su dispositivo, su versión del juego, o sus datos de juego. También puedes experimentar bloqueos, congelaciones, retrasos o fallos en el juego.
-
Usted puede perder los datos del juego o el progreso si se utiliza este menú mod apk. El menú mod apk puede sobrescribir o corromper los datos del juego o el progreso. También puede perder sus datos o el progreso si desinstalar el menú mod apk o actualizar el juego.
-
Usted puede perder la diversión y el desafío del juego si se utiliza este menú mod apk. El menú mod apk puede hacer el juego demasiado fácil o aburrido para usted. También puede perder interés en el juego o sentirse culpable por hacer trampa.
-
-
Conclusión
-
Caso Penal: Pacific Bay Mod Menu APK es una herramienta de hackeo que le da recursos ilimitados, compras gratis, sin anuncios, y más en Criminal Case: Pacific Bay, un popular juego de objetos ocultos en Android. Es una versión modificada del juego original que no está disponible en Google Play, pero en algunos sitios web que ofrecen aplicaciones y juegos modificados. Tiene algunos beneficios y desventajas que debe considerar antes de usarlo. También tiene algunas características y funciones que puede utilizar para mejorar su experiencia de juego. También tiene algunos consejos y trucos que puede utilizar para mejorar su juego y puntuación. También tiene algunos riesgos y limitaciones que debes conocer antes de usarlo.
-
Preguntas frecuentes
-
Q: ¿Es el caso penal: Pacific Bay Mod menú APK seguro de usar?
-
A: Caso Penal: Pacific Bay Mod Menu APK no es seguro de usar, ya que puede contener virus, malware o spyware que pueden dañar su dispositivo o robar su información personal. También puede hacer que te expulsen del juego o que pierdas tu cuenta. También puede causar errores o errores en el juego. Es mejor utilizar la versión original de Criminal Case: Pacific Bay de Google Play.
-
Q: Es Caso Penal: Pacific Bay Mod Menu APK legal de usar?
-
-
Q: ¿Cómo puedo actualizar Caso Penal: Pacific Bay Mod Menu APK?
-
A: Puede actualizar Caso Penal: Pacific Bay Mod Menu APK mediante la descarga e instalación de la última versión de un sitio web que ofrece aplicaciones y juegos modded. Sin embargo, usted debe tener cuidado al actualizar el menú mod apk, ya que puede no ser compatible con los datos del juego o el progreso. También puede perder sus características de hackeo o enfrentar nuevos riesgos o limitaciones. Es mejor hacer una copia de seguridad de los datos del juego antes de actualizar el menú mod apk.
-
Q: ¿Cómo puedo desinstalar Caso Penal: Pacific Bay Mod Menu APK?
-
A: Usted puede desinstalar Caso Penal: Pacific Bay Mod Menu APK siguiendo estos pasos:
-
-
Ir a la configuración del dispositivo y toque en aplicaciones o aplicaciones.
-
Encontrar y toque en Caso Penal: Pacific Bay Mod Menú APK.
-
Toque en Desinstalar y confirme su acción.
-
Espere a que termine el proceso de desinstalación.
-
-
También puede volver a instalar la versión original de Criminal Case: Pacific Bay de Google Play si desea jugar el juego de nuevo.
-
Q: ¿Dónde puedo encontrar más información sobre Caso Penal: Pacific Bay Mod Menu APK?
-
A: Usted puede encontrar más información sobre Caso Penal: Pacific Bay Mod Menu APK visitando el sitio web que lo ofrece, o buscando en línea para comentarios, comentarios o tutoriales. Sin embargo, debe tener cuidado al visitar dichos sitios web o fuentes, ya que pueden no ser confiables o confiables. También debe evitar hacer clic en cualquier enlace sospechoso o descargar archivos desconocidos. Es mejor usar un antivirus o una aplicación de seguridad de buena reputación para proteger su dispositivo y sus datos.
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/utils.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/utils.py
deleted file mode 100644
index dd2d245a0bebcd5fc37ac20526aabbd5358dab0e..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/utils.py
+++ /dev/null
@@ -1,71 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-This module offers general convenience and utility functions for dealing with
-datetimes.
-
-.. versionadded:: 2.7.0
-"""
-from __future__ import unicode_literals
-
-from datetime import datetime, time
-
-
-def today(tzinfo=None):
- """
- Returns a :py:class:`datetime` representing the current day at midnight
-
- :param tzinfo:
- The time zone to attach (also used to determine the current day).
-
- :return:
- A :py:class:`datetime.datetime` object representing the current day
- at midnight.
- """
-
- dt = datetime.now(tzinfo)
- return datetime.combine(dt.date(), time(0, tzinfo=tzinfo))
-
-
-def default_tzinfo(dt, tzinfo):
- """
- Sets the ``tzinfo`` parameter on naive datetimes only
-
- This is useful for example when you are provided a datetime that may have
- either an implicit or explicit time zone, such as when parsing a time zone
- string.
-
- .. doctest::
-
- >>> from dateutil.tz import tzoffset
- >>> from dateutil.parser import parse
- >>> from dateutil.utils import default_tzinfo
- >>> dflt_tz = tzoffset("EST", -18000)
- >>> print(default_tzinfo(parse('2014-01-01 12:30 UTC'), dflt_tz))
- 2014-01-01 12:30:00+00:00
- >>> print(default_tzinfo(parse('2014-01-01 12:30'), dflt_tz))
- 2014-01-01 12:30:00-05:00
-
- :param dt:
- The datetime on which to replace the time zone
-
- :param tzinfo:
- The :py:class:`datetime.tzinfo` subclass instance to assign to
- ``dt`` if (and only if) it is naive.
-
- :return:
- Returns an aware :py:class:`datetime.datetime`.
- """
- if dt.tzinfo is not None:
- return dt
- else:
- return dt.replace(tzinfo=tzinfo)
-
-
-def within_delta(dt1, dt2, delta):
- """
- Useful for comparing two datetimes that should be considered equal when
- their difference is negligible (within ``delta``).
- """
- delta = abs(delta)
- difference = dt1 - dt2
- return -delta <= difference <= delta
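-
-
-# Illustrative example (not part of the upstream module): two timestamps five
-# seconds apart compare as equal within a one-minute tolerance.
-#
-#   >>> from datetime import datetime, timedelta
-#   >>> within_delta(datetime(2014, 1, 1, 12, 0, 5), datetime(2014, 1, 1, 12, 0, 0), timedelta(minutes=1))
-#   True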
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/version.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/version.py
deleted file mode 100644
index 95e1869658566aac3060562d8cd5a6b647887d1e..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/version.py
+++ /dev/null
@@ -1,6 +0,0 @@
-import pkg_resources
-
-try:
- __version__ = pkg_resources.get_distribution('setuptools').version
-except Exception:
- __version__ = 'unknown'
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/dev/linter.sh b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/dev/linter.sh
deleted file mode 100644
index 8dff44115ddb323486a00a8ca0764fe8fb7d2e0c..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/dev/linter.sh
+++ /dev/null
@@ -1,46 +0,0 @@
-#!/bin/bash -e
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
-# Run this script at project root by "./dev/linter.sh" before you commit
-
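-# vergte A B: succeeds (exit status 0) when version A >= version B, by checking
-# that B sorts as the smaller of the two under `sort -V`.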
-vergte() {
- [ "$2" = "$(echo -e "$1\n$2" | sort -V | head -n1)" ]
-}
-
-{
- black --version | grep "19.3b0" > /dev/null
-} || {
- echo "Linter requires black==19.3b0 !"
- exit 1
-}
-
-ISORT_TARGET_VERSION="4.3.21"
-ISORT_VERSION=$(isort -v | grep VERSION | awk '{print $2}')
-vergte "$ISORT_VERSION" "$ISORT_TARGET_VERSION" || {
- echo "Linter requires isort>=${ISORT_TARGET_VERSION} !"
- exit 1
-}
-
-set -v
-
-echo "Running isort ..."
-isort -y -sp . --atomic
-
-echo "Running black ..."
-black -l 100 .
-
-echo "Running flake8 ..."
-if [ -x "$(command -v flake8-3)" ]; then
- flake8-3 .
-else
- python3 -m flake8 .
-fi
-
-# echo "Running mypy ..."
-# Pytorch does not have enough type annotations
-# mypy detectron2/solver detectron2/structures detectron2/config
-
-echo "Running clang-format ..."
-find . -regex ".*\.\(cpp\|c\|cc\|cu\|cxx\|h\|hh\|hpp\|hxx\|tcc\|mm\|m\)" -print0 | xargs -0 clang-format -i
-
-command -v arc > /dev/null && arc lint
diff --git a/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/utils.py b/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/utils.py
deleted file mode 100644
index 32b09e5374766d3002347c3c4cbaa702a3ddd6cd..0000000000000000000000000000000000000000
--- a/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/utils.py
+++ /dev/null
@@ -1,41 +0,0 @@
-
-import numpy as np
-
-from collections import Counter
-from torch.utils.data import Subset
-from sklearn.model_selection import train_test_split
-
-
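-# Stratified train/validation split: preserves the class distribution of
-# `dataset.targets` in both resulting subsets (wraps sklearn's train_test_split).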
-def __balance_val_split(dataset, val_split=0.):
- targets = np.array(dataset.targets)
- train_indices, val_indices = train_test_split(
- np.arange(targets.shape[0]),
- test_size=val_split,
- stratify=targets
- )
-
- train_dataset = Subset(dataset, indices=train_indices)
- val_dataset = Subset(dataset, indices=val_indices)
-
- return train_dataset, val_dataset
-
-
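-# Keeps only a stratified `train_split` fraction of an existing Subset; the
-# subset is returned unchanged when train_split == 1.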
-def __split_of_train_sequence(subset: Subset, train_split=1.0):
- if train_split == 1:
- return subset
-
- targets = np.array([subset.dataset.targets[i] for i in subset.indices])
- train_indices, _ = train_test_split(
- np.arange(targets.shape[0]),
- test_size=1 - train_split,
- stratify=targets
- )
-
- train_dataset = Subset(subset.dataset, indices=[subset.indices[i] for i in train_indices])
-
- return train_dataset
-
-
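-# Prints a {class label: sample count} summary for the given Subset.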
-def __log_class_statistics(subset: Subset):
- train_classes = [subset.dataset.targets[i] for i in subset.indices]
- print(dict(Counter(train_classes)))
diff --git a/spaces/CVPR/WALT/mmdet/core/bbox/assigners/max_iou_assigner.py b/spaces/CVPR/WALT/mmdet/core/bbox/assigners/max_iou_assigner.py
deleted file mode 100644
index 5cf4c4b4b450f87dfb99c3d33d8ed83d3e5cfcb3..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/core/bbox/assigners/max_iou_assigner.py
+++ /dev/null
@@ -1,212 +0,0 @@
-import torch
-
-from ..builder import BBOX_ASSIGNERS
-from ..iou_calculators import build_iou_calculator
-from .assign_result import AssignResult
-from .base_assigner import BaseAssigner
-
-
-@BBOX_ASSIGNERS.register_module()
-class MaxIoUAssigner(BaseAssigner):
- """Assign a corresponding gt bbox or background to each bbox.
-
- Each proposal will be assigned `-1`, or a semi-positive integer
- indicating the ground truth index.
-
- - -1: negative sample, no assigned gt
- - semi-positive integer: positive sample, index (0-based) of assigned gt
-
- Args:
- pos_iou_thr (float): IoU threshold for positive bboxes.
- neg_iou_thr (float or tuple): IoU threshold for negative bboxes.
- min_pos_iou (float): Minimum iou for a bbox to be considered as a
- positive bbox. Positive samples can have smaller IoU than
- pos_iou_thr due to the 4th step (assign max IoU sample to each gt).
- gt_max_assign_all (bool): Whether to assign all bboxes with the same
- highest overlap with some gt to that gt.
- ignore_iof_thr (float): IoF threshold for ignoring bboxes (if
- `gt_bboxes_ignore` is specified). Negative values mean not
- ignoring any bboxes.
- ignore_wrt_candidates (bool): Whether to compute the iof between
- `bboxes` and `gt_bboxes_ignore`, or the contrary.
- match_low_quality (bool): Whether to allow low quality matches. This is
- usually allowed for RPN and single stage detectors, but not allowed
- in the second stage. Details are demonstrated in Step 4.
- gpu_assign_thr (int): The upper bound of the number of GT for GPU
- assign. When the number of gt is above this threshold, will assign
- on CPU device. Negative values mean not assign on CPU.
- """
-
- def __init__(self,
- pos_iou_thr,
- neg_iou_thr,
- min_pos_iou=.0,
- gt_max_assign_all=True,
- ignore_iof_thr=-1,
- ignore_wrt_candidates=True,
- match_low_quality=True,
- gpu_assign_thr=-1,
- iou_calculator=dict(type='BboxOverlaps2D')):
- self.pos_iou_thr = pos_iou_thr
- self.neg_iou_thr = neg_iou_thr
- self.min_pos_iou = min_pos_iou
- self.gt_max_assign_all = gt_max_assign_all
- self.ignore_iof_thr = ignore_iof_thr
- self.ignore_wrt_candidates = ignore_wrt_candidates
- self.gpu_assign_thr = gpu_assign_thr
- self.match_low_quality = match_low_quality
- self.iou_calculator = build_iou_calculator(iou_calculator)
-
- def assign(self, bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None):
- """Assign gt to bboxes.
-
- This method assigns a gt bbox to every bbox (proposal/anchor); each bbox
- will be assigned -1 or a semi-positive number. -1 means negative
- sample; a semi-positive number is the index (0-based) of the assigned gt.
- The assignment is done in the following steps, and the order matters.
-
- 1. assign every bbox to the background
- 2. assign proposals whose iou with all gts < neg_iou_thr to 0
- 3. for each bbox, if the iou with its nearest gt >= pos_iou_thr,
- assign it to that bbox
- 4. for each gt bbox, assign its nearest proposals (may be more than
- one) to itself
-
- Args:
- bboxes (Tensor): Bounding boxes to be assigned, shape (n, 4).
- gt_bboxes (Tensor): Ground truth boxes, shape (k, 4).
- gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are
- labelled as `ignored`, e.g., crowd boxes in COCO.
- gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ).
-
- Returns:
- :obj:`AssignResult`: The assign result.
-
- Example:
- >>> self = MaxIoUAssigner(0.5, 0.5)
- >>> bboxes = torch.Tensor([[0, 0, 10, 10], [10, 10, 20, 20]])
- >>> gt_bboxes = torch.Tensor([[0, 0, 10, 9]])
- >>> assign_result = self.assign(bboxes, gt_bboxes)
- >>> expected_gt_inds = torch.LongTensor([1, 0])
- >>> assert torch.all(assign_result.gt_inds == expected_gt_inds)
- """
- assign_on_cpu = True if (self.gpu_assign_thr > 0) and (
- gt_bboxes.shape[0] > self.gpu_assign_thr) else False
- # compute overlap and assign gt on CPU when number of GT is large
- if assign_on_cpu:
- device = bboxes.device
- bboxes = bboxes.cpu()
- gt_bboxes = gt_bboxes.cpu()
- if gt_bboxes_ignore is not None:
- gt_bboxes_ignore = gt_bboxes_ignore.cpu()
- if gt_labels is not None:
- gt_labels = gt_labels.cpu()
-
- overlaps = self.iou_calculator(gt_bboxes, bboxes)
-
- if (self.ignore_iof_thr > 0 and gt_bboxes_ignore is not None
- and gt_bboxes_ignore.numel() > 0 and bboxes.numel() > 0):
- if self.ignore_wrt_candidates:
- ignore_overlaps = self.iou_calculator(
- bboxes, gt_bboxes_ignore, mode='iof')
- ignore_max_overlaps, _ = ignore_overlaps.max(dim=1)
- else:
- ignore_overlaps = self.iou_calculator(
- gt_bboxes_ignore, bboxes, mode='iof')
- ignore_max_overlaps, _ = ignore_overlaps.max(dim=0)
- overlaps[:, ignore_max_overlaps > self.ignore_iof_thr] = -1
-
- assign_result = self.assign_wrt_overlaps(overlaps, gt_labels)
- if assign_on_cpu:
- assign_result.gt_inds = assign_result.gt_inds.to(device)
- assign_result.max_overlaps = assign_result.max_overlaps.to(device)
- if assign_result.labels is not None:
- assign_result.labels = assign_result.labels.to(device)
- return assign_result
-
- def assign_wrt_overlaps(self, overlaps, gt_labels=None):
- """Assign w.r.t. the overlaps of bboxes with gts.
-
- Args:
- overlaps (Tensor): Overlaps between k gt_bboxes and n bboxes,
- shape(k, n).
- gt_labels (Tensor, optional): Labels of k gt_bboxes, shape (k, ).
-
- Returns:
- :obj:`AssignResult`: The assign result.
- """
- num_gts, num_bboxes = overlaps.size(0), overlaps.size(1)
-
- # 1. assign -1 by default
- assigned_gt_inds = overlaps.new_full((num_bboxes, ),
- -1,
- dtype=torch.long)
-
- if num_gts == 0 or num_bboxes == 0:
- # No ground truth or boxes, return empty assignment
- max_overlaps = overlaps.new_zeros((num_bboxes, ))
- if num_gts == 0:
- # No truth, assign everything to background
- assigned_gt_inds[:] = 0
- if gt_labels is None:
- assigned_labels = None
- else:
- assigned_labels = overlaps.new_full((num_bboxes, ),
- -1,
- dtype=torch.long)
- return AssignResult(
- num_gts,
- assigned_gt_inds,
- max_overlaps,
- labels=assigned_labels)
-
- # for each anchor, which gt best overlaps with it
- # for each anchor, the max iou of all gts
- max_overlaps, argmax_overlaps = overlaps.max(dim=0)
- # for each gt, which anchor best overlaps with it
- # for each gt, the max iou of all proposals
- gt_max_overlaps, gt_argmax_overlaps = overlaps.max(dim=1)
-
- # 2. assign negative: below
- # the negative inds are set to be 0
- if isinstance(self.neg_iou_thr, float):
- assigned_gt_inds[(max_overlaps >= 0)
- & (max_overlaps < self.neg_iou_thr)] = 0
- elif isinstance(self.neg_iou_thr, tuple):
- assert len(self.neg_iou_thr) == 2
- assigned_gt_inds[(max_overlaps >= self.neg_iou_thr[0])
- & (max_overlaps < self.neg_iou_thr[1])] = 0
-
- # 3. assign positive: above positive IoU threshold
- pos_inds = max_overlaps >= self.pos_iou_thr
- assigned_gt_inds[pos_inds] = argmax_overlaps[pos_inds] + 1
-
- if self.match_low_quality:
- # Low-quality matching will overwrite the assigned_gt_inds assigned
- # in Step 3. Thus, the assigned gt might not be the best one for
- # prediction.
-            # For example, if bbox A has 0.9 and 0.8 iou with GT bbox 1 & 2,
-            # GT bbox 1 will be assigned as the best target for bbox A in step 3.
-            # However, if GT bbox 2's gt_argmax_overlaps = A, bbox A's
-            # assigned_gt_inds will be overwritten to be GT bbox 2.
-            # This might be the reason that it is not used in ROI Heads.
- for i in range(num_gts):
- if gt_max_overlaps[i] >= self.min_pos_iou:
- if self.gt_max_assign_all:
- max_iou_inds = overlaps[i, :] == gt_max_overlaps[i]
- assigned_gt_inds[max_iou_inds] = i + 1
- else:
- assigned_gt_inds[gt_argmax_overlaps[i]] = i + 1
-
- if gt_labels is not None:
- assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1)
- pos_inds = torch.nonzero(
- assigned_gt_inds > 0, as_tuple=False).squeeze()
- if pos_inds.numel() > 0:
- assigned_labels[pos_inds] = gt_labels[
- assigned_gt_inds[pos_inds] - 1]
- else:
- assigned_labels = None
-
- return AssignResult(
- num_gts, assigned_gt_inds, max_overlaps, labels=assigned_labels)
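-
-# A minimal usage sketch (not part of the original file; `anchors`, `gt_bboxes`
-# and `gt_labels` are placeholder tensors). With a tuple `neg_iou_thr`, only
-# boxes whose max IoU falls inside [0.1, 0.3) are marked negative (0); boxes
-# below 0.1 keep the default -1 and are left unassigned.
-#
-#   assigner = MaxIoUAssigner(pos_iou_thr=0.7, neg_iou_thr=(0.1, 0.3), min_pos_iou=0.3)
-#   result = assigner.assign(anchors, gt_bboxes, gt_labels=gt_labels)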
diff --git a/spaces/CarlDennis/HYTTS/README.md b/spaces/CarlDennis/HYTTS/README.md
deleted file mode 100644
index 81ad14c82083062751fa05dd8959f38d87955b72..0000000000000000000000000000000000000000
--- a/spaces/CarlDennis/HYTTS/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: HYTTS
-emoji: 👁
-colorFrom: gray
-colorTo: blue
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
-license: cc-by-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CjangCjengh/Sanskrit-TTS/README.md b/spaces/CjangCjengh/Sanskrit-TTS/README.md
deleted file mode 100644
index f7bc0e8e6fb3e84c76812e9332baf5fbf9fcc205..0000000000000000000000000000000000000000
--- a/spaces/CjangCjengh/Sanskrit-TTS/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Sanskrit TTS
-emoji: 👀
-colorFrom: blue
-colorTo: red
-sdk: gradio
-sdk_version: 3.3.1
-app_file: app.py
-pinned: false
-license: gpl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/modules/midas/midas/midas_net.py b/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/modules/midas/midas/midas_net.py
deleted file mode 100644
index 8a954977800b0a0f48807e80fa63041910e33c1f..0000000000000000000000000000000000000000
--- a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/modules/midas/midas/midas_net.py
+++ /dev/null
@@ -1,76 +0,0 @@
-"""MidashNet: Network for monocular depth estimation trained by mixing several datasets.
-This file contains code that is adapted from
-https://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py
-"""
-import torch
-import torch.nn as nn
-
-from .base_model import BaseModel
-from .blocks import FeatureFusionBlock, Interpolate, _make_encoder
-
-
-class MidasNet(BaseModel):
- """Network for monocular depth estimation.
- """
-
- def __init__(self, path=None, features=256, non_negative=True):
- """Init.
-
-        Args:
-            path (str, optional): Path to saved model. Defaults to None.
-            features (int, optional): Number of features. Defaults to 256.
-            non_negative (bool, optional): Clamp the output to be non-negative.
-                Defaults to True. The encoder backbone is fixed to "resnext101_wsl".
-        """
- print("Loading weights: ", path)
-
- super(MidasNet, self).__init__()
-
- use_pretrained = False if path is None else True
-
- self.pretrained, self.scratch = _make_encoder(backbone="resnext101_wsl", features=features, use_pretrained=use_pretrained)
-
- self.scratch.refinenet4 = FeatureFusionBlock(features)
- self.scratch.refinenet3 = FeatureFusionBlock(features)
- self.scratch.refinenet2 = FeatureFusionBlock(features)
- self.scratch.refinenet1 = FeatureFusionBlock(features)
-
- self.scratch.output_conv = nn.Sequential(
- nn.Conv2d(features, 128, kernel_size=3, stride=1, padding=1),
- Interpolate(scale_factor=2, mode="bilinear"),
- nn.Conv2d(128, 32, kernel_size=3, stride=1, padding=1),
- nn.ReLU(True),
- nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0),
- nn.ReLU(True) if non_negative else nn.Identity(),
- )
-
- if path:
- self.load(path)
-
- def forward(self, x):
- """Forward pass.
-
- Args:
- x (tensor): input data (image)
-
- Returns:
- tensor: depth
- """
-
- layer_1 = self.pretrained.layer1(x)
- layer_2 = self.pretrained.layer2(layer_1)
- layer_3 = self.pretrained.layer3(layer_2)
- layer_4 = self.pretrained.layer4(layer_3)
-
- layer_1_rn = self.scratch.layer1_rn(layer_1)
- layer_2_rn = self.scratch.layer2_rn(layer_2)
- layer_3_rn = self.scratch.layer3_rn(layer_3)
- layer_4_rn = self.scratch.layer4_rn(layer_4)
-
- path_4 = self.scratch.refinenet4(layer_4_rn)
- path_3 = self.scratch.refinenet3(path_4, layer_3_rn)
- path_2 = self.scratch.refinenet2(path_3, layer_2_rn)
- path_1 = self.scratch.refinenet1(path_2, layer_1_rn)
-
- out = self.scratch.output_conv(path_1)
-
- return torch.squeeze(out, dim=1)
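-
-# Usage sketch (`img` is a placeholder [N, 3, H, W] tensor): depth = MidasNet()(img)
-# returns an [N, H, W] depth prediction, non-negative when non_negative=True.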
diff --git a/spaces/DaFujaTyping/hf-Chat-ui/src/lib/server/database.ts b/spaces/DaFujaTyping/hf-Chat-ui/src/lib/server/database.ts
deleted file mode 100644
index 76a5461af74cecbf7ac012cfb6ba7c53bfdfa213..0000000000000000000000000000000000000000
--- a/spaces/DaFujaTyping/hf-Chat-ui/src/lib/server/database.ts
+++ /dev/null
@@ -1,31 +0,0 @@
-import { MONGODB_URL, MONGODB_DB_NAME } from "$env/static/private";
-import { MongoClient } from "mongodb";
-import type { Conversation } from "$lib/types/Conversation";
-import type { SharedConversation } from "$lib/types/SharedConversation";
-import type { AbortedGeneration } from "$lib/types/AbortedGeneration";
-import type { Settings } from "$lib/types/Settings";
-
-const client = new MongoClient(MONGODB_URL, {
- // directConnection: true
-});
-
-export const connectPromise = client.connect().catch(console.error);
-
-const db = client.db(MONGODB_DB_NAME);
-
-const conversations = db.collection<Conversation>("conversations");
-const sharedConversations = db.collection<SharedConversation>("sharedConversations");
-const abortedGenerations = db.collection<AbortedGeneration>("abortedGenerations");
-const settings = db.collection<Settings>("settings");
-
-export { client, db };
-export const collections = { conversations, sharedConversations, abortedGenerations, settings };
-
-client.on("open", () => {
- conversations.createIndex({ sessionId: 1, updatedAt: -1 });
- abortedGenerations.createIndex({ updatedAt: 1 }, { expireAfterSeconds: 30 });
- abortedGenerations.createIndex({ conversationId: 1 }, { unique: true });
- sharedConversations.createIndex({ hash: 1 }, { unique: true });
- // Sparse so that we can have settings on userId later
- settings.createIndex({ sessionId: 1 }, { unique: true, sparse: true });
-});
diff --git a/spaces/DaFujaTyping/hf-Chat-ui/src/lib/utils/sha256.ts b/spaces/DaFujaTyping/hf-Chat-ui/src/lib/utils/sha256.ts
deleted file mode 100644
index 43059b518fc5a4da6ed08ab36aeb6c289007f6aa..0000000000000000000000000000000000000000
--- a/spaces/DaFujaTyping/hf-Chat-ui/src/lib/utils/sha256.ts
+++ /dev/null
@@ -1,7 +0,0 @@
-export async function sha256(input: string): Promise<string> {
- const utf8 = new TextEncoder().encode(input);
- const hashBuffer = await crypto.subtle.digest("SHA-256", utf8);
- const hashArray = Array.from(new Uint8Array(hashBuffer));
- const hashHex = hashArray.map((bytes) => bytes.toString(16).padStart(2, "0")).join("");
- return hashHex;
-}
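-
-// Usage sketch (illustrative only): `await sha256("hello")` resolves to the
-// 64-character lowercase hex digest of the UTF-8 encoded input.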
diff --git a/spaces/DaleChen/AutoGPT/autogpt/token_counter.py b/spaces/DaleChen/AutoGPT/autogpt/token_counter.py
deleted file mode 100644
index 338fe6be4d47a679f2bf0815685edeb3dce66936..0000000000000000000000000000000000000000
--- a/spaces/DaleChen/AutoGPT/autogpt/token_counter.py
+++ /dev/null
@@ -1,73 +0,0 @@
-"""Functions for counting the number of tokens in a message or string."""
-from __future__ import annotations
-
-import tiktoken
-
-from autogpt.logs import logger
-
-
-def count_message_tokens(
- messages: list[dict[str, str]], model: str = "gpt-3.5-turbo-0301"
-) -> int:
- """
- Returns the number of tokens used by a list of messages.
-
- Args:
- messages (list): A list of messages, each of which is a dictionary
- containing the role and content of the message.
- model (str): The name of the model to use for tokenization.
- Defaults to "gpt-3.5-turbo-0301".
-
- Returns:
- int: The number of tokens used by the list of messages.
- """
- try:
- encoding = tiktoken.encoding_for_model(model)
- except KeyError:
- logger.warn("Warning: model not found. Using cl100k_base encoding.")
- encoding = tiktoken.get_encoding("cl100k_base")
- if model == "gpt-3.5-turbo":
- # !Note: gpt-3.5-turbo may change over time.
-        # Returning num tokens assuming gpt-3.5-turbo-0301.
- return count_message_tokens(messages, model="gpt-3.5-turbo-0301")
- elif model == "gpt-4":
-        # !Note: gpt-4 may change over time. Returning num tokens assuming gpt-4-0314.
- return count_message_tokens(messages, model="gpt-4-0314")
- elif model == "gpt-3.5-turbo-0301":
- tokens_per_message = (
- 4 # every message follows <|start|>{role/name}\n{content}<|end|>\n
- )
- tokens_per_name = -1 # if there's a name, the role is omitted
- elif model == "gpt-4-0314":
- tokens_per_message = 3
- tokens_per_name = 1
- else:
- raise NotImplementedError(
- f"num_tokens_from_messages() is not implemented for model {model}.\n"
- " See https://github.com/openai/openai-python/blob/main/chatml.md for"
- " information on how messages are converted to tokens."
- )
- num_tokens = 0
- for message in messages:
- num_tokens += tokens_per_message
- for key, value in message.items():
- num_tokens += len(encoding.encode(value))
- if key == "name":
- num_tokens += tokens_per_name
- num_tokens += 3 # every reply is primed with <|start|>assistant<|message|>
- return num_tokens
-
-
-def count_string_tokens(string: str, model_name: str) -> int:
- """
- Returns the number of tokens in a text string.
-
- Args:
- string (str): The text string.
- model_name (str): The name of the encoding to use. (e.g., "gpt-3.5-turbo")
-
- Returns:
- int: The number of tokens in the text string.
- """
- encoding = tiktoken.encoding_for_model(model_name)
- return len(encoding.encode(string))
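-
-# Usage sketch (illustrative only; the message list below is a placeholder):
-#
-#   messages = [{"role": "user", "content": "Hello"}]
-#   count_message_tokens(messages, model="gpt-3.5-turbo-0301")  # content tokens + per-message overhead
-#   count_string_tokens("Hello world", model_name="gpt-3.5-turbo")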
diff --git a/spaces/DaleChen/AutoGPT/tests/test_config.py b/spaces/DaleChen/AutoGPT/tests/test_config.py
deleted file mode 100644
index b472a24c78edd1f931a76c68e08ed544bbe61d98..0000000000000000000000000000000000000000
--- a/spaces/DaleChen/AutoGPT/tests/test_config.py
+++ /dev/null
@@ -1,84 +0,0 @@
-from unittest import TestCase
-
-from autogpt.config import Config
-
-
-class TestConfig(TestCase):
- """
- Test cases for the Config class, which handles the configuration settings
- for the AI and ensures it behaves as a singleton.
- """
-
- def setUp(self):
- """
- Set up the test environment by creating an instance of the Config class.
- """
- self.config = Config()
-
- def test_singleton(self):
- """
- Test if the Config class behaves as a singleton by ensuring that two instances are the same.
- """
- config2 = Config()
- self.assertIs(self.config, config2)
-
- def test_initial_values(self):
- """
- Test if the initial values of the Config class attributes are set correctly.
- """
- self.assertFalse(self.config.debug_mode)
- self.assertFalse(self.config.continuous_mode)
- self.assertFalse(self.config.speak_mode)
- self.assertEqual(self.config.fast_llm_model, "gpt-3.5-turbo")
- self.assertEqual(self.config.smart_llm_model, "gpt-4")
- self.assertEqual(self.config.fast_token_limit, 4000)
- self.assertEqual(self.config.smart_token_limit, 8000)
-
- def test_set_continuous_mode(self):
- """
- Test if the set_continuous_mode() method updates the continuous_mode attribute.
- """
- self.config.set_continuous_mode(True)
- self.assertTrue(self.config.continuous_mode)
-
- def test_set_speak_mode(self):
- """
- Test if the set_speak_mode() method updates the speak_mode attribute.
- """
- self.config.set_speak_mode(True)
- self.assertTrue(self.config.speak_mode)
-
- def test_set_fast_llm_model(self):
- """
- Test if the set_fast_llm_model() method updates the fast_llm_model attribute.
- """
- self.config.set_fast_llm_model("gpt-3.5-turbo-test")
- self.assertEqual(self.config.fast_llm_model, "gpt-3.5-turbo-test")
-
- def test_set_smart_llm_model(self):
- """
- Test if the set_smart_llm_model() method updates the smart_llm_model attribute.
- """
- self.config.set_smart_llm_model("gpt-4-test")
- self.assertEqual(self.config.smart_llm_model, "gpt-4-test")
-
- def test_set_fast_token_limit(self):
- """
- Test if the set_fast_token_limit() method updates the fast_token_limit attribute.
- """
- self.config.set_fast_token_limit(5000)
- self.assertEqual(self.config.fast_token_limit, 5000)
-
- def test_set_smart_token_limit(self):
- """
- Test if the set_smart_token_limit() method updates the smart_token_limit attribute.
- """
- self.config.set_smart_token_limit(9000)
- self.assertEqual(self.config.smart_token_limit, 9000)
-
- def test_set_debug_mode(self):
- """
- Test if the set_debug_mode() method updates the debug_mode attribute.
- """
- self.config.set_debug_mode(True)
- self.assertTrue(self.config.debug_mode)
diff --git a/spaces/DeepLearning101/Speech-Quality-Inspection_Meta-Denoiser/denoiser/distrib.py b/spaces/DeepLearning101/Speech-Quality-Inspection_Meta-Denoiser/denoiser/distrib.py
deleted file mode 100644
index a0743fea86faf09ade822e7647fc558ba7d428f6..0000000000000000000000000000000000000000
--- a/spaces/DeepLearning101/Speech-Quality-Inspection_Meta-Denoiser/denoiser/distrib.py
+++ /dev/null
@@ -1,100 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-# author: adefossez
-
-import logging
-import os
-
-import torch
-from torch.utils.data.distributed import DistributedSampler
-from torch.utils.data import DataLoader, Subset
-from torch.nn.parallel.distributed import DistributedDataParallel
-
-logger = logging.getLogger(__name__)
-rank = 0
-world_size = 1
-
-
-def init(args):
- """init.
-
- Initialize DDP using the given rendezvous file.
- """
- global rank, world_size
- if args.ddp:
- assert args.rank is not None and args.world_size is not None
- rank = args.rank
- world_size = args.world_size
- if world_size == 1:
- return
- torch.cuda.set_device(rank)
- torch.distributed.init_process_group(
- backend=args.ddp_backend,
- init_method='file://' + os.path.abspath(args.rendezvous_file),
- world_size=world_size,
- rank=rank)
- logger.debug("Distributed rendezvous went well, rank %d/%d", rank, world_size)
-
-
-def average(metrics, count=1.):
- """average.
-
-    Average all the relevant metrics across processes.
-    `metrics` should be a 1D float32 vector. Returns the average of `metrics`
- over all hosts. You can use `count` to control the weight of each worker.
- """
- if world_size == 1:
- return metrics
- tensor = torch.tensor(list(metrics) + [1], device='cuda', dtype=torch.float32)
- tensor *= count
- torch.distributed.all_reduce(tensor, op=torch.distributed.ReduceOp.SUM)
- return (tensor[:-1] / tensor[-1]).cpu().numpy().tolist()
-
-
-def wrap(model):
- """wrap.
-
- Wrap a model with DDP if distributed training is enabled.
- """
- if world_size == 1:
- return model
- else:
- return DistributedDataParallel(
- model,
- device_ids=[torch.cuda.current_device()],
- output_device=torch.cuda.current_device())
-
-
-def barrier():
- if world_size > 1:
- torch.distributed.barrier()
-
-
-def loader(dataset, *args, shuffle=False, klass=DataLoader, **kwargs):
- """loader.
-
-    Create a dataloader, handling the distributed training case.
- If a gradient is going to be computed you must set `shuffle=True`.
-
- :param dataset: the dataset to be parallelized
- :param args: relevant args for the loader
- :param shuffle: shuffle examples
- :param klass: loader class
- :param kwargs: relevant args
- """
-
- if world_size == 1:
- return klass(dataset, *args, shuffle=shuffle, **kwargs)
-
- if shuffle:
-        # train means we will compute backward, so we use DistributedSampler
- sampler = DistributedSampler(dataset)
- # We ignore shuffle, DistributedSampler already shuffles
- return klass(dataset, *args, **kwargs, sampler=sampler)
- else:
-        # We make a manual shard, as DistributedSampler otherwise replicates some examples
- dataset = Subset(dataset, list(range(rank, len(dataset), world_size)))
- return klass(dataset, *args, shuffle=shuffle)
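-
-# Usage sketch (datasets and batch size are placeholders): pass shuffle=True for
-# training so each rank gets a DistributedSampler; for evaluation the dataset is
-# manually sharded across ranks instead of being sampled.
-#
-#   train_loader = loader(train_set, batch_size=16, shuffle=True, num_workers=4)
-#   valid_loader = loader(valid_set, batch_size=16, shuffle=False)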
diff --git a/spaces/Dinoking/Guccio-AI-Designer/netdissect/segmodel/resnet.py b/spaces/Dinoking/Guccio-AI-Designer/netdissect/segmodel/resnet.py
deleted file mode 100644
index ea5fdf82fafa3058c5f00074d55fbb1e584d5865..0000000000000000000000000000000000000000
--- a/spaces/Dinoking/Guccio-AI-Designer/netdissect/segmodel/resnet.py
+++ /dev/null
@@ -1,235 +0,0 @@
-import os
-import sys
-import torch
-import torch.nn as nn
-import math
-try:
- from lib.nn import SynchronizedBatchNorm2d
-except ImportError:
- from torch.nn import BatchNorm2d as SynchronizedBatchNorm2d
-
-try:
- from urllib import urlretrieve
-except ImportError:
- from urllib.request import urlretrieve
-
-
-__all__ = ['ResNet', 'resnet50', 'resnet101']
-
-
-model_urls = {
- 'resnet50': 'http://sceneparsing.csail.mit.edu/model/pretrained_resnet/resnet50-imagenet.pth',
- 'resnet101': 'http://sceneparsing.csail.mit.edu/model/pretrained_resnet/resnet101-imagenet.pth'
-}
-
-
-def conv3x3(in_planes, out_planes, stride=1):
- "3x3 convolution with padding"
- return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
- padding=1, bias=False)
-
-
-class BasicBlock(nn.Module):
- expansion = 1
-
- def __init__(self, inplanes, planes, stride=1, downsample=None):
- super(BasicBlock, self).__init__()
- self.conv1 = conv3x3(inplanes, planes, stride)
- self.bn1 = SynchronizedBatchNorm2d(planes)
- self.relu = nn.ReLU(inplace=True)
- self.conv2 = conv3x3(planes, planes)
- self.bn2 = SynchronizedBatchNorm2d(planes)
- self.downsample = downsample
- self.stride = stride
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-
-class Bottleneck(nn.Module):
- expansion = 4
-
- def __init__(self, inplanes, planes, stride=1, downsample=None):
- super(Bottleneck, self).__init__()
- self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
- self.bn1 = SynchronizedBatchNorm2d(planes)
- self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
- padding=1, bias=False)
- self.bn2 = SynchronizedBatchNorm2d(planes)
- self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
- self.bn3 = SynchronizedBatchNorm2d(planes * 4)
- self.relu = nn.ReLU(inplace=True)
- self.downsample = downsample
- self.stride = stride
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
- out = self.relu(out)
-
- out = self.conv3(out)
- out = self.bn3(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-
-class ResNet(nn.Module):
-
- def __init__(self, block, layers, num_classes=1000):
- self.inplanes = 128
- super(ResNet, self).__init__()
- self.conv1 = conv3x3(3, 64, stride=2)
- self.bn1 = SynchronizedBatchNorm2d(64)
- self.relu1 = nn.ReLU(inplace=True)
- self.conv2 = conv3x3(64, 64)
- self.bn2 = SynchronizedBatchNorm2d(64)
- self.relu2 = nn.ReLU(inplace=True)
- self.conv3 = conv3x3(64, 128)
- self.bn3 = SynchronizedBatchNorm2d(128)
- self.relu3 = nn.ReLU(inplace=True)
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
-
- self.layer1 = self._make_layer(block, 64, layers[0])
- self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
- self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
- self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
- self.avgpool = nn.AvgPool2d(7, stride=1)
- self.fc = nn.Linear(512 * block.expansion, num_classes)
-
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
- m.weight.data.normal_(0, math.sqrt(2. / n))
- elif isinstance(m, SynchronizedBatchNorm2d):
- m.weight.data.fill_(1)
- m.bias.data.zero_()
-
- def _make_layer(self, block, planes, blocks, stride=1):
- downsample = None
- if stride != 1 or self.inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- nn.Conv2d(self.inplanes, planes * block.expansion,
- kernel_size=1, stride=stride, bias=False),
- SynchronizedBatchNorm2d(planes * block.expansion),
- )
-
- layers = []
- layers.append(block(self.inplanes, planes, stride, downsample))
- self.inplanes = planes * block.expansion
- for i in range(1, blocks):
- layers.append(block(self.inplanes, planes))
-
- return nn.Sequential(*layers)
-
- def forward(self, x):
- x = self.relu1(self.bn1(self.conv1(x)))
- x = self.relu2(self.bn2(self.conv2(x)))
- x = self.relu3(self.bn3(self.conv3(x)))
- x = self.maxpool(x)
-
- x = self.layer1(x)
- x = self.layer2(x)
- x = self.layer3(x)
- x = self.layer4(x)
-
- x = self.avgpool(x)
- x = x.view(x.size(0), -1)
- x = self.fc(x)
-
- return x
-
-'''
-def resnet18(pretrained=False, **kwargs):
- """Constructs a ResNet-18 model.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on Places
- """
- model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
- if pretrained:
- model.load_state_dict(load_url(model_urls['resnet18']))
- return model
-
-
-def resnet34(pretrained=False, **kwargs):
- """Constructs a ResNet-34 model.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on Places
- """
- model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs)
- if pretrained:
- model.load_state_dict(load_url(model_urls['resnet34']))
- return model
-'''
-
-def resnet50(pretrained=False, **kwargs):
- """Constructs a ResNet-50 model.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on Places
- """
- model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
- if pretrained:
- model.load_state_dict(load_url(model_urls['resnet50']), strict=False)
- return model
-
-
-def resnet101(pretrained=False, **kwargs):
- """Constructs a ResNet-101 model.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on Places
- """
- model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)
- if pretrained:
- model.load_state_dict(load_url(model_urls['resnet101']), strict=False)
- return model
-
-# def resnet152(pretrained=False, **kwargs):
-# """Constructs a ResNet-152 model.
-#
-# Args:
-# pretrained (bool): If True, returns a model pre-trained on Places
-# """
-# model = ResNet(Bottleneck, [3, 8, 36, 3], **kwargs)
-# if pretrained:
-# model.load_state_dict(load_url(model_urls['resnet152']))
-# return model
-
-def load_url(url, model_dir='./pretrained', map_location=None):
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- filename = url.split('/')[-1]
- cached_file = os.path.join(model_dir, filename)
- if not os.path.exists(cached_file):
- sys.stderr.write('Downloading: "{}" to {}\n'.format(url, cached_file))
- urlretrieve(url, cached_file)
- return torch.load(cached_file, map_location=map_location)
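-
-# Usage sketch: resnet50(pretrained=True) downloads the pretrained weights into
-# ./pretrained on first use and loads them with strict=False.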
diff --git a/spaces/DragGan/DragGan/stylegan_human/training/networks_stylegan3.py b/spaces/DragGan/DragGan/stylegan_human/training/networks_stylegan3.py
deleted file mode 100644
index 70d0ebce100b504b39791dbf3e1dfea4c9473f2b..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/stylegan_human/training/networks_stylegan3.py
+++ /dev/null
@@ -1,538 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Generator architecture from the paper
-"Alias-Free Generative Adversarial Networks"."""
-
-import numpy as np
-import scipy.signal
-import scipy.optimize
-import torch
-import torch.nn.functional as F
-from torch_utils import misc
-from torch_utils import persistence
-from torch_utils.ops import conv2d_gradfix
-from torch_utils.ops import filtered_lrelu
-from torch_utils.ops import bias_act
-
-#----------------------------------------------------------------------------
-
-@misc.profiled_function
-def modulated_conv2d(
- x, # Input tensor: [batch_size, in_channels, in_height, in_width]
- w, # Weight tensor: [out_channels, in_channels, kernel_height, kernel_width]
- s, # Style tensor: [batch_size, in_channels]
- demodulate = True, # Apply weight demodulation?
- padding = 0, # Padding: int or [padH, padW]
- input_gain = None, # Optional scale factors for the input channels: [], [in_channels], or [batch_size, in_channels]
-):
- with misc.suppress_tracer_warnings(): # this value will be treated as a constant
- batch_size = int(x.shape[0])
- out_channels, in_channels, kh, kw = w.shape
- misc.assert_shape(w, [out_channels, in_channels, kh, kw]) # [OIkk]
- misc.assert_shape(x, [batch_size, in_channels, None, None]) # [NIHW]
- misc.assert_shape(s, [batch_size, in_channels]) # [NI]
-
- # Pre-normalize inputs.
- if demodulate:
- w = w * w.square().mean([1,2,3], keepdim=True).rsqrt()
- s = s * s.square().mean().rsqrt()
-
- # Modulate weights.
- w = w.unsqueeze(0) # [NOIkk]
- w = w * s.unsqueeze(1).unsqueeze(3).unsqueeze(4) # [NOIkk]
-
- # Demodulate weights.
- if demodulate:
- dcoefs = (w.square().sum(dim=[2,3,4]) + 1e-8).rsqrt() # [NO]
- w = w * dcoefs.unsqueeze(2).unsqueeze(3).unsqueeze(4) # [NOIkk]
-
- # Apply input scaling.
- if input_gain is not None:
- input_gain = input_gain.expand(batch_size, in_channels) # [NI]
- w = w * input_gain.unsqueeze(1).unsqueeze(3).unsqueeze(4) # [NOIkk]
-
- # Execute as one fused op using grouped convolution.
- x = x.reshape(1, -1, *x.shape[2:])
- w = w.reshape(-1, in_channels, kh, kw)
- x = conv2d_gradfix.conv2d(input=x, weight=w.to(x.dtype), padding=padding, groups=batch_size)
- x = x.reshape(batch_size, -1, *x.shape[2:])
- return x
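-
-# Shape sketch (tensors are placeholders): for x of shape [N, I, H, W], weights w of
-# shape [O, I, k, k] and styles s of shape [N, I], calling
-#   y = modulated_conv2d(x, w, s, padding=k // 2)
-# returns y of shape [N, O, H, W]; the per-sample modulation is executed as a single
-# grouped convolution with groups=N.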
-
-#----------------------------------------------------------------------------
-
-@persistence.persistent_class
-class FullyConnectedLayer(torch.nn.Module):
- def __init__(self,
- in_features, # Number of input features.
- out_features, # Number of output features.
- activation = 'linear', # Activation function: 'relu', 'lrelu', etc.
- bias = True, # Apply additive bias before the activation function?
- lr_multiplier = 1, # Learning rate multiplier.
- weight_init = 1, # Initial standard deviation of the weight tensor.
- bias_init = 0, # Initial value of the additive bias.
- ):
- super().__init__()
- self.in_features = in_features
- self.out_features = out_features
- self.activation = activation
- self.weight = torch.nn.Parameter(torch.randn([out_features, in_features]) * (weight_init / lr_multiplier))
- bias_init = np.broadcast_to(np.asarray(bias_init, dtype=np.float32), [out_features])
- self.bias = torch.nn.Parameter(torch.from_numpy(bias_init / lr_multiplier)) if bias else None
- self.weight_gain = lr_multiplier / np.sqrt(in_features)
- self.bias_gain = lr_multiplier
-
- def forward(self, x):
- w = self.weight.to(x.dtype) * self.weight_gain
- b = self.bias
- if b is not None:
- b = b.to(x.dtype)
- if self.bias_gain != 1:
- b = b * self.bias_gain
- if self.activation == 'linear' and b is not None:
- x = torch.addmm(b.unsqueeze(0), x, w.t())
- else:
- x = x.matmul(w.t())
- x = bias_act.bias_act(x, b, act=self.activation)
- return x
-
- def extra_repr(self):
- return f'in_features={self.in_features:d}, out_features={self.out_features:d}, activation={self.activation:s}'
-
-#----------------------------------------------------------------------------
-
-@persistence.persistent_class
-class MappingNetwork(torch.nn.Module):
- def __init__(self,
- z_dim, # Input latent (Z) dimensionality.
- c_dim, # Conditioning label (C) dimensionality, 0 = no labels.
- w_dim, # Intermediate latent (W) dimensionality.
- num_ws, # Number of intermediate latents to output.
- num_layers = 2, # Number of mapping layers.
- lr_multiplier = 0.01, # Learning rate multiplier for the mapping layers.
- w_avg_beta = 0.998, # Decay for tracking the moving average of W during training.
- ):
- super().__init__()
- self.z_dim = z_dim
- self.c_dim = c_dim
- self.w_dim = w_dim
- self.num_ws = num_ws
- self.num_layers = num_layers
- self.w_avg_beta = w_avg_beta
-
- # Construct layers.
- self.embed = FullyConnectedLayer(self.c_dim, self.w_dim) if self.c_dim > 0 else None
- features = [self.z_dim + (self.w_dim if self.c_dim > 0 else 0)] + [self.w_dim] * self.num_layers
- for idx, in_features, out_features in zip(range(num_layers), features[:-1], features[1:]):
- layer = FullyConnectedLayer(in_features, out_features, activation='lrelu', lr_multiplier=lr_multiplier)
- setattr(self, f'fc{idx}', layer)
- self.register_buffer('w_avg', torch.zeros([w_dim]))
-
- def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False):
- misc.assert_shape(z, [None, self.z_dim])
- if truncation_cutoff is None:
- truncation_cutoff = self.num_ws
-
- # Embed, normalize, and concatenate inputs.
- x = z.to(torch.float32)
- x = x * (x.square().mean(1, keepdim=True) + 1e-8).rsqrt()
- if self.c_dim > 0:
- misc.assert_shape(c, [None, self.c_dim])
- y = self.embed(c.to(torch.float32))
- y = y * (y.square().mean(1, keepdim=True) + 1e-8).rsqrt()
- x = torch.cat([x, y], dim=1) if x is not None else y
-
- # Execute layers.
- for idx in range(self.num_layers):
- x = getattr(self, f'fc{idx}')(x)
-
- # Update moving average of W.
- if update_emas:
- self.w_avg.copy_(x.detach().mean(dim=0).lerp(self.w_avg, self.w_avg_beta))
-
- # Broadcast and apply truncation.
- x = x.unsqueeze(1).repeat([1, self.num_ws, 1])
- if truncation_psi != 1:
- x[:, :truncation_cutoff] = self.w_avg.lerp(x[:, :truncation_cutoff], truncation_psi)
- return x
-
- def extra_repr(self):
- return f'z_dim={self.z_dim:d}, c_dim={self.c_dim:d}, w_dim={self.w_dim:d}, num_ws={self.num_ws:d}'
-
-#----------------------------------------------------------------------------
-
-@persistence.persistent_class
-class SynthesisInput(torch.nn.Module):
- def __init__(self,
- w_dim, # Intermediate latent (W) dimensionality.
- channels, # Number of output channels.
- size, # Output spatial size: int or [width, height].
- sampling_rate, # Output sampling rate.
- bandwidth, # Output bandwidth.
- ):
- super().__init__()
- self.w_dim = w_dim
- self.channels = channels
- self.size = np.broadcast_to(np.asarray(size), [2])
- self.sampling_rate = sampling_rate
- self.bandwidth = bandwidth
-
- # Draw random frequencies from uniform 2D disc.
- freqs = torch.randn([self.channels, 2])
- radii = freqs.square().sum(dim=1, keepdim=True).sqrt()
- freqs /= radii * radii.square().exp().pow(0.25)
- freqs *= bandwidth
- phases = torch.rand([self.channels]) - 0.5
-
- # Setup parameters and buffers.
- self.weight = torch.nn.Parameter(torch.randn([self.channels, self.channels]))
- self.affine = FullyConnectedLayer(w_dim, 4, weight_init=0, bias_init=[1,0,0,0])
- self.register_buffer('transform', torch.eye(3, 3)) # User-specified inverse transform wrt. resulting image.
- self.register_buffer('freqs', freqs)
- self.register_buffer('phases', phases)
-
- def forward(self, w):
- # Introduce batch dimension.
- transforms = self.transform.unsqueeze(0) # [batch, row, col]
- freqs = self.freqs.unsqueeze(0) # [batch, channel, xy]
- phases = self.phases.unsqueeze(0) # [batch, channel]
-
- # Apply learned transformation.
- t = self.affine(w) # t = (r_c, r_s, t_x, t_y)
- t = t / t[:, :2].norm(dim=1, keepdim=True) # t' = (r'_c, r'_s, t'_x, t'_y)
- m_r = torch.eye(3, device=w.device).unsqueeze(0).repeat([w.shape[0], 1, 1]) # Inverse rotation wrt. resulting image.
- m_r[:, 0, 0] = t[:, 0] # r'_c
- m_r[:, 0, 1] = -t[:, 1] # r'_s
- m_r[:, 1, 0] = t[:, 1] # r'_s
- m_r[:, 1, 1] = t[:, 0] # r'_c
- m_t = torch.eye(3, device=w.device).unsqueeze(0).repeat([w.shape[0], 1, 1]) # Inverse translation wrt. resulting image.
- m_t[:, 0, 2] = -t[:, 2] # t'_x
- m_t[:, 1, 2] = -t[:, 3] # t'_y
- transforms = m_r @ m_t @ transforms # First rotate resulting image, then translate, and finally apply user-specified transform.
-
- # Transform frequencies.
- phases = phases + (freqs @ transforms[:, :2, 2:]).squeeze(2)
- freqs = freqs @ transforms[:, :2, :2]
-
- # Dampen out-of-band frequencies that may occur due to the user-specified transform.
- amplitudes = (1 - (freqs.norm(dim=2) - self.bandwidth) / (self.sampling_rate / 2 - self.bandwidth)).clamp(0, 1)
-
- # Construct sampling grid.
- theta = torch.eye(2, 3, device=w.device)
- theta[0, 0] = 0.5 * self.size[0] / self.sampling_rate
- theta[1, 1] = 0.5 * self.size[1] / self.sampling_rate
- grids = torch.nn.functional.affine_grid(theta.unsqueeze(0), [1, 1, self.size[1], self.size[0]], align_corners=False)
-
- # Compute Fourier features.
- x = (grids.unsqueeze(3) @ freqs.permute(0, 2, 1).unsqueeze(1).unsqueeze(2)).squeeze(3) # [batch, height, width, channel]
- x = x + phases.unsqueeze(1).unsqueeze(2)
- x = torch.sin(x * (np.pi * 2))
- x = x * amplitudes.unsqueeze(1).unsqueeze(2)
-
- # Apply trainable mapping.
- weight = self.weight / np.sqrt(self.channels)
- x = x @ weight.t()
-
- # Ensure correct shape.
- x = x.permute(0, 3, 1, 2) # [batch, channel, height, width]
- misc.assert_shape(x, [w.shape[0], self.channels, int(self.size[1]), int(self.size[0])])
- return x
-
- def extra_repr(self):
- return '\n'.join([
- f'w_dim={self.w_dim:d}, channels={self.channels:d}, size={list(self.size)},',
- f'sampling_rate={self.sampling_rate:g}, bandwidth={self.bandwidth:g}'])
-
-#----------------------------------------------------------------------------
-
-@persistence.persistent_class
-class SynthesisLayer(torch.nn.Module):
- def __init__(self,
- w_dim, # Intermediate latent (W) dimensionality.
- is_torgb, # Is this the final ToRGB layer?
- is_critically_sampled, # Does this layer use critical sampling?
- use_fp16, # Does this layer use FP16?
-
- # Input & output specifications.
- in_channels, # Number of input channels.
- out_channels, # Number of output channels.
- in_size, # Input spatial size: int or [width, height].
- out_size, # Output spatial size: int or [width, height].
- in_sampling_rate, # Input sampling rate (s).
- out_sampling_rate, # Output sampling rate (s).
- in_cutoff, # Input cutoff frequency (f_c).
- out_cutoff, # Output cutoff frequency (f_c).
- in_half_width, # Input transition band half-width (f_h).
- out_half_width, # Output Transition band half-width (f_h).
-
- # Hyperparameters.
-        conv_kernel         = 3,        # Convolution kernel size. Ignored for the final ToRGB layer.
-        filter_size         = 6,        # Low-pass filter size relative to the lower resolution when up/downsampling.
-        lrelu_upsampling    = 2,        # Relative sampling rate for leaky ReLU. Ignored for the final ToRGB layer.
- use_radial_filters = False, # Use radially symmetric downsampling filter? Ignored for critically sampled layers.
- conv_clamp = 256, # Clamp the output to [-X, +X], None = disable clamping.
- magnitude_ema_beta = 0.999, # Decay rate for the moving average of input magnitudes.
- ):
- super().__init__()
- self.w_dim = w_dim
- self.is_torgb = is_torgb
- self.is_critically_sampled = is_critically_sampled
- self.use_fp16 = use_fp16
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.in_size = np.broadcast_to(np.asarray(in_size), [2])
- self.out_size = np.broadcast_to(np.asarray(out_size), [2])
- self.in_sampling_rate = in_sampling_rate
- self.out_sampling_rate = out_sampling_rate
- self.tmp_sampling_rate = max(in_sampling_rate, out_sampling_rate) * (1 if is_torgb else lrelu_upsampling)
- self.in_cutoff = in_cutoff
- self.out_cutoff = out_cutoff
- self.in_half_width = in_half_width
- self.out_half_width = out_half_width
- self.conv_kernel = 1 if is_torgb else conv_kernel
- self.conv_clamp = conv_clamp
- self.magnitude_ema_beta = magnitude_ema_beta
-
- # Setup parameters and buffers.
- self.affine = FullyConnectedLayer(self.w_dim, self.in_channels, bias_init=1)
- self.weight = torch.nn.Parameter(torch.randn([self.out_channels, self.in_channels, self.conv_kernel, self.conv_kernel]))
- self.bias = torch.nn.Parameter(torch.zeros([self.out_channels]))
- self.register_buffer('magnitude_ema', torch.ones([]))
-
- # Design upsampling filter.
- self.up_factor = int(np.rint(self.tmp_sampling_rate / self.in_sampling_rate))
- assert self.in_sampling_rate * self.up_factor == self.tmp_sampling_rate
- self.up_taps = filter_size * self.up_factor if self.up_factor > 1 and not self.is_torgb else 1
- self.register_buffer('up_filter', self.design_lowpass_filter(
- numtaps=self.up_taps, cutoff=self.in_cutoff, width=self.in_half_width*2, fs=self.tmp_sampling_rate))
-
- # Design downsampling filter.
- self.down_factor = int(np.rint(self.tmp_sampling_rate / self.out_sampling_rate))
- assert self.out_sampling_rate * self.down_factor == self.tmp_sampling_rate
- self.down_taps = filter_size * self.down_factor if self.down_factor > 1 and not self.is_torgb else 1
- self.down_radial = use_radial_filters and not self.is_critically_sampled
- self.register_buffer('down_filter', self.design_lowpass_filter(
- numtaps=self.down_taps, cutoff=self.out_cutoff, width=self.out_half_width*2, fs=self.tmp_sampling_rate, radial=self.down_radial))
-
- # Compute padding.
- pad_total = (self.out_size - 1) * self.down_factor + 1 # Desired output size before downsampling.
- pad_total -= (self.in_size + self.conv_kernel - 1) * self.up_factor # Input size after upsampling.
- pad_total += self.up_taps + self.down_taps - 2 # Size reduction caused by the filters.
- pad_lo = (pad_total + self.up_factor) // 2 # Shift sample locations according to the symmetric interpretation (Appendix C.3).
- pad_hi = pad_total - pad_lo
- self.padding = [int(pad_lo[0]), int(pad_hi[0]), int(pad_lo[1]), int(pad_hi[1])]
-
- def forward(self, x, w, noise_mode='random', force_fp32=False, update_emas=False):
- assert noise_mode in ['random', 'const', 'none'] # unused
- misc.assert_shape(x, [None, self.in_channels, int(self.in_size[1]), int(self.in_size[0])])
- misc.assert_shape(w, [x.shape[0], self.w_dim])
-
- # Track input magnitude.
- if update_emas:
- with torch.autograd.profiler.record_function('update_magnitude_ema'):
- magnitude_cur = x.detach().to(torch.float32).square().mean()
- self.magnitude_ema.copy_(magnitude_cur.lerp(self.magnitude_ema, self.magnitude_ema_beta))
- input_gain = self.magnitude_ema.rsqrt()
-
- # Execute affine layer.
- styles = self.affine(w)
- if self.is_torgb:
- weight_gain = 1 / np.sqrt(self.in_channels * (self.conv_kernel ** 2))
- styles = styles * weight_gain
-
- # Execute modulated conv2d.
- dtype = torch.float16 if (self.use_fp16 and not force_fp32 and x.device.type == 'cuda') else torch.float32
- x = modulated_conv2d(x=x.to(dtype), w=self.weight, s=styles,
- padding=self.conv_kernel-1, demodulate=(not self.is_torgb), input_gain=input_gain)
-
- # Execute bias, filtered leaky ReLU, and clamping.
- gain = 1 if self.is_torgb else np.sqrt(2)
- slope = 1 if self.is_torgb else 0.2
- x = filtered_lrelu.filtered_lrelu(x=x, fu=self.up_filter, fd=self.down_filter, b=self.bias.to(x.dtype),
- up=self.up_factor, down=self.down_factor, padding=self.padding, gain=gain, slope=slope, clamp=self.conv_clamp)
-
- # Ensure correct shape and dtype.
- misc.assert_shape(x, [None, self.out_channels, int(self.out_size[1]), int(self.out_size[0])])
- assert x.dtype == dtype
- return x
-
- @staticmethod
- def design_lowpass_filter(numtaps, cutoff, width, fs, radial=False):
- assert numtaps >= 1
-
- # Identity filter.
- if numtaps == 1:
- return None
-
- # Separable Kaiser low-pass filter.
- if not radial:
- f = scipy.signal.firwin(numtaps=numtaps, cutoff=cutoff, width=width, fs=fs)
- return torch.as_tensor(f, dtype=torch.float32)
-
- # Radially symmetric jinc-based filter.
- x = (np.arange(numtaps) - (numtaps - 1) / 2) / fs
- r = np.hypot(*np.meshgrid(x, x))
- f = scipy.special.j1(2 * cutoff * (np.pi * r)) / (np.pi * r)
- beta = scipy.signal.kaiser_beta(scipy.signal.kaiser_atten(numtaps, width / (fs / 2)))
- w = np.kaiser(numtaps, beta)
- f *= np.outer(w, w)
- f /= np.sum(f)
- return torch.as_tensor(f, dtype=torch.float32)
-
- def extra_repr(self):
- return '\n'.join([
- f'w_dim={self.w_dim:d}, is_torgb={self.is_torgb},',
- f'is_critically_sampled={self.is_critically_sampled}, use_fp16={self.use_fp16},',
- f'in_sampling_rate={self.in_sampling_rate:g}, out_sampling_rate={self.out_sampling_rate:g},',
- f'in_cutoff={self.in_cutoff:g}, out_cutoff={self.out_cutoff:g},',
- f'in_half_width={self.in_half_width:g}, out_half_width={self.out_half_width:g},',
- f'in_size={list(self.in_size)}, out_size={list(self.out_size)},',
- f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}'])
-
-#----------------------------------------------------------------------------
-
-@persistence.persistent_class
-class SynthesisNetwork(torch.nn.Module):
- def __init__(self,
- w_dim, # Intermediate latent (W) dimensionality.
- img_resolution, # Output image resolution.
- img_channels, # Number of color channels.
- channel_base = 32768, # Overall multiplier for the number of channels.
- channel_max = 512, # Maximum number of channels in any layer.
- num_layers = 14, # Total number of layers, excluding Fourier features and ToRGB.
- num_critical = 2, # Number of critically sampled layers at the end.
- first_cutoff = 2, # Cutoff frequency of the first layer (f_{c,0}).
- first_stopband = 2**2.1, # Minimum stopband of the first layer (f_{t,0}).
- last_stopband_rel = 2**0.3, # Minimum stopband of the last layer, expressed relative to the cutoff.
- margin_size = 10, # Number of additional pixels outside the image.
- output_scale = 0.25, # Scale factor for the output image.
- num_fp16_res = 4, # Use FP16 for the N highest resolutions.
- **layer_kwargs, # Arguments for SynthesisLayer.
- ):
- super().__init__()
- self.w_dim = w_dim
- self.num_ws = num_layers + 2
- self.img_resolution = img_resolution
- self.img_channels = img_channels
- self.num_layers = num_layers
- self.num_critical = num_critical
- self.margin_size = margin_size
- self.output_scale = output_scale
- self.num_fp16_res = num_fp16_res
-
- # Geometric progression of layer cutoffs and min. stopbands.
- last_cutoff = self.img_resolution / 2 # f_{c,N}
- last_stopband = last_cutoff * last_stopband_rel # f_{t,N}
- exponents = np.minimum(np.arange(self.num_layers + 1) / (self.num_layers - self.num_critical), 1)
- cutoffs = first_cutoff * (last_cutoff / first_cutoff) ** exponents # f_c[i]
- stopbands = first_stopband * (last_stopband / first_stopband) ** exponents # f_t[i]
-
- # Compute remaining layer parameters.
- sampling_rates = np.exp2(np.ceil(np.log2(np.minimum(stopbands * 2, self.img_resolution)))) # s[i]
- half_widths = np.maximum(stopbands, sampling_rates / 2) - cutoffs # f_h[i]
- sizes = sampling_rates + self.margin_size * 2
- sizes[-2:] = self.img_resolution
- channels = np.rint(np.minimum((channel_base / 2) / cutoffs, channel_max))
- channels[-1] = self.img_channels
-
- # Construct layers.
- self.input = SynthesisInput(
- w_dim=self.w_dim, channels=int(channels[0]), size=int(sizes[0]),
- sampling_rate=sampling_rates[0], bandwidth=cutoffs[0])
- self.layer_names = []
- for idx in range(self.num_layers + 1):
- prev = max(idx - 1, 0)
- is_torgb = (idx == self.num_layers)
- is_critically_sampled = (idx >= self.num_layers - self.num_critical)
- use_fp16 = (sampling_rates[idx] * (2 ** self.num_fp16_res) > self.img_resolution)
- layer = SynthesisLayer(
- w_dim=self.w_dim, is_torgb=is_torgb, is_critically_sampled=is_critically_sampled, use_fp16=use_fp16,
-                in_channels=int(channels[prev]), out_channels=int(channels[idx]),
- in_size=int(sizes[prev]), out_size=int(sizes[idx]),
- in_sampling_rate=int(sampling_rates[prev]), out_sampling_rate=int(sampling_rates[idx]),
- in_cutoff=cutoffs[prev], out_cutoff=cutoffs[idx],
- in_half_width=half_widths[prev], out_half_width=half_widths[idx],
- **layer_kwargs)
- name = f'L{idx}_{layer.out_size[0]}_{layer.out_channels}'
- setattr(self, name, layer)
- self.layer_names.append(name)
-
- def forward(self, ws, **layer_kwargs):
- misc.assert_shape(ws, [None, self.num_ws, self.w_dim])
- ws = ws.to(torch.float32).unbind(dim=1)
-
- # Execute layers.
- x = self.input(ws[0])
- for name, w in zip(self.layer_names, ws[1:]):
- x = getattr(self, name)(x, w, **layer_kwargs)
- if self.output_scale != 1:
- x = x * self.output_scale
-
- # Ensure correct shape and dtype.
- misc.assert_shape(x, [None, self.img_channels, self.img_resolution, self.img_resolution])
- x = x.to(torch.float32)
- return x
-
- def extra_repr(self):
- return '\n'.join([
- f'w_dim={self.w_dim:d}, num_ws={self.num_ws:d},',
- f'img_resolution={self.img_resolution:d}, img_channels={self.img_channels:d},',
- f'num_layers={self.num_layers:d}, num_critical={self.num_critical:d},',
- f'margin_size={self.margin_size:d}, num_fp16_res={self.num_fp16_res:d}'])
-
-#----------------------------------------------------------------------------
-
-@persistence.persistent_class
-class Generator(torch.nn.Module):
- def __init__(self,
- z_dim, # Input latent (Z) dimensionality.
- c_dim, # Conditioning label (C) dimensionality.
- w_dim, # Intermediate latent (W) dimensionality.
- img_resolution, # Output resolution.
- img_channels, # Number of output color channels.
- mapping_kwargs = {}, # Arguments for MappingNetwork.
- resize=None,
- **synthesis_kwargs, # Arguments for SynthesisNetwork.
- ):
- super().__init__()
- self.z_dim = z_dim
- self.c_dim = c_dim
- self.w_dim = w_dim
- self.img_resolution = img_resolution
- self.img_channels = img_channels
- self.synthesis = SynthesisNetwork(w_dim=w_dim, img_resolution=img_resolution, img_channels=img_channels, **synthesis_kwargs)
- self.num_ws = self.synthesis.num_ws
- self.mapping = MappingNetwork(z_dim=z_dim, c_dim=c_dim, w_dim=w_dim, num_ws=self.num_ws, **mapping_kwargs)
- self.resize = resize
-
- def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False, input_is_w=False, **synthesis_kwargs):
- if input_is_w:
- ws = z
- if ws.dim() == 2:
- ws = ws.unsqueeze(1).repeat([1, self.mapping.num_ws, 1])
- else:
- ws = self.mapping(z, c, truncation_psi=truncation_psi, truncation_cutoff=truncation_cutoff, update_emas=update_emas)
- img = self.synthesis(ws, update_emas=update_emas, **synthesis_kwargs)
- if self.resize is not None:
- img = imresize(img, [self.resize, self.resize])
- return img
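-
-    # Usage sketch (G, z and c are placeholders): img = G(z, c, truncation_psi=0.7)
-    # maps latent codes to images; with input_is_w=True, z is interpreted as W (or
-    # per-layer W+) codes instead of Z.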
-
-#----------------------------------------------------------------------------
-
-def imresize(image, size):
- dim = image.dim()
- if dim == 3:
- image = image.unsqueeze(1)
- b, _, h, w = image.shape
- if size[0] > h:
- image = F.interpolate(image, size, mode='bilinear')
- elif size[0] < h:
- image = F.interpolate(image, size, mode='area')
- if dim == 3:
- image = image.squeeze(1)
- return image
diff --git a/spaces/EDGAhab/Aatrox-Talking/monotonic_align/setup.py b/spaces/EDGAhab/Aatrox-Talking/monotonic_align/setup.py
deleted file mode 100644
index 30c224807a70faa9df9c9eb75f8e80c8c867b16b..0000000000000000000000000000000000000000
--- a/spaces/EDGAhab/Aatrox-Talking/monotonic_align/setup.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from distutils.core import setup
-from Cython.Build import cythonize
-import numpy
-
-setup(
- name = 'monotonic_align',
- ext_modules = cythonize("core.pyx"),
- include_dirs=[numpy.get_include()]
-)
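-
-# Usage note (hedged): this extension is typically built in place before importing
-# monotonic_align, e.g.
-#   python setup.py build_ext --inplace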
diff --git a/spaces/EPFL-VILAB/MultiMAE/utils/taskonomy/taskonomy_dataset.py b/spaces/EPFL-VILAB/MultiMAE/utils/taskonomy/taskonomy_dataset.py
deleted file mode 100644
index 2797802b16e127f7ecd229401f073e2191702f01..0000000000000000000000000000000000000000
--- a/spaces/EPFL-VILAB/MultiMAE/utils/taskonomy/taskonomy_dataset.py
+++ /dev/null
@@ -1,70 +0,0 @@
-import os
-
-import pandas as pd
-from PIL import Image, ImageFile
-from torch.utils.data import Dataset
-
-ImageFile.LOAD_TRUNCATED_IMAGES = True
-
-from .transforms import task_transform
-
-
-class TaskonomyDataset(Dataset):
- def __init__(self,
- data_root,
- tasks,
- split='train',
- variant='tiny',
- image_size=256,
- max_images=None):
- """
- Taskonomy dataloader.
-
- Args:
- data_root: Root of Taskonomy data directory
- tasks: List of tasks. Any of ['rgb', 'depth_euclidean', 'depth_zbuffer',
- 'edge_occlusion', 'edge_texture', 'keypoints2d', 'keypoints3d', 'normal',
- 'principal_curvature', 'reshading', 'mask_valid'].
- split: One of {'train', 'val', 'test'}
- variant: One of {'debug', 'tiny', 'medium', 'full', 'fullplus'}
- image_size: Target image size
- max_images: Optional subset selection
- """
- super(TaskonomyDataset, self).__init__()
- self.data_root = data_root
- self.tasks = tasks
- self.split = split
- self.variant = variant
-        self.image_size = image_size
- self.max_images = max_images
-
- self.image_ids = pd.read_csv(
- os.path.join(os.path.dirname(__file__), 'splits', f'{self.variant}_{self.split}.csv')
- ).to_numpy()
-
- if isinstance(self.max_images, int):
- self.image_ids = self.image_ids[:self.max_images]
-
- print(f'Initialized TaskonomyDataset with {len(self.image_ids)} images from variant {self.variant} in split {self.split}.')
-
-
- def __len__(self):
- return len(self.image_ids)
-
- def __getitem__(self, index):
-
- # building / point / view
- building, point, view = self.image_ids[index]
-
- result = {}
- for task in self.tasks:
- task_id = 'depth_zbuffer' if task == 'mask_valid' else task
- path = os.path.join(
- self.data_root, task, building, f'point_{point}_view_{view}_domain_{task_id}.png'
- )
- img = Image.open(path)
- # Perform transformations
- img = task_transform(img, task=task, image_size=self.image_size)
- result[task] = img
-
- return result
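-
-# Usage sketch ('/path/to/taskonomy' is a placeholder path):
-#
-#   dataset = TaskonomyDataset('/path/to/taskonomy', tasks=['rgb', 'depth_zbuffer'],
-#                              split='train', variant='tiny', image_size=256)
-#   sample = dataset[0]  # dict mapping each requested task to a transformed image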
diff --git a/spaces/EXPOSUREEE/Ai-Image-Enhancer/realesrgan/models/__init__.py b/spaces/EXPOSUREEE/Ai-Image-Enhancer/realesrgan/models/__init__.py
deleted file mode 100644
index 0be7105dc75d150c49976396724085f678dc0675..0000000000000000000000000000000000000000
--- a/spaces/EXPOSUREEE/Ai-Image-Enhancer/realesrgan/models/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-import importlib
-from basicsr.utils import scandir
-from os import path as osp
-
-# automatically scan and import model modules for registry
-# scan all the files that end with '_model.py' under the model folder
-model_folder = osp.dirname(osp.abspath(__file__))
-model_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(model_folder) if v.endswith('_model.py')]
-# import all the model modules
-_model_modules = [importlib.import_module(f'realesrgan.models.{file_name}') for file_name in model_filenames]
diff --git a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/models/vqgan.py b/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/models/vqgan.py
deleted file mode 100644
index 121d01fd2e1641d409aa90635c367a7a1bb0b4d4..0000000000000000000000000000000000000000
--- a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/models/vqgan.py
+++ /dev/null
@@ -1,363 +0,0 @@
-import torch
-import torch.nn.functional as F
-import pytorch_lightning as pl
-
-from main import instantiate_from_config
-
-from taming.modules.diffusionmodules.model import Encoder, Decoder
-from taming.modules.vqvae.quantize import VectorQuantizer2 as VectorQuantizer
-from taming.modules.vqvae.quantize import GumbelQuantize
-
-
-class VQModel(pl.LightningModule):
- def __init__(self,
- ddconfig,
- lossconfig,
- n_embed,
- embed_dim,
- ckpt_path=None,
- ignore_keys=[],
- image_key="image",
- colorize_nlabels=None,
- monitor=None,
- remap=None,
- sane_index_shape=False, # tell vector quantizer to return indices as bhw
- ):
- super().__init__()
- self.image_key = image_key
- self.encoder = Encoder(**ddconfig)
- self.decoder = Decoder(**ddconfig)
- self.loss = instantiate_from_config(lossconfig)
- self.quantize = VectorQuantizer(n_embed, embed_dim, beta=0.25,
- remap=remap, sane_index_shape=sane_index_shape)
- self.quant_conv = torch.nn.Conv2d(ddconfig["z_channels"], embed_dim, 1)
- self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1)
- if ckpt_path is not None:
- self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys)
- self.image_key = image_key
- if colorize_nlabels is not None:
- assert type(colorize_nlabels)==int
- self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1))
- if monitor is not None:
- self.monitor = monitor
-
- def init_from_ckpt(self, path, ignore_keys=list()):
- sd = torch.load(path, map_location="cpu")["state_dict"]
- keys = list(sd.keys())
- for k in keys:
- for ik in ignore_keys:
- if k.startswith(ik):
- print("Deleting key {} from state_dict.".format(k))
- del sd[k]
- self.load_state_dict(sd, strict=False)
- print(f"Restored from {path}")
-
- def encode(self, x):
- h = self.encoder(x)
- h = self.quant_conv(h)
- quant, emb_loss, info = self.quantize(h)
- return quant, emb_loss, info
-
- def decode(self, quant):
- quant = self.post_quant_conv(quant)
- dec = self.decoder(quant)
- return dec
-
- def decode_code(self, code_b):
- quant_b = self.quantize.embed_code(code_b)
- dec = self.decode(quant_b)
- return dec
-
- def forward(self, input):
- quant, diff, _ = self.encode(input)
- dec = self.decode(quant)
- return dec, diff
-
- def get_input(self, batch, k):
- x = batch[k]
- if len(x.shape) == 3:
- x = x[..., None]
- x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format)
- return x.float()
-
- def training_step(self, batch, batch_idx, optimizer_idx):
- x = self.get_input(batch, self.image_key)
- xrec, qloss = self(x)
-
- if optimizer_idx == 0:
- # autoencode
- aeloss, log_dict_ae = self.loss(qloss, x, xrec, optimizer_idx, self.global_step,
- last_layer=self.get_last_layer(), split="train")
-
- self.log("train/aeloss", aeloss, prog_bar=True, logger=True, on_step=True, on_epoch=True)
- self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=True)
- return aeloss
-
- if optimizer_idx == 1:
- # discriminator
- discloss, log_dict_disc = self.loss(qloss, x, xrec, optimizer_idx, self.global_step,
- last_layer=self.get_last_layer(), split="train")
- self.log("train/discloss", discloss, prog_bar=True, logger=True, on_step=True, on_epoch=True)
- self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=True)
- return discloss
-
- def validation_step(self, batch, batch_idx):
- x = self.get_input(batch, self.image_key)
- xrec, qloss = self(x)
- aeloss, log_dict_ae = self.loss(qloss, x, xrec, 0, self.global_step,
- last_layer=self.get_last_layer(), split="val")
-
- discloss, log_dict_disc = self.loss(qloss, x, xrec, 1, self.global_step,
- last_layer=self.get_last_layer(), split="val")
- rec_loss = log_dict_ae["val/rec_loss"]
- self.log("val/rec_loss", rec_loss,
- prog_bar=True, logger=True, on_step=True, on_epoch=True, sync_dist=True)
- self.log("val/aeloss", aeloss,
- prog_bar=True, logger=True, on_step=True, on_epoch=True, sync_dist=True)
- self.log_dict(log_dict_ae)
- self.log_dict(log_dict_disc)
- return self.log_dict
-
- def configure_optimizers(self):
- lr = self.learning_rate
- opt_ae = torch.optim.Adam(list(self.encoder.parameters())+
- list(self.decoder.parameters())+
- list(self.quantize.parameters())+
- list(self.quant_conv.parameters())+
- list(self.post_quant_conv.parameters()),
- lr=lr, betas=(0.5, 0.9))
- opt_disc = torch.optim.Adam(self.loss.discriminator.parameters(),
- lr=lr, betas=(0.5, 0.9))
- return [opt_ae, opt_disc], []
-
- def get_last_layer(self):
- return self.decoder.conv_out.weight
-
- def log_images(self, batch, **kwargs):
- log = dict()
- x = self.get_input(batch, self.image_key)
- x = x.to(self.device)
- xrec, _ = self(x)
- if x.shape[1] > 3:
- # colorize with random projection
- assert xrec.shape[1] > 3
- x = self.to_rgb(x)
- xrec = self.to_rgb(xrec)
- log["inputs"] = x
- log["reconstructions"] = xrec
- return log
-
- def to_rgb(self, x):
- assert self.image_key == "segmentation"
- if not hasattr(self, "colorize"):
- self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x))
- x = F.conv2d(x, weight=self.colorize)
- x = 2.*(x-x.min())/(x.max()-x.min()) - 1.
- return x
-
-
-class VQSegmentationModel(VQModel):
- def __init__(self, n_labels, *args, **kwargs):
- super().__init__(*args, **kwargs)
- self.register_buffer("colorize", torch.randn(3, n_labels, 1, 1))
-
- def configure_optimizers(self):
- lr = self.learning_rate
- opt_ae = torch.optim.Adam(list(self.encoder.parameters())+
- list(self.decoder.parameters())+
- list(self.quantize.parameters())+
- list(self.quant_conv.parameters())+
- list(self.post_quant_conv.parameters()),
- lr=lr, betas=(0.5, 0.9))
- return opt_ae
-
- def training_step(self, batch, batch_idx):
- x = self.get_input(batch, self.image_key)
- xrec, qloss = self(x)
- aeloss, log_dict_ae = self.loss(qloss, x, xrec, split="train")
- self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=True)
- return aeloss
-
- def validation_step(self, batch, batch_idx):
- x = self.get_input(batch, self.image_key)
- xrec, qloss = self(x)
- aeloss, log_dict_ae = self.loss(qloss, x, xrec, split="val")
- self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=True)
- total_loss = log_dict_ae["val/total_loss"]
- self.log("val/total_loss", total_loss,
- prog_bar=True, logger=True, on_step=True, on_epoch=True, sync_dist=True)
- return aeloss
-
- @torch.no_grad()
- def log_images(self, batch, **kwargs):
- log = dict()
- x = self.get_input(batch, self.image_key)
- x = x.to(self.device)
- xrec, _ = self(x)
- if x.shape[1] > 3:
- # colorize with random projection
- assert xrec.shape[1] > 3
- # convert logits to indices
- xrec = torch.argmax(xrec, dim=1, keepdim=True)
- xrec = F.one_hot(xrec, num_classes=x.shape[1])
- xrec = xrec.squeeze(1).permute(0, 3, 1, 2).float()
- x = self.to_rgb(x)
- xrec = self.to_rgb(xrec)
- log["inputs"] = x
- log["reconstructions"] = xrec
- return log
-
-
-class VQNoDiscModel(VQModel):
- def __init__(self,
- ddconfig,
- lossconfig,
- n_embed,
- embed_dim,
- ckpt_path=None,
- ignore_keys=[],
- image_key="image",
- colorize_nlabels=None
- ):
- super().__init__(ddconfig=ddconfig, lossconfig=lossconfig, n_embed=n_embed, embed_dim=embed_dim,
- ckpt_path=ckpt_path, ignore_keys=ignore_keys, image_key=image_key,
- colorize_nlabels=colorize_nlabels)
-
- def training_step(self, batch, batch_idx):
- x = self.get_input(batch, self.image_key)
- xrec, qloss = self(x)
- # autoencode
- aeloss, log_dict_ae = self.loss(qloss, x, xrec, self.global_step, split="train")
- output = pl.TrainResult(minimize=aeloss)
- output.log("train/aeloss", aeloss,
- prog_bar=True, logger=True, on_step=True, on_epoch=True)
- output.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=True)
- return output
-
- def validation_step(self, batch, batch_idx):
- x = self.get_input(batch, self.image_key)
- xrec, qloss = self(x)
- aeloss, log_dict_ae = self.loss(qloss, x, xrec, self.global_step, split="val")
- rec_loss = log_dict_ae["val/rec_loss"]
- output = pl.EvalResult(checkpoint_on=rec_loss)
- output.log("val/rec_loss", rec_loss,
- prog_bar=True, logger=True, on_step=True, on_epoch=True)
- output.log("val/aeloss", aeloss,
- prog_bar=True, logger=True, on_step=True, on_epoch=True)
- output.log_dict(log_dict_ae)
-
- return output
-
- def configure_optimizers(self):
- optimizer = torch.optim.Adam(list(self.encoder.parameters())+
- list(self.decoder.parameters())+
- list(self.quantize.parameters())+
- list(self.quant_conv.parameters())+
- list(self.post_quant_conv.parameters()),
- lr=self.learning_rate, betas=(0.5, 0.9))
- return optimizer
-
-
-class GumbelVQ(VQModel):
- def __init__(self,
- ddconfig,
- lossconfig,
- n_embed,
- embed_dim,
- temperature_scheduler_config,
- ckpt_path=None,
- ignore_keys=[],
- image_key="image",
- colorize_nlabels=None,
- monitor=None,
- kl_weight=1e-8,
- remap=None,
- ):
-
- z_channels = ddconfig["z_channels"]
- super().__init__(ddconfig,
- lossconfig,
- n_embed,
- embed_dim,
- ckpt_path=None,
- ignore_keys=ignore_keys,
- image_key=image_key,
- colorize_nlabels=colorize_nlabels,
- monitor=monitor,
- )
-
- self.loss.n_classes = n_embed
- self.vocab_size = n_embed
-
- self.quantize = GumbelQuantize(z_channels, embed_dim,
- n_embed=n_embed,
- kl_weight=kl_weight, temp_init=1.0,
- remap=remap)
-
- self.temperature_scheduler = instantiate_from_config(temperature_scheduler_config) # annealing of temp
-
- if ckpt_path is not None:
- self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys)
-
- def temperature_scheduling(self):
- self.quantize.temperature = self.temperature_scheduler(self.global_step)
-
- def encode_to_prequant(self, x):
- h = self.encoder(x)
- h = self.quant_conv(h)
- return h
-
- def decode_code(self, code_b):
- raise NotImplementedError
-
- def training_step(self, batch, batch_idx, optimizer_idx):
- self.temperature_scheduling()
- x = self.get_input(batch, self.image_key)
- xrec, qloss = self(x)
-
- if optimizer_idx == 0:
- # autoencode
- aeloss, log_dict_ae = self.loss(qloss, x, xrec, optimizer_idx, self.global_step,
- last_layer=self.get_last_layer(), split="train")
-
- self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=True)
- self.log("temperature", self.quantize.temperature, prog_bar=False, logger=True, on_step=True, on_epoch=True)
- return aeloss
-
- if optimizer_idx == 1:
- # discriminator
- discloss, log_dict_disc = self.loss(qloss, x, xrec, optimizer_idx, self.global_step,
- last_layer=self.get_last_layer(), split="train")
- self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=True)
- return discloss
-
- def validation_step(self, batch, batch_idx):
- x = self.get_input(batch, self.image_key)
- xrec, qloss = self(x)
- aeloss, log_dict_ae = self.loss(qloss, x, xrec, 0, self.global_step,
- last_layer=self.get_last_layer(), split="val")
-
- discloss, log_dict_disc = self.loss(qloss, x, xrec, 1, self.global_step,
- last_layer=self.get_last_layer(), split="val")
- rec_loss = log_dict_ae["val/rec_loss"]
- self.log("val/rec_loss", rec_loss,
- prog_bar=True, logger=True, on_step=False, on_epoch=True, sync_dist=True)
- self.log("val/aeloss", aeloss,
- prog_bar=True, logger=True, on_step=False, on_epoch=True, sync_dist=True)
- self.log_dict(log_dict_ae)
- self.log_dict(log_dict_disc)
- return self.log_dict
-
- def log_images(self, batch, **kwargs):
- log = dict()
- x = self.get_input(batch, self.image_key)
- x = x.to(self.device)
- # encode
- h = self.encoder(x)
- h = self.quant_conv(h)
- quant, _, _ = self.quantize(h)
- # decode
- x_rec = self.decode(quant)
- log["inputs"] = x
- log["reconstructions"] = x_rec
- return log
diff --git a/spaces/Epitech/AiOnIot-Antoine-Quentin-Valentin-Maxime/README.md b/spaces/Epitech/AiOnIot-Antoine-Quentin-Valentin-Maxime/README.md
deleted file mode 100644
index d4e554ec9703f78258940492522259b9b4843f99..0000000000000000000000000000000000000000
--- a/spaces/Epitech/AiOnIot-Antoine-Quentin-Valentin-Maxime/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: AiOnIot
-emoji: ⛔
-colorFrom: yellow
-colorTo: red
-sdk: streamlit
-sdk_version: 1.2.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/Epitech/userbank/README.md b/spaces/Epitech/userbank/README.md
deleted file mode 100644
index 55fcd1327837be4957533dd6168ebbcc33e27027..0000000000000000000000000000000000000000
--- a/spaces/Epitech/userbank/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: AIoT
-emoji: 😻
-colorFrom: purple
-colorTo: red
-sdk: gradio
-sdk_version: 2.9.4
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/EronSamez/RVC_HFmeu/tools/infer/train-index.py b/spaces/EronSamez/RVC_HFmeu/tools/infer/train-index.py
deleted file mode 100644
index 44b447ef32148c181eb4bcd9013a22a82371b82c..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/tools/infer/train-index.py
+++ /dev/null
@@ -1,42 +0,0 @@
-"""
-Format: the cid is used directly as the built-in index position; the aid does not fit, so it is looked up through a dict; there are only about 50k of them anyway.
-"""
-import os
-import logging
-
-logger = logging.getLogger(__name__)
-
-import faiss
-import numpy as np
-
-# ########### if starting from raw features, save them first
-inp_root = r"E:\codes\py39\dataset\mi\2-co256"
-npys = []
-for name in sorted(list(os.listdir(inp_root))):
- phone = np.load("%s/%s" % (inp_root, name))
- npys.append(phone)
-big_npy = np.concatenate(npys, 0)
-logger.debug(big_npy.shape) # (6196072, 192)#fp32#4.43G
-np.save("infer/big_src_feature_mi.npy", big_npy)
-
-##################train+add
-# big_npy=np.load("/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/inference_f0/big_src_feature_mi.npy")
-logger.debug(big_npy.shape)
-index = faiss.index_factory(256, "IVF512,Flat") # mi
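-# "IVF512,Flat" = inverted-file index with 512 coarse k-means clusters and exact (flat)
-# vectors stored inside each list; nprobe controls how many lists are scanned per query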
-logger.info("Training...")
-index_ivf = faiss.extract_index_ivf(index) #
-index_ivf.nprobe = 9
-index.train(big_npy)
-faiss.write_index(index, "infer/trained_IVF512_Flat_mi_baseline_src_feat.index")
-logger.info("Adding...")
-index.add(big_npy)
-faiss.write_index(index, "infer/added_IVF512_Flat_mi_baseline_src_feat.index")
-"""
-Sizes (all FP32)
-big_src_feature 2.95G
- (3098036, 256)
-big_emb 4.43G
- (6196072, 192)
-big_emb is twice as large because the features are repeated and pitch is appended when computing them
-
-"""
diff --git a/spaces/EsoCode/text-generation-webui/css/html_instruct_style.css b/spaces/EsoCode/text-generation-webui/css/html_instruct_style.css
deleted file mode 100644
index 575281b1e50150c6b285edf0e8c04f4a5abf329b..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/css/html_instruct_style.css
+++ /dev/null
@@ -1,62 +0,0 @@
-.message {
- display: grid;
- grid-template-columns: 60px 1fr;
- padding-bottom: 25px;
- font-size: 15px;
- font-family: Helvetica, Arial, sans-serif;
- line-height: 1.428571429;
-}
-
-.username {
- display: none;
-}
-
-.message-body p {
- font-size: 15px !important;
- line-height: 1.75 !important;
- margin-bottom: 1.25em !important;
-}
-
-.message-body ul, .message-body ol {
- margin-bottom: 1.25em !important;
-}
-
-.dark .message-body p em {
- color: rgb(198, 202, 214) !important;
-}
-
-.message-body p em {
- color: rgb(110, 110, 110) !important;
-}
-
-.gradio-container .chat .assistant-message {
- padding: 15px;
- border-radius: 20px;
- background-color: #0000000f;
- margin-top: 9px !important;
- margin-bottom: 18px !important;
-}
-
-.gradio-container .chat .user-message {
- padding: 15px;
- border-radius: 20px;
- margin-bottom: 9px !important;
-}
-
-.dark .chat .assistant-message {
- background-color: #3741519e;
- border: 1px solid #4b5563;
-}
-
-.dark .chat .user-message {
- background-color: #111827;
- border: 1px solid #4b5563;
-}
-
-code {
- background-color: white !important;
-}
-
-.dark code {
- background-color: #1a212f !important;
-}
\ No newline at end of file
diff --git a/spaces/EuroPython2022/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/train.py b/spaces/EuroPython2022/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/train.py
deleted file mode 100644
index a24f212b87bb093062ad48a7f0639a8c23f109c7..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/train.py
+++ /dev/null
@@ -1,564 +0,0 @@
-import argparse
-import logging
-import os
-import random
-import shutil
-import time
-from pathlib import Path
-from warnings import warn
-
-import math
-import numpy as np
-import torch.distributed as dist
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.optim as optim
-import torch.optim.lr_scheduler as lr_scheduler
-import torch.utils.data
-import yaml
-from torch.cuda import amp
-from torch.nn.parallel import DistributedDataParallel as DDP
-from torch.utils.tensorboard import SummaryWriter
-from tqdm import tqdm
-
-import test # import test.py to get mAP after each epoch
-from models.yolo import Model
-from utils.datasets import create_dataloader
-from utils.general import (
- torch_distributed_zero_first, labels_to_class_weights, plot_labels, check_anchors, labels_to_image_weights,
- compute_loss, plot_images, fitness, strip_optimizer, plot_results, get_latest_run, check_dataset, check_file,
- check_git_status, check_img_size, increment_dir, print_mutation, plot_evolution, set_logging, init_seeds)
-from utils.google_utils import attempt_download
-from utils.torch_utils import ModelEMA, select_device, intersect_dicts
-
-logger = logging.getLogger(__name__)
-
-
-def train(hyp, opt, device, tb_writer=None, wandb=None):
- logger.info(f'Hyperparameters {hyp}')
- log_dir = Path(tb_writer.log_dir) if tb_writer else Path(opt.logdir) / 'evolve' # logging directory
- wdir = log_dir / 'weights' # weights directory
- os.makedirs(wdir, exist_ok=True)
- last = wdir / 'last.pt'
- best = wdir / 'best.pt'
- results_file = str(log_dir / 'results.txt')
- epochs, batch_size, total_batch_size, weights, rank = \
- opt.epochs, opt.batch_size, opt.total_batch_size, opt.weights, opt.global_rank
-
- # Save run settings
- with open(log_dir / 'hyp.yaml', 'w') as f:
- yaml.dump(hyp, f, sort_keys=False)
- with open(log_dir / 'opt.yaml', 'w') as f:
- yaml.dump(vars(opt), f, sort_keys=False)
-
- # Configure
- cuda = device.type != 'cpu'
- init_seeds(2 + rank)
- with open(opt.data) as f:
- data_dict = yaml.load(f, Loader=yaml.FullLoader) # data dict
- with torch_distributed_zero_first(rank):
- check_dataset(data_dict) # check
- train_path = data_dict['train']
- test_path = data_dict['val']
- nc, names = (1, ['item']) if opt.single_cls else (int(data_dict['nc']), data_dict['names']) # number classes, names
- assert len(names) == nc, '%g names found for nc=%g dataset in %s' % (len(names), nc, opt.data) # check
-
- # Model
- pretrained = weights.endswith('.pt')
- if pretrained:
- with torch_distributed_zero_first(rank):
- attempt_download(weights) # download if not found locally
- ckpt = torch.load(weights, map_location=device) # load checkpoint
- if hyp.get('anchors'):
- ckpt['model'].yaml['anchors'] = round(hyp['anchors']) # force autoanchor
- model = Model(opt.cfg or ckpt['model'].yaml, ch=3, nc=nc).to(device) # create
- exclude = ['anchor'] if opt.cfg or hyp.get('anchors') else [] # exclude keys
- state_dict = ckpt['model'].float().state_dict() # to FP32
- state_dict = intersect_dicts(state_dict, model.state_dict(), exclude=exclude) # intersect
- model.load_state_dict(state_dict, strict=False) # load
- logger.info('Transferred %g/%g items from %s' % (len(state_dict), len(model.state_dict()), weights)) # report
- else:
- model = Model(opt.cfg, ch=3, nc=nc).to(device) # create
-
- # Freeze
- freeze = [] # parameter names to freeze (full or partial)
- for k, v in model.named_parameters():
- v.requires_grad = True # train all layers
- if any(x in k for x in freeze):
- print('freezing %s' % k)
- v.requires_grad = False
-
- # Optimizer
- nbs = 64 # nominal batch size
- accumulate = max(round(nbs / total_batch_size), 1) # accumulate loss before optimizing
- hyp['weight_decay'] *= total_batch_size * accumulate / nbs # scale weight_decay
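- # gradients are accumulated for `accumulate` batches so the effective batch size
- # approaches the nominal nbs=64; weight decay is rescaled to stay consistent with that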
-
- pg0, pg1, pg2 = [], [], [] # optimizer parameter groups
- for k, v in model.named_modules():
- if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):
- pg2.append(v.bias) # biases
- if isinstance(v, nn.BatchNorm2d):
- pg0.append(v.weight) # no decay
- elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):
- pg1.append(v.weight) # apply decay
-
- if opt.adam:
- optimizer = optim.Adam(pg0, lr=hyp['lr0'], betas=(hyp['momentum'], 0.999)) # adjust beta1 to momentum
- else:
- optimizer = optim.SGD(pg0, lr=hyp['lr0'], momentum=hyp['momentum'], nesterov=True)
-
- optimizer.add_param_group({'params': pg1, 'weight_decay': hyp['weight_decay']}) # add pg1 with weight_decay
- optimizer.add_param_group({'params': pg2}) # add pg2 (biases)
- logger.info('Optimizer groups: %g .bias, %g conv.weight, %g other' % (len(pg2), len(pg1), len(pg0)))
- del pg0, pg1, pg2
-
- # Scheduler https://arxiv.org/pdf/1812.01187.pdf
- # https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#OneCycleLR
- lf = lambda x: ((1 + math.cos(x * math.pi / epochs)) / 2) * (1 - hyp['lrf']) + hyp['lrf'] # cosine
- scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)
- # plot_lr_scheduler(optimizer, scheduler, epochs)
-
- # Logging
- if wandb and wandb.run is None:
- id = ckpt.get('wandb_id') if 'ckpt' in locals() else None
- wandb_run = wandb.init(config=opt, resume="allow", project="YOLOv5", name=os.path.basename(log_dir), id=id)
-
- # Resume
- start_epoch, best_fitness = 0, 0.0
- if pretrained:
- # Optimizer
- if ckpt['optimizer'] is not None:
- optimizer.load_state_dict(ckpt['optimizer'])
- best_fitness = ckpt['best_fitness']
-
- # Results
- if ckpt.get('training_results') is not None:
- with open(results_file, 'w') as file:
- file.write(ckpt['training_results']) # write results.txt
-
- # Epochs
- start_epoch = ckpt['epoch'] + 1
- if opt.resume:
- assert start_epoch > 0, '%s training to %g epochs is finished, nothing to resume.' % (weights, epochs)
- shutil.copytree(wdir, wdir.parent / f'weights_backup_epoch{start_epoch - 1}') # save previous weights
- if epochs < start_epoch:
- logger.info('%s has been trained for %g epochs. Fine-tuning for %g additional epochs.' %
- (weights, ckpt['epoch'], epochs))
- epochs += ckpt['epoch'] # finetune additional epochs
-
- del ckpt, state_dict
-
- # Image sizes
- gs = int(max(model.stride)) # grid size (max stride)
- imgsz, imgsz_test = [check_img_size(x, gs) for x in opt.img_size] # verify imgsz are gs-multiples
-
- # DP mode
- if cuda and rank == -1 and torch.cuda.device_count() > 1:
- model = torch.nn.DataParallel(model)
-
- # SyncBatchNorm
- if opt.sync_bn and cuda and rank != -1:
- model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device)
- logger.info('Using SyncBatchNorm()')
-
- # Exponential moving average
- ema = ModelEMA(model) if rank in [-1, 0] else None
-
- # DDP mode
- if cuda and rank != -1:
- model = DDP(model, device_ids=[opt.local_rank], output_device=opt.local_rank)
-
- # Trainloader
- dataloader, dataset = create_dataloader(train_path, imgsz, batch_size, gs, opt,
- hyp=hyp, augment=True, cache=opt.cache_images, rect=opt.rect,
- rank=rank, world_size=opt.world_size, workers=opt.workers)
- mlc = np.concatenate(dataset.labels, 0)[:, 0].max() # max label class
- nb = len(dataloader) # number of batches
- assert mlc < nc, 'Label class %g exceeds nc=%g in %s. Possible class labels are 0-%g' % (mlc, nc, opt.data, nc - 1)
-
- # Process 0
- if rank in [-1, 0]:
- ema.updates = start_epoch * nb // accumulate # set EMA updates
- testloader = create_dataloader(test_path, imgsz_test, total_batch_size, gs, opt,
- hyp=hyp, augment=False, cache=opt.cache_images and not opt.notest, rect=True,
- rank=-1, world_size=opt.world_size, workers=opt.workers)[0] # testloader
-
- if not opt.resume:
- labels = np.concatenate(dataset.labels, 0)
- c = torch.tensor(labels[:, 0]) # classes
- # cf = torch.bincount(c.long(), minlength=nc) + 1. # frequency
- # model._initialize_biases(cf.to(device))
- plot_labels(labels, save_dir=log_dir)
- if tb_writer:
- # tb_writer.add_hparams(hyp, {}) # causes duplicate https://github.com/ultralytics/yolov5/pull/384
- tb_writer.add_histogram('classes', c, 0)
-
- # Anchors
- if not opt.noautoanchor:
- check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz)
-
- # Model parameters
- hyp['cls'] *= nc / 80. # scale coco-tuned hyp['cls'] to current dataset
- model.nc = nc # attach number of classes to model
- model.hyp = hyp # attach hyperparameters to model
- model.gr = 1.0 # iou loss ratio (obj_loss = 1.0 or iou)
- model.class_weights = labels_to_class_weights(dataset.labels, nc).to(device) # attach class weights
- model.names = names
-
- # Start training
- t0 = time.time()
- nw = max(round(hyp['warmup_epochs'] * nb), 1e3) # number of warmup iterations, max(3 epochs, 1k iterations)
- # nw = min(nw, (epochs - start_epoch) / 2 * nb) # limit warmup to < 1/2 of training
- maps = np.zeros(nc) # mAP per class
- results = (0, 0, 0, 0, 0, 0, 0) # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls)
- scheduler.last_epoch = start_epoch - 1 # do not move
- scaler = amp.GradScaler(enabled=cuda)
- logger.info('Image sizes %g train, %g test\n'
- 'Using %g dataloader workers\nLogging results to %s\n'
- 'Starting training for %g epochs...' % (imgsz, imgsz_test, dataloader.num_workers, log_dir, epochs))
- for epoch in range(start_epoch, epochs): # epoch ------------------------------------------------------------------
- model.train()
-
- # Update image weights (optional)
- if opt.image_weights:
- # Generate indices
- if rank in [-1, 0]:
- cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2 # class weights
- iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw) # image weights
- dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n) # rand weighted idx
- # Broadcast if DDP
- if rank != -1:
- indices = (torch.tensor(dataset.indices) if rank == 0 else torch.zeros(dataset.n)).int()
- dist.broadcast(indices, 0)
- if rank != 0:
- dataset.indices = indices.cpu().numpy()
-
- # Update mosaic border
- # b = int(random.uniform(0.25 * imgsz, 0.75 * imgsz + gs) // gs * gs)
- # dataset.mosaic_border = [b - imgsz, -b] # height, width borders
-
- mloss = torch.zeros(4, device=device) # mean losses
- if rank != -1:
- dataloader.sampler.set_epoch(epoch)
- pbar = enumerate(dataloader)
- logger.info(('\n' + '%10s' * 8) % ('Epoch', 'gpu_mem', 'box', 'obj', 'cls', 'total', 'targets', 'img_size'))
- if rank in [-1, 0]:
- pbar = tqdm(pbar, total=nb) # progress bar
- optimizer.zero_grad()
- for i, (imgs, targets, paths, _) in pbar: # batch -------------------------------------------------------------
- ni = i + nb * epoch # number integrated batches (since train start)
- imgs = imgs.to(device, non_blocking=True).float() / 255.0 # uint8 to float32, 0-255 to 0.0-1.0
-
- # Warmup
- if ni <= nw:
- xi = [0, nw] # x interp
- # model.gr = np.interp(ni, xi, [0.0, 1.0]) # iou loss ratio (obj_loss = 1.0 or iou)
- accumulate = max(1, np.interp(ni, xi, [1, nbs / total_batch_size]).round())
- for j, x in enumerate(optimizer.param_groups):
- # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0
- x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 2 else 0.0, x['initial_lr'] * lf(epoch)])
- if 'momentum' in x:
- x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']])
-
- # Multi-scale
- if opt.multi_scale:
- sz = random.randrange(imgsz * 0.5, imgsz * 1.5 + gs) // gs * gs # size
- sf = sz / max(imgs.shape[2:]) # scale factor
- if sf != 1:
- ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]] # new shape (stretched to gs-multiple)
- imgs = F.interpolate(imgs, size=ns, mode='bilinear', align_corners=False)
-
- # Forward
- with amp.autocast(enabled=cuda):
- pred = model(imgs) # forward
- loss, loss_items = compute_loss(pred, targets.to(device), model) # loss scaled by batch_size
- if rank != -1:
- loss *= opt.world_size # gradient averaged between devices in DDP mode
-
- # Backward
- scaler.scale(loss).backward()
-
- # Optimize
- if ni % accumulate == 0:
- scaler.step(optimizer) # optimizer.step
- scaler.update()
- optimizer.zero_grad()
- if ema:
- ema.update(model)
-
- # Print
- if rank in [-1, 0]:
- mloss = (mloss * i + loss_items) / (i + 1) # update mean losses
- mem = '%.3gG' % (torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0) # (GB)
- s = ('%10s' * 2 + '%10.4g' * 6) % (
- '%g/%g' % (epoch, epochs - 1), mem, *mloss, targets.shape[0], imgs.shape[-1])
- pbar.set_description(s)
-
- # Plot
- if ni < 3:
- f = str(log_dir / f'train_batch{ni}.jpg') # filename
- result = plot_images(images=imgs, targets=targets, paths=paths, fname=f)
- # if tb_writer and result is not None:
- # tb_writer.add_image(f, result, dataformats='HWC', global_step=epoch)
- # tb_writer.add_graph(model, imgs) # add model to tensorboard
-
- # end batch ------------------------------------------------------------------------------------------------
-
- # Scheduler
- lr = [x['lr'] for x in optimizer.param_groups] # for tensorboard
- scheduler.step()
-
- # DDP process 0 or single-GPU
- if rank in [-1, 0]:
- # mAP
- if ema:
- ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'gr', 'names', 'stride'])
- final_epoch = epoch + 1 == epochs
- if not opt.notest or final_epoch: # Calculate mAP
- results, maps, times = test.test(opt.data,
- batch_size=total_batch_size,
- imgsz=imgsz_test,
- model=ema.ema,
- single_cls=opt.single_cls,
- dataloader=testloader,
- save_dir=log_dir,
- plots=epoch == 0 or final_epoch, # plot first and last
- log_imgs=opt.log_imgs)
-
- # Write
- with open(results_file, 'a') as f:
- f.write(s + '%10.4g' * 7 % results + '\n') # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls)
- if len(opt.name) and opt.bucket:
- os.system('gsutil cp %s gs://%s/results/results%s.txt' % (results_file, opt.bucket, opt.name))
-
- # Log
- tags = ['train/giou_loss', 'train/obj_loss', 'train/cls_loss', # train loss
- 'metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95',
- 'val/giou_loss', 'val/obj_loss', 'val/cls_loss', # val loss
- 'x/lr0', 'x/lr1', 'x/lr2'] # params
- for x, tag in zip(list(mloss[:-1]) + list(results) + lr, tags):
- if tb_writer:
- tb_writer.add_scalar(tag, x, epoch) # tensorboard
- if wandb:
- wandb.log({tag: x}) # W&B
-
- # Update best mAP
- fi = fitness(np.array(results).reshape(1, -1)) # weighted combination of [P, R, mAP@.5, mAP@.5-.95]
- if fi > best_fitness:
- best_fitness = fi
-
- # Save model
- save = (not opt.nosave) or (final_epoch and not opt.evolve)
- if save:
- with open(results_file, 'r') as f: # create checkpoint
- ckpt = {'epoch': epoch,
- 'best_fitness': best_fitness,
- 'training_results': f.read(),
- 'model': ema.ema,
- 'optimizer': None if final_epoch else optimizer.state_dict(),
- 'wandb_id': wandb_run.id if wandb else None}
-
- # Save last, best and delete
- torch.save(ckpt, last)
- if best_fitness == fi:
- torch.save(ckpt, best)
- del ckpt
- # end epoch ----------------------------------------------------------------------------------------------------
- # end training
-
- if rank in [-1, 0]:
- # Strip optimizers
- n = opt.name if opt.name.isnumeric() else ''
- fresults, flast, fbest = log_dir / f'results{n}.txt', wdir / f'last{n}.pt', wdir / f'best{n}.pt'
- for f1, f2 in zip([wdir / 'last.pt', wdir / 'best.pt', results_file], [flast, fbest, fresults]):
- if os.path.exists(f1):
- os.rename(f1, f2) # rename
- if str(f2).endswith('.pt'): # is *.pt
- strip_optimizer(f2) # strip optimizer
- os.system('gsutil cp %s gs://%s/weights' % (f2, opt.bucket)) if opt.bucket else None # upload
- # Finish
- if not opt.evolve:
- plot_results(save_dir=log_dir) # save as results.png
- logger.info('%g epochs completed in %.3f hours.\n' % (epoch - start_epoch + 1, (time.time() - t0) / 3600))
-
- dist.destroy_process_group() if rank not in [-1, 0] else None
- torch.cuda.empty_cache()
- return results
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--weights', type=str, default='yolov5s.pt', help='initial weights path')
- parser.add_argument('--cfg', type=str, default='', help='model.yaml path')
- parser.add_argument('--data', type=str, default='data/coco128.yaml', help='data.yaml path')
- parser.add_argument('--hyp', type=str, default='data/hyp.scratch.yaml', help='hyperparameters path')
- parser.add_argument('--epochs', type=int, default=300)
- parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs')
- parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='[train, test] image sizes')
- parser.add_argument('--rect', action='store_true', help='rectangular training')
- parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
- parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
- parser.add_argument('--notest', action='store_true', help='only test final epoch')
- parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check')
- parser.add_argument('--evolve', action='store_true', help='evolve hyperparameters')
- parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
- parser.add_argument('--cache-images', action='store_true', help='cache images for faster training')
- parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
- parser.add_argument('--name', default='', help='renames experiment folder exp{N} to exp{N}_{name} if supplied')
- parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
- parser.add_argument('--single-cls', action='store_true', help='train as single-class dataset')
- parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer')
- parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
- parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify')
- parser.add_argument('--logdir', type=str, default='runs/', help='logging directory')
- parser.add_argument('--log-imgs', type=int, default=10, help='number of images for W&B logging, max 100')
- parser.add_argument('--workers', type=int, default=8, help='maximum number of dataloader workers')
-
- opt = parser.parse_args()
-
- # Set DDP variables
- opt.total_batch_size = opt.batch_size
- opt.world_size = int(os.environ['WORLD_SIZE']) if 'WORLD_SIZE' in os.environ else 1
- opt.global_rank = int(os.environ['RANK']) if 'RANK' in os.environ else -1
- set_logging(opt.global_rank)
- if opt.global_rank in [-1, 0]:
- check_git_status()
-
- # Resume
- if opt.resume: # resume an interrupted run
- ckpt = opt.resume if isinstance(opt.resume, str) else get_latest_run() # specified or most recent path
- log_dir = Path(ckpt).parent.parent # runs/exp0
- assert os.path.isfile(ckpt), 'ERROR: --resume checkpoint does not exist'
- with open(log_dir / 'opt.yaml') as f:
- opt = argparse.Namespace(**yaml.load(f, Loader=yaml.FullLoader)) # replace
- opt.cfg, opt.weights, opt.resume = '', ckpt, True
- logger.info('Resuming training from %s' % ckpt)
-
- else:
- # opt.hyp = opt.hyp or ('hyp.finetune.yaml' if opt.weights else 'hyp.scratch.yaml')
- opt.data, opt.cfg, opt.hyp = check_file(opt.data), check_file(opt.cfg), check_file(opt.hyp) # check files
- assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified'
- opt.img_size.extend([opt.img_size[-1]] * (2 - len(opt.img_size))) # extend to 2 sizes (train, test)
- log_dir = increment_dir(Path(opt.logdir) / 'exp', opt.name) # runs/exp1
-
- # DDP mode
- device = select_device(opt.device, batch_size=opt.batch_size)
- if opt.local_rank != -1:
- assert torch.cuda.device_count() > opt.local_rank
- torch.cuda.set_device(opt.local_rank)
- device = torch.device('cuda', opt.local_rank)
- dist.init_process_group(backend='nccl', init_method='env://') # distributed backend
- assert opt.batch_size % opt.world_size == 0, '--batch-size must be multiple of CUDA device count'
- opt.batch_size = opt.total_batch_size // opt.world_size
-
- # Hyperparameters
- with open(opt.hyp) as f:
- hyp = yaml.load(f, Loader=yaml.FullLoader) # load hyps
- if 'box' not in hyp:
- warn('Compatibility: %s missing "box" which was renamed from "giou" in %s' %
- (opt.hyp, 'https://github.com/ultralytics/yolov5/pull/1120'))
- hyp['box'] = hyp.pop('giou')
-
- # Train
- logger.info(opt)
- if not opt.evolve:
- tb_writer, wandb = None, None # init loggers
- if opt.global_rank in [-1, 0]:
- # Tensorboard
- logger.info(f'Start Tensorboard with "tensorboard --logdir {opt.logdir}", view at http://localhost:6006/')
- tb_writer = SummaryWriter(log_dir=log_dir) # runs/exp0
-
- # W&B
- try:
- import wandb
-
- assert os.environ.get('WANDB_DISABLED') != 'true'
- logger.info("Weights & Biases logging enabled, to disable set os.environ['WANDB_DISABLED'] = 'true'")
- except (ImportError, AssertionError):
- opt.log_imgs = 0
- logger.info("Install Weights & Biases for experiment logging via 'pip install wandb' (recommended)")
-
- train(hyp, opt, device, tb_writer, wandb)
-
- # Evolve hyperparameters (optional)
- else:
- # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit)
- meta = {'lr0': (1, 1e-5, 1e-1), # initial learning rate (SGD=1E-2, Adam=1E-3)
- 'lrf': (1, 0.01, 1.0), # final OneCycleLR learning rate (lr0 * lrf)
- 'momentum': (0.3, 0.6, 0.98), # SGD momentum/Adam beta1
- 'weight_decay': (1, 0.0, 0.001), # optimizer weight decay
- 'warmup_epochs': (1, 0.0, 5.0), # warmup epochs (fractions ok)
- 'warmup_momentum': (1, 0.0, 0.95), # warmup initial momentum
- 'warmup_bias_lr': (1, 0.0, 0.2), # warmup initial bias lr
- 'box': (1, 0.02, 0.2), # box loss gain
- 'cls': (1, 0.2, 4.0), # cls loss gain
- 'cls_pw': (1, 0.5, 2.0), # cls BCELoss positive_weight
- 'obj': (1, 0.2, 4.0), # obj loss gain (scale with pixels)
- 'obj_pw': (1, 0.5, 2.0), # obj BCELoss positive_weight
- 'iou_t': (0, 0.1, 0.7), # IoU training threshold
- 'anchor_t': (1, 2.0, 8.0), # anchor-multiple threshold
- 'anchors': (2, 2.0, 10.0), # anchors per output grid (0 to ignore)
- 'fl_gamma': (0, 0.0, 2.0), # focal loss gamma (efficientDet default gamma=1.5)
- 'hsv_h': (1, 0.0, 0.1), # image HSV-Hue augmentation (fraction)
- 'hsv_s': (1, 0.0, 0.9), # image HSV-Saturation augmentation (fraction)
- 'hsv_v': (1, 0.0, 0.9), # image HSV-Value augmentation (fraction)
- 'degrees': (1, 0.0, 45.0), # image rotation (+/- deg)
- 'translate': (1, 0.0, 0.9), # image translation (+/- fraction)
- 'scale': (1, 0.0, 0.9), # image scale (+/- gain)
- 'shear': (1, 0.0, 10.0), # image shear (+/- deg)
- 'perspective': (0, 0.0, 0.001), # image perspective (+/- fraction), range 0-0.001
- 'flipud': (1, 0.0, 1.0), # image flip up-down (probability)
- 'fliplr': (0, 0.0, 1.0), # image flip left-right (probability)
- 'mosaic': (1, 0.0, 1.0), # image mosaic (probability)
- 'mixup': (1, 0.0, 1.0)} # image mixup (probability)
-
- assert opt.local_rank == -1, 'DDP mode not implemented for --evolve'
- opt.notest, opt.nosave = True, True # only test/save final epoch
- # ei = [isinstance(x, (int, float)) for x in hyp.values()] # evolvable indices
- yaml_file = Path(opt.logdir) / 'evolve' / 'hyp_evolved.yaml' # save best result here
- if opt.bucket:
- os.system('gsutil cp gs://%s/evolve.txt .' % opt.bucket) # download evolve.txt if exists
-
- for _ in range(300): # generations to evolve
- if os.path.exists('evolve.txt'): # if evolve.txt exists: select best hyps and mutate
- # Select parent(s)
- parent = 'single' # parent selection method: 'single' or 'weighted'
- x = np.loadtxt('evolve.txt', ndmin=2)
- n = min(5, len(x)) # number of previous results to consider
- x = x[np.argsort(-fitness(x))][:n] # top n mutations
- w = fitness(x) - fitness(x).min() # weights
- if parent == 'single' or len(x) == 1:
- # x = x[random.randint(0, n - 1)] # random selection
- x = x[random.choices(range(n), weights=w)[0]] # weighted selection
- elif parent == 'weighted':
- x = (x * w.reshape(n, 1)).sum(0) / w.sum() # weighted combination
-
- # Mutate
- mp, s = 0.8, 0.2 # mutation probability, sigma
- npr = np.random
- npr.seed(int(time.time()))
- g = np.array([x[0] for x in meta.values()]) # gains 0-1
- ng = len(meta)
- v = np.ones(ng)
- while all(v == 1): # mutate until a change occurs (prevent duplicates)
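- # each gene mutates with probability mp; its factor is 1 plus a Gaussian perturbation
- # scaled by the per-hyperparameter gain g and sigma s, clipped to [0.3, 3.0]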
- v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0)
- for i, k in enumerate(hyp.keys()): # plt.hist(v.ravel(), 300)
- hyp[k] = float(x[i + 7] * v[i]) # mutate
-
- # Constrain to limits
- for k, v in meta.items():
- hyp[k] = max(hyp[k], v[1]) # lower limit
- hyp[k] = min(hyp[k], v[2]) # upper limit
- hyp[k] = round(hyp[k], 5) # significant digits
-
- # Train mutation
- results = train(hyp.copy(), opt, device)
-
- # Write mutation results
- print_mutation(hyp.copy(), results, yaml_file, opt.bucket)
-
- # Plot results
- plot_evolution(yaml_file)
- print(f'Hyperparameter evolution complete. Best results saved as: {yaml_file}\n'
- f'Command to train a new model with these hyperparameters: $ python train.py --hyp {yaml_file}')
diff --git a/spaces/EuroPython2022/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/utils/__init__.py b/spaces/EuroPython2022/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/inference/infer_tool.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/inference/infer_tool.py
deleted file mode 100644
index aa08db415a9b0af97a5d726d1f7e61834e1c4e1c..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/so-vits-svc-models-pcr/inference/infer_tool.py
+++ /dev/null
@@ -1,407 +0,0 @@
-import hashlib
-import io
-import json
-import logging
-import os
-import time
-from pathlib import Path
-from inference import slicer
-import gc
-
-import librosa
-import numpy as np
-# import onnxruntime
-import soundfile
-import torch
-import torchaudio
-
-import cluster
-import utils
-from models import SynthesizerTrn
-
-from diffusion.unit2mel import load_model_vocoder
-import yaml
-
-logging.getLogger('matplotlib').setLevel(logging.WARNING)
-
-
-def read_temp(file_name):
- if not os.path.exists(file_name):
- with open(file_name, "w") as f:
- f.write(json.dumps({"info": "temp_dict"}))
- return {}
- else:
- try:
- with open(file_name, "r") as f:
- data = f.read()
- data_dict = json.loads(data)
- if os.path.getsize(file_name) > 50 * 1024 * 1024:
- f_name = file_name.replace("\\", "/").split("/")[-1]
- print(f"clean {f_name}")
- for wav_hash in list(data_dict.keys()):
- if int(time.time()) - int(data_dict[wav_hash]["time"]) > 14 * 24 * 3600:
- del data_dict[wav_hash]
- except Exception as e:
- print(e)
- print(f"{file_name} error,auto rebuild file")
- data_dict = {"info": "temp_dict"}
- return data_dict
-
-
-def write_temp(file_name, data):
- with open(file_name, "w") as f:
- f.write(json.dumps(data))
-
-
-def timeit(func):
- def run(*args, **kwargs):
- t = time.time()
- res = func(*args, **kwargs)
- print('executing \'%s\' took %.3fs' % (func.__name__, time.time() - t))
- return res
-
- return run
-
-
-def format_wav(audio_path):
- if Path(audio_path).suffix == '.wav':
- return
- raw_audio, raw_sample_rate = librosa.load(audio_path, mono=True, sr=None)
- soundfile.write(Path(audio_path).with_suffix(".wav"), raw_audio, raw_sample_rate)
-
-
-def get_end_file(dir_path, end):
- file_lists = []
- for root, dirs, files in os.walk(dir_path):
- files = [f for f in files if f[0] != '.']
- dirs[:] = [d for d in dirs if d[0] != '.']
- for f_file in files:
- if f_file.endswith(end):
- file_lists.append(os.path.join(root, f_file).replace("\\", "/"))
- return file_lists
-
-
-def get_md5(content):
- return hashlib.new("md5", content).hexdigest()
-
-def fill_a_to_b(a, b):
- if len(a) < len(b):
- for _ in range(0, len(b) - len(a)):
- a.append(a[0])
-
-def mkdir(paths: list):
- for path in paths:
- if not os.path.exists(path):
- os.mkdir(path)
-
-def pad_array(arr, target_length):
- current_length = arr.shape[0]
- if current_length >= target_length:
- return arr
- else:
- pad_width = target_length - current_length
- pad_left = pad_width // 2
- pad_right = pad_width - pad_left
- padded_arr = np.pad(arr, (pad_left, pad_right), 'constant', constant_values=(0, 0))
- return padded_arr
-
-def split_list_by_n(list_collection, n, pre=0):
- for i in range(0, len(list_collection), n):
- yield list_collection[i-pre if i-pre>=0 else i: i + n]
-
-
-class F0FilterException(Exception):
- pass
-
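-# Svc bundles the so-vits-svc inference pipeline: a speech (content) encoder, an optional
-# k-means cluster model for timbre mixing, the VITS SynthesizerTrn decoder, and optional
-# shallow-diffusion and NSF-HiFiGAN enhancement stages.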
-class Svc(object):
- def __init__(self, net_g_path, config_path,
- device=None,
- cluster_model_path="logs/44k/kmeans_10000.pt",
- nsf_hifigan_enhance = False,
- diffusion_model_path="logs/44k/diffusion/model_0.pt",
- diffusion_config_path="configs/diffusion.yaml",
- shallow_diffusion = False,
- only_diffusion = False,
- ):
- self.net_g_path = net_g_path
- self.only_diffusion = only_diffusion
- self.shallow_diffusion = shallow_diffusion
- if device is None:
- # self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- self.dev = torch.device("cpu")
- else:
- self.dev = torch.device(device)
- self.net_g_ms = None
- if not self.only_diffusion:
- self.hps_ms = utils.get_hparams_from_file(config_path)
- self.target_sample = self.hps_ms.data.sampling_rate
- self.hop_size = self.hps_ms.data.hop_length
- self.spk2id = self.hps_ms.spk
- try:
- self.speech_encoder = self.hps_ms.model.speech_encoder
- except Exception as e:
- self.speech_encoder = 'vec768l12'
-
- self.nsf_hifigan_enhance = nsf_hifigan_enhance
- if self.shallow_diffusion or self.only_diffusion:
- if os.path.exists(diffusion_model_path) and os.path.exists(diffusion_config_path):
- self.diffusion_model,self.vocoder,self.diffusion_args = load_model_vocoder(diffusion_model_path,self.dev,config_path=diffusion_config_path)
- if self.only_diffusion:
- self.target_sample = self.diffusion_args.data.sampling_rate
- self.hop_size = self.diffusion_args.data.block_size
- self.spk2id = self.diffusion_args.spk
- self.speech_encoder = self.diffusion_args.data.encoder
- else:
- print("No diffusion model or config found. Shallow diffusion mode will False")
- self.shallow_diffusion = self.only_diffusion = False
-
- # load hubert and model
- if not self.only_diffusion:
- self.load_model()
- self.hubert_model = utils.get_speech_encoder(self.speech_encoder,device=self.dev)
- self.volume_extractor = utils.Volume_Extractor(self.hop_size)
- else:
- self.hubert_model = utils.get_speech_encoder(self.diffusion_args.data.encoder,device=self.dev)
- self.volume_extractor = utils.Volume_Extractor(self.diffusion_args.data.block_size)
-
- if os.path.exists(cluster_model_path):
- self.cluster_model = cluster.get_cluster_model(cluster_model_path)
- if self.shallow_diffusion : self.nsf_hifigan_enhance = False
- if self.nsf_hifigan_enhance:
- from modules.enhancer import Enhancer
- self.enhancer = Enhancer('nsf-hifigan', 'pretrain/nsf_hifigan/model',device=self.dev)
-
- def load_model(self):
- # get model configuration
- self.net_g_ms = SynthesizerTrn(
- self.hps_ms.data.filter_length // 2 + 1,
- self.hps_ms.train.segment_size // self.hps_ms.data.hop_length,
- **self.hps_ms.model)
- _ = utils.load_checkpoint(self.net_g_path, self.net_g_ms, None)
- if "half" in self.net_g_path and torch.cuda.is_available():
- _ = self.net_g_ms.half().eval().to(self.dev)
- else:
- _ = self.net_g_ms.eval().to(self.dev)
-
-
-
- def get_unit_f0(self, wav, tran, cluster_infer_ratio, speaker, f0_filter ,f0_predictor,cr_threshold=0.05):
-
- f0_predictor_object = utils.get_f0_predictor(f0_predictor,hop_length=self.hop_size,sampling_rate=self.target_sample,device=self.dev,threshold=cr_threshold)
-
- f0, uv = f0_predictor_object.compute_f0_uv(wav)
- if f0_filter and sum(f0) == 0:
- raise F0FilterException("No voice detected")
- f0 = torch.FloatTensor(f0).to(self.dev)
- uv = torch.FloatTensor(uv).to(self.dev)
-
- f0 = f0 * 2 ** (tran / 12)
- f0 = f0.unsqueeze(0)
- uv = uv.unsqueeze(0)
-
- wav16k = librosa.resample(wav, orig_sr=self.target_sample, target_sr=16000)
- wav16k = torch.from_numpy(wav16k).to(self.dev)
- c = self.hubert_model.encoder(wav16k)
- c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[1])
-
- if cluster_infer_ratio !=0:
- cluster_c = cluster.get_cluster_center_result(self.cluster_model, c.cpu().numpy().T, speaker).T
- cluster_c = torch.FloatTensor(cluster_c).to(self.dev)
- c = cluster_infer_ratio * cluster_c + (1 - cluster_infer_ratio) * c
-
- c = c.unsqueeze(0)
- return c, f0, uv
-
- def infer(self, speaker, tran, raw_path,
- cluster_infer_ratio=0,
- auto_predict_f0=False,
- noice_scale=0.4,
- f0_filter=False,
- f0_predictor='pm',
- enhancer_adaptive_key = 0,
- cr_threshold = 0.05,
- k_step = 100
- ):
-
- speaker_id = self.spk2id.get(speaker)
- if not speaker_id and type(speaker) is int:
- if len(self.spk2id.__dict__) >= speaker:
- speaker_id = speaker
- sid = torch.LongTensor([int(speaker_id)]).to(self.dev).unsqueeze(0)
- wav, sr = librosa.load(raw_path, sr=self.target_sample)
- c, f0, uv = self.get_unit_f0(wav, tran, cluster_infer_ratio, speaker, f0_filter,f0_predictor,cr_threshold=cr_threshold)
- if "half" in self.net_g_path and torch.cuda.is_available():
- c = c.half()
- with torch.no_grad():
- start = time.time()
- if not self.only_diffusion:
- audio,f0 = self.net_g_ms.infer(c, f0=f0, g=sid, uv=uv, predict_f0=auto_predict_f0, noice_scale=noice_scale)
- audio = audio[0,0].data.float()
- if self.shallow_diffusion:
- audio_mel = self.vocoder.extract(audio[None,:],self.target_sample)
- else:
- audio = torch.FloatTensor(wav).to(self.dev)
- audio_mel = None
- if self.only_diffusion or self.shallow_diffusion:
- vol = self.volume_extractor.extract(audio[None,:])[None,:,None].to(self.dev)
- f0 = f0[:,:,None]
- c = c.transpose(-1,-2)
- audio_mel = self.diffusion_model(
- c,
- f0,
- vol,
- spk_id = sid,
- spk_mix_dict = None,
- gt_spec=audio_mel,
- infer=True,
- infer_speedup=self.diffusion_args.infer.speedup,
- method=self.diffusion_args.infer.method,
- k_step=k_step)
- audio = self.vocoder.infer(audio_mel, f0).squeeze()
- if self.nsf_hifigan_enhance:
- audio, _ = self.enhancer.enhance(
- audio[None,:],
- self.target_sample,
- f0[:,:,None],
- self.hps_ms.data.hop_length,
- adaptive_key = enhancer_adaptive_key)
- use_time = time.time() - start
- print("vits use time:{}".format(use_time))
- return audio, audio.shape[-1]
-
- def clear_empty(self):
- # clean up vram
- torch.cuda.empty_cache()
-
- def unload_model(self):
- # unload model
- self.net_g_ms = self.net_g_ms.to("cpu")
- del self.net_g_ms
- if hasattr(self,"enhancer"):
- self.enhancer.enhancer = self.enhancer.enhancer.to("cpu")
- del self.enhancer.enhancer
- del self.enhancer
- gc.collect()
-
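- # slice_inference: cut the input on silence (slicer.cut at slice_db), optionally clip long
- # chunks, run infer() on each piece with pad_seconds of zero padding at both ends, and
- # join neighbouring pieces back together with a linear cross-fade ramp.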
- def slice_inference(self,
- raw_audio_path,
- spk,
- tran,
- slice_db,
- cluster_infer_ratio,
- auto_predict_f0,
- noice_scale,
- pad_seconds=0.5,
- clip_seconds=0,
- lg_num=0,
- lgr_num =0.75,
- f0_predictor='pm',
- enhancer_adaptive_key = 0,
- cr_threshold = 0.05,
- k_step = 100
- ):
- wav_path = Path(raw_audio_path).with_suffix('.wav')
- chunks = slicer.cut(wav_path, db_thresh=slice_db)
- audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks)
- per_size = int(clip_seconds*audio_sr)
- lg_size = int(lg_num*audio_sr)
- lg_size_r = int(lg_size*lgr_num)
- lg_size_c_l = (lg_size-lg_size_r)//2
- lg_size_c_r = lg_size-lg_size_r-lg_size_c_l
- lg = np.linspace(0,1,lg_size_r) if lg_size!=0 else 0
-
- audio = []
- for (slice_tag, data) in audio_data:
- print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======')
- # pad
- length = int(np.ceil(len(data) / audio_sr * self.target_sample))
- if slice_tag:
- print('skip empty segment')
- _audio = np.zeros(length)
- audio.extend(list(pad_array(_audio, length)))
- continue
- if per_size != 0:
- datas = split_list_by_n(data, per_size,lg_size)
- else:
- datas = [data]
- for k,dat in enumerate(datas):
- per_length = int(np.ceil(len(dat) / audio_sr * self.target_sample)) if clip_seconds!=0 else length
- if clip_seconds!=0: print(f'###=====segment clip start, {round(len(dat) / audio_sr, 3)}s======')
- # pad both ends with pad_seconds of silence before inference
- pad_len = int(audio_sr * pad_seconds)
- dat = np.concatenate([np.zeros([pad_len]), dat, np.zeros([pad_len])])
- raw_path = io.BytesIO()
- soundfile.write(raw_path, dat, audio_sr, format="wav")
- raw_path.seek(0)
- out_audio, out_sr = self.infer(spk, tran, raw_path,
- cluster_infer_ratio=cluster_infer_ratio,
- auto_predict_f0=auto_predict_f0,
- noice_scale=noice_scale,
- f0_predictor = f0_predictor,
- enhancer_adaptive_key = enhancer_adaptive_key,
- cr_threshold = cr_threshold,
- k_step = k_step
- )
- _audio = out_audio.cpu().numpy()
- pad_len = int(self.target_sample * pad_seconds)
- _audio = _audio[pad_len:-pad_len]
- _audio = pad_array(_audio, per_length)
- if lg_size!=0 and k!=0:
- lg1 = audio[-(lg_size_r+lg_size_c_r):-lg_size_c_r] if lgr_num != 1 else audio[-lg_size:]
- lg2 = _audio[lg_size_c_l:lg_size_c_l+lg_size_r] if lgr_num != 1 else _audio[0:lg_size]
- lg_pre = lg1*(1-lg)+lg2*lg
- audio = audio[0:-(lg_size_r+lg_size_c_r)] if lgr_num != 1 else audio[0:-lg_size]
- audio.extend(lg_pre)
- _audio = _audio[lg_size_c_l+lg_size_r:] if lgr_num != 1 else _audio[lg_size:]
- audio.extend(list(_audio))
- return np.array(audio)
-
-class RealTimeVC:
- def __init__(self):
- self.last_chunk = None
- self.last_o = None
- self.chunk_len = 16000 # chunk length
- self.pre_len = 3840 # cross fade length, multiples of 640
-
- # Input and output are 1-dimensional numpy waveform arrays
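- # process() keeps the last pre_len samples of the previous chunk, prepends them to the new
- # chunk, and cross-fades the two outputs (maad.util.crossfade) to avoid clicks at boundaries.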
-
- def process(self, svc_model, speaker_id, f_pitch_change, input_wav_path,
- cluster_infer_ratio=0,
- auto_predict_f0=False,
- noice_scale=0.4,
- f0_filter=False):
-
- import maad
- audio, sr = torchaudio.load(input_wav_path)
- audio = audio.cpu().numpy()[0]
- temp_wav = io.BytesIO()
- if self.last_chunk is None:
- input_wav_path.seek(0)
-
- audio, sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path,
- cluster_infer_ratio=cluster_infer_ratio,
- auto_predict_f0=auto_predict_f0,
- noice_scale=noice_scale,
- f0_filter=f0_filter)
-
- audio = audio.cpu().numpy()
- self.last_chunk = audio[-self.pre_len:]
- self.last_o = audio
- return audio[-self.chunk_len:]
- else:
- audio = np.concatenate([self.last_chunk, audio])
- soundfile.write(temp_wav, audio, sr, format="wav")
- temp_wav.seek(0)
-
- audio, sr = svc_model.infer(speaker_id, f_pitch_change, temp_wav,
- cluster_infer_ratio=cluster_infer_ratio,
- auto_predict_f0=auto_predict_f0,
- noice_scale=noice_scale,
- f0_filter=f0_filter)
-
- audio = audio.cpu().numpy()
- ret = maad.util.crossfade(self.last_o, audio, self.pre_len)
- self.last_chunk = audio[-self.pre_len:]
- self.last_o = audio
- return ret[self.chunk_len:2 * self.chunk_len]
-
\ No newline at end of file
diff --git a/spaces/FrankZxShen/vits-fast-finetuning-umamusume/commons.py b/spaces/FrankZxShen/vits-fast-finetuning-umamusume/commons.py
deleted file mode 100644
index db17cf0914ba6e445fe613e3ec3411b3a74b28aa..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/vits-fast-finetuning-umamusume/commons.py
+++ /dev/null
@@ -1,164 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def intersperse(lst, item):
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q)
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- try:
- ret[i] = x[i, :, idx_str:idx_end]
- except RuntimeError:
- print("?")
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
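-# Sinusoidal positional (timing) signal as in the Transformer: half the channels carry sines,
-# half cosines, with timescales geometrically spaced between min_timescale and max_timescale.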
-def get_timing_signal_1d(
- length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = (
- math.log(float(max_timescale) / float(min_timescale)) /
- (num_timescales - 1))
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
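-# WaveNet-style gated activation fused into one TorchScript kernel:
-# tanh over the first n_channels, sigmoid over the rest, multiplied element-wise.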
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2,3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1. / norm_type)
- return total_norm
diff --git a/spaces/FriendlyUser/bark/README.md b/spaces/FriendlyUser/bark/README.md
deleted file mode 100644
index 9a4f7ae56cc52a5efaaa2def75521356c1f54590..0000000000000000000000000000000000000000
--- a/spaces/FriendlyUser/bark/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Bark
-emoji: 🐶
-colorFrom: pink
-colorTo: blue
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
-license: cc-by-nc-4.0
-duplicated_from: suno/bark
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Gabriel/Swe_summarizer/LexRank.py b/spaces/Gabriel/Swe_summarizer/LexRank.py
deleted file mode 100644
index 221e57b6e0c0727b52742c14da662e656e0badf1..0000000000000000000000000000000000000000
--- a/spaces/Gabriel/Swe_summarizer/LexRank.py
+++ /dev/null
@@ -1,120 +0,0 @@
-
-import numpy as np
-from scipy.sparse.csgraph import connected_components
-from scipy.special import softmax
-import logging
-
-logger = logging.getLogger(__name__)
-
-def degree_centrality_scores(
- similarity_matrix,
- threshold=None,
- increase_power=True,
-):
- if not (
- threshold is None
- or isinstance(threshold, float)
- and 0 <= threshold < 1
- ):
- raise ValueError(
- '\'threshold\' should be a floating-point number '
- 'from the interval [0, 1) or None',
- )
-
- if threshold is None:
- markov_matrix = create_markov_matrix(similarity_matrix)
-
- else:
- markov_matrix = create_markov_matrix_discrete(
- similarity_matrix,
- threshold,
- )
-
- scores = stationary_distribution(
- markov_matrix,
- increase_power=increase_power,
- normalized=False,
- )
-
- return scores
-
-
-def _power_method(transition_matrix, increase_power=True, max_iter=10000):
- eigenvector = np.ones(len(transition_matrix))
-
- if len(eigenvector) == 1:
- return eigenvector
-
- transition = transition_matrix.transpose()
-
- for _ in range(max_iter):
- eigenvector_next = np.dot(transition, eigenvector)
-
- if np.allclose(eigenvector_next, eigenvector):
- return eigenvector_next
-
- eigenvector = eigenvector_next
-
- if increase_power:
- transition = np.dot(transition, transition)
-
- logger.warning("Maximum number of iterations for power method exceeded without convergence!")
- return eigenvector_next
-
-
-def connected_nodes(matrix):
- _, labels = connected_components(matrix)
-
- groups = []
-
- for tag in np.unique(labels):
- group = np.where(labels == tag)[0]
- groups.append(group)
-
- return groups
-
-
-def create_markov_matrix(weights_matrix):
- n_1, n_2 = weights_matrix.shape
- if n_1 != n_2:
- raise ValueError('\'weights_matrix\' should be square')
-
- row_sum = weights_matrix.sum(axis=1, keepdims=True)
-
- # normalize probability distribution differently if we have negative transition values
- if np.min(weights_matrix) <= 0:
- return softmax(weights_matrix, axis=1)
-
- return weights_matrix / row_sum
-
-
-def create_markov_matrix_discrete(weights_matrix, threshold):
- discrete_weights_matrix = np.zeros(weights_matrix.shape)
- ixs = np.where(weights_matrix >= threshold)
- discrete_weights_matrix[ixs] = 1
-
- return create_markov_matrix(discrete_weights_matrix)
-
-
-def stationary_distribution(
- transition_matrix,
- increase_power=True,
- normalized=True,
-):
- n_1, n_2 = transition_matrix.shape
- if n_1 != n_2:
- raise ValueError('\'transition_matrix\' should be square')
-
- distribution = np.zeros(n_1)
-
- grouped_indices = connected_nodes(transition_matrix)
-
- for group in grouped_indices:
- t_matrix = transition_matrix[np.ix_(group, group)]
- eigenvector = _power_method(t_matrix, increase_power=increase_power)
- distribution[group] = eigenvector
-
- if normalized:
- distribution /= n_1
-
- return distribution
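-
-
-# Illustrative usage sketch (not part of the original module): scoring three
-# sentences from a toy cosine-similarity matrix; higher scores mark sentences
-# that are more central to the set.
-if __name__ == '__main__':
-    toy_similarity = np.array([
-        [1.0, 0.8, 0.1],
-        [0.8, 1.0, 0.2],
-        [0.1, 0.2, 1.0],
-    ])
-    print(degree_centrality_scores(toy_similarity, threshold=None))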
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py
deleted file mode 100644
index 7fb8e82ece225ab6f88f1f4f83bea56a42cf1a57..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py
+++ /dev/null
@@ -1,11 +0,0 @@
-_base_ = '../mask_rcnn/mask_rcnn_x101_32x4d_fpn_1x_coco.py'
-model = dict(
- backbone=dict(
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- norm_eval=False,
- plugins=[
- dict(
- cfg=dict(type='ContextBlock', ratio=1. / 16),
- stages=(False, True, True, True),
- position='after_conv3')
- ]))
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/hrnet/faster_rcnn_hrnetv2p_w32_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/hrnet/faster_rcnn_hrnetv2p_w32_2x_coco.py
deleted file mode 100644
index 63c8717182f2284ff1062be31bae43b4360c6887..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/hrnet/faster_rcnn_hrnetv2p_w32_2x_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = './faster_rcnn_hrnetv2p_w32_1x_coco.py'
-# learning policy
-lr_config = dict(step=[16, 22])
-runner = dict(type='EpochBasedRunner', max_epochs=24)
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_480x480_40k_pascal_context.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_480x480_40k_pascal_context.py
deleted file mode 100644
index f3a15b41054318d508e98685632921f262029de0..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_480x480_40k_pascal_context.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './fcn_r50-d8_480x480_40k_pascal_context.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr48_512x512_80k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr48_512x512_80k_ade20k.py
deleted file mode 100644
index 7eca7fa4b8102c6225af3b484ffff5bdc7c0f201..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr48_512x512_80k_ade20k.py
+++ /dev/null
@@ -1,10 +0,0 @@
-_base_ = './fcn_hr18_512x512_80k_ade20k.py'
-model = dict(
- pretrained='open-mmlab://msra/hrnetv2_w48',
- backbone=dict(
- extra=dict(
- stage2=dict(num_channels=(48, 96)),
- stage3=dict(num_channels=(48, 96, 192)),
- stage4=dict(num_channels=(48, 96, 192, 384)))),
- decode_head=dict(
- in_channels=[48, 96, 192, 384], channels=sum([48, 96, 192, 384])))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/pytorch2onnx.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/pytorch2onnx.py
deleted file mode 100644
index 5660ed9004b9a75ea5e1206a4de0b152f13f66f0..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/pytorch2onnx.py
+++ /dev/null
@@ -1,389 +0,0 @@
-import argparse
-from functools import partial
-
-import mmcv
-import numpy as np
-import onnxruntime as rt
-import torch
-import torch._C
-import torch.serialization
-from mmcv import DictAction
-from mmcv.onnx import register_extra_symbolics
-from mmcv.runner import load_checkpoint
-from torch import nn
-
-from mmseg.apis import show_result_pyplot
-from mmseg.apis.inference import LoadImage
-from mmseg.datasets.pipelines import Compose
-from mmseg.models import build_segmentor
-
-torch.manual_seed(3)
-
-
-def _convert_batchnorm(module):
- module_output = module
- if isinstance(module, torch.nn.SyncBatchNorm):
- module_output = torch.nn.BatchNorm2d(module.num_features, module.eps,
- module.momentum, module.affine,
- module.track_running_stats)
- if module.affine:
- module_output.weight.data = module.weight.data.clone().detach()
- module_output.bias.data = module.bias.data.clone().detach()
- # keep requires_grad unchanged
- module_output.weight.requires_grad = module.weight.requires_grad
- module_output.bias.requires_grad = module.bias.requires_grad
- module_output.running_mean = module.running_mean
- module_output.running_var = module.running_var
- module_output.num_batches_tracked = module.num_batches_tracked
- for name, child in module.named_children():
- module_output.add_module(name, _convert_batchnorm(child))
- del module
- return module_output
-
-
-def _demo_mm_inputs(input_shape, num_classes):
- """Create a superset of inputs needed to run test or train batches.
-
- Args:
- input_shape (tuple):
- input batch dimensions
- num_classes (int):
- number of semantic classes
- """
- (N, C, H, W) = input_shape
- rng = np.random.RandomState(0)
- imgs = rng.rand(*input_shape)
- segs = rng.randint(
- low=0, high=num_classes - 1, size=(N, 1, H, W)).astype(np.uint8)
- img_metas = [{
- 'img_shape': (H, W, C),
- 'ori_shape': (H, W, C),
- 'pad_shape': (H, W, C),
- 'filename': '.png',
- 'scale_factor': 1.0,
- 'flip': False,
- } for _ in range(N)]
- mm_inputs = {
- 'imgs': torch.FloatTensor(imgs).requires_grad_(True),
- 'img_metas': img_metas,
- 'gt_semantic_seg': torch.LongTensor(segs)
- }
- return mm_inputs
-
-
-def _prepare_input_img(img_path,
- test_pipeline,
- shape=None,
- rescale_shape=None):
- # build the data pipeline
- if shape is not None:
- test_pipeline[1]['img_scale'] = (shape[1], shape[0])
- test_pipeline[1]['transforms'][0]['keep_ratio'] = False
- test_pipeline = [LoadImage()] + test_pipeline[1:]
- test_pipeline = Compose(test_pipeline)
- # prepare data
- data = dict(img=img_path)
- data = test_pipeline(data)
- imgs = data['img']
- img_metas = [i.data for i in data['img_metas']]
-
- if rescale_shape is not None:
- for img_meta in img_metas:
- img_meta['ori_shape'] = tuple(rescale_shape) + (3, )
-
- mm_inputs = {'imgs': imgs, 'img_metas': img_metas}
-
- return mm_inputs
-
-
-def _update_input_img(img_list, img_meta_list):
- # update img and its meta list
- N = img_list[0].size(0)
- img_meta = img_meta_list[0][0]
- img_shape = img_meta['img_shape']
- ori_shape = img_meta['ori_shape']
- pad_shape = img_meta['pad_shape']
- new_img_meta_list = [[{
- 'img_shape':
- img_shape,
- 'ori_shape':
- ori_shape,
- 'pad_shape':
- pad_shape,
- 'filename':
- img_meta['filename'],
- 'scale_factor':
- (img_shape[1] / ori_shape[1], img_shape[0] / ori_shape[0]) * 2,
- 'flip':
- False,
- } for _ in range(N)]]
-
- return img_list, new_img_meta_list
-
-
-def pytorch2onnx(model,
- mm_inputs,
- opset_version=11,
- show=False,
- output_file='tmp.onnx',
- verify=False,
- dynamic_export=False):
-    """Export a PyTorch model to ONNX and optionally verify that the outputs
-    match between PyTorch and ONNX.
-
- Args:
- model (nn.Module): Pytorch model we want to export.
- mm_inputs (dict): Contain the input tensors and img_metas information.
- opset_version (int): The onnx op version. Default: 11.
-        show (bool): Whether to print the computation graph. Default: False.
- output_file (string): The path to where we store the output ONNX model.
- Default: `tmp.onnx`.
-        verify (bool): Whether to compare the outputs between PyTorch and ONNX.
- Default: False.
- dynamic_export (bool): Whether to export ONNX with dynamic axis.
- Default: False.
- """
- model.cpu().eval()
- test_mode = model.test_cfg.mode
-
- if isinstance(model.decode_head, nn.ModuleList):
- num_classes = model.decode_head[-1].num_classes
- else:
- num_classes = model.decode_head.num_classes
-
- imgs = mm_inputs.pop('imgs')
- img_metas = mm_inputs.pop('img_metas')
-
- img_list = [img[None, :] for img in imgs]
- img_meta_list = [[img_meta] for img_meta in img_metas]
- # update img_meta
- img_list, img_meta_list = _update_input_img(img_list, img_meta_list)
-
- # replace original forward function
- origin_forward = model.forward
- model.forward = partial(
- model.forward,
- img_metas=img_meta_list,
- return_loss=False,
- rescale=True)
- dynamic_axes = None
- if dynamic_export:
- if test_mode == 'slide':
- dynamic_axes = {'input': {0: 'batch'}, 'output': {1: 'batch'}}
- else:
- dynamic_axes = {
- 'input': {
- 0: 'batch',
- 2: 'height',
- 3: 'width'
- },
- 'output': {
- 1: 'batch',
- 2: 'height',
- 3: 'width'
- }
- }
-
- register_extra_symbolics(opset_version)
- with torch.no_grad():
- torch.onnx.export(
- model, (img_list, ),
- output_file,
- input_names=['input'],
- output_names=['output'],
- export_params=True,
- keep_initializers_as_inputs=False,
- verbose=show,
- opset_version=opset_version,
- dynamic_axes=dynamic_axes)
- print(f'Successfully exported ONNX model: {output_file}')
- model.forward = origin_forward
-
- if verify:
- # check by onnx
- import onnx
- onnx_model = onnx.load(output_file)
- onnx.checker.check_model(onnx_model)
-
- if dynamic_export and test_mode == 'whole':
- # scale image for dynamic shape test
- img_list = [
- nn.functional.interpolate(_, scale_factor=1.5)
- for _ in img_list
- ]
-            # concatenate flipped images for a batched test
- flip_img_list = [_.flip(-1) for _ in img_list]
- img_list = [
- torch.cat((ori_img, flip_img), 0)
- for ori_img, flip_img in zip(img_list, flip_img_list)
- ]
-
- # update img_meta
- img_list, img_meta_list = _update_input_img(
- img_list, img_meta_list)
-
- # check the numerical value
- # get pytorch output
- with torch.no_grad():
- pytorch_result = model(img_list, img_meta_list, return_loss=False)
- pytorch_result = np.stack(pytorch_result, 0)
-
- # get onnx output
- input_all = [node.name for node in onnx_model.graph.input]
- input_initializer = [
- node.name for node in onnx_model.graph.initializer
- ]
- net_feed_input = list(set(input_all) - set(input_initializer))
- assert (len(net_feed_input) == 1)
- sess = rt.InferenceSession(output_file)
- onnx_result = sess.run(
- None, {net_feed_input[0]: img_list[0].detach().numpy()})[0][0]
- # show segmentation results
- if show:
- import cv2
- import os.path as osp
- img = img_meta_list[0][0]['filename']
- if not osp.exists(img):
- img = imgs[0][:3, ...].permute(1, 2, 0) * 255
- img = img.detach().numpy().astype(np.uint8)
- ori_shape = img.shape[:2]
- else:
- ori_shape = LoadImage()({'img': img})['ori_shape']
-
- # resize onnx_result to ori_shape
- onnx_result_ = cv2.resize(onnx_result[0].astype(np.uint8),
- (ori_shape[1], ori_shape[0]))
- show_result_pyplot(
- model,
- img, (onnx_result_, ),
- palette=model.PALETTE,
- block=False,
- title='ONNXRuntime',
- opacity=0.5)
-
- # resize pytorch_result to ori_shape
- pytorch_result_ = cv2.resize(pytorch_result[0].astype(np.uint8),
- (ori_shape[1], ori_shape[0]))
- show_result_pyplot(
- model,
- img, (pytorch_result_, ),
- title='PyTorch',
- palette=model.PALETTE,
- opacity=0.5)
- # compare results
- np.testing.assert_allclose(
- pytorch_result.astype(np.float32) / num_classes,
- onnx_result.astype(np.float32) / num_classes,
- rtol=1e-5,
- atol=1e-5,
-            err_msg='The outputs are different between PyTorch and ONNX')
-        print('The outputs are the same between PyTorch and ONNX')
-
-
-def parse_args():
- parser = argparse.ArgumentParser(description='Convert MMSeg to ONNX')
- parser.add_argument('config', help='test config file path')
- parser.add_argument('--checkpoint', help='checkpoint file', default=None)
- parser.add_argument(
- '--input-img', type=str, help='Images for input', default=None)
- parser.add_argument(
- '--show',
- action='store_true',
- help='show onnx graph and segmentation results')
- parser.add_argument(
- '--verify', action='store_true', help='verify the onnx model')
- parser.add_argument('--output-file', type=str, default='tmp.onnx')
- parser.add_argument('--opset-version', type=int, default=11)
- parser.add_argument(
- '--shape',
- type=int,
- nargs='+',
- default=None,
- help='input image height and width.')
- parser.add_argument(
- '--rescale_shape',
- type=int,
- nargs='+',
- default=None,
- help='output image rescale height and width, work for slide mode.')
- parser.add_argument(
- '--cfg-options',
- nargs='+',
- action=DictAction,
- help='Override some settings in the used config, the key-value pair '
- 'in xxx=yyy format will be merged into config file. If the value to '
- 'be overwritten is a list, it should be like key="[a,b]" or key=a,b '
- 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" '
- 'Note that the quotation marks are necessary and that no white space '
- 'is allowed.')
- parser.add_argument(
- '--dynamic-export',
- action='store_true',
- help='Whether to export onnx with dynamic axis.')
- args = parser.parse_args()
- return args
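-
-# Illustrative invocation sketch (paths below are placeholders, not files from this repo):
-#   python pytorch2onnx.py path/to/config.py --checkpoint path/to/checkpoint.pth \
-#       --output-file model.onnx --shape 512 512 --verify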
-
-
-if __name__ == '__main__':
- args = parse_args()
-
- cfg = mmcv.Config.fromfile(args.config)
- if args.cfg_options is not None:
- cfg.merge_from_dict(args.cfg_options)
- cfg.model.pretrained = None
-
- if args.shape is None:
- img_scale = cfg.test_pipeline[1]['img_scale']
- input_shape = (1, 3, img_scale[1], img_scale[0])
- elif len(args.shape) == 1:
- input_shape = (1, 3, args.shape[0], args.shape[0])
- elif len(args.shape) == 2:
- input_shape = (
- 1,
- 3,
- ) + tuple(args.shape)
- else:
- raise ValueError('invalid input shape')
-
- test_mode = cfg.model.test_cfg.mode
-
- # build the model and load checkpoint
- cfg.model.train_cfg = None
- segmentor = build_segmentor(
- cfg.model, train_cfg=None, test_cfg=cfg.get('test_cfg'))
- # convert SyncBN to BN
- segmentor = _convert_batchnorm(segmentor)
-
- if args.checkpoint:
- checkpoint = load_checkpoint(
- segmentor, args.checkpoint, map_location='cpu')
- segmentor.CLASSES = checkpoint['meta']['CLASSES']
- segmentor.PALETTE = checkpoint['meta']['PALETTE']
-
-    # read input or create dummy input
- if args.input_img is not None:
- preprocess_shape = (input_shape[2], input_shape[3])
- rescale_shape = None
- if args.rescale_shape is not None:
- rescale_shape = [args.rescale_shape[0], args.rescale_shape[1]]
- mm_inputs = _prepare_input_img(
- args.input_img,
- cfg.data.test.pipeline,
- shape=preprocess_shape,
- rescale_shape=rescale_shape)
- else:
- if isinstance(segmentor.decode_head, nn.ModuleList):
- num_classes = segmentor.decode_head[-1].num_classes
- else:
- num_classes = segmentor.decode_head.num_classes
- mm_inputs = _demo_mm_inputs(input_shape, num_classes)
-
- # convert model to onnx file
- pytorch2onnx(
- segmentor,
- mm_inputs,
- opset_version=args.opset_version,
- show=args.show,
- output_file=args.output_file,
- verify=args.verify,
- dynamic_export=args.dynamic_export)
diff --git a/spaces/GroveStreet/GTA_SOVITS/vdecoder/hifiganwithsnake/nvSTFT.py b/spaces/GroveStreet/GTA_SOVITS/vdecoder/hifiganwithsnake/nvSTFT.py
deleted file mode 100644
index 88597d62a505715091f9ba62d38bf0a85a31b95a..0000000000000000000000000000000000000000
--- a/spaces/GroveStreet/GTA_SOVITS/vdecoder/hifiganwithsnake/nvSTFT.py
+++ /dev/null
@@ -1,111 +0,0 @@
-import math
-import os
-os.environ["LRU_CACHE_CAPACITY"] = "3"
-import random
-import torch
-import torch.utils.data
-import numpy as np
-import librosa
-from librosa.util import normalize
-from librosa.filters import mel as librosa_mel_fn
-from scipy.io.wavfile import read
-import soundfile as sf
-
-def load_wav_to_torch(full_path, target_sr=None, return_empty_on_exception=False):
- sampling_rate = None
- try:
-        data, sampling_rate = sf.read(full_path, always_2d=True)  # read with soundfile, keeping a 2-D array
- except Exception as ex:
- print(f"'{full_path}' failed to load.\nException:")
- print(ex)
- if return_empty_on_exception:
- return [], sampling_rate or target_sr or 32000
- else:
- raise Exception(ex)
-
- if len(data.shape) > 1:
- data = data[:, 0]
- assert len(data) > 2# check duration of audio file is > 2 samples (because otherwise the slice operation was on the wrong dimension)
-
- if np.issubdtype(data.dtype, np.integer): # if audio data is type int
- max_mag = -np.iinfo(data.dtype).min # maximum magnitude = min possible value of intXX
- else: # if audio data is type fp32
- max_mag = max(np.amax(data), -np.amin(data))
- max_mag = (2**31)+1 if max_mag > (2**15) else ((2**15)+1 if max_mag > 1.01 else 1.0) # data should be either 16-bit INT, 32-bit INT or [-1 to 1] float32
-
- data = torch.FloatTensor(data.astype(np.float32))/max_mag
-
- if (torch.isinf(data) | torch.isnan(data)).any() and return_empty_on_exception:# resample will crash with inf/NaN inputs. return_empty_on_exception will return empty arr instead of except
- return [], sampling_rate or target_sr or 32000
- if target_sr is not None and sampling_rate != target_sr:
- data = torch.from_numpy(librosa.core.resample(data.numpy(), orig_sr=sampling_rate, target_sr=target_sr))
- sampling_rate = target_sr
-
- return data, sampling_rate
-
-def dynamic_range_compression(x, C=1, clip_val=1e-5):
- return np.log(np.clip(x, a_min=clip_val, a_max=None) * C)
-
-def dynamic_range_decompression(x, C=1):
- return np.exp(x) / C
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-def dynamic_range_decompression_torch(x, C=1):
- return torch.exp(x) / C
-
-class STFT():
- def __init__(self, sr=22050, n_mels=80, n_fft=1024, win_size=1024, hop_length=256, fmin=20, fmax=11025, clip_val=1e-5):
- self.target_sr = sr
-
- self.n_mels = n_mels
- self.n_fft = n_fft
- self.win_size = win_size
- self.hop_length = hop_length
- self.fmin = fmin
- self.fmax = fmax
- self.clip_val = clip_val
- self.mel_basis = {}
- self.hann_window = {}
-
- def get_mel(self, y, center=False):
- sampling_rate = self.target_sr
- n_mels = self.n_mels
- n_fft = self.n_fft
- win_size = self.win_size
- hop_length = self.hop_length
- fmin = self.fmin
- fmax = self.fmax
- clip_val = self.clip_val
-
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
-        if str(fmax)+'_'+str(y.device) not in self.mel_basis:  # keys are cached per (fmax, device)
- mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=n_mels, fmin=fmin, fmax=fmax)
- self.mel_basis[str(fmax)+'_'+str(y.device)] = torch.from_numpy(mel).float().to(y.device)
- self.hann_window[str(y.device)] = torch.hann_window(self.win_size).to(y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_length)/2), int((n_fft-hop_length)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_length, win_length=win_size, window=self.hann_window[str(y.device)],
- center=center, pad_mode='reflect', normalized=False, onesided=True)
- # print(111,spec)
- spec = torch.sqrt(spec.pow(2).sum(-1)+(1e-9))
- # print(222,spec)
- spec = torch.matmul(self.mel_basis[str(fmax)+'_'+str(y.device)], spec)
- # print(333,spec)
- spec = dynamic_range_compression_torch(spec, clip_val=clip_val)
- # print(444,spec)
- return spec
-
- def __call__(self, audiopath):
- audio, sr = load_wav_to_torch(audiopath, target_sr=self.target_sr)
- spect = self.get_mel(audio.unsqueeze(0)).squeeze(0)
- return spect
-
-stft = STFT()
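-
-
-# Illustrative usage sketch (not part of the original file); assumes a torch version
-# where torch.stft still returns real-valued output by default, as get_mel expects.
-if __name__ == '__main__':
-    dummy_audio = torch.zeros(1, 22050)        # one second of silence at 22050 Hz
-    print(stft.get_mel(dummy_audio).shape)     # -> torch.Size([1, 80, frames])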
diff --git a/spaces/GroveStreet/GTA_SOVITS/vencoder/HubertSoft.py b/spaces/GroveStreet/GTA_SOVITS/vencoder/HubertSoft.py
deleted file mode 100644
index c7155e9edd8b3d898643f59111cd0c7a83067749..0000000000000000000000000000000000000000
--- a/spaces/GroveStreet/GTA_SOVITS/vencoder/HubertSoft.py
+++ /dev/null
@@ -1,24 +0,0 @@
-from vencoder.encoder import SpeechEncoder
-import torch
-from vencoder.hubert import hubert_model
-
-
-class HubertSoft(SpeechEncoder):
-    def __init__(self, vec_path="pretrain/hubert-soft-0d54a1f4.pt", device=None):
- print("load model(s) from {}".format(vec_path))
- hubert_soft = hubert_model.hubert_soft(vec_path)
- if device is None:
- self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- else:
- self.dev = torch.device(device)
- self.hidden_dim = 256
- self.model = hubert_soft.to(self.dev)
-
- def encoder(self, wav):
- feats = wav
-        if feats.dim() == 2:  # stereo input: average the channels to mono
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats[None,None,:]
- with torch.no_grad():
- with torch.inference_mode():
- units = self.model.units(feats)
- return units.transpose(1,2)
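-
-
-# Illustrative usage sketch (not part of the original file): extracting content units
-# from one second of 16 kHz audio; the checkpoint path is the default assumed above
-# and must exist locally for this to run.
-if __name__ == "__main__":
-    import os
-    ckpt = "pretrain/hubert-soft-0d54a1f4.pt"
-    if os.path.exists(ckpt):
-        encoder = HubertSoft(vec_path=ckpt)
-        wav = torch.zeros(16000).to(encoder.dev)   # one second of 16 kHz audio
-        print(encoder.encoder(wav).shape)          # expected: [1, 256, frames]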
diff --git a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/upsegmodel/prroi_pool/functional.py b/spaces/HaHaBill/LandShapes-Antarctica/netdissect/upsegmodel/prroi_pool/functional.py
deleted file mode 100644
index 7dc7a8c282e846bd633c4fdc4190c4dca3da5a6f..0000000000000000000000000000000000000000
--- a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/upsegmodel/prroi_pool/functional.py
+++ /dev/null
@@ -1,70 +0,0 @@
-#! /usr/bin/env python3
-# -*- coding: utf-8 -*-
-# File : functional.py
-# Author : Jiayuan Mao, Tete Xiao
-# Email : maojiayuan@gmail.com, jasonhsiao97@gmail.com
-# Date : 07/13/2018
-#
-# This file is part of PreciseRoIPooling.
-# Distributed under terms of the MIT license.
-# Copyright (c) 2017 Megvii Technology Limited.
-
-import torch
-import torch.autograd as ag
-
-try:
- from os.path import join as pjoin, dirname
- from torch.utils.cpp_extension import load as load_extension
- root_dir = pjoin(dirname(__file__), 'src')
- _prroi_pooling = load_extension(
- '_prroi_pooling',
- [pjoin(root_dir, 'prroi_pooling_gpu.c'), pjoin(root_dir, 'prroi_pooling_gpu_impl.cu')],
- verbose=False
- )
-except ImportError:
- raise ImportError('Can not compile Precise RoI Pooling library.')
-
-__all__ = ['prroi_pool2d']
-
-
-class PrRoIPool2DFunction(ag.Function):
- @staticmethod
- def forward(ctx, features, rois, pooled_height, pooled_width, spatial_scale):
- assert 'FloatTensor' in features.type() and 'FloatTensor' in rois.type(), \
- 'Precise RoI Pooling only takes float input, got {} for features and {} for rois.'.format(features.type(), rois.type())
-
- pooled_height = int(pooled_height)
- pooled_width = int(pooled_width)
- spatial_scale = float(spatial_scale)
-
- features = features.contiguous()
- rois = rois.contiguous()
- params = (pooled_height, pooled_width, spatial_scale)
-
- if features.is_cuda:
- output = _prroi_pooling.prroi_pooling_forward_cuda(features, rois, *params)
- ctx.params = params
- # everything here is contiguous.
- ctx.save_for_backward(features, rois, output)
- else:
-            raise NotImplementedError('Precise RoI Pooling only supports GPU (cuda) implementations.')
-
- return output
-
- @staticmethod
- def backward(ctx, grad_output):
- features, rois, output = ctx.saved_tensors
- grad_input = grad_coor = None
-
- if features.requires_grad:
- grad_output = grad_output.contiguous()
- grad_input = _prroi_pooling.prroi_pooling_backward_cuda(features, rois, output, grad_output, *ctx.params)
- if rois.requires_grad:
- grad_output = grad_output.contiguous()
- grad_coor = _prroi_pooling.prroi_pooling_coor_backward_cuda(features, rois, output, grad_output, *ctx.params)
-
- return grad_input, grad_coor, None, None, None
-
-
-prroi_pool2d = PrRoIPool2DFunction.apply
-
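-# Illustrative usage sketch (not part of the original file): pooling one 7x7 region
-# from a feature map. Requires a CUDA device and the compiled extension above; each
-# RoI row is (batch_index, x0, y0, x1, y1) in input coordinates.
-if __name__ == '__main__':
-    if torch.cuda.is_available():
-        features = torch.randn(1, 16, 32, 32, device='cuda', requires_grad=True)
-        rois = torch.tensor([[0., 4., 4., 20., 20.]], device='cuda')
-        pooled = prroi_pool2d(features, rois, 7, 7, 1.0)   # -> [1, 16, 7, 7]
-        pooled.sum().backward()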
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/tokenizer/sentencepiece/pretrain_google_sp.sh b/spaces/HaloMaster/chinesesummary/fengshen/tokenizer/sentencepiece/pretrain_google_sp.sh
deleted file mode 100644
index e7dd39f59dac0314a9b285c02f05156fda67e622..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/tokenizer/sentencepiece/pretrain_google_sp.sh
+++ /dev/null
@@ -1,41 +0,0 @@
-#!/bin/bash
-#SBATCH --job-name=google_sp
-#SBATCH --nodes=1
-#SBATCH --cpus-per-task=100
-#SBATCH --ntasks-per-node=1
-#SBATCH -o %x-%j.log
-
-set -x -e
-
-echo "START TIME: $(date)"
-
-BIN_PATH=/cognitive_comp/gaoxinyu/sentencepiece/sentencepiece/bin/usr/local/bin/spm_train
-export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/cognitive_comp/gaoxinyu/sentencepiece/sentencepiece/bin/usr/local/lib
-INPUT_FILE=/cognitive_comp/gaoxinyu/github/Fengshenbang-LM/fengshen/tokenizer/sentencepiece/shuffle_corpus_59132213.txt
-INPUT_FILE_SMALL=/cognitive_comp/gaoxinyu/github/Fengshenbang-LM/fengshen/tokenizer/sentencepiece/shuffle_corpus_1000000.txt
-
-
-VOCAB_SIZE=40000
-COV=0.9995
-MAX_LENGTH=6
-TYPE=bpe
-SEED=42
-MAX_INPUT_LENGTH=100000
-
-OPTION="\
- --input=${INPUT_FILE} \
- --vocab_size=${VOCAB_SIZE} \
- --character_coverage=${COV} \
- --max_sentencepiece_length=${MAX_LENGTH} \
- --model_type=${TYPE} \
- --model_prefix=${TYPE}_v${VOCAB_SIZE}_s${SEED}_cov${COV}_max${MAX_LENGTH} \
- --random_seed=${SEED} \
- --max_sentence_length=100000 \
- --shuffle_input_sentence=true \
- --input_sentence_size=${MAX_INPUT_LENGTH} \
- --minloglevel 1 \
- --num_threads=100 \
- --train_extremely_large_corpus=true \
- "
-
-eval $BIN_PATH $OPTION
\ No newline at end of file
diff --git a/spaces/Hexii/Cat-Breed-Classifier/app.py b/spaces/Hexii/Cat-Breed-Classifier/app.py
deleted file mode 100644
index 4e556ec031157bafdb57b330039a7c319fd3f676..0000000000000000000000000000000000000000
--- a/spaces/Hexii/Cat-Breed-Classifier/app.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import torch
-import torchvision
-from timeit import default_timer as timer
-import gradio as gr
-from typing import Tuple, Dict
-from model import create_effnetb2_model
-import os
-
-
-
-with open("classes.txt") as f:
- classes= [line.rstrip() for line in f.readlines()]
-
-effnetb2, effnetb2_transforms = create_effnetb2_model(
- num_classes=len(classes))
-
-effnetb2.load_state_dict(
- torch.load(
- f="Cat_Breed_Classifier_12_class_90_acc.pth",
- map_location=torch.device("cpu"), # load to CPU
- )
-)
-
-def predict(img):
- start_time = timer()
- img = effnetb2_transforms(img).unsqueeze(0)
- effnetb2.eval()
- with torch.inference_mode():
- pred_probs = torch.softmax(effnetb2(img), dim=1)
- pred_labels_and_probs = {
- classes[i]: float(pred_probs[0][i]) for i in range(len(classes))
- }
- pred_time = round(timer() - start_time, 5)
- return pred_labels_and_probs, pred_time
-
-title = "Cat Breed Classifier Demo 😸"
-description = "Gradio Demo for Classifying Cat Breeds of these [12 different types](https://huggingface.co/spaces/Hexii/Cat-Breed-Classifier/blob/main/classes.txt)."
-article = ""
-
-
-
-example_list = [["examples/" + example] for example in os.listdir("examples")]
-
-app = gr.Interface(
- fn=predict,
- inputs=gr.Image(type="pil"),
- outputs=[
- gr.Label(num_top_classes=3, label="Predictions"),
- gr.Number(label="Prediction time (s)"),
- ],
- examples=example_list,
- title=title,
- description=description,
- article=article,
- )
-
-app.launch()
\ No newline at end of file
diff --git a/spaces/HuggingFaceH4/falcon-chat/app.py b/spaces/HuggingFaceH4/falcon-chat/app.py
deleted file mode 100644
index 7d496eb4f9707a897f8035930856a0efa3bb18c9..0000000000000000000000000000000000000000
--- a/spaces/HuggingFaceH4/falcon-chat/app.py
+++ /dev/null
@@ -1,215 +0,0 @@
-import argparse
-import os
-
-import gradio as gr
-from text_generation import Client
-
-TITLE = """🚀 Falcon-Chat demo"""
-USER_NAME = "User"
-BOT_NAME = "Falcon"
-DEFAULT_INSTRUCTIONS = f"""The following is a conversation between a highly knowledgeable and intelligent AI assistant, called Falcon, and a human user, called User. In the following interactions, User and Falcon will converse in natural language, and Falcon will answer User's questions. Falcon was built to be respectful, polite and inclusive. Falcon was built by the Technology Innovation Institute in Abu Dhabi. Falcon will never decline to answer a question, and always attempts to give an answer that User would be satisfied with. It knows a lot, and always tells the truth. The conversation begins.
-"""
-RETRY_COMMAND = "/retry"
-STOP_STR = f"\n{USER_NAME}:"
-STOP_SUSPECT_LIST = [":", "\n", "User"]
-
-INFERENCE_ENDPOINT = os.environ.get("INFERENCE_ENDPOINT")
-INFERENCE_AUTH = os.environ.get("INFERENCE_AUTH")
-
-
-def chat_accordion():
- with gr.Accordion("Parameters", open=False):
- temperature = gr.Slider(
- minimum=0.1,
- maximum=2.0,
- value=0.8,
- step=0.1,
- interactive=True,
- label="Temperature",
- )
- top_p = gr.Slider(
- minimum=0.1,
- maximum=0.99,
- value=0.9,
- step=0.01,
- interactive=True,
- label="p (nucleus sampling)",
- )
- return temperature, top_p
-
-
-def format_chat_prompt(message: str, chat_history, instructions: str) -> str:
- instructions = instructions.strip(" ").strip("\n")
- prompt = instructions
- for turn in chat_history:
- user_message, bot_message = turn
- prompt = f"{prompt}\n{USER_NAME}: {user_message}\n{BOT_NAME}: {bot_message}"
- prompt = f"{prompt}\n{USER_NAME}: {message}\n{BOT_NAME}:"
- return prompt
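-
-
-# Illustrative sketch (not part of the original app): with one prior turn,
-#   format_chat_prompt("Any tips for Abu Dhabi?", [["Hi!", "Hello!"]], DEFAULT_INSTRUCTIONS)
-# returns the instructions followed by
-#   "\nUser: Hi!\nFalcon: Hello!\nUser: Any tips for Abu Dhabi?\nFalcon:"
-# so the endpoint generates Falcon's reply until the "\nUser:" stop sequence.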
-
-
-def chat(client: Client):
- with gr.Column(elem_id="chat_container"):
- with gr.Row():
- chatbot = gr.Chatbot(elem_id="chatbot")
- with gr.Row():
- inputs = gr.Textbox(
- placeholder=f"Hello {BOT_NAME} !!",
- label="Type an input and press Enter",
- max_lines=3,
- )
-
- with gr.Row(elem_id="button_container"):
- with gr.Column():
- retry_button = gr.Button("♻️ Retry last turn")
- with gr.Column():
- delete_turn_button = gr.Button("🧽 Delete last turn")
- with gr.Column():
- clear_chat_button = gr.Button("✨ Delete all history")
-
- gr.Examples(
- [
- ["Hey Falcon! Any recommendations for my holidays in Abu Dhabi?"],
- ["What's the Everett interpretation of quantum mechanics?"],
- ["Give me a list of the top 10 dive sites you would recommend around the world."],
- ["Can you tell me more about deep-water soloing?"],
- ["Can you write a short tweet about the Apache 2.0 release of our latest AI model, Falcon LLM?"],
- ],
- inputs=inputs,
- label="Click on any example and press Enter in the input textbox!",
- )
-
- with gr.Row(elem_id="param_container"):
- with gr.Column():
- temperature, top_p = chat_accordion()
- with gr.Column():
- with gr.Accordion("Instructions", open=False):
- instructions = gr.Textbox(
- placeholder="LLM instructions",
- value=DEFAULT_INSTRUCTIONS,
- lines=10,
- interactive=True,
- label="Instructions",
- max_lines=16,
- show_label=False,
- )
-
- def run_chat(message: str, chat_history, instructions: str, temperature: float, top_p: float):
- if not message or (message == RETRY_COMMAND and len(chat_history) == 0):
- yield chat_history
- return
-
- if message == RETRY_COMMAND and chat_history:
- prev_turn = chat_history.pop(-1)
- user_message, _ = prev_turn
- message = user_message
-
- prompt = format_chat_prompt(message, chat_history, instructions)
- chat_history = chat_history + [[message, ""]]
- stream = client.generate_stream(
- prompt,
- do_sample=True,
- max_new_tokens=1024,
- stop_sequences=[STOP_STR, "<|endoftext|>"],
- temperature=temperature,
- top_p=top_p,
- )
- acc_text = ""
- for idx, response in enumerate(stream):
- text_token = response.token.text
-
- if response.details:
- return
-
- if text_token in STOP_SUSPECT_LIST:
- acc_text += text_token
- continue
-
- if idx == 0 and text_token.startswith(" "):
- text_token = text_token[1:]
-
- acc_text += text_token
- last_turn = list(chat_history.pop(-1))
- last_turn[-1] += acc_text
- chat_history = chat_history + [last_turn]
- yield chat_history
- acc_text = ""
-
- def delete_last_turn(chat_history):
- if chat_history:
- chat_history.pop(-1)
- return {chatbot: gr.update(value=chat_history)}
-
- def run_retry(message: str, chat_history, instructions: str, temperature: float, top_p: float):
- yield from run_chat(RETRY_COMMAND, chat_history, instructions, temperature, top_p)
-
- def clear_chat():
- return []
-
- inputs.submit(
- run_chat,
- [inputs, chatbot, instructions, temperature, top_p],
- outputs=[chatbot],
- show_progress=False,
- )
- inputs.submit(lambda: "", inputs=None, outputs=inputs)
- delete_turn_button.click(delete_last_turn, inputs=[chatbot], outputs=[chatbot])
- retry_button.click(
- run_retry,
- [inputs, chatbot, instructions, temperature, top_p],
- outputs=[chatbot],
- show_progress=False,
- )
- clear_chat_button.click(clear_chat, [], chatbot)
-
-
-def get_demo(client: Client):
- with gr.Blocks(
- # css=None
- # css="""#chat_container {width: 700px; margin-left: auto; margin-right: auto;}
- # #button_container {width: 700px; margin-left: auto; margin-right: auto;}
- # #param_container {width: 700px; margin-left: auto; margin-right: auto;}"""
- css="""#chatbot {
- font-size: 14px;
- min-height: 300px;
-}"""
- ) as demo:
- gr.HTML(TITLE)
-
- with gr.Row():
- with gr.Column():
- gr.Image("home-banner.jpg", elem_id="banner-image", show_label=False)
- with gr.Column():
- gr.Markdown(
- """**Chat with [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct), brainstorm ideas, discuss your holiday plans, and more!**
-
- ✨ This demo is powered by [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b), finetuned on the [Baize](https://github.com/project-baize/baize-chatbot) dataset, and running with [Text Generation Inference](https://github.com/huggingface/text-generation-inference). [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) is a state-of-the-art large language model built by the [Technology Innovation Institute](https://www.tii.ae) in Abu Dhabi. It is trained on 1 trillion tokens (including [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)) and available under the Apache 2.0 license. It currently holds the 🥇 1st place on the [🤗 Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). This demo is made available by the [HuggingFace H4 team](https://huggingface.co/HuggingFaceH4).
-
- 🧪 This is only a **first experimental preview**: the [H4 team](https://huggingface.co/HuggingFaceH4) intends to provide increasingly capable versions of Falcon Chat in the future, based on improved datasets and RLHF/RLAIF.
-
- 👀 **Learn more about Falcon LLM:** [falconllm.tii.ae](https://falconllm.tii.ae/)
-
- ➡️️ **Intended Use**: this demo is intended to showcase an early finetuning of [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b), to illustrate the impact (and limitations) of finetuning on a dataset of conversations and instructions. We encourage the community to further build upon the base model, and to create even better instruct/chat versions!
-
- ⚠️ **Limitations**: the model can and will produce factually incorrect information, hallucinating facts and actions. As it has not undergone any advanced tuning/alignment, it can produce problematic outputs, especially if prompted to do so. Finally, this demo is limited to a session length of about 1,000 words.
- """
- )
-
- chat(client)
-
- return demo
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser("Playground Demo")
- parser.add_argument(
- "--addr",
- type=str,
- required=False,
- default=INFERENCE_ENDPOINT,
- )
- args = parser.parse_args()
- client = Client(args.addr, headers={"Authorization": f"Basic {INFERENCE_AUTH}"})
- demo = get_demo(client)
- demo.queue(max_size=128, concurrency_count=16)
- demo.launch()
diff --git a/spaces/HuggingFaceH4/instruction-model-outputs-filtered/app.py b/spaces/HuggingFaceH4/instruction-model-outputs-filtered/app.py
deleted file mode 100644
index 27d43bbe72d5fafa617a9e4b1350f12f02907dc1..0000000000000000000000000000000000000000
--- a/spaces/HuggingFaceH4/instruction-model-outputs-filtered/app.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import os
-from pathlib import Path
-
-import pandas as pd
-import streamlit as st
-from datasets import load_dataset
-from dotenv import load_dotenv
-
-if Path(".env").is_file():
- load_dotenv(".env")
-
-st.set_page_config(layout="wide")
-
-HF_TOKEN = os.getenv("HF_TOKEN")
-
-ds = load_dataset("HuggingFaceH4/instruction-pilot-outputs-filtered", split="train", use_auth_token=HF_TOKEN)
-
-st.markdown("# Instruction Model Outputs")
-st.markdown(
-    """This app shows the outputs of various open-source, instruction-trained models from a [dataset](https://huggingface.co/datasets/HuggingFaceH4/instruction-pilot-outputs-filtered) of human demonstrations filtered for overlap with the original prompt and canned responses. Hit the button below to view a few random samples from the generated outputs."""
-)
-st.markdown(
- """**Notes**
-* Some outputs contain a `Human:` prefix - this is likely due to the fact each model was prompted to be a dialogue agent.
-* The outputs were generated deterministically with `temperature=0` and `max_new_tokens=100`
-"""
-)
-
-button = st.button("Show me what you got!")
-
-if button is True:
- sample_ds = ds.shuffle().select(range(5))
-
- for sample in sample_ds:
- st.markdown(f'**Prompt:** {sample["prompt"]}')
-
- df = pd.DataFrame.from_records(sample["filtered_outputs"])
-
- # CSS to inject contained in a string
- hide_table_row_index = """
-
- """
-
- # Inject CSS with Markdown
- st.markdown(hide_table_row_index, unsafe_allow_html=True)
- st.table(df)
diff --git a/spaces/ICML2022/OFA/fairseq/examples/m2m_100/README.md b/spaces/ICML2022/OFA/fairseq/examples/m2m_100/README.md
deleted file mode 100644
index 02a68a5f0919a26a0468069bed46a5b1abc78941..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/m2m_100/README.md
+++ /dev/null
@@ -1,241 +0,0 @@
-# Beyond English-Centric Multilingual Machine Translation
-
-## Introduction
-In this work, we create a true Many-to-Many multilingual translation model that can translate directly between any pair of 100 languages. Our focus on non-English-Centric models brings gains of more than 10 BLEU when directly translating between non-English directions while performing competitively with the best single systems of WMT.
-
-If you are new to using fairseq, read the following walkthrough. Otherwise, skip to the sections below.
-
-0. **Generation Data**
-
-To download the generation data, follow the below commands. Note that all datasets need to be detokenized *before* applying SPM in the data preprocessing step. If you use these evaluation datasets, please cite their associated papers.
-```bash
-# WMT - use sacrebleu, example here:
-sacrebleu -t wmt14 -l fr-en --echo src > wmt.test.fr-en.fr
-sacrebleu -t wmt14 -l fr-en --echo ref > wmt.test.fr-en.en
-
-# WAT
-wget http://lotus.kuee.kyoto-u.ac.jp/WAT/my-en-data/wat2020.my-en.zip
-unzip wat2020.my-en.zip
-
-# FLORES
-# download from: https://github.com/facebookresearch/flores
-
-# TED - need to detokenize with Moses!
-# from: https://github.com/neulab/word-embeddings-for-nmt
-wget http://phontron.com/data/ted_talks.tar.gz
-
-# Autshumato
-# request to download: https://repo.sadilar.org/handle/20.500.12185/397
-
-# Tatoeba Challenge
-# available here: https://github.com/Helsinki-NLP/Tatoeba-Challenge
-```
-
-1. **Training Data**
-
-To produce the training data, we use a combination of [CCMatrix](https://arxiv.org/abs/1911.04944) and [CCAligned](https://arxiv.org/abs/1911.06154). Check out the instructions [here](https://github.com/facebookresearch/LASER/tree/master/tasks/CCMatrix) to download the raw data.
-
-2. **Preprocess Data**
-
-After downloading the raw data, you will need to postprocess it, apply SPM, and then binarize. Note that it is very important that you run the postprocessing script, because it removes any instances of the evaluation data from the mined training data.
-
-```bash
-# preprocess data
-
-# remove sentences with more than 50% punctuation
-python /path/to/fairseq/examples/m2m_100/process_data/remove_too_much_punc.py
-
-# deduplicate training data
-paste /path/to/datadir/train.$src /path/to/datadir/train.$tgt | awk '!x[$0]++' > /path/to/datadir/train.dedup
-echo "keeping $(wc -l /path/to/datadir/train.dedup) bitext out of $(wc -l /path/to/datadir/train.$src)"
-cut -f1 /path/to/datadir/train.dedup > /path/to/datadir/train.$src
-cut -f2 /path/to/datadir/train.dedup > /path/to/datadir/train.$tgt
-
-# remove all instances of evaluation data from the training data
-python /path/to/fairseq/examples/m2m_100/process_data/dedup_data.py
-
-# frequency cleaning
-wget https://dl.fbaipublicfiles.com/m2m_100/histograms.tar.gz
-tar -xvzf histograms.tar.gz
-python /path/to/fairseq/examples/m2m_100/process_data/clean_histogram.py --src $src --tgt $tgt --src-file /path/to/source/file --tgt-file /path/to/output/file --src-output-file source_output.$src --tgt-output-file target_output.$tgt --histograms /path/to/histograms
-
-# apply SPM
-wget https://dl.fbaipublicfiles.com/m2m_100/spm.128k.model
-python /path/to/fairseq/scripts/spm_encode.py \
- --model spm.128k.model \
- --output_format=piece \
- --inputs=/path/to/input/file/here \
- --outputs=/path/to/output/file/here
-
-# length ratio cleaning
-perl mosesdecoder/scripts/training/clean-corpus-n.perl --ratio 3 /path/to/training/data/train.spm.$src-$tgt $src $tgt /path/to/output/directory/train.spm.$src-$tgt 1 250
-
-# binarize data
-wget https://dl.fbaipublicfiles.com/m2m_100/data_dict.128k.txt
-fairseq-preprocess \
- --source-lang $src --target-lang $tgt \
- --testpref spm.$src.$tgt \
- --thresholdsrc 0 --thresholdtgt 0 \
- --destdir data_bin \
- --srcdict data_dict.128k.txt --tgtdict data_dict.128k.txt
-```
-
-3. **Training Scripts**
-
-To reproduce the training of our models, we train with fairseq-py's multilingual translation [task](https://github.com/pytorch/fairseq/tree/main/examples/multilingual). If you are interested in model parallel training, also check out [fairscale](https://github.com/facebookresearch/fairscale).
-
-4. **Generation**
-
-To generate from our models, follow the commands in the generation section below.
-
-
-If you use any of the resources listed here, please cite:
-```bibtex
-@article{fan2020beyond,
- title={Beyond English-Centric Multilingual Machine Translation},
- author={Fan, Angela and Bhosale, Shruti and Schwenk, Holger and Ma, Zhiyi and El-Kishky, Ahmed and Goyal, Siddharth and Baines, Mandeep and Celebi, Onur and Wenzek, Guillaume and Chaudhary, Vishrav and Goyal, Naman and Birch, Tom and Liptchinsky, Vitaliy and Edunov, Sergey and Grave, Edouard and Auli, Michael and Joulin, Armand},
- journal={arXiv preprint},
- year={2020}
-}
-
-@article{schwenk2019ccmatrix,
- title={Ccmatrix: Mining billions of high-quality parallel sentences on the web},
- author={Schwenk, Holger and Wenzek, Guillaume and Edunov, Sergey and Grave, Edouard and Joulin, Armand},
- journal={arXiv preprint arXiv:1911.04944},
- year={2019}
-}
-
-@article{el2019massive,
- title={A Massive Collection of Cross-Lingual Web-Document Pairs},
- author={El-Kishky, Ahmed and Chaudhary, Vishrav and Guzman, Francisco and Koehn, Philipp},
- journal={arXiv preprint arXiv:1911.06154},
- year={2019}
-}
-```
-
-
-## Trained Models
-
-### 418M and 1.2B Model
-We include the last checkpoint for both of these models.
-
-```bash
-wget https://dl.fbaipublicfiles.com/m2m_100/model_dict.128k.txt
-wget https://dl.fbaipublicfiles.com/m2m_100/language_pairs_small_models.txt
-
-# 418M parameter model
-wget https://dl.fbaipublicfiles.com/m2m_100/418M_last_checkpoint.pt
-
-# 1.2B parameter model
-wget https://dl.fbaipublicfiles.com/m2m_100/1.2B_last_checkpoint.pt
-
-# Generation:
-fairseq-generate $binarized_data_path --batch-size 32 --path $path_to_model --fixed-dictionary model_dict.128k.txt -s en -t fr --remove-bpe 'sentencepiece' --beam 5 --task translation_multi_simple_epoch --lang-pairs language_pairs_small_models.txt --decoder-langtok --encoder-langtok src --gen-subset test > gen_out
-```
-
-### 12B Model
-12B parameter model trained on many-to-many training data for 100 languages. We include the last checkpoint, the average of the last 5 checkpoints, and the average of the last 10 checkpoints. There is no universally best choice among these three, and all are close in accuracy. You can either sweep over the three checkpoints on a dev set and use the best-performing one for final testing, or simply use the last checkpoint as a reasonable default.
-
-**Model Download Links**
-Configuration | 2 32GB GPUs | 4 16GB GPUs | 6 12GB GPUs | 8 8GB GPUs
-:--|:--|:--|:--|:--
-Last Checkpoint | [12b_last_chk_2_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_2_gpus.pt) | [12b_last_chk_4_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_4_gpus.pt) | [12b_last_chk_6_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_6_gpus.pt) | [12b_last_chk_8_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_8_gpus.pt)
-Average of last 5 checkpoints | [12b_avg5_chk_2_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg5_chk_2_gpus.pt) | [12b_avg5_chk_4_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg5_chk_4_gpus.pt) | [12b_avg5_chk_6_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg5_chk_6_gpus.pt) | [12b_avg5_chk_8_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg5_chk_8_gpus.pt)
-Average of last 10 checkpoints | [12b_avg10_chk_2_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg10_chk_2_gpus.pt) | [12b_avg10_chk_4_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg10_chk_4_gpus.pt) | [12b_avg10_chk_6_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg10_chk_6_gpus.pt) | [12b_avg10_chk_8_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg10_chk_8_gpus.pt)
-
-**Generation Arguments**
-Configuration | 2 32GB GPUs | 4 16GB GPUs | 6 12GB GPUs | 8 8GB GPUs
-:--|:--|:--|:--|:--
-`--pipeline-encoder-balance` | `[26]` | `[1,15,10]` | `[1,9,9,7]` | `[1,6,6,6,7]`
-`--pipeline-encoder-devices` | `[0]` | `[0,1,0]` | `[0,1,2,0]` | `[0,4,5,1,0]`
-`--pipeline-decoder-balance` | `[3,22,1]` | `[3,11,11,1]` | `[3,7,7,8,1]` | `[1,6,6,6,6,1]`
-`--pipeline-decoder-devices` | `[0,1,0]` | `[0,2,3,0]` | `[0,3,4,5,0]` | `[0,2,6,7,3,0]`
-
-
-## SentencePiece Model
-
-```bash
-wget https://dl.fbaipublicfiles.com/m2m_100/spm.128k.model
-```
-
-## Generation with M2M-100
-
-### Encode using our SentencePiece Model
-
-Note: Install SentencePiece from [here](https://github.com/google/sentencepiece)
-
-```bash
-fairseq=/path/to/fairseq
-cd $fairseq
-sacrebleu --echo src -l de-fr -t wmt19 | head -n 20 > raw_input.de-fr.de
-sacrebleu --echo ref -l de-fr -t wmt19 | head -n 20 > raw_input.de-fr.fr
-wget https://dl.fbaipublicfiles.com/m2m_100/spm.128k.model
-for lang in de fr ; do
- python scripts/spm_encode.py \
- --model spm.128k.model \
- --output_format=piece \
- --inputs=raw_input.de-fr.${lang} \
- --outputs=spm.de-fr.${lang}
-done
-```
-
-### Binarization
-
-```bash
-wget https://dl.fbaipublicfiles.com/m2m_100/data_dict.128k.txt
-fairseq-preprocess \
- --source-lang de --target-lang fr \
- --testpref spm.de-fr \
- --thresholdsrc 0 --thresholdtgt 0 \
- --destdir data_bin \
- --srcdict data_dict.128k.txt --tgtdict data_dict.128k.txt
-```
-
-### Generation for the 12B model
-
-Note that generation can currently be run using 2 32GB / 4 16GB / 6 12GB / 8 8GB GPUs, and the corresponding model checkpoints and pipeline arguments can be found in the [12B Model Section](#12b-model).
-Generation on CPUs will be added in the future.
-
-```bash
-wget https://dl.fbaipublicfiles.com/m2m_100/model_dict.128k.txt
-wget https://dl.fbaipublicfiles.com/m2m_100/language_pairs.txt
-wget https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_4_gpus.pt
-fairseq-generate \
- data_bin \
- --batch-size 1 \
- --path 12b_last_chk_4_gpus.pt \
- --fixed-dictionary model_dict.128k.txt \
- -s de -t fr \
- --remove-bpe 'sentencepiece' \
- --beam 5 \
- --task translation_multi_simple_epoch \
- --lang-pairs language_pairs.txt \
- --decoder-langtok --encoder-langtok src \
- --gen-subset test \
- --fp16 \
- --dataset-impl mmap \
- --distributed-world-size 1 --distributed-no-spawn \
- --pipeline-model-parallel \
- --pipeline-chunks 1 \
- --pipeline-encoder-balance '[1,15,10]' \
- --pipeline-encoder-devices '[0,1,0]' \
- --pipeline-decoder-balance '[3,11,11,1]' \
- --pipeline-decoder-devices '[0,2,3,0]' > gen_out
-```
-## Evaluation with M2M-100
-
-### Tokenization
-
-Note: Refer to tokenizers/README.md for more details on tokenization.
-
-```bash
-cd ${fairseq}/examples/m2m_100
-cat ${fairseq}/gen_out | grep -P "^H" | sort -V | cut -f 3- | sh tok.sh fr > hyp
-cat ${fairseq}/raw_input.de-fr.fr | sh tok.sh fr > ref
-```
-
-### BLEU
-
-```bash
-sacrebleu -tok 'none' ref < hyp
-```
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/models/nat/__init__.py b/spaces/ICML2022/OFA/fairseq/fairseq/models/nat/__init__.py
deleted file mode 100644
index 05fe822487c3bcde8346648d5826f1669c6bc1ca..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/models/nat/__init__.py
+++ /dev/null
@@ -1,13 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""isort:skip_file"""
-
-from .fairseq_nat_model import *
-from .nonautoregressive_transformer import *
-from .nat_crf_transformer import *
-from .iterative_nonautoregressive_transformer import *
-from .cmlm_transformer import *
-from .levenshtein_transformer import *
-from .insertion_transformer import *
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/modules/transformer_sentence_encoder.py b/spaces/ICML2022/OFA/fairseq/fairseq/modules/transformer_sentence_encoder.py
deleted file mode 100644
index d0540d69229fb994b9e573a5016c9f239b7929e2..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/modules/transformer_sentence_encoder.py
+++ /dev/null
@@ -1,291 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from typing import Optional, Tuple
-
-import torch
-import torch.nn as nn
-from fairseq.modules import (
- FairseqDropout,
- LayerDropModuleList,
- LayerNorm,
- MultiheadAttention,
- PositionalEmbedding,
- TransformerSentenceEncoderLayer,
-)
-from fairseq.modules.quant_noise import quant_noise as apply_quant_noise_
-
-
-def init_bert_params(module):
- """
- Initialize the weights specific to the BERT Model.
- This overrides the default initializations depending on the specified arguments.
- 1. If normal_init_linear_weights is set then weights of linear
- layer will be initialized using the normal distribution and
- bais will be set to the specified value.
-       bias will be set to the specified value.
- layer will be initialized using the normal distribution.
- 3. If normal_init_proj_weights is set then weights of
- in_project_weight for MultiHeadAttention initialized using
- the normal distribution (to be validated).
- """
-
- def normal_(data):
- # with FSDP, module params will be on CUDA, so we cast them back to CPU
- # so that the RNG is consistent with and without FSDP
- data.copy_(
- data.cpu().normal_(mean=0.0, std=0.02).to(data.device)
- )
-
- if isinstance(module, nn.Linear):
- normal_(module.weight.data)
- if module.bias is not None:
- module.bias.data.zero_()
- if isinstance(module, nn.Embedding):
- normal_(module.weight.data)
- if module.padding_idx is not None:
- module.weight.data[module.padding_idx].zero_()
- if isinstance(module, MultiheadAttention):
- normal_(module.q_proj.weight.data)
- normal_(module.k_proj.weight.data)
- normal_(module.v_proj.weight.data)
-
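-# Illustrative sketch (not part of the original module): the initializer is meant to
-# be applied recursively, e.g. `model.apply(init_bert_params)`, which is what
-# TransformerSentenceEncoder below does when apply_bert_init is set.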
-
-class TransformerSentenceEncoder(nn.Module):
- """
- Implementation for a Bi-directional Transformer based Sentence Encoder used
- in BERT/XLM style pre-trained models.
-
- This first computes the token embedding using the token embedding matrix,
- position embeddings (if specified) and segment embeddings
- (if specified). After applying the specified number of
- TransformerEncoderLayers, it outputs all the internal states of the
- encoder as well as the final representation associated with the first
- token (usually CLS token).
-
- Input:
- - tokens: B x T matrix representing sentences
- - segment_labels: B x T matrix representing segment label for tokens
-
- Output:
- - a tuple of the following:
- - a list of internal model states used to compute the
- predictions where each tensor has shape T x B x C
- - sentence representation associated with first input token
- in format B x C.
- """
-
- def __init__(
- self,
- padding_idx: int,
- vocab_size: int,
- num_encoder_layers: int = 6,
- embedding_dim: int = 768,
- ffn_embedding_dim: int = 3072,
- num_attention_heads: int = 8,
- dropout: float = 0.1,
- attention_dropout: float = 0.1,
- activation_dropout: float = 0.1,
- layerdrop: float = 0.0,
- max_seq_len: int = 256,
- num_segments: int = 2,
- use_position_embeddings: bool = True,
- offset_positions_by_padding: bool = True,
- encoder_normalize_before: bool = False,
- apply_bert_init: bool = False,
- activation_fn: str = "relu",
- learned_pos_embedding: bool = True,
- embed_scale: float = None,
- freeze_embeddings: bool = False,
- n_trans_layers_to_freeze: int = 0,
- export: bool = False,
- traceable: bool = False,
- q_noise: float = 0.0,
- qn_block_size: int = 8,
- ) -> None:
-
- super().__init__()
- self.padding_idx = padding_idx
- self.vocab_size = vocab_size
- self.dropout_module = FairseqDropout(
- dropout, module_name=self.__class__.__name__
- )
- self.layerdrop = layerdrop
- self.max_seq_len = max_seq_len
- self.embedding_dim = embedding_dim
- self.num_segments = num_segments
- self.use_position_embeddings = use_position_embeddings
- self.apply_bert_init = apply_bert_init
- self.learned_pos_embedding = learned_pos_embedding
- self.traceable = traceable
-
- self.embed_tokens = self.build_embedding(
- self.vocab_size, self.embedding_dim, self.padding_idx
- )
- self.embed_scale = embed_scale
-
- if q_noise > 0:
- self.quant_noise = apply_quant_noise_(
- nn.Linear(self.embedding_dim, self.embedding_dim, bias=False),
- q_noise,
- qn_block_size,
- )
- else:
- self.quant_noise = None
-
- self.segment_embeddings = (
- nn.Embedding(self.num_segments, self.embedding_dim, padding_idx=None)
- if self.num_segments > 0
- else None
- )
-
- self.embed_positions = (
- PositionalEmbedding(
- self.max_seq_len,
- self.embedding_dim,
- padding_idx=(self.padding_idx if offset_positions_by_padding else None),
- learned=self.learned_pos_embedding,
- )
- if self.use_position_embeddings
- else None
- )
-
- if encoder_normalize_before:
- self.emb_layer_norm = LayerNorm(self.embedding_dim, export=export)
- else:
- self.emb_layer_norm = None
-
- if self.layerdrop > 0.0:
- self.layers = LayerDropModuleList(p=self.layerdrop)
- else:
- self.layers = nn.ModuleList([])
- self.layers.extend(
- [
- self.build_transformer_sentence_encoder_layer(
- embedding_dim=self.embedding_dim,
- ffn_embedding_dim=ffn_embedding_dim,
- num_attention_heads=num_attention_heads,
- dropout=self.dropout_module.p,
- attention_dropout=attention_dropout,
- activation_dropout=activation_dropout,
- activation_fn=activation_fn,
- export=export,
- q_noise=q_noise,
- qn_block_size=qn_block_size,
- )
- for _ in range(num_encoder_layers)
- ]
- )
-
- # Apply initialization of model params after building the model
- if self.apply_bert_init:
- self.apply(init_bert_params)
-
- def freeze_module_params(m):
- if m is not None:
- for p in m.parameters():
- p.requires_grad = False
-
- if freeze_embeddings:
- freeze_module_params(self.embed_tokens)
- freeze_module_params(self.segment_embeddings)
- freeze_module_params(self.embed_positions)
- freeze_module_params(self.emb_layer_norm)
-
- for layer in range(n_trans_layers_to_freeze):
- freeze_module_params(self.layers[layer])
-
- def build_embedding(self, vocab_size, embedding_dim, padding_idx):
- return nn.Embedding(vocab_size, embedding_dim, padding_idx)
-
- def build_transformer_sentence_encoder_layer(
- self,
- embedding_dim,
- ffn_embedding_dim,
- num_attention_heads,
- dropout,
- attention_dropout,
- activation_dropout,
- activation_fn,
- export,
- q_noise,
- qn_block_size,
- ):
- return TransformerSentenceEncoderLayer(
- embedding_dim=embedding_dim,
- ffn_embedding_dim=ffn_embedding_dim,
- num_attention_heads=num_attention_heads,
- dropout=dropout,
- attention_dropout=attention_dropout,
- activation_dropout=activation_dropout,
- activation_fn=activation_fn,
- export=export,
- q_noise=q_noise,
- qn_block_size=qn_block_size,
- )
-
- def forward(
- self,
- tokens: torch.Tensor,
- segment_labels: torch.Tensor = None,
- last_state_only: bool = False,
- positions: Optional[torch.Tensor] = None,
- token_embeddings: Optional[torch.Tensor] = None,
- attn_mask: Optional[torch.Tensor] = None,
- ) -> Tuple[torch.Tensor, torch.Tensor]:
- is_tpu = tokens.device.type == "xla"
-
- # compute padding mask. This is needed for multi-head attention
- padding_mask = tokens.eq(self.padding_idx)
- if not self.traceable and not is_tpu and not padding_mask.any():
- padding_mask = None
-
- if token_embeddings is not None:
- x = token_embeddings
- else:
- x = self.embed_tokens(tokens)
-
- if self.embed_scale is not None:
- x = x * self.embed_scale
-
- if self.embed_positions is not None:
- x = x + self.embed_positions(tokens, positions=positions)
-
- if self.segment_embeddings is not None and segment_labels is not None:
- x = x + self.segment_embeddings(segment_labels)
-
- if self.quant_noise is not None:
- x = self.quant_noise(x)
-
- if self.emb_layer_norm is not None:
- x = self.emb_layer_norm(x)
-
- x = self.dropout_module(x)
-
- # account for padding while computing the representation
- if padding_mask is not None:
- x = x * (1 - padding_mask.unsqueeze(-1).type_as(x))
-
- # B x T x C -> T x B x C
- x = x.transpose(0, 1)
-
- inner_states = []
- if not last_state_only:
- inner_states.append(x)
-
- for layer in self.layers:
- x, _ = layer(x, self_attn_padding_mask=padding_mask, self_attn_mask=attn_mask)
- if not last_state_only:
- inner_states.append(x)
-
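-        # the first time step corresponds to the first input token (typically CLS);
-        # take its hidden state as the B x C sentence representation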
- sentence_rep = x[0, :, :]
-
- if last_state_only:
- inner_states = [x]
-
- if self.traceable:
- return torch.stack(inner_states), sentence_rep
- else:
- return inner_states, sentence_rep
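-
-
-# ---------------------------------------------------------------------------
-# Hedged usage sketch (not part of the original fairseq file). It illustrates
-# the B x T token input and the (inner_states, sentence_rep) output described
-# in the class docstring; the padding index, vocabulary size, and shapes below
-# are illustrative assumptions only.
-if __name__ == "__main__":
-    import torch
-
-    encoder = TransformerSentenceEncoder(
-        padding_idx=1,           # assumed <pad> id
-        vocab_size=32000,        # hypothetical vocabulary size
-        num_encoder_layers=2,    # kept small so the sketch runs quickly
-        embedding_dim=768,
-        num_attention_heads=8,
-    )
-    tokens = torch.randint(2, 32000, (8, 64))    # B=8 sentences, T=64 tokens, no padding
-    inner_states, sentence_rep = encoder(tokens)
-    # each entry of inner_states has shape T x B x C; sentence_rep is B x C
-    print(len(inner_states), inner_states[-1].shape, sentence_rep.shape)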
diff --git a/spaces/Illumotion/Koboldcpp/otherarch/neox_v2.cpp b/spaces/Illumotion/Koboldcpp/otherarch/neox_v2.cpp
deleted file mode 100644
index 8cfd821adb4dedbb926114d70c2ca0b67313c6fc..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/otherarch/neox_v2.cpp
+++ /dev/null
@@ -1,618 +0,0 @@
-#include "ggml_v2.h"
-#include "otherarch.h"
-
-#include "utils.h"
-
-#include <cassert>
-#include <cmath>
-#include <cstdio>
-#include <cstring>
-#include <fstream>
-#include <map>