diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Devathai Sonna Kavithai Tamil Full Movie Download BETTER.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Devathai Sonna Kavithai Tamil Full Movie Download BETTER.md
deleted file mode 100644
index 436dd98179310c33724ed3537b8bf0f75754eeb8..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Devathai Sonna Kavithai Tamil Full Movie Download BETTER.md
+++ /dev/null
@@ -1,23 +0,0 @@
-
-
How to Download Devathai Sonna Kavithai Tamil Full Movie Online
-
Devathai Sonna Kavithai is a 2014 Tamil romantic movie directed by Thesigan and starring newcomers in the lead roles. The movie is about a young man who falls in love with a girl who speaks to him through poetry. The movie was released on January 1, 2014 and received mixed reviews from critics and audiences.
-
If you are looking for a way to download Devathai Sonna Kavithai Tamil full movie online, you have come to the right place. In this article, we will show you some of the best websites and platforms where you can watch or download this movie legally and safely. We will also give you some tips on how to optimize your search and avoid any malware or viruses.
Best Websites to Download Devathai Sonna Kavithai Tamil Full Movie Online
-
There are many websites that offer Tamil movies for download or streaming, but not all of them are reliable or trustworthy. Some of them may contain harmful ads, pop-ups, or links that can infect your device with malware or viruses. Some of them may also violate the copyright laws and infringe on the rights of the movie makers and distributors.
-
To avoid any such risks, we recommend that you use only the following websites, which are legal and safe for downloading Devathai Sonna Kavithai Tamil full movie online:
-
-
YouTube: YouTube is one of the most popular and widely used platforms for watching and downloading videos online. You can find Devathai Sonna Kavithai Tamil full movie on YouTube by searching for its title or using the keyword "Devathai Sonna Kavithai Tamil Full Movie Download". You can also use filters such as duration, upload date, or quality to narrow down your results. You can watch the movie for free on YouTube or download it using a third-party app or website that allows YouTube video downloads.
-
Dailymotion: Dailymotion is another video-sharing platform that hosts a variety of content, including movies, TV shows, music, sports, and more. You can find Devathai Sonna Kavithai Tamil full movie on Dailymotion by searching for its title or using the keyword "Devathai Sonna Kavithai Tamil Full Movie Download". You can also use filters such as duration, upload date, or quality to narrow down your results. You can watch the movie for free on Dailymotion or download it using a third-party app or website that allows Dailymotion video downloads.
-
-
Tips to Optimize Your Search and Avoid Malware or Viruses
-
While using the above websites to download Devathai Sonna Kavithai Tamil full movie online, you should keep in mind some tips to optimize your search and avoid any malware or viruses:
-
-
Use a VPN: A VPN (Virtual Private Network) is a service that encrypts your internet traffic and hides your IP address and location from prying eyes. This can help you access geo-restricted content, bypass censorship, and protect your privacy and security online. You can use a VPN to access the above websites from anywhere in the world and enjoy Devathai Sonna Kavithai Tamil full movie without any hassle.
-
Use an antivirus: An antivirus is software that detects and removes any malicious programs or files that may harm your device or data. You should always use an antivirus to scan your device before and after downloading any file from the internet. This can help you prevent malware or viruses from infecting your device or stealing your information.
-
Use a trusted source: As mentioned earlier, not all websites that offer Tamil movies for download or streaming are reliable or trustworthy. You should always use a trusted source that has a good reputation and reviews from other users. You should also avoid clicking on any suspicious ads, pop-ups, or links that may redirect you to unwanted or harmful sites.
-
-
Conclusion
-
Devathai Sonna Kavithai is a 2014 Tamil romantic movie that you can watch or download online through legal and safe platforms such as YouTube and Dailymotion. Use the tips above to optimize your search and keep your device protected from malware or viruses.
81aa517590
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Fifa 12 [CRACK ONLY] 100 WORKING Serial Key BETTER.md b/spaces/1gistliPinn/ChatGPT4/Examples/Fifa 12 [CRACK ONLY] 100 WORKING Serial Key BETTER.md
deleted file mode 100644
index e909703bb775ec1a7d2db175c8e4555f92cdd49d..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Fifa 12 [CRACK ONLY] 100 WORKING Serial Key BETTER.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
How to Download Alias TV Series in 480p Resolution
-
If you are a fan of action, thriller, and science fiction genres, you might have heard of Alias, a popular TV series that ran from 2001 to 2006. The show starred Jennifer Garner as Sydney Bristow, a double agent for the CIA who posed as an operative for a criminal organization called SD-6. The series was created by J.J. Abrams, who also produced other hit shows like Lost, Fringe, and Westworld.
-
In this article, we will tell you more about Alias TV series and why you should watch it. We will also explain what 480p resolution is and why you need it for downloading videos. Finally, we will show you how to download Alias TV series in 480p resolution using a free tool called YouTube-DL.
What is Alias TV Series and Why You Should Watch It
-
The Plot and Characters of Alias
-
The plot of Alias revolves around Sydney Bristow, who was recruited as a spy by a man who claimed to work for the CIA when she was a college student. She later discovered that she was actually working for SD-6, a rogue faction of the CIA that was part of a larger alliance of criminal organizations. She then decided to become a double agent for the real CIA and work to bring down SD-6 and its allies.
-
Along the way, she faced many dangers and challenges, such as dealing with her estranged father Jack Bristow, who was also a double agent, her complicated relationship with her fellow agent Michael Vaughn, her best friend Francie Calfo, who was replaced by a look-alike assassin, and her mother Irina Derevko, who was a former KGB spy and a key figure in a global conspiracy involving an ancient prophet named Rambaldi.
-
The show featured many twists and turns, cliffhangers, action sequences, gadgets, disguises, and exotic locations. It also had a stellar cast of supporting characters, such as Arvin Sloane, the leader of SD-6 who had a personal connection to Sydney; Marshall Flinkman, the quirky tech genius who helped Sydney on her missions; Marcus Dixon, Sydney's loyal partner at SD-6; Julian Sark, a ruthless mercenary who worked for various factions; Lauren Reed, Vaughn's wife who turned out to be a double agent; Nadia Santos, Sydney's half-sister who was also involved in the Rambaldi prophecy; Rachel Gibson, a young hacker who joined Sydney's team after being betrayed by her employer; Thomas Grace, a former Delta Force soldier who became Sydney's new partner; Kelly Peyton, a former friend of Rachel who became an enemy agent; and Renée Rienne, a mysterious freelance spy who had ties to Sydney's past.
-
The Awards and Recognition of Alias
-
Alias was well received by critics and audiences alike. It won four Emmy Awards out of 11 nominations, including Outstanding Lead Actress in a Drama Series for Jennifer Garner in 2002. It also won a Golden Globe Award for Best Actress in a Television Series – Drama for Garner in 2002. The show was nominated for several other awards, such as Screen Actors Guild Awards, Teen Choice Awards, Saturn Awards, and People's Choice Awards.
-
Alias was also included in several "best of" lists by various media outlets. For example, it was ranked number 36 on TV Guide's list of "50 Greatest TV Shows of All Time" in 2002. It was also ranked number seven on Entertainment Weekly's list of "The New Classics: TV" in 2008. The American Film Institute named it one of the top ten television programs of the year in 2003 and 2005. The show also influenced other spy-themed shows, such as Chuck, Nikita, and Covert Affairs.
-
What is 480p Resolution and Why You Need It
-
The Definition and Features of 480p Resolution
-
480p resolution is a term that refers to the video quality of a digital display. It means that the video has 480 horizontal lines of pixels that are progressively scanned, meaning that each line is displayed in sequence. The "p" stands for progressive scan, as opposed to interlaced scan, which alternates between odd and even lines of pixels. Progressive scan produces a smoother and clearer image than interlaced scan.
-
The aspect ratio of 480p resolution is usually 4:3, which means that the screen is four units wide for every three units of height. However, some widescreen formats, such as 16:9, can also use 480p resolution. The pixel dimensions of 480p resolution are typically 640 x 480 for the 4:3 aspect ratio and 854 x 480 for the 16:9 aspect ratio.
-
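If you like to double-check numbers like these, here is a tiny Python sketch (purely illustrative) that derives the widths quoted above from the 480-pixel height and the two aspect ratios:

```python
# Illustrative only: derive 480p frame widths from the aspect ratios above.
height = 480
for name, ratio in [("4:3", 4 / 3), ("16:9", 16 / 9)]:
    width = height * ratio
    print(f"{name}: {width:.1f} x {height}")
# 4:3  -> 640.0 x 480
# 16:9 -> 853.3 x 480, usually rounded up to 854 so the width stays an even number
```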
-
The Benefits and Drawbacks of 480p Resolution
-
One of the main benefits of 480p resolution is that it requires less bandwidth and storage space than higher resolutions, such as 720p, 1080p, or 4K. This means that you can download and stream videos faster and easier with 480p resolution. It also means that you can fit more videos on your device or hard drive with 480p resolution.
-
Another benefit of 480p resolution is that it is compatible with most devices and platforms, such as TVs, computers, smartphones, tablets, DVD players, and game consoles. You can watch videos in 480p resolution on almost any screen without worrying about compatibility issues or format conversions.
-
However, 480p resolution also has some drawbacks. One of them is that it has lower image quality than higher resolutions, especially when viewed on larger screens or from close distances. You might notice pixelation, blurriness, or distortion when watching videos in 480p resolution on a big screen or a high-definition display. You might also miss some details or colors that are present in higher resolutions.
-
Another drawback of 480p resolution is that it might not be suitable for some types of videos, such as those that have fast motion, complex graphics, or high contrast. These videos might look choppy, blurry, or noisy when viewed in 480p resolution. You might also experience some lagging or buffering when streaming these videos in 480p resolution.
-
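To put the bandwidth and storage point in concrete terms, here is a rough back-of-the-envelope sketch in Python; the bitrates are assumptions chosen for illustration, not measured values for any particular video:

```python
# Rough storage estimate: size ≈ bitrate × duration (bitrates below are assumptions).
def size_in_mb(bitrate_kbps, minutes):
    bytes_total = bitrate_kbps * 1000 / 8 * minutes * 60
    return bytes_total / 1_000_000

episode_minutes = 42  # a typical TV episode length
for label, kbps in [("480p at ~1,000 kbps", 1000), ("1080p at ~5,000 kbps", 5000)]:
    print(f"{label}: about {size_in_mb(kbps, episode_minutes):.0f} MB per episode")
```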
How to Download Alias TV Series in 480p Resolution Using YouTube-DL
-
What is YouTube-DL and How to Install It
-
YouTube-DL is a free and open-source command-line tool that allows you to download videos from YouTube and other websites. You can use it to download videos in various formats and resolutions, including 480p. You can also use it to download audio files, subtitles, playlists, channels, and live streams.
-
To install YouTube-DL on your device, you need to follow these steps:
-
-
Download the latest version of YouTube-DL from its official website: https://youtube-dl.org/
-
Save the downloaded youtube-dl.exe file in a folder of your choice.
-
Add the folder to your system's PATH environment variable so that you can run YouTube-DL from any directory.
-
Open a command prompt window and type youtube-dl --version to check if YouTube-DL is installed correctly.
-
-
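If you already have Python installed, an alternative to the standalone executable is to install YouTube-DL as a Python package with pip; this route is optional and not required for the command-line steps above:

```python
# Optional alternative to the .exe route: first run `pip install youtube-dl`,
# then confirm the package is importable and check its version.
from youtube_dl.version import __version__

print(__version__)  # prints the installed youtube-dl version string
```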
How to Find and Select the Video Quality from YouTube-DL
-
To find and select the video quality from YouTube-DL, you need to follow these steps:
-
-
Copy the URL of the video that you want to download from YouTube or any other website.
-
Open a command prompt window and type youtube-dl -F [URL] to list all the available formats and resolutions for the video.
-
Look for the format code that corresponds to the video quality that you want to download. For example, you might see an entry like this: 18 mp4 640x360 medium, avc1.42001E, mp4a.40.2@96k (best). Strictly speaking, format 18 is a 640x360 (360p) file; if a true 480p entry such as 854x480 is listed, note its code instead, but the steps below keep 18 as the running example. A Python version of this step is sketched after this list.
-
Note down the format code (in this case, 18) for later use.
-
-
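The same format listing can be done from Python through YouTube-DL's embedded API. The sketch below assumes the youtube-dl package is installed via pip; the URL is a placeholder, not a real link to the show:

```python
# Python equivalent of `youtube-dl -F [URL]`: list format codes and resolutions.
import youtube_dl

url = "https://www.youtube.com/watch?v=VIDEO_ID"  # placeholder URL
with youtube_dl.YoutubeDL({"quiet": True}) as ydl:
    info = ydl.extract_info(url, download=False)  # fetch metadata only, no download

for fmt in info["formats"]:
    # format_id is the code you pass to -f; height tells you whether it is 480p
    print(fmt["format_id"], fmt.get("ext"), fmt.get("height"))
```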
How to Download the Video Using YouTube-DL
-
To download the video using YouTube-DL , you need to follow these steps:
-
-
Open a command prompt window and type youtube-dl -f [format code] [URL] to download the video with the selected format and resolution. For example, if you want to download Alias TV series in 480p resolution with MP4 format, you might type something like this: youtube-dl -f 18 https://www.youtube.com/watch?v=0hX-YLAjI_A
-
Wait for the download to finish. You can check the progress and speed of the download on the command prompt window.
-
Find the downloaded video file in the same folder where you saved the youtube-dl.exe file. You can rename or move the file as you wish.
-
-
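If you prefer to script the download instead of typing the command, the same step works through YouTube-DL's Python API. This is only a sketch; the format code and URL are placeholders that you should replace with the values noted in the previous steps:

```python
# Python equivalent of `youtube-dl -f 18 [URL]`.
import youtube_dl

ydl_opts = {
    "format": "18",                  # format code noted from the -F listing
    "outtmpl": "%(title)s.%(ext)s",  # save as "<video title>.<extension>"
}
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=VIDEO_ID"])  # placeholder URL
```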
Conclusion
-
In this article, we have shown you how to download Alias TV series in 480p resolution using YouTube-DL. We have also given you some information about Alias TV series and why you should watch it, as well as 480p resolution and why you need it. We hope that you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below.
-
FAQs
-
Q: Is YouTube-DL legal?
-
A: The YouTube-DL program itself is generally considered legal to use. However, downloading videos from YouTube or other websites might violate their terms of service or copyright laws, so you should always check the source and legality of the videos before downloading them.
-
Q: Can I use YouTube-DL to download videos from other websites besides YouTube?
-
A: Yes, YouTube-DL supports many other websites, such as Vimeo, Dailymotion, Facebook, Instagram, Twitter, and more. You can check the full list of supported websites here: https://ytdl-org.github.io/youtube-dl/supportedsites.html
-
Q: Can I use YouTube-DL to download videos in other resolutions besides 480p?
-
A: Yes, YouTube-DL can download videos in various resolutions, such as 240p, 360p, 720p, 1080p, or even 4K. You just need to find and select the appropriate format code from the list of available formats and resolutions for each video.
-
Q: Can I use YouTube-DL to download audio files or subtitles from videos?
-
A: Yes, YouTube-DL can download audio files or subtitles from videos. You can use the -x option to extract audio files from videos, or the --write-sub option to download subtitles from videos. You can also specify the format and language of the audio files or subtitles using other options. You can check the full list of options and examples here: https://github.com/ytdl-org/youtube-dl/blob/master/README.md#readme
-
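For reference, the same -x and --write-sub behaviour is available through the Python API's options dictionary. This is a sketch that assumes FFmpeg is installed and on your PATH; the URL is a placeholder:

```python
# Extract MP3 audio and download English subtitles for a video.
import youtube_dl

ydl_opts = {
    "writesubtitles": True,            # same effect as --write-sub
    "subtitleslangs": ["en"],
    "postprocessors": [{               # same effect as -x with an MP3 target
        "key": "FFmpegExtractAudio",
        "preferredcodec": "mp3",
        "preferredquality": "192",
    }],
}
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=VIDEO_ID"])  # placeholder URL
```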
Q: Can I use YouTube-DL to download playlists or channels from YouTube?
-
A: Yes, YouTube-DL can download playlists or channels from YouTube. You just need to copy and paste the URL of the playlist or channel instead of a single video. You can also use the --playlist-start and --playlist-end options to specify which videos from the playlist or channel you want to download.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Nicoo Free Fire Max APK and Enhance Your Gaming Experience.md b/spaces/1phancelerku/anime-remove-background/Download Nicoo Free Fire Max APK and Enhance Your Gaming Experience.md
deleted file mode 100644
index 90353327ea7d4e7e488b7b806790d4741cc9eab1..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Nicoo Free Fire Max APK and Enhance Your Gaming Experience.md
+++ /dev/null
@@ -1,101 +0,0 @@
-
-
Nicoo Free Fire Max APK Download 2023: How to Get Free Skins and More
-
If you are a fan of the popular battle royale shooter Free Fire, you might have heard of Nicoo Free Fire Max, a third-party app that allows you to customize your avatars with various skins and accessories. But what is Nicoo Free Fire Max exactly, and how can you download and use it safely? In this article, we will answer these questions and more, so keep reading!
-
What is Nicoo Free Fire Max?
-
Nicoo Free Fire Max is an action app developed by Naviemu.inc that works as a skin injector for Free Fire. It lets you unlock and apply different skins for your characters, weapons, vehicles, parachutes, and more. You can also change the background and theme of the game, as well as the interface and sound effects. With Nicoo Free Fire Max, you can personalize your gaming experience and make it more fun and unique.
Some of the features that Nicoo Free Fire Max offers are:
-
-
Free access to all skins in the game, including premium and exclusive ones.
-
Easy to use interface with a simple tap-to-apply function.
-
No need to root your device or modify the game files.
-
Compatible with both Android and PC devices.
-
Regular updates with new skins and features.
-
-
How to Download and Install Nicoo Free Fire Max APK
-
To download and install Nicoo Free Fire Max APK on your device, follow these steps:
-
-
Go to the official website of Nicoo Free Fire Max or click on this link .
-
Select the latest version of the app and click on the download button.
-
Wait for the download to finish and then locate the APK file on your device.
-
Enable the installation from unknown sources on your device settings.
-
Tap on the APK file and follow the instructions to install the app.
-
Launch the app and grant it the necessary permissions.
-
Open Free Fire from the app and enjoy the free skins!
-
-
Why Use Nicoo Free Fire Max?
-
Nicoo Free Fire Max is a great app for those who want to spice up their gameplay with different skins and accessories. But what are the benefits and risks of using it?
-
-
Benefits of Using Nicoo Free Fire Max
-
Some of the benefits of using Nicoo Free Fire Max are:
-
-
You can save money by not having to buy diamonds or coins to get skins in the game.
-
You can impress your friends and enemies with your cool and stylish appearance.
-
You can enhance your performance and confidence in the game with better skins.
-
You can explore different combinations and styles with various skins.
-
-
Risks of Using Nicoo Free Fire Max
-
Some of the risks of using Nicoo Free Fire Max are:
-
-
You might get banned from the game if you are detected by the anti-cheat system.
-
You might expose your device to malware or viruses if you download from untrusted sources.
-
You might lose your account data or personal information if you give them to fake or phishing websites.
-
You might violate the terms and conditions of the game by using an unauthorized app.
-
-
Alternatives to Nicoo Free Fire Max
-
If you are not comfortable with using Nicoo Free Fire Max, or you want to try other apps that offer similar features, you can check out these alternatives:
-
Lulubox
-
Lulubox is another popular app that allows you to get free skins and mods for various games, including Free Fire. It also has a built-in game booster that can improve your device performance and battery life. You can download Lulubox from its official website or from the Google Play Store .
-
Tool Skin
-
Tool Skin is a simple and lightweight app that lets you change the skins of your characters, weapons, backpacks, and more in Free Fire. It has a user-friendly interface and a large collection of skins to choose from. You can download Tool Skin from its official website or from the Google Play Store .
-
Conclusion
-
Nicoo Free Fire Max is an app that can help you customize your Free Fire gameplay with various skins and accessories. It is easy to use and compatible with both Android and PC devices. However, it also comes with some risks, such as getting banned or infected by malware. Therefore, you should use it at your own discretion and with caution. Alternatively, you can try other apps like Lulubox or Tool Skin that offer similar features.
-
Summary of the article
-
In this article, we have discussed the following points:
-
-
What is Nicoo Free Fire Max and what are its features?
-
How to download and install Nicoo Free Fire Max APK on your device?
-
Why use Nicoo Free Fire Max and what are the benefits and risks of using it?
-
What are some alternatives to Nicoo Free Fire Max that you can try?
-
-
FAQs
-
Here are some frequently asked questions about Nicoo Free Fire Max:
-
-
Is Nicoo Free Fire Max safe to use?
-Nicoo Free Fire Max is not an official app from the developers of Free Fire, so it is not guaranteed to be safe or secure. You should only download it from trusted sources and scan it for viruses before installing it. You should also avoid giving your account details or personal information to any website or app that claims to be associated with Nicoo Free Fire Max.
-
Is Nicoo Free Fire Max legal to use?
-Nicoo Free Fire Max is not legal to use, as it violates the terms and conditions of Free Fire. Using it may result in your account being banned or suspended by the game authorities. You should only use it at your own risk and responsibility.
-
Do other players see my skins when I use Nicoo Free Fire Max?
-No, other players do not see your skins when you use Nicoo Free Fire Max. The skins are only visible to you on your device, as they are not part of the game data. Therefore, using Nicoo Free Fire Max does not give you any advantage or disadvantage over other players.
-
Does Nicoo Free Fire Max work with Free Fire Max?
-Yes, Nicoo Free Fire Max works with both Free Fire and Free Fire Max, as they are based on the same game engine. However, you may need to update the app regularly to match the latest version of the game.
-
How can I contact the developers of Nicoo Free Fire Max?
-You can contact the developers of Nicoo Free Fire Max by visiting their official website or by sending them an email at support@naviemu.com. You can also follow them on their social media accounts for updates and news.
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/models/unet_1d_blocks.py b/spaces/1toTree/lora_test/ppdiffusers/models/unet_1d_blocks.py
deleted file mode 100644
index a895423756b7a19bb6c6f42327fb1d24fa623c50..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/models/unet_1d_blocks.py
+++ /dev/null
@@ -1,668 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import math
-
-import paddle
-import paddle.nn.functional as F
-from paddle import nn
-
-from .resnet import Downsample1D, ResidualTemporalBlock1D, Upsample1D, rearrange_dims
-
-
-class DownResnetBlock1D(nn.Layer):
- def __init__(
- self,
- in_channels,
- out_channels=None,
- num_layers=1,
- conv_shortcut=False,
- temb_channels=32,
- groups=32,
- groups_out=None,
- non_linearity=None,
- time_embedding_norm="default",
- output_scale_factor=1.0,
- add_downsample=True,
- ):
- super().__init__()
- self.in_channels = in_channels
- out_channels = in_channels if out_channels is None else out_channels
- self.out_channels = out_channels
- self.use_conv_shortcut = conv_shortcut
- self.time_embedding_norm = time_embedding_norm
- self.add_downsample = add_downsample
- self.output_scale_factor = output_scale_factor
-
- if groups_out is None:
- groups_out = groups
-
- # there will always be at least one resnet
- resnets = [ResidualTemporalBlock1D(in_channels, out_channels, embed_dim=temb_channels)]
-
- for _ in range(num_layers):
- resnets.append(ResidualTemporalBlock1D(out_channels, out_channels, embed_dim=temb_channels))
-
- self.resnets = nn.LayerList(resnets)
-
- if non_linearity == "swish":
- self.nonlinearity = lambda x: F.silu(x)
- elif non_linearity == "mish":
- self.nonlinearity = nn.Mish()
- elif non_linearity == "silu":
- self.nonlinearity = nn.Silu()
- else:
- self.nonlinearity = None
-
- self.downsample = None
- if add_downsample:
- self.downsample = Downsample1D(out_channels, use_conv=True, padding=1)
-
- def forward(self, hidden_states, temb=None):
- output_states = ()
-
- hidden_states = self.resnets[0](hidden_states, temb)
- for resnet in self.resnets[1:]:
- hidden_states = resnet(hidden_states, temb)
-
- output_states += (hidden_states,)
-
- if self.nonlinearity is not None:
- hidden_states = self.nonlinearity(hidden_states)
-
- if self.downsample is not None:
- hidden_states = self.downsample(hidden_states)
-
- return hidden_states, output_states
-
-
-class UpResnetBlock1D(nn.Layer):
- def __init__(
- self,
- in_channels,
- out_channels=None,
- num_layers=1,
- temb_channels=32,
- groups=32,
- groups_out=None,
- non_linearity=None,
- time_embedding_norm="default",
- output_scale_factor=1.0,
- add_upsample=True,
- ):
- super().__init__()
- self.in_channels = in_channels
- out_channels = in_channels if out_channels is None else out_channels
- self.out_channels = out_channels
- self.time_embedding_norm = time_embedding_norm
- self.add_upsample = add_upsample
- self.output_scale_factor = output_scale_factor
-
- if groups_out is None:
- groups_out = groups
-
- # there will always be at least one resnet
- resnets = [ResidualTemporalBlock1D(2 * in_channels, out_channels, embed_dim=temb_channels)]
-
- for _ in range(num_layers):
- resnets.append(ResidualTemporalBlock1D(out_channels, out_channels, embed_dim=temb_channels))
-
- self.resnets = nn.LayerList(resnets)
-
- if non_linearity == "swish":
- self.nonlinearity = lambda x: F.silu(x)
- elif non_linearity == "mish":
- self.nonlinearity = nn.Mish()
- elif non_linearity == "silu":
- self.nonlinearity = nn.Silu()
- else:
- self.nonlinearity = None
-
- self.upsample = None
- if add_upsample:
- self.upsample = Upsample1D(out_channels, use_conv_transpose=True)
-
- def forward(self, hidden_states, res_hidden_states_tuple=None, temb=None):
- if res_hidden_states_tuple is not None:
- res_hidden_states = res_hidden_states_tuple[-1]
- hidden_states = paddle.concat((hidden_states, res_hidden_states), axis=1)
-
- hidden_states = self.resnets[0](hidden_states, temb)
- for resnet in self.resnets[1:]:
- hidden_states = resnet(hidden_states, temb)
-
- if self.nonlinearity is not None:
- hidden_states = self.nonlinearity(hidden_states)
-
- if self.upsample is not None:
- hidden_states = self.upsample(hidden_states)
-
- return hidden_states
-
-
-class ValueFunctionMidBlock1D(nn.Layer):
- def __init__(self, in_channels, out_channels, embed_dim):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.embed_dim = embed_dim
-
- self.res1 = ResidualTemporalBlock1D(in_channels, in_channels // 2, embed_dim=embed_dim)
- self.down1 = Downsample1D(out_channels // 2, use_conv=True)
- self.res2 = ResidualTemporalBlock1D(in_channels // 2, in_channels // 4, embed_dim=embed_dim)
- self.down2 = Downsample1D(out_channels // 4, use_conv=True)
-
- def forward(self, x, temb=None):
- x = self.res1(x, temb)
- x = self.down1(x)
- x = self.res2(x, temb)
- x = self.down2(x)
- return x
-
-
-class MidResTemporalBlock1D(nn.Layer):
- def __init__(
- self,
- in_channels,
- out_channels,
- embed_dim,
- num_layers: int = 1,
- add_downsample: bool = False,
- add_upsample: bool = False,
- non_linearity=None,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.add_downsample = add_downsample
-
- # there will always be at least one resnet
- resnets = [ResidualTemporalBlock1D(in_channels, out_channels, embed_dim=embed_dim)]
-
- for _ in range(num_layers):
- resnets.append(ResidualTemporalBlock1D(out_channels, out_channels, embed_dim=embed_dim))
-
- self.resnets = nn.LayerList(resnets)
-
- if non_linearity == "swish":
- self.nonlinearity = lambda x: F.silu(x)
- elif non_linearity == "mish":
- self.nonlinearity = nn.Mish()
- elif non_linearity == "silu":
- self.nonlinearity = nn.Silu()
- else:
- self.nonlinearity = None
-
- self.upsample = None
- if add_upsample:
-            self.upsample = Upsample1D(out_channels, use_conv=True)
-
- self.downsample = None
- if add_downsample:
- self.downsample = Downsample1D(out_channels, use_conv=True)
-
- if self.upsample and self.downsample:
- raise ValueError("Block cannot downsample and upsample")
-
- def forward(self, hidden_states, temb):
- hidden_states = self.resnets[0](hidden_states, temb)
- for resnet in self.resnets[1:]:
- hidden_states = resnet(hidden_states, temb)
-
- if self.upsample:
- hidden_states = self.upsample(hidden_states)
- if self.downsample:
-            hidden_states = self.downsample(hidden_states)
-
- return hidden_states
-
-
-class OutConv1DBlock(nn.Layer):
- def __init__(self, num_groups_out, out_channels, embed_dim, act_fn):
- super().__init__()
- self.final_conv1d_1 = nn.Conv1D(embed_dim, embed_dim, 5, padding=2)
- self.final_conv1d_gn = nn.GroupNorm(num_groups_out, embed_dim)
- if act_fn == "silu":
- self.final_conv1d_act = nn.Silu()
- if act_fn == "mish":
- self.final_conv1d_act = nn.Mish()
- self.final_conv1d_2 = nn.Conv1D(embed_dim, out_channels, 1)
-
- def forward(self, hidden_states, temb=None):
- hidden_states = self.final_conv1d_1(hidden_states)
- hidden_states = rearrange_dims(hidden_states)
- hidden_states = self.final_conv1d_gn(hidden_states)
- hidden_states = rearrange_dims(hidden_states)
- hidden_states = self.final_conv1d_act(hidden_states)
- hidden_states = self.final_conv1d_2(hidden_states)
- return hidden_states
-
-
-class OutValueFunctionBlock(nn.Layer):
- def __init__(self, fc_dim, embed_dim):
- super().__init__()
- self.final_block = nn.LayerList(
- [
- nn.Linear(fc_dim + embed_dim, fc_dim // 2),
- nn.Mish(),
- nn.Linear(fc_dim // 2, 1),
- ]
- )
-
- def forward(self, hidden_states, temb):
- hidden_states = hidden_states.reshape([hidden_states.shape[0], -1])
- hidden_states = paddle.concat((hidden_states, temb), axis=-1)
- for layer in self.final_block:
- hidden_states = layer(hidden_states)
-
- return hidden_states
-
-
-_kernels = {
- "linear": [1 / 8, 3 / 8, 3 / 8, 1 / 8],
- "cubic": [-0.01171875, -0.03515625, 0.11328125, 0.43359375, 0.43359375, 0.11328125, -0.03515625, -0.01171875],
- "lanczos3": [
- 0.003689131001010537,
- 0.015056144446134567,
- -0.03399861603975296,
- -0.066637322306633,
- 0.13550527393817902,
- 0.44638532400131226,
- 0.44638532400131226,
- 0.13550527393817902,
- -0.066637322306633,
- -0.03399861603975296,
- 0.015056144446134567,
- 0.003689131001010537,
- ],
-}
-
-
-class Downsample1d(nn.Layer):
- def __init__(self, kernel="linear", pad_mode="reflect"):
- super().__init__()
- self.pad_mode = pad_mode
- kernel_1d = paddle.to_tensor(_kernels[kernel])
- self.pad = kernel_1d.shape[0] // 2 - 1
- self.register_buffer("kernel", kernel_1d)
-
- def forward(self, hidden_states):
- hidden_states = F.pad(hidden_states, (self.pad,) * 2, self.pad_mode, data_format="NCL")
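-        # build a [C, C, K] conv weight whose diagonal entries hold the 1-D kernel, so each channel is filtered independently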
- weight = paddle.zeros([hidden_states.shape[1], hidden_states.shape[1], self.kernel.shape[0]])
- indices = paddle.arange(hidden_states.shape[1])
- weight[indices, indices] = self.kernel.cast(weight.dtype)
- return F.conv1d(hidden_states, weight, stride=2)
-
-
-class Upsample1d(nn.Layer):
- def __init__(self, kernel="linear", pad_mode="reflect"):
- super().__init__()
- self.pad_mode = pad_mode
- kernel_1d = paddle.to_tensor(_kernels[kernel]) * 2
- self.pad = kernel_1d.shape[0] // 2 - 1
- self.register_buffer("kernel", kernel_1d)
-
- def forward(self, hidden_states, temb=None):
- hidden_states = F.pad(hidden_states, ((self.pad + 1) // 2,) * 2, self.pad_mode, data_format="NCL")
- weight = paddle.zeros([hidden_states.shape[1], hidden_states.shape[1], self.kernel.shape[0]])
- indices = paddle.arange(hidden_states.shape[1])
- weight[indices, indices] = self.kernel.cast(weight.dtype)
- return F.conv1d_transpose(hidden_states, weight, stride=2, padding=self.pad * 2 + 1)
-
-
-class SelfAttention1d(nn.Layer):
- def __init__(self, in_channels, n_head=1, dropout_rate=0.0):
- super().__init__()
- self.channels = in_channels
- self.group_norm = nn.GroupNorm(1, num_channels=in_channels)
- self.num_heads = n_head
-
- self.query = nn.Linear(self.channels, self.channels)
- self.key = nn.Linear(self.channels, self.channels)
- self.value = nn.Linear(self.channels, self.channels)
-
- self.proj_attn = nn.Linear(self.channels, self.channels)
-
- self.dropout = nn.Dropout(dropout_rate)
-
- # (TODO junnyu) refactor self attention
- def transpose_for_scores(self, projection: paddle.Tensor) -> paddle.Tensor:
- new_projection_shape = projection.shape[:-1] + [self.num_heads, -1]
- # move heads to 2nd position (B, T, H * D) -> (B, T, H, D) -> (B, H, T, D)
- new_projection = projection.reshape(new_projection_shape).transpose([0, 2, 1, 3])
- return new_projection
-
- def forward(self, hidden_states):
- residual = hidden_states
-
- hidden_states = self.group_norm(hidden_states)
- hidden_states = hidden_states.transpose([0, 2, 1])
-
- query_proj = self.query(hidden_states)
- key_proj = self.key(hidden_states)
- value_proj = self.value(hidden_states)
-
- query_states = self.transpose_for_scores(query_proj)
- key_states = self.transpose_for_scores(key_proj)
- value_states = self.transpose_for_scores(value_proj)
-
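-        # split the 1/sqrt(d_head) attention scaling between queries and keys for better numerical precision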
- scale = 1 / math.sqrt(math.sqrt(key_states.shape[-1]))
-
- attention_scores = paddle.matmul(query_states * scale, key_states * scale, transpose_y=True)
- attention_probs = F.softmax(attention_scores, axis=-1)
-
- # compute attention output
- hidden_states = paddle.matmul(attention_probs, value_states)
-
- hidden_states = hidden_states.transpose([0, 2, 1, 3])
- new_hidden_states_shape = hidden_states.shape[:-2] + [
- self.channels,
- ]
- hidden_states = hidden_states.reshape(new_hidden_states_shape)
-
- # compute next hidden_states
- hidden_states = self.proj_attn(hidden_states)
- hidden_states = hidden_states.transpose([0, 2, 1])
- hidden_states = self.dropout(hidden_states)
- output = hidden_states + residual
-
- return output
-
-
-class ResConvBlock(nn.Layer):
- def __init__(self, in_channels, mid_channels, out_channels, is_last=False):
- super().__init__()
- self.is_last = is_last
- self.has_conv_skip = in_channels != out_channels
-
- if self.has_conv_skip:
- self.conv_skip = nn.Conv1D(in_channels, out_channels, 1, bias_attr=False)
-
- self.conv_1 = nn.Conv1D(in_channels, mid_channels, 5, padding=2)
- self.group_norm_1 = nn.GroupNorm(1, mid_channels)
- self.gelu_1 = nn.GELU()
- self.conv_2 = nn.Conv1D(mid_channels, out_channels, 5, padding=2)
-
- if not self.is_last:
- self.group_norm_2 = nn.GroupNorm(1, out_channels)
- self.gelu_2 = nn.GELU()
-
- def forward(self, hidden_states):
- residual = self.conv_skip(hidden_states) if self.has_conv_skip else hidden_states
-
- hidden_states = self.conv_1(hidden_states)
- hidden_states = self.group_norm_1(hidden_states)
- hidden_states = self.gelu_1(hidden_states)
- hidden_states = self.conv_2(hidden_states)
-
- if not self.is_last:
- hidden_states = self.group_norm_2(hidden_states)
- hidden_states = self.gelu_2(hidden_states)
-
- output = hidden_states + residual
- return output
-
-
-class UNetMidBlock1D(nn.Layer):
- def __init__(self, mid_channels, in_channels, out_channels=None):
- super().__init__()
-
- out_channels = in_channels if out_channels is None else out_channels
-
- # there is always at least one resnet
- self.down = Downsample1d("cubic")
- resnets = [
- ResConvBlock(in_channels, mid_channels, mid_channels),
- ResConvBlock(mid_channels, mid_channels, mid_channels),
- ResConvBlock(mid_channels, mid_channels, mid_channels),
- ResConvBlock(mid_channels, mid_channels, mid_channels),
- ResConvBlock(mid_channels, mid_channels, mid_channels),
- ResConvBlock(mid_channels, mid_channels, out_channels),
- ]
- attentions = [
- SelfAttention1d(mid_channels, mid_channels // 32),
- SelfAttention1d(mid_channels, mid_channels // 32),
- SelfAttention1d(mid_channels, mid_channels // 32),
- SelfAttention1d(mid_channels, mid_channels // 32),
- SelfAttention1d(mid_channels, mid_channels // 32),
- SelfAttention1d(out_channels, out_channels // 32),
- ]
- self.up = Upsample1d(kernel="cubic")
-
- self.attentions = nn.LayerList(attentions)
- self.resnets = nn.LayerList(resnets)
-
- def forward(self, hidden_states, temb=None):
- hidden_states = self.down(hidden_states)
- for attn, resnet in zip(self.attentions, self.resnets):
- hidden_states = resnet(hidden_states)
- hidden_states = attn(hidden_states)
-
- hidden_states = self.up(hidden_states)
-
- return hidden_states
-
-
-class AttnDownBlock1D(nn.Layer):
- def __init__(self, out_channels, in_channels, mid_channels=None):
- super().__init__()
- mid_channels = out_channels if mid_channels is None else mid_channels
-
- self.down = Downsample1d("cubic")
- resnets = [
- ResConvBlock(in_channels, mid_channels, mid_channels),
- ResConvBlock(mid_channels, mid_channels, mid_channels),
- ResConvBlock(mid_channels, mid_channels, out_channels),
- ]
- attentions = [
- SelfAttention1d(mid_channels, mid_channels // 32),
- SelfAttention1d(mid_channels, mid_channels // 32),
- SelfAttention1d(out_channels, out_channels // 32),
- ]
-
- self.attentions = nn.LayerList(attentions)
- self.resnets = nn.LayerList(resnets)
-
- def forward(self, hidden_states, temb=None):
- hidden_states = self.down(hidden_states)
-
- for resnet, attn in zip(self.resnets, self.attentions):
- hidden_states = resnet(hidden_states)
- hidden_states = attn(hidden_states)
-
- return hidden_states, (hidden_states,)
-
-
-class DownBlock1D(nn.Layer):
- def __init__(self, out_channels, in_channels, mid_channels=None):
- super().__init__()
- mid_channels = out_channels if mid_channels is None else mid_channels
-
- self.down = Downsample1d("cubic")
- resnets = [
- ResConvBlock(in_channels, mid_channels, mid_channels),
- ResConvBlock(mid_channels, mid_channels, mid_channels),
- ResConvBlock(mid_channels, mid_channels, out_channels),
- ]
-
- self.resnets = nn.LayerList(resnets)
-
- def forward(self, hidden_states, temb=None):
- hidden_states = self.down(hidden_states)
-
- for resnet in self.resnets:
- hidden_states = resnet(hidden_states)
-
- return hidden_states, (hidden_states,)
-
-
-class DownBlock1DNoSkip(nn.Layer):
- def __init__(self, out_channels, in_channels, mid_channels=None):
- super().__init__()
- mid_channels = out_channels if mid_channels is None else mid_channels
-
- resnets = [
- ResConvBlock(in_channels, mid_channels, mid_channels),
- ResConvBlock(mid_channels, mid_channels, mid_channels),
- ResConvBlock(mid_channels, mid_channels, out_channels),
- ]
-
- self.resnets = nn.LayerList(resnets)
-
- def forward(self, hidden_states, temb=None):
- hidden_states = paddle.concat([hidden_states, temb], axis=1)
- for resnet in self.resnets:
- hidden_states = resnet(hidden_states)
-
- return hidden_states, (hidden_states,)
-
-
-class AttnUpBlock1D(nn.Layer):
- def __init__(self, in_channels, out_channels, mid_channels=None):
- super().__init__()
- mid_channels = out_channels if mid_channels is None else mid_channels
-
- resnets = [
- ResConvBlock(2 * in_channels, mid_channels, mid_channels),
- ResConvBlock(mid_channels, mid_channels, mid_channels),
- ResConvBlock(mid_channels, mid_channels, out_channels),
- ]
- attentions = [
- SelfAttention1d(mid_channels, mid_channels // 32),
- SelfAttention1d(mid_channels, mid_channels // 32),
- SelfAttention1d(out_channels, out_channels // 32),
- ]
-
- self.attentions = nn.LayerList(attentions)
- self.resnets = nn.LayerList(resnets)
- self.up = Upsample1d(kernel="cubic")
-
- def forward(self, hidden_states, res_hidden_states_tuple, temb=None):
- res_hidden_states = res_hidden_states_tuple[-1]
- hidden_states = paddle.concat([hidden_states, res_hidden_states], axis=1)
-
- for resnet, attn in zip(self.resnets, self.attentions):
- hidden_states = resnet(hidden_states)
- hidden_states = attn(hidden_states)
-
- hidden_states = self.up(hidden_states)
-
- return hidden_states
-
-
-class UpBlock1D(nn.Layer):
- def __init__(self, in_channels, out_channels, mid_channels=None):
- super().__init__()
- mid_channels = in_channels if mid_channels is None else mid_channels
-
- resnets = [
- ResConvBlock(2 * in_channels, mid_channels, mid_channels),
- ResConvBlock(mid_channels, mid_channels, mid_channels),
- ResConvBlock(mid_channels, mid_channels, out_channels),
- ]
-
- self.resnets = nn.LayerList(resnets)
- self.up = Upsample1d(kernel="cubic")
-
- def forward(self, hidden_states, res_hidden_states_tuple, temb=None):
- res_hidden_states = res_hidden_states_tuple[-1]
- hidden_states = paddle.concat([hidden_states, res_hidden_states], axis=1)
- for resnet in self.resnets:
- hidden_states = resnet(hidden_states)
-
- hidden_states = self.up(hidden_states)
-
- return hidden_states
-
-
-class UpBlock1DNoSkip(nn.Layer):
- def __init__(self, in_channels, out_channels, mid_channels=None):
- super().__init__()
- mid_channels = in_channels if mid_channels is None else mid_channels
-
- resnets = [
- ResConvBlock(2 * in_channels, mid_channels, mid_channels),
- ResConvBlock(mid_channels, mid_channels, mid_channels),
- ResConvBlock(mid_channels, mid_channels, out_channels, is_last=True),
- ]
-
- self.resnets = nn.LayerList(resnets)
-
- def forward(self, hidden_states, res_hidden_states_tuple, temb=None):
- res_hidden_states = res_hidden_states_tuple[-1]
- hidden_states = paddle.concat([hidden_states, res_hidden_states], axis=1)
- for resnet in self.resnets:
- hidden_states = resnet(hidden_states)
-
- return hidden_states
-
-
-def get_down_block(down_block_type, num_layers, in_channels, out_channels, temb_channels, add_downsample):
- if down_block_type == "DownResnetBlock1D":
- return DownResnetBlock1D(
- in_channels=in_channels,
- num_layers=num_layers,
- out_channels=out_channels,
- temb_channels=temb_channels,
- add_downsample=add_downsample,
- )
- elif down_block_type == "DownBlock1D":
- return DownBlock1D(out_channels=out_channels, in_channels=in_channels)
- elif down_block_type == "AttnDownBlock1D":
- return AttnDownBlock1D(out_channels=out_channels, in_channels=in_channels)
- elif down_block_type == "DownBlock1DNoSkip":
- return DownBlock1DNoSkip(out_channels=out_channels, in_channels=in_channels)
- raise ValueError(f"{down_block_type} does not exist.")
-
-
-def get_up_block(up_block_type, num_layers, in_channels, out_channels, temb_channels, add_upsample):
- if up_block_type == "UpResnetBlock1D":
- return UpResnetBlock1D(
- in_channels=in_channels,
- num_layers=num_layers,
- out_channels=out_channels,
- temb_channels=temb_channels,
- add_upsample=add_upsample,
- )
- elif up_block_type == "UpBlock1D":
- return UpBlock1D(in_channels=in_channels, out_channels=out_channels)
- elif up_block_type == "AttnUpBlock1D":
- return AttnUpBlock1D(in_channels=in_channels, out_channels=out_channels)
- elif up_block_type == "UpBlock1DNoSkip":
- return UpBlock1DNoSkip(in_channels=in_channels, out_channels=out_channels)
- raise ValueError(f"{up_block_type} does not exist.")
-
-
-def get_mid_block(mid_block_type, num_layers, in_channels, mid_channels, out_channels, embed_dim, add_downsample):
- if mid_block_type == "MidResTemporalBlock1D":
- return MidResTemporalBlock1D(
- num_layers=num_layers,
- in_channels=in_channels,
- out_channels=out_channels,
- embed_dim=embed_dim,
- add_downsample=add_downsample,
- )
- elif mid_block_type == "ValueFunctionMidBlock1D":
- return ValueFunctionMidBlock1D(in_channels=in_channels, out_channels=out_channels, embed_dim=embed_dim)
- elif mid_block_type == "UNetMidBlock1D":
- return UNetMidBlock1D(in_channels=in_channels, mid_channels=mid_channels, out_channels=out_channels)
- raise ValueError(f"{mid_block_type} does not exist.")
-
-
-def get_out_block(*, out_block_type, num_groups_out, embed_dim, out_channels, act_fn, fc_dim):
- if out_block_type == "OutConv1DBlock":
- return OutConv1DBlock(num_groups_out, out_channels, embed_dim, act_fn)
- elif out_block_type == "ValueFunction":
- return OutValueFunctionBlock(fc_dim, embed_dim)
- return None
diff --git a/spaces/AIFILMS/StyleGANEX/scripts/inference.py b/spaces/AIFILMS/StyleGANEX/scripts/inference.py
deleted file mode 100644
index 9250d4b5b05d8a31527603d42823fd8b10234ce9..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/StyleGANEX/scripts/inference.py
+++ /dev/null
@@ -1,136 +0,0 @@
-import os
-from argparse import Namespace
-
-from tqdm import tqdm
-import time
-import numpy as np
-import torch
-from PIL import Image
-from torch.utils.data import DataLoader
-import sys
-
-sys.path.append(".")
-sys.path.append("..")
-
-from configs import data_configs
-from datasets.inference_dataset import InferenceDataset
-from utils.common import tensor2im, log_input_image
-from options.test_options import TestOptions
-from models.psp import pSp
-
-
-def run():
- test_opts = TestOptions().parse()
-
- if test_opts.resize_factors is not None:
- assert len(
- test_opts.resize_factors.split(',')) == 1, "When running inference, provide a single downsampling factor!"
- out_path_results = os.path.join(test_opts.exp_dir, 'inference_results',
- 'downsampling_{}'.format(test_opts.resize_factors))
- out_path_coupled = os.path.join(test_opts.exp_dir, 'inference_coupled',
- 'downsampling_{}'.format(test_opts.resize_factors))
- else:
- out_path_results = os.path.join(test_opts.exp_dir, 'inference_results')
- out_path_coupled = os.path.join(test_opts.exp_dir, 'inference_coupled')
-
- os.makedirs(out_path_results, exist_ok=True)
- os.makedirs(out_path_coupled, exist_ok=True)
-
- # update test options with options used during training
- ckpt = torch.load(test_opts.checkpoint_path, map_location='cpu')
- opts = ckpt['opts']
- opts.update(vars(test_opts))
- if 'learn_in_w' not in opts:
- opts['learn_in_w'] = False
- if 'output_size' not in opts:
- opts['output_size'] = 1024
- opts = Namespace(**opts)
-
- net = pSp(opts)
- net.eval()
- net.cuda()
-
- print('Loading dataset for {}'.format(opts.dataset_type))
- dataset_args = data_configs.DATASETS[opts.dataset_type]
- transforms_dict = dataset_args['transforms'](opts).get_transforms()
- dataset = InferenceDataset(root=opts.data_path,
- transform=transforms_dict['transform_inference'],
- opts=opts)
- dataloader = DataLoader(dataset,
- batch_size=opts.test_batch_size,
- shuffle=False,
- num_workers=int(opts.test_workers),
- drop_last=True)
-
- if opts.n_images is None:
- opts.n_images = len(dataset)
-
- global_i = 0
- global_time = []
- for input_batch in tqdm(dataloader):
- if global_i >= opts.n_images:
- break
- with torch.no_grad():
- input_cuda = input_batch.cuda().float()
- tic = time.time()
- result_batch = run_on_batch(input_cuda, net, opts)
- toc = time.time()
- global_time.append(toc - tic)
-
- for i in range(opts.test_batch_size):
- result = tensor2im(result_batch[i])
- im_path = dataset.paths[global_i]
-
- if opts.couple_outputs or global_i % 100 == 0:
- input_im = log_input_image(input_batch[i], opts)
- resize_amount = (256, 256) if opts.resize_outputs else (opts.output_size, opts.output_size)
- if opts.resize_factors is not None:
- # for super resolution, save the original, down-sampled, and output
- source = Image.open(im_path)
- res = np.concatenate([np.array(source.resize(resize_amount)),
- np.array(input_im.resize(resize_amount, resample=Image.NEAREST)),
- np.array(result.resize(resize_amount))], axis=1)
- else:
- # otherwise, save the original and output
- res = np.concatenate([np.array(input_im.resize(resize_amount)),
- np.array(result.resize(resize_amount))], axis=1)
- Image.fromarray(res).save(os.path.join(out_path_coupled, os.path.basename(im_path)))
-
- im_save_path = os.path.join(out_path_results, os.path.basename(im_path))
- Image.fromarray(np.array(result)).save(im_save_path)
-
- global_i += 1
-
- stats_path = os.path.join(opts.exp_dir, 'stats.txt')
- result_str = 'Runtime {:.4f}+-{:.4f}'.format(np.mean(global_time), np.std(global_time))
- print(result_str)
-
- with open(stats_path, 'w') as f:
- f.write(result_str)
-
-
-def run_on_batch(inputs, net, opts):
- if opts.latent_mask is None:
- result_batch = net(inputs, randomize_noise=False, resize=opts.resize_outputs)
- else:
- latent_mask = [int(l) for l in opts.latent_mask.split(",")]
- result_batch = []
- for image_idx, input_image in enumerate(inputs):
- # get latent vector to inject into our input image
- vec_to_inject = np.random.randn(1, 512).astype('float32')
- _, latent_to_inject = net(torch.from_numpy(vec_to_inject).to("cuda"),
- input_code=True,
- return_latents=True)
- # get output image with injected style vector
- res = net(input_image.unsqueeze(0).to("cuda").float(),
- latent_mask=latent_mask,
- inject_latent=latent_to_inject,
- alpha=opts.mix_alpha,
- resize=opts.resize_outputs)
- result_batch.append(res)
- result_batch = torch.cat(result_batch, dim=0)
- return result_batch
-
-
-if __name__ == '__main__':
- run()
diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/README.md b/spaces/AIFILMS/audioldm-text-to-audio-generation/README.md
deleted file mode 100644
index 54ccb465bab6f54b115103a1f06a7259738980a7..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/audioldm-text-to-audio-generation/README.md
+++ /dev/null
@@ -1,22 +0,0 @@
----
-title: Audioldm Text To Audio Generation
-emoji: 🔊
-colorFrom: indigo
-colorTo: red
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
-license: bigscience-openrail-m
-duplicated_from: haoheliu/audioldm-text-to-audio-generation
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
-
-## Reference
-Part of the code from this repo is borrowed from the following repos. We would like to thank their authors for their contributions.
-
-> https://github.com/LAION-AI/CLAP
-> https://github.com/CompVis/stable-diffusion
-> https://github.com/v-iashin/SpecVQGAN
-> https://github.com/toshas/torch-fidelity
\ No newline at end of file
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/txt_processors/__init__.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/txt_processors/__init__.py
deleted file mode 100644
index 7815fc6d95bd38518a6213df09d2a020b77106f8..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/txt_processors/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from . import en, zh, zh_aishell_no_tone_sing
\ No newline at end of file
diff --git a/spaces/AIZero2Hero4Health/8-NLPSimilarityHeatmapCluster-SL/app.py b/spaces/AIZero2Hero4Health/8-NLPSimilarityHeatmapCluster-SL/app.py
deleted file mode 100644
index 8b53f979d9f3ac86b100b5f19647e5ac4a7fa8ea..0000000000000000000000000000000000000000
--- a/spaces/AIZero2Hero4Health/8-NLPSimilarityHeatmapCluster-SL/app.py
+++ /dev/null
@@ -1,77 +0,0 @@
-import streamlit as st
-import nltk
-from transformers import pipeline
-from sentence_transformers import SentenceTransformer
-from scipy.spatial.distance import cosine
-import numpy as np
-import seaborn as sns
-import matplotlib.pyplot as plt
-from sklearn.cluster import KMeans
-import tensorflow as tf
-import tensorflow_hub as hub
-
-
-def cluster_examples(messages, embed, nc=3):
- km = KMeans(
- n_clusters=nc, init='random',
- n_init=10, max_iter=300,
- tol=1e-04, random_state=0
- )
- km = km.fit_predict(embed)
- for n in range(nc):
- idxs = [i for i in range(len(km)) if km[i] == n]
- ms = [messages[i] for i in idxs]
- st.markdown ("CLUSTER : %d"%n)
- for m in ms:
- st.markdown (m)
-
-
-def plot_heatmap(labels, heatmap, rotation=90):
- sns.set(font_scale=1.2)
- fig, ax = plt.subplots()
- g = sns.heatmap(
- heatmap,
- xticklabels=labels,
- yticklabels=labels,
- vmin=-1,
- vmax=1,
- cmap="coolwarm")
- g.set_xticklabels(labels, rotation=rotation)
- g.set_title("Textual Similarity")
-
- st.pyplot(fig)
- #plt.show()
-
-#st.header("Sentence Similarity Demo")
-
-# Streamlit text boxes
-text = st.text_area('Enter sentences:', value="Self confidence in outcomes helps us win and to make us successful.\nShe has a seriously impressive intellect and mind.\nStimulating and deep conversation helps us develop and grow.\nFrom basic quantum particles we get aerodynamics, friction, surface tension, weather, electromagnetism.\nIf she actively engages and comments positively, her anger disappears adapting into win-win's favor.\nI love interesting topics of conversation and the understanding and exploration of thoughts.\nThere is the ability to manipulate things the way you want in your mind to go how you want when you are self confident, that we don’t understand yet.")
-
-nc = st.slider('Select a number of clusters:', min_value=1, max_value=15, value=3)
-
-model_type = st.radio("Choose model:", ('Sentence Transformer', 'Universal Sentence Encoder'), index=0)
-
-# Model setup
-if model_type == "Sentence Transformer":
- model = SentenceTransformer('paraphrase-distilroberta-base-v1')
-elif model_type == "Universal Sentence Encoder":
- model_url = "https://tfhub.dev/google/universal-sentence-encoder-large/5"
- model = hub.load(model_url)
-
-nltk.download('punkt')
-
-# Run model
-if text:
- sentences = nltk.tokenize.sent_tokenize(text)
- if model_type == "Sentence Transformer":
- embed = model.encode(sentences)
- elif model_type == "Universal Sentence Encoder":
- embed = model(sentences).numpy()
- sim = np.zeros([len(embed), len(embed)])
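-    # pairwise cosine similarity (1 - cosine distance) between every pair of sentence embeddings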
- for i,em in enumerate(embed):
- for j,ea in enumerate(embed):
- sim[i][j] = 1.0-cosine(em,ea)
- st.subheader("Similarity Heatmap")
- plot_heatmap(sentences, sim)
- st.subheader("Results from K-Means Clustering")
- cluster_examples(sentences, embed, nc)
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-150e_deepfashion2_short_sleeved_dress_256x192/td_hm_res50_4xb64-150e_deepfashion2_short_sleeved_dress_256x192.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-150e_deepfashion2_short_sleeved_dress_256x192/td_hm_res50_4xb64-150e_deepfashion2_short_sleeved_dress_256x192.py
deleted file mode 100644
index c9da2a8fe992607a34f4afd307745a7d822b3cb8..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-150e_deepfashion2_short_sleeved_dress_256x192/td_hm_res50_4xb64-150e_deepfashion2_short_sleeved_dress_256x192.py
+++ /dev/null
@@ -1,2861 +0,0 @@
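-# MMPose top-down heatmap config (ResNet-50, 150 epochs) for DeepFashion2
-# short-sleeved-dress keypoints at 256x192 input resolution.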
-default_scope = 'mmpose'
-default_hooks = dict(
- timer=dict(type='IterTimerHook'),
- logger=dict(type='LoggerHook', interval=50),
- param_scheduler=dict(type='ParamSchedulerHook'),
- checkpoint=dict(
- type='CheckpointHook', interval=10, save_best='PCK', rule='greater'),
- sampler_seed=dict(type='DistSamplerSeedHook'),
- visualization=dict(type='PoseVisualizationHook', enable=False))
-custom_hooks = [dict(type='SyncBuffersHook')]
-env_cfg = dict(
- cudnn_benchmark=False,
- mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
- dist_cfg=dict(backend='nccl'))
-vis_backends = [dict(type='LocalVisBackend')]
-visualizer = dict(
- type='PoseLocalVisualizer',
- vis_backends=[dict(type='LocalVisBackend'),
- dict(type='WandbVisBackend')],
- name='visualizer')
-log_processor = dict(
- type='LogProcessor', window_size=50, by_epoch=True, num_digits=6)
-log_level = 'INFO'
-load_from = None
-resume = False
-backend_args = dict(backend='local')
-train_cfg = dict(by_epoch=True, max_epochs=150, val_interval=10)
-val_cfg = dict()
-test_cfg = dict()
-colors = dict(
- sss=[255, 128, 0],
- lss=[255, 0, 128],
- sso=[128, 0, 255],
- lso=[0, 128, 255],
- vest=[0, 128, 128],
- sling=[0, 0, 128],
- shorts=[128, 128, 128],
- trousers=[128, 0, 128],
- skirt=[64, 128, 128],
- ssd=[64, 64, 128],
- lsd=[128, 64, 0],
- vd=[128, 64, 255],
- sd=[128, 64, 0])
-dataset_info = dict(
- dataset_name='deepfashion2',
- paper_info=dict(
- author=
- 'Yuying Ge and Ruimao Zhang and Lingyun Wu and Xiaogang Wang and Xiaoou Tang and Ping Luo',
- title=
- 'DeepFashion2: A Versatile Benchmark for Detection, Pose Estimation, Segmentation and Re-Identification of Clothing Images',
- container=
- 'Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)',
- year='2019',
- homepage='https://github.com/switchablenorms/DeepFashion2'),
- keypoint_info=dict({
- 0:
- dict(name='sss_kpt1', id=0, color=[255, 128, 0], type='', swap=''),
- 1:
- dict(
- name='sss_kpt2',
- id=1,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt6'),
- 2:
- dict(
- name='sss_kpt3',
- id=2,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt5'),
- 3:
- dict(name='sss_kpt4', id=3, color=[255, 128, 0], type='', swap=''),
- 4:
- dict(
- name='sss_kpt5',
- id=4,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt3'),
- 5:
- dict(
- name='sss_kpt6',
- id=5,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt2'),
- 6:
- dict(
- name='sss_kpt7',
- id=6,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt25'),
- 7:
- dict(
- name='sss_kpt8',
- id=7,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt24'),
- 8:
- dict(
- name='sss_kpt9',
- id=8,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt23'),
- 9:
- dict(
- name='sss_kpt10',
- id=9,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt22'),
- 10:
- dict(
- name='sss_kpt11',
- id=10,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt21'),
- 11:
- dict(
- name='sss_kpt12',
- id=11,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt20'),
- 12:
- dict(
- name='sss_kpt13',
- id=12,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt19'),
- 13:
- dict(
- name='sss_kpt14',
- id=13,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt18'),
- 14:
- dict(
- name='sss_kpt15',
- id=14,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt17'),
- 15:
- dict(name='sss_kpt16', id=15, color=[255, 128, 0], type='', swap=''),
- 16:
- dict(
- name='sss_kpt17',
- id=16,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt15'),
- 17:
- dict(
- name='sss_kpt18',
- id=17,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt14'),
- 18:
- dict(
- name='sss_kpt19',
- id=18,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt13'),
- 19:
- dict(
- name='sss_kpt20',
- id=19,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt12'),
- 20:
- dict(
- name='sss_kpt21',
- id=20,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt11'),
- 21:
- dict(
- name='sss_kpt22',
- id=21,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt10'),
- 22:
- dict(
- name='sss_kpt23',
- id=22,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt9'),
- 23:
- dict(
- name='sss_kpt24',
- id=23,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt8'),
- 24:
- dict(
- name='sss_kpt25',
- id=24,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt7'),
- 25:
- dict(name='lss_kpt1', id=25, color=[255, 0, 128], type='', swap=''),
- 26:
- dict(
- name='lss_kpt2',
- id=26,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt6'),
- 27:
- dict(
- name='lss_kpt3',
- id=27,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt5'),
- 28:
- dict(name='lss_kpt4', id=28, color=[255, 0, 128], type='', swap=''),
- 29:
- dict(
- name='lss_kpt5',
- id=29,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt3'),
- 30:
- dict(
- name='lss_kpt6',
- id=30,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt2'),
- 31:
- dict(
- name='lss_kpt7',
- id=31,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt33'),
- 32:
- dict(
- name='lss_kpt8',
- id=32,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt32'),
- 33:
- dict(
- name='lss_kpt9',
- id=33,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt31'),
- 34:
- dict(
- name='lss_kpt10',
- id=34,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt30'),
- 35:
- dict(
- name='lss_kpt11',
- id=35,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt29'),
- 36:
- dict(
- name='lss_kpt12',
- id=36,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt28'),
- 37:
- dict(
- name='lss_kpt13',
- id=37,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt27'),
- 38:
- dict(
- name='lss_kpt14',
- id=38,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt26'),
- 39:
- dict(
- name='lss_kpt15',
- id=39,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt25'),
- 40:
- dict(
- name='lss_kpt16',
- id=40,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt24'),
- 41:
- dict(
- name='lss_kpt17',
- id=41,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt23'),
- 42:
- dict(
- name='lss_kpt18',
- id=42,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt22'),
- 43:
- dict(
- name='lss_kpt19',
- id=43,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt21'),
- 44:
- dict(name='lss_kpt20', id=44, color=[255, 0, 128], type='', swap=''),
- 45:
- dict(
- name='lss_kpt21',
- id=45,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt19'),
- 46:
- dict(
- name='lss_kpt22',
- id=46,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt18'),
- 47:
- dict(
- name='lss_kpt23',
- id=47,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt17'),
- 48:
- dict(
- name='lss_kpt24',
- id=48,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt16'),
- 49:
- dict(
- name='lss_kpt25',
- id=49,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt15'),
- 50:
- dict(
- name='lss_kpt26',
- id=50,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt14'),
- 51:
- dict(
- name='lss_kpt27',
- id=51,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt13'),
- 52:
- dict(
- name='lss_kpt28',
- id=52,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt12'),
- 53:
- dict(
- name='lss_kpt29',
- id=53,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt11'),
- 54:
- dict(
- name='lss_kpt30',
- id=54,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt10'),
- 55:
- dict(
- name='lss_kpt31',
- id=55,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt9'),
- 56:
- dict(
- name='lss_kpt32',
- id=56,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt8'),
- 57:
- dict(
- name='lss_kpt33',
- id=57,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt7'),
- 58:
- dict(name='sso_kpt1', id=58, color=[128, 0, 255], type='', swap=''),
- 59:
- dict(
- name='sso_kpt2',
- id=59,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt26'),
- 60:
- dict(
- name='sso_kpt3',
- id=60,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt5'),
- 61:
- dict(
- name='sso_kpt4',
- id=61,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt6'),
- 62:
- dict(
- name='sso_kpt5',
- id=62,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt3'),
- 63:
- dict(
- name='sso_kpt6',
- id=63,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt4'),
- 64:
- dict(
- name='sso_kpt7',
- id=64,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt25'),
- 65:
- dict(
- name='sso_kpt8',
- id=65,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt24'),
- 66:
- dict(
- name='sso_kpt9',
- id=66,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt23'),
- 67:
- dict(
- name='sso_kpt10',
- id=67,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt22'),
- 68:
- dict(
- name='sso_kpt11',
- id=68,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt21'),
- 69:
- dict(
- name='sso_kpt12',
- id=69,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt20'),
- 70:
- dict(
- name='sso_kpt13',
- id=70,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt19'),
- 71:
- dict(
- name='sso_kpt14',
- id=71,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt18'),
- 72:
- dict(
- name='sso_kpt15',
- id=72,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt17'),
- 73:
- dict(
- name='sso_kpt16',
- id=73,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt29'),
- 74:
- dict(
- name='sso_kpt17',
- id=74,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt15'),
- 75:
- dict(
- name='sso_kpt18',
- id=75,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt14'),
- 76:
- dict(
- name='sso_kpt19',
- id=76,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt13'),
- 77:
- dict(
- name='sso_kpt20',
- id=77,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt12'),
- 78:
- dict(
- name='sso_kpt21',
- id=78,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt11'),
- 79:
- dict(
- name='sso_kpt22',
- id=79,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt10'),
- 80:
- dict(
- name='sso_kpt23',
- id=80,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt9'),
- 81:
- dict(
- name='sso_kpt24',
- id=81,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt8'),
- 82:
- dict(
- name='sso_kpt25',
- id=82,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt7'),
- 83:
- dict(
- name='sso_kpt26',
- id=83,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt2'),
- 84:
- dict(
- name='sso_kpt27',
- id=84,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt30'),
- 85:
- dict(
- name='sso_kpt28',
- id=85,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt31'),
- 86:
- dict(
- name='sso_kpt29',
- id=86,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt16'),
- 87:
- dict(
- name='sso_kpt30',
- id=87,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt27'),
- 88:
- dict(
- name='sso_kpt31',
- id=88,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt28'),
- 89:
- dict(name='lso_kpt1', id=89, color=[0, 128, 255], type='', swap=''),
- 90:
- dict(
- name='lso_kpt2',
- id=90,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt6'),
- 91:
- dict(
- name='lso_kpt3',
- id=91,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt5'),
- 92:
- dict(
- name='lso_kpt4',
- id=92,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt34'),
- 93:
- dict(
- name='lso_kpt5',
- id=93,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt3'),
- 94:
- dict(
- name='lso_kpt6',
- id=94,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt2'),
- 95:
- dict(
- name='lso_kpt7',
- id=95,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt33'),
- 96:
- dict(
- name='lso_kpt8',
- id=96,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt32'),
- 97:
- dict(
- name='lso_kpt9',
- id=97,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt31'),
- 98:
- dict(
- name='lso_kpt10',
- id=98,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt30'),
- 99:
- dict(
- name='lso_kpt11',
- id=99,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt29'),
- 100:
- dict(
- name='lso_kpt12',
- id=100,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt28'),
- 101:
- dict(
- name='lso_kpt13',
- id=101,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt27'),
- 102:
- dict(
- name='lso_kpt14',
- id=102,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt26'),
- 103:
- dict(
- name='lso_kpt15',
- id=103,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt25'),
- 104:
- dict(
- name='lso_kpt16',
- id=104,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt24'),
- 105:
- dict(
- name='lso_kpt17',
- id=105,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt23'),
- 106:
- dict(
- name='lso_kpt18',
- id=106,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt22'),
- 107:
- dict(
- name='lso_kpt19',
- id=107,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt21'),
- 108:
- dict(
- name='lso_kpt20',
- id=108,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt37'),
- 109:
- dict(
- name='lso_kpt21',
- id=109,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt19'),
- 110:
- dict(
- name='lso_kpt22',
- id=110,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt18'),
- 111:
- dict(
- name='lso_kpt23',
- id=111,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt17'),
- 112:
- dict(
- name='lso_kpt24',
- id=112,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt16'),
- 113:
- dict(
- name='lso_kpt25',
- id=113,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt15'),
- 114:
- dict(
- name='lso_kpt26',
- id=114,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt14'),
- 115:
- dict(
- name='lso_kpt27',
- id=115,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt13'),
- 116:
- dict(
- name='lso_kpt28',
- id=116,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt12'),
- 117:
- dict(
- name='lso_kpt29',
- id=117,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt11'),
- 118:
- dict(
- name='lso_kpt30',
- id=118,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt10'),
- 119:
- dict(
- name='lso_kpt31',
- id=119,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt9'),
- 120:
- dict(
- name='lso_kpt32',
- id=120,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt8'),
- 121:
- dict(
- name='lso_kpt33',
- id=121,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt7'),
- 122:
- dict(
- name='lso_kpt34',
- id=122,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt4'),
- 123:
- dict(
- name='lso_kpt35',
- id=123,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt38'),
- 124:
- dict(
- name='lso_kpt36',
- id=124,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt39'),
- 125:
- dict(
- name='lso_kpt37',
- id=125,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt20'),
- 126:
- dict(
- name='lso_kpt38',
- id=126,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt35'),
- 127:
- dict(
- name='lso_kpt39',
- id=127,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt36'),
- 128:
- dict(name='vest_kpt1', id=128, color=[0, 128, 128], type='', swap=''),
- 129:
- dict(
- name='vest_kpt2',
- id=129,
- color=[0, 128, 128],
- type='',
- swap='vest_kpt6'),
- 130:
- dict(
- name='vest_kpt3',
- id=130,
- color=[0, 128, 128],
- type='',
- swap='vest_kpt5'),
- 131:
- dict(name='vest_kpt4', id=131, color=[0, 128, 128], type='', swap=''),
- 132:
- dict(
- name='vest_kpt5',
- id=132,
- color=[0, 128, 128],
- type='',
- swap='vest_kpt3'),
- 133:
- dict(
- name='vest_kpt6',
- id=133,
- color=[0, 128, 128],
- type='',
- swap='vest_kpt2'),
- 134:
- dict(
- name='vest_kpt7',
- id=134,
- color=[0, 128, 128],
- type='',
- swap='vest_kpt15'),
- 135:
- dict(
- name='vest_kpt8',
- id=135,
- color=[0, 128, 128],
- type='',
- swap='vest_kpt14'),
- 136:
- dict(
- name='vest_kpt9',
- id=136,
- color=[0, 128, 128],
- type='',
- swap='vest_kpt13'),
- 137:
- dict(
- name='vest_kpt10',
- id=137,
- color=[0, 128, 128],
- type='',
- swap='vest_kpt12'),
- 138:
- dict(name='vest_kpt11', id=138, color=[0, 128, 128], type='', swap=''),
- 139:
- dict(
- name='vest_kpt12',
- id=139,
- color=[0, 128, 128],
- type='',
- swap='vest_kpt10'),
- 140:
- dict(name='vest_kpt13', id=140, color=[0, 128, 128], type='', swap=''),
- 141:
- dict(
- name='vest_kpt14',
- id=141,
- color=[0, 128, 128],
- type='',
- swap='vest_kpt8'),
- 142:
- dict(
- name='vest_kpt15',
- id=142,
- color=[0, 128, 128],
- type='',
- swap='vest_kpt7'),
- 143:
- dict(name='sling_kpt1', id=143, color=[0, 0, 128], type='', swap=''),
- 144:
- dict(
- name='sling_kpt2',
- id=144,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt6'),
- 145:
- dict(
- name='sling_kpt3',
- id=145,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt5'),
- 146:
- dict(name='sling_kpt4', id=146, color=[0, 0, 128], type='', swap=''),
- 147:
- dict(
- name='sling_kpt5',
- id=147,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt3'),
- 148:
- dict(
- name='sling_kpt6',
- id=148,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt2'),
- 149:
- dict(
- name='sling_kpt7',
- id=149,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt15'),
- 150:
- dict(
- name='sling_kpt8',
- id=150,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt14'),
- 151:
- dict(
- name='sling_kpt9',
- id=151,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt13'),
- 152:
- dict(
- name='sling_kpt10',
- id=152,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt12'),
- 153:
- dict(name='sling_kpt11', id=153, color=[0, 0, 128], type='', swap=''),
- 154:
- dict(
- name='sling_kpt12',
- id=154,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt10'),
- 155:
- dict(
- name='sling_kpt13',
- id=155,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt9'),
- 156:
- dict(
- name='sling_kpt14',
- id=156,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt8'),
- 157:
- dict(
- name='sling_kpt15',
- id=157,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt7'),
- 158:
- dict(
- name='shorts_kpt1',
- id=158,
- color=[128, 128, 128],
- type='',
- swap='shorts_kpt3'),
- 159:
- dict(
- name='shorts_kpt2',
- id=159,
- color=[128, 128, 128],
- type='',
- swap=''),
- 160:
- dict(
- name='shorts_kpt3',
- id=160,
- color=[128, 128, 128],
- type='',
- swap='shorts_kpt1'),
- 161:
- dict(
- name='shorts_kpt4',
- id=161,
- color=[128, 128, 128],
- type='',
- swap='shorts_kpt10'),
- 162:
- dict(
- name='shorts_kpt5',
- id=162,
- color=[128, 128, 128],
- type='',
- swap='shorts_kpt9'),
- 163:
- dict(
- name='shorts_kpt6',
- id=163,
- color=[128, 128, 128],
- type='',
- swap='shorts_kpt8'),
- 164:
- dict(
- name='shorts_kpt7',
- id=164,
- color=[128, 128, 128],
- type='',
- swap=''),
- 165:
- dict(
- name='shorts_kpt8',
- id=165,
- color=[128, 128, 128],
- type='',
- swap='shorts_kpt6'),
- 166:
- dict(
- name='shorts_kpt9',
- id=166,
- color=[128, 128, 128],
- type='',
- swap='shorts_kpt5'),
- 167:
- dict(
- name='shorts_kpt10',
- id=167,
- color=[128, 128, 128],
- type='',
- swap='shorts_kpt4'),
- 168:
- dict(
- name='trousers_kpt1',
- id=168,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt3'),
- 169:
- dict(
- name='trousers_kpt2',
- id=169,
- color=[128, 0, 128],
- type='',
- swap=''),
- 170:
- dict(
- name='trousers_kpt3',
- id=170,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt1'),
- 171:
- dict(
- name='trousers_kpt4',
- id=171,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt14'),
- 172:
- dict(
- name='trousers_kpt5',
- id=172,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt13'),
- 173:
- dict(
- name='trousers_kpt6',
- id=173,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt12'),
- 174:
- dict(
- name='trousers_kpt7',
- id=174,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt11'),
- 175:
- dict(
- name='trousers_kpt8',
- id=175,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt10'),
- 176:
- dict(
- name='trousers_kpt9',
- id=176,
- color=[128, 0, 128],
- type='',
- swap=''),
- 177:
- dict(
- name='trousers_kpt10',
- id=177,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt8'),
- 178:
- dict(
- name='trousers_kpt11',
- id=178,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt7'),
- 179:
- dict(
- name='trousers_kpt12',
- id=179,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt6'),
- 180:
- dict(
- name='trousers_kpt13',
- id=180,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt5'),
- 181:
- dict(
- name='trousers_kpt14',
- id=181,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt4'),
- 182:
- dict(
- name='skirt_kpt1',
- id=182,
- color=[64, 128, 128],
- type='',
- swap='skirt_kpt3'),
- 183:
- dict(
- name='skirt_kpt2', id=183, color=[64, 128, 128], type='', swap=''),
- 184:
- dict(
- name='skirt_kpt3',
- id=184,
- color=[64, 128, 128],
- type='',
- swap='skirt_kpt1'),
- 185:
- dict(
- name='skirt_kpt4',
- id=185,
- color=[64, 128, 128],
- type='',
- swap='skirt_kpt8'),
- 186:
- dict(
- name='skirt_kpt5',
- id=186,
- color=[64, 128, 128],
- type='',
- swap='skirt_kpt7'),
- 187:
- dict(
- name='skirt_kpt6', id=187, color=[64, 128, 128], type='', swap=''),
- 188:
- dict(
- name='skirt_kpt7',
- id=188,
- color=[64, 128, 128],
- type='',
- swap='skirt_kpt5'),
- 189:
- dict(
- name='skirt_kpt8',
- id=189,
- color=[64, 128, 128],
- type='',
- swap='skirt_kpt4'),
- 190:
- dict(name='ssd_kpt1', id=190, color=[64, 64, 128], type='', swap=''),
- 191:
- dict(
- name='ssd_kpt2',
- id=191,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt6'),
- 192:
- dict(
- name='ssd_kpt3',
- id=192,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt5'),
- 193:
- dict(name='ssd_kpt4', id=193, color=[64, 64, 128], type='', swap=''),
- 194:
- dict(
- name='ssd_kpt5',
- id=194,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt3'),
- 195:
- dict(
- name='ssd_kpt6',
- id=195,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt2'),
- 196:
- dict(
- name='ssd_kpt7',
- id=196,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt29'),
- 197:
- dict(
- name='ssd_kpt8',
- id=197,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt28'),
- 198:
- dict(
- name='ssd_kpt9',
- id=198,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt27'),
- 199:
- dict(
- name='ssd_kpt10',
- id=199,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt26'),
- 200:
- dict(
- name='ssd_kpt11',
- id=200,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt25'),
- 201:
- dict(
- name='ssd_kpt12',
- id=201,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt24'),
- 202:
- dict(
- name='ssd_kpt13',
- id=202,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt23'),
- 203:
- dict(
- name='ssd_kpt14',
- id=203,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt22'),
- 204:
- dict(
- name='ssd_kpt15',
- id=204,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt21'),
- 205:
- dict(
- name='ssd_kpt16',
- id=205,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt20'),
- 206:
- dict(
- name='ssd_kpt17',
- id=206,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt19'),
- 207:
- dict(name='ssd_kpt18', id=207, color=[64, 64, 128], type='', swap=''),
- 208:
- dict(
- name='ssd_kpt19',
- id=208,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt17'),
- 209:
- dict(
- name='ssd_kpt20',
- id=209,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt16'),
- 210:
- dict(
- name='ssd_kpt21',
- id=210,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt15'),
- 211:
- dict(
- name='ssd_kpt22',
- id=211,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt14'),
- 212:
- dict(
- name='ssd_kpt23',
- id=212,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt13'),
- 213:
- dict(
- name='ssd_kpt24',
- id=213,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt12'),
- 214:
- dict(
- name='ssd_kpt25',
- id=214,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt11'),
- 215:
- dict(
- name='ssd_kpt26',
- id=215,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt10'),
- 216:
- dict(
- name='ssd_kpt27',
- id=216,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt9'),
- 217:
- dict(
- name='ssd_kpt28',
- id=217,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt8'),
- 218:
- dict(
- name='ssd_kpt29',
- id=218,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt7'),
- 219:
- dict(name='lsd_kpt1', id=219, color=[128, 64, 0], type='', swap=''),
- 220:
- dict(
- name='lsd_kpt2',
- id=220,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt6'),
- 221:
- dict(
- name='lsd_kpt3',
- id=221,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt5'),
- 222:
- dict(name='lsd_kpt4', id=222, color=[128, 64, 0], type='', swap=''),
- 223:
- dict(
- name='lsd_kpt5',
- id=223,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt3'),
- 224:
- dict(
- name='lsd_kpt6',
- id=224,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt2'),
- 225:
- dict(
- name='lsd_kpt7',
- id=225,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt37'),
- 226:
- dict(
- name='lsd_kpt8',
- id=226,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt36'),
- 227:
- dict(
- name='lsd_kpt9',
- id=227,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt35'),
- 228:
- dict(
- name='lsd_kpt10',
- id=228,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt34'),
- 229:
- dict(
- name='lsd_kpt11',
- id=229,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt33'),
- 230:
- dict(
- name='lsd_kpt12',
- id=230,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt32'),
- 231:
- dict(
- name='lsd_kpt13',
- id=231,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt31'),
- 232:
- dict(
- name='lsd_kpt14',
- id=232,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt30'),
- 233:
- dict(
- name='lsd_kpt15',
- id=233,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt29'),
- 234:
- dict(
- name='lsd_kpt16',
- id=234,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt28'),
- 235:
- dict(
- name='lsd_kpt17',
- id=235,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt27'),
- 236:
- dict(
- name='lsd_kpt18',
- id=236,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt26'),
- 237:
- dict(
- name='lsd_kpt19',
- id=237,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt25'),
- 238:
- dict(
- name='lsd_kpt20',
- id=238,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt24'),
- 239:
- dict(
- name='lsd_kpt21',
- id=239,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt23'),
- 240:
- dict(name='lsd_kpt22', id=240, color=[128, 64, 0], type='', swap=''),
- 241:
- dict(
- name='lsd_kpt23',
- id=241,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt21'),
- 242:
- dict(
- name='lsd_kpt24',
- id=242,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt20'),
- 243:
- dict(
- name='lsd_kpt25',
- id=243,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt19'),
- 244:
- dict(
- name='lsd_kpt26',
- id=244,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt18'),
- 245:
- dict(
- name='lsd_kpt27',
- id=245,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt17'),
- 246:
- dict(
- name='lsd_kpt28',
- id=246,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt16'),
- 247:
- dict(
- name='lsd_kpt29',
- id=247,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt15'),
- 248:
- dict(
- name='lsd_kpt30',
- id=248,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt14'),
- 249:
- dict(
- name='lsd_kpt31',
- id=249,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt13'),
- 250:
- dict(
- name='lsd_kpt32',
- id=250,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt12'),
- 251:
- dict(
- name='lsd_kpt33',
- id=251,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt11'),
- 252:
- dict(
- name='lsd_kpt34',
- id=252,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt10'),
- 253:
- dict(
- name='lsd_kpt35',
- id=253,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt9'),
- 254:
- dict(
- name='lsd_kpt36',
- id=254,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt8'),
- 255:
- dict(
- name='lsd_kpt37',
- id=255,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt7'),
- 256:
- dict(name='vd_kpt1', id=256, color=[128, 64, 255], type='', swap=''),
- 257:
- dict(
- name='vd_kpt2',
- id=257,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt6'),
- 258:
- dict(
- name='vd_kpt3',
- id=258,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt5'),
- 259:
- dict(name='vd_kpt4', id=259, color=[128, 64, 255], type='', swap=''),
- 260:
- dict(
- name='vd_kpt5',
- id=260,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt3'),
- 261:
- dict(
- name='vd_kpt6',
- id=261,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt2'),
- 262:
- dict(
- name='vd_kpt7',
- id=262,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt19'),
- 263:
- dict(
- name='vd_kpt8',
- id=263,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt18'),
- 264:
- dict(
- name='vd_kpt9',
- id=264,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt17'),
- 265:
- dict(
- name='vd_kpt10',
- id=265,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt16'),
- 266:
- dict(
- name='vd_kpt11',
- id=266,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt15'),
- 267:
- dict(
- name='vd_kpt12',
- id=267,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt14'),
- 268:
- dict(name='vd_kpt13', id=268, color=[128, 64, 255], type='', swap=''),
- 269:
- dict(
- name='vd_kpt14',
- id=269,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt12'),
- 270:
- dict(
- name='vd_kpt15',
- id=270,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt11'),
- 271:
- dict(
- name='vd_kpt16',
- id=271,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt10'),
- 272:
- dict(
- name='vd_kpt17',
- id=272,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt9'),
- 273:
- dict(
- name='vd_kpt18',
- id=273,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt8'),
- 274:
- dict(
- name='vd_kpt19',
- id=274,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt7'),
- 275:
- dict(name='sd_kpt1', id=275, color=[128, 64, 0], type='', swap=''),
- 276:
- dict(
- name='sd_kpt2',
- id=276,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt6'),
- 277:
- dict(
- name='sd_kpt3',
- id=277,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt5'),
- 278:
- dict(name='sd_kpt4', id=278, color=[128, 64, 0], type='', swap=''),
- 279:
- dict(
- name='sd_kpt5',
- id=279,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt3'),
- 280:
- dict(
- name='sd_kpt6',
- id=280,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt2'),
- 281:
- dict(
- name='sd_kpt7',
- id=281,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt19'),
- 282:
- dict(
- name='sd_kpt8',
- id=282,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt18'),
- 283:
- dict(
- name='sd_kpt9',
- id=283,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt17'),
- 284:
- dict(
- name='sd_kpt10',
- id=284,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt16'),
- 285:
- dict(
- name='sd_kpt11',
- id=285,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt15'),
- 286:
- dict(
- name='sd_kpt12',
- id=286,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt14'),
- 287:
- dict(name='sd_kpt13', id=287, color=[128, 64, 0], type='', swap=''),
- 288:
- dict(
- name='sd_kpt14',
- id=288,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt12'),
- 289:
- dict(
- name='sd_kpt15',
- id=289,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt11'),
- 290:
- dict(
- name='sd_kpt16',
- id=290,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt10'),
- 291:
- dict(
- name='sd_kpt17',
- id=291,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt9'),
- 292:
- dict(
- name='sd_kpt18',
- id=292,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt8'),
- 293:
- dict(
- name='sd_kpt19',
- id=293,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt7')
- }),
- skeleton_info=dict({
- 0:
- dict(link=('sss_kpt1', 'sss_kpt2'), id=0, color=[255, 128, 0]),
- 1:
- dict(link=('sss_kpt2', 'sss_kpt7'), id=1, color=[255, 128, 0]),
- 2:
- dict(link=('sss_kpt7', 'sss_kpt8'), id=2, color=[255, 128, 0]),
- 3:
- dict(link=('sss_kpt8', 'sss_kpt9'), id=3, color=[255, 128, 0]),
- 4:
- dict(link=('sss_kpt9', 'sss_kpt10'), id=4, color=[255, 128, 0]),
- 5:
- dict(link=('sss_kpt10', 'sss_kpt11'), id=5, color=[255, 128, 0]),
- 6:
- dict(link=('sss_kpt11', 'sss_kpt12'), id=6, color=[255, 128, 0]),
- 7:
- dict(link=('sss_kpt12', 'sss_kpt13'), id=7, color=[255, 128, 0]),
- 8:
- dict(link=('sss_kpt13', 'sss_kpt14'), id=8, color=[255, 128, 0]),
- 9:
- dict(link=('sss_kpt14', 'sss_kpt15'), id=9, color=[255, 128, 0]),
- 10:
- dict(link=('sss_kpt15', 'sss_kpt16'), id=10, color=[255, 128, 0]),
- 11:
- dict(link=('sss_kpt16', 'sss_kpt17'), id=11, color=[255, 128, 0]),
- 12:
- dict(link=('sss_kpt17', 'sss_kpt18'), id=12, color=[255, 128, 0]),
- 13:
- dict(link=('sss_kpt18', 'sss_kpt19'), id=13, color=[255, 128, 0]),
- 14:
- dict(link=('sss_kpt19', 'sss_kpt20'), id=14, color=[255, 128, 0]),
- 15:
- dict(link=('sss_kpt20', 'sss_kpt21'), id=15, color=[255, 128, 0]),
- 16:
- dict(link=('sss_kpt21', 'sss_kpt22'), id=16, color=[255, 128, 0]),
- 17:
- dict(link=('sss_kpt22', 'sss_kpt23'), id=17, color=[255, 128, 0]),
- 18:
- dict(link=('sss_kpt23', 'sss_kpt24'), id=18, color=[255, 128, 0]),
- 19:
- dict(link=('sss_kpt24', 'sss_kpt25'), id=19, color=[255, 128, 0]),
- 20:
- dict(link=('sss_kpt25', 'sss_kpt6'), id=20, color=[255, 128, 0]),
- 21:
- dict(link=('sss_kpt6', 'sss_kpt1'), id=21, color=[255, 128, 0]),
- 22:
- dict(link=('sss_kpt2', 'sss_kpt3'), id=22, color=[255, 128, 0]),
- 23:
- dict(link=('sss_kpt3', 'sss_kpt4'), id=23, color=[255, 128, 0]),
- 24:
- dict(link=('sss_kpt4', 'sss_kpt5'), id=24, color=[255, 128, 0]),
- 25:
- dict(link=('sss_kpt5', 'sss_kpt6'), id=25, color=[255, 128, 0]),
- 26:
- dict(link=('lss_kpt1', 'lss_kpt2'), id=26, color=[255, 0, 128]),
- 27:
- dict(link=('lss_kpt2', 'lss_kpt7'), id=27, color=[255, 0, 128]),
- 28:
- dict(link=('lss_kpt7', 'lss_kpt8'), id=28, color=[255, 0, 128]),
- 29:
- dict(link=('lss_kpt8', 'lss_kpt9'), id=29, color=[255, 0, 128]),
- 30:
- dict(link=('lss_kpt9', 'lss_kpt10'), id=30, color=[255, 0, 128]),
- 31:
- dict(link=('lss_kpt10', 'lss_kpt11'), id=31, color=[255, 0, 128]),
- 32:
- dict(link=('lss_kpt11', 'lss_kpt12'), id=32, color=[255, 0, 128]),
- 33:
- dict(link=('lss_kpt12', 'lss_kpt13'), id=33, color=[255, 0, 128]),
- 34:
- dict(link=('lss_kpt13', 'lss_kpt14'), id=34, color=[255, 0, 128]),
- 35:
- dict(link=('lss_kpt14', 'lss_kpt15'), id=35, color=[255, 0, 128]),
- 36:
- dict(link=('lss_kpt15', 'lss_kpt16'), id=36, color=[255, 0, 128]),
- 37:
- dict(link=('lss_kpt16', 'lss_kpt17'), id=37, color=[255, 0, 128]),
- 38:
- dict(link=('lss_kpt17', 'lss_kpt18'), id=38, color=[255, 0, 128]),
- 39:
- dict(link=('lss_kpt18', 'lss_kpt19'), id=39, color=[255, 0, 128]),
- 40:
- dict(link=('lss_kpt19', 'lss_kpt20'), id=40, color=[255, 0, 128]),
- 41:
- dict(link=('lss_kpt20', 'lss_kpt21'), id=41, color=[255, 0, 128]),
- 42:
- dict(link=('lss_kpt21', 'lss_kpt22'), id=42, color=[255, 0, 128]),
- 43:
- dict(link=('lss_kpt22', 'lss_kpt23'), id=43, color=[255, 0, 128]),
- 44:
- dict(link=('lss_kpt23', 'lss_kpt24'), id=44, color=[255, 0, 128]),
- 45:
- dict(link=('lss_kpt24', 'lss_kpt25'), id=45, color=[255, 0, 128]),
- 46:
- dict(link=('lss_kpt25', 'lss_kpt26'), id=46, color=[255, 0, 128]),
- 47:
- dict(link=('lss_kpt26', 'lss_kpt27'), id=47, color=[255, 0, 128]),
- 48:
- dict(link=('lss_kpt27', 'lss_kpt28'), id=48, color=[255, 0, 128]),
- 49:
- dict(link=('lss_kpt28', 'lss_kpt29'), id=49, color=[255, 0, 128]),
- 50:
- dict(link=('lss_kpt29', 'lss_kpt30'), id=50, color=[255, 0, 128]),
- 51:
- dict(link=('lss_kpt30', 'lss_kpt31'), id=51, color=[255, 0, 128]),
- 52:
- dict(link=('lss_kpt31', 'lss_kpt32'), id=52, color=[255, 0, 128]),
- 53:
- dict(link=('lss_kpt32', 'lss_kpt33'), id=53, color=[255, 0, 128]),
- 54:
- dict(link=('lss_kpt33', 'lss_kpt6'), id=54, color=[255, 0, 128]),
- 55:
- dict(link=('lss_kpt6', 'lss_kpt5'), id=55, color=[255, 0, 128]),
- 56:
- dict(link=('lss_kpt5', 'lss_kpt4'), id=56, color=[255, 0, 128]),
- 57:
- dict(link=('lss_kpt4', 'lss_kpt3'), id=57, color=[255, 0, 128]),
- 58:
- dict(link=('lss_kpt3', 'lss_kpt2'), id=58, color=[255, 0, 128]),
- 59:
- dict(link=('lss_kpt6', 'lss_kpt1'), id=59, color=[255, 0, 128]),
- 60:
- dict(link=('sso_kpt1', 'sso_kpt4'), id=60, color=[128, 0, 255]),
- 61:
- dict(link=('sso_kpt4', 'sso_kpt7'), id=61, color=[128, 0, 255]),
- 62:
- dict(link=('sso_kpt7', 'sso_kpt8'), id=62, color=[128, 0, 255]),
- 63:
- dict(link=('sso_kpt8', 'sso_kpt9'), id=63, color=[128, 0, 255]),
- 64:
- dict(link=('sso_kpt9', 'sso_kpt10'), id=64, color=[128, 0, 255]),
- 65:
- dict(link=('sso_kpt10', 'sso_kpt11'), id=65, color=[128, 0, 255]),
- 66:
- dict(link=('sso_kpt11', 'sso_kpt12'), id=66, color=[128, 0, 255]),
- 67:
- dict(link=('sso_kpt12', 'sso_kpt13'), id=67, color=[128, 0, 255]),
- 68:
- dict(link=('sso_kpt13', 'sso_kpt14'), id=68, color=[128, 0, 255]),
- 69:
- dict(link=('sso_kpt14', 'sso_kpt15'), id=69, color=[128, 0, 255]),
- 70:
- dict(link=('sso_kpt15', 'sso_kpt16'), id=70, color=[128, 0, 255]),
- 71:
- dict(link=('sso_kpt16', 'sso_kpt31'), id=71, color=[128, 0, 255]),
- 72:
- dict(link=('sso_kpt31', 'sso_kpt30'), id=72, color=[128, 0, 255]),
- 73:
- dict(link=('sso_kpt30', 'sso_kpt2'), id=73, color=[128, 0, 255]),
- 74:
- dict(link=('sso_kpt2', 'sso_kpt3'), id=74, color=[128, 0, 255]),
- 75:
- dict(link=('sso_kpt3', 'sso_kpt4'), id=75, color=[128, 0, 255]),
- 76:
- dict(link=('sso_kpt1', 'sso_kpt6'), id=76, color=[128, 0, 255]),
- 77:
- dict(link=('sso_kpt6', 'sso_kpt25'), id=77, color=[128, 0, 255]),
- 78:
- dict(link=('sso_kpt25', 'sso_kpt24'), id=78, color=[128, 0, 255]),
- 79:
- dict(link=('sso_kpt24', 'sso_kpt23'), id=79, color=[128, 0, 255]),
- 80:
- dict(link=('sso_kpt23', 'sso_kpt22'), id=80, color=[128, 0, 255]),
- 81:
- dict(link=('sso_kpt22', 'sso_kpt21'), id=81, color=[128, 0, 255]),
- 82:
- dict(link=('sso_kpt21', 'sso_kpt20'), id=82, color=[128, 0, 255]),
- 83:
- dict(link=('sso_kpt20', 'sso_kpt19'), id=83, color=[128, 0, 255]),
- 84:
- dict(link=('sso_kpt19', 'sso_kpt18'), id=84, color=[128, 0, 255]),
- 85:
- dict(link=('sso_kpt18', 'sso_kpt17'), id=85, color=[128, 0, 255]),
- 86:
- dict(link=('sso_kpt17', 'sso_kpt29'), id=86, color=[128, 0, 255]),
- 87:
- dict(link=('sso_kpt29', 'sso_kpt28'), id=87, color=[128, 0, 255]),
- 88:
- dict(link=('sso_kpt28', 'sso_kpt27'), id=88, color=[128, 0, 255]),
- 89:
- dict(link=('sso_kpt27', 'sso_kpt26'), id=89, color=[128, 0, 255]),
- 90:
- dict(link=('sso_kpt26', 'sso_kpt5'), id=90, color=[128, 0, 255]),
- 91:
- dict(link=('sso_kpt5', 'sso_kpt6'), id=91, color=[128, 0, 255]),
- 92:
- dict(link=('lso_kpt1', 'lso_kpt2'), id=92, color=[0, 128, 255]),
- 93:
- dict(link=('lso_kpt2', 'lso_kpt7'), id=93, color=[0, 128, 255]),
- 94:
- dict(link=('lso_kpt7', 'lso_kpt8'), id=94, color=[0, 128, 255]),
- 95:
- dict(link=('lso_kpt8', 'lso_kpt9'), id=95, color=[0, 128, 255]),
- 96:
- dict(link=('lso_kpt9', 'lso_kpt10'), id=96, color=[0, 128, 255]),
- 97:
- dict(link=('lso_kpt10', 'lso_kpt11'), id=97, color=[0, 128, 255]),
- 98:
- dict(link=('lso_kpt11', 'lso_kpt12'), id=98, color=[0, 128, 255]),
- 99:
- dict(link=('lso_kpt12', 'lso_kpt13'), id=99, color=[0, 128, 255]),
- 100:
- dict(link=('lso_kpt13', 'lso_kpt14'), id=100, color=[0, 128, 255]),
- 101:
- dict(link=('lso_kpt14', 'lso_kpt15'), id=101, color=[0, 128, 255]),
- 102:
- dict(link=('lso_kpt15', 'lso_kpt16'), id=102, color=[0, 128, 255]),
- 103:
- dict(link=('lso_kpt16', 'lso_kpt17'), id=103, color=[0, 128, 255]),
- 104:
- dict(link=('lso_kpt17', 'lso_kpt18'), id=104, color=[0, 128, 255]),
- 105:
- dict(link=('lso_kpt18', 'lso_kpt19'), id=105, color=[0, 128, 255]),
- 106:
- dict(link=('lso_kpt19', 'lso_kpt20'), id=106, color=[0, 128, 255]),
- 107:
- dict(link=('lso_kpt20', 'lso_kpt39'), id=107, color=[0, 128, 255]),
- 108:
- dict(link=('lso_kpt39', 'lso_kpt38'), id=108, color=[0, 128, 255]),
- 109:
- dict(link=('lso_kpt38', 'lso_kpt4'), id=109, color=[0, 128, 255]),
- 110:
- dict(link=('lso_kpt4', 'lso_kpt3'), id=110, color=[0, 128, 255]),
- 111:
- dict(link=('lso_kpt3', 'lso_kpt2'), id=111, color=[0, 128, 255]),
- 112:
- dict(link=('lso_kpt1', 'lso_kpt6'), id=112, color=[0, 128, 255]),
- 113:
- dict(link=('lso_kpt6', 'lso_kpt33'), id=113, color=[0, 128, 255]),
- 114:
- dict(link=('lso_kpt33', 'lso_kpt32'), id=114, color=[0, 128, 255]),
- 115:
- dict(link=('lso_kpt32', 'lso_kpt31'), id=115, color=[0, 128, 255]),
- 116:
- dict(link=('lso_kpt31', 'lso_kpt30'), id=116, color=[0, 128, 255]),
- 117:
- dict(link=('lso_kpt30', 'lso_kpt29'), id=117, color=[0, 128, 255]),
- 118:
- dict(link=('lso_kpt29', 'lso_kpt28'), id=118, color=[0, 128, 255]),
- 119:
- dict(link=('lso_kpt28', 'lso_kpt27'), id=119, color=[0, 128, 255]),
- 120:
- dict(link=('lso_kpt27', 'lso_kpt26'), id=120, color=[0, 128, 255]),
- 121:
- dict(link=('lso_kpt26', 'lso_kpt25'), id=121, color=[0, 128, 255]),
- 122:
- dict(link=('lso_kpt25', 'lso_kpt24'), id=122, color=[0, 128, 255]),
- 123:
- dict(link=('lso_kpt24', 'lso_kpt23'), id=123, color=[0, 128, 255]),
- 124:
- dict(link=('lso_kpt23', 'lso_kpt22'), id=124, color=[0, 128, 255]),
- 125:
- dict(link=('lso_kpt22', 'lso_kpt21'), id=125, color=[0, 128, 255]),
- 126:
- dict(link=('lso_kpt21', 'lso_kpt37'), id=126, color=[0, 128, 255]),
- 127:
- dict(link=('lso_kpt37', 'lso_kpt36'), id=127, color=[0, 128, 255]),
- 128:
- dict(link=('lso_kpt36', 'lso_kpt35'), id=128, color=[0, 128, 255]),
- 129:
- dict(link=('lso_kpt35', 'lso_kpt34'), id=129, color=[0, 128, 255]),
- 130:
- dict(link=('lso_kpt34', 'lso_kpt5'), id=130, color=[0, 128, 255]),
- 131:
- dict(link=('lso_kpt5', 'lso_kpt6'), id=131, color=[0, 128, 255]),
- 132:
- dict(link=('vest_kpt1', 'vest_kpt2'), id=132, color=[0, 128, 128]),
- 133:
- dict(link=('vest_kpt2', 'vest_kpt7'), id=133, color=[0, 128, 128]),
- 134:
- dict(link=('vest_kpt7', 'vest_kpt8'), id=134, color=[0, 128, 128]),
- 135:
- dict(link=('vest_kpt8', 'vest_kpt9'), id=135, color=[0, 128, 128]),
- 136:
- dict(link=('vest_kpt9', 'vest_kpt10'), id=136, color=[0, 128, 128]),
- 137:
- dict(link=('vest_kpt10', 'vest_kpt11'), id=137, color=[0, 128, 128]),
- 138:
- dict(link=('vest_kpt11', 'vest_kpt12'), id=138, color=[0, 128, 128]),
- 139:
- dict(link=('vest_kpt12', 'vest_kpt13'), id=139, color=[0, 128, 128]),
- 140:
- dict(link=('vest_kpt13', 'vest_kpt14'), id=140, color=[0, 128, 128]),
- 141:
- dict(link=('vest_kpt14', 'vest_kpt15'), id=141, color=[0, 128, 128]),
- 142:
- dict(link=('vest_kpt15', 'vest_kpt6'), id=142, color=[0, 128, 128]),
- 143:
- dict(link=('vest_kpt6', 'vest_kpt1'), id=143, color=[0, 128, 128]),
- 144:
- dict(link=('vest_kpt2', 'vest_kpt3'), id=144, color=[0, 128, 128]),
- 145:
- dict(link=('vest_kpt3', 'vest_kpt4'), id=145, color=[0, 128, 128]),
- 146:
- dict(link=('vest_kpt4', 'vest_kpt5'), id=146, color=[0, 128, 128]),
- 147:
- dict(link=('vest_kpt5', 'vest_kpt6'), id=147, color=[0, 128, 128]),
- 148:
- dict(link=('sling_kpt1', 'sling_kpt2'), id=148, color=[0, 0, 128]),
- 149:
- dict(link=('sling_kpt2', 'sling_kpt8'), id=149, color=[0, 0, 128]),
- 150:
- dict(link=('sling_kpt8', 'sling_kpt9'), id=150, color=[0, 0, 128]),
- 151:
- dict(link=('sling_kpt9', 'sling_kpt10'), id=151, color=[0, 0, 128]),
- 152:
- dict(link=('sling_kpt10', 'sling_kpt11'), id=152, color=[0, 0, 128]),
- 153:
- dict(link=('sling_kpt11', 'sling_kpt12'), id=153, color=[0, 0, 128]),
- 154:
- dict(link=('sling_kpt12', 'sling_kpt13'), id=154, color=[0, 0, 128]),
- 155:
- dict(link=('sling_kpt13', 'sling_kpt14'), id=155, color=[0, 0, 128]),
- 156:
- dict(link=('sling_kpt14', 'sling_kpt6'), id=156, color=[0, 0, 128]),
- 157:
- dict(link=('sling_kpt2', 'sling_kpt7'), id=157, color=[0, 0, 128]),
- 158:
- dict(link=('sling_kpt6', 'sling_kpt15'), id=158, color=[0, 0, 128]),
- 159:
- dict(link=('sling_kpt2', 'sling_kpt3'), id=159, color=[0, 0, 128]),
- 160:
- dict(link=('sling_kpt3', 'sling_kpt4'), id=160, color=[0, 0, 128]),
- 161:
- dict(link=('sling_kpt4', 'sling_kpt5'), id=161, color=[0, 0, 128]),
- 162:
- dict(link=('sling_kpt5', 'sling_kpt6'), id=162, color=[0, 0, 128]),
- 163:
- dict(link=('sling_kpt1', 'sling_kpt6'), id=163, color=[0, 0, 128]),
- 164:
- dict(
- link=('shorts_kpt1', 'shorts_kpt4'), id=164, color=[128, 128,
- 128]),
- 165:
- dict(
- link=('shorts_kpt4', 'shorts_kpt5'), id=165, color=[128, 128,
- 128]),
- 166:
- dict(
- link=('shorts_kpt5', 'shorts_kpt6'), id=166, color=[128, 128,
- 128]),
- 167:
- dict(
- link=('shorts_kpt6', 'shorts_kpt7'), id=167, color=[128, 128,
- 128]),
- 168:
- dict(
- link=('shorts_kpt7', 'shorts_kpt8'), id=168, color=[128, 128,
- 128]),
- 169:
- dict(
- link=('shorts_kpt8', 'shorts_kpt9'), id=169, color=[128, 128,
- 128]),
- 170:
- dict(
- link=('shorts_kpt9', 'shorts_kpt10'),
- id=170,
- color=[128, 128, 128]),
- 171:
- dict(
- link=('shorts_kpt10', 'shorts_kpt3'),
- id=171,
- color=[128, 128, 128]),
- 172:
- dict(
- link=('shorts_kpt3', 'shorts_kpt2'), id=172, color=[128, 128,
- 128]),
- 173:
- dict(
- link=('shorts_kpt2', 'shorts_kpt1'), id=173, color=[128, 128,
- 128]),
- 174:
- dict(
- link=('trousers_kpt1', 'trousers_kpt4'),
- id=174,
- color=[128, 0, 128]),
- 175:
- dict(
- link=('trousers_kpt4', 'trousers_kpt5'),
- id=175,
- color=[128, 0, 128]),
- 176:
- dict(
- link=('trousers_kpt5', 'trousers_kpt6'),
- id=176,
- color=[128, 0, 128]),
- 177:
- dict(
- link=('trousers_kpt6', 'trousers_kpt7'),
- id=177,
- color=[128, 0, 128]),
- 178:
- dict(
- link=('trousers_kpt7', 'trousers_kpt8'),
- id=178,
- color=[128, 0, 128]),
- 179:
- dict(
- link=('trousers_kpt8', 'trousers_kpt9'),
- id=179,
- color=[128, 0, 128]),
- 180:
- dict(
- link=('trousers_kpt9', 'trousers_kpt10'),
- id=180,
- color=[128, 0, 128]),
- 181:
- dict(
- link=('trousers_kpt10', 'trousers_kpt11'),
- id=181,
- color=[128, 0, 128]),
- 182:
- dict(
- link=('trousers_kpt11', 'trousers_kpt12'),
- id=182,
- color=[128, 0, 128]),
- 183:
- dict(
- link=('trousers_kpt12', 'trousers_kpt13'),
- id=183,
- color=[128, 0, 128]),
- 184:
- dict(
- link=('trousers_kpt13', 'trousers_kpt14'),
- id=184,
- color=[128, 0, 128]),
- 185:
- dict(
- link=('trousers_kpt14', 'trousers_kpt3'),
- id=185,
- color=[128, 0, 128]),
- 186:
- dict(
- link=('trousers_kpt3', 'trousers_kpt2'),
- id=186,
- color=[128, 0, 128]),
- 187:
- dict(
- link=('trousers_kpt2', 'trousers_kpt1'),
- id=187,
- color=[128, 0, 128]),
- 188:
- dict(link=('skirt_kpt1', 'skirt_kpt4'), id=188, color=[64, 128, 128]),
- 189:
- dict(link=('skirt_kpt4', 'skirt_kpt5'), id=189, color=[64, 128, 128]),
- 190:
- dict(link=('skirt_kpt5', 'skirt_kpt6'), id=190, color=[64, 128, 128]),
- 191:
- dict(link=('skirt_kpt6', 'skirt_kpt7'), id=191, color=[64, 128, 128]),
- 192:
- dict(link=('skirt_kpt7', 'skirt_kpt8'), id=192, color=[64, 128, 128]),
- 193:
- dict(link=('skirt_kpt8', 'skirt_kpt3'), id=193, color=[64, 128, 128]),
- 194:
- dict(link=('skirt_kpt3', 'skirt_kpt2'), id=194, color=[64, 128, 128]),
- 195:
- dict(link=('skirt_kpt2', 'skirt_kpt1'), id=195, color=[64, 128, 128]),
- 196:
- dict(link=('ssd_kpt1', 'ssd_kpt2'), id=196, color=[64, 64, 128]),
- 197:
- dict(link=('ssd_kpt2', 'ssd_kpt7'), id=197, color=[64, 64, 128]),
- 198:
- dict(link=('ssd_kpt7', 'ssd_kpt8'), id=198, color=[64, 64, 128]),
- 199:
- dict(link=('ssd_kpt8', 'ssd_kpt9'), id=199, color=[64, 64, 128]),
- 200:
- dict(link=('ssd_kpt9', 'ssd_kpt10'), id=200, color=[64, 64, 128]),
- 201:
- dict(link=('ssd_kpt10', 'ssd_kpt11'), id=201, color=[64, 64, 128]),
- 202:
- dict(link=('ssd_kpt11', 'ssd_kpt12'), id=202, color=[64, 64, 128]),
- 203:
- dict(link=('ssd_kpt12', 'ssd_kpt13'), id=203, color=[64, 64, 128]),
- 204:
- dict(link=('ssd_kpt13', 'ssd_kpt14'), id=204, color=[64, 64, 128]),
- 205:
- dict(link=('ssd_kpt14', 'ssd_kpt15'), id=205, color=[64, 64, 128]),
- 206:
- dict(link=('ssd_kpt15', 'ssd_kpt16'), id=206, color=[64, 64, 128]),
- 207:
- dict(link=('ssd_kpt16', 'ssd_kpt17'), id=207, color=[64, 64, 128]),
- 208:
- dict(link=('ssd_kpt17', 'ssd_kpt18'), id=208, color=[64, 64, 128]),
- 209:
- dict(link=('ssd_kpt18', 'ssd_kpt19'), id=209, color=[64, 64, 128]),
- 210:
- dict(link=('ssd_kpt19', 'ssd_kpt20'), id=210, color=[64, 64, 128]),
- 211:
- dict(link=('ssd_kpt20', 'ssd_kpt21'), id=211, color=[64, 64, 128]),
- 212:
- dict(link=('ssd_kpt21', 'ssd_kpt22'), id=212, color=[64, 64, 128]),
- 213:
- dict(link=('ssd_kpt22', 'ssd_kpt23'), id=213, color=[64, 64, 128]),
- 214:
- dict(link=('ssd_kpt23', 'ssd_kpt24'), id=214, color=[64, 64, 128]),
- 215:
- dict(link=('ssd_kpt24', 'ssd_kpt25'), id=215, color=[64, 64, 128]),
- 216:
- dict(link=('ssd_kpt25', 'ssd_kpt26'), id=216, color=[64, 64, 128]),
- 217:
- dict(link=('ssd_kpt26', 'ssd_kpt27'), id=217, color=[64, 64, 128]),
- 218:
- dict(link=('ssd_kpt27', 'ssd_kpt28'), id=218, color=[64, 64, 128]),
- 219:
- dict(link=('ssd_kpt28', 'ssd_kpt29'), id=219, color=[64, 64, 128]),
- 220:
- dict(link=('ssd_kpt29', 'ssd_kpt6'), id=220, color=[64, 64, 128]),
- 221:
- dict(link=('ssd_kpt6', 'ssd_kpt5'), id=221, color=[64, 64, 128]),
- 222:
- dict(link=('ssd_kpt5', 'ssd_kpt4'), id=222, color=[64, 64, 128]),
- 223:
- dict(link=('ssd_kpt4', 'ssd_kpt3'), id=223, color=[64, 64, 128]),
- 224:
- dict(link=('ssd_kpt3', 'ssd_kpt2'), id=224, color=[64, 64, 128]),
- 225:
- dict(link=('ssd_kpt6', 'ssd_kpt1'), id=225, color=[64, 64, 128]),
- 226:
- dict(link=('lsd_kpt1', 'lsd_kpt2'), id=226, color=[128, 64, 0]),
- 227:
-    dict(link=('lsd_kpt2', 'lsd_kpt7'), id=227, color=[128, 64, 0]),
- 228:
- dict(link=('lsd_kpt7', 'lsd_kpt8'), id=228, color=[128, 64, 0]),
- 229:
- dict(link=('lsd_kpt8', 'lsd_kpt9'), id=229, color=[128, 64, 0]),
- 230:
- dict(link=('lsd_kpt9', 'lsd_kpt10'), id=230, color=[128, 64, 0]),
- 231:
- dict(link=('lsd_kpt10', 'lsd_kpt11'), id=231, color=[128, 64, 0]),
- 232:
- dict(link=('lsd_kpt11', 'lsd_kpt12'), id=232, color=[128, 64, 0]),
- 233:
- dict(link=('lsd_kpt12', 'lsd_kpt13'), id=233, color=[128, 64, 0]),
- 234:
- dict(link=('lsd_kpt13', 'lsd_kpt14'), id=234, color=[128, 64, 0]),
- 235:
- dict(link=('lsd_kpt14', 'lsd_kpt15'), id=235, color=[128, 64, 0]),
- 236:
- dict(link=('lsd_kpt15', 'lsd_kpt16'), id=236, color=[128, 64, 0]),
- 237:
- dict(link=('lsd_kpt16', 'lsd_kpt17'), id=237, color=[128, 64, 0]),
- 238:
- dict(link=('lsd_kpt17', 'lsd_kpt18'), id=238, color=[128, 64, 0]),
- 239:
- dict(link=('lsd_kpt18', 'lsd_kpt19'), id=239, color=[128, 64, 0]),
- 240:
- dict(link=('lsd_kpt19', 'lsd_kpt20'), id=240, color=[128, 64, 0]),
- 241:
- dict(link=('lsd_kpt20', 'lsd_kpt21'), id=241, color=[128, 64, 0]),
- 242:
- dict(link=('lsd_kpt21', 'lsd_kpt22'), id=242, color=[128, 64, 0]),
- 243:
- dict(link=('lsd_kpt22', 'lsd_kpt23'), id=243, color=[128, 64, 0]),
- 244:
- dict(link=('lsd_kpt23', 'lsd_kpt24'), id=244, color=[128, 64, 0]),
- 245:
- dict(link=('lsd_kpt24', 'lsd_kpt25'), id=245, color=[128, 64, 0]),
- 246:
- dict(link=('lsd_kpt25', 'lsd_kpt26'), id=246, color=[128, 64, 0]),
- 247:
- dict(link=('lsd_kpt26', 'lsd_kpt27'), id=247, color=[128, 64, 0]),
- 248:
- dict(link=('lsd_kpt27', 'lsd_kpt28'), id=248, color=[128, 64, 0]),
- 249:
- dict(link=('lsd_kpt28', 'lsd_kpt29'), id=249, color=[128, 64, 0]),
- 250:
- dict(link=('lsd_kpt29', 'lsd_kpt30'), id=250, color=[128, 64, 0]),
- 251:
- dict(link=('lsd_kpt30', 'lsd_kpt31'), id=251, color=[128, 64, 0]),
- 252:
- dict(link=('lsd_kpt31', 'lsd_kpt32'), id=252, color=[128, 64, 0]),
- 253:
- dict(link=('lsd_kpt32', 'lsd_kpt33'), id=253, color=[128, 64, 0]),
- 254:
- dict(link=('lsd_kpt33', 'lsd_kpt34'), id=254, color=[128, 64, 0]),
- 255:
- dict(link=('lsd_kpt34', 'lsd_kpt35'), id=255, color=[128, 64, 0]),
- 256:
- dict(link=('lsd_kpt35', 'lsd_kpt36'), id=256, color=[128, 64, 0]),
- 257:
- dict(link=('lsd_kpt36', 'lsd_kpt37'), id=257, color=[128, 64, 0]),
- 258:
- dict(link=('lsd_kpt37', 'lsd_kpt6'), id=258, color=[128, 64, 0]),
- 259:
- dict(link=('lsd_kpt6', 'lsd_kpt5'), id=259, color=[128, 64, 0]),
- 260:
- dict(link=('lsd_kpt5', 'lsd_kpt4'), id=260, color=[128, 64, 0]),
- 261:
- dict(link=('lsd_kpt4', 'lsd_kpt3'), id=261, color=[128, 64, 0]),
- 262:
- dict(link=('lsd_kpt3', 'lsd_kpt2'), id=262, color=[128, 64, 0]),
- 263:
- dict(link=('lsd_kpt6', 'lsd_kpt1'), id=263, color=[128, 64, 0]),
- 264:
- dict(link=('vd_kpt1', 'vd_kpt2'), id=264, color=[128, 64, 255]),
- 265:
- dict(link=('vd_kpt2', 'vd_kpt7'), id=265, color=[128, 64, 255]),
- 266:
- dict(link=('vd_kpt7', 'vd_kpt8'), id=266, color=[128, 64, 255]),
- 267:
- dict(link=('vd_kpt8', 'vd_kpt9'), id=267, color=[128, 64, 255]),
- 268:
- dict(link=('vd_kpt9', 'vd_kpt10'), id=268, color=[128, 64, 255]),
- 269:
- dict(link=('vd_kpt10', 'vd_kpt11'), id=269, color=[128, 64, 255]),
- 270:
- dict(link=('vd_kpt11', 'vd_kpt12'), id=270, color=[128, 64, 255]),
- 271:
- dict(link=('vd_kpt12', 'vd_kpt13'), id=271, color=[128, 64, 255]),
- 272:
- dict(link=('vd_kpt13', 'vd_kpt14'), id=272, color=[128, 64, 255]),
- 273:
- dict(link=('vd_kpt14', 'vd_kpt15'), id=273, color=[128, 64, 255]),
- 274:
- dict(link=('vd_kpt15', 'vd_kpt16'), id=274, color=[128, 64, 255]),
- 275:
- dict(link=('vd_kpt16', 'vd_kpt17'), id=275, color=[128, 64, 255]),
- 276:
- dict(link=('vd_kpt17', 'vd_kpt18'), id=276, color=[128, 64, 255]),
- 277:
- dict(link=('vd_kpt18', 'vd_kpt19'), id=277, color=[128, 64, 255]),
- 278:
- dict(link=('vd_kpt19', 'vd_kpt6'), id=278, color=[128, 64, 255]),
- 279:
- dict(link=('vd_kpt6', 'vd_kpt5'), id=279, color=[128, 64, 255]),
- 280:
- dict(link=('vd_kpt5', 'vd_kpt4'), id=280, color=[128, 64, 255]),
- 281:
- dict(link=('vd_kpt4', 'vd_kpt3'), id=281, color=[128, 64, 255]),
- 282:
- dict(link=('vd_kpt3', 'vd_kpt2'), id=282, color=[128, 64, 255]),
- 283:
- dict(link=('vd_kpt6', 'vd_kpt1'), id=283, color=[128, 64, 255]),
- 284:
- dict(link=('sd_kpt1', 'sd_kpt2'), id=284, color=[128, 64, 0]),
- 285:
- dict(link=('sd_kpt2', 'sd_kpt8'), id=285, color=[128, 64, 0]),
- 286:
- dict(link=('sd_kpt8', 'sd_kpt9'), id=286, color=[128, 64, 0]),
- 287:
- dict(link=('sd_kpt9', 'sd_kpt10'), id=287, color=[128, 64, 0]),
- 288:
- dict(link=('sd_kpt10', 'sd_kpt11'), id=288, color=[128, 64, 0]),
- 289:
- dict(link=('sd_kpt11', 'sd_kpt12'), id=289, color=[128, 64, 0]),
- 290:
- dict(link=('sd_kpt12', 'sd_kpt13'), id=290, color=[128, 64, 0]),
- 291:
- dict(link=('sd_kpt13', 'sd_kpt14'), id=291, color=[128, 64, 0]),
- 292:
- dict(link=('sd_kpt14', 'sd_kpt15'), id=292, color=[128, 64, 0]),
- 293:
- dict(link=('sd_kpt15', 'sd_kpt16'), id=293, color=[128, 64, 0]),
- 294:
- dict(link=('sd_kpt16', 'sd_kpt17'), id=294, color=[128, 64, 0]),
- 295:
- dict(link=('sd_kpt17', 'sd_kpt18'), id=295, color=[128, 64, 0]),
- 296:
- dict(link=('sd_kpt18', 'sd_kpt6'), id=296, color=[128, 64, 0]),
- 297:
- dict(link=('sd_kpt6', 'sd_kpt5'), id=297, color=[128, 64, 0]),
- 298:
- dict(link=('sd_kpt5', 'sd_kpt4'), id=298, color=[128, 64, 0]),
- 299:
- dict(link=('sd_kpt4', 'sd_kpt3'), id=299, color=[128, 64, 0]),
- 300:
- dict(link=('sd_kpt3', 'sd_kpt2'), id=300, color=[128, 64, 0]),
- 301:
- dict(link=('sd_kpt2', 'sd_kpt7'), id=301, color=[128, 64, 0]),
- 302:
- dict(link=('sd_kpt6', 'sd_kpt19'), id=302, color=[128, 64, 0]),
- 303:
- dict(link=('sd_kpt6', 'sd_kpt1'), id=303, color=[128, 64, 0])
- }),
- joint_weights=[
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0
- ],
- sigmas=[])
-param_scheduler = [
- dict(
- type='LinearLR', begin=0, end=500, start_factor=0.001, by_epoch=False),
- dict(
- type='MultiStepLR',
- begin=0,
- end=150,
- milestones=[100, 130],
- gamma=0.1,
- by_epoch=True)
-]
-optim_wrapper = dict(optimizer=dict(type='Adam', lr=0.0005))
-auto_scale_lr = dict(base_batch_size=512)
-dataset_type = 'DeepFashion2Dataset'
-data_mode = 'topdown'
-data_root = 'data/deepfashion2/'
-codec = dict(
- type='MSRAHeatmap', input_size=(192, 256), heatmap_size=(48, 64), sigma=2)
-train_pipeline = [
- dict(type='LoadImage'),
- dict(type='GetBBoxCenterScale'),
- dict(type='RandomFlip', direction='horizontal'),
- dict(
- type='RandomBBoxTransform',
- shift_prob=0,
- rotate_factor=60,
- scale_factor=(0.75, 1.25)),
- dict(type='TopdownAffine', input_size=(192, 256)),
- dict(
- type='GenerateTarget',
- encoder=dict(
- type='MSRAHeatmap',
- input_size=(192, 256),
- heatmap_size=(48, 64),
- sigma=2)),
- dict(type='PackPoseInputs')
-]
-val_pipeline = [
- dict(type='LoadImage', backend_args=dict(backend='local')),
- dict(type='GetBBoxCenterScale'),
- dict(type='TopdownAffine', input_size=(192, 256)),
- dict(type='PackPoseInputs')
-]
-train_dataloader = dict(
- batch_size=64,
- num_workers=6,
- persistent_workers=True,
- sampler=dict(type='DefaultSampler', shuffle=True),
- dataset=dict(
- type='DeepFashion2Dataset',
- data_root='data/deepfashion2/',
- data_mode='topdown',
- ann_file='train/deepfashion2_short_sleeved_dress.json',
- data_prefix=dict(img='train/image/'),
- pipeline=[
- dict(type='LoadImage'),
- dict(type='GetBBoxCenterScale'),
- dict(type='RandomFlip', direction='horizontal'),
- dict(
- type='RandomBBoxTransform',
- shift_prob=0,
- rotate_factor=60,
- scale_factor=(0.75, 1.25)),
- dict(type='TopdownAffine', input_size=(192, 256)),
- dict(
- type='GenerateTarget',
- encoder=dict(
- type='MSRAHeatmap',
- input_size=(192, 256),
- heatmap_size=(48, 64),
- sigma=2)),
- dict(type='PackPoseInputs')
- ]))
-val_dataloader = dict(
- batch_size=32,
- num_workers=6,
- persistent_workers=True,
- drop_last=False,
- sampler=dict(type='DefaultSampler', shuffle=False),
- dataset=dict(
- type='DeepFashion2Dataset',
- data_root='data/deepfashion2/',
- data_mode='topdown',
- ann_file='validation/deepfashion2_short_sleeved_dress.json',
- data_prefix=dict(img='validation/image/'),
- test_mode=True,
- pipeline=[
- dict(type='LoadImage', backend_args=dict(backend='local')),
- dict(type='GetBBoxCenterScale'),
- dict(type='TopdownAffine', input_size=(192, 256)),
- dict(type='PackPoseInputs')
- ]))
-test_dataloader = dict(
- batch_size=32,
- num_workers=6,
- persistent_workers=True,
- drop_last=False,
- sampler=dict(type='DefaultSampler', shuffle=False),
- dataset=dict(
- type='DeepFashion2Dataset',
- data_root='data/deepfashion2/',
- data_mode='topdown',
- ann_file='validation/deepfashion2_short_sleeved_dress.json',
- data_prefix=dict(img='validation/image/'),
- test_mode=True,
- pipeline=[
- dict(type='LoadImage', backend_args=dict(backend='local')),
- dict(type='GetBBoxCenterScale'),
- dict(type='TopdownAffine', input_size=(192, 256)),
- dict(type='PackPoseInputs')
- ]))
-channel_cfg = dict(
- num_output_channels=294,
- dataset_joints=294,
- dataset_channel=[[
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
- 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
- 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55,
- 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73,
- 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
- 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107,
- 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121,
- 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135,
- 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149,
- 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163,
- 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177,
- 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191,
- 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205,
- 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219,
- 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233,
- 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247,
- 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261,
- 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275,
- 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289,
- 290, 291, 292, 293
- ]],
- inference_channel=[
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
- 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
- 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55,
- 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73,
- 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
- 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107,
- 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121,
- 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135,
- 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149,
- 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163,
- 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177,
- 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191,
- 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205,
- 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219,
- 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233,
- 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247,
- 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261,
- 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275,
- 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289,
- 290, 291, 292, 293
- ])
-model = dict(
- type='TopdownPoseEstimator',
- data_preprocessor=dict(
- type='PoseDataPreprocessor',
- mean=[123.675, 116.28, 103.53],
- std=[58.395, 57.12, 57.375],
- bgr_to_rgb=True),
- backbone=dict(
- type='ResNet',
- depth=50,
- init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
- head=dict(
- type='HeatmapHead',
- in_channels=2048,
- out_channels=294,
- loss=dict(type='KeypointMSELoss', use_target_weight=True),
- decoder=dict(
- type='MSRAHeatmap',
- input_size=(192, 256),
- heatmap_size=(48, 64),
- sigma=2)),
- test_cfg=dict(flip_test=True, flip_mode='heatmap', shift_heatmap=True))
-val_evaluator = [
- dict(type='PCKAccuracy', thr=0.2),
- dict(type='AUC'),
- dict(type='EPE')
-]
-test_evaluator = [
- dict(type='PCKAccuracy', thr=0.2),
- dict(type='AUC'),
- dict(type='EPE')
-]
-launcher = 'pytorch'
-work_dir = './work_dirs/td_hm_res50_4xb64-150e_deepfashion2_short_sleeved_dress_256x192'
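For context, a dict-style config like the deleted file above is normally consumed by MMPose 1.x through MMEngine's Runner. The sketch below is illustrative only; the local config filename and the work_dir override are assumptions, not values taken from the repository.

# Minimal sketch, assuming MMPose 1.x with MMEngine is installed and the config
# shown above has been saved locally under the (hypothetical) filename below.
from mmengine.config import Config
from mmengine.runner import Runner

cfg = Config.fromfile(
    'td_hm_res50_4xb64-150e_deepfashion2_short_sleeved_dress_256x192.py')
cfg.work_dir = './work_dirs/deepfashion2_short_sleeved_dress'  # hypothetical output dir

runner = Runner.from_cfg(cfg)  # builds the model, dataloaders and LR schedule from the config
runner.train()                 # trains with the Adam optimizer and MultiStepLR milestones above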
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet34_cifar.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet34_cifar.py
deleted file mode 100644
index 55d033bc30bcbde7aef8e57ad950f59c248ad74b..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet34_cifar.py
+++ /dev/null
@@ -1,16 +0,0 @@
-# model settings
-model = dict(
- type='ImageClassifier',
- backbone=dict(
- type='ResNet_CIFAR',
- depth=34,
- num_stages=4,
- out_indices=(3, ),
- style='pytorch'),
- neck=dict(type='GlobalAveragePooling'),
- head=dict(
- type='LinearClsHead',
- num_classes=10,
- in_channels=512,
- loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
- ))
diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/LayoutWritable.js b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/LayoutWritable.js
deleted file mode 100644
index 2f4da9393ed0595b45c252e31c70cd0c2c446d6f..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/LayoutWritable.js
+++ /dev/null
@@ -1,9 +0,0 @@
-import { writable } from "svelte/store";
-
-export const isloading_writable = writable(false);
-export const is_init_writable = writable(false);
-export const cancel_writable = writable(false);
-export const refresh_chats_writable = writable([]);
-export const refresh_chats_writable_empty = writable(false);
-export const curr_model_writable = writable(0);
-export const curr_model_writable_string = writable("");
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/ymlachievements.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/ymlachievements.d.ts
deleted file mode 100644
index b993b156a3131302c71c241d51a95d414bb64c88..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/ymlachievements.d.ts
+++ /dev/null
@@ -1,2 +0,0 @@
-import Achievements from './logic/achievements/ymlachievements/Achievements';
-export default Achievements;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/holygrail/methods/CreatExpandContainer.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/holygrail/methods/CreatExpandContainer.js
deleted file mode 100644
index 702ea005792329787a498422b7aa05d0a7f16c07..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/holygrail/methods/CreatExpandContainer.js
+++ /dev/null
@@ -1,11 +0,0 @@
-import Sizer from '../../sizer/Sizer.js';
-
-var CreatExpandContainer = function (scene, orientation) {
- var container = new Sizer(scene, {
- orientation: orientation
- })
- scene.add.existing(container);
- return container;
-}
-
-export default CreatExpandContainer;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateCanvas.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateCanvas.js
deleted file mode 100644
index 9425db8b74ee279d76002603c74e663d648c8d93..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateCanvas.js
+++ /dev/null
@@ -1,23 +0,0 @@
-import MergeStyle from './utils/MergeStyle.js';
-import Canvas from '../../canvas/Canvas.js';
-import SetTextureProperties from './utils/SetTextureProperties.js';
-
-
-var CreateCanvas = function (scene, data, view, styles, customBuilders) {
- data = MergeStyle(data, styles);
-
- var width = data.width || 1;
- var height = data.height || 1;
- var gameObject = new Canvas(scene, 0, 0, width, height);
-
- if (data.fill !== undefined) {
- gameObject.fill(data.fill);
- }
-
- SetTextureProperties(gameObject, data);
-
- scene.add.existing(gameObject);
- return gameObject;
-}
-
-export default CreateCanvas;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateNinePatch2.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateNinePatch2.js
deleted file mode 100644
index b4477b0e8ca3600651282068630055ac6ea3aa09..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateNinePatch2.js
+++ /dev/null
@@ -1,12 +0,0 @@
-import MergeStyle from './utils/MergeStyle.js';
-import NinePatch from '../../ninepatch2/NinePatch.js';
-
-var CreateNinePatch2 = function (scene, data, view, styles, customBuilders) {
- data = MergeStyle(data, styles);
-
- var gameObject = new NinePatch(scene, data);
-
- scene.add.existing(gameObject);
- return gameObject;
-}
-export default CreateNinePatch2;
\ No newline at end of file
diff --git a/spaces/Agusbs98/automatic-ecg-diagnosis/nets/nets.py b/spaces/Agusbs98/automatic-ecg-diagnosis/nets/nets.py
deleted file mode 100644
index 052695901923d58df8b865810e38ec1fd8edd913..0000000000000000000000000000000000000000
--- a/spaces/Agusbs98/automatic-ecg-diagnosis/nets/nets.py
+++ /dev/null
@@ -1,73 +0,0 @@
-
-import os, sys
-from libs import *
-from .layers import *
-from .modules import *
-from .bblocks import *
-from .backbones import *
-
-class LightX3ECG(nn.Module):
- def __init__(self,
- base_channels = 64,
- num_classes = 1,
- ):
- super(LightX3ECG, self).__init__()
- self.backbone_0 = LightSEResNet18(base_channels)
- self.backbone_1 = LightSEResNet18(base_channels)
- self.backbone_2 = LightSEResNet18(base_channels)
- self.lw_attention = nn.Sequential(
- nn.Linear(
- base_channels*24, base_channels*8,
- ),
- nn.BatchNorm1d(base_channels*8),
- nn.ReLU(),
- nn.Dropout(0.3),
- nn.Linear(
- base_channels*8, 3,
- ),
- )
-
- self.classifier = nn.Sequential(
- nn.Dropout(0.2),
- nn.Linear(
- base_channels*8, num_classes,
- ),
- )
-
- def forward(self,
- input,
- return_attention_scores = False,
- ):
- features_0 = self.backbone_0(input[:, 0, :].unsqueeze(1)).squeeze(2)
- features_1 = self.backbone_1(input[:, 1, :].unsqueeze(1)).squeeze(2)
- features_2 = self.backbone_2(input[:, 2, :].unsqueeze(1)).squeeze(2)
- attention_scores = torch.sigmoid(
- self.lw_attention(
- torch.cat(
- [
- features_0,
- features_1,
- features_2,
- ],
- dim = 1,
- )
- )
- )
- merged_features = torch.sum(
- torch.stack(
- [
- features_0,
- features_1,
- features_2,
- ],
- dim = 1,
- )*attention_scores.unsqueeze(-1),
- dim = 1,
- )
-
- output = self.classifier(merged_features)
-
- if not return_attention_scores:
- return output
- else:
- return output, attention_scores
\ No newline at end of file
diff --git a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/README.md b/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/README.md
deleted file mode 100644
index 1b24e6efdb04cb1460e4fe3257d2303677c5a0e1..0000000000000000000000000000000000000000
--- a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Multilingual Anime TTS
-emoji: 🎙🐴
-colorFrom: green
-colorTo: gray
-sdk: gradio
-sdk_version: 3.7
-app_file: app.py
-pinned: false
-duplicated_from: Plachta/VITS-Umamusume-voice-synthesizer
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AlphaDragon/Voice-Clone/app.py b/spaces/AlphaDragon/Voice-Clone/app.py
deleted file mode 100644
index ca085e087f220d46b95e5455ee6da92bf72ce764..0000000000000000000000000000000000000000
--- a/spaces/AlphaDragon/Voice-Clone/app.py
+++ /dev/null
@@ -1,80 +0,0 @@
-import gradio as gr
-from TTS.api import TTS
-
-# Init TTS
-tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False, gpu=False)
-zh_tts = TTS(model_name="tts_models/zh-CN/baker/tacotron2-DDC-GST", progress_bar=False, gpu=False)
-de_tts = TTS(model_name="tts_models/de/thorsten/vits", gpu=False)
-es_tts = TTS(model_name="tts_models/es/mai/tacotron2-DDC", progress_bar=False, gpu=False)
-
-def text_to_speech(text: str, speaker_wav, speaker_wav_file, language: str):
- if speaker_wav_file and not speaker_wav:
- speaker_wav = speaker_wav_file
- file_path = "output.wav"
- if language == "zh-CN":
- # if speaker_wav is not None:
- # zh_tts.tts_to_file(text, speaker_wav=speaker_wav, file_path=file_path)
- # else:
- zh_tts.tts_to_file(text, file_path=file_path)
- elif language == "de":
- # if speaker_wav is not None:
- # de_tts.tts_to_file(text, speaker_wav=speaker_wav, file_path=file_path)
- # else:
- de_tts.tts_to_file(text, file_path=file_path)
- elif language == "es":
- # if speaker_wav is not None:
- # es_tts.tts_to_file(text, speaker_wav=speaker_wav, file_path=file_path)
- # else:
- es_tts.tts_to_file(text, file_path=file_path)
- else:
- if speaker_wav is not None:
- tts.tts_to_file(text, speaker_wav=speaker_wav, language=language, file_path=file_path)
- else:
- tts.tts_to_file(text, speaker=tts.speakers[0], language=language, file_path=file_path)
- return file_path
-
-
-
-title = "Voice-Cloning-Demo"
-
-def toggle(choice):
- if choice == "mic":
- return gr.update(visible=True, value=None), gr.update(visible=False, value=None)
- else:
- return gr.update(visible=False, value=None), gr.update(visible=True, value=None)
-
-def handle_language_change(choice):
- if choice == "zh-CN" or choice == "de" or choice == "es":
- return gr.update(visible=False), gr.update(visible=False), gr.update(visible=False)
- else:
- return gr.update(visible=True), gr.update(visible=True), gr.update(visible=True)
-
-warning_text = """Please note that Chinese, German, and Spanish are currently not supported for voice cloning."""
-
-with gr.Blocks() as demo:
- with gr.Row():
- with gr.Column():
- text_input = gr.Textbox(label="Input the text", value="", max_lines=3)
- lan_input = gr.Radio(label="Language", choices=["en", "fr-fr", "pt-br", "zh-CN", "de", "es"], value="en")
-            gr.Markdown(warning_text)
- radio = gr.Radio(["mic", "file"], value="mic",
- label="How would you like to upload your audio?")
- audio_input_mic = gr.Audio(label="Voice to clone", source="microphone", type="filepath", visible=True)
- audio_input_file = gr.Audio(label="Voice to clone", type="filepath", visible=False)
-
- with gr.Row():
- with gr.Column():
- btn_clear = gr.Button("Clear")
- with gr.Column():
- btn = gr.Button("Submit", variant="primary")
- with gr.Column():
- audio_output = gr.Audio(label="Output")
-
- # gr.Examples(examples, fn=inference, inputs=[audio_file, text_input],
- # outputs=audio_output, cache_examples=True)
- btn.click(text_to_speech, inputs=[text_input, audio_input_mic,
- audio_input_file, lan_input], outputs=audio_output)
- radio.change(toggle, radio, [audio_input_mic, audio_input_file])
- lan_input.change(handle_language_change, lan_input, [radio, audio_input_mic, audio_input_file])
-
-demo.launch(enable_queue=True)
\ No newline at end of file
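Stripped of the Gradio UI, the app above drives Coqui TTS through its Python API; the sketch below pulls out that core call. The model name, speaker/language arguments and output path are copied from the file, while the input sentence is made up.

# Minimal sketch of the Coqui TTS call behind the app above; assumes the TTS
# package is installed and the model checkpoint can be downloaded.
from TTS.api import TTS

tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts",
          progress_bar=False, gpu=False)
tts.tts_to_file("This is a short demo sentence.",  # made-up input text
                speaker=tts.speakers[0], language="en", file_path="output.wav")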
diff --git a/spaces/Amrrs/pdf-table-extractor/README.md b/spaces/Amrrs/pdf-table-extractor/README.md
deleted file mode 100644
index 46660a936b00d80857d8c27fef7c7150590e24f2..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/pdf-table-extractor/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Pdf Table Extractor
-emoji: 📄
-colorFrom: yellow
-colorTo: green
-sdk: streamlit
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio`, `streamlit`, or `static`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_models_vae.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_models_vae.py
deleted file mode 100644
index 0abb2f056e3cf882755be13343d76b1c98c1e1f7..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_models_vae.py
+++ /dev/null
@@ -1,600 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import gc
-import unittest
-
-import torch
-from parameterized import parameterized
-
-from diffusers import AsymmetricAutoencoderKL, AutoencoderKL
-from diffusers.utils import floats_tensor, load_hf_numpy, require_torch_gpu, slow, torch_all_close, torch_device
-from diffusers.utils.import_utils import is_xformers_available
-from diffusers.utils.testing_utils import enable_full_determinism
-
-from .test_modeling_common import ModelTesterMixin, UNetTesterMixin
-
-
-enable_full_determinism()
-
-
-class AutoencoderKLTests(ModelTesterMixin, UNetTesterMixin, unittest.TestCase):
- model_class = AutoencoderKL
- main_input_name = "sample"
- base_precision = 1e-2
-
- @property
- def dummy_input(self):
- batch_size = 4
- num_channels = 3
- sizes = (32, 32)
-
- image = floats_tensor((batch_size, num_channels) + sizes).to(torch_device)
-
- return {"sample": image}
-
- @property
- def input_shape(self):
- return (3, 32, 32)
-
- @property
- def output_shape(self):
- return (3, 32, 32)
-
- def prepare_init_args_and_inputs_for_common(self):
- init_dict = {
- "block_out_channels": [32, 64],
- "in_channels": 3,
- "out_channels": 3,
- "down_block_types": ["DownEncoderBlock2D", "DownEncoderBlock2D"],
- "up_block_types": ["UpDecoderBlock2D", "UpDecoderBlock2D"],
- "latent_channels": 4,
- }
- inputs_dict = self.dummy_input
- return init_dict, inputs_dict
-
- def test_forward_signature(self):
- pass
-
- def test_training(self):
- pass
-
- @unittest.skipIf(torch_device == "mps", "Gradient checkpointing skipped on MPS")
- def test_gradient_checkpointing(self):
- # enable deterministic behavior for gradient checkpointing
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
- model = self.model_class(**init_dict)
- model.to(torch_device)
-
- assert not model.is_gradient_checkpointing and model.training
-
- out = model(**inputs_dict).sample
- # run the backwards pass on the model. For backwards pass, for simplicity purpose,
- # we won't calculate the loss and rather backprop on out.sum()
- model.zero_grad()
-
- labels = torch.randn_like(out)
- loss = (out - labels).mean()
- loss.backward()
-
- # re-instantiate the model now enabling gradient checkpointing
- model_2 = self.model_class(**init_dict)
- # clone model
- model_2.load_state_dict(model.state_dict())
- model_2.to(torch_device)
- model_2.enable_gradient_checkpointing()
-
- assert model_2.is_gradient_checkpointing and model_2.training
-
- out_2 = model_2(**inputs_dict).sample
- # run the backwards pass on the model. For backwards pass, for simplicity purpose,
- # we won't calculate the loss and rather backprop on out.sum()
- model_2.zero_grad()
- loss_2 = (out_2 - labels).mean()
- loss_2.backward()
-
- # compare the output and parameters gradients
- self.assertTrue((loss - loss_2).abs() < 1e-5)
- named_params = dict(model.named_parameters())
- named_params_2 = dict(model_2.named_parameters())
- for name, param in named_params.items():
- self.assertTrue(torch_all_close(param.grad.data, named_params_2[name].grad.data, atol=5e-5))
-
- def test_from_pretrained_hub(self):
- model, loading_info = AutoencoderKL.from_pretrained("fusing/autoencoder-kl-dummy", output_loading_info=True)
- self.assertIsNotNone(model)
- self.assertEqual(len(loading_info["missing_keys"]), 0)
-
- model.to(torch_device)
- image = model(**self.dummy_input)
-
- assert image is not None, "Make sure output is not None"
-
- def test_output_pretrained(self):
- model = AutoencoderKL.from_pretrained("fusing/autoencoder-kl-dummy")
- model = model.to(torch_device)
- model.eval()
-
- if torch_device == "mps":
- generator = torch.manual_seed(0)
- else:
- generator = torch.Generator(device=torch_device).manual_seed(0)
-
- image = torch.randn(
- 1,
- model.config.in_channels,
- model.config.sample_size,
- model.config.sample_size,
- generator=torch.manual_seed(0),
- )
- image = image.to(torch_device)
- with torch.no_grad():
- output = model(image, sample_posterior=True, generator=generator).sample
-
- output_slice = output[0, -1, -3:, -3:].flatten().cpu()
-
- # Since the VAE Gaussian prior's generator is seeded on the appropriate device,
- # the expected output slices are not the same for CPU and GPU.
- if torch_device == "mps":
- expected_output_slice = torch.tensor(
- [
- -4.0078e-01,
- -3.8323e-04,
- -1.2681e-01,
- -1.1462e-01,
- 2.0095e-01,
- 1.0893e-01,
- -8.8247e-02,
- -3.0361e-01,
- -9.8644e-03,
- ]
- )
- elif torch_device == "cpu":
- expected_output_slice = torch.tensor(
- [-0.1352, 0.0878, 0.0419, -0.0818, -0.1069, 0.0688, -0.1458, -0.4446, -0.0026]
- )
- else:
- expected_output_slice = torch.tensor(
- [-0.2421, 0.4642, 0.2507, -0.0438, 0.0682, 0.3160, -0.2018, -0.0727, 0.2485]
- )
-
- self.assertTrue(torch_all_close(output_slice, expected_output_slice, rtol=1e-2))
-
-
-class AsymmetricAutoencoderKLTests(ModelTesterMixin, UNetTesterMixin, unittest.TestCase):
- model_class = AsymmetricAutoencoderKL
- main_input_name = "sample"
- base_precision = 1e-2
-
- @property
- def dummy_input(self):
- batch_size = 4
- num_channels = 3
- sizes = (32, 32)
-
- image = floats_tensor((batch_size, num_channels) + sizes).to(torch_device)
- mask = torch.ones((batch_size, 1) + sizes).to(torch_device)
-
- return {"sample": image, "mask": mask}
-
- @property
- def input_shape(self):
- return (3, 32, 32)
-
- @property
- def output_shape(self):
- return (3, 32, 32)
-
- def prepare_init_args_and_inputs_for_common(self):
- init_dict = {
- "in_channels": 3,
- "out_channels": 3,
- "down_block_types": ["DownEncoderBlock2D", "DownEncoderBlock2D"],
- "down_block_out_channels": [32, 64],
- "layers_per_down_block": 1,
- "up_block_types": ["UpDecoderBlock2D", "UpDecoderBlock2D"],
- "up_block_out_channels": [32, 64],
- "layers_per_up_block": 1,
- "act_fn": "silu",
- "latent_channels": 4,
- "norm_num_groups": 32,
- "sample_size": 32,
- "scaling_factor": 0.18215,
- }
- inputs_dict = self.dummy_input
- return init_dict, inputs_dict
-
- def test_forward_signature(self):
- pass
-
- def test_forward_with_norm_groups(self):
- pass
-
-
-@slow
-class AutoencoderKLIntegrationTests(unittest.TestCase):
- def get_file_format(self, seed, shape):
- return f"gaussian_noise_s={seed}_shape={'_'.join([str(s) for s in shape])}.npy"
-
- def tearDown(self):
- # clean up the VRAM after each test
- super().tearDown()
- gc.collect()
- torch.cuda.empty_cache()
-
- def get_sd_image(self, seed=0, shape=(4, 3, 512, 512), fp16=False):
- dtype = torch.float16 if fp16 else torch.float32
- image = torch.from_numpy(load_hf_numpy(self.get_file_format(seed, shape))).to(torch_device).to(dtype)
- return image
-
- def get_sd_vae_model(self, model_id="CompVis/stable-diffusion-v1-4", fp16=False):
- revision = "fp16" if fp16 else None
- torch_dtype = torch.float16 if fp16 else torch.float32
-
- model = AutoencoderKL.from_pretrained(
- model_id,
- subfolder="vae",
- torch_dtype=torch_dtype,
- revision=revision,
- )
- model.to(torch_device)
-
- return model
-
- def get_generator(self, seed=0):
- if torch_device == "mps":
- return torch.manual_seed(seed)
- return torch.Generator(device=torch_device).manual_seed(seed)
-
- @parameterized.expand(
- [
- # fmt: off
- [33, [-0.1603, 0.9878, -0.0495, -0.0790, -0.2709, 0.8375, -0.2060, -0.0824], [-0.2395, 0.0098, 0.0102, -0.0709, -0.2840, -0.0274, -0.0718, -0.1824]],
- [47, [-0.2376, 0.1168, 0.1332, -0.4840, -0.2508, -0.0791, -0.0493, -0.4089], [0.0350, 0.0847, 0.0467, 0.0344, -0.0842, -0.0547, -0.0633, -0.1131]],
- # fmt: on
- ]
- )
- def test_stable_diffusion(self, seed, expected_slice, expected_slice_mps):
- model = self.get_sd_vae_model()
- image = self.get_sd_image(seed)
- generator = self.get_generator(seed)
-
- with torch.no_grad():
- sample = model(image, generator=generator, sample_posterior=True).sample
-
- assert sample.shape == image.shape
-
- output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu()
- expected_output_slice = torch.tensor(expected_slice_mps if torch_device == "mps" else expected_slice)
-
- assert torch_all_close(output_slice, expected_output_slice, atol=3e-3)
-
- @parameterized.expand(
- [
- # fmt: off
- [33, [-0.0513, 0.0289, 1.3799, 0.2166, -0.2573, -0.0871, 0.5103, -0.0999]],
- [47, [-0.4128, -0.1320, -0.3704, 0.1965, -0.4116, -0.2332, -0.3340, 0.2247]],
- # fmt: on
- ]
- )
- @require_torch_gpu
- def test_stable_diffusion_fp16(self, seed, expected_slice):
- model = self.get_sd_vae_model(fp16=True)
- image = self.get_sd_image(seed, fp16=True)
- generator = self.get_generator(seed)
-
- with torch.no_grad():
- sample = model(image, generator=generator, sample_posterior=True).sample
-
- assert sample.shape == image.shape
-
- output_slice = sample[-1, -2:, :2, -2:].flatten().float().cpu()
- expected_output_slice = torch.tensor(expected_slice)
-
- assert torch_all_close(output_slice, expected_output_slice, atol=1e-2)
-
- @parameterized.expand(
- [
- # fmt: off
- [33, [-0.1609, 0.9866, -0.0487, -0.0777, -0.2716, 0.8368, -0.2055, -0.0814], [-0.2395, 0.0098, 0.0102, -0.0709, -0.2840, -0.0274, -0.0718, -0.1824]],
- [47, [-0.2377, 0.1147, 0.1333, -0.4841, -0.2506, -0.0805, -0.0491, -0.4085], [0.0350, 0.0847, 0.0467, 0.0344, -0.0842, -0.0547, -0.0633, -0.1131]],
- # fmt: on
- ]
- )
- def test_stable_diffusion_mode(self, seed, expected_slice, expected_slice_mps):
- model = self.get_sd_vae_model()
- image = self.get_sd_image(seed)
-
- with torch.no_grad():
- sample = model(image).sample
-
- assert sample.shape == image.shape
-
- output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu()
- expected_output_slice = torch.tensor(expected_slice_mps if torch_device == "mps" else expected_slice)
-
- assert torch_all_close(output_slice, expected_output_slice, atol=3e-3)
-
- @parameterized.expand(
- [
- # fmt: off
- [13, [-0.2051, -0.1803, -0.2311, -0.2114, -0.3292, -0.3574, -0.2953, -0.3323]],
- [37, [-0.2632, -0.2625, -0.2199, -0.2741, -0.4539, -0.4990, -0.3720, -0.4925]],
- # fmt: on
- ]
- )
- @require_torch_gpu
- def test_stable_diffusion_decode(self, seed, expected_slice):
- model = self.get_sd_vae_model()
- encoding = self.get_sd_image(seed, shape=(3, 4, 64, 64))
-
- with torch.no_grad():
- sample = model.decode(encoding).sample
-
- assert list(sample.shape) == [3, 3, 512, 512]
-
- output_slice = sample[-1, -2:, :2, -2:].flatten().cpu()
- expected_output_slice = torch.tensor(expected_slice)
-
- assert torch_all_close(output_slice, expected_output_slice, atol=1e-3)
-
- @parameterized.expand(
- [
- # fmt: off
- [27, [-0.0369, 0.0207, -0.0776, -0.0682, -0.1747, -0.1930, -0.1465, -0.2039]],
- [16, [-0.1628, -0.2134, -0.2747, -0.2642, -0.3774, -0.4404, -0.3687, -0.4277]],
- # fmt: on
- ]
- )
- @require_torch_gpu
- def test_stable_diffusion_decode_fp16(self, seed, expected_slice):
- model = self.get_sd_vae_model(fp16=True)
- encoding = self.get_sd_image(seed, shape=(3, 4, 64, 64), fp16=True)
-
- with torch.no_grad():
- sample = model.decode(encoding).sample
-
- assert list(sample.shape) == [3, 3, 512, 512]
-
- output_slice = sample[-1, -2:, :2, -2:].flatten().float().cpu()
- expected_output_slice = torch.tensor(expected_slice)
-
- assert torch_all_close(output_slice, expected_output_slice, atol=5e-3)
-
- @parameterized.expand([(13,), (16,), (27,)])
- @require_torch_gpu
- @unittest.skipIf(not is_xformers_available(), reason="xformers is not required when using PyTorch 2.0.")
- def test_stable_diffusion_decode_xformers_vs_2_0_fp16(self, seed):
- model = self.get_sd_vae_model(fp16=True)
- encoding = self.get_sd_image(seed, shape=(3, 4, 64, 64), fp16=True)
-
- with torch.no_grad():
- sample = model.decode(encoding).sample
-
- model.enable_xformers_memory_efficient_attention()
- with torch.no_grad():
- sample_2 = model.decode(encoding).sample
-
- assert list(sample.shape) == [3, 3, 512, 512]
-
- assert torch_all_close(sample, sample_2, atol=1e-1)
-
- @parameterized.expand([(13,), (16,), (37,)])
- @require_torch_gpu
- @unittest.skipIf(not is_xformers_available(), reason="xformers is not required when using PyTorch 2.0.")
- def test_stable_diffusion_decode_xformers_vs_2_0(self, seed):
- model = self.get_sd_vae_model()
- encoding = self.get_sd_image(seed, shape=(3, 4, 64, 64))
-
- with torch.no_grad():
- sample = model.decode(encoding).sample
-
- model.enable_xformers_memory_efficient_attention()
- with torch.no_grad():
- sample_2 = model.decode(encoding).sample
-
- assert list(sample.shape) == [3, 3, 512, 512]
-
- assert torch_all_close(sample, sample_2, atol=1e-2)
-
- @parameterized.expand(
- [
- # fmt: off
- [33, [-0.3001, 0.0918, -2.6984, -3.9720, -3.2099, -5.0353, 1.7338, -0.2065, 3.4267]],
- [47, [-1.5030, -4.3871, -6.0355, -9.1157, -1.6661, -2.7853, 2.1607, -5.0823, 2.5633]],
- # fmt: on
- ]
- )
- def test_stable_diffusion_encode_sample(self, seed, expected_slice):
- model = self.get_sd_vae_model()
- image = self.get_sd_image(seed)
- generator = self.get_generator(seed)
-
- with torch.no_grad():
- dist = model.encode(image).latent_dist
- sample = dist.sample(generator=generator)
-
- assert list(sample.shape) == [image.shape[0], 4] + [i // 8 for i in image.shape[2:]]
-
- output_slice = sample[0, -1, -3:, -3:].flatten().cpu()
- expected_output_slice = torch.tensor(expected_slice)
-
- tolerance = 3e-3 if torch_device != "mps" else 1e-2
- assert torch_all_close(output_slice, expected_output_slice, atol=tolerance)
-
- def test_stable_diffusion_model_local(self):
- model_id = "stabilityai/sd-vae-ft-mse"
- model_1 = AutoencoderKL.from_pretrained(model_id).to(torch_device)
-
- url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors"
- model_2 = AutoencoderKL.from_single_file(url).to(torch_device)
- image = self.get_sd_image(33)
-
- with torch.no_grad():
- sample_1 = model_1(image).sample
- sample_2 = model_2(image).sample
-
- assert sample_1.shape == sample_2.shape
-
- output_slice_1 = sample_1[-1, -2:, -2:, :2].flatten().float().cpu()
- output_slice_2 = sample_2[-1, -2:, -2:, :2].flatten().float().cpu()
-
- assert torch_all_close(output_slice_1, output_slice_2, atol=3e-3)
-
-
-@slow
-class AsymmetricAutoencoderKLIntegrationTests(unittest.TestCase):
- def get_file_format(self, seed, shape):
- return f"gaussian_noise_s={seed}_shape={'_'.join([str(s) for s in shape])}.npy"
-
- def tearDown(self):
- # clean up the VRAM after each test
- super().tearDown()
- gc.collect()
- torch.cuda.empty_cache()
-
- def get_sd_image(self, seed=0, shape=(4, 3, 512, 512), fp16=False):
- dtype = torch.float16 if fp16 else torch.float32
- image = torch.from_numpy(load_hf_numpy(self.get_file_format(seed, shape))).to(torch_device).to(dtype)
- return image
-
- def get_sd_vae_model(self, model_id="cross-attention/asymmetric-autoencoder-kl-x-1-5", fp16=False):
- revision = "main"
- torch_dtype = torch.float32
-
- model = AsymmetricAutoencoderKL.from_pretrained(
- model_id,
- torch_dtype=torch_dtype,
- revision=revision,
- )
- model.to(torch_device).eval()
-
- return model
-
- def get_generator(self, seed=0):
- if torch_device == "mps":
- return torch.manual_seed(seed)
- return torch.Generator(device=torch_device).manual_seed(seed)
-
- @parameterized.expand(
- [
- # fmt: off
- [33, [-0.0344, 0.2912, 0.1687, -0.0137, -0.3462, 0.3552, -0.1337, 0.1078], [-0.1603, 0.9878, -0.0495, -0.0790, -0.2709, 0.8375, -0.2060, -0.0824]],
- [47, [0.4400, 0.0543, 0.2873, 0.2946, 0.0553, 0.0839, -0.1585, 0.2529], [-0.2376, 0.1168, 0.1332, -0.4840, -0.2508, -0.0791, -0.0493, -0.4089]],
- # fmt: on
- ]
- )
- def test_stable_diffusion(self, seed, expected_slice, expected_slice_mps):
- model = self.get_sd_vae_model()
- image = self.get_sd_image(seed)
- generator = self.get_generator(seed)
-
- with torch.no_grad():
- sample = model(image, generator=generator, sample_posterior=True).sample
-
- assert sample.shape == image.shape
-
- output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu()
- expected_output_slice = torch.tensor(expected_slice_mps if torch_device == "mps" else expected_slice)
-
- assert torch_all_close(output_slice, expected_output_slice, atol=5e-3)
-
- @parameterized.expand(
- [
- # fmt: off
- [33, [-0.0340, 0.2870, 0.1698, -0.0105, -0.3448, 0.3529, -0.1321, 0.1097], [-0.0344, 0.2912, 0.1687, -0.0137, -0.3462, 0.3552, -0.1337, 0.1078]],
- [47, [0.4397, 0.0550, 0.2873, 0.2946, 0.0567, 0.0855, -0.1580, 0.2531], [0.4397, 0.0550, 0.2873, 0.2946, 0.0567, 0.0855, -0.1580, 0.2531]],
- # fmt: on
- ]
- )
- def test_stable_diffusion_mode(self, seed, expected_slice, expected_slice_mps):
- model = self.get_sd_vae_model()
- image = self.get_sd_image(seed)
-
- with torch.no_grad():
- sample = model(image).sample
-
- assert sample.shape == image.shape
-
- output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu()
- expected_output_slice = torch.tensor(expected_slice_mps if torch_device == "mps" else expected_slice)
-
- assert torch_all_close(output_slice, expected_output_slice, atol=3e-3)
-
- @parameterized.expand(
- [
- # fmt: off
- [13, [-0.0521, -0.2939, 0.1540, -0.1855, -0.5936, -0.3138, -0.4579, -0.2275]],
- [37, [-0.1820, -0.4345, -0.0455, -0.2923, -0.8035, -0.5089, -0.4795, -0.3106]],
- # fmt: on
- ]
- )
- @require_torch_gpu
- def test_stable_diffusion_decode(self, seed, expected_slice):
- model = self.get_sd_vae_model()
- encoding = self.get_sd_image(seed, shape=(3, 4, 64, 64))
-
- with torch.no_grad():
- sample = model.decode(encoding).sample
-
- assert list(sample.shape) == [3, 3, 512, 512]
-
- output_slice = sample[-1, -2:, :2, -2:].flatten().cpu()
- expected_output_slice = torch.tensor(expected_slice)
-
- assert torch_all_close(output_slice, expected_output_slice, atol=2e-3)
-
- @parameterized.expand([(13,), (16,), (37,)])
- @require_torch_gpu
- @unittest.skipIf(not is_xformers_available(), reason="xformers is not required when using PyTorch 2.0.")
- def test_stable_diffusion_decode_xformers_vs_2_0(self, seed):
- model = self.get_sd_vae_model()
- encoding = self.get_sd_image(seed, shape=(3, 4, 64, 64))
-
- with torch.no_grad():
- sample = model.decode(encoding).sample
-
- model.enable_xformers_memory_efficient_attention()
- with torch.no_grad():
- sample_2 = model.decode(encoding).sample
-
- assert list(sample.shape) == [3, 3, 512, 512]
-
- assert torch_all_close(sample, sample_2, atol=5e-2)
-
- @parameterized.expand(
- [
- # fmt: off
- [33, [-0.3001, 0.0918, -2.6984, -3.9720, -3.2099, -5.0353, 1.7338, -0.2065, 3.4267]],
- [47, [-1.5030, -4.3871, -6.0355, -9.1157, -1.6661, -2.7853, 2.1607, -5.0823, 2.5633]],
- # fmt: on
- ]
- )
- def test_stable_diffusion_encode_sample(self, seed, expected_slice):
- model = self.get_sd_vae_model()
- image = self.get_sd_image(seed)
- generator = self.get_generator(seed)
-
- with torch.no_grad():
- dist = model.encode(image).latent_dist
- sample = dist.sample(generator=generator)
-
- assert list(sample.shape) == [image.shape[0], 4] + [i // 8 for i in image.shape[2:]]
-
- output_slice = sample[0, -1, -3:, -3:].flatten().cpu()
- expected_output_slice = torch.tensor(expected_slice)
-
- tolerance = 3e-3 if torch_device != "mps" else 1e-2
- assert torch_all_close(output_slice, expected_output_slice, atol=tolerance)
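The integration tests above exercise the encode/decode round trip of diffusers' AutoencoderKL. Outside the test harness the same API looks roughly like the sketch below; the checkpoint id is the one used by the tests, and the random input tensor is purely illustrative.

# Illustrative sketch of the encode/decode API exercised by the tests above;
# assumes diffusers and torch are installed and runs on CPU with random data.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
vae.eval()

image = torch.randn(1, 3, 512, 512)  # stand-in for a preprocessed RGB image
with torch.no_grad():
    latents = vae.encode(image).latent_dist.sample()  # (1, 4, 64, 64) posterior sample
    recon = vae.decode(latents).sample                # (1, 3, 512, 512) reconstruction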
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/gfl/gfl_x101_32x4d_fpn_mstrain_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/gfl/gfl_x101_32x4d_fpn_mstrain_2x_coco.py
deleted file mode 100644
index 4e00a059f8d2e58d23d6b77764456be351bd3115..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/gfl/gfl_x101_32x4d_fpn_mstrain_2x_coco.py
+++ /dev/null
@@ -1,15 +0,0 @@
-_base_ = './gfl_r50_fpn_mstrain_2x_coco.py'
-model = dict(
- type='GFL',
- pretrained='open-mmlab://resnext101_32x4d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=32,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch'))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/retinanet/retinanet_r50_fpn_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/retinanet/retinanet_r50_fpn_2x_coco.py
deleted file mode 100644
index 927915fa8c63d380cc4bd62a580ffaad8b1ce386..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/retinanet/retinanet_r50_fpn_2x_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = './retinanet_r50_fpn_1x_coco.py'
-# learning policy
-lr_config = dict(step=[16, 22])
-runner = dict(type='EpochBasedRunner', max_epochs=24)
diff --git a/spaces/AndySAnker/DeepStruc/predict.py b/spaces/AndySAnker/DeepStruc/predict.py
deleted file mode 100644
index 3ad08905adeec57368e47045a0c28ae1ecb7bd28..0000000000000000000000000000000000000000
--- a/spaces/AndySAnker/DeepStruc/predict.py
+++ /dev/null
@@ -1,30 +0,0 @@
-import sys, argparse
-import streamlit as st
-from tools.module import Net
-import torch, random, time
-import numpy as np
-import pytorch_lightning as pl
-from tools.utils import get_data, format_predictions, plot_ls, get_model, save_predictions
-
-def main(args):
- time_start = time.time()
- data, data_name, project_name = get_data(args)
- model_path, model_arch = get_model(args.model)
-
- Net(model_arch=model_arch)
- DeepStruc = Net.load_from_checkpoint(model_path,model_arch=model_arch)
- #start_time = time.time()
- xyz_pred, latent_space, kl, mu, sigma = DeepStruc(data, mode='prior', sigma_scale=args.sigma)
- #st.write("one prediction: " , time.time() - start_time)
- #start_time = time.time()
- #for i in range(1000):
- # xyz_pred, latent_space, kl, mu, sigma = DeepStruc(data, mode='prior', sigma_scale=args.sigma)
- #st.write("thousand predictions: " , time.time() - start_time)
-
- samling_pairs = format_predictions(latent_space, data_name, mu, sigma, args.sigma)
-
- df, mk_dir, index_highlight = samling_pairs, project_name, args.index_plot
-
- these_cords = save_predictions(xyz_pred, samling_pairs, project_name, model_arch, args)
-
- return df, index_highlight, these_cords
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/deform_conv.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/deform_conv.py
deleted file mode 100644
index a3f8c75ee774823eea334e3b3732af6a18f55038..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/deform_conv.py
+++ /dev/null
@@ -1,405 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Tuple, Union
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch import Tensor
-from torch.autograd import Function
-from torch.autograd.function import once_differentiable
-from torch.nn.modules.utils import _pair, _single
-
-from annotator.uniformer.mmcv.utils import deprecated_api_warning
-from ..cnn import CONV_LAYERS
-from ..utils import ext_loader, print_log
-
-ext_module = ext_loader.load_ext('_ext', [
- 'deform_conv_forward', 'deform_conv_backward_input',
- 'deform_conv_backward_parameters'
-])
-
-
-class DeformConv2dFunction(Function):
-
- @staticmethod
- def symbolic(g,
- input,
- offset,
- weight,
- stride,
- padding,
- dilation,
- groups,
- deform_groups,
- bias=False,
- im2col_step=32):
- return g.op(
- 'mmcv::MMCVDeformConv2d',
- input,
- offset,
- weight,
- stride_i=stride,
- padding_i=padding,
- dilation_i=dilation,
- groups_i=groups,
- deform_groups_i=deform_groups,
- bias_i=bias,
- im2col_step_i=im2col_step)
-
- @staticmethod
- def forward(ctx,
- input,
- offset,
- weight,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- deform_groups=1,
- bias=False,
- im2col_step=32):
- if input is not None and input.dim() != 4:
- raise ValueError(
-                f'Expected 4D tensor as input, got {input.dim()}D tensor '
-                'instead.')
- assert bias is False, 'Only support bias is False.'
- ctx.stride = _pair(stride)
- ctx.padding = _pair(padding)
- ctx.dilation = _pair(dilation)
- ctx.groups = groups
- ctx.deform_groups = deform_groups
- ctx.im2col_step = im2col_step
-
- # When pytorch version >= 1.6.0, amp is adopted for fp16 mode;
- # amp won't cast the type of model (float32), but "offset" is cast
- # to float16 by nn.Conv2d automatically, leading to the type
- # mismatch with input (when it is float32) or weight.
- # The flag for whether to use fp16 or amp is the type of "offset",
- # we cast weight and input to temporarily support fp16 and amp
- # whatever the pytorch version is.
- input = input.type_as(offset)
- weight = weight.type_as(input)
- ctx.save_for_backward(input, offset, weight)
-
- output = input.new_empty(
- DeformConv2dFunction._output_size(ctx, input, weight))
-
- ctx.bufs_ = [input.new_empty(0), input.new_empty(0)] # columns, ones
-
- cur_im2col_step = min(ctx.im2col_step, input.size(0))
- assert (input.size(0) %
- cur_im2col_step) == 0, 'im2col step must divide batchsize'
- ext_module.deform_conv_forward(
- input,
- weight,
- offset,
- output,
- ctx.bufs_[0],
- ctx.bufs_[1],
- kW=weight.size(3),
- kH=weight.size(2),
- dW=ctx.stride[1],
- dH=ctx.stride[0],
- padW=ctx.padding[1],
- padH=ctx.padding[0],
- dilationW=ctx.dilation[1],
- dilationH=ctx.dilation[0],
- group=ctx.groups,
- deformable_group=ctx.deform_groups,
- im2col_step=cur_im2col_step)
- return output
-
- @staticmethod
- @once_differentiable
- def backward(ctx, grad_output):
- input, offset, weight = ctx.saved_tensors
-
- grad_input = grad_offset = grad_weight = None
-
- cur_im2col_step = min(ctx.im2col_step, input.size(0))
- assert (input.size(0) % cur_im2col_step
- ) == 0, 'batch size must be divisible by im2col_step'
-
- grad_output = grad_output.contiguous()
- if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]:
- grad_input = torch.zeros_like(input)
- grad_offset = torch.zeros_like(offset)
- ext_module.deform_conv_backward_input(
- input,
- offset,
- grad_output,
- grad_input,
- grad_offset,
- weight,
- ctx.bufs_[0],
- kW=weight.size(3),
- kH=weight.size(2),
- dW=ctx.stride[1],
- dH=ctx.stride[0],
- padW=ctx.padding[1],
- padH=ctx.padding[0],
- dilationW=ctx.dilation[1],
- dilationH=ctx.dilation[0],
- group=ctx.groups,
- deformable_group=ctx.deform_groups,
- im2col_step=cur_im2col_step)
-
- if ctx.needs_input_grad[2]:
- grad_weight = torch.zeros_like(weight)
- ext_module.deform_conv_backward_parameters(
- input,
- offset,
- grad_output,
- grad_weight,
- ctx.bufs_[0],
- ctx.bufs_[1],
- kW=weight.size(3),
- kH=weight.size(2),
- dW=ctx.stride[1],
- dH=ctx.stride[0],
- padW=ctx.padding[1],
- padH=ctx.padding[0],
- dilationW=ctx.dilation[1],
- dilationH=ctx.dilation[0],
- group=ctx.groups,
- deformable_group=ctx.deform_groups,
- scale=1,
- im2col_step=cur_im2col_step)
-
- return grad_input, grad_offset, grad_weight, \
- None, None, None, None, None, None, None
-
- @staticmethod
- def _output_size(ctx, input, weight):
- channels = weight.size(0)
- output_size = (input.size(0), channels)
- for d in range(input.dim() - 2):
- in_size = input.size(d + 2)
- pad = ctx.padding[d]
- kernel = ctx.dilation[d] * (weight.size(d + 2) - 1) + 1
- stride_ = ctx.stride[d]
- output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, )
- if not all(map(lambda s: s > 0, output_size)):
- raise ValueError(
- 'convolution input is too small (output would be ' +
- 'x'.join(map(str, output_size)) + ')')
- return output_size
-
-
-deform_conv2d = DeformConv2dFunction.apply
-
-
-class DeformConv2d(nn.Module):
- r"""Deformable 2D convolution.
-
- Applies a deformable 2D convolution over an input signal composed of
- several input planes. DeformConv2d was described in the paper
-    `Deformable Convolutional Networks
-    <https://arxiv.org/abs/1703.06211>`_
-
- Note:
- The argument ``im2col_step`` was added in version 1.3.17, which means
- number of samples processed by the ``im2col_cuda_kernel`` per call.
- It enables users to define ``batch_size`` and ``im2col_step`` more
-        flexibly and solved `issue mmcv#1440
-        <https://github.com/open-mmlab/mmcv/issues/1440>`_.
-
- Args:
- in_channels (int): Number of channels in the input image.
- out_channels (int): Number of channels produced by the convolution.
- kernel_size(int, tuple): Size of the convolving kernel.
- stride(int, tuple): Stride of the convolution. Default: 1.
- padding (int or tuple): Zero-padding added to both sides of the input.
- Default: 0.
- dilation (int or tuple): Spacing between kernel elements. Default: 1.
- groups (int): Number of blocked connections from input.
- channels to output channels. Default: 1.
- deform_groups (int): Number of deformable group partitions.
- bias (bool): If True, adds a learnable bias to the output.
- Default: False.
- im2col_step (int): Number of samples processed by im2col_cuda_kernel
- per call. It will work when ``batch_size`` > ``im2col_step``, but
- ``batch_size`` must be divisible by ``im2col_step``. Default: 32.
- `New in version 1.3.17.`
- """
-
- @deprecated_api_warning({'deformable_groups': 'deform_groups'},
- cls_name='DeformConv2d')
- def __init__(self,
- in_channels: int,
- out_channels: int,
- kernel_size: Union[int, Tuple[int, ...]],
- stride: Union[int, Tuple[int, ...]] = 1,
- padding: Union[int, Tuple[int, ...]] = 0,
- dilation: Union[int, Tuple[int, ...]] = 1,
- groups: int = 1,
- deform_groups: int = 1,
- bias: bool = False,
- im2col_step: int = 32) -> None:
- super(DeformConv2d, self).__init__()
-
- assert not bias, \
- f'bias={bias} is not supported in DeformConv2d.'
- assert in_channels % groups == 0, \
-            f'in_channels {in_channels} is not divisible by groups {groups}'
-        assert out_channels % groups == 0, \
-            f'out_channels {out_channels} is not divisible by groups {groups}'
-
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.kernel_size = _pair(kernel_size)
- self.stride = _pair(stride)
- self.padding = _pair(padding)
- self.dilation = _pair(dilation)
- self.groups = groups
- self.deform_groups = deform_groups
- self.im2col_step = im2col_step
- # enable compatibility with nn.Conv2d
- self.transposed = False
- self.output_padding = _single(0)
-
- # only weight, no bias
- self.weight = nn.Parameter(
- torch.Tensor(out_channels, in_channels // self.groups,
- *self.kernel_size))
-
- self.reset_parameters()
-
- def reset_parameters(self):
- # switch the initialization of `self.weight` to the standard kaiming
- # method described in `Delving deep into rectifiers: Surpassing
- # human-level performance on ImageNet classification` - He, K. et al.
- # (2015), using a uniform distribution
- nn.init.kaiming_uniform_(self.weight, nonlinearity='relu')
-
- def forward(self, x: Tensor, offset: Tensor) -> Tensor:
- """Deformable Convolutional forward function.
-
- Args:
- x (Tensor): Input feature, shape (B, C_in, H_in, W_in)
- offset (Tensor): Offset for deformable convolution, shape
- (B, deform_groups*kernel_size[0]*kernel_size[1]*2,
- H_out, W_out), H_out, W_out are equal to the output's.
-
- An offset is like `[y0, x0, y1, x1, y2, x2, ..., y8, x8]`.
- The spatial arrangement is like:
-
- .. code:: text
-
- (x0, y0) (x1, y1) (x2, y2)
- (x3, y3) (x4, y4) (x5, y5)
- (x6, y6) (x7, y7) (x8, y8)
-
- Returns:
- Tensor: Output of the layer.
- """
- # To fix an assert error in deform_conv_cuda.cpp:128
- # input image is smaller than kernel
- input_pad = (x.size(2) < self.kernel_size[0]) or (x.size(3) <
- self.kernel_size[1])
- if input_pad:
- pad_h = max(self.kernel_size[0] - x.size(2), 0)
- pad_w = max(self.kernel_size[1] - x.size(3), 0)
- x = F.pad(x, (0, pad_w, 0, pad_h), 'constant', 0).contiguous()
- offset = F.pad(offset, (0, pad_w, 0, pad_h), 'constant', 0)
- offset = offset.contiguous()
- out = deform_conv2d(x, offset, self.weight, self.stride, self.padding,
- self.dilation, self.groups, self.deform_groups,
- False, self.im2col_step)
- if input_pad:
- out = out[:, :, :out.size(2) - pad_h, :out.size(3) -
- pad_w].contiguous()
- return out
-
- def __repr__(self):
- s = self.__class__.__name__
- s += f'(in_channels={self.in_channels},\n'
- s += f'out_channels={self.out_channels},\n'
- s += f'kernel_size={self.kernel_size},\n'
- s += f'stride={self.stride},\n'
- s += f'padding={self.padding},\n'
- s += f'dilation={self.dilation},\n'
- s += f'groups={self.groups},\n'
- s += f'deform_groups={self.deform_groups},\n'
- # bias is not supported in DeformConv2d.
- s += 'bias=False)'
- return s
-
-
-@CONV_LAYERS.register_module('DCN')
-class DeformConv2dPack(DeformConv2d):
- """A Deformable Conv Encapsulation that acts as normal Conv layers.
-
- The offset tensor is like `[y0, x0, y1, x1, y2, x2, ..., y8, x8]`.
- The spatial arrangement is like:
-
- .. code:: text
-
- (x0, y0) (x1, y1) (x2, y2)
- (x3, y3) (x4, y4) (x5, y5)
- (x6, y6) (x7, y7) (x8, y8)
-
- Args:
- in_channels (int): Same as nn.Conv2d.
- out_channels (int): Same as nn.Conv2d.
- kernel_size (int or tuple[int]): Same as nn.Conv2d.
- stride (int or tuple[int]): Same as nn.Conv2d.
- padding (int or tuple[int]): Same as nn.Conv2d.
- dilation (int or tuple[int]): Same as nn.Conv2d.
- groups (int): Same as nn.Conv2d.
- bias (bool or str): If specified as `auto`, it will be decided by the
- norm_cfg. Bias will be set as True if norm_cfg is None, otherwise
- False.
- """
-
- _version = 2
-
- def __init__(self, *args, **kwargs):
- super(DeformConv2dPack, self).__init__(*args, **kwargs)
- self.conv_offset = nn.Conv2d(
- self.in_channels,
- self.deform_groups * 2 * self.kernel_size[0] * self.kernel_size[1],
- kernel_size=self.kernel_size,
- stride=_pair(self.stride),
- padding=_pair(self.padding),
- dilation=_pair(self.dilation),
- bias=True)
- self.init_offset()
-
- def init_offset(self):
- self.conv_offset.weight.data.zero_()
- self.conv_offset.bias.data.zero_()
-
- def forward(self, x):
- offset = self.conv_offset(x)
- return deform_conv2d(x, offset, self.weight, self.stride, self.padding,
- self.dilation, self.groups, self.deform_groups,
- False, self.im2col_step)
-
- def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
- missing_keys, unexpected_keys, error_msgs):
- version = local_metadata.get('version', None)
-
- if version is None or version < 2:
- # the key is different in early versions
- # In version < 2, DeformConvPack loads previous benchmark models.
- if (prefix + 'conv_offset.weight' not in state_dict
- and prefix[:-1] + '_offset.weight' in state_dict):
- state_dict[prefix + 'conv_offset.weight'] = state_dict.pop(
- prefix[:-1] + '_offset.weight')
- if (prefix + 'conv_offset.bias' not in state_dict
- and prefix[:-1] + '_offset.bias' in state_dict):
- state_dict[prefix +
- 'conv_offset.bias'] = state_dict.pop(prefix[:-1] +
- '_offset.bias')
-
- if version is not None and version > 1:
- print_log(
- f'DeformConv2dPack {prefix.rstrip(".")} is upgraded to '
- 'version 2.',
- logger='root')
-
- super()._load_from_state_dict(state_dict, prefix, local_metadata,
- strict, missing_keys, unexpected_keys,
- error_msgs)
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/util.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/util.py
deleted file mode 100644
index 90831643d19cc1b9b0940df3d4fd4d846ba74a05..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/util.py
+++ /dev/null
@@ -1,38 +0,0 @@
-import numpy as np
-import cv2
-import os
-
-
-annotator_ckpts_path = os.path.join(os.path.dirname(__file__), 'ckpts')
-
-
-def HWC3(x):
- assert x.dtype == np.uint8
- if x.ndim == 2:
- x = x[:, :, None]
- assert x.ndim == 3
- H, W, C = x.shape
- assert C == 1 or C == 3 or C == 4
- if C == 3:
- return x
- if C == 1:
- return np.concatenate([x, x, x], axis=2)
- if C == 4:
- color = x[:, :, 0:3].astype(np.float32)
- alpha = x[:, :, 3:4].astype(np.float32) / 255.0
- y = color * alpha + 255.0 * (1.0 - alpha)
- y = y.clip(0, 255).astype(np.uint8)
- return y
-
-
-def resize_image(input_image, resolution):
- H, W, C = input_image.shape
- H = float(H)
- W = float(W)
- k = float(resolution) / min(H, W)
- H *= k
- W *= k
- H = int(np.round(H / 64.0)) * 64
- W = int(np.round(W / 64.0)) * 64
- img = cv2.resize(input_image, (W, H), interpolation=cv2.INTER_LANCZOS4 if k > 1 else cv2.INTER_AREA)
- return img
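The two helpers above normalize arbitrary images to 3-channel uint8 and rescale them so the short side is close to the requested resolution, with both sides rounded to multiples of 64. A hedged usage sketch follows; the random RGBA array stands in for a real input image.

# Illustrative sketch of how these helpers are typically chained in the
# ControlNet annotators; the input here is random data, not from the repo.
import numpy as np
from annotator.util import HWC3, resize_image  # import path of the deleted module above

rgba = (np.random.rand(480, 640, 4) * 255).astype(np.uint8)  # fake RGBA input
rgb = HWC3(rgba)                   # alpha-composited onto white -> (480, 640, 3) uint8
resized = resize_image(rgb, 512)   # short side ~512, both sides rounded to multiples of 64
print(resized.shape)               # (512, 704, 3) for this input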
diff --git a/spaces/Apex-X/nono/roop/processors/frame/core.py b/spaces/Apex-X/nono/roop/processors/frame/core.py
deleted file mode 100644
index 498169d34a00e0a2547940380afd69967a2eca8c..0000000000000000000000000000000000000000
--- a/spaces/Apex-X/nono/roop/processors/frame/core.py
+++ /dev/null
@@ -1,91 +0,0 @@
-import os
-import sys
-import importlib
-import psutil
-from concurrent.futures import ThreadPoolExecutor, as_completed
-from queue import Queue
-from types import ModuleType
-from typing import Any, List, Callable
-from tqdm import tqdm
-
-import roop
-
-FRAME_PROCESSORS_MODULES: List[ModuleType] = []
-FRAME_PROCESSORS_INTERFACE = [
- 'pre_check',
- 'pre_start',
- 'process_frame',
- 'process_frames',
- 'process_image',
- 'process_video',
- 'post_process'
-]
-
-
-def load_frame_processor_module(frame_processor: str) -> Any:
- try:
- frame_processor_module = importlib.import_module(f'roop.processors.frame.{frame_processor}')
- for method_name in FRAME_PROCESSORS_INTERFACE:
- if not hasattr(frame_processor_module, method_name):
- raise NotImplementedError
- except ModuleNotFoundError:
- sys.exit(f'Frame processor {frame_processor} not found.')
- except NotImplementedError:
- sys.exit(f'Frame processor {frame_processor} not implemented correctly.')
- return frame_processor_module
-
-
-def get_frame_processors_modules(frame_processors: List[str]) -> List[ModuleType]:
- global FRAME_PROCESSORS_MODULES
-
- if not FRAME_PROCESSORS_MODULES:
- for frame_processor in frame_processors:
- frame_processor_module = load_frame_processor_module(frame_processor)
- FRAME_PROCESSORS_MODULES.append(frame_processor_module)
- return FRAME_PROCESSORS_MODULES
-
-
-def multi_process_frame(source_path: str, temp_frame_paths: List[str], process_frames: Callable[[str, List[str], Any], None], update: Callable[[], None]) -> None:
- with ThreadPoolExecutor(max_workers=roop.globals.execution_threads) as executor:
- futures = []
- queue = create_queue(temp_frame_paths)
- queue_per_future = max(len(temp_frame_paths) // roop.globals.execution_threads, 1)
- while not queue.empty():
- future = executor.submit(process_frames, source_path, pick_queue(queue, queue_per_future), update)
- futures.append(future)
- for future in as_completed(futures):
- future.result()
-
-
-def create_queue(temp_frame_paths: List[str]) -> Queue[str]:
- queue: Queue[str] = Queue()
- for frame_path in temp_frame_paths:
- queue.put(frame_path)
- return queue
-
-
-def pick_queue(queue: Queue[str], queue_per_future: int) -> List[str]:
- queues = []
- for _ in range(queue_per_future):
- if not queue.empty():
- queues.append(queue.get())
- return queues
-
-
-def process_video(source_path: str, frame_paths: list[str], process_frames: Callable[[str, List[str], Any], None]) -> None:
- progress_bar_format = '{l_bar}{bar}| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}]'
- total = len(frame_paths)
- with tqdm(total=total, desc='Processing', unit='frame', dynamic_ncols=True, bar_format=progress_bar_format) as progress:
- multi_process_frame(source_path, frame_paths, process_frames, lambda: update_progress(progress))
-
-
-def update_progress(progress: Any = None) -> None:
- process = psutil.Process(os.getpid())
- memory_usage = process.memory_info().rss / 1024 / 1024 / 1024
- progress.set_postfix({
- 'memory_usage': '{:.2f}'.format(memory_usage).zfill(5) + 'GB',
- 'execution_providers': roop.globals.execution_providers,
- 'execution_threads': roop.globals.execution_threads
- })
- progress.refresh()
- progress.update(1)
diff --git a/spaces/Arvi/feedback_generator/app.py b/spaces/Arvi/feedback_generator/app.py
deleted file mode 100644
index 0c14c7a46900b4eeae6b2dc22ab6795badf01397..0000000000000000000000000000000000000000
--- a/spaces/Arvi/feedback_generator/app.py
+++ /dev/null
@@ -1,407 +0,0 @@
-# -*- coding: utf-8 -*-
-"""Untitled19.ipynb
-
-Automatically generated by Colaboratory.
-
-Original file is located at
- https://colab.research.google.com/drive/123iPxfG1KBLCe4t3m41RIziyYLSOxg30
-"""
-
-
-import gradio as gr
-import pandas as pd
-import numpy as np
-
-df=pd.read_csv(r'final_processed.csv')
-
-def assign_weights(Name,col1,col2,col3,col4,col5,col6,col7,col8,col9,col10,col11,col12,col13,col14,col15):
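- # Append this respondent's answers to the survey data, label-encode the categorical
- # columns, map each encoded answer to a hand-tuned weight, derive binary flags from
- # those weights, fit one XGBoost classifier per flag, and assemble the feedback text
- # from the flags of the newly added (last) row.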
- import gradio as gr
- import pandas as pd
- import numpy as np
- df=pd.read_csv(r'final_processed.csv')
- df.drop(['Unnamed: 0'], axis=1,inplace=True)
- from sklearn import preprocessing
- label_encoder = preprocessing.LabelEncoder()
-
-
- # Keys must match the dataframe's column names used below ('favorite sport' and
- # the misspelled 'socail media on academics').
- y={'academic time':col2,'task dedication':col3,'physical activity':col4,'favorite sport':col5,'family time':col6,'poor sleep':col7,'fitness':col8,
- 'loss of concentration':col9,'eating habits':col10,'free time':col11,'motivation':col12,'social media':col13,'socail media on academics':col14,'performance':col15}
- # DataFrame.append was removed in pandas 2.x; concatenate the new row instead.
- df=pd.concat([df, pd.DataFrame([y])], ignore_index=True)
-
-
- df['academic time']= label_encoder.fit_transform(df['academic time'])
- df['task dedication']= label_encoder.fit_transform(df['task dedication'])
- df['physical activity']= label_encoder.fit_transform(df['physical activity'])
- df['favorite sport']= label_encoder.fit_transform(df['favorite sport'])
- df['family time']= label_encoder.fit_transform(df['family time'])
- df['poor sleep']= label_encoder.fit_transform(df['poor sleep'])
- df['fitness']= label_encoder.fit_transform(df['fitness'])
- df['loss of concentration']= label_encoder.fit_transform(df['loss of concentration'])
- df['eating habits']= label_encoder.fit_transform(df['eating habits'])
- df['free time']= label_encoder.fit_transform(df['free time'])
- df['motivation']= label_encoder.fit_transform(df['motivation'])
- df['social media']= label_encoder.fit_transform(df['social media'])
- df['socail media on academics']= label_encoder.fit_transform(df['socail media on academics'])
- df['performance']= label_encoder.fit_transform(df['performance'])
-
- df.loc[df['academic time'] == 4, 'weight_academic'] =0.45
- df.loc[df['academic time'] == 1, 'weight_academic'] =0.15
- df.loc[df['academic time'] == 0, 'weight_academic'] =0.05
- df.loc[df['academic time'] == 2, 'weight_academic'] =0.35
- df.loc[df['academic time'] == 3, 'weight_academic'] =0.00
-
- df.loc[df['task dedication'] == 0, 'weight_task'] =0.00
- df.loc[df['task dedication'] == 1, 'weight_task'] =0.05
- df.loc[df['task dedication'] == 2, 'weight_task'] =0.20
- df.loc[df['task dedication'] == 3, 'weight_task'] =0.25
- df.loc[df['task dedication'] == 4, 'weight_task'] =0.50
-
- df.loc[df['physical activity'] == 0, 'weight_physic'] =0.00
- df.loc[df['physical activity'] == 1, 'weight_physic'] =1.00
-
- df.loc[df['favorite sport'] == 0, 'weight_play'] =0.20
- df.loc[df['favorite sport'] == 1, 'weight_play'] =0.20
- df.loc[df['favorite sport'] == 2, 'weight_play'] =0.20
- df.loc[df['favorite sport'] == 3, 'weight_play'] =0.20
- df.loc[df['favorite sport'] == 4, 'weight_play'] =0.00
- df.loc[df['favorite sport'] == 5, 'weight_play'] =0.20
-
- df.loc[df['family time'] == 3, 'weight_familytime'] =0.40
- df.loc[df['family time'] == 2, 'weight_familytime'] =0.10
- df.loc[df['family time'] == 1, 'weight_familytime'] =0.00
- df.loc[df['family time'] == 0, 'weight_familytime'] =0.40
- df.loc[df['family time'] == 4, 'weight_familytime'] =0.10
-
- df.loc[df['poor sleep'] == 4, 'weight_sleep'] =0.00
- df.loc[df['poor sleep'] == 3, 'weight_sleep'] =0.05
- df.loc[df['poor sleep'] == 0, 'weight_sleep'] =0.00
- df.loc[df['poor sleep'] == 2, 'weight_sleep'] =0.40
- df.loc[df['poor sleep'] == 1, 'weight_sleep'] =0.55
-
- df.loc[df['loss of concentration'] == 4, 'weight_conc'] =0.20
- df.loc[df['loss of concentration'] == 0, 'weight_conc'] =0.05
- df.loc[df['loss of concentration'] == 1, 'weight_conc'] =0.00
- df.loc[df['loss of concentration'] == 3, 'weight_conc'] =0.75
- df.loc[df['loss of concentration'] == 2, 'weight_conc'] =0.05
-
- df.loc[df['eating habits'] == 4, 'weight_eating'] =0.20
- df.loc[df['eating habits'] == 0, 'weight_eating'] =0.05
- df.loc[df['eating habits'] == 1, 'weight_eating'] =0.00
- df.loc[df['eating habits'] == 3, 'weight_eating'] =0.75
- df.loc[df['eating habits'] == 2, 'weight_eating'] =0.05
-
- df.loc[df['fitness'] == 2, 'weight_fit'] =0.60
- df.loc[df['fitness'] == 0, 'weight_fit'] =0.10
- df.loc[df['fitness'] == 1, 'weight_fit'] =0.30
- df.loc[df['fitness'] == 3, 'weight_fit'] =0.00
-
- df.loc[df['free time'] == 3, 'weight_time'] =0.50
- df.loc[df['free time'] == 2, 'weight_time'] =0.10
- df.loc[df['free time'] == 1, 'weight_time'] =0.20
- df.loc[df['free time'] == 0, 'weight_time'] =0.20
-
- df.loc[df['motivation'] == 3, 'weight_motivation'] =0.30
- df.loc[df['motivation'] == 2, 'weight_motivation'] =0.25
- df.loc[df['motivation'] == 1, 'weight_motivation'] =0.25
- df.loc[df['motivation'] == 0, 'weight_motivation'] =0.20
-
- df.loc[df['social media'] == 3, 'weight_media'] =0.00
- df.loc[df['social media'] == 2, 'weight_media'] =0.65
- df.loc[df['social media'] == 1, 'weight_media'] =0.10
- df.loc[df['social media'] == 0, 'weight_media'] =0.25
-
-
- df.loc[df['socail media on academics'] == 0, 'weight_media_academics'] =0.00
- df.loc[df['socail media on academics'] == 1, 'weight_media_academics'] =1.00
-
- df.loc[df['performance'] == 4, 'weight_performance']=0.55
- df.loc[df['performance'] == 3, 'weight_performance']=0.00
- df.loc[df['performance'] == 2, 'weight_performance']=0.30
- df.loc[df['performance'] == 1, 'weight_performance']=0.10
- df.loc[df['performance'] == 0, 'weight_performance']=0.05
-
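- # df.iloc[:, 14:] picks up the weight_* columns appended above; 'total' is their row-wise sum.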
- df['total']=df.iloc[:,14:].sum(axis=1)
-
-
- df.loc[(df['weight_academic']<0.35) | (df['weight_task']<0.25),'academic value']=0
- df.loc[(df['weight_academic']>=0.35) & (df['weight_task']>=0.25),'academic value']=1
- df.inplace=1
-
- df.loc[(df['weight_academic']<0.35) | (df['weight_time']<0.20),'time value']=0
- df.loc[(df['weight_academic']>=0.35) & (df['weight_time']>=0.20),'time value']=1
- df.inplace=1
-
- df.loc[((df['weight_academic']<=0.35) & (df['weight_conc']>=0.20)) | ((df['weight_academic']>=0.35) & (df['weight_conc']>=0.20)),'productive value']=1
- df.loc[((df['weight_academic']>=0.35) & (df['weight_conc']<0.20)) | ((df['weight_academic']<0.35) & (df['weight_conc']<0.20)),'productive value']=0
- df.inplace=1
-
- df.loc[(df['weight_physic']==1) & (df['weight_play']==0.2) & (df['weight_fit']>=0.3) & (df['weight_eating']>=0.20),'fitness_value']=1
- df.loc[(df['weight_physic']!=1) | (df['weight_play']!=0.2) | (df['weight_fit']<0.3) | (df['weight_eating']<0.20),'fitness_value']=0
- df.inplace=1
-
-
- df.loc[(df['weight_sleep']>=0.40) & (df['weight_conc']>=0.20) ,'sleep value']=1
- df.loc[(df['weight_sleep']<0.40) | (df['weight_conc']<0.20),'sleep value']=0
- df.inplace=1
-
- df.loc[(df['weight_familytime']==0.40) & (df['weight_motivation']==0.25) ,'motivation value']=1
- df.loc[(df['weight_familytime']!=0.40) | (df['weight_motivation']!=0.25),'motivation value']=0
- df.inplace=1
-
- df.loc[(df['weight_performance']>=0.30) ,'performance_value']=1
- df.loc[(df['weight_performance']<0.30),'performance_value']=0
- df.inplace=1
-
- df.loc[(df['weight_media']>=0.25) & (df['weight_media_academics']==0.00) ,'media_value']=1
- df.loc[(df['weight_media']<0.25) | (df['weight_media_academics']!=0.00),'media_value']=0
- df.inplace=1
-
- df.loc[df['total']>=4.0,'overall']=1
- df.loc[df['total']<4.0,'overall']=0
- df.inplace=1
-
-
- X = df[['academic time',
- 'task dedication',
- 'physical activity',
- 'favorite sport',
- 'family time',
- 'poor sleep',
- 'fitness',
- 'loss of concentration',
- 'eating habits',
- 'free time',
- 'motivation',
- 'social media',
- 'socail media on academics',
- 'performance',
- 'weight_academic',
- 'weight_task',
- 'weight_physic',
- 'weight_play',
- 'weight_familytime',
- 'weight_sleep',
- 'weight_conc',
- 'weight_eating',
- 'weight_fit',
- 'weight_time',
- 'weight_motivation',
- 'weight_media',
- 'weight_media_academics',
- 'weight_performance',
- 'total'
- ]]
- y1 = df['academic value']
- y2=df['time value']
- y3=df['productive value']
- y4=df['fitness_value']
- y5=df['sleep value']
- y6=df['motivation value']
- y7=df['performance_value']
- y8=df['media_value']
- y9=df['overall']
- from sklearn.model_selection import train_test_split
- X_train,X_test,y1_train,y1_test = train_test_split(X,y1,test_size=0.3,random_state = 0,shuffle = True)
- X_train,X_test,y2_train,y2_test = train_test_split(X,y2,test_size=0.3,random_state = 0,shuffle = True)
- X_train,X_test,y3_train,y3_test = train_test_split(X,y3,test_size=0.3,random_state = 0,shuffle = True)
- X_train,X_test,y4_train,y4_test = train_test_split(X,y4,test_size=0.3,random_state = 0,shuffle = True)
- X_train,X_test,y5_train,y5_test = train_test_split(X,y5,test_size=0.3,random_state = 0,shuffle = True)
- X_train,X_test,y6_train,y6_test = train_test_split(X,y6,test_size=0.3,random_state = 0,shuffle = True)
- X_train,X_test,y7_train,y7_test = train_test_split(X,y7,test_size=0.3,random_state = 0,shuffle = True)
- X_train,X_test,y8_train,y8_test = train_test_split(X,y8,test_size=0.3,random_state = 0,shuffle = True)
- X_train,X_test,y9_train,y9_test = train_test_split(X,y9,test_size=0.3,random_state = 0,shuffle = True)
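- # With the same X, test_size, random_state and shuffle settings, every split above
- # yields identical X_train/X_test; only the target column differs per call.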
- import xgboost as xgb
- # 'reg:linear' is a deprecated regression objective; these targets are binary
- # flags, so a classification objective is the appropriate choice here.
- xgb_params = dict(objective='binary:logistic', colsample_bytree=0.3, learning_rate=0.1,
- max_depth=5, alpha=10, n_estimators=10)
- rfc1 = xgb.XGBClassifier(**xgb_params)
- rfc2 = xgb.XGBClassifier(**xgb_params)
- rfc3 = xgb.XGBClassifier(**xgb_params)
- rfc4 = xgb.XGBClassifier(**xgb_params)
- rfc5 = xgb.XGBClassifier(**xgb_params)
- rfc6 = xgb.XGBClassifier(**xgb_params)
- rfc7 = xgb.XGBClassifier(**xgb_params)
- rfc8 = xgb.XGBClassifier(**xgb_params)
- rfc9 = xgb.XGBClassifier(**xgb_params)
- rfc1.fit(X_train,y1_train)
- rfc2.fit(X_train,y2_train)
- rfc3.fit(X_train,y3_train)
- rfc4.fit(X_train,y4_train)
- rfc5.fit(X_train,y5_train)
- rfc6.fit(X_train,y6_train)
- rfc7.fit(X_train,y7_train)
- rfc8.fit(X_train,y8_train)
- rfc9.fit(X_train,y9_train)
- import random
-
- z=df.tail(1)
-
-
-
-
- if z['academic value'].eq(1).all():
- a=['You are in the right track just try to stick on to your schedule','HARRRRRDDDD WORK always payys off you seem to be going in the right track',
- 'The way is classiscal!! a tip for you is to listen to some classical music before studying ','You are driven by your own intrest keep riding',
- 'Your study time is great ,now its to take a short break',
- 'WOWWW you are just a just synonym of hard work and dedication ' ]
- res1="feedback on youe study schedule --> " +random.choice(a)
- if z['academic value'].eq(0).all():
- b=['If you know your “WHY”, finding your “HOW" will not be difficult.you just need to start working','Focusing is about saying no.just learn to say no to things which distracts you .u just need to put a little more focus on your studytime',
- 'Be the early bird that gets the first worm.set your body clock and start working','listen to directions,follow through assignments,learn for yourself.you just need to enjoy the process',
- 'measure for progress not the time you are working ,try to put in more studytime','postponment will postpone you,finish your daily tasks when you have the time',
- 'you are just off track,there is still time and sure that you will reach great heights ','you surely have the talent its now in your hands to make wonders!!!! talent without hardwork?? what do you think ','enroll yourself to a personalized learning environament which gives you a controll and education experience ']
- res1="feedback on youe study schedule --> "+random.choice(b)
-
-
- if z['time value'].eq(1).all():
- c=['there is a saying: give me six hours to chop a tree and I will spend the first hour sharpening the axe; the fact here is you have sharpened your axe','your timing is great, you are managing time well',
- 'it seems you have been studying long, take a quick break and come back','you are enjoying your time, keep putting in the same effort','keep managing time the way you are doing now, this attribute will take care of the rest'
- ,'you seem to stay organized and on track with your proactive planning and systematic scheduling']
- res2="Feedback on how you manage time --> "+random.choice(c)
- if z['time value'].eq(0).all():
- d=['you have to start spending time on academics and show some interest in succeeding; you are the pilot who should stop time from flying and bring it under your control','start working, stick to a timetable and set your body clock','try to be more organized and start spending quality time on your studies',
- 'start learning to manage time and prioritize your academics','spend more time on your weak areas, try to stretch out for long hours','the biggest obstacle stopping you from winning is time management, prepare a timetable and stick to it',
- 'play while you play and work while you work, do not try to mix up things','do not procrastinate, finish your day-to-day jobs when and where you get time']
- res2="Feedback on how you manage time --> "+random.choice(d)
-
- if z['productive value'].eq(1).all():
- e=['you are smart,productive and have a good way of preparation in your studies','Be more proactive and try to participate in class,you are effiecient and can reach heights with your effectiveness','you have the ability to study things smartly and quickly,pick areas which are more brain-storming',
- 'you have the ability to intepret things and your mind is sharp and you are a good listener','you are the master-mind,you are the person who shouldnt miss out in enrolling to IIts,NITs or whatever','you are productive person if u feel you are not delivering your 100% its not because because you arent studying,its something else']
- res3="Feedback on your productivity --> "+random.choice(e)
- if z['productive value'].eq(0).all():
- f=['Try to stick to an approach which is convenient for you, have a clear mind before you start working','start solving more puzzles, a daily sudoku is a good start, you just need to be on your toes and tune your mind to solve various activities','think! think! think! analyse where you lack and start building strategies to improve yourself',
- 'class participation: it is high time you start taking decisions and choose to be proactive','connect everything with what you are learning so that it will stick in your mind and help you recollect it when and where you require','enjoy the process of learning, do not be monotonous and a bookworm, tame your mind to face your challenges','actively consult your instructor to enrich yourself with lots of ways to improve your productivity',
- 'rather than a brute-force approach, try to think of a more optimal solution to a problem','gather a lot of resources and try to sit at your desk, take short mobile breaks, an online chess game might be an eye opener for your next session']
- res3="Feedback on your productivity --> "+random.choice(f)
-
- if z['fitness_value'].eq(1).all():
- g=['fitness is your key ,if your body is strong your mind is stronger. Maintaining a good fitness is really important for your health as well as it empowers your learining ',' I can see you have spent time in maintaing your body. Keep winning more golds ','you have choosen to step out of your comfort zone and by trying to put some gains,this will surely be a stepping stone in other important sectors','your fitness level is reasonably good indicating that you are sticking to a schedule kind of person which is really good',
- 'you are in a good shape which is a key for self_confidence and gives you a lot of motivation','you are a sportive person ,this will really help you to socialize and gives you a lot of energy to start new things ','you are an open-minded person ,this is really the best character one could ask for,half the problems are over if one is listening and able to make good decisions ']
- res4="Feedback on your fitness --> "+random.choice(g)
- if z['fitness_value'].eq(0).all():
- h=['A weak body is a liability, you guys being the future generation should definetly be fit and healthy to lead the society at its best','your body should always get the first priority and should be taken care properly',
- 'Any physical activity will make you disipline and gives you self confidence. Join your school team today ','out of all a hungry stomach isnt fit for a brisk study session ,being physically fit lets you do more activity even improve your academics ',
- 'engage yourself in any physical activity for 20 mins as it can improve your concentration and helps your focus in learning ','out of your busy schedule try devoting just 15 mins get down do some pushups or squats or a brisk jog will do good ']
- res4="Feedback on your fitness --> "+random.choice(h)
-
- if z['sleep value'].eq(1).all():
- i=['Good that you have a proper sleep, just stick to it and try finishing all your work in the day time and get enough rest','Its pretty impressive that you are giving enough importance to your sleep, shows that you have good time management skills and a sweet dream','getting a good sleep even during your stressed timetables shows that you stay at the moment',
- 'a good fitness routine followed by a good-sleep is a good sunday schedule and a good starter for a hectic next week which i hope you would have experienced many times','its good that you have a good sleep everynight this is big boost for a bright tomorrow']
- res5="Feedback on your sleep time --> "+random.choice(i)
- if z['sleep value'].eq(0).all():
-
- j=['The time we sleep is only when we rest our mind, eyes and the whole body which is really crucial for a stduent',' Try not using any devices an hour before you sleep, have a good sleep cycle for atleast 6 to 7 hrs a day','Get enough rest, dont stress your body too much.',
- 'Prioritize your sleep, dont have caffinated drinks late in the evening and getting good sleep will make you feel fresh and enegrytic all day long ',
- 'a 7 - hour refresh will set your body clock for the rest of your day so please ensure that you get adequate rest','if you are sleep deprieved make sure you exhaust all your energy during the day and make sure you get a pleasant and peaceful sleep',
- 'tests prove that sleep deprivation is a result for low academic performance make sure you dont fall under that','Please ensure that the extra miles which you are putting doesnt affect your sleep']
-
- res5="Feedback on your sleep time --> "+random.choice(j)
-
- if z['motivation value'].eq(1).all():
- k=['you are fairly motivated ,Motivation drives everyone to work better to achive something,it lits a light inside you ','you should be really proud that you have good motivation at a really young age,use it in areas where you feel a bit off',
- 'None of the greatest achievers couldnt have done it without motivation and self motivation is really powerfull tool to success ,you are one among them Keep going!',
- 'a good level of motivation gives you high spirits and a good attitude,your attitude builds YOU']
-
- res6="motivation factor --> "+random.choice(k)
- if z['motivation value'].eq(0).all():
-
- l=['Nobody in the world is born with motivation,in this modern era you cant expect external motivation,you better be your own motivation','messi took eighteen years to be the G.O.A.T ignoring all demotivation and insults its finally your time',
- 'change your scenery sitting in a desk all-day makes you dull ,to renew interest,a new setting can be just what some students need to stay motivated to learn',
- 'lay-out clear objectives before you start learning so that there is no confussion','Make your goals high but attainable dont be afraid to push yourself to get more out of them ',
- 'Spend some quality time with your family listen to their experiences and try to dollow their footsteps']
-
-
- res6="motivation factor --> "+random.choice(l)
-
- if z['performance_value'].eq(1).all():
- m=['Good job you!! Your hardwork and efforts paid off, you have nothing to worry about ,you are academically strong','To be honest that grades made me a little jealous. I can see the work you are putting towards academics',
- 'Give a big hit on boards make your parents and teachers proud, trust me that is super satisfying','academic performance gives you a lot of boost to you take that put in all other aspects which will give you overall developement',
- 'the most satisfying thing is scoring high its great that you are easily doing it','you are almost sorted out you now just have to take care of the bits and pieces']
-
- res7="Feedback on your performance --> "+random.choice(m)
-
- if z['performance_value'].eq(0).all():
- n=['Its never late to begin. Divide your work, note important things mentioned in class spend more time in studies','Dont be ashamed to ask doubts we dont mind others judging. So we start from physics today? jk',
- 'Start studying with your friends, seek help from teachers,Remember the hardwork you put never fails you','analyse where you are making errors if you find that you are making mistakes while writing try practicing the sample papers it will help you to an extent'
- ,'you are almost there!!take short notes of the theoritical concepts so that it will be easy for reference','dont worry about where you are standing at the moment ,back yourself ,start it from scratch']
-
- res7="Feedback on your performance --> "+random.choice(n)
-
- if z['media_value'].eq(1).all():
- o=[' In the world of people being addicted to social media today, its happy to see someone like you','Its good that you are not scrolling too much','Having a good social profile is important and you having a limit is really impressive'
- ,'Having the self control on yourself is really great but ensure that dont overdo on anything else','you are self-conscious which is really a great character to acquire']
-
- res8="Feedback on your social media time --> "+random.choice(o)
-
- if z['media_value'].eq(0).all():
- p=['Its really common for this generation people to get addicted to social media. All you have to do is keep track of the time, dont over do stuffs and you dont have to post a story everyday.',
- 'Nothing wrong becoming a social idle, but right now concentrate in your studies','socially active is essential but over - scrolling will trap you in the matrix which you are unaware of',
- 'stay in your limits socially active for more than a hour during high school is ill advised','knowing that its impacting you and using social media again !! what is that??']
-
- res8="Feedback on your social media time --> "+random.choice(p)
-
-
- if z['overall'].eq(1).all():
- q=['OMG!! I am thinking of getting a piece of advice from you, you are almost there, good that you participate equally in everything','You are an explorer and can learn new things easily, you are about to win the race',
- 'Your work is impressing everyone, right from your teacher and friends to your parents; you are active, brisk and have good potential to improve your performance',
- 'You are doing great, you are ready for new challenges and failures do not bother you','You are a multi-tasker, ensure that you do not sink with over-confidence','Do not put yourself under any kind of pressure; even though you feel stressed, time will answer it and you will pass with flying colours',
- 'You are growing with confidence, use it to learn new things, choose your core and find your destiny']
-
- res9=random.choice(q)
-
- if z['overall'].eq(0).all():
-
- r=['Its all good everyone goes out of form,the comeback is always on start putting consistent efforts','Put in the time, hardwork and you can already see it coming,you are just a few steps dowm','When we hit out lowest point we are open to the greatest change you are going to bring the best out of it. And yes that was said by Avatar Roku'
- ,'Choose the right person whom you feel will take you through all the obstracles you need make things more clear','The best view comes after the hardest climb you can climb the moutain ahead of you','You just need to reboot and have a good set-up ,stay optimistic and everything will take care of itself if you take one step at a time',
- 'You are nearing the pinacle of your true potential,just few changes hear and there you will be on your prime']
-
- res9=random.choice(r)
-
-
-
-
-
-
-
-
- return "hi " + str (Name) + " this is a predictive model these are some wild guesses so just take the points which you feel may work in your case \nalso if u feel the feeadbacks are harsh please flag your opinion \ntake your time to read this and hope u like it 😊\n\n\n"+ res1+" ,\n " + res2 +" ,\n " + res3 +" ,\n " + res4 +" ,\n " + res5 +" ,\n " + res6 +" ,\n " + res7 +" ,\n " + res8 +" ,\n\n\n " + res9
-
-list(df.columns)
-
-df.isna().sum()
-
-demo = gr.Interface(
- fn=assign_weights,
- inputs=[
- "text",
- gr.Dropdown(['Science','Commerce'], label="Choose your stream"),
- gr.Radio(["<5", "5 - 12", "13 - 20", "20 - 30",">30"],label='On an average, how many hours a week do you spend on academics?'),
- gr.Radio(["0 - 20%", "20 - 40%", "40 - 60%", "60 - 80%","80 -100%"],label='How willing are you to work on a particular task ?'),
- gr.Radio(["Yes", "No", ],label='Do you take up any physical activity at regular intervals(at least 3 hours a week) ?'),
- gr.Radio(["Football", "Cricket", "Basketball", "Tennis" , "Chess" ,"Other","Not interested in sports"],label='Choose your favourite sport you follow or play'),
- gr.Radio(["Never", "Occasionally", "Sometimes", "Often" , "Always"],label='How often do you spend time with your friends and family?'),
- gr.Radio(["Always", "Very often", "Sometimes", "Rarely" ,"Never"],label='Has poor sleep troubled you in the last month?'),
- gr.Radio(["Perfect", "Good", "Average", "Poor"],label='What is your current level of fitness?'),
- gr.Radio(["Never", "Once in a while", "About half the time", "Most of the time","Always"],label='Do you feel kinda losing concentration during classes and other activities'),
- gr.Radio(["Never", "Once in a while", "About half the time", "Most of the time","Always"],label='is there a change in your eating habits(either under eating or overeating'),
- gr.Radio(["< 2", "2 - 5", "5 - 8", "> 8"],label='How many hours of free time do you have after school?'),
- gr.Radio(["Asking a lot of questions to the teacher", "Completing various assignments", "Sports and other extracurricular activities", "Other"],label='What motivates you to learn more?'),
- gr.Radio(["<30 mins", "30 - 60", "60 - 120", ">120 mins"],label='How long you spend your time on social media on a daily basis? '),
- gr.Radio(["Yes", "No"],label='Do you feel that spending time on social media has been a reason for the deterioration in your academic performance?'),
- gr.Radio(["<30%", "30% - 50%", "50% - 70%", "70% - 90%",">90%"],label='How much you score in your academics'),
- ],
- outputs=['text'],
-
- title="Performance predictor and feedback generator",
- description="Here's a sample performance calculator. Enjoy!"
-
- )
-demo.launch(inline=False)
-
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/data/datasets/object365.py b/spaces/Awiny/Image2Paragraph/models/grit_src/grit/data/datasets/object365.py
deleted file mode 100644
index 8b8cc19da23d8397284b50588ee46e750b5b7552..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/data/datasets/object365.py
+++ /dev/null
@@ -1,111 +0,0 @@
-import logging
-import os
-from fvcore.common.timer import Timer
-from detectron2.structures import BoxMode
-from fvcore.common.file_io import PathManager
-from detectron2.data import DatasetCatalog, MetadataCatalog
-from lvis import LVIS
-
-logger = logging.getLogger(__name__)
-
-__all__ = ["load_o365_json", "register_o365_instances"]
-
-
-def register_o365_instances(name, metadata, json_file, image_root):
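- # Register the split lazily via DatasetCatalog and attach LVIS-style metadata so the
- # standard LVIS evaluator can be reused (evaluator_type="lvis").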
- DatasetCatalog.register(name, lambda: load_o365_json(
- json_file, image_root, name))
- MetadataCatalog.get(name).set(
- json_file=json_file, image_root=image_root,
- evaluator_type="lvis", **metadata
- )
-
-
-def get_o365_meta():
- categories = [{'supercategory': 'object', 'id': 1, 'name': 'object'}]
- o365_categories = sorted(categories, key=lambda x: x["id"])
- thing_classes = [k["name"] for k in o365_categories]
- meta = {"thing_classes": thing_classes}
- return meta
-
-
-def load_o365_json(json_file, image_root, dataset_name=None):
- '''
- Load Object365 class name text for object description for GRiT
- '''
-
- json_file = PathManager.get_local_path(json_file)
-
- timer = Timer()
- lvis_api = LVIS(json_file)
- if timer.seconds() > 1:
- logger.info("Loading {} takes {:.2f} seconds.".format(
- json_file, timer.seconds()))
-
- class_names = {}
- sort_cat = sorted(lvis_api.dataset['categories'], key=lambda x: x['id'])
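- # Object365 category names may contain '/'-separated synonyms; flatten them into
- # space-separated text for use as free-form object descriptions.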
- for x in sort_cat:
- text = x['name'].replace('/', ' ')
- class_names[x['id']] = text
-
- img_ids = sorted(lvis_api.imgs.keys())
- imgs = lvis_api.load_imgs(img_ids)
- anns = [lvis_api.img_ann_map[img_id] for img_id in img_ids]
-
- ann_ids = [ann["id"] for anns_per_image in anns for ann in anns_per_image]
- assert len(set(ann_ids)) == len(ann_ids), \
- "Annotation ids in '{}' are not unique".format(json_file)
-
- imgs_anns = list(zip(imgs, anns))
- logger.info("Loaded {} images in the LVIS v1 format from {}".format(
- len(imgs_anns), json_file))
-
- dataset_dicts = []
-
- for (img_dict, anno_dict_list) in imgs_anns:
- record = {}
- if "file_name" in img_dict:
- file_name = img_dict["file_name"]
- record["file_name"] = os.path.join(image_root, file_name)
-
- record["height"] = int(img_dict["height"])
- record["width"] = int(img_dict["width"])
- image_id = record["image_id"] = img_dict["id"]
-
- objs = []
- for anno in anno_dict_list:
- assert anno["image_id"] == image_id
- if anno.get('iscrowd', 0) > 0:
- continue
- obj = {"bbox": anno["bbox"], "bbox_mode": BoxMode.XYWH_ABS}
- obj["category_id"] = 0
- obj["object_description"] = class_names[anno['category_id']]
-
- objs.append(obj)
- record["annotations"] = objs
- if len(record["annotations"]) == 0:
- continue
- record["task"] = "ObjectDet"
- dataset_dicts.append(record)
-
- return dataset_dicts
-
-
-_CUSTOM_SPLITS_LVIS = {
- "object365_train": ("object365/images/train/", "object365/annotations/train_v1.json"),
-}
-
-
-for key, (image_root, json_file) in _CUSTOM_SPLITS_LVIS.items():
- register_o365_instances(
- key,
- get_o365_meta(),
- os.path.join("datasets", json_file) if "://" not in json_file else json_file,
- os.path.join("datasets", image_root),
- )
\ No newline at end of file
diff --git a/spaces/Bart92/RVC_HF/infer/modules/ipex/attention.py b/spaces/Bart92/RVC_HF/infer/modules/ipex/attention.py
deleted file mode 100644
index 0eed59630d76a56e3fd96aa5bb6518b0c61e81bb..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/infer/modules/ipex/attention.py
+++ /dev/null
@@ -1,128 +0,0 @@
-import torch
-import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import
-
-# pylint: disable=protected-access, missing-function-docstring, line-too-long
-
-original_torch_bmm = torch.bmm
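-# Keep a reference to the stock torch.bmm so the sliced wrapper below can delegate
-# to it, either directly (no split needed) or chunk by chunk.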
-def torch_bmm(input, mat2, *, out=None):
- if input.dtype != mat2.dtype:
- mat2 = mat2.to(input.dtype)
-
- #ARC GPUs can't allocate more than 4GB to a single block, Slice it:
- batch_size_attention, input_tokens, mat2_shape = input.shape[0], input.shape[1], mat2.shape[2]
- block_multiply = 2.4 if input.dtype == torch.float32 else 1.2
- block_size = (batch_size_attention * input_tokens * mat2_shape) / 1024 * block_multiply #MB
- split_slice_size = batch_size_attention
- if block_size >= 4000:
- do_split = True
- # Halve the batch-dimension slice until the estimated block fits under the 4GB limit
- while ((split_slice_size * input_tokens * mat2_shape) / 1024 * block_multiply) > 4000:
- split_slice_size = split_slice_size // 2
- if split_slice_size <= 1:
- split_slice_size = 1
- break
- else:
- do_split = False
-
- split_block_size = (split_slice_size * input_tokens * mat2_shape) / 1024 * block_multiply #MB
- split_2_slice_size = input_tokens
- if split_block_size >= 4000:
- do_split_2 = True
- # If a single batch slice is still too large, also halve the slice along the token dimension
- while ((split_slice_size * split_2_slice_size * mat2_shape) / 1024 * block_multiply) > 4000:
- split_2_slice_size = split_2_slice_size // 2
- if split_2_slice_size <= 1:
- split_2_slice_size = 1
- break
- else:
- do_split_2 = False
-
- if do_split:
- hidden_states = torch.zeros(input.shape[0], input.shape[1], mat2.shape[2], device=input.device, dtype=input.dtype)
- for i in range(batch_size_attention // split_slice_size):
- start_idx = i * split_slice_size
- end_idx = (i + 1) * split_slice_size
- if do_split_2:
- for i2 in range(input_tokens // split_2_slice_size): # pylint: disable=invalid-name
- start_idx_2 = i2 * split_2_slice_size
- end_idx_2 = (i2 + 1) * split_2_slice_size
- hidden_states[start_idx:end_idx, start_idx_2:end_idx_2] = original_torch_bmm(
- input[start_idx:end_idx, start_idx_2:end_idx_2],
- mat2[start_idx:end_idx, start_idx_2:end_idx_2],
- out=out
- )
- else:
- hidden_states[start_idx:end_idx] = original_torch_bmm(
- input[start_idx:end_idx],
- mat2[start_idx:end_idx],
- out=out
- )
- else:
- return original_torch_bmm(input, mat2, out=out)
- return hidden_states
-
-original_scaled_dot_product_attention = torch.nn.functional.scaled_dot_product_attention
-def scaled_dot_product_attention(query, key, value, attn_mask=None, dropout_p=0.0, is_causal=False):
- #ARC GPUs can't allocate more than 4GB to a single block, Slice it:
- shape_one, batch_size_attention, query_tokens, shape_four = query.shape
- block_multiply = 2.4 if query.dtype == torch.float32 else 1.2
- block_size = (shape_one * batch_size_attention * query_tokens * shape_four) / 1024 * block_multiply #MB
- split_slice_size = batch_size_attention
- if block_size >= 4000:
- do_split = True
- # Halve the slice along dim 1 until the estimated block fits under the 4GB limit
- while ((shape_one * split_slice_size * query_tokens * shape_four) / 1024 * block_multiply) > 4000:
- split_slice_size = split_slice_size // 2
- if split_slice_size <= 1:
- split_slice_size = 1
- break
- else:
- do_split = False
-
- split_block_size = (shape_one * split_slice_size * query_tokens * shape_four) / 1024 * block_multiply #MB
- split_2_slice_size = query_tokens
- if split_block_size >= 4000:
- do_split_2 = True
- # If that is still too large, also halve the slice along the query-token dimension
- while ((shape_one * split_slice_size * split_2_slice_size * shape_four) / 1024 * block_multiply) > 4000:
- split_2_slice_size = split_2_slice_size // 2
- if split_2_slice_size <= 1:
- split_2_slice_size = 1
- break
- else:
- do_split_2 = False
-
- if do_split:
- hidden_states = torch.zeros(query.shape, device=query.device, dtype=query.dtype)
- for i in range(batch_size_attention // split_slice_size):
- start_idx = i * split_slice_size
- end_idx = (i + 1) * split_slice_size
- if do_split_2:
- for i2 in range(query_tokens // split_2_slice_size): # pylint: disable=invalid-name
- start_idx_2 = i2 * split_2_slice_size
- end_idx_2 = (i2 + 1) * split_2_slice_size
- hidden_states[:, start_idx:end_idx, start_idx_2:end_idx_2] = original_scaled_dot_product_attention(
- query[:, start_idx:end_idx, start_idx_2:end_idx_2],
- key[:, start_idx:end_idx, start_idx_2:end_idx_2],
- value[:, start_idx:end_idx, start_idx_2:end_idx_2],
- attn_mask=attn_mask[:, start_idx:end_idx, start_idx_2:end_idx_2] if attn_mask is not None else attn_mask,
- dropout_p=dropout_p, is_causal=is_causal
- )
- else:
- hidden_states[:, start_idx:end_idx] = original_scaled_dot_product_attention(
- query[:, start_idx:end_idx],
- key[:, start_idx:end_idx],
- value[:, start_idx:end_idx],
- attn_mask=attn_mask[:, start_idx:end_idx] if attn_mask is not None else attn_mask,
- dropout_p=dropout_p, is_causal=is_causal
- )
- else:
- return original_scaled_dot_product_attention(
- query, key, value, attn_mask=attn_mask, dropout_p=dropout_p, is_causal=is_causal
- )
- return hidden_states
-
-def attention_init():
- #ARC GPUs can't allocate more than 4GB to a single block:
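- # Monkey-patch the two attention-critical ops with the sliced implementations defined above.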
- torch.bmm = torch_bmm
- torch.nn.functional.scaled_dot_product_attention = scaled_dot_product_attention
\ No newline at end of file
diff --git a/spaces/Benebene/Chat-question-answering/interface.py b/spaces/Benebene/Chat-question-answering/interface.py
deleted file mode 100644
index 344a48c4ac3e246f86fb767944632a39e1b2c7c1..0000000000000000000000000000000000000000
--- a/spaces/Benebene/Chat-question-answering/interface.py
+++ /dev/null
@@ -1,12 +0,0 @@
-import gradio as gr
-from utils import Stuff
-
-
-def launch_gradio(s: Stuff):
- with gr.Blocks() as demo:
- question = gr.Textbox(label='Type your question about astronomy here:')
- output = gr.Textbox(label='The answer is...')
- button = gr.Button('Enter')
- button.click(fn=s.get_answer, inputs=question, outputs=output)
-
- demo.launch()
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Apk Mod Fox App.md b/spaces/Benson/text-generation/Examples/Descargar Apk Mod Fox App.md
deleted file mode 100644
index 83db28c1a62bf3c096b8e1231ed2ba1d4cefde9c..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Apk Mod Fox App.md
+++ /dev/null
@@ -1,57 +0,0 @@
-
-
Descargar APK Mod Fox App: Cómo obtener la mejor experiencia de navegador en Android
-
Si está buscando una aplicación de navegador rápida, segura y personalizable para su dispositivo Android, es posible que desee probar la aplicación APK Mod Fox. Esta es una versión modificada del popular navegador Firefox, que ofrece muchas características y beneficios que no están disponibles en la aplicación original. En este artículo, le mostraremos lo que es APK Mod Fox App, cómo descargar e instalar, y cómo usarlo para obtener la mejor experiencia de navegador en Android.
-
¿Qué es la aplicación APK Mod Fox?
-
APK Mod Fox App es una versión modificada de la aplicación Firefox Browser, que es uno de los navegadores web más populares y de confianza en el mundo. Firefox Browser es conocido por su velocidad, privacidad y opciones de personalización, pero también tiene algunas limitaciones y desventajas que algunos usuarios pueden encontrar molesto o inconveniente. Por ejemplo, tiene anuncios, rastreadores, ventanas emergentes y otros elementos no deseados que pueden afectar su experiencia de navegación. También consume mucha batería y memoria, lo que puede ralentizar el dispositivo.
Ahí es donde APK Mod Fox App entra en juego. Esta es una versión modificada de la aplicación Firefox Browser que elimina todos los anuncios, rastreadores, ventanas emergentes y otros elementos no deseados de la aplicación. También optimiza el rendimiento de la aplicación y reduce su consumo de batería y memoria. También agrega algunas características y mejoras adicionales que no están disponibles en la aplicación original, como el modo oscuro, el modo nocturno, el modo de incógnito, el bloqueador de anuncios, la VPN, el administrador de descargas y más. Con la aplicación APK Mod Fox, puedes disfrutar de una experiencia de navegador más rápida, fluida y privada en tu dispositivo Android.
-
Los beneficios de usar APK Mod Fox App
-
Algunos de los beneficios de usar la aplicación APK Mod Fox son:
-
-
Puedes navegar por la web sin anuncios, rastreadores, ventanas emergentes u otros elementos molestos que puedan distraerte o comprometer tu privacidad.
-
-
Puede acceder a varias características y herramientas que pueden mejorar su experiencia de navegación, como el modo oscuro, el modo nocturno, el modo de incógnito, el bloqueador de anuncios, la VPN, el administrador de descargas y más.
-
Puede ahorrar batería y memoria mediante el uso de una aplicación de navegador ligera y optimizada que no consume demasiados recursos de su dispositivo.
-
Puede disfrutar de un rendimiento del navegador rápido y suave que puede cargar páginas web de forma rápida y sin problemas.
-
-
Los inconvenientes de usar APK Mod Fox App
-
Algunos de los inconvenientes de usar APK Mod Fox App son:
-
-
Es posible que encuentre algunos problemas de compatibilidad o errores con algunos sitios web o aplicaciones que no están diseñados para la aplicación modded.
-
Es posible que no reciba actualizaciones regulares o soporte de los desarrolladores oficiales de la aplicación Firefox Browser.
-
Es posible que exponga su dispositivo a posibles riesgos de seguridad o malware si descarga la aplicación modificada desde una fuente no confiable o si habilita fuentes desconocidas en su dispositivo.
-
-
¿Cómo descargar e instalar la aplicación APK Mod Fox?
-
Si desea descargar e instalar la aplicación APK Mod Fox en su dispositivo Android, debe seguir estos pasos:
-
Paso 1: Encontrar una fuente confiable para la aplicación modded
-
El primer paso es encontrar una fuente confiable para la aplicación modded. No se puede descargar APK Mod Fox App desde la Google Play Store, ya que no es una aplicación oficial. Es necesario encontrar un sitio web de terceros o plataforma que ofrece la aplicación modded para su descarga gratuita. Sin embargo, debe tener cuidado y hacer algunas investigaciones antes de descargar la aplicación modificada desde cualquier fuente. Usted necesita para asegurarse de que la fuente es confiable y de buena reputación, y que la aplicación modded es seguro y libre de virus. Puede comprobar las revisiones, calificaciones, comentarios y comentarios de otros usuarios que han descargado la aplicación modificada desde la misma fuente. También puede usar un escáner de malware o una aplicación antivirus para escanear la aplicación modificada antes de instalarla en su dispositivo.
-
-
El segundo paso es habilitar fuentes desconocidas en su dispositivo. Esta es una configuración de seguridad que le permite instalar aplicaciones desde fuentes distintas de Google Play Store. De forma predeterminada, esta configuración está desactivada en la mayoría de los dispositivos Android, ya que puede exponer su dispositivo a posibles riesgos de seguridad o malware. Sin embargo, si desea instalar APK Mod Fox App, es necesario habilitar esta configuración temporalmente. Para hacer esto, es necesario ir a la configuración de su dispositivo, a continuación, toque en la seguridad o la privacidad, a continuación, busque la opción que dice fuentes desconocidas o instalar aplicaciones desconocidas. Luego, cambie el interruptor o marque la casilla para habilitar esta opción. También es posible que necesite conceder permiso para la fuente o aplicación específica que desea instalar.
-
Paso 3: Descargar e instalar el archivo APK
-
El tercer paso es descargar e instalar el archivo APK de la aplicación APK Mod Fox en su dispositivo. Para hacer esto, debe abrir la aplicación del navegador en su dispositivo, luego ir al sitio web o plataforma donde encontró la aplicación modificada. Luego, busque el botón de descarga o enlace para la aplicación modded, y toque en él. Es posible que vea una ventana emergente o una notificación que le pida que confirme la descarga o instalación de la aplicación modificada. Toque en Aceptar o Instalar para continuar. Espere a que se complete el proceso de descarga e instalación, que puede tardar unos minutos dependiendo de la velocidad de Internet y el rendimiento del dispositivo.
-
¿Cómo usar la aplicación APK Mod Fox?
-
Una vez que haya descargado e instalado la aplicación APK Mod Fox en su dispositivo, puede comenzar a usarla para navegar por la web en su dispositivo Android. Aquí hay algunos consejos sobre cómo utilizar APK Mod Fox App:
-
-
Personaliza la configuración y las preferencias de tu navegador
-
-
Navegar por la web con mayor privacidad y seguridad
-
Otra ventaja de usar APK Mod Fox App es que se puede navegar por la web con mayor privacidad y seguridad. La aplicación modificada elimina todos los anuncios, rastreadores, ventanas emergentes y otros elementos no deseados de las páginas web que visita. También protege su actividad en línea y los datos de hackers, ISP, anunciantes y otros terceros que podrían tratar de espiar a usted o robar su información. También puede usar funciones como el modo incógnito, VPN y bloqueador de anuncios para aumentar aún más su privacidad y seguridad mientras navega por la web.
-
Disfrute del rendimiento rápido y suave de la aplicación
-
Una tercera ventaja de usar APK Mod Fox App es que se puede disfrutar del rendimiento rápido y suave de la aplicación. La aplicación modded optimiza el rendimiento de la aplicación y reduce su consumo de batería y memoria. También mejora la velocidad y suavidad de la aplicación mediante la carga de páginas web de forma rápida y sin problemas. También puedes usar funciones como gestor de descargas, VPN y bloqueador de anuncios para aumentar la velocidad de navegación y evitar interrupciones o ralentizaciones.
-
Conclusión
-
APK Mod Fox App es una versión modificada de la aplicación del navegador Firefox que ofrece muchos beneficios y características que no están disponibles en la aplicación original. Es una aplicación de navegador rápida, segura y personalizable que puede mejorar su experiencia de navegación en Android. Sin embargo, también tiene algunos inconvenientes y riesgos que debe tener en cuenta antes de descargarlo e instalarlo en su dispositivo. Necesitas encontrar una fuente confiable para la aplicación modded, habilitar fuentes desconocidas en tu dispositivo y escanear la aplicación modded en busca de malware o virus. También debe tener cuidado con la compatibilidad y las actualizaciones de la aplicación modded.
-
Resumen de los puntos principales
-
En este artículo, te hemos mostrado:
-
-
¿Qué es APK Mod Fox App y cómo se diferencia de la aplicación original del navegador Firefox.
-
Los beneficios y desventajas de usar APK Mod Fox App.
-
-
Cómo utilizar APK Mod Fox App para obtener la mejor experiencia de navegador en Android.
-
-
Llamada a la acción para los lectores
-
Si usted está interesado en probar APK Mod Fox App, puede seguir los pasos que hemos proporcionado en este artículo para descargar e instalar en su dispositivo. Sin embargo, también debe hacer su propia investigación y comprobar las revisiones y calificaciones de la aplicación modded antes de descargarlo. También debe realizar una copia de seguridad de sus datos y dispositivo antes de instalar la aplicación modded, en caso de que algo salga mal o desee desinstalarlo más tarde. También debe tener cuidado con la seguridad y la privacidad de su actividad en línea y los datos durante el uso de la aplicación modded.
-
Esperamos que haya encontrado este artículo útil e informativo. Si tiene alguna pregunta o comentario, no dude en dejar un comentario a continuación. ¡Gracias por leer!
-
-podría querer probar estas aplicaciones de navegador para Android: - Brave Browser: Esta es una aplicación de navegador rápido, seguro y privado que bloquea los anuncios y rastreadores por defecto. También le recompensa con criptomoneda para navegar por la web. - Opera Browser: Esta es una aplicación de navegador rápida, ligera y personalizable que ofrece funciones como bloqueador de anuncios, VPN, ahorro de datos, modo nocturno y más. - Chrome Browser: Esta es una aplicación de navegador popular y confiable que ofrece características como sincronización, búsqueda por voz, modo de incógnito, modo oscuro y más. 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Carreras Lmites Mod Apk.md b/spaces/Benson/text-generation/Examples/Descargar Carreras Lmites Mod Apk.md
deleted file mode 100644
index 5ba2b15daeb54da8a2c4fad4fe938f27341332fc..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Carreras Lmites Mod Apk.md
+++ /dev/null
@@ -1,89 +0,0 @@
-
-
Descargar Racing Limits Mod APK: Una guía para los entusiastas de las carreras
-
Si eres un fan de los juegos de carreras, es posible que hayas oído hablar de Racing Limits, un popular juego de carreras estilo árcade que te permite competir en la ciudad y el tráfico de carreteras. Este juego ofrece física de conducción realista, vehículos de alto detalle, afinaciones y mejoras, gráficos realistas y cinco modos de carreras agradables. Sin embargo, si quieres disfrutar del juego al máximo, es posible que desee descargar el Racing Limits mod APK, que le da dinero ilimitado y acceso a todas las características del juego. En este artículo, le diremos qué es Racing Limits, cuáles son sus características y modos, cómo jugar mejor, y cómo descargar el mod APK fácilmente.
Racing Limits es un juego que define los estándares móviles de los juegos de carreras de tipo árcade infinito. Basado en carreras y adelantamiento de vehículos tanto en la ciudad y el tráfico de carreteras, este juego tiene muchas características que lo hacen divertido y desafiante. Estos son algunos de ellos:
-
5 modos agradables de carreras
-
Racing Limits tiene cinco modos de carrera diferentes entre los que puedes elegir. Son:
-
-
Modo portador: Este modo tiene cientos de niveles que puedes completar al lograr ciertos objetivos. Puedes ganar dinero y comprar coches nuevos o mejorar los existentes.
-
Modo infinito: Este modo no tiene fin. Puedes correr todo el tiempo que quieras e intentar batir tus propios récords. También puedes ganar dinero y bonos adelantando a otros vehículos.
-
Modo contra-tiempo: Este modo prueba tu velocidad y habilidades. Tienes que correr contra el reloj y llegar a los puntos de control antes de que acabe el tiempo.
-
Modo libre: Este modo te permite correr libremente sin reglas ni restricciones. Puede elegir la densidad de tráfico, el límite de velocidad y la hora del día.
-
-
-
Física de conducción realista
-
Racing Limits tiene una física de conducción realista que hace que el juego sea más inmersivo y desafiante. Todos los coches de Racing Limits tienen una potencia, par y velocidades de transmisión realistas. El proceso de aceleración y las velocidades máximas se basan en una simulación completa. Se tienen en cuenta el peso del vehículo, las relaciones de transmisión, la potencia del motor y las relaciones de par.
-
Vehículos de alto detalle
-
Racing Limits tiene un montón de vehículos con altos niveles de detalle gráfico que están esperando a que conduzcas. Los detalles gráficos de los coches presentes en Racing Limits son los mejores de su categoría. Usted puede elegir entre diferentes tipos de coches como sedanes, SUV, coches deportivos, supercoches, y más.
-
Afinaciones y mejoras
-
Racing
Racing Limits te permite personalizar tu coche con varias opciones. Puedes cambiar el color de tu coche, llantas y pinzas. También puede aplicar diferentes tipos de vinilos a su coche. También puede mejorar el rendimiento de su coche mediante el aumento de la potencia del motor, el freno y la sensibilidad de la dirección, y la reducción de peso.
-
Gráficos realistas
-
Racing Limits tiene gráficos impresionantes que hacen el juego más realista y agradable. El juego tiene diferentes entornos con iluminación realista y efectos climáticos. Puedes correr en condiciones de sol, lluvia, niebla o nieve. También puede elegir la hora del día desde el amanecer hasta la noche. El juego también tiene efectos de sonido realistas y música que mejoran la experiencia de juego.
-
-
Modos de juego de límites de carreras
-
Como mencionamos antes, Racing Limits tiene cinco modos diferentes de carreras que puedes jugar. Cada modo tiene sus propios desafíos y recompensas. Aquí hay una breve descripción de cada modo:
-
Modo portador
-
-
Modo infinito
-
Este es el modo en el que puedes correr sin límites. Puedes elegir la densidad de tráfico, el límite de velocidad y la hora del día. Tienes que adelantar a otros vehículos lo más cerca posible para ganar más dinero y bonos. También puedes usar nitro para aumentar tu velocidad y realizar maniobras arriesgadas. Puedes comparar tus puntuaciones con otros jugadores de la clasificación.
-
Modo contra-tiempo
-
Este es el modo en el que tienes que correr contra el reloj. Tienes que llegar a los puntos de control antes de que acabe el tiempo. Puedes ganar tiempo extra adelantando a otros vehículos o usando nitro. Tienes que ser rápido y tener cuidado de no chocar o quedarse sin tiempo.
-
Modo libre
-
Este es el modo en el que puedes correr libremente sin reglas ni restricciones. Puedes elegir la densidad de tráfico, el límite de velocidad y la hora del día. También puede apagar el tráfico y disfrutar del paisaje. Puede utilizar este modo para practicar sus habilidades de conducción o simplemente divertirse.
-
Modo multijugador
-
Este es el modo en el que puedes competir con tus amigos u otros jugadores de todo el mundo en tiempo real. Puedes unirte o crear salas y carreras en diferentes pistas. Puedes chatear con otros jugadores y enviarles emojis. También puedes ver sus perfiles y estadísticas.
-
Consejos de juego de límites de carreras
-
Racing Limits es un juego que requiere habilidad y estrategia para dominar. Aquí hay algunos consejos que pueden ayudarle a mejorar su rendimiento y disfrutar del juego más:
-
Elegir el ángulo de la cámara derecha
-
Racing Limits ofrece cuatro ángulos de cámara diferentes entre los que puedes alternar durante el juego. Son:
-
-
Bumper cam: Esta es la cámara que muestra la vista desde el parachoques delantero de su coche. Esta cámara le da una sensación realista de velocidad e inmersión, pero también limita su visibilidad de la carretera y el tráfico.
-
-
Cockpit cam: Esta es la cámara que muestra la vista desde el interior de su coche. Esta cámara le da una sensación realista de conducción e inmersión, pero también limita su visibilidad de la carretera y el tráfico.
-
Tercera persona cámara: Esta es la cámara que muestra la vista desde detrás de su coche. Esta cámara te da una buena vista de la carretera y el tráfico, pero también reduce tu sensación de velocidad e inmersión.
-
-
Usted debe elegir el ángulo de la cámara que se adapte a su preferencia y estilo de carreras. También puede cambiar el ángulo de la cámara durante el juego tocando en la pantalla.
-
Use the responsive, easy controls
-
Racing Limits has responsive, easy controls that let you steer your car precisely and effortlessly. You can choose between three control options: tilt, touch, or steering wheel. You can also adjust the sensitivity and calibration of each option in the settings menu.
-
Tilt control lets you steer your car by tilting your device left or right. Touch control lets you steer by tapping the left or right side of the screen. Steering wheel control lets you steer by dragging a virtual wheel on the screen.
-
You should choose the control option that suits your preference and comfort. You can also use the brake and nitro buttons on the screen to slow down or speed up your car, and you can change the position and size of the buttons in the settings menu.
-
Customize your car to suit your style
-
Racing Limits lets you customize your car with various options. You can change the color of your car, wheels, and brake calipers. You can also apply different kinds of vinyl wraps to your car. You can also improve your car's performance by increasing engine power, brake and steering sensitivity, and reducing weight.
-
-
Keep your racing lines clean and tight
-
Racing Limits takes skill and strategy to master. One of the most important skills is keeping your racing lines clean and tight. Racing lines are the paths you take on the road to optimize your speed and distance. You should try to follow the racing lines as closely as possible and avoid unnecessary turns or movements.
-
You should also try to overtake other vehicles as closely as possible to earn more money and bonuses. However, be careful not to crash into or clip other vehicles, as this will damage your car and reduce your speed. You should also avoid driving in the opposite lane, since it increases the risk of collisions and penalties.
-
Draft other racers to gain speed
-
Racing Limits rewards skill and strategy. One of the most effective strategies is drafting other racers to gain speed. Drafting is a technique in which you follow closely behind another vehicle to reduce air resistance and increase your speed. You can use this technique to overtake other vehicles or escape from them.
-
You should draft other racers whenever possible, especially on straight roads and highways. However, be careful not to sit behind them for too long, as this reduces your visibility and reaction time. You should also watch out for sudden moves or braking from the vehicle in front of you, as they can cause you to crash or lose speed.
-
How to download the Racing Limits mod APK
-
If you want to enjoy Racing Limits to the fullest, you may want to download the mod APK, which gives you unlimited money and access to all of the game's features. Here are the steps to download and install the mod APK easily:
-
Step 1: Find a reliable source
-
-
You should also check the source's reviews and ratings before downloading, as they can give you an idea of its quality and safety. You can also ask for recommendations from other players or friends who have downloaded the mod APK before.
-
Step 2: Enable unknown sources on your device
-
The next step is to enable unknown sources on your device, which lets you install apps from sources other than the Google Play Store. To do this, go to your device settings, then security, then unknown sources, and turn it on. You may also have to confirm a warning message that appears on your screen.
-
You should only enable unknown sources while you are downloading and installing the mod APK file, and turn it off afterwards, since it can pose a security risk to your device.
-
Step 3: Download and install the mod APK file
-
The third step is to download and install the mod APK file on your device. To do this, click the link provided by the source you chose in step 1, and then wait for the download to finish. You may also have to allow some permissions or accept some terms and conditions before downloading.
-
Once the download is complete, locate the mod APK file in your device's storage, usually in the downloads folder, and tap it to start the installation. You may also have to allow some permissions or accept some terms and conditions before installing.
-
Step 4: Launch the game and enjoy unlimited money and features
-
-
Conclusion
-
Racing Limits is a fun, exciting arcade-style racing game that lets you race through city and highway traffic. It has realistic driving physics, highly detailed vehicles, tuning and upgrades, realistic graphics, and five enjoyable racing modes. However, if you want to enjoy the game to the fullest, you may want to download the Racing Limits mod APK, which gives you unlimited money and access to all of the game's features. In this article, we have explained what Racing Limits is, what its features and modes are, how to play it better, and how to download the mod APK easily. We hope this article has helped you and that you have a great time playing Racing Limits.
-
Frequently asked questions
-
Here are some frequently asked questions about Racing Limits and its mod APK:
-
Q: Is the Racing Limits mod APK safe to download and install?
-
A: Yes, the Racing Limits mod APK is safe to download and install, as long as you follow the steps we have provided in this article. However, you should always be careful not to download from untrusted or malicious sources, as they may contain viruses or malware that can damage your device or steal your data. You should also check the source's reviews and ratings before downloading, as they can give you an idea of its quality and safety. You should also turn off unknown sources on your device after installing the mod APK, since leaving it on can pose a security risk to your device.
-
Q: What are the benefits of downloading the Racing Limits mod APK?
-
A: The benefits of downloading the Racing Limits mod APK are that you get unlimited money and access to all of the game's features. You can use that money and those features to buy new cars, upgrade the ones you have, or change their appearance. You can also play any mode or track you want, without restrictions or limitations. You can enjoy the game to the fullest without spending real money or waiting for anything.
-
-
A: To update the Racing Limits mod APK, follow the same steps we have provided in this article for downloading and installing it. Find a reliable source that offers the latest version of the mod APK file for Racing Limits, and then download and install it on your device. You may also have to uninstall the previous version of the mod APK before installing the new one.
-
Q: Can I play the Racing Limits mod APK online with other players?
-
A: Yes, you can play the Racing Limits mod APK online with other players in multiplayer mode. However, be aware that not all players may be using the mod APK, and some might be using the original version of the game. This could cause compatibility issues or unfair advantages for some players. You should also respect other players and not use cheats or hacks that could ruin their gaming experience.
-
Q: Can I play the Racing Limits mod APK without an Internet connection?
-
A: Yes, you can play the Racing Limits mod APK without an Internet connection in some modes, such as career mode, infinite mode, against-time mode, and free mode. However, you will not be able to play multiplayer mode or access some online features such as leaderboards or chat rooms.
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Do Cabra Simulador.md b/spaces/Benson/text-generation/Examples/Descargar Do Cabra Simulador.md
deleted file mode 100644
index ffe139e978b11c611308350f0502066ab86f8495..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Do Cabra Simulador.md
+++ /dev/null
@@ -1,83 +0,0 @@
-
-
Download Goat Simulator: How to Become a Virtual Goat and Wreck Stuff
-
Have you ever wondered what it would be like to be a goat? To roam around freely, headbutt anything in sight, and cause as much chaos as possible? Well, wonder no more, because Goat Simulator is the game for you. In this article, we will tell you everything you need to know about this fun and absurd game, and how you can download and play it on your device.
A brief introduction to the game and its features
-
Goat Simulator is a game that simulates the life of a goat, but not in a realistic or serious way. Instead, it is a parody of other simulation games, such as Flight Simulator or Farming Simulator, that exaggerates the physics and glitches of the game engine to create a ridiculous and hilarious experience. The game was developed by Coffee Stain Studios and released in 2014 as an April Fools' joke, but it became so popular that it spawned several spin-offs and DLCs.
-
The game has no specific goals or objectives, other than exploring the open-world environment and causing as much destruction as possible. You can interact with various objects and characters in the game, such as cars, trampolines, explosives, zombies, aliens, and more. You can also perform all sorts of stunts and tricks, such as backflips, wall runs, ragdoll physics, and slow motion. You can even lick things and drag them around with your tongue.
-
The game also supports the Steam Workshop, which means you can create your own goats, levels, missions, game modes, and more. You can also download and install mods created by other players, which add new features and content to the game.
-
Why you should play Goat Simulator
-
-
If you are looking for a game that challenges your skills or tests your intelligence, then Goat Simulator is not for you. But if you are looking for a game that makes you smile, chuckle, or even laugh out loud, then Goat Simulator is definitely for you. It is a game that will make you forget your worries and stress for a while, and simply enjoy being a goat.
-
-
How to download Goat Simulator for different platforms
-
Goat Simulator is available for various platforms, such as Windows, Mac, Linux, Android, iOS, Xbox One, Xbox 360, PlayStation 4, PlayStation 3, Nintendo Switch, Amazon Fire TV, and more. You can download it from different sources depending on your device.
- We built Structure and Appearance Paired (PAIR) Diffusion, which allows reference image-guided appearance manipulation and
- structure editing of an image at the object level. PAIR Diffusion models an image as a composition of multiple objects and enables control
- over the structure and appearance properties of each object. Describing object appearance with text can be challenging and ambiguous, so PAIR Diffusion
- lets a user control the appearance of an object using images. The user can further use text as another degree of control over appearance.
- Having fine-grained control over appearance and structure at the object level can benefit future work in video and 3D, besides image editing,
- where we need consistent appearance across time in the case of video, or across viewing positions in the case of 3D.
-
-
-
- """)
-
- gr.HTML("""
-
For faster inference without waiting in the queue, you may duplicate this Space and upgrade to a GPU in the settings.
-
-
-
-
""")
-
- with gr.Tab('Edit Appearance'):
- create_app_demo()
- with gr.Tab('Edit Structure'):
- create_struct_demo()
- with gr.Tab('Edit Both'):
- create_both_demo()
-
-
-block.queue(max_size=20)
-block.launch(debug=True)
\ No newline at end of file
diff --git a/spaces/PAIR/PAIR-Diffusion/cldm/data.py b/spaces/PAIR/PAIR-Diffusion/cldm/data.py
deleted file mode 100644
index 38ae14a1a3d0ec0be874211e4931959c67afee28..0000000000000000000000000000000000000000
--- a/spaces/PAIR/PAIR-Diffusion/cldm/data.py
+++ /dev/null
@@ -1,99 +0,0 @@
-import os
-import torch
-import pytorch_lightning as pl
-from omegaconf import OmegaConf
-from functools import partial
-from ldm.util import instantiate_from_config
-from torch.utils.data import random_split, DataLoader, Dataset, Subset
-
-class WrappedDataset(Dataset):
- """Wraps an arbitrary object with __len__ and __getitem__ into a pytorch dataset"""
-
- def __init__(self, dataset):
- self.data = dataset
-
- def __len__(self):
- return len(self.data)
-
- def __getitem__(self, idx):
- return self.data[idx]
-
-class DataModuleFromConfig(pl.LightningDataModule):
- def __init__(self, batch_size, train=None, validation=None, test=None, predict=None,
- wrap=False, num_workers=None, shuffle_test_loader=False, use_worker_init_fn=False,
- shuffle_val_dataloader=False):
- super().__init__()
- self.batch_size = batch_size
- self.dataset_configs = dict()
- self.num_workers = num_workers if num_workers is not None else batch_size * 2
- self.use_worker_init_fn = use_worker_init_fn
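-        # only attach dataloader factory methods for the splits that were actually configured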
- if train is not None:
- self.dataset_configs["train"] = train
- self.train_dataloader = self._train_dataloader
- if validation is not None:
- self.dataset_configs["validation"] = validation
- self.val_dataloader = partial(self._val_dataloader, shuffle=shuffle_val_dataloader)
- if test is not None:
- self.dataset_configs["test"] = test
- self.test_dataloader = partial(self._test_dataloader, shuffle=shuffle_test_loader)
- if predict is not None:
- self.dataset_configs["predict"] = predict
- self.predict_dataloader = self._predict_dataloader
- self.wrap = wrap
-
- def prepare_data(self):
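-        # instantiate each configured dataset once and discard the result, presumably to
-        # trigger any one-time download/preprocessing before the dataloaders are built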
- for data_cfg in self.dataset_configs.values():
- instantiate_from_config(data_cfg)
-
- def setup(self, stage=None):
- self.datasets = dict(
- (k, instantiate_from_config(self.dataset_configs[k]))
- for k in self.dataset_configs)
- if self.wrap:
- for k in self.datasets:
- self.datasets[k] = WrappedDataset(self.datasets[k])
-
- def _train_dataloader(self):
- init_fn = None
- return DataLoader(self.datasets["train"], batch_size=self.batch_size,
-                          num_workers=self.num_workers, shuffle=True,
- worker_init_fn=init_fn)
-
- def _val_dataloader(self, shuffle=False):
- init_fn = None
- return DataLoader(self.datasets["validation"],
- batch_size=self.batch_size,
- num_workers=self.num_workers,
- worker_init_fn=init_fn,
- shuffle=shuffle)
-
- def _test_dataloader(self, shuffle=False):
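-        # NOTE: Txt2ImgIterableBaseDataset and worker_init_fn are used here and in
-        # _predict_dataloader but are never imported or defined in this file; they
-        # presumably come from the original latent-diffusion training utilities
-        # (e.g. ldm.data.base and main.py) and would need to be imported for this to run.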
- is_iterable_dataset = isinstance(self.datasets['train'], Txt2ImgIterableBaseDataset)
- if is_iterable_dataset or self.use_worker_init_fn:
- init_fn = worker_init_fn
- else:
- init_fn = None
-
- # do not shuffle dataloader for iterable dataset
- shuffle = shuffle and (not is_iterable_dataset)
-
- return DataLoader(self.datasets["test"], batch_size=self.batch_size,
- num_workers=self.num_workers, worker_init_fn=init_fn, shuffle=shuffle)
-
- def _predict_dataloader(self, shuffle=False):
- if isinstance(self.datasets['predict'], Txt2ImgIterableBaseDataset) or self.use_worker_init_fn:
- init_fn = worker_init_fn
- else:
- init_fn = None
- return DataLoader(self.datasets["predict"], batch_size=self.batch_size,
- num_workers=self.num_workers, worker_init_fn=init_fn)
-
-
-def create_data(config):
- data = instantiate_from_config(config.data)
- # NOTE according to https://pytorch-lightning.readthedocs.io/en/latest/datamodules.html
- # calling these ourselves should not be necessary but it is.
- # lightning still takes care of proper multiprocessing though
- data.prepare_data()
- data.setup()
- return data
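-
-
-# A minimal sketch (assumed, not taken from this repo's configs) of the config layout
-# that create_data() expects, following the usual instantiate_from_config convention
-# of `target` + `params`, e.g. in YAML:
-#
-#   data:
-#     target: cldm.data.DataModuleFromConfig
-#     params:
-#       batch_size: 4
-#       num_workers: 8
-#       wrap: false
-#       train:
-#         target: <your.dataset.Class>   # hypothetical dataset class
-#         params: {...}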
\ No newline at end of file
diff --git a/spaces/PAIR/PAIR-Diffusion/ldm/data/util.py b/spaces/PAIR/PAIR-Diffusion/ldm/data/util.py
deleted file mode 100644
index 5b60ceb2349e3bd7900ff325740e2022d2903b1c..0000000000000000000000000000000000000000
--- a/spaces/PAIR/PAIR-Diffusion/ldm/data/util.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import torch
-
-from ldm.modules.midas.api import load_midas_transform
-
-
-class AddMiDaS(object):
- def __init__(self, model_type):
- super().__init__()
- self.transform = load_midas_transform(model_type)
-
- def pt2np(self, x):
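-        # map a tensor in [-1, 1] to a numpy array in [0, 1] for the MiDaS transform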
- x = ((x + 1.0) * .5).detach().cpu().numpy()
- return x
-
- def np2pt(self, x):
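-        # map a numpy array in [0, 1] back to a tensor in [-1, 1]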
- x = torch.from_numpy(x) * 2 - 1.
- return x
-
- def __call__(self, sample):
- # sample['jpg'] is tensor hwc in [-1, 1] at this point
- x = self.pt2np(sample['jpg'])
- x = self.transform({"image": x})["image"]
- sample['midas_in'] = x
- return sample
\ No newline at end of file
diff --git a/spaces/PAIR/PAIR-Diffusion/ldm/modules/diffusionmodules/model.py b/spaces/PAIR/PAIR-Diffusion/ldm/modules/diffusionmodules/model.py
deleted file mode 100644
index b089eebbe1676d8249005bb9def002ff5180715b..0000000000000000000000000000000000000000
--- a/spaces/PAIR/PAIR-Diffusion/ldm/modules/diffusionmodules/model.py
+++ /dev/null
@@ -1,852 +0,0 @@
-# pytorch_diffusion + derived encoder decoder
-import math
-import torch
-import torch.nn as nn
-import numpy as np
-from einops import rearrange
-from typing import Optional, Any
-
-from ldm.modules.attention import MemoryEfficientCrossAttention
-
-try:
- import xformers
- import xformers.ops
- XFORMERS_IS_AVAILBLE = True
-except ImportError:
- XFORMERS_IS_AVAILBLE = False
- print("No module 'xformers'. Proceeding without it.")
-
-
-def get_timestep_embedding(timesteps, embedding_dim):
- """
- This matches the implementation in Denoising Diffusion Probabilistic Models:
- From Fairseq.
- Build sinusoidal embeddings.
- This matches the implementation in tensor2tensor, but differs slightly
- from the description in Section 3.5 of "Attention Is All You Need".
- """
- assert len(timesteps.shape) == 1
-
- half_dim = embedding_dim // 2
- emb = math.log(10000) / (half_dim - 1)
- emb = torch.exp(torch.arange(half_dim, dtype=torch.float32) * -emb)
- emb = emb.to(device=timesteps.device)
- emb = timesteps.float()[:, None] * emb[None, :]
- emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1)
- if embedding_dim % 2 == 1: # zero pad
- emb = torch.nn.functional.pad(emb, (0,1,0,0))
- return emb
-
-
-def nonlinearity(x):
- # swish
- return x*torch.sigmoid(x)
-
-
-def Normalize(in_channels, num_groups=32):
- return torch.nn.GroupNorm(num_groups=num_groups, num_channels=in_channels, eps=1e-6, affine=True)
-
-
-class Upsample(nn.Module):
- def __init__(self, in_channels, with_conv):
- super().__init__()
- self.with_conv = with_conv
- if self.with_conv:
- self.conv = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=3,
- stride=1,
- padding=1)
-
- def forward(self, x):
- x = torch.nn.functional.interpolate(x, scale_factor=2.0, mode="nearest")
- if self.with_conv:
- x = self.conv(x)
- return x
-
-
-class Downsample(nn.Module):
- def __init__(self, in_channels, with_conv):
- super().__init__()
- self.with_conv = with_conv
- if self.with_conv:
- # no asymmetric padding in torch conv, must do it ourselves
- self.conv = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=3,
- stride=2,
- padding=0)
-
- def forward(self, x):
- if self.with_conv:
- pad = (0,1,0,1)
- x = torch.nn.functional.pad(x, pad, mode="constant", value=0)
- x = self.conv(x)
- else:
- x = torch.nn.functional.avg_pool2d(x, kernel_size=2, stride=2)
- return x
-
-
-class ResnetBlock(nn.Module):
- def __init__(self, *, in_channels, out_channels=None, conv_shortcut=False,
- dropout, temb_channels=512):
- super().__init__()
- self.in_channels = in_channels
- out_channels = in_channels if out_channels is None else out_channels
- self.out_channels = out_channels
- self.use_conv_shortcut = conv_shortcut
-
- self.norm1 = Normalize(in_channels)
- self.conv1 = torch.nn.Conv2d(in_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1)
- if temb_channels > 0:
- self.temb_proj = torch.nn.Linear(temb_channels,
- out_channels)
- self.norm2 = Normalize(out_channels)
- self.dropout = torch.nn.Dropout(dropout)
- self.conv2 = torch.nn.Conv2d(out_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1)
- if self.in_channels != self.out_channels:
- if self.use_conv_shortcut:
- self.conv_shortcut = torch.nn.Conv2d(in_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1)
- else:
- self.nin_shortcut = torch.nn.Conv2d(in_channels,
- out_channels,
- kernel_size=1,
- stride=1,
- padding=0)
-
- def forward(self, x, temb):
- h = x
- h = self.norm1(h)
- h = nonlinearity(h)
- h = self.conv1(h)
-
- if temb is not None:
- h = h + self.temb_proj(nonlinearity(temb))[:,:,None,None]
-
- h = self.norm2(h)
- h = nonlinearity(h)
- h = self.dropout(h)
- h = self.conv2(h)
-
- if self.in_channels != self.out_channels:
- if self.use_conv_shortcut:
- x = self.conv_shortcut(x)
- else:
- x = self.nin_shortcut(x)
-
- return x+h
-
-
-class AttnBlock(nn.Module):
- def __init__(self, in_channels):
- super().__init__()
- self.in_channels = in_channels
-
- self.norm = Normalize(in_channels)
- self.q = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.k = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.v = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.proj_out = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
-
- def forward(self, x):
- h_ = x
- h_ = self.norm(h_)
- q = self.q(h_)
- k = self.k(h_)
- v = self.v(h_)
-
- # compute attention
- b,c,h,w = q.shape
- q = q.reshape(b,c,h*w)
- q = q.permute(0,2,1) # b,hw,c
- k = k.reshape(b,c,h*w) # b,c,hw
- w_ = torch.bmm(q,k) # b,hw,hw w[b,i,j]=sum_c q[b,i,c]k[b,c,j]
- w_ = w_ * (int(c)**(-0.5))
- w_ = torch.nn.functional.softmax(w_, dim=2)
-
- # attend to values
- v = v.reshape(b,c,h*w)
- w_ = w_.permute(0,2,1) # b,hw,hw (first hw of k, second of q)
- h_ = torch.bmm(v,w_) # b, c,hw (hw of q) h_[b,c,j] = sum_i v[b,c,i] w_[b,i,j]
- h_ = h_.reshape(b,c,h,w)
-
- h_ = self.proj_out(h_)
-
- return x+h_
-
-class MemoryEfficientAttnBlock(nn.Module):
- """
- Uses xformers efficient implementation,
- see https://github.com/MatthieuTPHR/diffusers/blob/d80b531ff8060ec1ea982b65a1b8df70f73aa67c/src/diffusers/models/attention.py#L223
- Note: this is a single-head self-attention operation
- """
- #
- def __init__(self, in_channels):
- super().__init__()
- self.in_channels = in_channels
-
- self.norm = Normalize(in_channels)
- self.q = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.k = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.v = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.proj_out = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.attention_op: Optional[Any] = None
-
- def forward(self, x):
- h_ = x
- h_ = self.norm(h_)
- q = self.q(h_)
- k = self.k(h_)
- v = self.v(h_)
-
- # compute attention
- B, C, H, W = q.shape
- q, k, v = map(lambda x: rearrange(x, 'b c h w -> b (h w) c'), (q, k, v))
-
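-        # add a singleton head dimension and fold it into the batch so q, k, v have the
-        # (batch * heads, tokens, channels) layout that xformers expects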
- q, k, v = map(
- lambda t: t.unsqueeze(3)
- .reshape(B, t.shape[1], 1, C)
- .permute(0, 2, 1, 3)
- .reshape(B * 1, t.shape[1], C)
- .contiguous(),
- (q, k, v),
- )
- out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op)
-
- out = (
- out.unsqueeze(0)
- .reshape(B, 1, out.shape[1], C)
- .permute(0, 2, 1, 3)
- .reshape(B, out.shape[1], C)
- )
- out = rearrange(out, 'b (h w) c -> b c h w', b=B, h=H, w=W, c=C)
- out = self.proj_out(out)
- return x+out
-
-
-class MemoryEfficientCrossAttentionWrapper(MemoryEfficientCrossAttention):
-    def forward(self, x, context=None, mask=None):
-        b, c, h, w = x.shape
-        x_in = x  # keep the original (b, c, h, w) input for the residual connection
-        x = rearrange(x, 'b c h w -> b (h w) c')
-        out = super().forward(x, context=context, mask=mask)
-        out = rearrange(out, 'b (h w) c -> b c h w', h=h, w=w, c=c)
-        return x_in + out
-
-
-def make_attn(in_channels, attn_type="vanilla", attn_kwargs=None):
- assert attn_type in ["vanilla", "vanilla-xformers", "memory-efficient-cross-attn", "linear", "none"], f'attn_type {attn_type} unknown'
- if XFORMERS_IS_AVAILBLE and attn_type == "vanilla":
- attn_type = "vanilla-xformers"
- print(f"making attention of type '{attn_type}' with {in_channels} in_channels")
- if attn_type == "vanilla":
- assert attn_kwargs is None
- return AttnBlock(in_channels)
- elif attn_type == "vanilla-xformers":
- print(f"building MemoryEfficientAttnBlock with {in_channels} in_channels...")
- return MemoryEfficientAttnBlock(in_channels)
-    elif attn_type == "memory-efficient-cross-attn":
- attn_kwargs["query_dim"] = in_channels
- return MemoryEfficientCrossAttentionWrapper(**attn_kwargs)
- elif attn_type == "none":
- return nn.Identity(in_channels)
- else:
- raise NotImplementedError()
-
-
-class Model(nn.Module):
- def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks,
- attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels,
- resolution, use_timestep=True, use_linear_attn=False, attn_type="vanilla"):
- super().__init__()
- if use_linear_attn: attn_type = "linear"
- self.ch = ch
- self.temb_ch = self.ch*4
- self.num_resolutions = len(ch_mult)
- self.num_res_blocks = num_res_blocks
- self.resolution = resolution
- self.in_channels = in_channels
-
- self.use_timestep = use_timestep
- if self.use_timestep:
- # timestep embedding
- self.temb = nn.Module()
- self.temb.dense = nn.ModuleList([
- torch.nn.Linear(self.ch,
- self.temb_ch),
- torch.nn.Linear(self.temb_ch,
- self.temb_ch),
- ])
-
- # downsampling
- self.conv_in = torch.nn.Conv2d(in_channels,
- self.ch,
- kernel_size=3,
- stride=1,
- padding=1)
-
- curr_res = resolution
- in_ch_mult = (1,)+tuple(ch_mult)
- self.down = nn.ModuleList()
- for i_level in range(self.num_resolutions):
- block = nn.ModuleList()
- attn = nn.ModuleList()
- block_in = ch*in_ch_mult[i_level]
- block_out = ch*ch_mult[i_level]
- for i_block in range(self.num_res_blocks):
- block.append(ResnetBlock(in_channels=block_in,
- out_channels=block_out,
- temb_channels=self.temb_ch,
- dropout=dropout))
- block_in = block_out
- if curr_res in attn_resolutions:
- attn.append(make_attn(block_in, attn_type=attn_type))
- down = nn.Module()
- down.block = block
- down.attn = attn
- if i_level != self.num_resolutions-1:
- down.downsample = Downsample(block_in, resamp_with_conv)
- curr_res = curr_res // 2
- self.down.append(down)
-
- # middle
- self.mid = nn.Module()
- self.mid.block_1 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=dropout)
- self.mid.attn_1 = make_attn(block_in, attn_type=attn_type)
- self.mid.block_2 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=dropout)
-
- # upsampling
- self.up = nn.ModuleList()
- for i_level in reversed(range(self.num_resolutions)):
- block = nn.ModuleList()
- attn = nn.ModuleList()
- block_out = ch*ch_mult[i_level]
- skip_in = ch*ch_mult[i_level]
- for i_block in range(self.num_res_blocks+1):
- if i_block == self.num_res_blocks:
- skip_in = ch*in_ch_mult[i_level]
- block.append(ResnetBlock(in_channels=block_in+skip_in,
- out_channels=block_out,
- temb_channels=self.temb_ch,
- dropout=dropout))
- block_in = block_out
- if curr_res in attn_resolutions:
- attn.append(make_attn(block_in, attn_type=attn_type))
- up = nn.Module()
- up.block = block
- up.attn = attn
- if i_level != 0:
- up.upsample = Upsample(block_in, resamp_with_conv)
- curr_res = curr_res * 2
- self.up.insert(0, up) # prepend to get consistent order
-
- # end
- self.norm_out = Normalize(block_in)
- self.conv_out = torch.nn.Conv2d(block_in,
- out_ch,
- kernel_size=3,
- stride=1,
- padding=1)
-
- def forward(self, x, t=None, context=None):
- #assert x.shape[2] == x.shape[3] == self.resolution
- if context is not None:
- # assume aligned context, cat along channel axis
- x = torch.cat((x, context), dim=1)
- if self.use_timestep:
- # timestep embedding
- assert t is not None
- temb = get_timestep_embedding(t, self.ch)
- temb = self.temb.dense[0](temb)
- temb = nonlinearity(temb)
- temb = self.temb.dense[1](temb)
- else:
- temb = None
-
- # downsampling
- hs = [self.conv_in(x)]
- for i_level in range(self.num_resolutions):
- for i_block in range(self.num_res_blocks):
- h = self.down[i_level].block[i_block](hs[-1], temb)
- if len(self.down[i_level].attn) > 0:
- h = self.down[i_level].attn[i_block](h)
- hs.append(h)
- if i_level != self.num_resolutions-1:
- hs.append(self.down[i_level].downsample(hs[-1]))
-
- # middle
- h = hs[-1]
- h = self.mid.block_1(h, temb)
- h = self.mid.attn_1(h)
- h = self.mid.block_2(h, temb)
-
- # upsampling
- for i_level in reversed(range(self.num_resolutions)):
- for i_block in range(self.num_res_blocks+1):
- h = self.up[i_level].block[i_block](
- torch.cat([h, hs.pop()], dim=1), temb)
- if len(self.up[i_level].attn) > 0:
- h = self.up[i_level].attn[i_block](h)
- if i_level != 0:
- h = self.up[i_level].upsample(h)
-
- # end
- h = self.norm_out(h)
- h = nonlinearity(h)
- h = self.conv_out(h)
- return h
-
- def get_last_layer(self):
- return self.conv_out.weight
-
-
-class Encoder(nn.Module):
- def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks,
- attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels,
- resolution, z_channels, double_z=True, use_linear_attn=False, attn_type="vanilla",
- **ignore_kwargs):
- super().__init__()
- if use_linear_attn: attn_type = "linear"
- self.ch = ch
- self.temb_ch = 0
- self.num_resolutions = len(ch_mult)
- self.num_res_blocks = num_res_blocks
- self.resolution = resolution
- self.in_channels = in_channels
-
- # downsampling
- self.conv_in = torch.nn.Conv2d(in_channels,
- self.ch,
- kernel_size=3,
- stride=1,
- padding=1)
-
- curr_res = resolution
- in_ch_mult = (1,)+tuple(ch_mult)
- self.in_ch_mult = in_ch_mult
- self.down = nn.ModuleList()
- for i_level in range(self.num_resolutions):
- block = nn.ModuleList()
- attn = nn.ModuleList()
- block_in = ch*in_ch_mult[i_level]
- block_out = ch*ch_mult[i_level]
- for i_block in range(self.num_res_blocks):
- block.append(ResnetBlock(in_channels=block_in,
- out_channels=block_out,
- temb_channels=self.temb_ch,
- dropout=dropout))
- block_in = block_out
- if curr_res in attn_resolutions:
- attn.append(make_attn(block_in, attn_type=attn_type))
- down = nn.Module()
- down.block = block
- down.attn = attn
- if i_level != self.num_resolutions-1:
- down.downsample = Downsample(block_in, resamp_with_conv)
- curr_res = curr_res // 2
- self.down.append(down)
-
- # middle
- self.mid = nn.Module()
- self.mid.block_1 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=dropout)
- self.mid.attn_1 = make_attn(block_in, attn_type=attn_type)
- self.mid.block_2 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=dropout)
-
- # end
- self.norm_out = Normalize(block_in)
- self.conv_out = torch.nn.Conv2d(block_in,
- 2*z_channels if double_z else z_channels,
- kernel_size=3,
- stride=1,
- padding=1)
-
- def forward(self, x):
- # timestep embedding
- temb = None
-
- # downsampling
- hs = [self.conv_in(x)]
- for i_level in range(self.num_resolutions):
- for i_block in range(self.num_res_blocks):
- h = self.down[i_level].block[i_block](hs[-1], temb)
- if len(self.down[i_level].attn) > 0:
- h = self.down[i_level].attn[i_block](h)
- hs.append(h)
- if i_level != self.num_resolutions-1:
- hs.append(self.down[i_level].downsample(hs[-1]))
-
- # middle
- h = hs[-1]
- h = self.mid.block_1(h, temb)
- h = self.mid.attn_1(h)
- h = self.mid.block_2(h, temb)
-
- # end
- h = self.norm_out(h)
- h = nonlinearity(h)
- h = self.conv_out(h)
- return h
-
-
-class Decoder(nn.Module):
- def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks,
- attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels,
- resolution, z_channels, give_pre_end=False, tanh_out=False, use_linear_attn=False,
- attn_type="vanilla", **ignorekwargs):
- super().__init__()
- if use_linear_attn: attn_type = "linear"
- self.ch = ch
- self.temb_ch = 0
- self.num_resolutions = len(ch_mult)
- self.num_res_blocks = num_res_blocks
- self.resolution = resolution
- self.in_channels = in_channels
- self.give_pre_end = give_pre_end
- self.tanh_out = tanh_out
-
- # compute in_ch_mult, block_in and curr_res at lowest res
- in_ch_mult = (1,)+tuple(ch_mult)
- block_in = ch*ch_mult[self.num_resolutions-1]
- curr_res = resolution // 2**(self.num_resolutions-1)
- self.z_shape = (1,z_channels,curr_res,curr_res)
- print("Working with z of shape {} = {} dimensions.".format(
- self.z_shape, np.prod(self.z_shape)))
-
- # z to block_in
- self.conv_in = torch.nn.Conv2d(z_channels,
- block_in,
- kernel_size=3,
- stride=1,
- padding=1)
-
- # middle
- self.mid = nn.Module()
- self.mid.block_1 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=dropout)
- self.mid.attn_1 = make_attn(block_in, attn_type=attn_type)
- self.mid.block_2 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=dropout)
-
- # upsampling
- self.up = nn.ModuleList()
- for i_level in reversed(range(self.num_resolutions)):
- block = nn.ModuleList()
- attn = nn.ModuleList()
- block_out = ch*ch_mult[i_level]
- for i_block in range(self.num_res_blocks+1):
- block.append(ResnetBlock(in_channels=block_in,
- out_channels=block_out,
- temb_channels=self.temb_ch,
- dropout=dropout))
- block_in = block_out
- if curr_res in attn_resolutions:
- attn.append(make_attn(block_in, attn_type=attn_type))
- up = nn.Module()
- up.block = block
- up.attn = attn
- if i_level != 0:
- up.upsample = Upsample(block_in, resamp_with_conv)
- curr_res = curr_res * 2
- self.up.insert(0, up) # prepend to get consistent order
-
- # end
- self.norm_out = Normalize(block_in)
- self.conv_out = torch.nn.Conv2d(block_in,
- out_ch,
- kernel_size=3,
- stride=1,
- padding=1)
-
- def forward(self, z):
- #assert z.shape[1:] == self.z_shape[1:]
- self.last_z_shape = z.shape
-
- # timestep embedding
- temb = None
-
- # z to block_in
- h = self.conv_in(z)
-
- # middle
- h = self.mid.block_1(h, temb)
- h = self.mid.attn_1(h)
- h = self.mid.block_2(h, temb)
-
- # upsampling
- for i_level in reversed(range(self.num_resolutions)):
- for i_block in range(self.num_res_blocks+1):
- h = self.up[i_level].block[i_block](h, temb)
- if len(self.up[i_level].attn) > 0:
- h = self.up[i_level].attn[i_block](h)
- if i_level != 0:
- h = self.up[i_level].upsample(h)
-
- # end
- if self.give_pre_end:
- return h
-
- h = self.norm_out(h)
- h = nonlinearity(h)
- h = self.conv_out(h)
- if self.tanh_out:
- h = torch.tanh(h)
- return h
-
-
-class SimpleDecoder(nn.Module):
- def __init__(self, in_channels, out_channels, *args, **kwargs):
- super().__init__()
- self.model = nn.ModuleList([nn.Conv2d(in_channels, in_channels, 1),
- ResnetBlock(in_channels=in_channels,
- out_channels=2 * in_channels,
- temb_channels=0, dropout=0.0),
- ResnetBlock(in_channels=2 * in_channels,
- out_channels=4 * in_channels,
- temb_channels=0, dropout=0.0),
- ResnetBlock(in_channels=4 * in_channels,
- out_channels=2 * in_channels,
- temb_channels=0, dropout=0.0),
- nn.Conv2d(2*in_channels, in_channels, 1),
- Upsample(in_channels, with_conv=True)])
- # end
- self.norm_out = Normalize(in_channels)
- self.conv_out = torch.nn.Conv2d(in_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1)
-
- def forward(self, x):
- for i, layer in enumerate(self.model):
- if i in [1,2,3]:
- x = layer(x, None)
- else:
- x = layer(x)
-
- h = self.norm_out(x)
- h = nonlinearity(h)
- x = self.conv_out(h)
- return x
-
-
-class UpsampleDecoder(nn.Module):
- def __init__(self, in_channels, out_channels, ch, num_res_blocks, resolution,
- ch_mult=(2,2), dropout=0.0):
- super().__init__()
- # upsampling
- self.temb_ch = 0
- self.num_resolutions = len(ch_mult)
- self.num_res_blocks = num_res_blocks
- block_in = in_channels
- curr_res = resolution // 2 ** (self.num_resolutions - 1)
- self.res_blocks = nn.ModuleList()
- self.upsample_blocks = nn.ModuleList()
- for i_level in range(self.num_resolutions):
- res_block = []
- block_out = ch * ch_mult[i_level]
- for i_block in range(self.num_res_blocks + 1):
- res_block.append(ResnetBlock(in_channels=block_in,
- out_channels=block_out,
- temb_channels=self.temb_ch,
- dropout=dropout))
- block_in = block_out
- self.res_blocks.append(nn.ModuleList(res_block))
- if i_level != self.num_resolutions - 1:
- self.upsample_blocks.append(Upsample(block_in, True))
- curr_res = curr_res * 2
-
- # end
- self.norm_out = Normalize(block_in)
- self.conv_out = torch.nn.Conv2d(block_in,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1)
-
- def forward(self, x):
- # upsampling
- h = x
- for k, i_level in enumerate(range(self.num_resolutions)):
- for i_block in range(self.num_res_blocks + 1):
- h = self.res_blocks[i_level][i_block](h, None)
- if i_level != self.num_resolutions - 1:
- h = self.upsample_blocks[k](h)
- h = self.norm_out(h)
- h = nonlinearity(h)
- h = self.conv_out(h)
- return h
-
-
-class LatentRescaler(nn.Module):
- def __init__(self, factor, in_channels, mid_channels, out_channels, depth=2):
- super().__init__()
- # residual block, interpolate, residual block
- self.factor = factor
- self.conv_in = nn.Conv2d(in_channels,
- mid_channels,
- kernel_size=3,
- stride=1,
- padding=1)
- self.res_block1 = nn.ModuleList([ResnetBlock(in_channels=mid_channels,
- out_channels=mid_channels,
- temb_channels=0,
- dropout=0.0) for _ in range(depth)])
- self.attn = AttnBlock(mid_channels)
- self.res_block2 = nn.ModuleList([ResnetBlock(in_channels=mid_channels,
- out_channels=mid_channels,
- temb_channels=0,
- dropout=0.0) for _ in range(depth)])
-
- self.conv_out = nn.Conv2d(mid_channels,
- out_channels,
- kernel_size=1,
- )
-
- def forward(self, x):
- x = self.conv_in(x)
- for block in self.res_block1:
- x = block(x, None)
- x = torch.nn.functional.interpolate(x, size=(int(round(x.shape[2]*self.factor)), int(round(x.shape[3]*self.factor))))
- x = self.attn(x)
- for block in self.res_block2:
- x = block(x, None)
- x = self.conv_out(x)
- return x
-
-
-class MergedRescaleEncoder(nn.Module):
- def __init__(self, in_channels, ch, resolution, out_ch, num_res_blocks,
- attn_resolutions, dropout=0.0, resamp_with_conv=True,
- ch_mult=(1,2,4,8), rescale_factor=1.0, rescale_module_depth=1):
- super().__init__()
- intermediate_chn = ch * ch_mult[-1]
- self.encoder = Encoder(in_channels=in_channels, num_res_blocks=num_res_blocks, ch=ch, ch_mult=ch_mult,
- z_channels=intermediate_chn, double_z=False, resolution=resolution,
- attn_resolutions=attn_resolutions, dropout=dropout, resamp_with_conv=resamp_with_conv,
- out_ch=None)
- self.rescaler = LatentRescaler(factor=rescale_factor, in_channels=intermediate_chn,
- mid_channels=intermediate_chn, out_channels=out_ch, depth=rescale_module_depth)
-
- def forward(self, x):
- x = self.encoder(x)
- x = self.rescaler(x)
- return x
-
-
-class MergedRescaleDecoder(nn.Module):
- def __init__(self, z_channels, out_ch, resolution, num_res_blocks, attn_resolutions, ch, ch_mult=(1,2,4,8),
- dropout=0.0, resamp_with_conv=True, rescale_factor=1.0, rescale_module_depth=1):
- super().__init__()
- tmp_chn = z_channels*ch_mult[-1]
- self.decoder = Decoder(out_ch=out_ch, z_channels=tmp_chn, attn_resolutions=attn_resolutions, dropout=dropout,
- resamp_with_conv=resamp_with_conv, in_channels=None, num_res_blocks=num_res_blocks,
- ch_mult=ch_mult, resolution=resolution, ch=ch)
- self.rescaler = LatentRescaler(factor=rescale_factor, in_channels=z_channels, mid_channels=tmp_chn,
- out_channels=tmp_chn, depth=rescale_module_depth)
-
- def forward(self, x):
- x = self.rescaler(x)
- x = self.decoder(x)
- return x
-
-
-class Upsampler(nn.Module):
- def __init__(self, in_size, out_size, in_channels, out_channels, ch_mult=2):
- super().__init__()
- assert out_size >= in_size
- num_blocks = int(np.log2(out_size//in_size))+1
- factor_up = 1.+ (out_size % in_size)
- print(f"Building {self.__class__.__name__} with in_size: {in_size} --> out_size {out_size} and factor {factor_up}")
- self.rescaler = LatentRescaler(factor=factor_up, in_channels=in_channels, mid_channels=2*in_channels,
- out_channels=in_channels)
- self.decoder = Decoder(out_ch=out_channels, resolution=out_size, z_channels=in_channels, num_res_blocks=2,
- attn_resolutions=[], in_channels=None, ch=in_channels,
- ch_mult=[ch_mult for _ in range(num_blocks)])
-
- def forward(self, x):
- x = self.rescaler(x)
- x = self.decoder(x)
- return x
-
-
-class Resize(nn.Module):
- def __init__(self, in_channels=None, learned=False, mode="bilinear"):
- super().__init__()
- self.with_conv = learned
- self.mode = mode
- if self.with_conv:
-            print(f"Note: {self.__class__.__name__} uses learned downsampling and will ignore the fixed {mode} mode")
- raise NotImplementedError()
- assert in_channels is not None
- # no asymmetric padding in torch conv, must do it ourselves
- self.conv = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=4,
- stride=2,
- padding=1)
-
- def forward(self, x, scale_factor=1.0):
- if scale_factor==1.0:
- return x
- else:
- x = torch.nn.functional.interpolate(x, mode=self.mode, align_corners=False, scale_factor=scale_factor)
- return x
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/upernet_uniformer.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/upernet_uniformer.py
deleted file mode 100644
index 41aa4db809dc6e2c508e98051f61807d07477903..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/upernet_uniformer.py
+++ /dev/null
@@ -1,43 +0,0 @@
-# model settings
-norm_cfg = dict(type='BN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained=None,
- backbone=dict(
- type='UniFormer',
- embed_dim=[64, 128, 320, 512],
- layers=[3, 4, 8, 3],
- head_dim=64,
- mlp_ratio=4.,
- qkv_bias=True,
- drop_rate=0.,
- attn_drop_rate=0.,
- drop_path_rate=0.1),
- decode_head=dict(
- type='UPerHead',
- in_channels=[64, 128, 320, 512],
- in_index=[0, 1, 2, 3],
- pool_scales=(1, 2, 3, 6),
- channels=512,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=320,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
\ No newline at end of file
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/hooks/evaluation.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/hooks/evaluation.py
deleted file mode 100644
index 4d00999ce5665c53bded8de9e084943eee2d230d..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/hooks/evaluation.py
+++ /dev/null
@@ -1,509 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os.path as osp
-import warnings
-from math import inf
-
-import torch.distributed as dist
-from torch.nn.modules.batchnorm import _BatchNorm
-from torch.utils.data import DataLoader
-
-from annotator.uniformer.mmcv.fileio import FileClient
-from annotator.uniformer.mmcv.utils import is_seq_of
-from .hook import Hook
-from .logger import LoggerHook
-
-
-class EvalHook(Hook):
- """Non-Distributed evaluation hook.
-
- This hook will regularly perform evaluation in a given interval when
- performing in non-distributed environment.
-
- Args:
- dataloader (DataLoader): A PyTorch dataloader, whose dataset has
- implemented ``evaluate`` function.
- start (int | None, optional): Evaluation starting epoch. It enables
- evaluation before the training starts if ``start`` <= the resuming
- epoch. If None, whether to evaluate is merely decided by
- ``interval``. Default: None.
- interval (int): Evaluation interval. Default: 1.
-        by_epoch (bool): Determines whether to perform evaluation by epoch or by iteration.
- If set to True, it will perform by epoch. Otherwise, by iteration.
- Default: True.
- save_best (str, optional): If a metric is specified, it would measure
- the best checkpoint during evaluation. The information about best
- checkpoint would be saved in ``runner.meta['hook_msgs']`` to keep
- best score value and best checkpoint path, which will be also
- loaded when resume checkpoint. Options are the evaluation metrics
- on the test dataset. e.g., ``bbox_mAP``, ``segm_mAP`` for bbox
- detection and instance segmentation. ``AR@100`` for proposal
- recall. If ``save_best`` is ``auto``, the first key of the returned
- ``OrderedDict`` result will be used. Default: None.
- rule (str | None, optional): Comparison rule for best score. If set to
- None, it will infer a reasonable rule. Keys such as 'acc', 'top'
- .etc will be inferred by 'greater' rule. Keys contain 'loss' will
- be inferred by 'less' rule. Options are 'greater', 'less', None.
- Default: None.
- test_fn (callable, optional): test a model with samples from a
- dataloader, and return the test results. If ``None``, the default
- test function ``mmcv.engine.single_gpu_test`` will be used.
- (default: ``None``)
- greater_keys (List[str] | None, optional): Metric keys that will be
- inferred by 'greater' comparison rule. If ``None``,
- _default_greater_keys will be used. (default: ``None``)
- less_keys (List[str] | None, optional): Metric keys that will be
- inferred by 'less' comparison rule. If ``None``, _default_less_keys
- will be used. (default: ``None``)
- out_dir (str, optional): The root directory to save checkpoints. If not
- specified, `runner.work_dir` will be used by default. If specified,
- the `out_dir` will be the concatenation of `out_dir` and the last
- level directory of `runner.work_dir`.
- `New in version 1.3.16.`
- file_client_args (dict): Arguments to instantiate a FileClient.
- See :class:`mmcv.fileio.FileClient` for details. Default: None.
- `New in version 1.3.16.`
- **eval_kwargs: Evaluation arguments fed into the evaluate function of
- the dataset.
-
- Notes:
- If new arguments are added for EvalHook, tools/test.py,
- tools/eval_metric.py may be affected.
- """
-
- # Since the key for determine greater or less is related to the downstream
- # tasks, downstream repos may need to overwrite the following inner
- # variable accordingly.
-
- rule_map = {'greater': lambda x, y: x > y, 'less': lambda x, y: x < y}
- init_value_map = {'greater': -inf, 'less': inf}
- _default_greater_keys = [
- 'acc', 'top', 'AR@', 'auc', 'precision', 'mAP', 'mDice', 'mIoU',
- 'mAcc', 'aAcc'
- ]
- _default_less_keys = ['loss']
-
- def __init__(self,
- dataloader,
- start=None,
- interval=1,
- by_epoch=True,
- save_best=None,
- rule=None,
- test_fn=None,
- greater_keys=None,
- less_keys=None,
- out_dir=None,
- file_client_args=None,
- **eval_kwargs):
- if not isinstance(dataloader, DataLoader):
- raise TypeError(f'dataloader must be a pytorch DataLoader, '
- f'but got {type(dataloader)}')
-
- if interval <= 0:
- raise ValueError(f'interval must be a positive number, '
- f'but got {interval}')
-
- assert isinstance(by_epoch, bool), '``by_epoch`` should be a boolean'
-
- if start is not None and start < 0:
- raise ValueError(f'The evaluation start epoch {start} is smaller '
- f'than 0')
-
- self.dataloader = dataloader
- self.interval = interval
- self.start = start
- self.by_epoch = by_epoch
-
- assert isinstance(save_best, str) or save_best is None, \
- '""save_best"" should be a str or None ' \
- f'rather than {type(save_best)}'
- self.save_best = save_best
- self.eval_kwargs = eval_kwargs
- self.initial_flag = True
-
- if test_fn is None:
- from annotator.uniformer.mmcv.engine import single_gpu_test
- self.test_fn = single_gpu_test
- else:
- self.test_fn = test_fn
-
- if greater_keys is None:
- self.greater_keys = self._default_greater_keys
- else:
- if not isinstance(greater_keys, (list, tuple)):
- greater_keys = (greater_keys, )
- assert is_seq_of(greater_keys, str)
- self.greater_keys = greater_keys
-
- if less_keys is None:
- self.less_keys = self._default_less_keys
- else:
- if not isinstance(less_keys, (list, tuple)):
- less_keys = (less_keys, )
- assert is_seq_of(less_keys, str)
- self.less_keys = less_keys
-
- if self.save_best is not None:
- self.best_ckpt_path = None
- self._init_rule(rule, self.save_best)
-
- self.out_dir = out_dir
- self.file_client_args = file_client_args
-
- def _init_rule(self, rule, key_indicator):
- """Initialize rule, key_indicator, comparison_func, and best score.
-
- Here is the rule to determine which rule is used for key indicator
- when the rule is not specific (note that the key indicator matching
- is case-insensitive):
- 1. If the key indicator is in ``self.greater_keys``, the rule will be
- specified as 'greater'.
- 2. Or if the key indicator is in ``self.less_keys``, the rule will be
- specified as 'less'.
- 3. Or if the key indicator is equal to the substring in any one item
- in ``self.greater_keys``, the rule will be specified as 'greater'.
- 4. Or if the key indicator is equal to the substring in any one item
- in ``self.less_keys``, the rule will be specified as 'less'.
-
- Args:
- rule (str | None): Comparison rule for best score.
- key_indicator (str | None): Key indicator to determine the
- comparison rule.
- """
- if rule not in self.rule_map and rule is not None:
- raise KeyError(f'rule must be greater, less or None, '
- f'but got {rule}.')
-
- if rule is None:
- if key_indicator != 'auto':
- # `_lc` here means we use the lower case of keys for
- # case-insensitive matching
- key_indicator_lc = key_indicator.lower()
- greater_keys = [key.lower() for key in self.greater_keys]
- less_keys = [key.lower() for key in self.less_keys]
-
- if key_indicator_lc in greater_keys:
- rule = 'greater'
- elif key_indicator_lc in less_keys:
- rule = 'less'
- elif any(key in key_indicator_lc for key in greater_keys):
- rule = 'greater'
- elif any(key in key_indicator_lc for key in less_keys):
- rule = 'less'
- else:
- raise ValueError(f'Cannot infer the rule for key '
- f'{key_indicator}, thus a specific rule '
- f'must be specified.')
- self.rule = rule
- self.key_indicator = key_indicator
- if self.rule is not None:
- self.compare_func = self.rule_map[self.rule]
-
- def before_run(self, runner):
- if not self.out_dir:
- self.out_dir = runner.work_dir
-
- self.file_client = FileClient.infer_client(self.file_client_args,
- self.out_dir)
-
- # if `self.out_dir` is not equal to `runner.work_dir`, it means that
- # `self.out_dir` is set so the final `self.out_dir` is the
- # concatenation of `self.out_dir` and the last level directory of
- # `runner.work_dir`
- if self.out_dir != runner.work_dir:
- basename = osp.basename(runner.work_dir.rstrip(osp.sep))
- self.out_dir = self.file_client.join_path(self.out_dir, basename)
- runner.logger.info(
- (f'The best checkpoint will be saved to {self.out_dir} by '
- f'{self.file_client.name}'))
-
- if self.save_best is not None:
- if runner.meta is None:
- warnings.warn('runner.meta is None. Creating an empty one.')
- runner.meta = dict()
- runner.meta.setdefault('hook_msgs', dict())
- self.best_ckpt_path = runner.meta['hook_msgs'].get(
- 'best_ckpt', None)
-
- def before_train_iter(self, runner):
- """Evaluate the model only at the start of training by iteration."""
- if self.by_epoch or not self.initial_flag:
- return
- if self.start is not None and runner.iter >= self.start:
- self.after_train_iter(runner)
- self.initial_flag = False
-
- def before_train_epoch(self, runner):
- """Evaluate the model only at the start of training by epoch."""
- if not (self.by_epoch and self.initial_flag):
- return
- if self.start is not None and runner.epoch >= self.start:
- self.after_train_epoch(runner)
- self.initial_flag = False
-
- def after_train_iter(self, runner):
- """Called after every training iter to evaluate the results."""
- if not self.by_epoch and self._should_evaluate(runner):
- # Because the priority of EvalHook is higher than LoggerHook, the
- # training log and the evaluating log are mixed. Therefore,
- # we need to dump the training log and clear it before evaluating
- # log is generated. In addition, this problem will only appear in
- # `IterBasedRunner` whose `self.by_epoch` is False, because
- # `EpochBasedRunner` whose `self.by_epoch` is True calls
- # `_do_evaluate` in `after_train_epoch` stage, and at this stage
- # the training log has been printed, so it will not cause any
- # problem. more details at
- # https://github.com/open-mmlab/mmsegmentation/issues/694
- for hook in runner._hooks:
- if isinstance(hook, LoggerHook):
- hook.after_train_iter(runner)
- runner.log_buffer.clear()
-
- self._do_evaluate(runner)
-
- def after_train_epoch(self, runner):
- """Called after every training epoch to evaluate the results."""
- if self.by_epoch and self._should_evaluate(runner):
- self._do_evaluate(runner)
-
- def _do_evaluate(self, runner):
- """perform evaluation and save ckpt."""
- results = self.test_fn(runner.model, self.dataloader)
- runner.log_buffer.output['eval_iter_num'] = len(self.dataloader)
- key_score = self.evaluate(runner, results)
- # the key_score may be `None` so it needs to skip the action to save
- # the best checkpoint
- if self.save_best and key_score:
- self._save_ckpt(runner, key_score)
-
- def _should_evaluate(self, runner):
- """Judge whether to perform evaluation.
-
- Here is the rule to judge whether to perform evaluation:
- 1. It will not perform evaluation during the epoch/iteration interval,
- which is determined by ``self.interval``.
- 2. It will not perform evaluation if the start time is larger than
- current time.
- 3. It will not perform evaluation when current time is larger than
- the start time but during epoch/iteration interval.
-
- Returns:
- bool: The flag indicating whether to perform evaluation.
- """
- if self.by_epoch:
- current = runner.epoch
- check_time = self.every_n_epochs
- else:
- current = runner.iter
- check_time = self.every_n_iters
-
- if self.start is None:
- if not check_time(runner, self.interval):
- # No evaluation during the interval.
- return False
- elif (current + 1) < self.start:
- # No evaluation if start is larger than the current time.
- return False
- else:
- # Evaluation only at epochs/iters 3, 5, 7...
- # if start==3 and interval==2
- if (current + 1 - self.start) % self.interval:
- return False
- return True
-
- def _save_ckpt(self, runner, key_score):
- """Save the best checkpoint.
-
- It will compare the score according to the compare function, write
- related information (best score, best checkpoint path) and save the
- best checkpoint into ``work_dir``.
- """
- if self.by_epoch:
- current = f'epoch_{runner.epoch + 1}'
- cur_type, cur_time = 'epoch', runner.epoch + 1
- else:
- current = f'iter_{runner.iter + 1}'
- cur_type, cur_time = 'iter', runner.iter + 1
-
- best_score = runner.meta['hook_msgs'].get(
- 'best_score', self.init_value_map[self.rule])
- if self.compare_func(key_score, best_score):
- best_score = key_score
- runner.meta['hook_msgs']['best_score'] = best_score
-
- if self.best_ckpt_path and self.file_client.isfile(
- self.best_ckpt_path):
- self.file_client.remove(self.best_ckpt_path)
- runner.logger.info(
- (f'The previous best checkpoint {self.best_ckpt_path} was '
- 'removed'))
-
- best_ckpt_name = f'best_{self.key_indicator}_{current}.pth'
- self.best_ckpt_path = self.file_client.join_path(
- self.out_dir, best_ckpt_name)
- runner.meta['hook_msgs']['best_ckpt'] = self.best_ckpt_path
-
- runner.save_checkpoint(
- self.out_dir, best_ckpt_name, create_symlink=False)
- runner.logger.info(
- f'Now best checkpoint is saved as {best_ckpt_name}.')
- runner.logger.info(
- f'Best {self.key_indicator} is {best_score:0.4f} '
- f'at {cur_time} {cur_type}.')
-
- def evaluate(self, runner, results):
- """Evaluate the results.
-
- Args:
- runner (:obj:`mmcv.Runner`): The underlined training runner.
- results (list): Output results.
- """
- eval_res = self.dataloader.dataset.evaluate(
- results, logger=runner.logger, **self.eval_kwargs)
-
- for name, val in eval_res.items():
- runner.log_buffer.output[name] = val
- runner.log_buffer.ready = True
-
- if self.save_best is not None:
-            # If the performance of the model is poor, the `eval_res` may be an
- # empty dict and it will raise exception when `self.save_best` is
- # not None. More details at
- # https://github.com/open-mmlab/mmdetection/issues/6265.
- if not eval_res:
- warnings.warn(
- 'Since `eval_res` is an empty dict, the behavior to save '
- 'the best checkpoint will be skipped in this evaluation.')
- return None
-
- if self.key_indicator == 'auto':
- # infer from eval_results
- self._init_rule(self.rule, list(eval_res.keys())[0])
- return eval_res[self.key_indicator]
-
- return None
-
-
-class DistEvalHook(EvalHook):
- """Distributed evaluation hook.
-
- This hook will regularly perform evaluation in a given interval when
- performing in distributed environment.
-
- Args:
- dataloader (DataLoader): A PyTorch dataloader, whose dataset has
- implemented ``evaluate`` function.
- start (int | None, optional): Evaluation starting epoch. It enables
- evaluation before the training starts if ``start`` <= the resuming
- epoch. If None, whether to evaluate is merely decided by
- ``interval``. Default: None.
- interval (int): Evaluation interval. Default: 1.
- by_epoch (bool): Determines whether to perform evaluation by epoch or by
- iteration. If set to True, evaluation is performed by epoch; otherwise,
- by iteration. Default: True.
- save_best (str, optional): If a metric is specified, it would measure
- the best checkpoint during evaluation. The information about best
- checkpoint would be saved in ``runner.meta['hook_msgs']`` to keep
- best score value and best checkpoint path, which will be also
- loaded when resume checkpoint. Options are the evaluation metrics
- on the test dataset. e.g., ``bbox_mAP``, ``segm_mAP`` for bbox
- detection and instance segmentation. ``AR@100`` for proposal
- recall. If ``save_best`` is ``auto``, the first key of the returned
- ``OrderedDict`` result will be used. Default: None.
- rule (str | None, optional): Comparison rule for best score. If set to
- None, it will infer a reasonable rule. Keys such as 'acc', 'top',
- etc. will be inferred by the 'greater' rule. Keys containing 'loss' will
- be inferred by the 'less' rule. Options are 'greater', 'less', None.
- Default: None.
- test_fn (callable, optional): test a model with samples from a
- dataloader in a multi-gpu manner, and return the test results. If
- ``None``, the default test function ``mmcv.engine.multi_gpu_test``
- will be used. (default: ``None``)
- tmpdir (str | None): Temporary directory to save the results of all
- processes. Default: None.
- gpu_collect (bool): Whether to use gpu or cpu to collect results.
- Default: False.
- broadcast_bn_buffer (bool): Whether to broadcast the
- buffer(running_mean and running_var) of rank 0 to other rank
- before evaluation. Default: True.
- out_dir (str, optional): The root directory to save checkpoints. If not
- specified, `runner.work_dir` will be used by default. If specified,
- the `out_dir` will be the concatenation of `out_dir` and the last
- level directory of `runner.work_dir`.
- file_client_args (dict): Arguments to instantiate a FileClient.
- See :class:`mmcv.fileio.FileClient` for details. Default: None.
- **eval_kwargs: Evaluation arguments fed into the evaluate function of
- the dataset.
- """
-
- def __init__(self,
- dataloader,
- start=None,
- interval=1,
- by_epoch=True,
- save_best=None,
- rule=None,
- test_fn=None,
- greater_keys=None,
- less_keys=None,
- broadcast_bn_buffer=True,
- tmpdir=None,
- gpu_collect=False,
- out_dir=None,
- file_client_args=None,
- **eval_kwargs):
-
- if test_fn is None:
- from annotator.uniformer.mmcv.engine import multi_gpu_test
- test_fn = multi_gpu_test
-
- super().__init__(
- dataloader,
- start=start,
- interval=interval,
- by_epoch=by_epoch,
- save_best=save_best,
- rule=rule,
- test_fn=test_fn,
- greater_keys=greater_keys,
- less_keys=less_keys,
- out_dir=out_dir,
- file_client_args=file_client_args,
- **eval_kwargs)
-
- self.broadcast_bn_buffer = broadcast_bn_buffer
- self.tmpdir = tmpdir
- self.gpu_collect = gpu_collect
-
- def _do_evaluate(self, runner):
- """perform evaluation and save ckpt."""
- # Synchronization of BatchNorm's buffer (running_mean
- # and running_var) is not supported in the DDP of pytorch,
- # which may cause the inconsistent performance of models in
- # different ranks, so we broadcast BatchNorm's buffers
- # of rank 0 to other ranks to avoid this.
- if self.broadcast_bn_buffer:
- model = runner.model
- for name, module in model.named_modules():
- if isinstance(module,
- _BatchNorm) and module.track_running_stats:
- dist.broadcast(module.running_var, 0)
- dist.broadcast(module.running_mean, 0)
-
- tmpdir = self.tmpdir
- if tmpdir is None:
- tmpdir = osp.join(runner.work_dir, '.eval_hook')
-
- results = self.test_fn(
- runner.model,
- self.dataloader,
- tmpdir=tmpdir,
- gpu_collect=self.gpu_collect)
- if runner.rank == 0:
- print('\n')
- runner.log_buffer.output['eval_iter_num'] = len(self.dataloader)
- key_score = self.evaluate(runner, results)
- # the key_score may be `None` so it needs to skip the action to
- # save the best checkpoint
- if self.save_best and key_score:
- self._save_ckpt(runner, key_score)
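The scheduling rule implemented by `_should_evaluate` above is easy to check in isolation. Below is a small standalone sketch (a hypothetical helper, not part of mmcv) that reproduces the same arithmetic and confirms that with `start=3` and `interval=2` evaluation fires after epochs 3, 5, 7, and so on.

```python
# Standalone sketch of the scheduling rule in `_should_evaluate`
# (hypothetical helper, not part of mmcv). Epochs are 0-indexed, as in the runner.
def should_evaluate(current_epoch, start=None, interval=1):
    if start is None:
        # No start epoch given: evaluate every `interval` epochs.
        return (current_epoch + 1) % interval == 0
    if (current_epoch + 1) < start:
        # Never evaluate before the start epoch.
        return False
    # Evaluate at start, start + interval, start + 2 * interval, ...
    return (current_epoch + 1 - start) % interval == 0

fired = [e + 1 for e in range(10) if should_evaluate(e, start=3, interval=2)]
print(fired)  # [3, 5, 7, 9]
```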
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/core/evaluation/metrics.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/core/evaluation/metrics.py
deleted file mode 100644
index 16c7dd47cadd53cf1caaa194e28a343f2aacc599..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/core/evaluation/metrics.py
+++ /dev/null
@@ -1,326 +0,0 @@
-from collections import OrderedDict
-
-import annotator.uniformer.mmcv as mmcv
-import numpy as np
-import torch
-
-
-def f_score(precision, recall, beta=1):
- """calcuate the f-score value.
-
- Args:
- precision (float | torch.Tensor): The precision value.
- recall (float | torch.Tensor): The recall value.
- beta (int): Determines the weight of recall in the combined score.
- Default: False.
-
- Returns:
- [torch.tensor]: The f-score value.
- """
- score = (1 + beta**2) * (precision * recall) / (
- (beta**2 * precision) + recall)
- return score
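A quick numeric check of the formula above (values chosen purely for illustration, assuming `f_score` from this module is importable):

```python
# beta=1 gives the harmonic mean of precision and recall;
# beta>1 weights recall more heavily.
print(f_score(0.5, 1.0, beta=1))  # ~0.6667
print(f_score(0.5, 1.0, beta=2))  # ~0.8333
```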
-
-
-def intersect_and_union(pred_label,
- label,
- num_classes,
- ignore_index,
- label_map=dict(),
- reduce_zero_label=False):
- """Calculate intersection and Union.
-
- Args:
- pred_label (ndarray | str): Prediction segmentation map
- or predict result filename.
- label (ndarray | str): Ground truth segmentation map
- or label filename.
- num_classes (int): Number of categories.
- ignore_index (int): Index that will be ignored in evaluation.
- label_map (dict): Mapping old labels to new labels. The parameter will
- work only when label is str. Default: dict().
- reduce_zero_label (bool): Whether to ignore the zero label. The parameter will
- work only when label is str. Default: False.
-
- Returns:
- torch.Tensor: The intersection of prediction and ground truth
- histogram on all classes.
- torch.Tensor: The union of prediction and ground truth histogram on
- all classes.
- torch.Tensor: The prediction histogram on all classes.
- torch.Tensor: The ground truth histogram on all classes.
- """
-
- if isinstance(pred_label, str):
- pred_label = torch.from_numpy(np.load(pred_label))
- else:
- pred_label = torch.from_numpy(pred_label)
-
- if isinstance(label, str):
- label = torch.from_numpy(
- mmcv.imread(label, flag='unchanged', backend='pillow'))
- else:
- label = torch.from_numpy(label)
-
- if label_map is not None:
- for old_id, new_id in label_map.items():
- label[label == old_id] = new_id
- if reduce_zero_label:
- label[label == 0] = 255
- label = label - 1
- label[label == 254] = 255
-
- mask = (label != ignore_index)
- pred_label = pred_label[mask]
- label = label[mask]
-
- intersect = pred_label[pred_label == label]
- area_intersect = torch.histc(
- intersect.float(), bins=(num_classes), min=0, max=num_classes - 1)
- area_pred_label = torch.histc(
- pred_label.float(), bins=(num_classes), min=0, max=num_classes - 1)
- area_label = torch.histc(
- label.float(), bins=(num_classes), min=0, max=num_classes - 1)
- area_union = area_pred_label + area_label - area_intersect
- return area_intersect, area_union, area_pred_label, area_label
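A toy example of the function above (the arrays are illustrative and assume this module is importable): one of the four pixels carries the ignore label, and each class ends up with one correct pixel out of two.

```python
import numpy as np

pred = np.array([[0, 0], [1, 1]])   # predicted labels
gt = np.array([[0, 1], [1, 255]])   # ground truth, one ignored pixel
inter, union, pred_area, gt_area = intersect_and_union(
    pred, gt, num_classes=2, ignore_index=255)
print(inter)          # tensor([1., 1.]) -- one correct pixel per class
print(union)          # tensor([2., 2.])
print(inter / union)  # per-class IoU: tensor([0.5000, 0.5000])
```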
-
-
-def total_intersect_and_union(results,
- gt_seg_maps,
- num_classes,
- ignore_index,
- label_map=dict(),
- reduce_zero_label=False):
- """Calculate Total Intersection and Union.
-
- Args:
- results (list[ndarray] | list[str]): List of prediction segmentation
- maps or list of prediction result filenames.
- gt_seg_maps (list[ndarray] | list[str]): list of ground truth
- segmentation maps or list of label filenames.
- num_classes (int): Number of categories.
- ignore_index (int): Index that will be ignored in evaluation.
- label_map (dict): Mapping old labels to new labels. Default: dict().
- reduce_zero_label (bool): Whether to ignore the zero label. Default: False.
-
- Returns:
- ndarray: The intersection of prediction and ground truth histogram
- on all classes.
- ndarray: The union of prediction and ground truth histogram on all
- classes.
- ndarray: The prediction histogram on all classes.
- ndarray: The ground truth histogram on all classes.
- """
- num_imgs = len(results)
- assert len(gt_seg_maps) == num_imgs
- total_area_intersect = torch.zeros((num_classes, ), dtype=torch.float64)
- total_area_union = torch.zeros((num_classes, ), dtype=torch.float64)
- total_area_pred_label = torch.zeros((num_classes, ), dtype=torch.float64)
- total_area_label = torch.zeros((num_classes, ), dtype=torch.float64)
- for i in range(num_imgs):
- area_intersect, area_union, area_pred_label, area_label = \
- intersect_and_union(
- results[i], gt_seg_maps[i], num_classes, ignore_index,
- label_map, reduce_zero_label)
- total_area_intersect += area_intersect
- total_area_union += area_union
- total_area_pred_label += area_pred_label
- total_area_label += area_label
- return total_area_intersect, total_area_union, total_area_pred_label, \
- total_area_label
-
-
-def mean_iou(results,
- gt_seg_maps,
- num_classes,
- ignore_index,
- nan_to_num=None,
- label_map=dict(),
- reduce_zero_label=False):
- """Calculate Mean Intersection and Union (mIoU)
-
- Args:
- results (list[ndarray] | list[str]): List of prediction segmentation
- maps or list of prediction result filenames.
- gt_seg_maps (list[ndarray] | list[str]): list of ground truth
- segmentation maps or list of label filenames.
- num_classes (int): Number of categories.
- ignore_index (int): Index that will be ignored in evaluation.
- nan_to_num (int, optional): If specified, NaN values will be replaced
- by the numbers defined by the user. Default: None.
- label_map (dict): Mapping old labels to new labels. Default: dict().
- reduce_zero_label (bool): Whether to ignore the zero label. Default: False.
-
- Returns:
- dict[str, float | ndarray]:
- float: Overall accuracy on all images.
- ndarray: Per category accuracy, shape (num_classes, ).
- ndarray: Per category IoU, shape (num_classes, ).
- """
- iou_result = eval_metrics(
- results=results,
- gt_seg_maps=gt_seg_maps,
- num_classes=num_classes,
- ignore_index=ignore_index,
- metrics=['mIoU'],
- nan_to_num=nan_to_num,
- label_map=label_map,
- reduce_zero_label=reduce_zero_label)
- return iou_result
-
-
-def mean_dice(results,
- gt_seg_maps,
- num_classes,
- ignore_index,
- nan_to_num=None,
- label_map=dict(),
- reduce_zero_label=False):
- """Calculate Mean Dice (mDice)
-
- Args:
- results (list[ndarray] | list[str]): List of prediction segmentation
- maps or list of prediction result filenames.
- gt_seg_maps (list[ndarray] | list[str]): list of ground truth
- segmentation maps or list of label filenames.
- num_classes (int): Number of categories.
- ignore_index (int): Index that will be ignored in evaluation.
- nan_to_num (int, optional): If specified, NaN values will be replaced
- by the numbers defined by the user. Default: None.
- label_map (dict): Mapping old labels to new labels. Default: dict().
- reduce_zero_label (bool): Whether to ignore the zero label. Default: False.
-
- Returns:
- dict[str, float | ndarray]: Default metrics.
- float: Overall accuracy on all images.
- ndarray: Per category accuracy, shape (num_classes, ).
- ndarray: Per category dice, shape (num_classes, ).
- """
-
- dice_result = eval_metrics(
- results=results,
- gt_seg_maps=gt_seg_maps,
- num_classes=num_classes,
- ignore_index=ignore_index,
- metrics=['mDice'],
- nan_to_num=nan_to_num,
- label_map=label_map,
- reduce_zero_label=reduce_zero_label)
- return dice_result
-
-
-def mean_fscore(results,
- gt_seg_maps,
- num_classes,
- ignore_index,
- nan_to_num=None,
- label_map=dict(),
- reduce_zero_label=False,
- beta=1):
- """Calculate Mean Intersection and Union (mIoU)
-
- Args:
- results (list[ndarray] | list[str]): List of prediction segmentation
- maps or list of prediction result filenames.
- gt_seg_maps (list[ndarray] | list[str]): list of ground truth
- segmentation maps or list of label filenames.
- num_classes (int): Number of categories.
- ignore_index (int): Index that will be ignored in evaluation.
- nan_to_num (int, optional): If specified, NaN values will be replaced
- by the numbers defined by the user. Default: None.
- label_map (dict): Mapping old labels to new labels. Default: dict().
- reduce_zero_label (bool): Whether to ignore the zero label. Default: False.
- beta (int): Determines the weight of recall in the combined score.
- Default: 1.
-
-
- Returns:
- dict[str, float | ndarray]: Default metrics.
- float: Overall accuracy on all images.
- ndarray: Per category recall, shape (num_classes, ).
- ndarray: Per category precision, shape (num_classes, ).
- ndarray: Per category f-score, shape (num_classes, ).
- """
- fscore_result = eval_metrics(
- results=results,
- gt_seg_maps=gt_seg_maps,
- num_classes=num_classes,
- ignore_index=ignore_index,
- metrics=['mFscore'],
- nan_to_num=nan_to_num,
- label_map=label_map,
- reduce_zero_label=reduce_zero_label,
- beta=beta)
- return fscore_result
-
-
-def eval_metrics(results,
- gt_seg_maps,
- num_classes,
- ignore_index,
- metrics=['mIoU'],
- nan_to_num=None,
- label_map=dict(),
- reduce_zero_label=False,
- beta=1):
- """Calculate evaluation metrics
- Args:
- results (list[ndarray] | list[str]): List of prediction segmentation
- maps or list of prediction result filenames.
- gt_seg_maps (list[ndarray] | list[str]): list of ground truth
- segmentation maps or list of label filenames.
- num_classes (int): Number of categories.
- ignore_index (int): Index that will be ignored in evaluation.
- metrics (list[str] | str): Metrics to be evaluated. Options are 'mIoU',
- 'mDice' and 'mFscore'. Default: ['mIoU'].
- nan_to_num (int, optional): If specified, NaN values will be replaced
- by the numbers defined by the user. Default: None.
- label_map (dict): Mapping old labels to new labels. Default: dict().
- reduce_zero_label (bool): Whether to ignore the zero label. Default: False.
- Returns:
- float: Overall accuracy on all images.
- ndarray: Per category accuracy, shape (num_classes, ).
- ndarray: Per category evaluation metrics, shape (num_classes, ).
- """
- if isinstance(metrics, str):
- metrics = [metrics]
- allowed_metrics = ['mIoU', 'mDice', 'mFscore']
- if not set(metrics).issubset(set(allowed_metrics)):
- raise KeyError('metrics {} is not supported'.format(metrics))
-
- total_area_intersect, total_area_union, total_area_pred_label, \
- total_area_label = total_intersect_and_union(
- results, gt_seg_maps, num_classes, ignore_index, label_map,
- reduce_zero_label)
- all_acc = total_area_intersect.sum() / total_area_label.sum()
- ret_metrics = OrderedDict({'aAcc': all_acc})
- for metric in metrics:
- if metric == 'mIoU':
- iou = total_area_intersect / total_area_union
- acc = total_area_intersect / total_area_label
- ret_metrics['IoU'] = iou
- ret_metrics['Acc'] = acc
- elif metric == 'mDice':
- dice = 2 * total_area_intersect / (
- total_area_pred_label + total_area_label)
- acc = total_area_intersect / total_area_label
- ret_metrics['Dice'] = dice
- ret_metrics['Acc'] = acc
- elif metric == 'mFscore':
- precision = total_area_intersect / total_area_pred_label
- recall = total_area_intersect / total_area_label
- f_value = torch.tensor(
- [f_score(x[0], x[1], beta) for x in zip(precision, recall)])
- ret_metrics['Fscore'] = f_value
- ret_metrics['Precision'] = precision
- ret_metrics['Recall'] = recall
-
- ret_metrics = {
- metric: value.numpy()
- for metric, value in ret_metrics.items()
- }
- if nan_to_num is not None:
- ret_metrics = OrderedDict({
- metric: np.nan_to_num(metric_value, nan=nan_to_num)
- for metric, metric_value in ret_metrics.items()
- })
- return ret_metrics
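As a minimal end-to-end illustration of `eval_metrics` (toy data, not a real benchmark; assumes this module is importable):

```python
import numpy as np

preds = [np.array([[0, 0], [1, 1]]), np.array([[1, 1], [0, 0]])]
gts = [np.array([[0, 1], [1, 1]]), np.array([[1, 1], [0, 1]])]
res = eval_metrics(preds, gts, num_classes=2, ignore_index=255,
                   metrics=['mIoU', 'mFscore'])
print(res['aAcc'])    # overall pixel accuracy (0.75 for this toy data)
print(res['IoU'])     # per-class IoU, e.g. [0.5, 0.667]
print(res['Fscore'])  # per-class F-score
```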
diff --git a/spaces/PBJ/Toxic-Comment-Classification/app.py b/spaces/PBJ/Toxic-Comment-Classification/app.py
deleted file mode 100644
index 2ff01da82a4182fc4f850cc34e66727c71c775de..0000000000000000000000000000000000000000
--- a/spaces/PBJ/Toxic-Comment-Classification/app.py
+++ /dev/null
@@ -1,137 +0,0 @@
-# Importing necessary libraries
-import streamlit as st
-import os
-import numpy as np
-import pandas as pd
-import matplotlib.pyplot as plt
-import re
-
-st.title('Toxic Comment Classification')
-comment = st.text_area("Enter Your Text", "Type Here")
-
-comment_input = []
-comment_input.append(comment)
-test_df = pd.DataFrame()
-test_df['comment_text'] = comment_input
-cols = {'toxic':[0], 'severe_toxic':[0], 'obscene':[0], 'threat':[0], 'insult':[0], 'identity_hate':[0], 'non_toxic': [0]}
-for key in cols.keys():
- test_df[key] = cols[key]
-test_df = test_df.reset_index()
-test_df.drop(columns=["index"], inplace=True)
-
-# Data Cleaning and Preprocessing
-# creating copy of data for data cleaning and preprocessing
-cleaned_data = test_df.copy()
-
-# Removing Hyperlinks from text
-cleaned_data["comment_text"] = cleaned_data["comment_text"].map(lambda x: re.sub(r"https?://\S+|www\.\S+","",x) )
-
-# Removing emojis from text
-cleaned_data["comment_text"] = cleaned_data["comment_text"].map(lambda x: re.sub("["
- u"\U0001F600-\U0001F64F"
- u"\U0001F300-\U0001F5FF"
- u"\U0001F680-\U0001F6FF"
- u"\U0001F1E0-\U0001F1FF"
- u"\U00002702-\U000027B0"
- u"\U000024C2-\U0001F251"
- "]+","", x, flags=re.UNICODE))
-
-# Removing IP addresses from text
-cleaned_data["comment_text"] = cleaned_data["comment_text"].map(lambda x: re.sub(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}","",x))
-
-# Removing html tags from text
-cleaned_data["comment_text"] = cleaned_data["comment_text"].map(lambda x: re.sub(r"<.*?>","",x))
-
-# There are some comments which contain double quoted words like --> ""words"" we will convert these to --> "words"
-cleaned_data["comment_text"] = cleaned_data["comment_text"].map(lambda x: re.sub(r"\"\"", "\"",x)) # replacing "" with "
-cleaned_data["comment_text"] = cleaned_data["comment_text"].map(lambda x: re.sub(r"^\"", "",x)) # removing quotation from start and the end of the string
-cleaned_data["comment_text"] = cleaned_data["comment_text"].map(lambda x: re.sub(r"\"$", "",x))
-
-# Removing Punctuation / Special characters (;:'".?@!%&*+) which appears more than twice in the text
-cleaned_data["comment_text"] = cleaned_data["comment_text"].map(lambda x: re.sub(r"[^a-zA-Z0-9\s][^a-zA-Z0-9\s]+", " ",x))
-
-# Removing Special characters
-cleaned_data["comment_text"] = cleaned_data["comment_text"].map(lambda x: re.sub(r"[^a-zA-Z0-9\s\"\',:;?!.()]", " ",x))
-
-# Removing extra spaces in text
-cleaned_data["comment_text"] = cleaned_data["comment_text"].map(lambda x: re.sub(r"\s\s+", " ",x))
-
-Final_data = cleaned_data.copy()
-
-# Model Building
-from transformers import DistilBertTokenizer
-import torch
-import torch.nn as nn
-from torch.utils.data import DataLoader, Dataset
-
-# Using Pretrained DistilBertTokenizer
-tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
-
-# Creating Dataset class for Toxic comments and Labels
-class Toxic_Dataset(Dataset):
- def __init__(self, Comments_, Labels_):
- self.comments = Comments_.copy()
- self.labels = Labels_.copy()
-
- self.comments["comment_text"] = self.comments["comment_text"].map(lambda x: tokenizer(x, padding="max_length", truncation=True, return_tensors="pt"))
-
- def __len__(self):
- return len(self.labels)
-
- def __getitem__(self, idx):
- comment = self.comments.loc[idx,"comment_text"]
- label = np.array(self.labels.loc[idx,:])
-
- return comment, label
-
-X_test = pd.DataFrame(test_df.iloc[:, 0])
-Y_test = test_df.iloc[:, 1:]
-Test_data = Toxic_Dataset(X_test, Y_test)
-Test_Loader = DataLoader(Test_data, shuffle=False)
-
-# Loading pre-trained weights of DistilBert model for sequence classification
-# and changing the classifier's output to 7 because we have 7 labels to classify.
-# DistilBERT
-
-from transformers import DistilBertForSequenceClassification
-
-Distil_bert = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
-
-Distil_bert.classifier = nn.Sequential(
- nn.Linear(768,7),
- nn.Sigmoid()
- )
-# print(Distil_bert)
-
-# Instantiating the model and loading the weights
-model = Distil_bert
-model.to('cpu')
-model = torch.load('dsbert_toxic_balanced.pt', map_location=torch.device('cpu'))
-
-# Making Predictions
-for comments, labels in Test_Loader:
- labels = labels.to('cpu')
- labels = labels.float()
- masks = comments['attention_mask'].squeeze(1).to('cpu')
- input_ids = comments['input_ids'].squeeze(1).to('cpu')
-
- output = model(input_ids, masks)
- op = output.logits
-
- res = []
- for i in range(7):
- res.append(op[0, i])
- # print(res)
-
-preds = []
-
-for i in range(len(res)):
- preds.append(res[i].tolist())
-
-classes = ['Toxic', 'Severe Toxic', 'Obscene', 'Threat', 'Insult', 'Identity Hate', 'Non Toxic']
-
-if st.button('Classify'):
- for i in range(len(res)):
- st.write(f"{classes[i]} : {round(preds[i], 2)}\n")
- st.success('These are the outputs')
-
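The regex cleaning steps above can be collected into one standalone helper for testing outside Streamlit. This is a sketch under the assumption that the same patterns are applied in the same order (the emoji-range substitution is omitted for brevity), and the sample comment is made up.

```python
import re

def clean_comment(text: str) -> str:
    text = re.sub(r"https?://\S+|www\.\S+", "", text)               # hyperlinks
    text = re.sub(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}", "", text)  # IP addresses
    text = re.sub(r"<.*?>", "", text)                               # HTML tags
    text = re.sub(r"\"\"", "\"", text)                              # "" -> "
    text = re.sub(r"^\"|\"$", "", text)                             # surrounding quotes
    text = re.sub(r"[^a-zA-Z0-9\s][^a-zA-Z0-9\s]+", " ", text)      # repeated punctuation
    text = re.sub(r"[^a-zA-Z0-9\s\"\',:;?!.()]", " ", text)         # other special chars
    text = re.sub(r"\s\s+", " ", text)                              # extra spaces
    return text

print(clean_comment('"Visit www.example.com!!! <b>now</b> from 10.0.0.1"'))
# roughly -> 'Visit now from '
```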
diff --git a/spaces/PIISA/PIISA_Demo/README.md b/spaces/PIISA/PIISA_Demo/README.md
deleted file mode 100644
index 1e7faff3e60e5a82f1defd47443000dc6d218e08..0000000000000000000000000000000000000000
--- a/spaces/PIISA/PIISA_Demo/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: PIISA Demo
-emoji: 👀
-colorFrom: yellow
-colorTo: green
-sdk: gradio
-sdk_version: 3.47.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-27.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-27.go
deleted file mode 100644
index 2491b523a9cea9797490d6b36101e50341c9c03c..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-27.go and /dev/null differ
diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/default_constructor.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/default_constructor.py
deleted file mode 100644
index 3f1f5b44168768dfda3947393a63a6cf9cf50b41..0000000000000000000000000000000000000000
--- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/default_constructor.py
+++ /dev/null
@@ -1,44 +0,0 @@
-from .builder import RUNNER_BUILDERS, RUNNERS
-
-
-@RUNNER_BUILDERS.register_module()
-class DefaultRunnerConstructor:
- """Default constructor for runners.
-
- Customize an existing `Runner` like `EpochBasedRunner` through `RunnerConstructor`.
- For example, we can inject some new properties and functions for `Runner`.
-
- Example:
- >>> from annotator.uniformer.mmcv.runner import RUNNER_BUILDERS, build_runner
- >>> # Define a new RunnerConstructor
- >>> @RUNNER_BUILDERS.register_module()
- >>> class MyRunnerConstructor:
- ... def __init__(self, runner_cfg, default_args=None):
- ... if not isinstance(runner_cfg, dict):
- ... raise TypeError('runner_cfg should be a dict',
- ... f'but got {type(runner_cfg)}')
- ... self.runner_cfg = runner_cfg
- ... self.default_args = default_args
- ...
- ... def __call__(self):
- ... runner = RUNNERS.build(self.runner_cfg,
- ... default_args=self.default_args)
- ... # Add new properties for existing runner
- ... runner.my_name = 'my_runner'
- ... runner.my_function = lambda self: print(self.my_name)
- ... ...
- >>> # build your runner
- >>> runner_cfg = dict(type='EpochBasedRunner', max_epochs=40,
- ... constructor='MyRunnerConstructor')
- >>> runner = build_runner(runner_cfg)
- """
-
- def __init__(self, runner_cfg, default_args=None):
- if not isinstance(runner_cfg, dict):
- raise TypeError('runner_cfg should be a dict',
- f'but got {type(runner_cfg)}')
- self.runner_cfg = runner_cfg
- self.default_args = default_args
-
- def __call__(self):
- return RUNNERS.build(self.runner_cfg, default_args=self.default_args)
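A short hedged sketch of how this constructor is typically driven by `build_runner` (the config values are illustrative; actually building the runner also needs a model, optimizer, logger, etc., so the final call is left commented):

```python
runner_cfg = dict(type='EpochBasedRunner', max_epochs=12)
constructor = DefaultRunnerConstructor(
    runner_cfg, default_args=dict(work_dir='./work_dir'))
# Invoking the constructor simply delegates to the RUNNERS registry:
# runner = constructor()  # == RUNNERS.build(runner_cfg, default_args=...)
```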
diff --git a/spaces/Ragio/endometrial_disease_prediction/app.py b/spaces/Ragio/endometrial_disease_prediction/app.py
deleted file mode 100644
index 4be2a42b05c914b8a825d8bfd03789004f5d6bf7..0000000000000000000000000000000000000000
--- a/spaces/Ragio/endometrial_disease_prediction/app.py
+++ /dev/null
@@ -1,29 +0,0 @@
-import gradio as gr
-import pandas as pd
-import numpy as np
-from pycaret.classification import *
-
-df = pd.read_pickle("endometrium.pkl")
-clf= setup(data=df, target = 'pathology', normalize=True,session_id=2828)
-best = compare_models(n_select=15)
-compare_model_results=pull()
-lda=create_model("lda")
-tuned_lda = tune_model(lda)
-
-# Parameter order matches the gr.Interface inputs below: [model, Age, BMI, Endometrial_Thickness]
-def predict(model, Age, BMI, Endometrial_Thickness):
-
- df = pd.DataFrame.from_dict({"Age": [Age], "BMI": [BMI], "Endometrial Thickness": [Endometrial_Thickness]})
- model_index = list(compare_model_results['Model']).index(model)
- model = best[model_index]
- pred = predict_model(model, df, raw_score=True)
- return {"no": pred["prediction_score_no"][0].astype('float64'),
- "yes": pred["prediction_score_yes"][0].astype('float64')}
-
-description = "Endometrial Disease Prediction Model with Artificial Intelligence "
-title = "Classification of Patients with Endometrial Disease"
-model = gr.inputs.Dropdown(list(compare_model_results['Model']), label="Model")
-Age = gr.inputs.Slider(minimum=10, maximum=100,default=df['Age'].mean(), label = 'Age')
-BMI = gr.inputs.Slider(minimum=10, maximum=30,default=df['BMI'].mean(), label = 'BMI')
-Endometrial_Thickness = gr.inputs.Slider(minimum=1, maximum=100,default=df['Endometrial Thickness'].mean(),label = 'Endometrial Thickness')
-
-gr.Interface(predict,[model,Age, BMI, Endometrial_Thickness], "label",title=title,live=True).launch()
\ No newline at end of file
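Once the PyCaret models above have been trained, the `predict` function can also be exercised directly, outside Gradio. The model name and input values below are illustrative; `'Linear Discriminant Analysis'` is assumed to appear in PyCaret's comparison table.

```python
scores = predict('Linear Discriminant Analysis', 45, 24, 8)  # Age, BMI, thickness
print(scores)  # e.g. {'no': 0.82, 'yes': 0.18} -- values depend on the trained models
```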
diff --git a/spaces/Rahorus/openjourney/app.py b/spaces/Rahorus/openjourney/app.py
deleted file mode 100644
index bea4accb45793c8e748731c184dee0ffaf509dd5..0000000000000000000000000000000000000000
--- a/spaces/Rahorus/openjourney/app.py
+++ /dev/null
@@ -1,8 +0,0 @@
-import gradio as gr
-
-description = """
-
-
- """
-
-gr.Interface.load("models/prompthero/openjourney", description=description).launch()
\ No newline at end of file
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/color.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/color.py
deleted file mode 100644
index 6bca2da922c59151f42354ea92616faa1c6b37be..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/color.py
+++ /dev/null
@@ -1,615 +0,0 @@
-import platform
-import re
-from colorsys import rgb_to_hls
-from enum import IntEnum
-from functools import lru_cache
-from typing import TYPE_CHECKING, NamedTuple, Optional, Tuple
-
-from ._palettes import EIGHT_BIT_PALETTE, STANDARD_PALETTE, WINDOWS_PALETTE
-from .color_triplet import ColorTriplet
-from .repr import Result, rich_repr
-from .terminal_theme import DEFAULT_TERMINAL_THEME
-
-if TYPE_CHECKING: # pragma: no cover
- from .terminal_theme import TerminalTheme
- from .text import Text
-
-
-WINDOWS = platform.system() == "Windows"
-
-
-class ColorSystem(IntEnum):
- """One of the 3 color system supported by terminals."""
-
- STANDARD = 1
- EIGHT_BIT = 2
- TRUECOLOR = 3
- WINDOWS = 4
-
- def __repr__(self) -> str:
- return f"ColorSystem.{self.name}"
-
-
-class ColorType(IntEnum):
- """Type of color stored in Color class."""
-
- DEFAULT = 0
- STANDARD = 1
- EIGHT_BIT = 2
- TRUECOLOR = 3
- WINDOWS = 4
-
- def __repr__(self) -> str:
- return f"ColorType.{self.name}"
-
-
-ANSI_COLOR_NAMES = {
- "black": 0,
- "red": 1,
- "green": 2,
- "yellow": 3,
- "blue": 4,
- "magenta": 5,
- "cyan": 6,
- "white": 7,
- "bright_black": 8,
- "bright_red": 9,
- "bright_green": 10,
- "bright_yellow": 11,
- "bright_blue": 12,
- "bright_magenta": 13,
- "bright_cyan": 14,
- "bright_white": 15,
- "grey0": 16,
- "gray0": 16,
- "navy_blue": 17,
- "dark_blue": 18,
- "blue3": 20,
- "blue1": 21,
- "dark_green": 22,
- "deep_sky_blue4": 25,
- "dodger_blue3": 26,
- "dodger_blue2": 27,
- "green4": 28,
- "spring_green4": 29,
- "turquoise4": 30,
- "deep_sky_blue3": 32,
- "dodger_blue1": 33,
- "green3": 40,
- "spring_green3": 41,
- "dark_cyan": 36,
- "light_sea_green": 37,
- "deep_sky_blue2": 38,
- "deep_sky_blue1": 39,
- "spring_green2": 47,
- "cyan3": 43,
- "dark_turquoise": 44,
- "turquoise2": 45,
- "green1": 46,
- "spring_green1": 48,
- "medium_spring_green": 49,
- "cyan2": 50,
- "cyan1": 51,
- "dark_red": 88,
- "deep_pink4": 125,
- "purple4": 55,
- "purple3": 56,
- "blue_violet": 57,
- "orange4": 94,
- "grey37": 59,
- "gray37": 59,
- "medium_purple4": 60,
- "slate_blue3": 62,
- "royal_blue1": 63,
- "chartreuse4": 64,
- "dark_sea_green4": 71,
- "pale_turquoise4": 66,
- "steel_blue": 67,
- "steel_blue3": 68,
- "cornflower_blue": 69,
- "chartreuse3": 76,
- "cadet_blue": 73,
- "sky_blue3": 74,
- "steel_blue1": 81,
- "pale_green3": 114,
- "sea_green3": 78,
- "aquamarine3": 79,
- "medium_turquoise": 80,
- "chartreuse2": 112,
- "sea_green2": 83,
- "sea_green1": 85,
- "aquamarine1": 122,
- "dark_slate_gray2": 87,
- "dark_magenta": 91,
- "dark_violet": 128,
- "purple": 129,
- "light_pink4": 95,
- "plum4": 96,
- "medium_purple3": 98,
- "slate_blue1": 99,
- "yellow4": 106,
- "wheat4": 101,
- "grey53": 102,
- "gray53": 102,
- "light_slate_grey": 103,
- "light_slate_gray": 103,
- "medium_purple": 104,
- "light_slate_blue": 105,
- "dark_olive_green3": 149,
- "dark_sea_green": 108,
- "light_sky_blue3": 110,
- "sky_blue2": 111,
- "dark_sea_green3": 150,
- "dark_slate_gray3": 116,
- "sky_blue1": 117,
- "chartreuse1": 118,
- "light_green": 120,
- "pale_green1": 156,
- "dark_slate_gray1": 123,
- "red3": 160,
- "medium_violet_red": 126,
- "magenta3": 164,
- "dark_orange3": 166,
- "indian_red": 167,
- "hot_pink3": 168,
- "medium_orchid3": 133,
- "medium_orchid": 134,
- "medium_purple2": 140,
- "dark_goldenrod": 136,
- "light_salmon3": 173,
- "rosy_brown": 138,
- "grey63": 139,
- "gray63": 139,
- "medium_purple1": 141,
- "gold3": 178,
- "dark_khaki": 143,
- "navajo_white3": 144,
- "grey69": 145,
- "gray69": 145,
- "light_steel_blue3": 146,
- "light_steel_blue": 147,
- "yellow3": 184,
- "dark_sea_green2": 157,
- "light_cyan3": 152,
- "light_sky_blue1": 153,
- "green_yellow": 154,
- "dark_olive_green2": 155,
- "dark_sea_green1": 193,
- "pale_turquoise1": 159,
- "deep_pink3": 162,
- "magenta2": 200,
- "hot_pink2": 169,
- "orchid": 170,
- "medium_orchid1": 207,
- "orange3": 172,
- "light_pink3": 174,
- "pink3": 175,
- "plum3": 176,
- "violet": 177,
- "light_goldenrod3": 179,
- "tan": 180,
- "misty_rose3": 181,
- "thistle3": 182,
- "plum2": 183,
- "khaki3": 185,
- "light_goldenrod2": 222,
- "light_yellow3": 187,
- "grey84": 188,
- "gray84": 188,
- "light_steel_blue1": 189,
- "yellow2": 190,
- "dark_olive_green1": 192,
- "honeydew2": 194,
- "light_cyan1": 195,
- "red1": 196,
- "deep_pink2": 197,
- "deep_pink1": 199,
- "magenta1": 201,
- "orange_red1": 202,
- "indian_red1": 204,
- "hot_pink": 206,
- "dark_orange": 208,
- "salmon1": 209,
- "light_coral": 210,
- "pale_violet_red1": 211,
- "orchid2": 212,
- "orchid1": 213,
- "orange1": 214,
- "sandy_brown": 215,
- "light_salmon1": 216,
- "light_pink1": 217,
- "pink1": 218,
- "plum1": 219,
- "gold1": 220,
- "navajo_white1": 223,
- "misty_rose1": 224,
- "thistle1": 225,
- "yellow1": 226,
- "light_goldenrod1": 227,
- "khaki1": 228,
- "wheat1": 229,
- "cornsilk1": 230,
- "grey100": 231,
- "gray100": 231,
- "grey3": 232,
- "gray3": 232,
- "grey7": 233,
- "gray7": 233,
- "grey11": 234,
- "gray11": 234,
- "grey15": 235,
- "gray15": 235,
- "grey19": 236,
- "gray19": 236,
- "grey23": 237,
- "gray23": 237,
- "grey27": 238,
- "gray27": 238,
- "grey30": 239,
- "gray30": 239,
- "grey35": 240,
- "gray35": 240,
- "grey39": 241,
- "gray39": 241,
- "grey42": 242,
- "gray42": 242,
- "grey46": 243,
- "gray46": 243,
- "grey50": 244,
- "gray50": 244,
- "grey54": 245,
- "gray54": 245,
- "grey58": 246,
- "gray58": 246,
- "grey62": 247,
- "gray62": 247,
- "grey66": 248,
- "gray66": 248,
- "grey70": 249,
- "gray70": 249,
- "grey74": 250,
- "gray74": 250,
- "grey78": 251,
- "gray78": 251,
- "grey82": 252,
- "gray82": 252,
- "grey85": 253,
- "gray85": 253,
- "grey89": 254,
- "gray89": 254,
- "grey93": 255,
- "gray93": 255,
-}
-
-
-class ColorParseError(Exception):
- """The color could not be parsed."""
-
-
-RE_COLOR = re.compile(
- r"""^
-\#([0-9a-f]{6})$|
-color\(([0-9]{1,3})\)$|
-rgb\(([\d\s,]+)\)$
-""",
- re.VERBOSE,
-)
-
-
-@rich_repr
-class Color(NamedTuple):
- """Terminal color definition."""
-
- name: str
- """The name of the color (typically the input to Color.parse)."""
- type: ColorType
- """The type of the color."""
- number: Optional[int] = None
- """The color number, if a standard color, or None."""
- triplet: Optional[ColorTriplet] = None
- """A triplet of color components, if an RGB color."""
-
- def __rich__(self) -> "Text":
- """Dispays the actual color if Rich printed."""
- from .style import Style
- from .text import Text
-
- return Text.assemble(
- f"",
- )
-
- def __rich_repr__(self) -> Result:
- yield self.name
- yield self.type
- yield "number", self.number, None
- yield "triplet", self.triplet, None
-
- @property
- def system(self) -> ColorSystem:
- """Get the native color system for this color."""
- if self.type == ColorType.DEFAULT:
- return ColorSystem.STANDARD
- return ColorSystem(int(self.type))
-
- @property
- def is_system_defined(self) -> bool:
- """Check if the color is ultimately defined by the system."""
- return self.system not in (ColorSystem.EIGHT_BIT, ColorSystem.TRUECOLOR)
-
- @property
- def is_default(self) -> bool:
- """Check if the color is a default color."""
- return self.type == ColorType.DEFAULT
-
- def get_truecolor(
- self, theme: Optional["TerminalTheme"] = None, foreground: bool = True
- ) -> ColorTriplet:
- """Get an equivalent color triplet for this color.
-
- Args:
- theme (TerminalTheme, optional): Optional terminal theme, or None to use default. Defaults to None.
- foreground (bool, optional): True for a foreground color, or False for background. Defaults to True.
-
- Returns:
- ColorTriplet: A color triplet containing RGB components.
- """
-
- if theme is None:
- theme = DEFAULT_TERMINAL_THEME
- if self.type == ColorType.TRUECOLOR:
- assert self.triplet is not None
- return self.triplet
- elif self.type == ColorType.EIGHT_BIT:
- assert self.number is not None
- return EIGHT_BIT_PALETTE[self.number]
- elif self.type == ColorType.STANDARD:
- assert self.number is not None
- return theme.ansi_colors[self.number]
- elif self.type == ColorType.WINDOWS:
- assert self.number is not None
- return WINDOWS_PALETTE[self.number]
- else: # self.type == ColorType.DEFAULT:
- assert self.number is None
- return theme.foreground_color if foreground else theme.background_color
-
- @classmethod
- def from_ansi(cls, number: int) -> "Color":
- """Create a Color number from it's 8-bit ansi number.
-
- Args:
- number (int): A number between 0-255 inclusive.
-
- Returns:
- Color: A new Color instance.
- """
- return cls(
- name=f"color({number})",
- type=(ColorType.STANDARD if number < 16 else ColorType.EIGHT_BIT),
- number=number,
- )
-
- @classmethod
- def from_triplet(cls, triplet: "ColorTriplet") -> "Color":
- """Create a truecolor RGB color from a triplet of values.
-
- Args:
- triplet (ColorTriplet): A color triplet containing red, green and blue components.
-
- Returns:
- Color: A new color object.
- """
- return cls(name=triplet.hex, type=ColorType.TRUECOLOR, triplet=triplet)
-
- @classmethod
- def from_rgb(cls, red: float, green: float, blue: float) -> "Color":
- """Create a truecolor from three color components in the range(0->255).
-
- Args:
- red (float): Red component in range 0-255.
- green (float): Green component in range 0-255.
- blue (float): Blue component in range 0-255.
-
- Returns:
- Color: A new color object.
- """
- return cls.from_triplet(ColorTriplet(int(red), int(green), int(blue)))
-
- @classmethod
- def default(cls) -> "Color":
- """Get a Color instance representing the default color.
-
- Returns:
- Color: Default color.
- """
- return cls(name="default", type=ColorType.DEFAULT)
-
- @classmethod
- @lru_cache(maxsize=1024)
- def parse(cls, color: str) -> "Color":
- """Parse a color definition."""
- original_color = color
- color = color.lower().strip()
-
- if color == "default":
- return cls(color, type=ColorType.DEFAULT)
-
- color_number = ANSI_COLOR_NAMES.get(color)
- if color_number is not None:
- return cls(
- color,
- type=(ColorType.STANDARD if color_number < 16 else ColorType.EIGHT_BIT),
- number=color_number,
- )
-
- color_match = RE_COLOR.match(color)
- if color_match is None:
- raise ColorParseError(f"{original_color!r} is not a valid color")
-
- color_24, color_8, color_rgb = color_match.groups()
- if color_24:
- triplet = ColorTriplet(
- int(color_24[0:2], 16), int(color_24[2:4], 16), int(color_24[4:6], 16)
- )
- return cls(color, ColorType.TRUECOLOR, triplet=triplet)
-
- elif color_8:
- number = int(color_8)
- if number > 255:
- raise ColorParseError(f"color number must be <= 255 in {color!r}")
- return cls(
- color,
- type=(ColorType.STANDARD if number < 16 else ColorType.EIGHT_BIT),
- number=number,
- )
-
- else: # color_rgb:
- components = color_rgb.split(",")
- if len(components) != 3:
- raise ColorParseError(
- f"expected three components in {original_color!r}"
- )
- red, green, blue = components
- triplet = ColorTriplet(int(red), int(green), int(blue))
- if not all(component <= 255 for component in triplet):
- raise ColorParseError(
- f"color components must be <= 255 in {original_color!r}"
- )
- return cls(color, ColorType.TRUECOLOR, triplet=triplet)
-
- @lru_cache(maxsize=1024)
- def get_ansi_codes(self, foreground: bool = True) -> Tuple[str, ...]:
- """Get the ANSI escape codes for this color."""
- _type = self.type
- if _type == ColorType.DEFAULT:
- return ("39" if foreground else "49",)
-
- elif _type == ColorType.WINDOWS:
- number = self.number
- assert number is not None
- fore, back = (30, 40) if number < 8 else (82, 92)
- return (str(fore + number if foreground else back + number),)
-
- elif _type == ColorType.STANDARD:
- number = self.number
- assert number is not None
- fore, back = (30, 40) if number < 8 else (82, 92)
- return (str(fore + number if foreground else back + number),)
-
- elif _type == ColorType.EIGHT_BIT:
- assert self.number is not None
- return ("38" if foreground else "48", "5", str(self.number))
-
- else: # self.standard == ColorStandard.TRUECOLOR:
- assert self.triplet is not None
- red, green, blue = self.triplet
- return ("38" if foreground else "48", "2", str(red), str(green), str(blue))
-
- @lru_cache(maxsize=1024)
- def downgrade(self, system: ColorSystem) -> "Color":
- """Downgrade a color system to a system with fewer colors."""
-
- if self.type in [ColorType.DEFAULT, system]:
- return self
- # Convert to 8-bit color from truecolor color
- if system == ColorSystem.EIGHT_BIT and self.system == ColorSystem.TRUECOLOR:
- assert self.triplet is not None
- red, green, blue = self.triplet.normalized
- _h, l, s = rgb_to_hls(red, green, blue)
- # If saturation is under 10% assume it is grayscale
- if s < 0.1:
- gray = round(l * 25.0)
- if gray == 0:
- color_number = 16
- elif gray == 25:
- color_number = 231
- else:
- color_number = 231 + gray
- return Color(self.name, ColorType.EIGHT_BIT, number=color_number)
-
- color_number = (
- 16 + 36 * round(red * 5.0) + 6 * round(green * 5.0) + round(blue * 5.0)
- )
- return Color(self.name, ColorType.EIGHT_BIT, number=color_number)
-
- # Convert to standard from truecolor or 8-bit
- elif system == ColorSystem.STANDARD:
- if self.system == ColorSystem.TRUECOLOR:
- assert self.triplet is not None
- triplet = self.triplet
- else: # self.system == ColorSystem.EIGHT_BIT
- assert self.number is not None
- triplet = ColorTriplet(*EIGHT_BIT_PALETTE[self.number])
-
- color_number = STANDARD_PALETTE.match(triplet)
- return Color(self.name, ColorType.STANDARD, number=color_number)
-
- elif system == ColorSystem.WINDOWS:
- if self.system == ColorSystem.TRUECOLOR:
- assert self.triplet is not None
- triplet = self.triplet
- else: # self.system == ColorSystem.EIGHT_BIT
- assert self.number is not None
- if self.number < 16:
- return Color(self.name, ColorType.WINDOWS, number=self.number)
- triplet = ColorTriplet(*EIGHT_BIT_PALETTE[self.number])
-
- color_number = WINDOWS_PALETTE.match(triplet)
- return Color(self.name, ColorType.WINDOWS, number=color_number)
-
- return self
-
-
-def parse_rgb_hex(hex_color: str) -> ColorTriplet:
- """Parse six hex characters in to RGB triplet."""
- assert len(hex_color) == 6, "must be 6 characters"
- color = ColorTriplet(
- int(hex_color[0:2], 16), int(hex_color[2:4], 16), int(hex_color[4:6], 16)
- )
- return color
-
-
-def blend_rgb(
- color1: ColorTriplet, color2: ColorTriplet, cross_fade: float = 0.5
-) -> ColorTriplet:
- """Blend one RGB color in to another."""
- r1, g1, b1 = color1
- r2, g2, b2 = color2
- new_color = ColorTriplet(
- int(r1 + (r2 - r1) * cross_fade),
- int(g1 + (g2 - g1) * cross_fade),
- int(b1 + (b2 - b1) * cross_fade),
- )
- return new_color
-
-
-if __name__ == "__main__": # pragma: no cover
-
- from .console import Console
- from .table import Table
- from .text import Text
-
- console = Console()
-
- table = Table(show_footer=False, show_edge=True)
- table.add_column("Color", width=10, overflow="ellipsis")
- table.add_column("Number", justify="right", style="yellow")
- table.add_column("Name", style="green")
- table.add_column("Hex", style="blue")
- table.add_column("RGB", style="magenta")
-
- colors = sorted((v, k) for k, v in ANSI_COLOR_NAMES.items())
- for color_number, name in colors:
- if "grey" in name:
- continue
- color_cell = Text(" " * 10, style=f"on {name}")
- if color_number < 16:
- table.add_row(color_cell, f"{color_number}", Text(f'"{name}"'))
- else:
- color = EIGHT_BIT_PALETTE[color_number] # type: ignore[has-type]
- table.add_row(
- color_cell, str(color_number), Text(f'"{name}"'), color.hex, color.rgb
- )
-
- console.print(table)
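For reference, a brief usage sketch of the `Color` API defined above (assuming the `rich` package is installed; the values shown in comments are what these particular inputs should produce):

```python
from rich.color import Color, ColorSystem

c = Color.parse("#ff8800")
print(c.type)              # ColorType.TRUECOLOR
print(c.get_ansi_codes())  # ('38', '2', '255', '136', '0')
print(c.downgrade(ColorSystem.EIGHT_BIT))  # nearest 8-bit color
print(Color.parse("red").get_ansi_codes(foreground=False))  # ('41',)
```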
diff --git a/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/experiment.py b/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/experiment.py
deleted file mode 100644
index 0a2d5c0dc359cec13304813ac7732c5968d70a80..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/experiment.py
+++ /dev/null
@@ -1,261 +0,0 @@
-"""
-Main file to launch training and testing experiments.
-"""
-
-import yaml
-import os
-import argparse
-import numpy as np
-import torch
-
-from .config.project_config import Config as cfg
-from .train import train_net
-from .export import export_predictions, export_homograpy_adaptation
-
-
-# Pytorch configurations
-torch.cuda.empty_cache()
-torch.backends.cudnn.benchmark = True
-
-
-def load_config(config_path):
- """Load configurations from a given yaml file."""
- # Check file exists
- if not os.path.exists(config_path):
- raise ValueError("[Error] The provided config path is not valid.")
-
- # Load the configuration
- with open(config_path, "r") as f:
- config = yaml.safe_load(f)
-
- return config
-
-
-def update_config(path, model_cfg=None, dataset_cfg=None):
- """Update configuration file from the resume path."""
- # Check we need to update or completely override.
- model_cfg = {} if model_cfg is None else model_cfg
- dataset_cfg = {} if dataset_cfg is None else dataset_cfg
-
- # Load saved configs
- with open(os.path.join(path, "model_cfg.yaml"), "r") as f:
- model_cfg_saved = yaml.safe_load(f)
- model_cfg.update(model_cfg_saved)
- with open(os.path.join(path, "dataset_cfg.yaml"), "r") as f:
- dataset_cfg_saved = yaml.safe_load(f)
- dataset_cfg.update(dataset_cfg_saved)
-
- # Update the saved yaml file
- if not model_cfg == model_cfg_saved:
- with open(os.path.join(path, "model_cfg.yaml"), "w") as f:
- yaml.dump(model_cfg, f)
- if not dataset_cfg == dataset_cfg_saved:
- with open(os.path.join(path, "dataset_cfg.yaml"), "w") as f:
- yaml.dump(dataset_cfg, f)
-
- return model_cfg, dataset_cfg
-
-
-def record_config(model_cfg, dataset_cfg, output_path):
- """Record dataset config to the log path."""
- # Record model config
- with open(os.path.join(output_path, "model_cfg.yaml"), "w") as f:
- yaml.safe_dump(model_cfg, f)
-
- # Record dataset config
- with open(os.path.join(output_path, "dataset_cfg.yaml"), "w") as f:
- yaml.safe_dump(dataset_cfg, f)
-
-
-def train(args, dataset_cfg, model_cfg, output_path):
- """Training function."""
- # Update model config from the resume path (only in resume mode)
- if args.resume:
- if os.path.realpath(output_path) != os.path.realpath(args.resume_path):
- record_config(model_cfg, dataset_cfg, output_path)
-
- # First time, then write the config file to the output path
- else:
- record_config(model_cfg, dataset_cfg, output_path)
-
- # Launch the training
- train_net(args, dataset_cfg, model_cfg, output_path)
-
-
-def export(
- args,
- dataset_cfg,
- model_cfg,
- output_path,
- export_dataset_mode=None,
- device=torch.device("cuda"),
-):
- """Export function."""
- # Choose between normal predictions export or homography adaptation
- if dataset_cfg.get("homography_adaptation") is not None:
- print("[Info] Export predictions with homography adaptation.")
- export_homograpy_adaptation(
- args, dataset_cfg, model_cfg, output_path, export_dataset_mode, device
- )
- else:
- print("[Info] Export predictions normally.")
- export_predictions(
- args, dataset_cfg, model_cfg, output_path, export_dataset_mode
- )
-
-
-def main(
- args, dataset_cfg, model_cfg, export_dataset_mode=None, device=torch.device("cuda")
-):
- """Main function."""
- # Make the output path
- output_path = os.path.join(cfg.EXP_PATH, args.exp_name)
-
- if args.mode == "train":
- if not os.path.exists(output_path):
- os.makedirs(output_path)
- print("[Info] Training mode")
- print("\t Output path: %s" % output_path)
- train(args, dataset_cfg, model_cfg, output_path)
- elif args.mode == "export":
- # Different output_path in export mode
- output_path = os.path.join(cfg.export_dataroot, args.exp_name)
- print("[Info] Export mode")
- print("\t Output path: %s" % output_path)
- export(
- args,
- dataset_cfg,
- model_cfg,
- output_path,
- export_dataset_mode,
- device=device,
- )
- else:
- raise ValueError("[Error]: Unknown mode: " + args.mode)
-
-
-def set_random_seed(seed):
- np.random.seed(seed)
- torch.manual_seed(seed)
-
-
-if __name__ == "__main__":
- # Parse input arguments
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--mode", type=str, default="train", help="'train' or 'export'."
- )
- parser.add_argument(
- "--dataset_config", type=str, default=None, help="Path to the dataset config."
- )
- parser.add_argument(
- "--model_config", type=str, default=None, help="Path to the model config."
- )
- parser.add_argument("--exp_name", type=str, default="exp", help="Experiment name.")
- parser.add_argument(
- "--resume",
- action="store_true",
- default=False,
- help="Load a previously trained model.",
- )
- parser.add_argument(
- "--pretrained",
- action="store_true",
- default=False,
- help="Start training from a pre-trained model.",
- )
- parser.add_argument(
- "--resume_path", default=None, help="Path from which to resume training."
- )
- parser.add_argument(
- "--pretrained_path", default=None, help="Path to the pre-trained model."
- )
- parser.add_argument(
- "--checkpoint_name", default=None, help="Name of the checkpoint to use."
- )
- parser.add_argument(
- "--export_dataset_mode", default=None, help="'train' or 'test'."
- )
- parser.add_argument(
- "--export_batch_size", default=4, type=int, help="Export batch size."
- )
-
- args = parser.parse_args()
-
- # Check if GPU is available
- # Get the model
- if torch.cuda.is_available():
- device = torch.device("cuda")
- else:
- device = torch.device("cpu")
-
- # Check if dataset config and model config is given.
- if (
- ((args.dataset_config is None) or (args.model_config is None))
- and (not args.resume)
- and (args.mode == "train")
- ):
- raise ValueError(
- "[Error] The dataset config and model config should be given in non-resume mode"
- )
-
- # If resume, check if the resume path has been given
- if args.resume and (args.resume_path is None):
- raise ValueError("[Error] Missing resume path.")
-
- # [Training] Load the config file.
- if args.mode == "train" and (not args.resume):
- # Check the pretrained checkpoint_path exists
- if args.pretrained:
- checkpoint_folder = args.resume_path
- checkpoint_path = os.path.join(args.pretrained_path, args.checkpoint_name)
- if not os.path.exists(checkpoint_path):
- raise ValueError("[Error] Missing checkpoint: " + checkpoint_path)
- dataset_cfg = load_config(args.dataset_config)
- model_cfg = load_config(args.model_config)
-
- # [resume Training, Test, Export] Load the config file.
- elif (args.mode == "train" and args.resume) or (args.mode == "export"):
- # Check checkpoint path exists
- checkpoint_folder = args.resume_path
- checkpoint_path = os.path.join(args.resume_path, args.checkpoint_name)
- if not os.path.exists(checkpoint_path):
- raise ValueError("[Error] Missing checkpoint: " + checkpoint_path)
-
- # Load model_cfg from checkpoint folder if not provided
- if args.model_config is None:
- print("[Info] No model config provided. Loading from checkpoint folder.")
- model_cfg_path = os.path.join(checkpoint_folder, "model_cfg.yaml")
- if not os.path.exists(model_cfg_path):
- raise ValueError("[Error] Missing model config in checkpoint path.")
- model_cfg = load_config(model_cfg_path)
- else:
- model_cfg = load_config(args.model_config)
-
- # Load dataset_cfg from checkpoint folder if not provided
- if args.dataset_config is None:
- print("[Info] No dataset config provided. Loading from checkpoint folder.")
- dataset_cfg_path = os.path.join(checkpoint_folder, "dataset_cfg.yaml")
- if not os.path.exists(dataset_cfg_path):
- raise ValueError("[Error] Missing dataset config in checkpoint path.")
- dataset_cfg = load_config(dataset_cfg_path)
- else:
- dataset_cfg = load_config(args.dataset_config)
-
- # Check the --export_dataset_mode flag
- if (args.mode == "export") and (args.export_dataset_mode is None):
- raise ValueError("[Error] Empty --export_dataset_mode flag.")
- else:
- raise ValueError("[Error] Unknown mode: " + args.mode)
-
- # Set the random seed
- seed = dataset_cfg.get("random_seed", 0)
- set_random_seed(seed)
-
- main(
- args,
- dataset_cfg,
- model_cfg,
- export_dataset_mode=args.export_dataset_mode,
- device=device,
- )
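For completeness, a sketch of how this script is usually driven. The config keys and file names are illustrative placeholders rather than the official SOLD2 configs, and the module path in the commands assumes the package layout shown in the diff header.

```python
import yaml

# Write minimal placeholder configs (real SOLD2 configs contain many more keys).
with open("dataset_cfg.yaml", "w") as f:
    yaml.safe_dump({"dataset_name": "my_dataset", "random_seed": 0}, f)
with open("model_cfg.yaml", "w") as f:
    yaml.safe_dump({"model_name": "my_model"}, f)

# Training from scratch:
#   python -m sold2.experiment --mode train --exp_name my_exp \
#       --dataset_config dataset_cfg.yaml --model_config model_cfg.yaml
# Exporting predictions from a checkpoint:
#   python -m sold2.experiment --mode export --exp_name my_export \
#       --resume_path <experiment folder> --checkpoint_name <checkpoint file> \
#       --export_dataset_mode test
```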
diff --git a/spaces/Redgon/bingo/src/components/chat-list.tsx b/spaces/Redgon/bingo/src/components/chat-list.tsx
deleted file mode 100644
index 624a78ef0d7be0f1192cf02a81e2e9cf214cb193..0000000000000000000000000000000000000000
--- a/spaces/Redgon/bingo/src/components/chat-list.tsx
+++ /dev/null
@@ -1,28 +0,0 @@
-import React from 'react'
-
-import { Separator } from '@/components/ui/separator'
-import { ChatMessage } from '@/components/chat-message'
-import { ChatMessageModel } from '@/lib/bots/bing/types'
-
-export interface ChatList {
- messages: ChatMessageModel[]
-}
-
-export function ChatList({ messages }: ChatList) {
- if (!messages.length) {
- return null
- }
-
- return (
-