Commit c90dd39
Parent(s): 7d4442e
Update parquet files (step 41 of 476)
This view is limited to 50 files because the commit contains too many changes.
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Devathai Sonna Kavithai Tamil Full Movie Download BETTER.md +0 -23
- spaces/1gistliPinn/ChatGPT4/Examples/Fifa 12 [CRACK ONLY] 100 WORKING Serial Key BETTER.md +0 -6
- spaces/1line/AutoGPT/ui/app.py +0 -145
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Alias 2023 Free 30-Day Trial with 480p Video Option.md +0 -111
- spaces/1phancelerku/anime-remove-background/Download Nicoo Free Fire Max APK and Enhance Your Gaming Experience.md +0 -101
- spaces/1toTree/lora_test/ppdiffusers/models/unet_1d_blocks.py +0 -668
- spaces/AIFILMS/StyleGANEX/scripts/inference.py +0 -136
- spaces/AIFILMS/audioldm-text-to-audio-generation/README.md +0 -22
- spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/txt_processors/__init__.py +0 -1
- spaces/AIZero2Hero4Health/8-NLPSimilarityHeatmapCluster-SL/app.py +0 -77
- spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-150e_deepfashion2_short_sleeved_dress_256x192/td_hm_res50_4xb64-150e_deepfashion2_short_sleeved_dress_256x192.py +0 -2861
- spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet34_cifar.py +0 -16
- spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/LayoutWritable.js +0 -9
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/ymlachievements.d.ts +0 -2
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/holygrail/methods/CreatExpandContainer.js +0 -11
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateCanvas.js +0 -23
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateNinePatch2.js +0 -12
- spaces/Agusbs98/automatic-ecg-diagnosis/nets/nets.py +0 -73
- spaces/Alichuan/VITS-Umamusume-voice-synthesizer/README.md +0 -13
- spaces/AlphaDragon/Voice-Clone/app.py +0 -80
- spaces/Amrrs/pdf-table-extractor/README.md +0 -37
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_models_vae.py +0 -600
- spaces/Andy1621/uniformer_image_detection/configs/gfl/gfl_x101_32x4d_fpn_mstrain_2x_coco.py +0 -15
- spaces/Andy1621/uniformer_image_detection/configs/retinanet/retinanet_r50_fpn_2x_coco.py +0 -4
- spaces/AndySAnker/DeepStruc/predict.py +0 -30
- spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/deform_conv.py +0 -405
- spaces/Anonymous-sub/Rerender/ControlNet/annotator/util.py +0 -38
- spaces/Apex-X/nono/roop/processors/frame/core.py +0 -91
- spaces/Arvi/feedback_generator/app.py +0 -407
- spaces/Awiny/Image2Paragraph/models/grit_src/grit/data/datasets/object365.py +0 -111
- spaces/Bart92/RVC_HF/infer/modules/ipex/attention.py +0 -128
- spaces/Benebene/Chat-question-answering/interface.py +0 -12
- spaces/Benson/text-generation/Examples/Descargar Apk Mod Fox App.md +0 -57
- spaces/Benson/text-generation/Examples/Descargar Carreras Lmites Mod Apk.md +0 -89
- spaces/Benson/text-generation/Examples/Descargar Do Cabra Simulador.md +0 -83
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/locations/_sysconfig.py +0 -213
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/resolution/legacy/__init__.py +0 -0
- spaces/Buckeyes2019/NLP_Demonstration/app.py +0 -129
- spaces/CVPR/Dual-Key_Backdoor_Attacks/original_README.md +0 -514
- spaces/CVPR/LIVE/thrust/cub/cmake/cub-config-version.cmake +0 -33
- spaces/CVPR/LIVE/thrust/thrust/detail/config/forceinline.h +0 -36
- spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/iter_swap.h +0 -47
- spaces/CVPR/LIVE/thrust/thrust/system/tbb/pointer.h +0 -354
- spaces/CVPR/transfiner/README.md +0 -13
- spaces/CVPR/visual-clustering/README.md +0 -12
- spaces/ChandraMohanNayal/AutoGPT/autogpt/permanent_memory/__init__.py +0 -0
- spaces/Cosmopolitan/stabilityai-stable-diffusion-2-1/app.py +0 -3
- spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/tasks/__init__.py +0 -28
- spaces/DarwinAnim8or/convert-to-safet/convert.py +0 -306
- spaces/DebasishDhal99/Youtube_Playlist/README.md +0 -45
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Devathai Sonna Kavithai Tamil Full Movie Download BETTER.md
DELETED
@@ -1,23 +0,0 @@
-<br />
-<h1>How to Download Devathai Sonna Kavithai Tamil Full Movie Online</h1>
-<p>Devathai Sonna Kavithai is a 2014 Tamil romantic movie directed by Thesigan and starring newcomers in the lead roles. The movie is about a young man who falls in love with a girl who speaks to him through poetry. The movie was released on January 1, 2014 and received mixed reviews from critics and audiences.</p>
-<p>If you are looking for a way to download Devathai Sonna Kavithai Tamil full movie online, you have come to the right place. In this article, we will show you some of the best websites and platforms where you can watch or download this movie legally and safely. We will also give you some tips on how to optimize your search and avoid any malware or viruses.</p>
-<h2>Devathai Sonna Kavithai Tamil Full Movie Download</h2><br /><p><b><b>Download File</b> ››››› <a href="https://byltly.com/2uKy1r">https://byltly.com/2uKy1r</a></b></p><br /><br />
-<h2>Best Websites to Download Devathai Sonna Kavithai Tamil Full Movie Online</h2>
-<p>There are many websites that offer Tamil movies for download or streaming, but not all of them are reliable or trustworthy. Some of them may contain harmful ads, pop-ups, or links that can infect your device with malware or viruses. Some of them may also violate the copyright laws and infringe on the rights of the movie makers and distributors.</p>
-<p>To avoid any such risks, we recommend you to use only the following websites that are legal and safe to download Devathai Sonna Kavithai Tamil full movie online:</p>
-<ul>
-<li><b>YouTube</b>: YouTube is one of the most popular and widely used platforms for watching and downloading videos online. You can find Devathai Sonna Kavithai Tamil full movie on YouTube by searching for its title or using the keyword "Devathai Sonna Kavithai Tamil Full Movie Download". You can also use filters such as duration, upload date, or quality to narrow down your results. You can watch the movie for free on YouTube or download it using a third-party app or website that allows YouTube video downloads.</li>
-<li><b>Dailymotion</b>: Dailymotion is another video-sharing platform that hosts a variety of content, including movies, TV shows, music, sports, and more. You can find Devathai Sonna Kavithai Tamil full movie on Dailymotion by searching for its title or using the keyword "Devathai Sonna Kavithai Tamil Full Movie Download". You can also use filters such as duration, upload date, or quality to narrow down your results. You can watch the movie for free on Dailymotion or download it using a third-party app or website that allows Dailymotion video downloads.</li>
-</ul>
-<h2>Tips to Optimize Your Search and Avoid Malware or Viruses</h2>
-<p>While using the above websites to download Devathai Sonna Kavithai Tamil full movie online, you should keep in mind some tips to optimize your search and avoid any malware or viruses:</p>
-<ul>
-<li><b>Use a VPN</b>: A VPN (Virtual Private Network) is a service that encrypts your internet traffic and hides your IP address and location from prying eyes. This can help you access geo-restricted content, bypass censorship, and protect your privacy and security online. You can use a VPN to access the above websites from anywhere in the world and enjoy Devathai Sonna Kavithai Tamil full movie without any hassle.</li>
-<li><b>Use an antivirus</b>: An antivirus is a software that detects and removes any malicious programs or files that may harm your device or data. You should always use an antivirus to scan your device before and after downloading any file from the internet. This can help you prevent any malware or viruses from infecting your device or stealing your information.</li>
-<li><b>Use a trusted source</b>: As mentioned earlier, not all websites that offer Tamil movies for download or streaming are reliable or trustworthy. You should always use a trusted source that has a good reputation and reviews from other users. You should also avoid clicking on any suspicious ads, pop-ups, or links that may redirect you to unwanted or harmful sites.</li>
-</ul>
-<h2>Conclusion</h2>
-<p>Devathai Sonna Kavithai is a 2014 Tamil romantic movie that you</p> 81aa517590<br />
-<br />
-<br />
spaces/1gistliPinn/ChatGPT4/Examples/Fifa 12 [CRACK ONLY] 100 WORKING Serial Key BETTER.md
DELETED
@@ -1,6 +0,0 @@
-<h2>Fifa 12 [CRACK ONLY] 100% WORKING Serial Key</h2><br /><p><b><b>Download Zip</b> ⚹⚹⚹ <a href="https://imgfil.com/2uxYJX">https://imgfil.com/2uxYJX</a></b></p><br /><br />
-
-FIFA 11 (2011) Reloaded + Keygen + CRACK FIX FIFA 11 . ... relax in 12/10/2012 · Pro Evolution Soccer 2014 (PES 2014) PC Keygen ‡ Crack ... We provide you 100% working game torrent setup, full version, PC game & free ... 4d29de3e1b<br />
-<br />
-<br />
-<p></p>
spaces/1line/AutoGPT/ui/app.py
DELETED
@@ -1,145 +0,0 @@
-import gradio as gr
-import utils
-from api import AutoAPI, get_openai_api_key
-import os, shutil
-import json
-
-FILE_DIR = os.path.dirname(os.path.abspath(__file__))
-OUTPUT_DIR = os.path.join(os.path.dirname(FILE_DIR), "auto_gpt_workspace")
-if not os.path.exists(OUTPUT_DIR):
-    os.mkdir(OUTPUT_DIR)
-
-CSS = """
-#chatbot {font-family: monospace;}
-#files .generating {display: none;}
-#files .min {min-height: 0px;}
-"""
-
-with gr.Blocks(css=CSS) as app:
-    with gr.Column() as setup_pane:
-        gr.Markdown(f"""# Auto-GPT
-        1. Duplicate this Space: <a href="https://huggingface.co/spaces/{os.getenv('SPACE_ID')}?duplicate=true"><img style="display: inline; margin-top: 0em; margin-bottom: 0em" src="https://bit.ly/3gLdBN6" alt="Duplicate Space" /></a> This will **NOT** work without duplication!
-        2. Enter your <a href="https://platform.openai.com/account/api-keys">OpenAI API Key</a> below.
-        """)
-        with gr.Row():
-            open_ai_key = gr.Textbox(
-                value=get_openai_api_key(),
-                label="OpenAI API Key",
-                type="password",
-            )
-        gr.Markdown(
-            "3. Fill the values below, then click 'Start'. There are example values you can load at the bottom of this page."
-        )
-        with gr.Row():
-            ai_name = gr.Textbox(label="AI Name", placeholder="e.g. Entrepreneur-GPT")
-            ai_role = gr.Textbox(
-                label="AI Role",
-                placeholder="e.g. an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.",
-            )
-        top_5_goals = gr.Dataframe(
-            row_count=(5, "fixed"),
-            col_count=(1, "fixed"),
-            headers=["AI Goals - Enter up to 5"],
-            type="array"
-        )
-        start_btn = gr.Button("Start", variant="primary")
-        with open(os.path.join(FILE_DIR, "examples.json"), "r") as f:
-            example_values = json.load(f)
-        gr.Examples(
-            example_values,
-            [ai_name, ai_role, top_5_goals],
-        )
-    with gr.Column(visible=False) as main_pane:
-        with gr.Row():
-            with gr.Column(scale=2):
-                chatbot = gr.Chatbot(elem_id="chatbot")
-                with gr.Row():
-                    yes_btn = gr.Button("Yes", variant="primary", interactive=False)
-                    consecutive_yes = gr.Slider(
-                        1, 10, 1, step=1, label="Consecutive Yes", interactive=False
-                    )
-                custom_response = gr.Textbox(
-                    label="Custom Response",
-                    placeholder="Press 'Enter' to Submit.",
-                    interactive=False,
-                )
-            with gr.Column(scale=1):
-                gr.HTML(
-                    lambda: f"""
-                    Generated Files
-                    <pre><code style='overflow-x: auto'>{utils.format_directory(OUTPUT_DIR)}</pre></code>
-                    """, every=3, elem_id="files"
-                )
-                download_btn = gr.Button("Download All Files")
-
-    chat_history = gr.State([[None, None]])
-    api = gr.State(None)
-
-    def start(open_ai_key, ai_name, ai_role, top_5_goals):
-        auto_api = AutoAPI(open_ai_key, ai_name, ai_role, top_5_goals)
-        return gr.Column.update(visible=False), gr.Column.update(visible=True), auto_api
-
-    def bot_response(chat, api):
-        messages = []
-        for message in api.get_chatbot_response():
-            messages.append(message)
-            chat[-1][1] = "\n".join(messages) + "..."
-            yield chat
-        chat[-1][1] = "\n".join(messages)
-        yield chat
-
-    def send_message(count, chat, api, message="Y"):
-        if message != "Y":
-            count = 1
-        for i in range(count):
-            chat.append([message, None])
-            yield chat, count - i
-            api.send_message(message)
-            for updated_chat in bot_response(chat, api):
-                yield updated_chat, count - i
-
-    def activate_inputs():
-        return {
-            yes_btn: gr.Button.update(interactive=True),
-            consecutive_yes: gr.Slider.update(interactive=True),
-            custom_response: gr.Textbox.update(interactive=True),
-        }
-
-    def deactivate_inputs():
-        return {
-            yes_btn: gr.Button.update(interactive=False),
-            consecutive_yes: gr.Slider.update(interactive=False),
-            custom_response: gr.Textbox.update(interactive=False),
-        }
-
-    start_btn.click(
-        start,
-        [open_ai_key, ai_name, ai_role, top_5_goals],
-        [setup_pane, main_pane, api],
-    ).then(bot_response, [chat_history, api], chatbot).then(
-        activate_inputs, None, [yes_btn, consecutive_yes, custom_response]
-    )
-
-    yes_btn.click(
-        deactivate_inputs, None, [yes_btn, consecutive_yes, custom_response]
-    ).then(
-        send_message, [consecutive_yes, chat_history, api], [chatbot, consecutive_yes]
-    ).then(
-        activate_inputs, None, [yes_btn, consecutive_yes, custom_response]
-    )
-    custom_response.submit(
-        deactivate_inputs, None, [yes_btn, consecutive_yes, custom_response]
-    ).then(
-        send_message,
-        [consecutive_yes, chat_history, api, custom_response],
-        [chatbot, consecutive_yes],
-    ).then(
-        activate_inputs, None, [yes_btn, consecutive_yes, custom_response]
-    )
-
-    def download_all_files():
-        shutil.make_archive("outputs", "zip", OUTPUT_DIR)
-
-    download_btn.click(download_all_files).then(None, _js=utils.DOWNLOAD_OUTPUTS_JS)
-
-app.queue(concurrency_count=20).launch(file_directories=[OUTPUT_DIR])
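The deleted UI above is built around Gradio's event chaining: each button handler first disables the inputs, then does the work, then re-enables them. As a minimal sketch of that pattern (assuming Gradio 3.x, the API generation this file targets, where `.click(...).then(...)` runs callbacks in sequence and `gr.Button.update(...)` patches component properties; the component and function names here are illustrative, not from the deleted file):

```python
import gradio as gr  # assumes Gradio 3.x

with gr.Blocks() as demo:
    btn = gr.Button("Run")
    out = gr.Textbox(label="Result")

    def lock():
        # Disable the trigger while work is in flight.
        return gr.Button.update(interactive=False)

    def work():
        return "done"

    def unlock():
        return gr.Button.update(interactive=True)

    # Each .then() fires only after the previous step finishes -- the same
    # deactivate_inputs -> send_message -> activate_inputs chain used above.
    btn.click(lock, None, btn).then(work, None, out).then(unlock, None, btn)

demo.launch()
```

Chaining rather than calling the three functions from one handler lets Gradio push each intermediate UI state to the browser before the next step starts.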
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Alias 2023 Free 30-Day Trial with 480p Video Option.md
DELETED
@@ -1,111 +0,0 @@
-
-<h1>How to Download Alias TV Series in 480p Resolution</h1>
-<p>If you are a fan of action, thriller, and science fiction genres, you might have heard of Alias, a popular TV series that ran from 2001 to 2006. The show starred Jennifer Garner as Sydney Bristow, a double agent for the CIA who poses as an operative for a criminal organization called SD-6. The series was created by J.J. Abrams, who also produced other hit shows like Lost, Fringe, and Westworld.</p>
-<p>In this article, we will tell you more about Alias TV series and why you should watch it. We will also explain what 480p resolution is and why you need it for downloading videos. Finally, we will show you how to download Alias TV series in 480p resolution using a free tool called YouTube-DL.</p>
-<h2>alias 480p download</h2><br /><p><b><b>Download</b> ⇒ <a href="https://urlin.us/2uSX1I">https://urlin.us/2uSX1I</a></b></p><br /><br />
-<h2>What is Alias TV Series and Why You Should Watch It</h2>
-<h3>The Plot and Characters of Alias</h3>
-<p>The plot of Alias revolves around Sydney Bristow, who was recruited as a spy by a man who claimed to work for the CIA when she was a college student. She later discovered that she was actually working for SD-6, a rogue faction of the CIA that was part of a larger alliance of criminal organizations. She then decided to become a double agent for the real CIA and work to bring down SD-6 and its allies.</p>
-<p>Along the way, she faced many dangers and challenges, such as dealing with her estranged father Jack Bristow, who was also a double agent, her complicated relationship with her fellow agent Michael Vaughn, her best friend Francie Calfo, who was replaced by a look-alike assassin, and her mother Irina Derevko, who was a former KGB spy and a key figure in a global conspiracy involving an ancient prophet named Rambaldi.</p>
-<p>The show featured many twists and turns, cliffhangers, action sequences, gadgets, disguises, and exotic locations. It also had a stellar cast of supporting characters, such as Arvin Sloane, the leader of SD-6 who had a personal connection to Sydney; Marshall Flinkman, the quirky tech genius who helped Sydney on her missions; Marcus Dixon, Sydney's loyal partner at SD-6; Julian Sark, a ruthless mercenary who worked for various factions; Lauren Reed, Vaughn's wife who turned out to be a double agent; Nadia Santos, Sydney's half-sister who was also involved in the Rambaldi prophecy; Rachel Gibson, a young hacker who joined Sydney's team after being betrayed by her employer; Thomas Grace, a former Delta Force soldier who became Sydney's new partner; Kelly Peyton, a former friend of Rachel who became an enemy agent; and Renée Rienne, a mysterious freelance spy who had ties to Sydney's past.</p>
-<h3>The Awards and Recognition of Alias</h3>
-<p>Alias was well received by critics and audiences alike. It won four Emmy Awards out of 11 nominations, including Outstanding Lead Actress in a Drama Series for Jennifer Garner in 2002. It also won a Golden Globe Award for Best Actress in a Television Series – Drama for Garner in 2002. The show was nominated for several other awards, such as Screen Actors Guild Awards, Teen Choice Awards, Saturn Awards, and People's Choice Awards.</p>
-<p>Alias was also included in several "best of" lists by various media outlets. For example, it was ranked number 36 on TV Guide's list of "50 Greatest TV Shows of All Time" in 2002. It was also ranked number seven on Entertainment Weekly's list of "The New Classics: TV" in 2008. The American Film Institute named it one of the top ten television programs of the year in 2003 and 2005. The show also influenced other spy-themed shows, such as Chuck, Nikita, and Covert Affairs.</p>
-<h2>What is 480p Resolution and Why You Need It</h2>
-<h3>The Definition and Features of 480p Resolution</h3>
-<p>480p resolution is a term that refers to the video quality of a digital display. It means that the video has 480 horizontal lines of pixels that are progressively scanned, meaning that each line is displayed in sequence. The "p" stands for progressive scan, as opposed to interlaced scan, which alternates between odd and even lines of pixels. Progressive scan produces a smoother and clearer image than interlaced scan.</p>
-<p>The aspect ratio of 480p resolution is usually 4:3, which means that the width of the screen is four thirds of the height. However, some widescreen formats, such as 16:9, can also use 480p resolution. The pixel dimensions of 480p resolution are typically 640 x 480 for 4:3 aspect ratio and 854 x 480 for 16:9 aspect ratio.</p>
-<p>alias season 1 480p download<br />
-alias tv series 480p download<br />
-alias 480p mkv download<br />
-alias 480p free download<br />
-alias 480p torrent download<br />
-alias 480p direct download<br />
-alias 480p google drive download<br />
-alias 480p mega.nz download<br />
-alias 480p english subtitles download<br />
-alias 480p all episodes download<br />
-alias season 2 480p download<br />
-alias season 3 480p download<br />
-alias season 4 480p download<br />
-alias season 5 480p download<br />
-alias complete series 480p download<br />
-alias industrial design software 480p download<br />
-alias autodesk free trial 480p download<br />
-alias autodesk tutorial 480p download<br />
-alias autodesk crack 480p download<br />
-alias autodesk keygen 480p download<br />
-alias youtube-dl quality selection 480p download<br />
-alias youtube-dl best format 480p download<br />
-alias youtube-dl command line 480p download<br />
-alias youtube-dl video downloader 480p download<br />
-alias youtube-dl ask ubuntu 480p download<br />
-vincenzo s01e01 alias 480p download<br />
-vincenzo s01e02 alias 480p download<br />
-vincenzo s01e03 alias 480p download<br />
-vincenzo s01e04 alias 480p download<br />
-vincenzo s01e05 alias 480p download<br />
-vincenzo s01e06 alias 480p download<br />
-vincenzo s01e07 alias 480p download<br />
-vincenzo s01e08 alias 480p download<br />
-vincenzo s01e09 alias 480p download<br />
-vincenzo s01e10 alias 480p download<br />
-vincenzo s01e11 alias 480p download<br />
-vincenzo s01e12 alias 480p download<br />
-vincenzo s01e13 alias 480p download<br />
-vincenzo s01e14 alias 480p download<br />
-vincenzo s01e15 alias 480p download<br />
-vincenzo s01e16 alias 480p download<br />
-vincenzo s01e17 alias 480p download<br />
-vincenzo s01e18 alias 480p download<br />
-vincenzo s01e19 alias 480p download<br />
-vincenzo s01e20 alias 480p download<br />
-vincenzo korean drama alias 480p download<br />
-vincenzo english subtitles alias 480p download<br />
-vincenzo internet archive alias 480p download<br />
-vincenzo mp4 format alias 480p download</p>
-<h3>The Benefits and Drawbacks of 480p Resolution</h3>
-<p>One of the main benefits of 480p resolution is that it requires less bandwidth and storage space than higher resolutions, such as 720p, 1080p, or 4K. This means that you can download and stream videos faster and easier with 480p resolution. It also means that you can fit more videos on your device or hard drive with 480p resolution.</p>
-<p>Another benefit of 480p resolution is that it is compatible with most devices and platforms, such as TVs, computers, smartphones, tablets, DVD players, and game consoles. You can watch videos in 480p resolution on almost any screen without worrying about compatibility issues or format conversions.</p>
-<p>However, 480p resolution also has some drawbacks. One of them is that it has lower image quality than higher resolutions, especially when viewed on larger screens or from close distances. You might notice pixelation, blurriness, or distortion when watching videos in 480p resolution on a big screen or a high-definition display. You might also miss some details or colors that are present in higher resolutions.</p>
-<p>Another drawback of 480p resolution is that it might not be suitable for some types of videos, such as those that have fast motion, complex graphics, or high contrast. These videos might look choppy, blurry, or noisy when viewed in 480p resolution. You might also experience some lagging or buffering when streaming these videos in 480p resolution.</p>
-<h2>How to Download Alias TV Series in 480p Resolution Using YouTube-DL</h2>
-<h3>What is YouTube-DL and How to Install It</h3>
-<p>YouTube-DL is a free and open-source command-line tool that allows you to download videos from YouTube and other websites. You can use it to download videos in various formats and resolutions, including 480p. You can also use it to download audio files, subtitles, playlists, channels, and live streams.</p>
-<p>To install YouTube-DL on your device, you need to follow these steps:</p>
-<ul>
-<li>Download the latest version of YouTube-DL from its official website: https://youtube-dl.org/</li>
-<li>Extract the zip file and save the youtube-dl.exe file in a folder of your choice.</li>
-<li>Add the folder to your system's PATH environment variable so that you can run YouTube-DL from any directory.</li>
-<li>Open a command prompt window and type youtube-dl --version to check if YouTube-DL is installed correctly.</li>
-</ul>
-<h3>How to Find and Select the Video Quality from YouTube-DL</h3>
-<p>To find and select the video quality from YouTube-DL, you need to follow these steps:</p>
-<ul>
-<li>Copy the URL of the video that you want to download from YouTube or any other website.</li>
-<li>Open a command prompt window and type youtube-dl -F [URL] to list all the available formats and resolutions for the video.</li>
-<li>Look for the format code that corresponds to the video quality that you want to download. For example, if you want to download Alias TV series in 480p resolution with MP4 format, you might look for something like this: <code>18 mp4 640x360 medium , avc1.42001E, mp4a.40.2@96k (best)</code></li>
-<li>Note down the format code (in this case, 18) for later use.</li>
-</ul>
-<h3>How to Download the Video Using YouTube-DL</h3>
-<p>To download the video using YouTube-DL, you need to follow these steps:</p>
-<ul>
-<li>Open a command prompt window and type youtube-dl -f [format code] [URL] to download the video with the selected format and resolution. For example, if you want to download Alias TV series in 480p resolution with MP4 format, you might type something like this: <code>youtube-dl -f 18 https://www.youtube.com/watch?v=0hX-YLAjI_A</code></li>
-<li>Wait for the download to finish. You can check the progress and speed of the download on the command prompt window.</li>
-<li>Find the downloaded video file in the same folder where you saved the youtube-dl.exe file. You can rename or move the file as you wish.</li>
-</ul>
-<h2>Conclusion</h2>
-<p>In this article, we have shown you how to download Alias TV series in 480p resolution using YouTube-DL. We have also given you some information about Alias TV series and why you should watch it, as well as 480p resolution and why you need it. We hope that you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below.</p>
-<h2>FAQs</h2>
-<h3>Q: Is YouTube-DL legal?</h3>
-<p>A: YouTube-DL is legal as long as you use it for personal and non-commercial purposes. However, downloading videos from YouTube or other websites might violate their terms of service or copyright laws, so you should always check the source and legality of the videos before downloading them.</p>
-<h3>Q: Can I use YouTube-DL to download videos from other websites besides YouTube?</h3>
-<p>A: Yes, YouTube-DL supports many other websites, such as Vimeo, Dailymotion, Facebook, Instagram, Twitter, and more. You can check the full list of supported websites here: https://ytdl-org.github.io/youtube-dl/supportedsites.html</p>
-<h3>Q: Can I use YouTube-DL to download videos in other resolutions besides 480p?</h3>
-<p>A: Yes, YouTube-DL can download videos in various resolutions, such as 240p, 360p, 720p, 1080p, or even 4K. You just need to find and select the appropriate format code from the list of available formats and resolutions for each video.</p>
-<h3>Q: Can I use YouTube-DL to download audio files or subtitles from videos?</h3>
-<p>A: Yes, YouTube-DL can download audio files or subtitles from videos. You can use the -x option to extract audio files from videos, or the --write-sub option to download subtitles from videos. You can also specify the format and language of the audio files or subtitles using other options. You can check the full list of options and examples here: https://github.com/ytdl-org/youtube-dl/blob/master/README.md#readme</p>
-<h3>Q: Can I use YouTube-DL to download playlists or channels from YouTube?</h3>
-<p>A: Yes, YouTube-DL can download playlists or channels from YouTube. You just need to copy and paste the URL of the playlist or channel instead of a single video. You can also use the --playlist-start and --playlist-end options to specify which videos from the playlist or channel you want to download.</p> 197e85843d<br />
-<br />
-<br />
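The deleted article's -F / -f workflow can also be driven from youtube-dl's Python API rather than the command line. A minimal sketch, assuming `pip install youtube-dl`; the URL and format code 18 are taken from the article's own example:

```python
import youtube_dl

URL = "https://www.youtube.com/watch?v=0hX-YLAjI_A"  # the article's example video

# Step 1: list the available formats (equivalent to `youtube-dl -F <URL>`).
with youtube_dl.YoutubeDL() as ydl:
    info = ydl.extract_info(URL, download=False)
    for f in info["formats"]:
        print(f["format_id"], f.get("ext"), f.get("format_note"))

# Step 2: download the chosen format (equivalent to `youtube-dl -f 18 <URL>`).
with youtube_dl.YoutubeDL({"format": "18"}) as ydl:
    ydl.download([URL])
```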
spaces/1phancelerku/anime-remove-background/Download Nicoo Free Fire Max APK and Enhance Your Gaming Experience.md
DELETED
@@ -1,101 +0,0 @@
-
-<h1>Nicoo Free Fire Max APK Download 2023: How to Get Free Skins and More</h1>
-<p>If you are a fan of the popular FPS game Free Fire, you might have heard of Nicoo Free Fire Max, a third-party app that allows you to customize your avatars with various skins and accessories. But what is Nicoo Free Fire Max exactly, and how can you download and use it safely? In this article, we will answer these questions and more, so keep reading!</p>
-<h2>What is Nicoo Free Fire Max?</h2>
-<p>Nicoo Free Fire Max is an action app developed by Naviemu.inc that works as a skin injector for Free Fire. It lets you unlock and apply different skins for your characters, weapons, vehicles, parachutes, and more. You can also change the background and theme of the game, as well as the interface and sound effects. With Nicoo Free Fire Max, you can personalize your gaming experience and make it more fun and unique.</p>
-<h2>nicoo free fire max apk download 2023</h2><br /><p><b><b>Download</b> ↔ <a href="https://jinyurl.com/2uNUcV">https://jinyurl.com/2uNUcV</a></b></p><br /><br />
-<h3>Features of Nicoo Free Fire Max</h3>
-<p>Some of the features that Nicoo Free Fire Max offers are:</p>
-<ul>
-<li>Free access to all skins in the game, including premium and exclusive ones.</li>
-<li>Easy to use interface with a simple tap-to-apply function.</li>
-<li>No need to root your device or modify the game files.</li>
-<li>Compatible with both Android and PC devices.</li>
-<li>Regular updates with new skins and features.</li>
-</ul>
-<h3>How to Download and Install Nicoo Free Fire Max APK</h3>
-<p>To download and install Nicoo Free Fire Max APK on your device, follow these steps:</p>
-<ol>
-<li>Go to the official website of Nicoo Free Fire Max or click on this link .</li>
-<li>Select the latest version of the app and click on the download button.</li>
-<li>Wait for the download to finish and then locate the APK file on your device.</li>
-<li>Enable the installation from unknown sources in your device settings.</li>
-<li>Tap on the APK file and follow the instructions to install the app.</li>
-<li>Launch the app and grant it the necessary permissions.</li>
-<li>Open Free Fire from the app and enjoy the free skins!</li>
-</ol>
-<h2>Why Use Nicoo Free Fire Max?</h2>
-<p>Nicoo Free Fire Max is a great app for those who want to spice up their gameplay with different skins and accessories. But what are the benefits and risks of using it?</p>
-<p>nicoo app for free fire skins 2023<br />
-how to install nicoo free fire max apk<br />
-nicoo free fire max latest version download<br />
-unlock all free fire skins with nicoo apk<br />
-nicoo free fire max mod apk 2023<br />
-nicoo app free fire bundle and weapons<br />
-download nicoo apk for android 5.0 and above<br />
-nicoo free fire max hack apk 2023<br />
-nicoo app secured source for free fire<br />
-nicoo free fire max apk no root 2023<br />
-nicoo app review and features for free fire<br />
-nicoo free fire max apk unlimited diamonds<br />
-nicoo app download link for free fire 2023<br />
-nicoo free fire max apk obb download<br />
-nicoo app tutorial and guide for free fire<br />
-nicoo free fire max apk update 2023<br />
-nicoo app support and feedback for free fire<br />
-nicoo free fire max apk online generator<br />
-nicoo app alternative and similar apps for free fire<br />
-nicoo free fire max apk offline installer<br />
-nicoo app benefits and advantages for free fire<br />
-nicoo free fire max apk compatible devices<br />
-nicoo app requirements and specifications for free fire<br />
-nicoo free fire max apk file size and format<br />
-nicoo app license and terms of service for free fire</p>
-<h3>Benefits of Using Nicoo Free Fire Max</h3>
-<p>Some of the benefits of using Nicoo Free Fire Max are:</p>
-<ul>
-<li>You can save money by not having to buy diamonds or coins to get skins in the game.</li>
-<li>You can impress your friends and enemies with your cool and stylish appearance.</li>
-<li>You can enhance your performance and confidence in the game with better skins.</li>
-<li>You can explore different combinations and styles with various skins.</li>
-</ul>
-<h3>Risks of Using Nicoo Free Fire Max</h3>
-<p>Some of the risks of using Nicoo Free Fire Max are:</p>
-<ul>
-<li>You might get banned from the game if you are detected by the anti-cheat system.</li>
-<li>You might expose your device to malware or viruses if you download from untrusted sources.</li>
-<li>You might lose your account data or personal information if you give them to fake or phishing websites.</li>
-<li>You might violate the terms and conditions of the game by using an unauthorized app.</li>
-</ul>
-<h2>Alternatives to Nicoo Free Fire Max</h2>
-<p>If you are not comfortable with using Nicoo Free Fire Max, or you want to try other apps that offer similar features, you can check out these alternatives:</p>
-<h3>Lulubox</h3>
-<p>Lulubox is another popular app that allows you to get free skins and mods for various games, including Free Fire. It also has a built-in game booster that can improve your device performance and battery life. You can download Lulubox from its official website or from the Google Play Store .</p>
-<h3>Tool Skin</h3>
-<p>Tool Skin is a simple and lightweight app that lets you change the skins of your characters, weapons, backpacks, and more in Free Fire. It has a user-friendly interface and a large collection of skins to choose from. You can download Tool Skin from its official website or from the Google Play Store .</p>
-<h2>Conclusion</h2>
-<p>Nicoo Free Fire Max is an app that can help you customize your Free Fire gameplay with various skins and accessories. It is easy to use and compatible with both Android and PC devices. However, it also comes with some risks, such as getting banned or infected by malware. Therefore, you should use it at your own discretion and with caution. Alternatively, you can try other apps like Lulubox or Tool Skin that offer similar features.</p>
-<h3>Summary of the article</h3>
-<p>In this article, we have discussed the following points:</p>
-<ul>
-<li>What is Nicoo Free Fire Max and what are its features?</li>
-<li>How to download and install Nicoo Free Fire Max APK on your device?</li>
-<li>Why use Nicoo Free Fire Max and what are the benefits and risks of using it?</li>
-<li>What are some alternatives to Nicoo Free Fire Max that you can try?</li>
-</ul>
-<h3>FAQs</h3>
-<p>Here are some frequently asked questions about Nicoo Free Fire Max:</p>
-<ol>
-<li><b>Is Nicoo Free Fire Max safe to use?</b><br>
-Nicoo Free Fire Max is not an official app from the developers of Free Fire, so it is not guaranteed to be safe or secure. You should only download it from trusted sources and scan it for viruses before installing it. You should also avoid giving your account details or personal information to any website or app that claims to be associated with Nicoo Free Fire Max.</li>
-<li><b>Is Nicoo Free Fire Max legal to use?</b><br>
-Nicoo Free Fire Max is not legal to use, as it violates the terms and conditions of Free Fire. Using it may result in your account being banned or suspended by the game authorities. You should only use it at your own risk and responsibility.</li>
-<li><b>Do other players see my skins when I use Nicoo Free Fire Max?</b><br>
-No, other players do not see your skins when you use Nicoo Free Fire Max. The skins are only visible to you on your device, as they are not part of the game data. Therefore, using Nicoo Free Fire Max does not give you any advantage or disadvantage over other players.</li>
-<li><b>Does Nicoo Free Fire Max work with Free Fire Max?</b><br>
-Yes, Nicoo Free Fire Max works with both Free Fire and Free Fire Max, as they are based on the same game engine. However, you may need to update the app regularly to match the latest version of the game.</li>
-<li><b>How can I contact the developers of Nicoo Free Fire Max?</b><br>
-You can contact the developers of Nicoo Free Fire Max by visiting their official website or by sending them an email at [email protected]. You can also follow them on their social media accounts for updates and news.</li>
-</ol></p> 401be4b1e0<br />
-<br />
-<br />
spaces/1toTree/lora_test/ppdiffusers/models/unet_1d_blocks.py
DELETED
@@ -1,668 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import math
-
-import paddle
-import paddle.nn.functional as F
-from paddle import nn
-
-from .resnet import Downsample1D, ResidualTemporalBlock1D, Upsample1D, rearrange_dims
-
-
-class DownResnetBlock1D(nn.Layer):
-    def __init__(
-        self,
-        in_channels,
-        out_channels=None,
-        num_layers=1,
-        conv_shortcut=False,
-        temb_channels=32,
-        groups=32,
-        groups_out=None,
-        non_linearity=None,
-        time_embedding_norm="default",
-        output_scale_factor=1.0,
-        add_downsample=True,
-    ):
-        super().__init__()
-        self.in_channels = in_channels
-        out_channels = in_channels if out_channels is None else out_channels
-        self.out_channels = out_channels
-        self.use_conv_shortcut = conv_shortcut
-        self.time_embedding_norm = time_embedding_norm
-        self.add_downsample = add_downsample
-        self.output_scale_factor = output_scale_factor
-
-        if groups_out is None:
-            groups_out = groups
-
-        # there will always be at least one resnet
-        resnets = [ResidualTemporalBlock1D(in_channels, out_channels, embed_dim=temb_channels)]
-
-        for _ in range(num_layers):
-            resnets.append(ResidualTemporalBlock1D(out_channels, out_channels, embed_dim=temb_channels))
-
-        self.resnets = nn.LayerList(resnets)
-
-        if non_linearity == "swish":
-            self.nonlinearity = lambda x: F.silu(x)
-        elif non_linearity == "mish":
-            self.nonlinearity = nn.Mish()
-        elif non_linearity == "silu":
-            self.nonlinearity = nn.Silu()
-        else:
-            self.nonlinearity = None
-
-        self.downsample = None
-        if add_downsample:
-            self.downsample = Downsample1D(out_channels, use_conv=True, padding=1)
-
-    def forward(self, hidden_states, temb=None):
-        output_states = ()
-
-        hidden_states = self.resnets[0](hidden_states, temb)
-        for resnet in self.resnets[1:]:
-            hidden_states = resnet(hidden_states, temb)
-
-        output_states += (hidden_states,)
-
-        if self.nonlinearity is not None:
-            hidden_states = self.nonlinearity(hidden_states)
-
-        if self.downsample is not None:
-            hidden_states = self.downsample(hidden_states)
-
-        return hidden_states, output_states
-
-
-class UpResnetBlock1D(nn.Layer):
-    def __init__(
-        self,
-        in_channels,
-        out_channels=None,
-        num_layers=1,
-        temb_channels=32,
-        groups=32,
-        groups_out=None,
-        non_linearity=None,
-        time_embedding_norm="default",
-        output_scale_factor=1.0,
-        add_upsample=True,
-    ):
-        super().__init__()
-        self.in_channels = in_channels
-        out_channels = in_channels if out_channels is None else out_channels
-        self.out_channels = out_channels
-        self.time_embedding_norm = time_embedding_norm
-        self.add_upsample = add_upsample
-        self.output_scale_factor = output_scale_factor
-
-        if groups_out is None:
-            groups_out = groups
-
-        # there will always be at least one resnet
-        resnets = [ResidualTemporalBlock1D(2 * in_channels, out_channels, embed_dim=temb_channels)]
-
-        for _ in range(num_layers):
-            resnets.append(ResidualTemporalBlock1D(out_channels, out_channels, embed_dim=temb_channels))
-
-        self.resnets = nn.LayerList(resnets)
-
-        if non_linearity == "swish":
-            self.nonlinearity = lambda x: F.silu(x)
-        elif non_linearity == "mish":
-            self.nonlinearity = nn.Mish()
-        elif non_linearity == "silu":
-            self.nonlinearity = nn.Silu()
-        else:
-            self.nonlinearity = None
-
-        self.upsample = None
-        if add_upsample:
-            self.upsample = Upsample1D(out_channels, use_conv_transpose=True)
-
-    def forward(self, hidden_states, res_hidden_states_tuple=None, temb=None):
-        if res_hidden_states_tuple is not None:
-            res_hidden_states = res_hidden_states_tuple[-1]
-            hidden_states = paddle.concat((hidden_states, res_hidden_states), axis=1)
-
-        hidden_states = self.resnets[0](hidden_states, temb)
-        for resnet in self.resnets[1:]:
-            hidden_states = resnet(hidden_states, temb)
-
-        if self.nonlinearity is not None:
-            hidden_states = self.nonlinearity(hidden_states)
-
-        if self.upsample is not None:
-            hidden_states = self.upsample(hidden_states)
-
-        return hidden_states
-
-
-class ValueFunctionMidBlock1D(nn.Layer):
-    def __init__(self, in_channels, out_channels, embed_dim):
-        super().__init__()
-        self.in_channels = in_channels
-        self.out_channels = out_channels
-        self.embed_dim = embed_dim
-
-        self.res1 = ResidualTemporalBlock1D(in_channels, in_channels // 2, embed_dim=embed_dim)
-        self.down1 = Downsample1D(out_channels // 2, use_conv=True)
-        self.res2 = ResidualTemporalBlock1D(in_channels // 2, in_channels // 4, embed_dim=embed_dim)
-        self.down2 = Downsample1D(out_channels // 4, use_conv=True)
-
-    def forward(self, x, temb=None):
-        x = self.res1(x, temb)
-        x = self.down1(x)
-        x = self.res2(x, temb)
-        x = self.down2(x)
-        return x
-
-
-class MidResTemporalBlock1D(nn.Layer):
-    def __init__(
-        self,
-        in_channels,
-        out_channels,
-        embed_dim,
-        num_layers: int = 1,
-        add_downsample: bool = False,
-        add_upsample: bool = False,
-        non_linearity=None,
-    ):
-        super().__init__()
-        self.in_channels = in_channels
-        self.out_channels = out_channels
-        self.add_downsample = add_downsample
-
-        # there will always be at least one resnet
-        resnets = [ResidualTemporalBlock1D(in_channels, out_channels, embed_dim=embed_dim)]
-
-        for _ in range(num_layers):
-            resnets.append(ResidualTemporalBlock1D(out_channels, out_channels, embed_dim=embed_dim))
-
-        self.resnets = nn.LayerList(resnets)
-
-        if non_linearity == "swish":
-            self.nonlinearity = lambda x: F.silu(x)
-        elif non_linearity == "mish":
-            self.nonlinearity = nn.Mish()
-        elif non_linearity == "silu":
-            self.nonlinearity = nn.Silu()
-        else:
-            self.nonlinearity = None
-
-        self.upsample = None
-        if add_upsample:
-            self.upsample = Downsample1D(out_channels, use_conv=True)
-
-        self.downsample = None
-        if add_downsample:
-            self.downsample = Downsample1D(out_channels, use_conv=True)
-
-        if self.upsample and self.downsample:
-            raise ValueError("Block cannot downsample and upsample")
-
-    def forward(self, hidden_states, temb):
-        hidden_states = self.resnets[0](hidden_states, temb)
-        for resnet in self.resnets[1:]:
-            hidden_states = resnet(hidden_states, temb)
-
-        if self.upsample:
-            hidden_states = self.upsample(hidden_states)
-        if self.downsample:
-            self.downsample = self.downsample(hidden_states)
-
-        return hidden_states
-
-
-class OutConv1DBlock(nn.Layer):
-    def __init__(self, num_groups_out, out_channels, embed_dim, act_fn):
-        super().__init__()
-        self.final_conv1d_1 = nn.Conv1D(embed_dim, embed_dim, 5, padding=2)
-        self.final_conv1d_gn = nn.GroupNorm(num_groups_out, embed_dim)
-        if act_fn == "silu":
-            self.final_conv1d_act = nn.Silu()
-        if act_fn == "mish":
-            self.final_conv1d_act = nn.Mish()
-        self.final_conv1d_2 = nn.Conv1D(embed_dim, out_channels, 1)
-
-    def forward(self, hidden_states, temb=None):
-        hidden_states = self.final_conv1d_1(hidden_states)
-        hidden_states = rearrange_dims(hidden_states)
-        hidden_states = self.final_conv1d_gn(hidden_states)
-        hidden_states = rearrange_dims(hidden_states)
-        hidden_states = self.final_conv1d_act(hidden_states)
-        hidden_states = self.final_conv1d_2(hidden_states)
-        return hidden_states
-
-
-class OutValueFunctionBlock(nn.Layer):
-    def __init__(self, fc_dim, embed_dim):
-        super().__init__()
-        self.final_block = nn.LayerList(
-            [
-                nn.Linear(fc_dim + embed_dim, fc_dim // 2),
-                nn.Mish(),
-                nn.Linear(fc_dim // 2, 1),
-            ]
-        )
-
-    def forward(self, hidden_states, temb):
-        hidden_states = hidden_states.reshape([hidden_states.shape[0], -1])
-        hidden_states = paddle.concat((hidden_states, temb), axis=-1)
-        for layer in self.final_block:
-            hidden_states = layer(hidden_states)
-
-        return hidden_states
-
-
-_kernels = {
-    "linear": [1 / 8, 3 / 8, 3 / 8, 1 / 8],
-    "cubic": [-0.01171875, -0.03515625, 0.11328125, 0.43359375, 0.43359375, 0.11328125, -0.03515625, -0.01171875],
-    "lanczos3": [
-        0.003689131001010537,
-        0.015056144446134567,
-        -0.03399861603975296,
-        -0.066637322306633,
-        0.13550527393817902,
-        0.44638532400131226,
-        0.44638532400131226,
-        0.13550527393817902,
-        -0.066637322306633,
-        -0.03399861603975296,
-        0.015056144446134567,
-        0.003689131001010537,
-    ],
-}
-
-
-class Downsample1d(nn.Layer):
-    def __init__(self, kernel="linear", pad_mode="reflect"):
-        super().__init__()
-        self.pad_mode = pad_mode
-        kernel_1d = paddle.to_tensor(_kernels[kernel])
-        self.pad = kernel_1d.shape[0] // 2 - 1
-        self.register_buffer("kernel", kernel_1d)
-
-    def forward(self, hidden_states):
-        hidden_states = F.pad(hidden_states, (self.pad,) * 2, self.pad_mode, data_format="NCL")
-        weight = paddle.zeros([hidden_states.shape[1], hidden_states.shape[1], self.kernel.shape[0]])
-        indices = paddle.arange(hidden_states.shape[1])
-        weight[indices, indices] = self.kernel.cast(weight.dtype)
-        return F.conv1d(hidden_states, weight, stride=2)
-
-
-class Upsample1d(nn.Layer):
-    def __init__(self, kernel="linear", pad_mode="reflect"):
-        super().__init__()
-        self.pad_mode = pad_mode
-        kernel_1d = paddle.to_tensor(_kernels[kernel]) * 2
-        self.pad = kernel_1d.shape[0] // 2 - 1
-        self.register_buffer("kernel", kernel_1d)
-
-    def forward(self, hidden_states, temb=None):
-        hidden_states = F.pad(hidden_states, ((self.pad + 1) // 2,) * 2, self.pad_mode, data_format="NCL")
-        weight = paddle.zeros([hidden_states.shape[1], hidden_states.shape[1], self.kernel.shape[0]])
-        indices = paddle.arange(hidden_states.shape[1])
-        weight[indices, indices] = self.kernel.cast(weight.dtype)
-        return F.conv1d_transpose(hidden_states, weight, stride=2, padding=self.pad * 2 + 1)
-
-
-class SelfAttention1d(nn.Layer):
-    def __init__(self, in_channels, n_head=1, dropout_rate=0.0):
-        super().__init__()
-        self.channels = in_channels
-        self.group_norm = nn.GroupNorm(1, num_channels=in_channels)
-        self.num_heads = n_head
-
-        self.query = nn.Linear(self.channels, self.channels)
-        self.key = nn.Linear(self.channels, self.channels)
-        self.value = nn.Linear(self.channels, self.channels)
-
-        self.proj_attn = nn.Linear(self.channels, self.channels)
-
-        self.dropout = nn.Dropout(dropout_rate)
-
-    # (TODO junnyu) refactor self attention
-    def transpose_for_scores(self, projection: paddle.Tensor) -> paddle.Tensor:
-        new_projection_shape = projection.shape[:-1] + [self.num_heads, -1]
-        # move heads to 2nd position (B, T, H * D) -> (B, T, H, D) -> (B, H, T, D)
-        new_projection = projection.reshape(new_projection_shape).transpose([0, 2, 1, 3])
-        return new_projection
-
-    def forward(self, hidden_states):
-        residual = hidden_states
-
-        hidden_states = self.group_norm(hidden_states)
-        hidden_states = hidden_states.transpose([0, 2, 1])
-
-        query_proj = self.query(hidden_states)
-        key_proj = self.key(hidden_states)
-        value_proj = self.value(hidden_states)
-
-        query_states = self.transpose_for_scores(query_proj)
-        key_states = self.transpose_for_scores(key_proj)
-        value_states = self.transpose_for_scores(value_proj)
-
-        scale = 1 / math.sqrt(math.sqrt(key_states.shape[-1]))
-
-        attention_scores = paddle.matmul(query_states * scale, key_states * scale, transpose_y=True)
-        attention_probs = F.softmax(attention_scores, axis=-1)
-
-        # compute attention output
-        hidden_states = paddle.matmul(attention_probs, value_states)
-
-        hidden_states = hidden_states.transpose([0, 2, 1, 3])
-        new_hidden_states_shape = hidden_states.shape[:-2] + [
-            self.channels,
-        ]
-        hidden_states = hidden_states.reshape(new_hidden_states_shape)
-
-        # compute next hidden_states
-        hidden_states = self.proj_attn(hidden_states)
-        hidden_states = hidden_states.transpose([0, 2, 1])
-        hidden_states = self.dropout(hidden_states)
-        output = hidden_states + residual
-
-        return output
-
-
-class ResConvBlock(nn.Layer):
-    def __init__(self, in_channels, mid_channels, out_channels, is_last=False):
-        super().__init__()
-        self.is_last = is_last
-        self.has_conv_skip = in_channels != out_channels
-
-        if self.has_conv_skip:
-            self.conv_skip = nn.Conv1D(in_channels, out_channels, 1, bias_attr=False)
-
-        self.conv_1 = nn.Conv1D(in_channels, mid_channels, 5, padding=2)
-        self.group_norm_1 = nn.GroupNorm(1, mid_channels)
-        self.gelu_1 = nn.GELU()
-        self.conv_2 = nn.Conv1D(mid_channels, out_channels, 5, padding=2)
-
-        if not self.is_last:
-            self.group_norm_2 = nn.GroupNorm(1, out_channels)
-            self.gelu_2 = nn.GELU()
-
-    def forward(self, hidden_states):
-        residual = self.conv_skip(hidden_states) if self.has_conv_skip else hidden_states
-
-        hidden_states = self.conv_1(hidden_states)
-        hidden_states = self.group_norm_1(hidden_states)
-        hidden_states = self.gelu_1(hidden_states)
-        hidden_states = self.conv_2(hidden_states)
-
-        if not self.is_last:
-            hidden_states = self.group_norm_2(hidden_states)
-            hidden_states = self.gelu_2(hidden_states)
-
-        output = hidden_states + residual
-        return output
-
-
-class UNetMidBlock1D(nn.Layer):
-    def __init__(self, mid_channels, in_channels, out_channels=None):
-        super().__init__()
-
-        out_channels = in_channels if out_channels is None else out_channels
-
-        # there is always at least one resnet
-        self.down = Downsample1d("cubic")
-        resnets = [
-            ResConvBlock(in_channels, mid_channels, mid_channels),
-            ResConvBlock(mid_channels, mid_channels, mid_channels),
-            ResConvBlock(mid_channels, mid_channels, mid_channels),
-            ResConvBlock(mid_channels, mid_channels, mid_channels),
-            ResConvBlock(mid_channels, mid_channels, mid_channels),
-            ResConvBlock(mid_channels, mid_channels, out_channels),
-        ]
-        attentions = [
-            SelfAttention1d(mid_channels, mid_channels // 32),
-            SelfAttention1d(mid_channels, mid_channels // 32),
-            SelfAttention1d(mid_channels, mid_channels // 32),
-            SelfAttention1d(mid_channels, mid_channels // 32),
-            SelfAttention1d(mid_channels, mid_channels // 32),
-            SelfAttention1d(out_channels, out_channels // 32),
-        ]
-        self.up = Upsample1d(kernel="cubic")
-
-        self.attentions = nn.LayerList(attentions)
-        self.resnets = nn.LayerList(resnets)
-
-    def forward(self, hidden_states, temb=None):
-        hidden_states = self.down(hidden_states)
-        for attn, resnet in zip(self.attentions, self.resnets):
-            hidden_states = resnet(hidden_states)
-            hidden_states = attn(hidden_states)
-
-        hidden_states = self.up(hidden_states)
-
-        return hidden_states
-
-
-class AttnDownBlock1D(nn.Layer):
-    def __init__(self, out_channels, in_channels, mid_channels=None):
-        super().__init__()
-        mid_channels = out_channels if mid_channels is None else mid_channels
-
-        self.down = Downsample1d("cubic")
-        resnets = [
-            ResConvBlock(in_channels, mid_channels, mid_channels),
-            ResConvBlock(mid_channels, mid_channels, mid_channels),
-            ResConvBlock(mid_channels, mid_channels, out_channels),
-        ]
-        attentions = [
-            SelfAttention1d(mid_channels, mid_channels // 32),
-            SelfAttention1d(mid_channels, mid_channels // 32),
-            SelfAttention1d(out_channels, out_channels // 32),
-        ]
-
-        self.attentions = nn.LayerList(attentions)
-        self.resnets = nn.LayerList(resnets)
-
-    def forward(self, hidden_states, temb=None):
-        hidden_states = self.down(hidden_states)
-
-        for resnet, attn in zip(self.resnets, self.attentions):
-            hidden_states = resnet(hidden_states)
-            hidden_states = attn(hidden_states)
-
-        return hidden_states, (hidden_states,)
-
-
-class DownBlock1D(nn.Layer):
-    def __init__(self, out_channels, in_channels, mid_channels=None):
-        super().__init__()
-        mid_channels = out_channels if mid_channels is None else mid_channels
-
-        self.down = Downsample1d("cubic")
-        resnets = [
-            ResConvBlock(in_channels, mid_channels, mid_channels),
-            ResConvBlock(mid_channels, mid_channels, mid_channels),
-            ResConvBlock(mid_channels, mid_channels, out_channels),
-        ]
-
-        self.resnets = nn.LayerList(resnets)
-
-    def forward(self, hidden_states, temb=None):
-        hidden_states = self.down(hidden_states)
-
-        for resnet in self.resnets:
-            hidden_states = resnet(hidden_states)
-
-        return hidden_states, (hidden_states,)
-
-
-class DownBlock1DNoSkip(nn.Layer):
-    def __init__(self, out_channels, in_channels, mid_channels=None):
-        super().__init__()
-        mid_channels = out_channels if mid_channels is None else mid_channels
-
-        resnets = [
|
516 |
-
ResConvBlock(in_channels, mid_channels, mid_channels),
|
517 |
-
ResConvBlock(mid_channels, mid_channels, mid_channels),
|
518 |
-
ResConvBlock(mid_channels, mid_channels, out_channels),
|
519 |
-
]
|
520 |
-
|
521 |
-
self.resnets = nn.LayerList(resnets)
|
522 |
-
|
523 |
-
def forward(self, hidden_states, temb=None):
|
524 |
-
hidden_states = paddle.concat([hidden_states, temb], axis=1)
|
525 |
-
for resnet in self.resnets:
|
526 |
-
hidden_states = resnet(hidden_states)
|
527 |
-
|
528 |
-
return hidden_states, (hidden_states,)
|
529 |
-
|
530 |
-
|
531 |
-
class AttnUpBlock1D(nn.Layer):
|
532 |
-
def __init__(self, in_channels, out_channels, mid_channels=None):
|
533 |
-
super().__init__()
|
534 |
-
mid_channels = out_channels if mid_channels is None else mid_channels
|
535 |
-
|
536 |
-
resnets = [
|
537 |
-
ResConvBlock(2 * in_channels, mid_channels, mid_channels),
|
538 |
-
ResConvBlock(mid_channels, mid_channels, mid_channels),
|
539 |
-
ResConvBlock(mid_channels, mid_channels, out_channels),
|
540 |
-
]
|
541 |
-
attentions = [
|
542 |
-
SelfAttention1d(mid_channels, mid_channels // 32),
|
543 |
-
SelfAttention1d(mid_channels, mid_channels // 32),
|
544 |
-
SelfAttention1d(out_channels, out_channels // 32),
|
545 |
-
]
|
546 |
-
|
547 |
-
self.attentions = nn.LayerList(attentions)
|
548 |
-
self.resnets = nn.LayerList(resnets)
|
549 |
-
self.up = Upsample1d(kernel="cubic")
|
550 |
-
|
551 |
-
def forward(self, hidden_states, res_hidden_states_tuple, temb=None):
|
552 |
-
res_hidden_states = res_hidden_states_tuple[-1]
|
553 |
-
hidden_states = paddle.concat([hidden_states, res_hidden_states], axis=1)
|
554 |
-
|
555 |
-
for resnet, attn in zip(self.resnets, self.attentions):
|
556 |
-
hidden_states = resnet(hidden_states)
|
557 |
-
hidden_states = attn(hidden_states)
|
558 |
-
|
559 |
-
hidden_states = self.up(hidden_states)
|
560 |
-
|
561 |
-
return hidden_states
|
562 |
-
|
563 |
-
|
564 |
-
class UpBlock1D(nn.Layer):
|
565 |
-
def __init__(self, in_channels, out_channels, mid_channels=None):
|
566 |
-
super().__init__()
|
567 |
-
mid_channels = in_channels if mid_channels is None else mid_channels
|
568 |
-
|
569 |
-
resnets = [
|
570 |
-
ResConvBlock(2 * in_channels, mid_channels, mid_channels),
|
571 |
-
ResConvBlock(mid_channels, mid_channels, mid_channels),
|
572 |
-
ResConvBlock(mid_channels, mid_channels, out_channels),
|
573 |
-
]
|
574 |
-
|
575 |
-
self.resnets = nn.LayerList(resnets)
|
576 |
-
self.up = Upsample1d(kernel="cubic")
|
577 |
-
|
578 |
-
def forward(self, hidden_states, res_hidden_states_tuple, temb=None):
|
579 |
-
res_hidden_states = res_hidden_states_tuple[-1]
|
580 |
-
hidden_states = paddle.concat([hidden_states, res_hidden_states], axis=1)
|
581 |
-
for resnet in self.resnets:
|
582 |
-
hidden_states = resnet(hidden_states)
|
583 |
-
|
584 |
-
hidden_states = self.up(hidden_states)
|
585 |
-
|
586 |
-
return hidden_states
|
587 |
-
|
588 |
-
|
589 |
-
class UpBlock1DNoSkip(nn.Layer):
|
590 |
-
def __init__(self, in_channels, out_channels, mid_channels=None):
|
591 |
-
super().__init__()
|
592 |
-
mid_channels = in_channels if mid_channels is None else mid_channels
|
593 |
-
|
594 |
-
resnets = [
|
595 |
-
ResConvBlock(2 * in_channels, mid_channels, mid_channels),
|
596 |
-
ResConvBlock(mid_channels, mid_channels, mid_channels),
|
597 |
-
ResConvBlock(mid_channels, mid_channels, out_channels, is_last=True),
|
598 |
-
]
|
599 |
-
|
600 |
-
self.resnets = nn.LayerList(resnets)
|
601 |
-
|
602 |
-
def forward(self, hidden_states, res_hidden_states_tuple, temb=None):
|
603 |
-
res_hidden_states = res_hidden_states_tuple[-1]
|
604 |
-
hidden_states = paddle.concat([hidden_states, res_hidden_states], axis=1)
|
605 |
-
for resnet in self.resnets:
|
606 |
-
hidden_states = resnet(hidden_states)
|
607 |
-
|
608 |
-
return hidden_states
|
609 |
-
|
610 |
-
|
611 |
-
def get_down_block(down_block_type, num_layers, in_channels, out_channels, temb_channels, add_downsample):
|
612 |
-
if down_block_type == "DownResnetBlock1D":
|
613 |
-
return DownResnetBlock1D(
|
614 |
-
in_channels=in_channels,
|
615 |
-
num_layers=num_layers,
|
616 |
-
out_channels=out_channels,
|
617 |
-
temb_channels=temb_channels,
|
618 |
-
add_downsample=add_downsample,
|
619 |
-
)
|
620 |
-
elif down_block_type == "DownBlock1D":
|
621 |
-
return DownBlock1D(out_channels=out_channels, in_channels=in_channels)
|
622 |
-
elif down_block_type == "AttnDownBlock1D":
|
623 |
-
return AttnDownBlock1D(out_channels=out_channels, in_channels=in_channels)
|
624 |
-
elif down_block_type == "DownBlock1DNoSkip":
|
625 |
-
return DownBlock1DNoSkip(out_channels=out_channels, in_channels=in_channels)
|
626 |
-
raise ValueError(f"{down_block_type} does not exist.")
|
627 |
-
|
628 |
-
|
629 |
-
def get_up_block(up_block_type, num_layers, in_channels, out_channels, temb_channels, add_upsample):
|
630 |
-
if up_block_type == "UpResnetBlock1D":
|
631 |
-
return UpResnetBlock1D(
|
632 |
-
in_channels=in_channels,
|
633 |
-
num_layers=num_layers,
|
634 |
-
out_channels=out_channels,
|
635 |
-
temb_channels=temb_channels,
|
636 |
-
add_upsample=add_upsample,
|
637 |
-
)
|
638 |
-
elif up_block_type == "UpBlock1D":
|
639 |
-
return UpBlock1D(in_channels=in_channels, out_channels=out_channels)
|
640 |
-
elif up_block_type == "AttnUpBlock1D":
|
641 |
-
return AttnUpBlock1D(in_channels=in_channels, out_channels=out_channels)
|
642 |
-
elif up_block_type == "UpBlock1DNoSkip":
|
643 |
-
return UpBlock1DNoSkip(in_channels=in_channels, out_channels=out_channels)
|
644 |
-
raise ValueError(f"{up_block_type} does not exist.")
|
645 |
-
|
646 |
-
|
647 |
-
def get_mid_block(mid_block_type, num_layers, in_channels, mid_channels, out_channels, embed_dim, add_downsample):
|
648 |
-
if mid_block_type == "MidResTemporalBlock1D":
|
649 |
-
return MidResTemporalBlock1D(
|
650 |
-
num_layers=num_layers,
|
651 |
-
in_channels=in_channels,
|
652 |
-
out_channels=out_channels,
|
653 |
-
embed_dim=embed_dim,
|
654 |
-
add_downsample=add_downsample,
|
655 |
-
)
|
656 |
-
elif mid_block_type == "ValueFunctionMidBlock1D":
|
657 |
-
return ValueFunctionMidBlock1D(in_channels=in_channels, out_channels=out_channels, embed_dim=embed_dim)
|
658 |
-
elif mid_block_type == "UNetMidBlock1D":
|
659 |
-
return UNetMidBlock1D(in_channels=in_channels, mid_channels=mid_channels, out_channels=out_channels)
|
660 |
-
raise ValueError(f"{mid_block_type} does not exist.")
|
661 |
-
|
662 |
-
|
663 |
-
def get_out_block(*, out_block_type, num_groups_out, embed_dim, out_channels, act_fn, fc_dim):
|
664 |
-
if out_block_type == "OutConv1DBlock":
|
665 |
-
return OutConv1DBlock(num_groups_out, out_channels, embed_dim, act_fn)
|
666 |
-
elif out_block_type == "ValueFunction":
|
667 |
-
return OutValueFunctionBlock(fc_dim, embed_dim)
|
668 |
-
return None
|
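A note on the attention block above: scaling both the query and the key by 1 / d**0.25 before the matmul is arithmetically the same as the standard 1 / sqrt(d) scaling of the logits, split across the two operands to keep intermediate magnitudes smaller. Below is a minimal sketch, not part of the diff, of driving the mid block on a random tensor; it assumes a ppdiffusers checkout of this vintage is importable and that Downsample1d/Upsample1d halve and double the sequence length, and the sizes are illustrative.

import paddle
from ppdiffusers.models.unet_1d_blocks import UNetMidBlock1D

# Inputs are channels-first (batch, channels, length), matching nn.Conv1D.
block = UNetMidBlock1D(mid_channels=64, in_channels=32, out_channels=32)
x = paddle.randn([2, 32, 128])
y = block(x)    # cubic downsample -> 6x (ResConvBlock + SelfAttention1d) -> cubic upsample
print(y.shape)  # expected [2, 32, 128]: the down/up pair should restore the length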
spaces/AIFILMS/StyleGANEX/scripts/inference.py
DELETED
@@ -1,136 +0,0 @@
-import os
-from argparse import Namespace
-
-from tqdm import tqdm
-import time
-import numpy as np
-import torch
-from PIL import Image
-from torch.utils.data import DataLoader
-import sys
-
-sys.path.append(".")
-sys.path.append("..")
-
-from configs import data_configs
-from datasets.inference_dataset import InferenceDataset
-from utils.common import tensor2im, log_input_image
-from options.test_options import TestOptions
-from models.psp import pSp
-
-
-def run():
-    test_opts = TestOptions().parse()
-
-    if test_opts.resize_factors is not None:
-        assert len(
-            test_opts.resize_factors.split(',')) == 1, "When running inference, provide a single downsampling factor!"
-        out_path_results = os.path.join(test_opts.exp_dir, 'inference_results',
-                                        'downsampling_{}'.format(test_opts.resize_factors))
-        out_path_coupled = os.path.join(test_opts.exp_dir, 'inference_coupled',
-                                        'downsampling_{}'.format(test_opts.resize_factors))
-    else:
-        out_path_results = os.path.join(test_opts.exp_dir, 'inference_results')
-        out_path_coupled = os.path.join(test_opts.exp_dir, 'inference_coupled')
-
-    os.makedirs(out_path_results, exist_ok=True)
-    os.makedirs(out_path_coupled, exist_ok=True)
-
-    # update test options with options used during training
-    ckpt = torch.load(test_opts.checkpoint_path, map_location='cpu')
-    opts = ckpt['opts']
-    opts.update(vars(test_opts))
-    if 'learn_in_w' not in opts:
-        opts['learn_in_w'] = False
-    if 'output_size' not in opts:
-        opts['output_size'] = 1024
-    opts = Namespace(**opts)
-
-    net = pSp(opts)
-    net.eval()
-    net.cuda()
-
-    print('Loading dataset for {}'.format(opts.dataset_type))
-    dataset_args = data_configs.DATASETS[opts.dataset_type]
-    transforms_dict = dataset_args['transforms'](opts).get_transforms()
-    dataset = InferenceDataset(root=opts.data_path,
-                               transform=transforms_dict['transform_inference'],
-                               opts=opts)
-    dataloader = DataLoader(dataset,
-                            batch_size=opts.test_batch_size,
-                            shuffle=False,
-                            num_workers=int(opts.test_workers),
-                            drop_last=True)
-
-    if opts.n_images is None:
-        opts.n_images = len(dataset)
-
-    global_i = 0
-    global_time = []
-    for input_batch in tqdm(dataloader):
-        if global_i >= opts.n_images:
-            break
-        with torch.no_grad():
-            input_cuda = input_batch.cuda().float()
-            tic = time.time()
-            result_batch = run_on_batch(input_cuda, net, opts)
-            toc = time.time()
-            global_time.append(toc - tic)
-
-        for i in range(opts.test_batch_size):
-            result = tensor2im(result_batch[i])
-            im_path = dataset.paths[global_i]
-
-            if opts.couple_outputs or global_i % 100 == 0:
-                input_im = log_input_image(input_batch[i], opts)
-                resize_amount = (256, 256) if opts.resize_outputs else (opts.output_size, opts.output_size)
-                if opts.resize_factors is not None:
-                    # for super resolution, save the original, down-sampled, and output
-                    source = Image.open(im_path)
-                    res = np.concatenate([np.array(source.resize(resize_amount)),
-                                          np.array(input_im.resize(resize_amount, resample=Image.NEAREST)),
-                                          np.array(result.resize(resize_amount))], axis=1)
-                else:
-                    # otherwise, save the original and output
-                    res = np.concatenate([np.array(input_im.resize(resize_amount)),
-                                          np.array(result.resize(resize_amount))], axis=1)
-                Image.fromarray(res).save(os.path.join(out_path_coupled, os.path.basename(im_path)))
-
-            im_save_path = os.path.join(out_path_results, os.path.basename(im_path))
-            Image.fromarray(np.array(result)).save(im_save_path)
-
-            global_i += 1
-
-    stats_path = os.path.join(opts.exp_dir, 'stats.txt')
-    result_str = 'Runtime {:.4f}+-{:.4f}'.format(np.mean(global_time), np.std(global_time))
-    print(result_str)
-
-    with open(stats_path, 'w') as f:
-        f.write(result_str)
-
-
-def run_on_batch(inputs, net, opts):
-    if opts.latent_mask is None:
-        result_batch = net(inputs, randomize_noise=False, resize=opts.resize_outputs)
-    else:
-        latent_mask = [int(l) for l in opts.latent_mask.split(",")]
-        result_batch = []
-        for image_idx, input_image in enumerate(inputs):
-            # get latent vector to inject into our input image
-            vec_to_inject = np.random.randn(1, 512).astype('float32')
-            _, latent_to_inject = net(torch.from_numpy(vec_to_inject).to("cuda"),
-                                      input_code=True,
-                                      return_latents=True)
-            # get output image with injected style vector
-            res = net(input_image.unsqueeze(0).to("cuda").float(),
-                      latent_mask=latent_mask,
-                      inject_latent=latent_to_inject,
-                      alpha=opts.mix_alpha,
-                      resize=opts.resize_outputs)
-            result_batch.append(res)
-        result_batch = torch.cat(result_batch, dim=0)
-    return result_batch
-
-
-if __name__ == '__main__':
-    run()
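The interesting path in run_on_batch above is the latent_mask branch: for each input image it samples a fresh 512-dim code, runs it through the network in input_code mode to obtain a latent, and injects that latent into the layers listed in latent_mask with blend factor mix_alpha. A condensed sketch of that logic for a single image follows; mix_single is a hypothetical helper, and net is assumed to be the pSp model loaded as in run().

import numpy as np
import torch

def mix_single(net, image, latent_mask, alpha):
    # sample a random style code and map it to the latent that will be injected
    z = torch.from_numpy(np.random.randn(1, 512).astype('float32')).to('cuda')
    _, latent_to_inject = net(z, input_code=True, return_latents=True)
    # decode the image, blending the injected latent into the masked layers
    return net(image.unsqueeze(0).to('cuda').float(),
               latent_mask=latent_mask,
               inject_latent=latent_to_inject,
               alpha=alpha)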
spaces/AIFILMS/audioldm-text-to-audio-generation/README.md
DELETED
@@ -1,22 +0,0 @@
----
-title: Audioldm Text To Audio Generation
-emoji: 🔊
-colorFrom: indigo
-colorTo: red
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
-license: bigscience-openrail-m
-duplicated_from: haoheliu/audioldm-text-to-audio-generation
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
-
-## Reference
-Part of the code from this repo is borrowed from the following repos. We would like to thank the authors of them for their contribution.
-
-> https://github.com/LAION-AI/CLAP
-> https://github.com/CompVis/stable-diffusion
-> https://github.com/v-iashin/SpecVQGAN
-> https://github.com/toshas/torch-fidelity
spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/txt_processors/__init__.py
DELETED
@@ -1 +0,0 @@
-from . import en, zh, zh_aishell_no_tone_sing
spaces/AIZero2Hero4Health/8-NLPSimilarityHeatmapCluster-SL/app.py
DELETED
@@ -1,77 +0,0 @@
-import streamlit as st
-import nltk
-from transformers import pipeline
-from sentence_transformers import SentenceTransformer
-from scipy.spatial.distance import cosine
-import numpy as np
-import seaborn as sns
-import matplotlib.pyplot as plt
-from sklearn.cluster import KMeans
-import tensorflow as tf
-import tensorflow_hub as hub
-
-
-def cluster_examples(messages, embed, nc=3):
-    km = KMeans(
-        n_clusters=nc, init='random',
-        n_init=10, max_iter=300,
-        tol=1e-04, random_state=0
-    )
-    km = km.fit_predict(embed)
-    for n in range(nc):
-        idxs = [i for i in range(len(km)) if km[i] == n]
-        ms = [messages[i] for i in idxs]
-        st.markdown ("CLUSTER : %d"%n)
-        for m in ms:
-            st.markdown (m)
-
-
-def plot_heatmap(labels, heatmap, rotation=90):
-    sns.set(font_scale=1.2)
-    fig, ax = plt.subplots()
-    g = sns.heatmap(
-        heatmap,
-        xticklabels=labels,
-        yticklabels=labels,
-        vmin=-1,
-        vmax=1,
-        cmap="coolwarm")
-    g.set_xticklabels(labels, rotation=rotation)
-    g.set_title("Textual Similarity")
-
-    st.pyplot(fig)
-    #plt.show()
-
-#st.header("Sentence Similarity Demo")
-
-# Streamlit text boxes
-text = st.text_area('Enter sentences:', value="Self confidence in outcomes helps us win and to make us successful.\nShe has a seriously impressive intellect and mind.\nStimulating and deep conversation helps us develop and grow.\nFrom basic quantum particles we get aerodynamics, friction, surface tension, weather, electromagnetism.\nIf she actively engages and comments positively, her anger disappears adapting into win-win's favor.\nI love interesting topics of conversation and the understanding and exploration of thoughts.\nThere is the ability to manipulate things the way you want in your mind to go how you want when you are self confident, that we don’t understand yet.")
-
-nc = st.slider('Select a number of clusters:', min_value=1, max_value=15, value=3)
-
-model_type = st.radio("Choose model:", ('Sentence Transformer', 'Universal Sentence Encoder'), index=0)
-
-# Model setup
-if model_type == "Sentence Transformer":
-    model = SentenceTransformer('paraphrase-distilroberta-base-v1')
-elif model_type == "Universal Sentence Encoder":
-    model_url = "https://tfhub.dev/google/universal-sentence-encoder-large/5"
-    model = hub.load(model_url)
-
-nltk.download('punkt')
-
-# Run model
-if text:
-    sentences = nltk.tokenize.sent_tokenize(text)
-    if model_type == "Sentence Transformer":
-        embed = model.encode(sentences)
-    elif model_type == "Universal Sentence Encoder":
-        embed = model(sentences).numpy()
-    sim = np.zeros([len(embed), len(embed)])
-    for i,em in enumerate(embed):
-        for j,ea in enumerate(embed):
-            sim[i][j] = 1.0-cosine(em,ea)
-    st.subheader("Similarity Heatmap")
-    plot_heatmap(sentences, sim)
-    st.subheader("Results from K-Means Clustering")
-    cluster_examples(sentences, embed, nc)
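One observation on the app above: the pairwise similarity matrix is built with a Python double loop over scipy's cosine distance, which costs O(n^2) separate calls. An equivalent vectorized version, not in the deleted file, normalizes the embedding matrix once and takes a single matrix product:

import numpy as np

def cosine_sim_matrix(embed):
    e = np.asarray(embed, dtype=np.float64)
    e = e / np.linalg.norm(e, axis=1, keepdims=True)  # unit-length rows
    return e @ e.T  # sim[i, j] = cosine similarity of sentences i and j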
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-150e_deepfashion2_short_sleeved_dress_256x192/td_hm_res50_4xb64-150e_deepfashion2_short_sleeved_dress_256x192.py
DELETED
@@ -1,2861 +0,0 @@
-default_scope = 'mmpose'
-default_hooks = dict(
-    timer=dict(type='IterTimerHook'),
-    logger=dict(type='LoggerHook', interval=50),
-    param_scheduler=dict(type='ParamSchedulerHook'),
-    checkpoint=dict(
-        type='CheckpointHook', interval=10, save_best='PCK', rule='greater'),
-    sampler_seed=dict(type='DistSamplerSeedHook'),
-    visualization=dict(type='PoseVisualizationHook', enable=False))
-custom_hooks = [dict(type='SyncBuffersHook')]
-env_cfg = dict(
-    cudnn_benchmark=False,
-    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
-    dist_cfg=dict(backend='nccl'))
-vis_backends = [dict(type='LocalVisBackend')]
-visualizer = dict(
-    type='PoseLocalVisualizer',
-    vis_backends=[dict(type='LocalVisBackend'),
-                  dict(type='WandbVisBackend')],
-    name='visualizer')
-log_processor = dict(
-    type='LogProcessor', window_size=50, by_epoch=True, num_digits=6)
-log_level = 'INFO'
-load_from = None
-resume = False
-backend_args = dict(backend='local')
-train_cfg = dict(by_epoch=True, max_epochs=150, val_interval=10)
-val_cfg = dict()
-test_cfg = dict()
-colors = dict(
-    sss=[255, 128, 0],
-    lss=[255, 0, 128],
-    sso=[128, 0, 255],
-    lso=[0, 128, 255],
-    vest=[0, 128, 128],
-    sling=[0, 0, 128],
-    shorts=[128, 128, 128],
-    trousers=[128, 0, 128],
-    skirt=[64, 128, 128],
-    ssd=[64, 64, 128],
-    lsd=[128, 64, 0],
-    vd=[128, 64, 255],
-    sd=[128, 64, 0])
-dataset_info = dict(
-    dataset_name='deepfashion2',
-    paper_info=dict(
-        author=
-        'Yuying Ge and Ruimao Zhang and Lingyun Wu and Xiaogang Wang and Xiaoou Tang and Ping Luo',
-        title=
-        'DeepFashion2: A Versatile Benchmark for Detection, Pose Estimation, Segmentation and Re-Identification of Clothing Images',
-        container=
-        'Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)',
-        year='2019',
-        homepage='https://github.com/switchablenorms/DeepFashion2'),
-    keypoint_info=dict({
-        0:
-        dict(name='sss_kpt1', id=0, color=[255, 128, 0], type='', swap=''),
-        1:
-        dict(
-            name='sss_kpt2',
-            id=1,
-            color=[255, 128, 0],
-            type='',
-            swap='sss_kpt6'),
-        2:
-        dict(
-            name='sss_kpt3',
-            id=2,
-            color=[255, 128, 0],
-            type='',
-            swap='sss_kpt5'),
-        3:
-        dict(name='sss_kpt4', id=3, color=[255, 128, 0], type='', swap=''),
-        4:
-        dict(
-            name='sss_kpt5',
-            id=4,
-            color=[255, 128, 0],
-            type='',
-            swap='sss_kpt3'),
-        5:
-        dict(
-            name='sss_kpt6',
-            id=5,
-            color=[255, 128, 0],
-            type='',
-            swap='sss_kpt2'),
-        6:
-        dict(
-            name='sss_kpt7',
-            id=6,
-            color=[255, 128, 0],
-            type='',
-            swap='sss_kpt25'),
-        7:
-        dict(
-            name='sss_kpt8',
-            id=7,
-            color=[255, 128, 0],
-            type='',
-            swap='sss_kpt24'),
-        8:
-        dict(
-            name='sss_kpt9',
-            id=8,
-            color=[255, 128, 0],
-            type='',
-            swap='sss_kpt23'),
-        9:
-        dict(
-            name='sss_kpt10',
-            id=9,
-            color=[255, 128, 0],
-            type='',
-            swap='sss_kpt22'),
-        10:
-        dict(
-            name='sss_kpt11',
-            id=10,
-            color=[255, 128, 0],
-            type='',
-            swap='sss_kpt21'),
-        11:
-        dict(
-            name='sss_kpt12',
-            id=11,
-            color=[255, 128, 0],
-            type='',
-            swap='sss_kpt20'),
-        12:
-        dict(
-            name='sss_kpt13',
-            id=12,
-            color=[255, 128, 0],
-            type='',
-            swap='sss_kpt19'),
-        13:
-        dict(
-            name='sss_kpt14',
-            id=13,
-            color=[255, 128, 0],
-            type='',
-            swap='sss_kpt18'),
-        14:
-        dict(
-            name='sss_kpt15',
-            id=14,
-            color=[255, 128, 0],
-            type='',
-            swap='sss_kpt17'),
-        15:
-        dict(name='sss_kpt16', id=15, color=[255, 128, 0], type='', swap=''),
-        16:
-        dict(
-            name='sss_kpt17',
-            id=16,
-            color=[255, 128, 0],
-            type='',
-            swap='sss_kpt15'),
-        17:
-        dict(
-            name='sss_kpt18',
-            id=17,
-            color=[255, 128, 0],
-            type='',
-            swap='sss_kpt14'),
-        18:
-        dict(
-            name='sss_kpt19',
-            id=18,
-            color=[255, 128, 0],
-            type='',
-            swap='sss_kpt13'),
-        19:
-        dict(
-            name='sss_kpt20',
-            id=19,
-            color=[255, 128, 0],
-            type='',
-            swap='sss_kpt12'),
-        20:
-        dict(
-            name='sss_kpt21',
-            id=20,
-            color=[255, 128, 0],
-            type='',
-            swap='sss_kpt11'),
-        21:
-        dict(
-            name='sss_kpt22',
-            id=21,
-            color=[255, 128, 0],
-            type='',
-            swap='sss_kpt10'),
-        22:
-        dict(
-            name='sss_kpt23',
-            id=22,
-            color=[255, 128, 0],
-            type='',
-            swap='sss_kpt9'),
-        23:
-        dict(
-            name='sss_kpt24',
-            id=23,
-            color=[255, 128, 0],
-            type='',
-            swap='sss_kpt8'),
-        24:
-        dict(
-            name='sss_kpt25',
-            id=24,
-            color=[255, 128, 0],
-            type='',
-            swap='sss_kpt7'),
-        25:
-        dict(name='lss_kpt1', id=25, color=[255, 0, 128], type='', swap=''),
-        26:
-        dict(
-            name='lss_kpt2',
-            id=26,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt6'),
-        27:
-        dict(
-            name='lss_kpt3',
-            id=27,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt5'),
-        28:
-        dict(name='lss_kpt4', id=28, color=[255, 0, 128], type='', swap=''),
-        29:
-        dict(
-            name='lss_kpt5',
-            id=29,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt3'),
-        30:
-        dict(
-            name='lss_kpt6',
-            id=30,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt2'),
-        31:
-        dict(
-            name='lss_kpt7',
-            id=31,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt33'),
-        32:
-        dict(
-            name='lss_kpt8',
-            id=32,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt32'),
-        33:
-        dict(
-            name='lss_kpt9',
-            id=33,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt31'),
-        34:
-        dict(
-            name='lss_kpt10',
-            id=34,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt30'),
-        35:
-        dict(
-            name='lss_kpt11',
-            id=35,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt29'),
-        36:
-        dict(
-            name='lss_kpt12',
-            id=36,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt28'),
-        37:
-        dict(
-            name='lss_kpt13',
-            id=37,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt27'),
-        38:
-        dict(
-            name='lss_kpt14',
-            id=38,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt26'),
-        39:
-        dict(
-            name='lss_kpt15',
-            id=39,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt25'),
-        40:
-        dict(
-            name='lss_kpt16',
-            id=40,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt24'),
-        41:
-        dict(
-            name='lss_kpt17',
-            id=41,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt23'),
-        42:
-        dict(
-            name='lss_kpt18',
-            id=42,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt22'),
-        43:
-        dict(
-            name='lss_kpt19',
-            id=43,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt21'),
-        44:
-        dict(name='lss_kpt20', id=44, color=[255, 0, 128], type='', swap=''),
-        45:
-        dict(
-            name='lss_kpt21',
-            id=45,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt19'),
-        46:
-        dict(
-            name='lss_kpt22',
-            id=46,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt18'),
-        47:
-        dict(
-            name='lss_kpt23',
-            id=47,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt17'),
-        48:
-        dict(
-            name='lss_kpt24',
-            id=48,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt16'),
-        49:
-        dict(
-            name='lss_kpt25',
-            id=49,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt15'),
-        50:
-        dict(
-            name='lss_kpt26',
-            id=50,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt14'),
-        51:
-        dict(
-            name='lss_kpt27',
-            id=51,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt13'),
-        52:
-        dict(
-            name='lss_kpt28',
-            id=52,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt12'),
-        53:
-        dict(
-            name='lss_kpt29',
-            id=53,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt11'),
-        54:
-        dict(
-            name='lss_kpt30',
-            id=54,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt10'),
-        55:
-        dict(
-            name='lss_kpt31',
-            id=55,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt9'),
-        56:
-        dict(
-            name='lss_kpt32',
-            id=56,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt8'),
-        57:
-        dict(
-            name='lss_kpt33',
-            id=57,
-            color=[255, 0, 128],
-            type='',
-            swap='lss_kpt7'),
-        58:
-        dict(name='sso_kpt1', id=58, color=[128, 0, 255], type='', swap=''),
-        59:
-        dict(
-            name='sso_kpt2',
-            id=59,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt26'),
-        60:
-        dict(
-            name='sso_kpt3',
-            id=60,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt5'),
-        61:
-        dict(
-            name='sso_kpt4',
-            id=61,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt6'),
-        62:
-        dict(
-            name='sso_kpt5',
-            id=62,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt3'),
-        63:
-        dict(
-            name='sso_kpt6',
-            id=63,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt4'),
-        64:
-        dict(
-            name='sso_kpt7',
-            id=64,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt25'),
-        65:
-        dict(
-            name='sso_kpt8',
-            id=65,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt24'),
-        66:
-        dict(
-            name='sso_kpt9',
-            id=66,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt23'),
-        67:
-        dict(
-            name='sso_kpt10',
-            id=67,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt22'),
-        68:
-        dict(
-            name='sso_kpt11',
-            id=68,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt21'),
-        69:
-        dict(
-            name='sso_kpt12',
-            id=69,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt20'),
-        70:
-        dict(
-            name='sso_kpt13',
-            id=70,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt19'),
-        71:
-        dict(
-            name='sso_kpt14',
-            id=71,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt18'),
-        72:
-        dict(
-            name='sso_kpt15',
-            id=72,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt17'),
-        73:
-        dict(
-            name='sso_kpt16',
-            id=73,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt29'),
-        74:
-        dict(
-            name='sso_kpt17',
-            id=74,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt15'),
-        75:
-        dict(
-            name='sso_kpt18',
-            id=75,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt14'),
-        76:
-        dict(
-            name='sso_kpt19',
-            id=76,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt13'),
-        77:
-        dict(
-            name='sso_kpt20',
-            id=77,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt12'),
-        78:
-        dict(
-            name='sso_kpt21',
-            id=78,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt11'),
-        79:
-        dict(
-            name='sso_kpt22',
-            id=79,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt10'),
-        80:
-        dict(
-            name='sso_kpt23',
-            id=80,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt9'),
-        81:
-        dict(
-            name='sso_kpt24',
-            id=81,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt8'),
-        82:
-        dict(
-            name='sso_kpt25',
-            id=82,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt7'),
-        83:
-        dict(
-            name='sso_kpt26',
-            id=83,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt2'),
-        84:
-        dict(
-            name='sso_kpt27',
-            id=84,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt30'),
-        85:
-        dict(
-            name='sso_kpt28',
-            id=85,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt31'),
-        86:
-        dict(
-            name='sso_kpt29',
-            id=86,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt16'),
-        87:
-        dict(
-            name='sso_kpt30',
-            id=87,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt27'),
-        88:
-        dict(
-            name='sso_kpt31',
-            id=88,
-            color=[128, 0, 255],
-            type='',
-            swap='sso_kpt28'),
-        89:
-        dict(name='lso_kpt1', id=89, color=[0, 128, 255], type='', swap=''),
-        90:
-        dict(
-            name='lso_kpt2',
-            id=90,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt6'),
-        91:
-        dict(
-            name='lso_kpt3',
-            id=91,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt5'),
-        92:
-        dict(
-            name='lso_kpt4',
-            id=92,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt34'),
-        93:
-        dict(
-            name='lso_kpt5',
-            id=93,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt3'),
-        94:
-        dict(
-            name='lso_kpt6',
-            id=94,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt2'),
-        95:
-        dict(
-            name='lso_kpt7',
-            id=95,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt33'),
-        96:
-        dict(
-            name='lso_kpt8',
-            id=96,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt32'),
-        97:
-        dict(
-            name='lso_kpt9',
-            id=97,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt31'),
-        98:
-        dict(
-            name='lso_kpt10',
-            id=98,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt30'),
-        99:
-        dict(
-            name='lso_kpt11',
-            id=99,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt29'),
-        100:
-        dict(
-            name='lso_kpt12',
-            id=100,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt28'),
-        101:
-        dict(
-            name='lso_kpt13',
-            id=101,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt27'),
-        102:
-        dict(
-            name='lso_kpt14',
-            id=102,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt26'),
-        103:
-        dict(
-            name='lso_kpt15',
-            id=103,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt25'),
-        104:
-        dict(
-            name='lso_kpt16',
-            id=104,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt24'),
-        105:
-        dict(
-            name='lso_kpt17',
-            id=105,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt23'),
-        106:
-        dict(
-            name='lso_kpt18',
-            id=106,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt22'),
-        107:
-        dict(
-            name='lso_kpt19',
-            id=107,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt21'),
-        108:
-        dict(
-            name='lso_kpt20',
-            id=108,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt37'),
-        109:
-        dict(
-            name='lso_kpt21',
-            id=109,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt19'),
-        110:
-        dict(
-            name='lso_kpt22',
-            id=110,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt18'),
-        111:
-        dict(
-            name='lso_kpt23',
-            id=111,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt17'),
-        112:
-        dict(
-            name='lso_kpt24',
-            id=112,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt16'),
-        113:
-        dict(
-            name='lso_kpt25',
-            id=113,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt15'),
-        114:
-        dict(
-            name='lso_kpt26',
-            id=114,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt14'),
-        115:
-        dict(
-            name='lso_kpt27',
-            id=115,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt13'),
-        116:
-        dict(
-            name='lso_kpt28',
-            id=116,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt12'),
-        117:
-        dict(
-            name='lso_kpt29',
-            id=117,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt11'),
-        118:
-        dict(
-            name='lso_kpt30',
-            id=118,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt10'),
-        119:
-        dict(
-            name='lso_kpt31',
-            id=119,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt9'),
-        120:
-        dict(
-            name='lso_kpt32',
-            id=120,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt8'),
-        121:
-        dict(
-            name='lso_kpt33',
-            id=121,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt7'),
-        122:
-        dict(
-            name='lso_kpt34',
-            id=122,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt4'),
-        123:
-        dict(
-            name='lso_kpt35',
-            id=123,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt38'),
-        124:
-        dict(
-            name='lso_kpt36',
-            id=124,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt39'),
-        125:
-        dict(
-            name='lso_kpt37',
-            id=125,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt20'),
-        126:
-        dict(
-            name='lso_kpt38',
-            id=126,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt35'),
-        127:
-        dict(
-            name='lso_kpt39',
-            id=127,
-            color=[0, 128, 255],
-            type='',
-            swap='lso_kpt36'),
-        128:
-        dict(name='vest_kpt1', id=128, color=[0, 128, 128], type='', swap=''),
-        129:
-        dict(
-            name='vest_kpt2',
-            id=129,
-            color=[0, 128, 128],
-            type='',
-            swap='vest_kpt6'),
-        130:
-        dict(
-            name='vest_kpt3',
-            id=130,
-            color=[0, 128, 128],
-            type='',
-            swap='vest_kpt5'),
-        131:
-        dict(name='vest_kpt4', id=131, color=[0, 128, 128], type='', swap=''),
-        132:
-        dict(
-            name='vest_kpt5',
-            id=132,
-            color=[0, 128, 128],
-            type='',
-            swap='vest_kpt3'),
-        133:
-        dict(
-            name='vest_kpt6',
-            id=133,
-            color=[0, 128, 128],
-            type='',
-            swap='vest_kpt2'),
-        134:
-        dict(
-            name='vest_kpt7',
-            id=134,
-            color=[0, 128, 128],
-            type='',
-            swap='vest_kpt15'),
-        135:
-        dict(
-            name='vest_kpt8',
-            id=135,
-            color=[0, 128, 128],
-            type='',
-            swap='vest_kpt14'),
-        136:
-        dict(
-            name='vest_kpt9',
-            id=136,
-            color=[0, 128, 128],
-            type='',
-            swap='vest_kpt13'),
-        137:
-        dict(
-            name='vest_kpt10',
-            id=137,
-            color=[0, 128, 128],
-            type='',
-            swap='vest_kpt12'),
-        138:
-        dict(name='vest_kpt11', id=138, color=[0, 128, 128], type='', swap=''),
-        139:
-        dict(
-            name='vest_kpt12',
-            id=139,
-            color=[0, 128, 128],
-            type='',
-            swap='vest_kpt10'),
-        140:
-        dict(name='vest_kpt13', id=140, color=[0, 128, 128], type='', swap=''),
-        141:
-        dict(
-            name='vest_kpt14',
-            id=141,
-            color=[0, 128, 128],
-            type='',
-            swap='vest_kpt8'),
-        142:
-        dict(
-            name='vest_kpt15',
-            id=142,
-            color=[0, 128, 128],
-            type='',
-            swap='vest_kpt7'),
-        143:
-        dict(name='sling_kpt1', id=143, color=[0, 0, 128], type='', swap=''),
-        144:
-        dict(
-            name='sling_kpt2',
-            id=144,
-            color=[0, 0, 128],
-            type='',
-            swap='sling_kpt6'),
-        145:
-        dict(
-            name='sling_kpt3',
-            id=145,
-            color=[0, 0, 128],
-            type='',
-            swap='sling_kpt5'),
-        146:
-        dict(name='sling_kpt4', id=146, color=[0, 0, 128], type='', swap=''),
-        147:
-        dict(
-            name='sling_kpt5',
-            id=147,
-            color=[0, 0, 128],
-            type='',
-            swap='sling_kpt3'),
-        148:
-        dict(
-            name='sling_kpt6',
-            id=148,
-            color=[0, 0, 128],
-            type='',
-            swap='sling_kpt2'),
-        149:
-        dict(
-            name='sling_kpt7',
-            id=149,
-            color=[0, 0, 128],
-            type='',
-            swap='sling_kpt15'),
-        150:
-        dict(
-            name='sling_kpt8',
-            id=150,
-            color=[0, 0, 128],
-            type='',
-            swap='sling_kpt14'),
-        151:
-        dict(
-            name='sling_kpt9',
-            id=151,
-            color=[0, 0, 128],
-            type='',
-            swap='sling_kpt13'),
-        152:
-        dict(
-            name='sling_kpt10',
-            id=152,
-            color=[0, 0, 128],
-            type='',
-            swap='sling_kpt12'),
-        153:
-        dict(name='sling_kpt11', id=153, color=[0, 0, 128], type='', swap=''),
-        154:
-        dict(
-            name='sling_kpt12',
-            id=154,
-            color=[0, 0, 128],
-            type='',
-            swap='sling_kpt10'),
-        155:
-        dict(
-            name='sling_kpt13',
-            id=155,
-            color=[0, 0, 128],
-            type='',
-            swap='sling_kpt9'),
-        156:
-        dict(
-            name='sling_kpt14',
-            id=156,
-            color=[0, 0, 128],
-            type='',
-            swap='sling_kpt8'),
-        157:
-        dict(
-            name='sling_kpt15',
-            id=157,
-            color=[0, 0, 128],
-            type='',
-            swap='sling_kpt7'),
-        158:
-        dict(
-            name='shorts_kpt1',
-            id=158,
-            color=[128, 128, 128],
-            type='',
-            swap='shorts_kpt3'),
-        159:
-        dict(
-            name='shorts_kpt2',
-            id=159,
-            color=[128, 128, 128],
-            type='',
-            swap=''),
-        160:
-        dict(
-            name='shorts_kpt3',
-            id=160,
-            color=[128, 128, 128],
-            type='',
-            swap='shorts_kpt1'),
-        161:
-        dict(
-            name='shorts_kpt4',
-            id=161,
-            color=[128, 128, 128],
-            type='',
-            swap='shorts_kpt10'),
-        162:
-        dict(
-            name='shorts_kpt5',
-            id=162,
-            color=[128, 128, 128],
-            type='',
-            swap='shorts_kpt9'),
-        163:
-        dict(
-            name='shorts_kpt6',
-            id=163,
-            color=[128, 128, 128],
-            type='',
-            swap='shorts_kpt8'),
-        164:
-        dict(
-            name='shorts_kpt7',
-            id=164,
-            color=[128, 128, 128],
-            type='',
-            swap=''),
-        165:
-        dict(
-            name='shorts_kpt8',
-            id=165,
-            color=[128, 128, 128],
-            type='',
-            swap='shorts_kpt6'),
-        166:
-        dict(
-            name='shorts_kpt9',
-            id=166,
-            color=[128, 128, 128],
-            type='',
-            swap='shorts_kpt5'),
-        167:
-        dict(
-            name='shorts_kpt10',
-            id=167,
-            color=[128, 128, 128],
-            type='',
-            swap='shorts_kpt4'),
-        168:
-        dict(
-            name='trousers_kpt1',
-            id=168,
-            color=[128, 0, 128],
-            type='',
-            swap='trousers_kpt3'),
-        169:
-        dict(
-            name='trousers_kpt2',
-            id=169,
-            color=[128, 0, 128],
-            type='',
-            swap=''),
-        170:
-        dict(
-            name='trousers_kpt3',
-            id=170,
-            color=[128, 0, 128],
-            type='',
-            swap='trousers_kpt1'),
-        171:
-        dict(
-            name='trousers_kpt4',
-            id=171,
-            color=[128, 0, 128],
-            type='',
-            swap='trousers_kpt14'),
-        172:
-        dict(
-            name='trousers_kpt5',
-            id=172,
-            color=[128, 0, 128],
-            type='',
-            swap='trousers_kpt13'),
-        173:
-        dict(
-            name='trousers_kpt6',
-            id=173,
-            color=[128, 0, 128],
-            type='',
-            swap='trousers_kpt12'),
-        174:
-        dict(
-            name='trousers_kpt7',
-            id=174,
-            color=[128, 0, 128],
-            type='',
-            swap='trousers_kpt11'),
-        175:
-        dict(
-            name='trousers_kpt8',
-            id=175,
-            color=[128, 0, 128],
-            type='',
-            swap='trousers_kpt10'),
-        176:
-        dict(
-            name='trousers_kpt9',
-            id=176,
-            color=[128, 0, 128],
-            type='',
-            swap=''),
-        177:
-        dict(
-            name='trousers_kpt10',
-            id=177,
-            color=[128, 0, 128],
-            type='',
-            swap='trousers_kpt8'),
-        178:
-        dict(
-            name='trousers_kpt11',
-            id=178,
-            color=[128, 0, 128],
-            type='',
-            swap='trousers_kpt7'),
-        179:
-        dict(
-            name='trousers_kpt12',
-            id=179,
-            color=[128, 0, 128],
-            type='',
-            swap='trousers_kpt6'),
-        180:
-        dict(
-            name='trousers_kpt13',
-            id=180,
-            color=[128, 0, 128],
-            type='',
-            swap='trousers_kpt5'),
-        181:
-        dict(
-            name='trousers_kpt14',
-            id=181,
-            color=[128, 0, 128],
-            type='',
-            swap='trousers_kpt4'),
-        182:
-        dict(
-            name='skirt_kpt1',
-            id=182,
-            color=[64, 128, 128],
-            type='',
-            swap='skirt_kpt3'),
-        183:
-        dict(
-            name='skirt_kpt2', id=183, color=[64, 128, 128], type='', swap=''),
-        184:
-        dict(
-            name='skirt_kpt3',
-            id=184,
-            color=[64, 128, 128],
-            type='',
-            swap='skirt_kpt1'),
-        185:
-        dict(
-            name='skirt_kpt4',
-            id=185,
-            color=[64, 128, 128],
-            type='',
-            swap='skirt_kpt8'),
-        186:
-        dict(
-            name='skirt_kpt5',
-            id=186,
-            color=[64, 128, 128],
-            type='',
-            swap='skirt_kpt7'),
-        187:
-        dict(
-            name='skirt_kpt6', id=187, color=[64, 128, 128], type='', swap=''),
-        188:
-        dict(
-            name='skirt_kpt7',
-            id=188,
-            color=[64, 128, 128],
-            type='',
-            swap='skirt_kpt5'),
-        189:
-        dict(
-            name='skirt_kpt8',
-            id=189,
-            color=[64, 128, 128],
-            type='',
-            swap='skirt_kpt4'),
-        190:
-        dict(name='ssd_kpt1', id=190, color=[64, 64, 128], type='', swap=''),
-        191:
-        dict(
-            name='ssd_kpt2',
-            id=191,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt6'),
-        192:
-        dict(
-            name='ssd_kpt3',
-            id=192,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt5'),
-        193:
-        dict(name='ssd_kpt4', id=193, color=[64, 64, 128], type='', swap=''),
-        194:
-        dict(
-            name='ssd_kpt5',
-            id=194,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt3'),
-        195:
-        dict(
-            name='ssd_kpt6',
-            id=195,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt2'),
-        196:
-        dict(
-            name='ssd_kpt7',
-            id=196,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt29'),
-        197:
-        dict(
-            name='ssd_kpt8',
-            id=197,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt28'),
-        198:
-        dict(
-            name='ssd_kpt9',
-            id=198,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt27'),
-        199:
-        dict(
-            name='ssd_kpt10',
-            id=199,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt26'),
-        200:
-        dict(
-            name='ssd_kpt11',
-            id=200,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt25'),
-        201:
-        dict(
-            name='ssd_kpt12',
-            id=201,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt24'),
-        202:
-        dict(
-            name='ssd_kpt13',
-            id=202,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt23'),
-        203:
-        dict(
-            name='ssd_kpt14',
-            id=203,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt22'),
-        204:
-        dict(
-            name='ssd_kpt15',
-            id=204,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt21'),
-        205:
-        dict(
-            name='ssd_kpt16',
-            id=205,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt20'),
-        206:
-        dict(
-            name='ssd_kpt17',
-            id=206,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt19'),
-        207:
-        dict(name='ssd_kpt18', id=207, color=[64, 64, 128], type='', swap=''),
-        208:
-        dict(
-            name='ssd_kpt19',
-            id=208,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt17'),
-        209:
-        dict(
-            name='ssd_kpt20',
-            id=209,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt16'),
-        210:
-        dict(
-            name='ssd_kpt21',
-            id=210,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt15'),
-        211:
-        dict(
-            name='ssd_kpt22',
-            id=211,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt14'),
-        212:
-        dict(
-            name='ssd_kpt23',
-            id=212,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt13'),
-        213:
-        dict(
-            name='ssd_kpt24',
-            id=213,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt12'),
-        214:
-        dict(
-            name='ssd_kpt25',
-            id=214,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt11'),
-        215:
-        dict(
-            name='ssd_kpt26',
-            id=215,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt10'),
-        216:
-        dict(
-            name='ssd_kpt27',
-            id=216,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt9'),
-        217:
-        dict(
-            name='ssd_kpt28',
-            id=217,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt8'),
-        218:
-        dict(
-            name='ssd_kpt29',
-            id=218,
-            color=[64, 64, 128],
-            type='',
-            swap='ssd_kpt7'),
-        219:
-        dict(name='lsd_kpt1', id=219, color=[128, 64, 0], type='', swap=''),
-        220:
-        dict(
-            name='lsd_kpt2',
-            id=220,
-            color=[128, 64, 0],
-            type='',
-            swap='lsd_kpt6'),
-        221:
-        dict(
-            name='lsd_kpt3',
-            id=221,
-            color=[128, 64, 0],
-            type='',
-            swap='lsd_kpt5'),
-        222:
-        dict(name='lsd_kpt4', id=222, color=[128, 64, 0], type='', swap=''),
|
1509 |
-
223:
|
1510 |
-
dict(
|
1511 |
-
name='lsd_kpt5',
|
1512 |
-
id=223,
|
1513 |
-
color=[128, 64, 0],
|
1514 |
-
type='',
|
1515 |
-
swap='lsd_kpt3'),
|
1516 |
-
224:
|
1517 |
-
dict(
|
1518 |
-
name='lsd_kpt6',
|
1519 |
-
id=224,
|
1520 |
-
color=[128, 64, 0],
|
1521 |
-
type='',
|
1522 |
-
swap='lsd_kpt2'),
|
1523 |
-
225:
|
1524 |
-
dict(
|
1525 |
-
name='lsd_kpt7',
|
1526 |
-
id=225,
|
1527 |
-
color=[128, 64, 0],
|
1528 |
-
type='',
|
1529 |
-
swap='lsd_kpt37'),
|
1530 |
-
226:
|
1531 |
-
dict(
|
1532 |
-
name='lsd_kpt8',
|
1533 |
-
id=226,
|
1534 |
-
color=[128, 64, 0],
|
1535 |
-
type='',
|
1536 |
-
swap='lsd_kpt36'),
|
1537 |
-
227:
|
1538 |
-
dict(
|
1539 |
-
name='lsd_kpt9',
|
1540 |
-
id=227,
|
1541 |
-
color=[128, 64, 0],
|
1542 |
-
type='',
|
1543 |
-
swap='lsd_kpt35'),
|
1544 |
-
228:
|
1545 |
-
dict(
|
1546 |
-
name='lsd_kpt10',
|
1547 |
-
id=228,
|
1548 |
-
color=[128, 64, 0],
|
1549 |
-
type='',
|
1550 |
-
swap='lsd_kpt34'),
|
1551 |
-
229:
|
1552 |
-
dict(
|
1553 |
-
name='lsd_kpt11',
|
1554 |
-
id=229,
|
1555 |
-
color=[128, 64, 0],
|
1556 |
-
type='',
|
1557 |
-
swap='lsd_kpt33'),
|
1558 |
-
230:
|
1559 |
-
dict(
|
1560 |
-
name='lsd_kpt12',
|
1561 |
-
id=230,
|
1562 |
-
color=[128, 64, 0],
|
1563 |
-
type='',
|
1564 |
-
swap='lsd_kpt32'),
|
1565 |
-
231:
|
1566 |
-
dict(
|
1567 |
-
name='lsd_kpt13',
|
1568 |
-
id=231,
|
1569 |
-
color=[128, 64, 0],
|
1570 |
-
type='',
|
1571 |
-
swap='lsd_kpt31'),
|
1572 |
-
232:
|
1573 |
-
dict(
|
1574 |
-
name='lsd_kpt14',
|
1575 |
-
id=232,
|
1576 |
-
color=[128, 64, 0],
|
1577 |
-
type='',
|
1578 |
-
swap='lsd_kpt30'),
|
1579 |
-
233:
|
1580 |
-
dict(
|
1581 |
-
name='lsd_kpt15',
|
1582 |
-
id=233,
|
1583 |
-
color=[128, 64, 0],
|
1584 |
-
type='',
|
1585 |
-
swap='lsd_kpt29'),
|
1586 |
-
234:
|
1587 |
-
dict(
|
1588 |
-
name='lsd_kpt16',
|
1589 |
-
id=234,
|
1590 |
-
color=[128, 64, 0],
|
1591 |
-
type='',
|
1592 |
-
swap='lsd_kpt28'),
|
1593 |
-
235:
|
1594 |
-
dict(
|
1595 |
-
name='lsd_kpt17',
|
1596 |
-
id=235,
|
1597 |
-
color=[128, 64, 0],
|
1598 |
-
type='',
|
1599 |
-
swap='lsd_kpt27'),
|
1600 |
-
236:
|
1601 |
-
dict(
|
1602 |
-
name='lsd_kpt18',
|
1603 |
-
id=236,
|
1604 |
-
color=[128, 64, 0],
|
1605 |
-
type='',
|
1606 |
-
swap='lsd_kpt26'),
|
1607 |
-
237:
|
1608 |
-
dict(
|
1609 |
-
name='lsd_kpt19',
|
1610 |
-
id=237,
|
1611 |
-
color=[128, 64, 0],
|
1612 |
-
type='',
|
1613 |
-
swap='lsd_kpt25'),
|
1614 |
-
238:
|
1615 |
-
dict(
|
1616 |
-
name='lsd_kpt20',
|
1617 |
-
id=238,
|
1618 |
-
color=[128, 64, 0],
|
1619 |
-
type='',
|
1620 |
-
swap='lsd_kpt24'),
|
1621 |
-
239:
|
1622 |
-
dict(
|
1623 |
-
name='lsd_kpt21',
|
1624 |
-
id=239,
|
1625 |
-
color=[128, 64, 0],
|
1626 |
-
type='',
|
1627 |
-
swap='lsd_kpt23'),
|
1628 |
-
240:
|
1629 |
-
dict(name='lsd_kpt22', id=240, color=[128, 64, 0], type='', swap=''),
|
1630 |
-
241:
|
1631 |
-
dict(
|
1632 |
-
name='lsd_kpt23',
|
1633 |
-
id=241,
|
1634 |
-
color=[128, 64, 0],
|
1635 |
-
type='',
|
1636 |
-
swap='lsd_kpt21'),
|
1637 |
-
242:
|
1638 |
-
dict(
|
1639 |
-
name='lsd_kpt24',
|
1640 |
-
id=242,
|
1641 |
-
color=[128, 64, 0],
|
1642 |
-
type='',
|
1643 |
-
swap='lsd_kpt20'),
|
1644 |
-
243:
|
1645 |
-
dict(
|
1646 |
-
name='lsd_kpt25',
|
1647 |
-
id=243,
|
1648 |
-
color=[128, 64, 0],
|
1649 |
-
type='',
|
1650 |
-
swap='lsd_kpt19'),
|
1651 |
-
244:
|
1652 |
-
dict(
|
1653 |
-
name='lsd_kpt26',
|
1654 |
-
id=244,
|
1655 |
-
color=[128, 64, 0],
|
1656 |
-
type='',
|
1657 |
-
swap='lsd_kpt18'),
|
1658 |
-
245:
|
1659 |
-
dict(
|
1660 |
-
name='lsd_kpt27',
|
1661 |
-
id=245,
|
1662 |
-
color=[128, 64, 0],
|
1663 |
-
type='',
|
1664 |
-
swap='lsd_kpt17'),
|
1665 |
-
246:
|
1666 |
-
dict(
|
1667 |
-
name='lsd_kpt28',
|
1668 |
-
id=246,
|
1669 |
-
color=[128, 64, 0],
|
1670 |
-
type='',
|
1671 |
-
swap='lsd_kpt16'),
|
1672 |
-
247:
|
1673 |
-
dict(
|
1674 |
-
name='lsd_kpt29',
|
1675 |
-
id=247,
|
1676 |
-
color=[128, 64, 0],
|
1677 |
-
type='',
|
1678 |
-
swap='lsd_kpt15'),
|
1679 |
-
248:
|
1680 |
-
dict(
|
1681 |
-
name='lsd_kpt30',
|
1682 |
-
id=248,
|
1683 |
-
color=[128, 64, 0],
|
1684 |
-
type='',
|
1685 |
-
swap='lsd_kpt14'),
|
1686 |
-
249:
|
1687 |
-
dict(
|
1688 |
-
name='lsd_kpt31',
|
1689 |
-
id=249,
|
1690 |
-
color=[128, 64, 0],
|
1691 |
-
type='',
|
1692 |
-
swap='lsd_kpt13'),
|
1693 |
-
250:
|
1694 |
-
dict(
|
1695 |
-
name='lsd_kpt32',
|
1696 |
-
id=250,
|
1697 |
-
color=[128, 64, 0],
|
1698 |
-
type='',
|
1699 |
-
swap='lsd_kpt12'),
|
1700 |
-
251:
|
1701 |
-
dict(
|
1702 |
-
name='lsd_kpt33',
|
1703 |
-
id=251,
|
1704 |
-
color=[128, 64, 0],
|
1705 |
-
type='',
|
1706 |
-
swap='lsd_kpt11'),
|
1707 |
-
252:
|
1708 |
-
dict(
|
1709 |
-
name='lsd_kpt34',
|
1710 |
-
id=252,
|
1711 |
-
color=[128, 64, 0],
|
1712 |
-
type='',
|
1713 |
-
swap='lsd_kpt10'),
|
1714 |
-
253:
|
1715 |
-
dict(
|
1716 |
-
name='lsd_kpt35',
|
1717 |
-
id=253,
|
1718 |
-
color=[128, 64, 0],
|
1719 |
-
type='',
|
1720 |
-
swap='lsd_kpt9'),
|
1721 |
-
254:
|
1722 |
-
dict(
|
1723 |
-
name='lsd_kpt36',
|
1724 |
-
id=254,
|
1725 |
-
color=[128, 64, 0],
|
1726 |
-
type='',
|
1727 |
-
swap='lsd_kpt8'),
|
1728 |
-
255:
|
1729 |
-
dict(
|
1730 |
-
name='lsd_kpt37',
|
1731 |
-
id=255,
|
1732 |
-
color=[128, 64, 0],
|
1733 |
-
type='',
|
1734 |
-
swap='lsd_kpt7'),
|
1735 |
-
256:
|
1736 |
-
dict(name='vd_kpt1', id=256, color=[128, 64, 255], type='', swap=''),
|
1737 |
-
257:
|
1738 |
-
dict(
|
1739 |
-
name='vd_kpt2',
|
1740 |
-
id=257,
|
1741 |
-
color=[128, 64, 255],
|
1742 |
-
type='',
|
1743 |
-
swap='vd_kpt6'),
|
1744 |
-
258:
|
1745 |
-
dict(
|
1746 |
-
name='vd_kpt3',
|
1747 |
-
id=258,
|
1748 |
-
color=[128, 64, 255],
|
1749 |
-
type='',
|
1750 |
-
swap='vd_kpt5'),
|
1751 |
-
259:
|
1752 |
-
dict(name='vd_kpt4', id=259, color=[128, 64, 255], type='', swap=''),
|
1753 |
-
260:
|
1754 |
-
dict(
|
1755 |
-
name='vd_kpt5',
|
1756 |
-
id=260,
|
1757 |
-
color=[128, 64, 255],
|
1758 |
-
type='',
|
1759 |
-
swap='vd_kpt3'),
|
1760 |
-
261:
|
1761 |
-
dict(
|
1762 |
-
name='vd_kpt6',
|
1763 |
-
id=261,
|
1764 |
-
color=[128, 64, 255],
|
1765 |
-
type='',
|
1766 |
-
swap='vd_kpt2'),
|
1767 |
-
262:
|
1768 |
-
dict(
|
1769 |
-
name='vd_kpt7',
|
1770 |
-
id=262,
|
1771 |
-
color=[128, 64, 255],
|
1772 |
-
type='',
|
1773 |
-
swap='vd_kpt19'),
|
1774 |
-
263:
|
1775 |
-
dict(
|
1776 |
-
name='vd_kpt8',
|
1777 |
-
id=263,
|
1778 |
-
color=[128, 64, 255],
|
1779 |
-
type='',
|
1780 |
-
swap='vd_kpt18'),
|
1781 |
-
264:
|
1782 |
-
dict(
|
1783 |
-
name='vd_kpt9',
|
1784 |
-
id=264,
|
1785 |
-
color=[128, 64, 255],
|
1786 |
-
type='',
|
1787 |
-
swap='vd_kpt17'),
|
1788 |
-
265:
|
1789 |
-
dict(
|
1790 |
-
name='vd_kpt10',
|
1791 |
-
id=265,
|
1792 |
-
color=[128, 64, 255],
|
1793 |
-
type='',
|
1794 |
-
swap='vd_kpt16'),
|
1795 |
-
266:
|
1796 |
-
dict(
|
1797 |
-
name='vd_kpt11',
|
1798 |
-
id=266,
|
1799 |
-
color=[128, 64, 255],
|
1800 |
-
type='',
|
1801 |
-
swap='vd_kpt15'),
|
1802 |
-
267:
|
1803 |
-
dict(
|
1804 |
-
name='vd_kpt12',
|
1805 |
-
id=267,
|
1806 |
-
color=[128, 64, 255],
|
1807 |
-
type='',
|
1808 |
-
swap='vd_kpt14'),
|
1809 |
-
268:
|
1810 |
-
dict(name='vd_kpt13', id=268, color=[128, 64, 255], type='', swap=''),
|
1811 |
-
269:
|
1812 |
-
dict(
|
1813 |
-
name='vd_kpt14',
|
1814 |
-
id=269,
|
1815 |
-
color=[128, 64, 255],
|
1816 |
-
type='',
|
1817 |
-
swap='vd_kpt12'),
|
1818 |
-
270:
|
1819 |
-
dict(
|
1820 |
-
name='vd_kpt15',
|
1821 |
-
id=270,
|
1822 |
-
color=[128, 64, 255],
|
1823 |
-
type='',
|
1824 |
-
swap='vd_kpt11'),
|
1825 |
-
271:
|
1826 |
-
dict(
|
1827 |
-
name='vd_kpt16',
|
1828 |
-
id=271,
|
1829 |
-
color=[128, 64, 255],
|
1830 |
-
type='',
|
1831 |
-
swap='vd_kpt10'),
|
1832 |
-
272:
|
1833 |
-
dict(
|
1834 |
-
name='vd_kpt17',
|
1835 |
-
id=272,
|
1836 |
-
color=[128, 64, 255],
|
1837 |
-
type='',
|
1838 |
-
swap='vd_kpt9'),
|
1839 |
-
273:
|
1840 |
-
dict(
|
1841 |
-
name='vd_kpt18',
|
1842 |
-
id=273,
|
1843 |
-
color=[128, 64, 255],
|
1844 |
-
type='',
|
1845 |
-
swap='vd_kpt8'),
|
1846 |
-
274:
|
1847 |
-
dict(
|
1848 |
-
name='vd_kpt19',
|
1849 |
-
id=274,
|
1850 |
-
color=[128, 64, 255],
|
1851 |
-
type='',
|
1852 |
-
swap='vd_kpt7'),
|
1853 |
-
275:
|
1854 |
-
dict(name='sd_kpt1', id=275, color=[128, 64, 0], type='', swap=''),
|
1855 |
-
276:
|
1856 |
-
dict(
|
1857 |
-
name='sd_kpt2',
|
1858 |
-
id=276,
|
1859 |
-
color=[128, 64, 0],
|
1860 |
-
type='',
|
1861 |
-
swap='sd_kpt6'),
|
1862 |
-
277:
|
1863 |
-
dict(
|
1864 |
-
name='sd_kpt3',
|
1865 |
-
id=277,
|
1866 |
-
color=[128, 64, 0],
|
1867 |
-
type='',
|
1868 |
-
swap='sd_kpt5'),
|
1869 |
-
278:
|
1870 |
-
dict(name='sd_kpt4', id=278, color=[128, 64, 0], type='', swap=''),
|
1871 |
-
279:
|
1872 |
-
dict(
|
1873 |
-
name='sd_kpt5',
|
1874 |
-
id=279,
|
1875 |
-
color=[128, 64, 0],
|
1876 |
-
type='',
|
1877 |
-
swap='sd_kpt3'),
|
1878 |
-
280:
|
1879 |
-
dict(
|
1880 |
-
name='sd_kpt6',
|
1881 |
-
id=280,
|
1882 |
-
color=[128, 64, 0],
|
1883 |
-
type='',
|
1884 |
-
swap='sd_kpt2'),
|
1885 |
-
281:
|
1886 |
-
dict(
|
1887 |
-
name='sd_kpt7',
|
1888 |
-
id=281,
|
1889 |
-
color=[128, 64, 0],
|
1890 |
-
type='',
|
1891 |
-
swap='sd_kpt19'),
|
1892 |
-
282:
|
1893 |
-
dict(
|
1894 |
-
name='sd_kpt8',
|
1895 |
-
id=282,
|
1896 |
-
color=[128, 64, 0],
|
1897 |
-
type='',
|
1898 |
-
swap='sd_kpt18'),
|
1899 |
-
283:
|
1900 |
-
dict(
|
1901 |
-
name='sd_kpt9',
|
1902 |
-
id=283,
|
1903 |
-
color=[128, 64, 0],
|
1904 |
-
type='',
|
1905 |
-
swap='sd_kpt17'),
|
1906 |
-
284:
|
1907 |
-
dict(
|
1908 |
-
name='sd_kpt10',
|
1909 |
-
id=284,
|
1910 |
-
color=[128, 64, 0],
|
1911 |
-
type='',
|
1912 |
-
swap='sd_kpt16'),
|
1913 |
-
285:
|
1914 |
-
dict(
|
1915 |
-
name='sd_kpt11',
|
1916 |
-
id=285,
|
1917 |
-
color=[128, 64, 0],
|
1918 |
-
type='',
|
1919 |
-
swap='sd_kpt15'),
|
1920 |
-
286:
|
1921 |
-
dict(
|
1922 |
-
name='sd_kpt12',
|
1923 |
-
id=286,
|
1924 |
-
color=[128, 64, 0],
|
1925 |
-
type='',
|
1926 |
-
swap='sd_kpt14'),
|
1927 |
-
287:
|
1928 |
-
dict(name='sd_kpt13', id=287, color=[128, 64, 0], type='', swap=''),
|
1929 |
-
288:
|
1930 |
-
dict(
|
1931 |
-
name='sd_kpt14',
|
1932 |
-
id=288,
|
1933 |
-
color=[128, 64, 0],
|
1934 |
-
type='',
|
1935 |
-
swap='sd_kpt12'),
|
1936 |
-
289:
|
1937 |
-
dict(
|
1938 |
-
name='sd_kpt15',
|
1939 |
-
id=289,
|
1940 |
-
color=[128, 64, 0],
|
1941 |
-
type='',
|
1942 |
-
swap='sd_kpt11'),
|
1943 |
-
290:
|
1944 |
-
dict(
|
1945 |
-
name='sd_kpt16',
|
1946 |
-
id=290,
|
1947 |
-
color=[128, 64, 0],
|
1948 |
-
type='',
|
1949 |
-
swap='sd_kpt10'),
|
1950 |
-
291:
|
1951 |
-
dict(
|
1952 |
-
name='sd_kpt17',
|
1953 |
-
id=291,
|
1954 |
-
color=[128, 64, 0],
|
1955 |
-
type='',
|
1956 |
-
swap='sd_kpt9'),
|
1957 |
-
292:
|
1958 |
-
dict(
|
1959 |
-
name='sd_kpt18',
|
1960 |
-
id=292,
|
1961 |
-
color=[128, 64, 0],
|
1962 |
-
type='',
|
1963 |
-
swap='sd_kpt8'),
|
1964 |
-
293:
|
1965 |
-
dict(
|
1966 |
-
name='sd_kpt19',
|
1967 |
-
id=293,
|
1968 |
-
color=[128, 64, 0],
|
1969 |
-
type='',
|
1970 |
-
swap='sd_kpt7')
|
1971 |
-
}),
|
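    # Each `swap` field above names a keypoint's horizontal-flip partner
    # (consumed by RandomFlip); the skeleton below lists the limb links
    # drawn between keypoints, one color per garment category.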
    skeleton_info=dict({
        0:
        dict(link=('sss_kpt1', 'sss_kpt2'), id=0, color=[255, 128, 0]),
        1:
        dict(link=('sss_kpt2', 'sss_kpt7'), id=1, color=[255, 128, 0]),
        2:
        dict(link=('sss_kpt7', 'sss_kpt8'), id=2, color=[255, 128, 0]),
        3:
        dict(link=('sss_kpt8', 'sss_kpt9'), id=3, color=[255, 128, 0]),
        4:
        dict(link=('sss_kpt9', 'sss_kpt10'), id=4, color=[255, 128, 0]),
        5:
        dict(link=('sss_kpt10', 'sss_kpt11'), id=5, color=[255, 128, 0]),
        6:
        dict(link=('sss_kpt11', 'sss_kpt12'), id=6, color=[255, 128, 0]),
        7:
        dict(link=('sss_kpt12', 'sss_kpt13'), id=7, color=[255, 128, 0]),
        8:
        dict(link=('sss_kpt13', 'sss_kpt14'), id=8, color=[255, 128, 0]),
        9:
        dict(link=('sss_kpt14', 'sss_kpt15'), id=9, color=[255, 128, 0]),
        10:
        dict(link=('sss_kpt15', 'sss_kpt16'), id=10, color=[255, 128, 0]),
        11:
        dict(link=('sss_kpt16', 'sss_kpt17'), id=11, color=[255, 128, 0]),
        12:
        dict(link=('sss_kpt17', 'sss_kpt18'), id=12, color=[255, 128, 0]),
        13:
        dict(link=('sss_kpt18', 'sss_kpt19'), id=13, color=[255, 128, 0]),
        14:
        dict(link=('sss_kpt19', 'sss_kpt20'), id=14, color=[255, 128, 0]),
        15:
        dict(link=('sss_kpt20', 'sss_kpt21'), id=15, color=[255, 128, 0]),
        16:
        dict(link=('sss_kpt21', 'sss_kpt22'), id=16, color=[255, 128, 0]),
        17:
        dict(link=('sss_kpt22', 'sss_kpt23'), id=17, color=[255, 128, 0]),
        18:
        dict(link=('sss_kpt23', 'sss_kpt24'), id=18, color=[255, 128, 0]),
        19:
        dict(link=('sss_kpt24', 'sss_kpt25'), id=19, color=[255, 128, 0]),
        20:
        dict(link=('sss_kpt25', 'sss_kpt6'), id=20, color=[255, 128, 0]),
        21:
        dict(link=('sss_kpt6', 'sss_kpt1'), id=21, color=[255, 128, 0]),
        22:
        dict(link=('sss_kpt2', 'sss_kpt3'), id=22, color=[255, 128, 0]),
        23:
        dict(link=('sss_kpt3', 'sss_kpt4'), id=23, color=[255, 128, 0]),
        24:
        dict(link=('sss_kpt4', 'sss_kpt5'), id=24, color=[255, 128, 0]),
        25:
        dict(link=('sss_kpt5', 'sss_kpt6'), id=25, color=[255, 128, 0]),
        26:
        dict(link=('lss_kpt1', 'lss_kpt2'), id=26, color=[255, 0, 128]),
        27:
        dict(link=('lss_kpt2', 'lss_kpt7'), id=27, color=[255, 0, 128]),
        28:
        dict(link=('lss_kpt7', 'lss_kpt8'), id=28, color=[255, 0, 128]),
        29:
        dict(link=('lss_kpt8', 'lss_kpt9'), id=29, color=[255, 0, 128]),
        30:
        dict(link=('lss_kpt9', 'lss_kpt10'), id=30, color=[255, 0, 128]),
        31:
        dict(link=('lss_kpt10', 'lss_kpt11'), id=31, color=[255, 0, 128]),
        32:
        dict(link=('lss_kpt11', 'lss_kpt12'), id=32, color=[255, 0, 128]),
        33:
        dict(link=('lss_kpt12', 'lss_kpt13'), id=33, color=[255, 0, 128]),
        34:
        dict(link=('lss_kpt13', 'lss_kpt14'), id=34, color=[255, 0, 128]),
        35:
        dict(link=('lss_kpt14', 'lss_kpt15'), id=35, color=[255, 0, 128]),
        36:
        dict(link=('lss_kpt15', 'lss_kpt16'), id=36, color=[255, 0, 128]),
        37:
        dict(link=('lss_kpt16', 'lss_kpt17'), id=37, color=[255, 0, 128]),
        38:
        dict(link=('lss_kpt17', 'lss_kpt18'), id=38, color=[255, 0, 128]),
        39:
        dict(link=('lss_kpt18', 'lss_kpt19'), id=39, color=[255, 0, 128]),
        40:
        dict(link=('lss_kpt19', 'lss_kpt20'), id=40, color=[255, 0, 128]),
        41:
        dict(link=('lss_kpt20', 'lss_kpt21'), id=41, color=[255, 0, 128]),
        42:
        dict(link=('lss_kpt21', 'lss_kpt22'), id=42, color=[255, 0, 128]),
        43:
        dict(link=('lss_kpt22', 'lss_kpt23'), id=43, color=[255, 0, 128]),
        44:
        dict(link=('lss_kpt23', 'lss_kpt24'), id=44, color=[255, 0, 128]),
        45:
        dict(link=('lss_kpt24', 'lss_kpt25'), id=45, color=[255, 0, 128]),
        46:
        dict(link=('lss_kpt25', 'lss_kpt26'), id=46, color=[255, 0, 128]),
        47:
        dict(link=('lss_kpt26', 'lss_kpt27'), id=47, color=[255, 0, 128]),
        48:
        dict(link=('lss_kpt27', 'lss_kpt28'), id=48, color=[255, 0, 128]),
        49:
        dict(link=('lss_kpt28', 'lss_kpt29'), id=49, color=[255, 0, 128]),
        50:
        dict(link=('lss_kpt29', 'lss_kpt30'), id=50, color=[255, 0, 128]),
        51:
        dict(link=('lss_kpt30', 'lss_kpt31'), id=51, color=[255, 0, 128]),
        52:
        dict(link=('lss_kpt31', 'lss_kpt32'), id=52, color=[255, 0, 128]),
        53:
        dict(link=('lss_kpt32', 'lss_kpt33'), id=53, color=[255, 0, 128]),
        54:
        dict(link=('lss_kpt33', 'lss_kpt6'), id=54, color=[255, 0, 128]),
        55:
        dict(link=('lss_kpt6', 'lss_kpt5'), id=55, color=[255, 0, 128]),
        56:
        dict(link=('lss_kpt5', 'lss_kpt4'), id=56, color=[255, 0, 128]),
        57:
        dict(link=('lss_kpt4', 'lss_kpt3'), id=57, color=[255, 0, 128]),
        58:
        dict(link=('lss_kpt3', 'lss_kpt2'), id=58, color=[255, 0, 128]),
        59:
        dict(link=('lss_kpt6', 'lss_kpt1'), id=59, color=[255, 0, 128]),
        60:
        dict(link=('sso_kpt1', 'sso_kpt4'), id=60, color=[128, 0, 255]),
        61:
        dict(link=('sso_kpt4', 'sso_kpt7'), id=61, color=[128, 0, 255]),
        62:
        dict(link=('sso_kpt7', 'sso_kpt8'), id=62, color=[128, 0, 255]),
        63:
        dict(link=('sso_kpt8', 'sso_kpt9'), id=63, color=[128, 0, 255]),
        64:
        dict(link=('sso_kpt9', 'sso_kpt10'), id=64, color=[128, 0, 255]),
        65:
        dict(link=('sso_kpt10', 'sso_kpt11'), id=65, color=[128, 0, 255]),
        66:
        dict(link=('sso_kpt11', 'sso_kpt12'), id=66, color=[128, 0, 255]),
        67:
        dict(link=('sso_kpt12', 'sso_kpt13'), id=67, color=[128, 0, 255]),
        68:
        dict(link=('sso_kpt13', 'sso_kpt14'), id=68, color=[128, 0, 255]),
        69:
        dict(link=('sso_kpt14', 'sso_kpt15'), id=69, color=[128, 0, 255]),
        70:
        dict(link=('sso_kpt15', 'sso_kpt16'), id=70, color=[128, 0, 255]),
        71:
        dict(link=('sso_kpt16', 'sso_kpt31'), id=71, color=[128, 0, 255]),
        72:
        dict(link=('sso_kpt31', 'sso_kpt30'), id=72, color=[128, 0, 255]),
        73:
        dict(link=('sso_kpt30', 'sso_kpt2'), id=73, color=[128, 0, 255]),
        74:
        dict(link=('sso_kpt2', 'sso_kpt3'), id=74, color=[128, 0, 255]),
        75:
        dict(link=('sso_kpt3', 'sso_kpt4'), id=75, color=[128, 0, 255]),
        76:
        dict(link=('sso_kpt1', 'sso_kpt6'), id=76, color=[128, 0, 255]),
        77:
        dict(link=('sso_kpt6', 'sso_kpt25'), id=77, color=[128, 0, 255]),
        78:
        dict(link=('sso_kpt25', 'sso_kpt24'), id=78, color=[128, 0, 255]),
        79:
        dict(link=('sso_kpt24', 'sso_kpt23'), id=79, color=[128, 0, 255]),
        80:
        dict(link=('sso_kpt23', 'sso_kpt22'), id=80, color=[128, 0, 255]),
        81:
        dict(link=('sso_kpt22', 'sso_kpt21'), id=81, color=[128, 0, 255]),
        82:
        dict(link=('sso_kpt21', 'sso_kpt20'), id=82, color=[128, 0, 255]),
        83:
        dict(link=('sso_kpt20', 'sso_kpt19'), id=83, color=[128, 0, 255]),
        84:
        dict(link=('sso_kpt19', 'sso_kpt18'), id=84, color=[128, 0, 255]),
        85:
        dict(link=('sso_kpt18', 'sso_kpt17'), id=85, color=[128, 0, 255]),
        86:
        dict(link=('sso_kpt17', 'sso_kpt29'), id=86, color=[128, 0, 255]),
        87:
        dict(link=('sso_kpt29', 'sso_kpt28'), id=87, color=[128, 0, 255]),
        88:
        dict(link=('sso_kpt28', 'sso_kpt27'), id=88, color=[128, 0, 255]),
        89:
        dict(link=('sso_kpt27', 'sso_kpt26'), id=89, color=[128, 0, 255]),
        90:
        dict(link=('sso_kpt26', 'sso_kpt5'), id=90, color=[128, 0, 255]),
        91:
        dict(link=('sso_kpt5', 'sso_kpt6'), id=91, color=[128, 0, 255]),
        92:
        dict(link=('lso_kpt1', 'lso_kpt2'), id=92, color=[0, 128, 255]),
        93:
        dict(link=('lso_kpt2', 'lso_kpt7'), id=93, color=[0, 128, 255]),
        94:
        dict(link=('lso_kpt7', 'lso_kpt8'), id=94, color=[0, 128, 255]),
        95:
        dict(link=('lso_kpt8', 'lso_kpt9'), id=95, color=[0, 128, 255]),
        96:
        dict(link=('lso_kpt9', 'lso_kpt10'), id=96, color=[0, 128, 255]),
        97:
        dict(link=('lso_kpt10', 'lso_kpt11'), id=97, color=[0, 128, 255]),
        98:
        dict(link=('lso_kpt11', 'lso_kpt12'), id=98, color=[0, 128, 255]),
        99:
        dict(link=('lso_kpt12', 'lso_kpt13'), id=99, color=[0, 128, 255]),
        100:
        dict(link=('lso_kpt13', 'lso_kpt14'), id=100, color=[0, 128, 255]),
        101:
        dict(link=('lso_kpt14', 'lso_kpt15'), id=101, color=[0, 128, 255]),
        102:
        dict(link=('lso_kpt15', 'lso_kpt16'), id=102, color=[0, 128, 255]),
        103:
        dict(link=('lso_kpt16', 'lso_kpt17'), id=103, color=[0, 128, 255]),
        104:
        dict(link=('lso_kpt17', 'lso_kpt18'), id=104, color=[0, 128, 255]),
        105:
        dict(link=('lso_kpt18', 'lso_kpt19'), id=105, color=[0, 128, 255]),
        106:
        dict(link=('lso_kpt19', 'lso_kpt20'), id=106, color=[0, 128, 255]),
        107:
        dict(link=('lso_kpt20', 'lso_kpt39'), id=107, color=[0, 128, 255]),
        108:
        dict(link=('lso_kpt39', 'lso_kpt38'), id=108, color=[0, 128, 255]),
        109:
        dict(link=('lso_kpt38', 'lso_kpt4'), id=109, color=[0, 128, 255]),
        110:
        dict(link=('lso_kpt4', 'lso_kpt3'), id=110, color=[0, 128, 255]),
        111:
        dict(link=('lso_kpt3', 'lso_kpt2'), id=111, color=[0, 128, 255]),
        112:
        dict(link=('lso_kpt1', 'lso_kpt6'), id=112, color=[0, 128, 255]),
        113:
        dict(link=('lso_kpt6', 'lso_kpt33'), id=113, color=[0, 128, 255]),
        114:
        dict(link=('lso_kpt33', 'lso_kpt32'), id=114, color=[0, 128, 255]),
        115:
        dict(link=('lso_kpt32', 'lso_kpt31'), id=115, color=[0, 128, 255]),
        116:
        dict(link=('lso_kpt31', 'lso_kpt30'), id=116, color=[0, 128, 255]),
        117:
        dict(link=('lso_kpt30', 'lso_kpt29'), id=117, color=[0, 128, 255]),
        118:
        dict(link=('lso_kpt29', 'lso_kpt28'), id=118, color=[0, 128, 255]),
        119:
        dict(link=('lso_kpt28', 'lso_kpt27'), id=119, color=[0, 128, 255]),
        120:
        dict(link=('lso_kpt27', 'lso_kpt26'), id=120, color=[0, 128, 255]),
        121:
        dict(link=('lso_kpt26', 'lso_kpt25'), id=121, color=[0, 128, 255]),
        122:
        dict(link=('lso_kpt25', 'lso_kpt24'), id=122, color=[0, 128, 255]),
        123:
        dict(link=('lso_kpt24', 'lso_kpt23'), id=123, color=[0, 128, 255]),
        124:
        dict(link=('lso_kpt23', 'lso_kpt22'), id=124, color=[0, 128, 255]),
        125:
        dict(link=('lso_kpt22', 'lso_kpt21'), id=125, color=[0, 128, 255]),
        126:
        dict(link=('lso_kpt21', 'lso_kpt37'), id=126, color=[0, 128, 255]),
        127:
        dict(link=('lso_kpt37', 'lso_kpt36'), id=127, color=[0, 128, 255]),
        128:
        dict(link=('lso_kpt36', 'lso_kpt35'), id=128, color=[0, 128, 255]),
        129:
        dict(link=('lso_kpt35', 'lso_kpt34'), id=129, color=[0, 128, 255]),
        130:
        dict(link=('lso_kpt34', 'lso_kpt5'), id=130, color=[0, 128, 255]),
        131:
        dict(link=('lso_kpt5', 'lso_kpt6'), id=131, color=[0, 128, 255]),
        132:
        dict(link=('vest_kpt1', 'vest_kpt2'), id=132, color=[0, 128, 128]),
        133:
        dict(link=('vest_kpt2', 'vest_kpt7'), id=133, color=[0, 128, 128]),
        134:
        dict(link=('vest_kpt7', 'vest_kpt8'), id=134, color=[0, 128, 128]),
        135:
        dict(link=('vest_kpt8', 'vest_kpt9'), id=135, color=[0, 128, 128]),
        136:
        dict(link=('vest_kpt9', 'vest_kpt10'), id=136, color=[0, 128, 128]),
        137:
        dict(link=('vest_kpt10', 'vest_kpt11'), id=137, color=[0, 128, 128]),
        138:
        dict(link=('vest_kpt11', 'vest_kpt12'), id=138, color=[0, 128, 128]),
        139:
        dict(link=('vest_kpt12', 'vest_kpt13'), id=139, color=[0, 128, 128]),
        140:
        dict(link=('vest_kpt13', 'vest_kpt14'), id=140, color=[0, 128, 128]),
        141:
        dict(link=('vest_kpt14', 'vest_kpt15'), id=141, color=[0, 128, 128]),
        142:
        dict(link=('vest_kpt15', 'vest_kpt6'), id=142, color=[0, 128, 128]),
        143:
        dict(link=('vest_kpt6', 'vest_kpt1'), id=143, color=[0, 128, 128]),
        144:
        dict(link=('vest_kpt2', 'vest_kpt3'), id=144, color=[0, 128, 128]),
        145:
        dict(link=('vest_kpt3', 'vest_kpt4'), id=145, color=[0, 128, 128]),
        146:
        dict(link=('vest_kpt4', 'vest_kpt5'), id=146, color=[0, 128, 128]),
        147:
        dict(link=('vest_kpt5', 'vest_kpt6'), id=147, color=[0, 128, 128]),
        148:
        dict(link=('sling_kpt1', 'sling_kpt2'), id=148, color=[0, 0, 128]),
        149:
        dict(link=('sling_kpt2', 'sling_kpt8'), id=149, color=[0, 0, 128]),
        150:
        dict(link=('sling_kpt8', 'sling_kpt9'), id=150, color=[0, 0, 128]),
        151:
        dict(link=('sling_kpt9', 'sling_kpt10'), id=151, color=[0, 0, 128]),
        152:
        dict(link=('sling_kpt10', 'sling_kpt11'), id=152, color=[0, 0, 128]),
        153:
        dict(link=('sling_kpt11', 'sling_kpt12'), id=153, color=[0, 0, 128]),
        154:
        dict(link=('sling_kpt12', 'sling_kpt13'), id=154, color=[0, 0, 128]),
        155:
        dict(link=('sling_kpt13', 'sling_kpt14'), id=155, color=[0, 0, 128]),
        156:
        dict(link=('sling_kpt14', 'sling_kpt6'), id=156, color=[0, 0, 128]),
        157:
        dict(link=('sling_kpt2', 'sling_kpt7'), id=157, color=[0, 0, 128]),
        158:
        dict(link=('sling_kpt6', 'sling_kpt15'), id=158, color=[0, 0, 128]),
        159:
        dict(link=('sling_kpt2', 'sling_kpt3'), id=159, color=[0, 0, 128]),
        160:
        dict(link=('sling_kpt3', 'sling_kpt4'), id=160, color=[0, 0, 128]),
        161:
        dict(link=('sling_kpt4', 'sling_kpt5'), id=161, color=[0, 0, 128]),
        162:
        dict(link=('sling_kpt5', 'sling_kpt6'), id=162, color=[0, 0, 128]),
        163:
        dict(link=('sling_kpt1', 'sling_kpt6'), id=163, color=[0, 0, 128]),
        164:
        dict(
            link=('shorts_kpt1', 'shorts_kpt4'),
            id=164,
            color=[128, 128, 128]),
        165:
        dict(
            link=('shorts_kpt4', 'shorts_kpt5'),
            id=165,
            color=[128, 128, 128]),
        166:
        dict(
            link=('shorts_kpt5', 'shorts_kpt6'),
            id=166,
            color=[128, 128, 128]),
        167:
        dict(
            link=('shorts_kpt6', 'shorts_kpt7'),
            id=167,
            color=[128, 128, 128]),
        168:
        dict(
            link=('shorts_kpt7', 'shorts_kpt8'),
            id=168,
            color=[128, 128, 128]),
        169:
        dict(
            link=('shorts_kpt8', 'shorts_kpt9'),
            id=169,
            color=[128, 128, 128]),
        170:
        dict(
            link=('shorts_kpt9', 'shorts_kpt10'),
            id=170,
            color=[128, 128, 128]),
        171:
        dict(
            link=('shorts_kpt10', 'shorts_kpt3'),
            id=171,
            color=[128, 128, 128]),
        172:
        dict(
            link=('shorts_kpt3', 'shorts_kpt2'),
            id=172,
            color=[128, 128, 128]),
        173:
        dict(
            link=('shorts_kpt2', 'shorts_kpt1'),
            id=173,
            color=[128, 128, 128]),
        174:
        dict(
            link=('trousers_kpt1', 'trousers_kpt4'),
            id=174,
            color=[128, 0, 128]),
        175:
        dict(
            link=('trousers_kpt4', 'trousers_kpt5'),
            id=175,
            color=[128, 0, 128]),
        176:
        dict(
            link=('trousers_kpt5', 'trousers_kpt6'),
            id=176,
            color=[128, 0, 128]),
        177:
        dict(
            link=('trousers_kpt6', 'trousers_kpt7'),
            id=177,
            color=[128, 0, 128]),
        178:
        dict(
            link=('trousers_kpt7', 'trousers_kpt8'),
            id=178,
            color=[128, 0, 128]),
        179:
        dict(
            link=('trousers_kpt8', 'trousers_kpt9'),
            id=179,
            color=[128, 0, 128]),
        180:
        dict(
            link=('trousers_kpt9', 'trousers_kpt10'),
            id=180,
            color=[128, 0, 128]),
        181:
        dict(
            link=('trousers_kpt10', 'trousers_kpt11'),
            id=181,
            color=[128, 0, 128]),
        182:
        dict(
            link=('trousers_kpt11', 'trousers_kpt12'),
            id=182,
            color=[128, 0, 128]),
        183:
        dict(
            link=('trousers_kpt12', 'trousers_kpt13'),
            id=183,
            color=[128, 0, 128]),
        184:
        dict(
            link=('trousers_kpt13', 'trousers_kpt14'),
            id=184,
            color=[128, 0, 128]),
        185:
        dict(
            link=('trousers_kpt14', 'trousers_kpt3'),
            id=185,
            color=[128, 0, 128]),
        186:
        dict(
            link=('trousers_kpt3', 'trousers_kpt2'),
            id=186,
            color=[128, 0, 128]),
        187:
        dict(
            link=('trousers_kpt2', 'trousers_kpt1'),
            id=187,
            color=[128, 0, 128]),
        188:
        dict(link=('skirt_kpt1', 'skirt_kpt4'), id=188, color=[64, 128, 128]),
        189:
        dict(link=('skirt_kpt4', 'skirt_kpt5'), id=189, color=[64, 128, 128]),
        190:
        dict(link=('skirt_kpt5', 'skirt_kpt6'), id=190, color=[64, 128, 128]),
        191:
        dict(link=('skirt_kpt6', 'skirt_kpt7'), id=191, color=[64, 128, 128]),
        192:
        dict(link=('skirt_kpt7', 'skirt_kpt8'), id=192, color=[64, 128, 128]),
        193:
        dict(link=('skirt_kpt8', 'skirt_kpt3'), id=193, color=[64, 128, 128]),
        194:
        dict(link=('skirt_kpt3', 'skirt_kpt2'), id=194, color=[64, 128, 128]),
        195:
        dict(link=('skirt_kpt2', 'skirt_kpt1'), id=195, color=[64, 128, 128]),
        196:
        dict(link=('ssd_kpt1', 'ssd_kpt2'), id=196, color=[64, 64, 128]),
        197:
        dict(link=('ssd_kpt2', 'ssd_kpt7'), id=197, color=[64, 64, 128]),
        198:
        dict(link=('ssd_kpt7', 'ssd_kpt8'), id=198, color=[64, 64, 128]),
        199:
        dict(link=('ssd_kpt8', 'ssd_kpt9'), id=199, color=[64, 64, 128]),
        200:
        dict(link=('ssd_kpt9', 'ssd_kpt10'), id=200, color=[64, 64, 128]),
        201:
        dict(link=('ssd_kpt10', 'ssd_kpt11'), id=201, color=[64, 64, 128]),
        202:
        dict(link=('ssd_kpt11', 'ssd_kpt12'), id=202, color=[64, 64, 128]),
        203:
        dict(link=('ssd_kpt12', 'ssd_kpt13'), id=203, color=[64, 64, 128]),
        204:
        dict(link=('ssd_kpt13', 'ssd_kpt14'), id=204, color=[64, 64, 128]),
        205:
        dict(link=('ssd_kpt14', 'ssd_kpt15'), id=205, color=[64, 64, 128]),
        206:
        dict(link=('ssd_kpt15', 'ssd_kpt16'), id=206, color=[64, 64, 128]),
        207:
        dict(link=('ssd_kpt16', 'ssd_kpt17'), id=207, color=[64, 64, 128]),
        208:
        dict(link=('ssd_kpt17', 'ssd_kpt18'), id=208, color=[64, 64, 128]),
        209:
        dict(link=('ssd_kpt18', 'ssd_kpt19'), id=209, color=[64, 64, 128]),
        210:
        dict(link=('ssd_kpt19', 'ssd_kpt20'), id=210, color=[64, 64, 128]),
        211:
        dict(link=('ssd_kpt20', 'ssd_kpt21'), id=211, color=[64, 64, 128]),
        212:
        dict(link=('ssd_kpt21', 'ssd_kpt22'), id=212, color=[64, 64, 128]),
        213:
        dict(link=('ssd_kpt22', 'ssd_kpt23'), id=213, color=[64, 64, 128]),
        214:
        dict(link=('ssd_kpt23', 'ssd_kpt24'), id=214, color=[64, 64, 128]),
        215:
        dict(link=('ssd_kpt24', 'ssd_kpt25'), id=215, color=[64, 64, 128]),
        216:
        dict(link=('ssd_kpt25', 'ssd_kpt26'), id=216, color=[64, 64, 128]),
        217:
        dict(link=('ssd_kpt26', 'ssd_kpt27'), id=217, color=[64, 64, 128]),
        218:
        dict(link=('ssd_kpt27', 'ssd_kpt28'), id=218, color=[64, 64, 128]),
        219:
        dict(link=('ssd_kpt28', 'ssd_kpt29'), id=219, color=[64, 64, 128]),
        220:
        dict(link=('ssd_kpt29', 'ssd_kpt6'), id=220, color=[64, 64, 128]),
        221:
        dict(link=('ssd_kpt6', 'ssd_kpt5'), id=221, color=[64, 64, 128]),
        222:
        dict(link=('ssd_kpt5', 'ssd_kpt4'), id=222, color=[64, 64, 128]),
        223:
        dict(link=('ssd_kpt4', 'ssd_kpt3'), id=223, color=[64, 64, 128]),
        224:
        dict(link=('ssd_kpt3', 'ssd_kpt2'), id=224, color=[64, 64, 128]),
        225:
        dict(link=('ssd_kpt6', 'ssd_kpt1'), id=225, color=[64, 64, 128]),
        226:
        dict(link=('lsd_kpt1', 'lsd_kpt2'), id=226, color=[128, 64, 0]),
        227:
        dict(link=('lsd_kpt2', 'lsd_kpt7'), id=227, color=[128, 64, 0]),
        228:
        dict(link=('lsd_kpt7', 'lsd_kpt8'), id=228, color=[128, 64, 0]),
        229:
        dict(link=('lsd_kpt8', 'lsd_kpt9'), id=229, color=[128, 64, 0]),
        230:
        dict(link=('lsd_kpt9', 'lsd_kpt10'), id=230, color=[128, 64, 0]),
        231:
        dict(link=('lsd_kpt10', 'lsd_kpt11'), id=231, color=[128, 64, 0]),
        232:
        dict(link=('lsd_kpt11', 'lsd_kpt12'), id=232, color=[128, 64, 0]),
        233:
        dict(link=('lsd_kpt12', 'lsd_kpt13'), id=233, color=[128, 64, 0]),
        234:
        dict(link=('lsd_kpt13', 'lsd_kpt14'), id=234, color=[128, 64, 0]),
        235:
        dict(link=('lsd_kpt14', 'lsd_kpt15'), id=235, color=[128, 64, 0]),
        236:
        dict(link=('lsd_kpt15', 'lsd_kpt16'), id=236, color=[128, 64, 0]),
        237:
        dict(link=('lsd_kpt16', 'lsd_kpt17'), id=237, color=[128, 64, 0]),
        238:
        dict(link=('lsd_kpt17', 'lsd_kpt18'), id=238, color=[128, 64, 0]),
        239:
        dict(link=('lsd_kpt18', 'lsd_kpt19'), id=239, color=[128, 64, 0]),
        240:
        dict(link=('lsd_kpt19', 'lsd_kpt20'), id=240, color=[128, 64, 0]),
        241:
        dict(link=('lsd_kpt20', 'lsd_kpt21'), id=241, color=[128, 64, 0]),
        242:
        dict(link=('lsd_kpt21', 'lsd_kpt22'), id=242, color=[128, 64, 0]),
        243:
        dict(link=('lsd_kpt22', 'lsd_kpt23'), id=243, color=[128, 64, 0]),
        244:
        dict(link=('lsd_kpt23', 'lsd_kpt24'), id=244, color=[128, 64, 0]),
        245:
        dict(link=('lsd_kpt24', 'lsd_kpt25'), id=245, color=[128, 64, 0]),
        246:
        dict(link=('lsd_kpt25', 'lsd_kpt26'), id=246, color=[128, 64, 0]),
        247:
        dict(link=('lsd_kpt26', 'lsd_kpt27'), id=247, color=[128, 64, 0]),
        248:
        dict(link=('lsd_kpt27', 'lsd_kpt28'), id=248, color=[128, 64, 0]),
        249:
        dict(link=('lsd_kpt28', 'lsd_kpt29'), id=249, color=[128, 64, 0]),
        250:
        dict(link=('lsd_kpt29', 'lsd_kpt30'), id=250, color=[128, 64, 0]),
        251:
        dict(link=('lsd_kpt30', 'lsd_kpt31'), id=251, color=[128, 64, 0]),
        252:
        dict(link=('lsd_kpt31', 'lsd_kpt32'), id=252, color=[128, 64, 0]),
        253:
        dict(link=('lsd_kpt32', 'lsd_kpt33'), id=253, color=[128, 64, 0]),
        254:
        dict(link=('lsd_kpt33', 'lsd_kpt34'), id=254, color=[128, 64, 0]),
        255:
        dict(link=('lsd_kpt34', 'lsd_kpt35'), id=255, color=[128, 64, 0]),
        256:
        dict(link=('lsd_kpt35', 'lsd_kpt36'), id=256, color=[128, 64, 0]),
        257:
        dict(link=('lsd_kpt36', 'lsd_kpt37'), id=257, color=[128, 64, 0]),
        258:
        dict(link=('lsd_kpt37', 'lsd_kpt6'), id=258, color=[128, 64, 0]),
        259:
        dict(link=('lsd_kpt6', 'lsd_kpt5'), id=259, color=[128, 64, 0]),
        260:
        dict(link=('lsd_kpt5', 'lsd_kpt4'), id=260, color=[128, 64, 0]),
        261:
        dict(link=('lsd_kpt4', 'lsd_kpt3'), id=261, color=[128, 64, 0]),
        262:
        dict(link=('lsd_kpt3', 'lsd_kpt2'), id=262, color=[128, 64, 0]),
        263:
        dict(link=('lsd_kpt6', 'lsd_kpt1'), id=263, color=[128, 64, 0]),
        264:
        dict(link=('vd_kpt1', 'vd_kpt2'), id=264, color=[128, 64, 255]),
        265:
        dict(link=('vd_kpt2', 'vd_kpt7'), id=265, color=[128, 64, 255]),
        266:
        dict(link=('vd_kpt7', 'vd_kpt8'), id=266, color=[128, 64, 255]),
        267:
        dict(link=('vd_kpt8', 'vd_kpt9'), id=267, color=[128, 64, 255]),
        268:
        dict(link=('vd_kpt9', 'vd_kpt10'), id=268, color=[128, 64, 255]),
        269:
        dict(link=('vd_kpt10', 'vd_kpt11'), id=269, color=[128, 64, 255]),
        270:
        dict(link=('vd_kpt11', 'vd_kpt12'), id=270, color=[128, 64, 255]),
        271:
        dict(link=('vd_kpt12', 'vd_kpt13'), id=271, color=[128, 64, 255]),
        272:
        dict(link=('vd_kpt13', 'vd_kpt14'), id=272, color=[128, 64, 255]),
        273:
        dict(link=('vd_kpt14', 'vd_kpt15'), id=273, color=[128, 64, 255]),
        274:
        dict(link=('vd_kpt15', 'vd_kpt16'), id=274, color=[128, 64, 255]),
        275:
        dict(link=('vd_kpt16', 'vd_kpt17'), id=275, color=[128, 64, 255]),
        276:
        dict(link=('vd_kpt17', 'vd_kpt18'), id=276, color=[128, 64, 255]),
        277:
        dict(link=('vd_kpt18', 'vd_kpt19'), id=277, color=[128, 64, 255]),
        278:
        dict(link=('vd_kpt19', 'vd_kpt6'), id=278, color=[128, 64, 255]),
        279:
        dict(link=('vd_kpt6', 'vd_kpt5'), id=279, color=[128, 64, 255]),
        280:
        dict(link=('vd_kpt5', 'vd_kpt4'), id=280, color=[128, 64, 255]),
        281:
        dict(link=('vd_kpt4', 'vd_kpt3'), id=281, color=[128, 64, 255]),
        282:
        dict(link=('vd_kpt3', 'vd_kpt2'), id=282, color=[128, 64, 255]),
        283:
        dict(link=('vd_kpt6', 'vd_kpt1'), id=283, color=[128, 64, 255]),
        284:
        dict(link=('sd_kpt1', 'sd_kpt2'), id=284, color=[128, 64, 0]),
        285:
        dict(link=('sd_kpt2', 'sd_kpt8'), id=285, color=[128, 64, 0]),
        286:
        dict(link=('sd_kpt8', 'sd_kpt9'), id=286, color=[128, 64, 0]),
        287:
        dict(link=('sd_kpt9', 'sd_kpt10'), id=287, color=[128, 64, 0]),
        288:
        dict(link=('sd_kpt10', 'sd_kpt11'), id=288, color=[128, 64, 0]),
        289:
        dict(link=('sd_kpt11', 'sd_kpt12'), id=289, color=[128, 64, 0]),
        290:
        dict(link=('sd_kpt12', 'sd_kpt13'), id=290, color=[128, 64, 0]),
        291:
        dict(link=('sd_kpt13', 'sd_kpt14'), id=291, color=[128, 64, 0]),
        292:
        dict(link=('sd_kpt14', 'sd_kpt15'), id=292, color=[128, 64, 0]),
        293:
        dict(link=('sd_kpt15', 'sd_kpt16'), id=293, color=[128, 64, 0]),
        294:
        dict(link=('sd_kpt16', 'sd_kpt17'), id=294, color=[128, 64, 0]),
        295:
        dict(link=('sd_kpt17', 'sd_kpt18'), id=295, color=[128, 64, 0]),
        296:
        dict(link=('sd_kpt18', 'sd_kpt6'), id=296, color=[128, 64, 0]),
        297:
        dict(link=('sd_kpt6', 'sd_kpt5'), id=297, color=[128, 64, 0]),
        298:
        dict(link=('sd_kpt5', 'sd_kpt4'), id=298, color=[128, 64, 0]),
        299:
        dict(link=('sd_kpt4', 'sd_kpt3'), id=299, color=[128, 64, 0]),
        300:
        dict(link=('sd_kpt3', 'sd_kpt2'), id=300, color=[128, 64, 0]),
        301:
        dict(link=('sd_kpt2', 'sd_kpt7'), id=301, color=[128, 64, 0]),
        302:
        dict(link=('sd_kpt6', 'sd_kpt19'), id=302, color=[128, 64, 0]),
        303:
        dict(link=('sd_kpt6', 'sd_kpt1'), id=303, color=[128, 64, 0])
    }),
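    # 304 links in total (ids 0-303), covering all 13 clothing categories.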
    joint_weights=[
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0
    ],
    sigmas=[])
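# All 294 joints carry a uniform loss weight of 1.0, and `sigmas` is empty,
# so COCO-style OKS metrics are unavailable; evaluation relies on the
# PCK/AUC/EPE metrics configured below.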
param_scheduler = [
    dict(
        type='LinearLR', begin=0, end=500, start_factor=0.001, by_epoch=False),
    dict(
        type='MultiStepLR',
        begin=0,
        end=150,
        milestones=[100, 130],
        gamma=0.1,
        by_epoch=True)
]
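# LR schedule: linear warmup over the first 500 iterations, then step decay
# (x0.1) at epochs 100 and 130 of a 150-epoch run.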
optim_wrapper = dict(optimizer=dict(type='Adam', lr=0.0005))
auto_scale_lr = dict(base_batch_size=512)
dataset_type = 'DeepFashion2Dataset'
data_mode = 'topdown'
data_root = 'data/deepfashion2/'
codec = dict(
    type='MSRAHeatmap', input_size=(192, 256), heatmap_size=(48, 64), sigma=2)
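# The MSRAHeatmap codec renders each keypoint as a 2D Gaussian (sigma=2) on a
# 48x64 heatmap for the 192x256 input crop, i.e. an output stride of
# 192/48 = 256/64 = 4.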
train_pipeline = [
    dict(type='LoadImage'),
    dict(type='GetBBoxCenterScale'),
    dict(type='RandomFlip', direction='horizontal'),
    dict(
        type='RandomBBoxTransform',
        shift_prob=0,
        rotate_factor=60,
        scale_factor=(0.75, 1.25)),
    dict(type='TopdownAffine', input_size=(192, 256)),
    dict(
        type='GenerateTarget',
        encoder=dict(
            type='MSRAHeatmap',
            input_size=(192, 256),
            heatmap_size=(48, 64),
            sigma=2)),
    dict(type='PackPoseInputs')
]
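# The training pipeline applies horizontal flips and random bbox jitter
# (rotate_factor=60, scale_factor 0.75-1.25, shift disabled) before the
# affine crop, then generates heatmap targets with the same codec as above.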
val_pipeline = [
    dict(type='LoadImage', backend_args=dict(backend='local')),
    dict(type='GetBBoxCenterScale'),
    dict(type='TopdownAffine', input_size=(192, 256)),
    dict(type='PackPoseInputs')
]
train_dataloader = dict(
    batch_size=64,
    num_workers=6,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type='DeepFashion2Dataset',
        data_root='data/deepfashion2/',
        data_mode='topdown',
        ann_file='train/deepfashion2_short_sleeved_dress.json',
        data_prefix=dict(img='train/image/'),
        pipeline=[
            dict(type='LoadImage'),
            dict(type='GetBBoxCenterScale'),
            dict(type='RandomFlip', direction='horizontal'),
            dict(
                type='RandomBBoxTransform',
                shift_prob=0,
                rotate_factor=60,
                scale_factor=(0.75, 1.25)),
            dict(type='TopdownAffine', input_size=(192, 256)),
            dict(
                type='GenerateTarget',
                encoder=dict(
                    type='MSRAHeatmap',
                    input_size=(192, 256),
                    heatmap_size=(48, 64),
                    sigma=2)),
            dict(type='PackPoseInputs')
        ]))
val_dataloader = dict(
    batch_size=32,
    num_workers=6,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='DeepFashion2Dataset',
        data_root='data/deepfashion2/',
        data_mode='topdown',
        ann_file='validation/deepfashion2_short_sleeved_dress.json',
        data_prefix=dict(img='validation/image/'),
        test_mode=True,
        pipeline=[
            dict(type='LoadImage', backend_args=dict(backend='local')),
            dict(type='GetBBoxCenterScale'),
            dict(type='TopdownAffine', input_size=(192, 256)),
            dict(type='PackPoseInputs')
        ]))
test_dataloader = dict(
    batch_size=32,
    num_workers=6,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='DeepFashion2Dataset',
        data_root='data/deepfashion2/',
        data_mode='topdown',
        ann_file='validation/deepfashion2_short_sleeved_dress.json',
        data_prefix=dict(img='validation/image/'),
        test_mode=True,
        pipeline=[
            dict(type='LoadImage', backend_args=dict(backend='local')),
            dict(type='GetBBoxCenterScale'),
            dict(type='TopdownAffine', input_size=(192, 256)),
            dict(type='PackPoseInputs')
        ]))
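# val_dataloader and test_dataloader are deliberately identical: both read
# the DeepFashion2 validation annotations, since no separate annotated test
# split is configured here.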
channel_cfg = dict(
    num_output_channels=294,
    dataset_joints=294,
    dataset_channel=[[
        0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
        20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
        38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55,
        56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73,
        74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
        92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107,
        108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121,
        122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135,
        136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149,
        150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163,
        164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177,
        178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191,
        192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205,
        206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219,
        220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233,
        234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247,
        248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261,
        262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275,
        276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289,
        290, 291, 292, 293
    ]],
    inference_channel=[
        0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
        20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
        38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55,
        56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73,
        74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
        92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107,
        108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121,
        122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135,
        136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149,
        150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163,
        164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177,
        178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191,
        192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205,
        206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219,
        220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233,
        234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247,
        248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261,
        262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275,
        276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289,
        290, 291, 292, 293
    ])
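# channel_cfg mirrors the MMPose 0.x config schema (an identity mapping over
# all 294 channels); the 1.x TopdownPoseEstimator below does not appear to
# consume it, so it is likely kept in the dump for reference only.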
model = dict(
    type='TopdownPoseEstimator',
    data_preprocessor=dict(
        type='PoseDataPreprocessor',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        bgr_to_rgb=True),
    backbone=dict(
        type='ResNet',
        depth=50,
        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
    head=dict(
        type='HeatmapHead',
        in_channels=2048,
        out_channels=294,
        loss=dict(type='KeypointMSELoss', use_target_weight=True),
        decoder=dict(
            type='MSRAHeatmap',
            input_size=(192, 256),
            heatmap_size=(48, 64),
            sigma=2)),
    test_cfg=dict(flip_test=True, flip_mode='heatmap', shift_heatmap=True))
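# The model is a standard top-down baseline: an ImageNet-pretrained ResNet-50
# backbone feeding a heatmap head with 294 output channels (one per
# keypoint), with horizontal flip testing enabled at inference.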
val_evaluator = [
    dict(type='PCKAccuracy', thr=0.2),
    dict(type='AUC'),
    dict(type='EPE')
]
test_evaluator = [
    dict(type='PCKAccuracy', thr=0.2),
    dict(type='AUC'),
    dict(type='EPE')
]
launcher = 'pytorch'
work_dir = './work_dirs/td_hm_res50_4xb64-150e_deepfashion2_short_sleeved_dress_256x192'
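A minimal usage sketch, assuming MMPose 1.x with mmengine installed (the file path below is hypothetical): a dumped config like the one above can be loaded and inspected programmatically before training, and training itself is normally launched by passing the same file to MMPose's tools/train.py.

from mmengine.config import Config

# Hypothetical location of the dumped config shown above.
cfg = Config.fromfile(
    'work_dirs/td_hm_res50_4xb64-150e_deepfashion2_short_sleeved_dress_256x192'
    '/td_hm_res50_4xb64-150e_deepfashion2_short_sleeved_dress_256x192.py')

print(cfg.model.head.out_channels)          # 294 heatmap channels
print(len(cfg.dataset_info.keypoint_info))  # 294 keypoint definitions
print(cfg.train_dataloader.batch_size)      # 64 samples per GPU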
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet34_cifar.py
DELETED
@@ -1,16 +0,0 @@
-# model settings
-model = dict(
-    type='ImageClassifier',
-    backbone=dict(
-        type='ResNet_CIFAR',
-        depth=34,
-        num_stages=4,
-        out_indices=(3, ),
-        style='pytorch'),
-    neck=dict(type='GlobalAveragePooling'),
-    head=dict(
-        type='LinearClsHead',
-        num_classes=10,
-        in_channels=512,
-        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
-    ))

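As a sanity check on the config above: in_channels=512 for the linear head matches the channel width of ResNet-34's final stage after global average pooling. A small sketch, using torchvision's resnet34 as a stand-in for the ResNet_CIFAR backbone (an assumption; ResNet_CIFAR mainly differs in its stem):

    import torch
    from torchvision.models import resnet34

    backbone = resnet34()
    # Feed a layer3-sized feature map through the last stage and pool it.
    feat = backbone.avgpool(backbone.layer4(torch.randn(1, 256, 8, 8)))
    print(feat.flatten(1).shape)  # torch.Size([1, 512]) -> head in_channels
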
spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/LayoutWritable.js
DELETED
@@ -1,9 +0,0 @@
-import { writable } from "svelte/store";
-
-export const isloading_writable = writable(false);
-export const is_init_writable = writable(false);
-export const cancel_writable = writable(false);
-export const refresh_chats_writable = writable([]);
-export const refresh_chats_writable_empty = writable(false);
-export const curr_model_writable = writable(0);
-export const curr_model_writable_string = writable("");

spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/ymlachievements.d.ts
DELETED
@@ -1,2 +0,0 @@
-import Achievements from './logic/achievements/ymlachievements/Achievements';
-export default Achievements;

spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/holygrail/methods/CreatExpandContainer.js
DELETED
@@ -1,11 +0,0 @@
-import Sizer from '../../sizer/Sizer.js';
-
-var CreatExpandContainer = function (scene, orientation) {
-    var container = new Sizer(scene, {
-        orientation: orientation
-    })
-    scene.add.existing(container);
-    return container;
-}
-
-export default CreatExpandContainer;

spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateCanvas.js
DELETED
@@ -1,23 +0,0 @@
-import MergeStyle from './utils/MergeStyle.js';
-import Canvas from '../../canvas/Canvas.js';
-import SetTextureProperties from './utils/SetTextureProperties.js';
-
-
-var CreateCanvas = function (scene, data, view, styles, customBuilders) {
-    data = MergeStyle(data, styles);
-
-    var width = data.width || 1;
-    var height = data.height || 1;
-    var gameObject = new Canvas(scene, 0, 0, width, height);
-
-    if (data.fill !== undefined) {
-        gameObject.fill(data.fill);
-    }
-
-    SetTextureProperties(gameObject, data);
-
-    scene.add.existing(gameObject);
-    return gameObject;
-}
-
-export default CreateCanvas;

spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateNinePatch2.js
DELETED
@@ -1,12 +0,0 @@
-import MergeStyle from './utils/MergeStyle.js';
-import NinePatch from '../../ninepatch2/NinePatch.js';
-
-var CreateNinePatch2 = function (scene, data, view, styles, customBuilders) {
-    data = MergeStyle(data, styles);
-
-    var gameObject = new NinePatch(scene, data);
-
-    scene.add.existing(gameObject);
-    return gameObject;
-}
-export default CreateNinePatch2;

spaces/Agusbs98/automatic-ecg-diagnosis/nets/nets.py
DELETED
@@ -1,73 +0,0 @@
-
-import os, sys
-from libs import *
-from .layers import *
-from .modules import *
-from .bblocks import *
-from .backbones import *
-
-class LightX3ECG(nn.Module):
-    def __init__(self,
-        base_channels = 64,
-        num_classes = 1,
-    ):
-        super(LightX3ECG, self).__init__()
-        self.backbone_0 = LightSEResNet18(base_channels)
-        self.backbone_1 = LightSEResNet18(base_channels)
-        self.backbone_2 = LightSEResNet18(base_channels)
-        self.lw_attention = nn.Sequential(
-            nn.Linear(
-                base_channels*24, base_channels*8,
-            ),
-            nn.BatchNorm1d(base_channels*8),
-            nn.ReLU(),
-            nn.Dropout(0.3),
-            nn.Linear(
-                base_channels*8, 3,
-            ),
-        )
-
-        self.classifier = nn.Sequential(
-            nn.Dropout(0.2),
-            nn.Linear(
-                base_channels*8, num_classes,
-            ),
-        )
-
-    def forward(self,
-        input,
-        return_attention_scores = False,
-    ):
-        features_0 = self.backbone_0(input[:, 0, :].unsqueeze(1)).squeeze(2)
-        features_1 = self.backbone_1(input[:, 1, :].unsqueeze(1)).squeeze(2)
-        features_2 = self.backbone_2(input[:, 2, :].unsqueeze(1)).squeeze(2)
-        attention_scores = torch.sigmoid(
-            self.lw_attention(
-                torch.cat(
-                    [
-                        features_0,
-                        features_1,
-                        features_2,
-                    ],
-                    dim = 1,
-                )
-            )
-        )
-        merged_features = torch.sum(
-            torch.stack(
-                [
-                    features_0,
-                    features_1,
-                    features_2,
-                ],
-                dim = 1,
-            )*attention_scores.unsqueeze(-1),
-            dim = 1,
-        )
-
-        output = self.classifier(merged_features)
-
-        if not return_attention_scores:
-            return output
-        else:
-            return output, attention_scores

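For context, LightX3ECG above runs one SE-ResNet18 branch per ECG lead and fuses the three feature vectors with learned per-lead attention weights. A minimal, illustrative usage sketch, assuming the backbone imports in the file resolve; the shapes follow the forward() code, which expects a (batch, 3, length) tensor:

    import torch

    model = LightX3ECG(base_channels=64, num_classes=1)
    ecg = torch.randn(8, 3, 5000)  # 8 samples, 3 leads, 5000 time points
    logits, scores = model(ecg, return_attention_scores=True)
    print(logits.shape, scores.shape)  # torch.Size([8, 1]) torch.Size([8, 3])
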
spaces/Alichuan/VITS-Umamusume-voice-synthesizer/README.md
DELETED
@@ -1,13 +0,0 @@
----
-title: Multilingual Anime TTS
-emoji: 🎙🐴
-colorFrom: green
-colorTo: gray
-sdk: gradio
-sdk_version: 3.7
-app_file: app.py
-pinned: false
-duplicated_from: Plachta/VITS-Umamusume-voice-synthesizer
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

spaces/AlphaDragon/Voice-Clone/app.py
DELETED
@@ -1,80 +0,0 @@
-import gradio as gr
-from TTS.api import TTS
-
-# Init TTS
-tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False, gpu=False)
-zh_tts = TTS(model_name="tts_models/zh-CN/baker/tacotron2-DDC-GST", progress_bar=False, gpu=False)
-de_tts = TTS(model_name="tts_models/de/thorsten/vits", gpu=False)
-es_tts = TTS(model_name="tts_models/es/mai/tacotron2-DDC", progress_bar=False, gpu=False)
-
-def text_to_speech(text: str, speaker_wav, speaker_wav_file, language: str):
-    if speaker_wav_file and not speaker_wav:
-        speaker_wav = speaker_wav_file
-    file_path = "output.wav"
-    if language == "zh-CN":
-        # if speaker_wav is not None:
-        #     zh_tts.tts_to_file(text, speaker_wav=speaker_wav, file_path=file_path)
-        # else:
-        zh_tts.tts_to_file(text, file_path=file_path)
-    elif language == "de":
-        # if speaker_wav is not None:
-        #     de_tts.tts_to_file(text, speaker_wav=speaker_wav, file_path=file_path)
-        # else:
-        de_tts.tts_to_file(text, file_path=file_path)
-    elif language == "es":
-        # if speaker_wav is not None:
-        #     es_tts.tts_to_file(text, speaker_wav=speaker_wav, file_path=file_path)
-        # else:
-        es_tts.tts_to_file(text, file_path=file_path)
-    else:
-        if speaker_wav is not None:
-            tts.tts_to_file(text, speaker_wav=speaker_wav, language=language, file_path=file_path)
-        else:
-            tts.tts_to_file(text, speaker=tts.speakers[0], language=language, file_path=file_path)
-    return file_path
-
-
-
-title = "Voice-Cloning-Demo"
-
-def toggle(choice):
-    if choice == "mic":
-        return gr.update(visible=True, value=None), gr.update(visible=False, value=None)
-    else:
-        return gr.update(visible=False, value=None), gr.update(visible=True, value=None)
-
-def handle_language_change(choice):
-    if choice == "zh-CN" or choice == "de" or choice == "es":
-        return gr.update(visible=False), gr.update(visible=False), gr.update(visible=False)
-    else:
-        return gr.update(visible=True), gr.update(visible=True), gr.update(visible=True)
-
-warming_text = """Please note that Chinese, German, and Spanish are currently not supported for voice cloning."""
-
-with gr.Blocks() as demo:
-    with gr.Row():
-        with gr.Column():
-            text_input = gr.Textbox(label="Input the text", value="", max_lines=3)
-            lan_input = gr.Radio(label="Language", choices=["en", "fr-fr", "pt-br", "zh-CN", "de", "es"], value="en")
-            gr.Markdown(warming_text)
-            radio = gr.Radio(["mic", "file"], value="mic",
-                             label="How would you like to upload your audio?")
-            audio_input_mic = gr.Audio(label="Voice to clone", source="microphone", type="filepath", visible=True)
-            audio_input_file = gr.Audio(label="Voice to clone", type="filepath", visible=False)
-
-    with gr.Row():
-        with gr.Column():
-            btn_clear = gr.Button("Clear")
-        with gr.Column():
-            btn = gr.Button("Submit", variant="primary")
-        with gr.Column():
-            audio_output = gr.Audio(label="Output")
-
-    # gr.Examples(examples, fn=inference, inputs=[audio_file, text_input],
-    #             outputs=audio_output, cache_examples=True)
-    btn.click(text_to_speech, inputs=[text_input, audio_input_mic,
-                                      audio_input_file, lan_input], outputs=audio_output)
-    radio.change(toggle, radio, [audio_input_mic, audio_input_file])
-    lan_input.change(handle_language_change, lan_input, [radio, audio_input_mic, audio_input_file])
-
-demo.launch(enable_queue=True)

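The app above switches between microphone and file inputs by returning gr.update() objects from a radio-button callback. A self-contained sketch of that visibility-toggle pattern (hypothetical, written against the same Gradio 3.x API the app itself uses):

    import gradio as gr

    def toggle(choice):
        # Show the mic widget for "mic", the file widget otherwise.
        return gr.update(visible=choice == "mic"), gr.update(visible=choice != "mic")

    with gr.Blocks() as demo:
        radio = gr.Radio(["mic", "file"], value="mic", label="Audio source")
        mic = gr.Audio(source="microphone", type="filepath", visible=True)
        file_in = gr.Audio(type="filepath", visible=False)
        radio.change(toggle, radio, [mic, file_in])

    demo.launch()
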
spaces/Amrrs/pdf-table-extractor/README.md
DELETED
@@ -1,37 +0,0 @@
----
-title: Pdf Table Extractor
-emoji: 📄
-colorFrom: yellow
-colorTo: green
-sdk: streamlit
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio`, `streamlit`, or `static`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.

spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_models_vae.py
DELETED
@@ -1,600 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import gc
-import unittest
-
-import torch
-from parameterized import parameterized
-
-from diffusers import AsymmetricAutoencoderKL, AutoencoderKL
-from diffusers.utils import floats_tensor, load_hf_numpy, require_torch_gpu, slow, torch_all_close, torch_device
-from diffusers.utils.import_utils import is_xformers_available
-from diffusers.utils.testing_utils import enable_full_determinism
-
-from .test_modeling_common import ModelTesterMixin, UNetTesterMixin
-
-
-enable_full_determinism()
-
-
-class AutoencoderKLTests(ModelTesterMixin, UNetTesterMixin, unittest.TestCase):
-    model_class = AutoencoderKL
-    main_input_name = "sample"
-    base_precision = 1e-2
-
-    @property
-    def dummy_input(self):
-        batch_size = 4
-        num_channels = 3
-        sizes = (32, 32)
-
-        image = floats_tensor((batch_size, num_channels) + sizes).to(torch_device)
-
-        return {"sample": image}
-
-    @property
-    def input_shape(self):
-        return (3, 32, 32)
-
-    @property
-    def output_shape(self):
-        return (3, 32, 32)
-
-    def prepare_init_args_and_inputs_for_common(self):
-        init_dict = {
-            "block_out_channels": [32, 64],
-            "in_channels": 3,
-            "out_channels": 3,
-            "down_block_types": ["DownEncoderBlock2D", "DownEncoderBlock2D"],
-            "up_block_types": ["UpDecoderBlock2D", "UpDecoderBlock2D"],
-            "latent_channels": 4,
-        }
-        inputs_dict = self.dummy_input
-        return init_dict, inputs_dict
-
-    def test_forward_signature(self):
-        pass
-
-    def test_training(self):
-        pass
-
-    @unittest.skipIf(torch_device == "mps", "Gradient checkpointing skipped on MPS")
-    def test_gradient_checkpointing(self):
-        # enable deterministic behavior for gradient checkpointing
-        init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-        model = self.model_class(**init_dict)
-        model.to(torch_device)
-
-        assert not model.is_gradient_checkpointing and model.training
-
-        out = model(**inputs_dict).sample
-        # run the backwards pass on the model. For backwards pass, for simplicity purpose,
-        # we won't calculate the loss and rather backprop on out.sum()
-        model.zero_grad()
-
-        labels = torch.randn_like(out)
-        loss = (out - labels).mean()
-        loss.backward()
-
-        # re-instantiate the model now enabling gradient checkpointing
-        model_2 = self.model_class(**init_dict)
-        # clone model
-        model_2.load_state_dict(model.state_dict())
-        model_2.to(torch_device)
-        model_2.enable_gradient_checkpointing()
-
-        assert model_2.is_gradient_checkpointing and model_2.training
-
-        out_2 = model_2(**inputs_dict).sample
-        # run the backwards pass on the model. For backwards pass, for simplicity purpose,
-        # we won't calculate the loss and rather backprop on out.sum()
-        model_2.zero_grad()
-        loss_2 = (out_2 - labels).mean()
-        loss_2.backward()
-
-        # compare the output and parameters gradients
-        self.assertTrue((loss - loss_2).abs() < 1e-5)
-        named_params = dict(model.named_parameters())
-        named_params_2 = dict(model_2.named_parameters())
-        for name, param in named_params.items():
-            self.assertTrue(torch_all_close(param.grad.data, named_params_2[name].grad.data, atol=5e-5))
-
-    def test_from_pretrained_hub(self):
-        model, loading_info = AutoencoderKL.from_pretrained("fusing/autoencoder-kl-dummy", output_loading_info=True)
-        self.assertIsNotNone(model)
-        self.assertEqual(len(loading_info["missing_keys"]), 0)
-
-        model.to(torch_device)
-        image = model(**self.dummy_input)
-
-        assert image is not None, "Make sure output is not None"
-
-    def test_output_pretrained(self):
-        model = AutoencoderKL.from_pretrained("fusing/autoencoder-kl-dummy")
-        model = model.to(torch_device)
-        model.eval()
-
-        if torch_device == "mps":
-            generator = torch.manual_seed(0)
-        else:
-            generator = torch.Generator(device=torch_device).manual_seed(0)
-
-        image = torch.randn(
-            1,
-            model.config.in_channels,
-            model.config.sample_size,
-            model.config.sample_size,
-            generator=torch.manual_seed(0),
-        )
-        image = image.to(torch_device)
-        with torch.no_grad():
-            output = model(image, sample_posterior=True, generator=generator).sample
-
-        output_slice = output[0, -1, -3:, -3:].flatten().cpu()
-
-        # Since the VAE Gaussian prior's generator is seeded on the appropriate device,
-        # the expected output slices are not the same for CPU and GPU.
-        if torch_device == "mps":
-            expected_output_slice = torch.tensor(
-                [
-                    -4.0078e-01,
-                    -3.8323e-04,
-                    -1.2681e-01,
-                    -1.1462e-01,
-                    2.0095e-01,
-                    1.0893e-01,
-                    -8.8247e-02,
-                    -3.0361e-01,
-                    -9.8644e-03,
-                ]
-            )
-        elif torch_device == "cpu":
-            expected_output_slice = torch.tensor(
-                [-0.1352, 0.0878, 0.0419, -0.0818, -0.1069, 0.0688, -0.1458, -0.4446, -0.0026]
-            )
-        else:
-            expected_output_slice = torch.tensor(
-                [-0.2421, 0.4642, 0.2507, -0.0438, 0.0682, 0.3160, -0.2018, -0.0727, 0.2485]
-            )
-
-        self.assertTrue(torch_all_close(output_slice, expected_output_slice, rtol=1e-2))
-
-
-class AsymmetricAutoencoderKLTests(ModelTesterMixin, UNetTesterMixin, unittest.TestCase):
-    model_class = AsymmetricAutoencoderKL
-    main_input_name = "sample"
-    base_precision = 1e-2
-
-    @property
-    def dummy_input(self):
-        batch_size = 4
-        num_channels = 3
-        sizes = (32, 32)
-
-        image = floats_tensor((batch_size, num_channels) + sizes).to(torch_device)
-        mask = torch.ones((batch_size, 1) + sizes).to(torch_device)
-
-        return {"sample": image, "mask": mask}
-
-    @property
-    def input_shape(self):
-        return (3, 32, 32)
-
-    @property
-    def output_shape(self):
-        return (3, 32, 32)
-
-    def prepare_init_args_and_inputs_for_common(self):
-        init_dict = {
-            "in_channels": 3,
-            "out_channels": 3,
-            "down_block_types": ["DownEncoderBlock2D", "DownEncoderBlock2D"],
-            "down_block_out_channels": [32, 64],
-            "layers_per_down_block": 1,
-            "up_block_types": ["UpDecoderBlock2D", "UpDecoderBlock2D"],
-            "up_block_out_channels": [32, 64],
-            "layers_per_up_block": 1,
-            "act_fn": "silu",
-            "latent_channels": 4,
-            "norm_num_groups": 32,
-            "sample_size": 32,
-            "scaling_factor": 0.18215,
-        }
-        inputs_dict = self.dummy_input
-        return init_dict, inputs_dict
-
-    def test_forward_signature(self):
-        pass
-
-    def test_forward_with_norm_groups(self):
-        pass
-
-
-@slow
-class AutoencoderKLIntegrationTests(unittest.TestCase):
-    def get_file_format(self, seed, shape):
-        return f"gaussian_noise_s={seed}_shape={'_'.join([str(s) for s in shape])}.npy"
-
-    def tearDown(self):
-        # clean up the VRAM after each test
-        super().tearDown()
-        gc.collect()
-        torch.cuda.empty_cache()
-
-    def get_sd_image(self, seed=0, shape=(4, 3, 512, 512), fp16=False):
-        dtype = torch.float16 if fp16 else torch.float32
-        image = torch.from_numpy(load_hf_numpy(self.get_file_format(seed, shape))).to(torch_device).to(dtype)
-        return image
-
-    def get_sd_vae_model(self, model_id="CompVis/stable-diffusion-v1-4", fp16=False):
-        revision = "fp16" if fp16 else None
-        torch_dtype = torch.float16 if fp16 else torch.float32
-
-        model = AutoencoderKL.from_pretrained(
-            model_id,
-            subfolder="vae",
-            torch_dtype=torch_dtype,
-            revision=revision,
-        )
-        model.to(torch_device)
-
-        return model
-
-    def get_generator(self, seed=0):
-        if torch_device == "mps":
-            return torch.manual_seed(seed)
-        return torch.Generator(device=torch_device).manual_seed(seed)
-
-    @parameterized.expand(
-        [
-            # fmt: off
-            [33, [-0.1603, 0.9878, -0.0495, -0.0790, -0.2709, 0.8375, -0.2060, -0.0824], [-0.2395, 0.0098, 0.0102, -0.0709, -0.2840, -0.0274, -0.0718, -0.1824]],
-            [47, [-0.2376, 0.1168, 0.1332, -0.4840, -0.2508, -0.0791, -0.0493, -0.4089], [0.0350, 0.0847, 0.0467, 0.0344, -0.0842, -0.0547, -0.0633, -0.1131]],
-            # fmt: on
-        ]
-    )
-    def test_stable_diffusion(self, seed, expected_slice, expected_slice_mps):
-        model = self.get_sd_vae_model()
-        image = self.get_sd_image(seed)
-        generator = self.get_generator(seed)
-
-        with torch.no_grad():
-            sample = model(image, generator=generator, sample_posterior=True).sample
-
-        assert sample.shape == image.shape
-
-        output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu()
-        expected_output_slice = torch.tensor(expected_slice_mps if torch_device == "mps" else expected_slice)
-
-        assert torch_all_close(output_slice, expected_output_slice, atol=3e-3)
-
-    @parameterized.expand(
-        [
-            # fmt: off
-            [33, [-0.0513, 0.0289, 1.3799, 0.2166, -0.2573, -0.0871, 0.5103, -0.0999]],
-            [47, [-0.4128, -0.1320, -0.3704, 0.1965, -0.4116, -0.2332, -0.3340, 0.2247]],
-            # fmt: on
-        ]
-    )
-    @require_torch_gpu
-    def test_stable_diffusion_fp16(self, seed, expected_slice):
-        model = self.get_sd_vae_model(fp16=True)
-        image = self.get_sd_image(seed, fp16=True)
-        generator = self.get_generator(seed)
-
-        with torch.no_grad():
-            sample = model(image, generator=generator, sample_posterior=True).sample
-
-        assert sample.shape == image.shape
-
-        output_slice = sample[-1, -2:, :2, -2:].flatten().float().cpu()
-        expected_output_slice = torch.tensor(expected_slice)
-
-        assert torch_all_close(output_slice, expected_output_slice, atol=1e-2)
-
-    @parameterized.expand(
-        [
-            # fmt: off
-            [33, [-0.1609, 0.9866, -0.0487, -0.0777, -0.2716, 0.8368, -0.2055, -0.0814], [-0.2395, 0.0098, 0.0102, -0.0709, -0.2840, -0.0274, -0.0718, -0.1824]],
-            [47, [-0.2377, 0.1147, 0.1333, -0.4841, -0.2506, -0.0805, -0.0491, -0.4085], [0.0350, 0.0847, 0.0467, 0.0344, -0.0842, -0.0547, -0.0633, -0.1131]],
-            # fmt: on
-        ]
-    )
-    def test_stable_diffusion_mode(self, seed, expected_slice, expected_slice_mps):
-        model = self.get_sd_vae_model()
-        image = self.get_sd_image(seed)
-
-        with torch.no_grad():
-            sample = model(image).sample
-
-        assert sample.shape == image.shape
-
-        output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu()
-        expected_output_slice = torch.tensor(expected_slice_mps if torch_device == "mps" else expected_slice)
-
-        assert torch_all_close(output_slice, expected_output_slice, atol=3e-3)
-
-    @parameterized.expand(
-        [
-            # fmt: off
-            [13, [-0.2051, -0.1803, -0.2311, -0.2114, -0.3292, -0.3574, -0.2953, -0.3323]],
-            [37, [-0.2632, -0.2625, -0.2199, -0.2741, -0.4539, -0.4990, -0.3720, -0.4925]],
-            # fmt: on
-        ]
-    )
-    @require_torch_gpu
-    def test_stable_diffusion_decode(self, seed, expected_slice):
-        model = self.get_sd_vae_model()
-        encoding = self.get_sd_image(seed, shape=(3, 4, 64, 64))
-
-        with torch.no_grad():
-            sample = model.decode(encoding).sample
-
-        assert list(sample.shape) == [3, 3, 512, 512]
-
-        output_slice = sample[-1, -2:, :2, -2:].flatten().cpu()
-        expected_output_slice = torch.tensor(expected_slice)
-
-        assert torch_all_close(output_slice, expected_output_slice, atol=1e-3)
-
-    @parameterized.expand(
-        [
-            # fmt: off
-            [27, [-0.0369, 0.0207, -0.0776, -0.0682, -0.1747, -0.1930, -0.1465, -0.2039]],
-            [16, [-0.1628, -0.2134, -0.2747, -0.2642, -0.3774, -0.4404, -0.3687, -0.4277]],
-            # fmt: on
-        ]
-    )
-    @require_torch_gpu
-    def test_stable_diffusion_decode_fp16(self, seed, expected_slice):
-        model = self.get_sd_vae_model(fp16=True)
-        encoding = self.get_sd_image(seed, shape=(3, 4, 64, 64), fp16=True)
-
-        with torch.no_grad():
-            sample = model.decode(encoding).sample
-
-        assert list(sample.shape) == [3, 3, 512, 512]
-
-        output_slice = sample[-1, -2:, :2, -2:].flatten().float().cpu()
-        expected_output_slice = torch.tensor(expected_slice)
-
-        assert torch_all_close(output_slice, expected_output_slice, atol=5e-3)
-
-    @parameterized.expand([(13,), (16,), (27,)])
-    @require_torch_gpu
-    @unittest.skipIf(not is_xformers_available(), reason="xformers is not required when using PyTorch 2.0.")
-    def test_stable_diffusion_decode_xformers_vs_2_0_fp16(self, seed):
-        model = self.get_sd_vae_model(fp16=True)
-        encoding = self.get_sd_image(seed, shape=(3, 4, 64, 64), fp16=True)
-
-        with torch.no_grad():
-            sample = model.decode(encoding).sample
-
-        model.enable_xformers_memory_efficient_attention()
-        with torch.no_grad():
-            sample_2 = model.decode(encoding).sample
-
-        assert list(sample.shape) == [3, 3, 512, 512]
-
-        assert torch_all_close(sample, sample_2, atol=1e-1)
-
-    @parameterized.expand([(13,), (16,), (37,)])
-    @require_torch_gpu
-    @unittest.skipIf(not is_xformers_available(), reason="xformers is not required when using PyTorch 2.0.")
-    def test_stable_diffusion_decode_xformers_vs_2_0(self, seed):
-        model = self.get_sd_vae_model()
-        encoding = self.get_sd_image(seed, shape=(3, 4, 64, 64))
-
-        with torch.no_grad():
-            sample = model.decode(encoding).sample
-
-        model.enable_xformers_memory_efficient_attention()
-        with torch.no_grad():
-            sample_2 = model.decode(encoding).sample
-
-        assert list(sample.shape) == [3, 3, 512, 512]
-
-        assert torch_all_close(sample, sample_2, atol=1e-2)
-
-    @parameterized.expand(
-        [
-            # fmt: off
-            [33, [-0.3001, 0.0918, -2.6984, -3.9720, -3.2099, -5.0353, 1.7338, -0.2065, 3.4267]],
-            [47, [-1.5030, -4.3871, -6.0355, -9.1157, -1.6661, -2.7853, 2.1607, -5.0823, 2.5633]],
-            # fmt: on
-        ]
-    )
-    def test_stable_diffusion_encode_sample(self, seed, expected_slice):
-        model = self.get_sd_vae_model()
-        image = self.get_sd_image(seed)
-        generator = self.get_generator(seed)
-
-        with torch.no_grad():
-            dist = model.encode(image).latent_dist
-            sample = dist.sample(generator=generator)
-
-        assert list(sample.shape) == [image.shape[0], 4] + [i // 8 for i in image.shape[2:]]
-
-        output_slice = sample[0, -1, -3:, -3:].flatten().cpu()
-        expected_output_slice = torch.tensor(expected_slice)
-
-        tolerance = 3e-3 if torch_device != "mps" else 1e-2
-        assert torch_all_close(output_slice, expected_output_slice, atol=tolerance)
-
-    def test_stable_diffusion_model_local(self):
-        model_id = "stabilityai/sd-vae-ft-mse"
-        model_1 = AutoencoderKL.from_pretrained(model_id).to(torch_device)
-
-        url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors"
-        model_2 = AutoencoderKL.from_single_file(url).to(torch_device)
-        image = self.get_sd_image(33)
-
-        with torch.no_grad():
-            sample_1 = model_1(image).sample
-            sample_2 = model_2(image).sample
-
-        assert sample_1.shape == sample_2.shape
-
-        output_slice_1 = sample_1[-1, -2:, -2:, :2].flatten().float().cpu()
-        output_slice_2 = sample_2[-1, -2:, -2:, :2].flatten().float().cpu()
-
-        assert torch_all_close(output_slice_1, output_slice_2, atol=3e-3)
-
-
-@slow
-class AsymmetricAutoencoderKLIntegrationTests(unittest.TestCase):
-    def get_file_format(self, seed, shape):
-        return f"gaussian_noise_s={seed}_shape={'_'.join([str(s) for s in shape])}.npy"
-
-    def tearDown(self):
-        # clean up the VRAM after each test
-        super().tearDown()
-        gc.collect()
-        torch.cuda.empty_cache()
-
-    def get_sd_image(self, seed=0, shape=(4, 3, 512, 512), fp16=False):
-        dtype = torch.float16 if fp16 else torch.float32
-        image = torch.from_numpy(load_hf_numpy(self.get_file_format(seed, shape))).to(torch_device).to(dtype)
-        return image
-
-    def get_sd_vae_model(self, model_id="cross-attention/asymmetric-autoencoder-kl-x-1-5", fp16=False):
-        revision = "main"
-        torch_dtype = torch.float32
-
-        model = AsymmetricAutoencoderKL.from_pretrained(
-            model_id,
-            torch_dtype=torch_dtype,
-            revision=revision,
-        )
-        model.to(torch_device).eval()
-
-        return model
-
-    def get_generator(self, seed=0):
-        if torch_device == "mps":
-            return torch.manual_seed(seed)
-        return torch.Generator(device=torch_device).manual_seed(seed)
-
-    @parameterized.expand(
-        [
-            # fmt: off
-            [33, [-0.0344, 0.2912, 0.1687, -0.0137, -0.3462, 0.3552, -0.1337, 0.1078], [-0.1603, 0.9878, -0.0495, -0.0790, -0.2709, 0.8375, -0.2060, -0.0824]],
-            [47, [0.4400, 0.0543, 0.2873, 0.2946, 0.0553, 0.0839, -0.1585, 0.2529], [-0.2376, 0.1168, 0.1332, -0.4840, -0.2508, -0.0791, -0.0493, -0.4089]],
-            # fmt: on
-        ]
-    )
-    def test_stable_diffusion(self, seed, expected_slice, expected_slice_mps):
-        model = self.get_sd_vae_model()
-        image = self.get_sd_image(seed)
-        generator = self.get_generator(seed)
-
-        with torch.no_grad():
-            sample = model(image, generator=generator, sample_posterior=True).sample
-
-        assert sample.shape == image.shape
-
-        output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu()
-        expected_output_slice = torch.tensor(expected_slice_mps if torch_device == "mps" else expected_slice)
-
-        assert torch_all_close(output_slice, expected_output_slice, atol=5e-3)
-
-    @parameterized.expand(
-        [
-            # fmt: off
-            [33, [-0.0340, 0.2870, 0.1698, -0.0105, -0.3448, 0.3529, -0.1321, 0.1097], [-0.0344, 0.2912, 0.1687, -0.0137, -0.3462, 0.3552, -0.1337, 0.1078]],
-            [47, [0.4397, 0.0550, 0.2873, 0.2946, 0.0567, 0.0855, -0.1580, 0.2531], [0.4397, 0.0550, 0.2873, 0.2946, 0.0567, 0.0855, -0.1580, 0.2531]],
-            # fmt: on
-        ]
-    )
-    def test_stable_diffusion_mode(self, seed, expected_slice, expected_slice_mps):
-        model = self.get_sd_vae_model()
-        image = self.get_sd_image(seed)
-
-        with torch.no_grad():
-            sample = model(image).sample
-
-        assert sample.shape == image.shape
-
-        output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu()
-        expected_output_slice = torch.tensor(expected_slice_mps if torch_device == "mps" else expected_slice)
-
-        assert torch_all_close(output_slice, expected_output_slice, atol=3e-3)
-
-    @parameterized.expand(
-        [
-            # fmt: off
-            [13, [-0.0521, -0.2939, 0.1540, -0.1855, -0.5936, -0.3138, -0.4579, -0.2275]],
-            [37, [-0.1820, -0.4345, -0.0455, -0.2923, -0.8035, -0.5089, -0.4795, -0.3106]],
-            # fmt: on
-        ]
-    )
-    @require_torch_gpu
-    def test_stable_diffusion_decode(self, seed, expected_slice):
-        model = self.get_sd_vae_model()
-        encoding = self.get_sd_image(seed, shape=(3, 4, 64, 64))
-
-        with torch.no_grad():
-            sample = model.decode(encoding).sample
-
-        assert list(sample.shape) == [3, 3, 512, 512]
-
-        output_slice = sample[-1, -2:, :2, -2:].flatten().cpu()
-        expected_output_slice = torch.tensor(expected_slice)
-
-        assert torch_all_close(output_slice, expected_output_slice, atol=2e-3)
-
-    @parameterized.expand([(13,), (16,), (37,)])
-    @require_torch_gpu
-    @unittest.skipIf(not is_xformers_available(), reason="xformers is not required when using PyTorch 2.0.")
-    def test_stable_diffusion_decode_xformers_vs_2_0(self, seed):
-        model = self.get_sd_vae_model()
-        encoding = self.get_sd_image(seed, shape=(3, 4, 64, 64))
-
-        with torch.no_grad():
-            sample = model.decode(encoding).sample
-
-        model.enable_xformers_memory_efficient_attention()
-        with torch.no_grad():
-            sample_2 = model.decode(encoding).sample
-
-        assert list(sample.shape) == [3, 3, 512, 512]
-
-        assert torch_all_close(sample, sample_2, atol=5e-2)
-
-    @parameterized.expand(
-        [
-            # fmt: off
-            [33, [-0.3001, 0.0918, -2.6984, -3.9720, -3.2099, -5.0353, 1.7338, -0.2065, 3.4267]],
-            [47, [-1.5030, -4.3871, -6.0355, -9.1157, -1.6661, -2.7853, 2.1607, -5.0823, 2.5633]],
-            # fmt: on
-        ]
-    )
-    def test_stable_diffusion_encode_sample(self, seed, expected_slice):
-        model = self.get_sd_vae_model()
-        image = self.get_sd_image(seed)
-        generator = self.get_generator(seed)
-
-        with torch.no_grad():
-            dist = model.encode(image).latent_dist
-            sample = dist.sample(generator=generator)
-
-        assert list(sample.shape) == [image.shape[0], 4] + [i // 8 for i in image.shape[2:]]
-
-        output_slice = sample[0, -1, -3:, -3:].flatten().cpu()
-        expected_output_slice = torch.tensor(expected_slice)
-
-        tolerance = 3e-3 if torch_device != "mps" else 1e-2
-        assert torch_all_close(output_slice, expected_output_slice, atol=tolerance)

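The deleted tests above all revolve around the same encode/decode round trip of diffusers' AutoencoderKL. A minimal sketch of that public API, using the same sd-vae-ft-mse checkpoint the local-model test loads:

    import torch
    from diffusers import AutoencoderKL

    vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()
    image = torch.randn(1, 3, 512, 512)
    with torch.no_grad():
        latents = vae.encode(image).latent_dist.sample()  # (1, 4, 64, 64), 8x downsampled
        recon = vae.decode(latents).sample                # (1, 3, 512, 512)
    print(latents.shape, recon.shape)
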
spaces/Andy1621/uniformer_image_detection/configs/gfl/gfl_x101_32x4d_fpn_mstrain_2x_coco.py
DELETED
@@ -1,15 +0,0 @@
-_base_ = './gfl_r50_fpn_mstrain_2x_coco.py'
-model = dict(
-    type='GFL',
-    pretrained='open-mmlab://resnext101_32x4d',
-    backbone=dict(
-        type='ResNeXt',
-        depth=101,
-        groups=32,
-        base_width=4,
-        num_stages=4,
-        out_indices=(0, 1, 2, 3),
-        frozen_stages=1,
-        norm_cfg=dict(type='BN', requires_grad=True),
-        norm_eval=True,
-        style='pytorch'))

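The 15-line config above works through inheritance: `_base_` pulls in the full GFL R-50 pipeline and the file only overrides the backbone. A hedged sketch of inspecting the merged result (assuming the older mmcv Config API this MMDetection 2.x codebase uses, and a local copy of both config files):

    from mmcv import Config

    cfg = Config.fromfile('gfl_x101_32x4d_fpn_mstrain_2x_coco.py')
    # The override replaces only model.backbone; everything else comes from _base_.
    assert cfg.model.backbone.type == 'ResNeXt'
    assert cfg.model.backbone.depth == 101
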
spaces/Andy1621/uniformer_image_detection/configs/retinanet/retinanet_r50_fpn_2x_coco.py
DELETED
@@ -1,4 +0,0 @@
-_base_ = './retinanet_r50_fpn_1x_coco.py'
-# learning policy
-lr_config = dict(step=[16, 22])
-runner = dict(type='EpochBasedRunner', max_epochs=24)

spaces/AndySAnker/DeepStruc/predict.py
DELETED
@@ -1,30 +0,0 @@
-import sys, argparse
-import streamlit as st
-from tools.module import Net
-import torch, random, time
-import numpy as np
-import pytorch_lightning as pl
-from tools.utils import get_data, format_predictions, plot_ls, get_model, save_predictions
-
-def main(args):
-    time_start = time.time()
-    data, data_name, project_name = get_data(args)
-    model_path, model_arch = get_model(args.model)
-
-    Net(model_arch=model_arch)
-    DeepStruc = Net.load_from_checkpoint(model_path,model_arch=model_arch)
-    #start_time = time.time()
-    xyz_pred, latent_space, kl, mu, sigma = DeepStruc(data, mode='prior', sigma_scale=args.sigma)
-    #st.write("one prediction: " , time.time() - start_time)
-    #start_time = time.time()
-    #for i in range(1000):
-    #    xyz_pred, latent_space, kl, mu, sigma = DeepStruc(data, mode='prior', sigma_scale=args.sigma)
-    #st.write("thousand predictions: " , time.time() - start_time)
-
-    samling_pairs = format_predictions(latent_space, data_name, mu, sigma, args.sigma)
-
-    df, mk_dir, index_highlight = samling_pairs, project_name, args.index_plot
-
-    these_cords = save_predictions(xyz_pred, samling_pairs, project_name, model_arch, args)
-
-    return df, index_highlight, these_cords

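predict.py above leans on PyTorch Lightning's checkpoint restore: `Net.load_from_checkpoint(path, model_arch=...)` rebuilds the module and loads its weights in one call. A hypothetical minimal sketch of that pattern (MyNet and 'model.ckpt' are illustrative stand-ins, not DeepStruc code):

    import pytorch_lightning as pl
    import torch.nn as nn

    class MyNet(pl.LightningModule):
        def __init__(self, model_arch="small"):
            super().__init__()
            self.save_hyperparameters()  # lets load_from_checkpoint restore kwargs
            self.layer = nn.Linear(4, 2)

    # net = MyNet.load_from_checkpoint('model.ckpt', model_arch="small")
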
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/deform_conv.py
DELETED
@@ -1,405 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Tuple, Union
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch import Tensor
-from torch.autograd import Function
-from torch.autograd.function import once_differentiable
-from torch.nn.modules.utils import _pair, _single
-
-from annotator.uniformer.mmcv.utils import deprecated_api_warning
-from ..cnn import CONV_LAYERS
-from ..utils import ext_loader, print_log
-
-ext_module = ext_loader.load_ext('_ext', [
-    'deform_conv_forward', 'deform_conv_backward_input',
-    'deform_conv_backward_parameters'
-])
-
-
-class DeformConv2dFunction(Function):
-
-    @staticmethod
-    def symbolic(g,
-                 input,
-                 offset,
-                 weight,
-                 stride,
-                 padding,
-                 dilation,
-                 groups,
-                 deform_groups,
-                 bias=False,
-                 im2col_step=32):
-        return g.op(
-            'mmcv::MMCVDeformConv2d',
-            input,
-            offset,
-            weight,
-            stride_i=stride,
-            padding_i=padding,
-            dilation_i=dilation,
-            groups_i=groups,
-            deform_groups_i=deform_groups,
-            bias_i=bias,
-            im2col_step_i=im2col_step)
-
-    @staticmethod
-    def forward(ctx,
-                input,
-                offset,
-                weight,
-                stride=1,
-                padding=0,
-                dilation=1,
-                groups=1,
-                deform_groups=1,
-                bias=False,
-                im2col_step=32):
-        if input is not None and input.dim() != 4:
-            raise ValueError(
-                f'Expected 4D tensor as input, got {input.dim()}D tensor \
-                instead.')
-        assert bias is False, 'Only support bias is False.'
-        ctx.stride = _pair(stride)
-        ctx.padding = _pair(padding)
-        ctx.dilation = _pair(dilation)
-        ctx.groups = groups
-        ctx.deform_groups = deform_groups
-        ctx.im2col_step = im2col_step
-
-        # When pytorch version >= 1.6.0, amp is adopted for fp16 mode;
-        # amp won't cast the type of model (float32), but "offset" is cast
-        # to float16 by nn.Conv2d automatically, leading to the type
-        # mismatch with input (when it is float32) or weight.
-        # The flag for whether to use fp16 or amp is the type of "offset",
-        # we cast weight and input to temporarily support fp16 and amp
-        # whatever the pytorch version is.
-        input = input.type_as(offset)
-        weight = weight.type_as(input)
-        ctx.save_for_backward(input, offset, weight)
-
-        output = input.new_empty(
-            DeformConv2dFunction._output_size(ctx, input, weight))
-
-        ctx.bufs_ = [input.new_empty(0), input.new_empty(0)]  # columns, ones
-
-        cur_im2col_step = min(ctx.im2col_step, input.size(0))
-        assert (input.size(0) %
-                cur_im2col_step) == 0, 'im2col step must divide batchsize'
-        ext_module.deform_conv_forward(
-            input,
-            weight,
-            offset,
-            output,
-            ctx.bufs_[0],
-            ctx.bufs_[1],
-            kW=weight.size(3),
-            kH=weight.size(2),
-            dW=ctx.stride[1],
-            dH=ctx.stride[0],
-            padW=ctx.padding[1],
-            padH=ctx.padding[0],
-            dilationW=ctx.dilation[1],
-            dilationH=ctx.dilation[0],
-            group=ctx.groups,
-            deformable_group=ctx.deform_groups,
-            im2col_step=cur_im2col_step)
-        return output
-
-    @staticmethod
-    @once_differentiable
-    def backward(ctx, grad_output):
-        input, offset, weight = ctx.saved_tensors
-
-        grad_input = grad_offset = grad_weight = None
-
-        cur_im2col_step = min(ctx.im2col_step, input.size(0))
-        assert (input.size(0) % cur_im2col_step
-                ) == 0, 'batch size must be divisible by im2col_step'
-
-        grad_output = grad_output.contiguous()
-        if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]:
-            grad_input = torch.zeros_like(input)
-            grad_offset = torch.zeros_like(offset)
-            ext_module.deform_conv_backward_input(
-                input,
-                offset,
-                grad_output,
-                grad_input,
-                grad_offset,
-                weight,
-                ctx.bufs_[0],
-                kW=weight.size(3),
-                kH=weight.size(2),
-                dW=ctx.stride[1],
-                dH=ctx.stride[0],
-                padW=ctx.padding[1],
-                padH=ctx.padding[0],
-                dilationW=ctx.dilation[1],
-                dilationH=ctx.dilation[0],
-                group=ctx.groups,
-                deformable_group=ctx.deform_groups,
-                im2col_step=cur_im2col_step)
-
-        if ctx.needs_input_grad[2]:
-            grad_weight = torch.zeros_like(weight)
-            ext_module.deform_conv_backward_parameters(
-                input,
-                offset,
-                grad_output,
-                grad_weight,
-                ctx.bufs_[0],
-                ctx.bufs_[1],
-                kW=weight.size(3),
-                kH=weight.size(2),
-                dW=ctx.stride[1],
-                dH=ctx.stride[0],
-                padW=ctx.padding[1],
-                padH=ctx.padding[0],
-                dilationW=ctx.dilation[1],
-                dilationH=ctx.dilation[0],
-                group=ctx.groups,
-                deformable_group=ctx.deform_groups,
-                scale=1,
-                im2col_step=cur_im2col_step)
-
-        return grad_input, grad_offset, grad_weight, \
-            None, None, None, None, None, None, None
-
-    @staticmethod
-    def _output_size(ctx, input, weight):
-        channels = weight.size(0)
-        output_size = (input.size(0), channels)
-        for d in range(input.dim() - 2):
-            in_size = input.size(d + 2)
-            pad = ctx.padding[d]
-            kernel = ctx.dilation[d] * (weight.size(d + 2) - 1) + 1
-            stride_ = ctx.stride[d]
-            output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, )
-        if not all(map(lambda s: s > 0, output_size)):
-            raise ValueError(
-                'convolution input is too small (output would be ' +
-                'x'.join(map(str, output_size)) + ')')
-        return output_size
-
-
-deform_conv2d = DeformConv2dFunction.apply
-
-
-class DeformConv2d(nn.Module):
-    r"""Deformable 2D convolution.
-
-    Applies a deformable 2D convolution over an input signal composed of
-    several input planes. DeformConv2d was described in the paper
-    `Deformable Convolutional Networks
-    <https://arxiv.org/pdf/1703.06211.pdf>`_
-
-    Note:
-        The argument ``im2col_step`` was added in version 1.3.17, which means
-        number of samples processed by the ``im2col_cuda_kernel`` per call.
-        It enables users to define ``batch_size`` and ``im2col_step`` more
-        flexibly and solved `issue mmcv#1440
-        <https://github.com/open-mmlab/mmcv/issues/1440>`_.
-
-    Args:
-        in_channels (int): Number of channels in the input image.
-        out_channels (int): Number of channels produced by the convolution.
-        kernel_size(int, tuple): Size of the convolving kernel.
-        stride(int, tuple): Stride of the convolution. Default: 1.
-        padding (int or tuple): Zero-padding added to both sides of the input.
-            Default: 0.
-        dilation (int or tuple): Spacing between kernel elements. Default: 1.
-        groups (int): Number of blocked connections from input.
-            channels to output channels. Default: 1.
-        deform_groups (int): Number of deformable group partitions.
-        bias (bool): If True, adds a learnable bias to the output.
-            Default: False.
-        im2col_step (int): Number of samples processed by im2col_cuda_kernel
-            per call. It will work when ``batch_size`` > ``im2col_step``, but
-            ``batch_size`` must be divisible by ``im2col_step``. Default: 32.
-            `New in version 1.3.17.`
-    """
-
-    @deprecated_api_warning({'deformable_groups': 'deform_groups'},
-                            cls_name='DeformConv2d')
-    def __init__(self,
-                 in_channels: int,
-                 out_channels: int,
-                 kernel_size: Union[int, Tuple[int, ...]],
-                 stride: Union[int, Tuple[int, ...]] = 1,
-                 padding: Union[int, Tuple[int, ...]] = 0,
-                 dilation: Union[int, Tuple[int, ...]] = 1,
-                 groups: int = 1,
-                 deform_groups: int = 1,
-                 bias: bool = False,
-                 im2col_step: int = 32) -> None:
-        super(DeformConv2d, self).__init__()
-
-        assert not bias, \
-            f'bias={bias} is not supported in DeformConv2d.'
-        assert in_channels % groups == 0, \
-            f'in_channels {in_channels} cannot be divisible by groups {groups}'
-        assert out_channels % groups == 0, \
-            f'out_channels {out_channels} cannot be divisible by groups \
-            {groups}'
-
-        self.in_channels = in_channels
-        self.out_channels = out_channels
-        self.kernel_size = _pair(kernel_size)
-        self.stride = _pair(stride)
-        self.padding = _pair(padding)
-        self.dilation = _pair(dilation)
-        self.groups = groups
-        self.deform_groups = deform_groups
-        self.im2col_step = im2col_step
|
259 |
-
self.transposed = False
|
260 |
-
self.output_padding = _single(0)
|
261 |
-
|
262 |
-
# only weight, no bias
|
263 |
-
self.weight = nn.Parameter(
|
264 |
-
torch.Tensor(out_channels, in_channels // self.groups,
|
265 |
-
*self.kernel_size))
|
266 |
-
|
267 |
-
self.reset_parameters()
|
268 |
-
|
269 |
-
def reset_parameters(self):
|
270 |
-
# switch the initialization of `self.weight` to the standard kaiming
|
271 |
-
# method described in `Delving deep into rectifiers: Surpassing
|
272 |
-
# human-level performance on ImageNet classification` - He, K. et al.
|
273 |
-
# (2015), using a uniform distribution
|
274 |
-
nn.init.kaiming_uniform_(self.weight, nonlinearity='relu')
|
275 |
-
|
276 |
-
def forward(self, x: Tensor, offset: Tensor) -> Tensor:
|
277 |
-
"""Deformable Convolutional forward function.
|
278 |
-
|
279 |
-
Args:
|
280 |
-
x (Tensor): Input feature, shape (B, C_in, H_in, W_in)
|
281 |
-
offset (Tensor): Offset for deformable convolution, shape
|
282 |
-
(B, deform_groups*kernel_size[0]*kernel_size[1]*2,
|
283 |
-
H_out, W_out), H_out, W_out are equal to the output's.
|
284 |
-
|
285 |
-
An offset is like `[y0, x0, y1, x1, y2, x2, ..., y8, x8]`.
|
286 |
-
The spatial arrangement is like:
|
287 |
-
|
288 |
-
.. code:: text
|
289 |
-
|
290 |
-
(x0, y0) (x1, y1) (x2, y2)
|
291 |
-
(x3, y3) (x4, y4) (x5, y5)
|
292 |
-
(x6, y6) (x7, y7) (x8, y8)
|
293 |
-
|
294 |
-
Returns:
|
295 |
-
Tensor: Output of the layer.
|
296 |
-
"""
|
297 |
-
# To fix an assert error in deform_conv_cuda.cpp:128
|
298 |
-
# input image is smaller than kernel
|
299 |
-
input_pad = (x.size(2) < self.kernel_size[0]) or (x.size(3) <
|
300 |
-
self.kernel_size[1])
|
301 |
-
if input_pad:
|
302 |
-
pad_h = max(self.kernel_size[0] - x.size(2), 0)
|
303 |
-
pad_w = max(self.kernel_size[1] - x.size(3), 0)
|
304 |
-
x = F.pad(x, (0, pad_w, 0, pad_h), 'constant', 0).contiguous()
|
305 |
-
offset = F.pad(offset, (0, pad_w, 0, pad_h), 'constant', 0)
|
306 |
-
offset = offset.contiguous()
|
307 |
-
out = deform_conv2d(x, offset, self.weight, self.stride, self.padding,
|
308 |
-
self.dilation, self.groups, self.deform_groups,
|
309 |
-
False, self.im2col_step)
|
310 |
-
if input_pad:
|
311 |
-
out = out[:, :, :out.size(2) - pad_h, :out.size(3) -
|
312 |
-
pad_w].contiguous()
|
313 |
-
return out
|
314 |
-
|
315 |
-
def __repr__(self):
|
316 |
-
s = self.__class__.__name__
|
317 |
-
s += f'(in_channels={self.in_channels},\n'
|
318 |
-
s += f'out_channels={self.out_channels},\n'
|
319 |
-
s += f'kernel_size={self.kernel_size},\n'
|
320 |
-
s += f'stride={self.stride},\n'
|
321 |
-
s += f'padding={self.padding},\n'
|
322 |
-
s += f'dilation={self.dilation},\n'
|
323 |
-
s += f'groups={self.groups},\n'
|
324 |
-
s += f'deform_groups={self.deform_groups},\n'
|
325 |
-
# bias is not supported in DeformConv2d.
|
326 |
-
s += 'bias=False)'
|
327 |
-
return s
|
328 |
-
|
329 |
-
|
330 |
-
@CONV_LAYERS.register_module('DCN')
|
331 |
-
class DeformConv2dPack(DeformConv2d):
|
332 |
-
"""A Deformable Conv Encapsulation that acts as normal Conv layers.
|
333 |
-
|
334 |
-
The offset tensor is like `[y0, x0, y1, x1, y2, x2, ..., y8, x8]`.
|
335 |
-
The spatial arrangement is like:
|
336 |
-
|
337 |
-
.. code:: text
|
338 |
-
|
339 |
-
(x0, y0) (x1, y1) (x2, y2)
|
340 |
-
(x3, y3) (x4, y4) (x5, y5)
|
341 |
-
(x6, y6) (x7, y7) (x8, y8)
|
342 |
-
|
343 |
-
Args:
|
344 |
-
in_channels (int): Same as nn.Conv2d.
|
345 |
-
out_channels (int): Same as nn.Conv2d.
|
346 |
-
kernel_size (int or tuple[int]): Same as nn.Conv2d.
|
347 |
-
stride (int or tuple[int]): Same as nn.Conv2d.
|
348 |
-
padding (int or tuple[int]): Same as nn.Conv2d.
|
349 |
-
dilation (int or tuple[int]): Same as nn.Conv2d.
|
350 |
-
groups (int): Same as nn.Conv2d.
|
351 |
-
bias (bool or str): If specified as `auto`, it will be decided by the
|
352 |
-
norm_cfg. Bias will be set as True if norm_cfg is None, otherwise
|
353 |
-
False.
|
354 |
-
"""
|
355 |
-
|
356 |
-
_version = 2
|
357 |
-
|
358 |
-
def __init__(self, *args, **kwargs):
|
359 |
-
super(DeformConv2dPack, self).__init__(*args, **kwargs)
|
360 |
-
self.conv_offset = nn.Conv2d(
|
361 |
-
self.in_channels,
|
362 |
-
self.deform_groups * 2 * self.kernel_size[0] * self.kernel_size[1],
|
363 |
-
kernel_size=self.kernel_size,
|
364 |
-
stride=_pair(self.stride),
|
365 |
-
padding=_pair(self.padding),
|
366 |
-
dilation=_pair(self.dilation),
|
367 |
-
bias=True)
|
368 |
-
self.init_offset()
|
369 |
-
|
370 |
-
def init_offset(self):
|
371 |
-
self.conv_offset.weight.data.zero_()
|
372 |
-
self.conv_offset.bias.data.zero_()
|
373 |
-
|
374 |
-
def forward(self, x):
|
375 |
-
offset = self.conv_offset(x)
|
376 |
-
return deform_conv2d(x, offset, self.weight, self.stride, self.padding,
|
377 |
-
self.dilation, self.groups, self.deform_groups,
|
378 |
-
False, self.im2col_step)
|
379 |
-
|
380 |
-
def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
|
381 |
-
missing_keys, unexpected_keys, error_msgs):
|
382 |
-
version = local_metadata.get('version', None)
|
383 |
-
|
384 |
-
if version is None or version < 2:
|
385 |
-
# the key is different in early versions
|
386 |
-
# In version < 2, DeformConvPack loads previous benchmark models.
|
387 |
-
if (prefix + 'conv_offset.weight' not in state_dict
|
388 |
-
and prefix[:-1] + '_offset.weight' in state_dict):
|
389 |
-
state_dict[prefix + 'conv_offset.weight'] = state_dict.pop(
|
390 |
-
prefix[:-1] + '_offset.weight')
|
391 |
-
if (prefix + 'conv_offset.bias' not in state_dict
|
392 |
-
and prefix[:-1] + '_offset.bias' in state_dict):
|
393 |
-
state_dict[prefix +
|
394 |
-
'conv_offset.bias'] = state_dict.pop(prefix[:-1] +
|
395 |
-
'_offset.bias')
|
396 |
-
|
397 |
-
if version is not None and version > 1:
|
398 |
-
print_log(
|
399 |
-
f'DeformConv2dPack {prefix.rstrip(".")} is upgraded to '
|
400 |
-
'version 2.',
|
401 |
-
logger='root')
|
402 |
-
|
403 |
-
super()._load_from_state_dict(state_dict, prefix, local_metadata,
|
404 |
-
strict, missing_keys, unexpected_keys,
|
405 |
-
error_msgs)
|
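For orientation, a minimal usage sketch (not part of the deleted file above): DeformConv2dPack predicts its own offsets with conv_offset, so it slots in where an nn.Conv2d would go. This assumes a full mmcv build whose compiled deform-conv ops are available on the current device.

# Hypothetical usage sketch for the deleted module's packed variant.
import torch
from mmcv.ops import DeformConv2dPack  # assumes mmcv with compiled ops

layer = DeformConv2dPack(
    in_channels=16, out_channels=32, kernel_size=3, padding=1,
    deform_groups=1)                # offsets start at zero (see init_offset)
x = torch.randn(2, 16, 64, 64)      # (B, C_in, H, W)
out = layer(x)                      # conv_offset supplies the offset field
print(out.shape)                    # torch.Size([2, 32, 64, 64])

With kernel_size=3 the offset tensor carries 2 * 3 * 3 = 18 channels per deformable group, matching the (x, y) pairs laid out in the docstring above.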
spaces/Anonymous-sub/Rerender/ControlNet/annotator/util.py
DELETED
@@ -1,38 +0,0 @@
import numpy as np
import cv2
import os


annotator_ckpts_path = os.path.join(os.path.dirname(__file__), 'ckpts')


def HWC3(x):
    assert x.dtype == np.uint8
    if x.ndim == 2:
        x = x[:, :, None]
    assert x.ndim == 3
    H, W, C = x.shape
    assert C == 1 or C == 3 or C == 4
    if C == 3:
        return x
    if C == 1:
        return np.concatenate([x, x, x], axis=2)
    if C == 4:
        color = x[:, :, 0:3].astype(np.float32)
        alpha = x[:, :, 3:4].astype(np.float32) / 255.0
        y = color * alpha + 255.0 * (1.0 - alpha)
        y = y.clip(0, 255).astype(np.uint8)
        return y


def resize_image(input_image, resolution):
    H, W, C = input_image.shape
    H = float(H)
    W = float(W)
    k = float(resolution) / min(H, W)
    H *= k
    W *= k
    H = int(np.round(H / 64.0)) * 64
    W = int(np.round(W / 64.0)) * 64
    img = cv2.resize(input_image, (W, H), interpolation=cv2.INTER_LANCZOS4 if k > 1 else cv2.INTER_AREA)
    return img
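A short sketch of how these two helpers compose in the annotator pipeline (not part of the deleted file; it assumes an image file on disk and the functions defined above). The rounding to multiples of 64 presumably keeps both sides compatible with the diffusion UNet's downsampling.

# Hypothetical usage sketch: normalize any uint8 image to HxWx3, then snap
# its short side near the requested resolution on a 64-pixel grid.
import cv2

img = cv2.imread('input.png', cv2.IMREAD_UNCHANGED)  # gray, BGR, or BGRA
img = HWC3(img)                # always uint8 with exactly 3 channels
img = resize_image(img, 512)   # short side ~512, both dims multiples of 64
print(img.shape)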
spaces/Apex-X/nono/roop/processors/frame/core.py
DELETED
@@ -1,91 +0,0 @@
import os
import sys
import importlib
import psutil
from concurrent.futures import ThreadPoolExecutor, as_completed
from queue import Queue
from types import ModuleType
from typing import Any, List, Callable
from tqdm import tqdm

import roop

FRAME_PROCESSORS_MODULES: List[ModuleType] = []
FRAME_PROCESSORS_INTERFACE = [
    'pre_check',
    'pre_start',
    'process_frame',
    'process_frames',
    'process_image',
    'process_video',
    'post_process'
]


def load_frame_processor_module(frame_processor: str) -> Any:
    try:
        frame_processor_module = importlib.import_module(f'roop.processors.frame.{frame_processor}')
        for method_name in FRAME_PROCESSORS_INTERFACE:
            if not hasattr(frame_processor_module, method_name):
                raise NotImplementedError
    except ModuleNotFoundError:
        sys.exit(f'Frame processor {frame_processor} not found.')
    except NotImplementedError:
        sys.exit(f'Frame processor {frame_processor} not implemented correctly.')
    return frame_processor_module


def get_frame_processors_modules(frame_processors: List[str]) -> List[ModuleType]:
    global FRAME_PROCESSORS_MODULES

    if not FRAME_PROCESSORS_MODULES:
        for frame_processor in frame_processors:
            frame_processor_module = load_frame_processor_module(frame_processor)
            FRAME_PROCESSORS_MODULES.append(frame_processor_module)
    return FRAME_PROCESSORS_MODULES


def multi_process_frame(source_path: str, temp_frame_paths: List[str], process_frames: Callable[[str, List[str], Any], None], update: Callable[[], None]) -> None:
    with ThreadPoolExecutor(max_workers=roop.globals.execution_threads) as executor:
        futures = []
        queue = create_queue(temp_frame_paths)
        queue_per_future = max(len(temp_frame_paths) // roop.globals.execution_threads, 1)
        while not queue.empty():
            future = executor.submit(process_frames, source_path, pick_queue(queue, queue_per_future), update)
            futures.append(future)
        for future in as_completed(futures):
            future.result()


def create_queue(temp_frame_paths: List[str]) -> Queue[str]:
    queue: Queue[str] = Queue()
    for frame_path in temp_frame_paths:
        queue.put(frame_path)
    return queue


def pick_queue(queue: Queue[str], queue_per_future: int) -> List[str]:
    queues = []
    for _ in range(queue_per_future):
        if not queue.empty():
            queues.append(queue.get())
    return queues


def process_video(source_path: str, frame_paths: list[str], process_frames: Callable[[str, List[str], Any], None]) -> None:
    progress_bar_format = '{l_bar}{bar}| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}]'
    total = len(frame_paths)
    with tqdm(total=total, desc='Processing', unit='frame', dynamic_ncols=True, bar_format=progress_bar_format) as progress:
        multi_process_frame(source_path, frame_paths, process_frames, lambda: update_progress(progress))


def update_progress(progress: Any = None) -> None:
    process = psutil.Process(os.getpid())
    memory_usage = process.memory_info().rss / 1024 / 1024 / 1024
    progress.set_postfix({
        'memory_usage': '{:.2f}'.format(memory_usage).zfill(5) + 'GB',
        'execution_providers': roop.globals.execution_providers,
        'execution_threads': roop.globals.execution_threads
    })
    progress.refresh()
    progress.update(1)
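The FRAME_PROCESSORS_INTERFACE list above implies the shape of a processor plugin; a hypothetical skeleton that load_frame_processor_module would accept is sketched below. Note the loader only checks the seven names with hasattr, so the signatures here are illustrative guesses, not roop's actual ones.

# Hypothetical skeleton for roop/processors/frame/example.py.
from typing import Any, List

def pre_check() -> bool:      # e.g. verify model weights are downloaded
    return True

def pre_start() -> bool:      # e.g. validate CLI arguments
    return True

def process_frame(source: Any, frame: Any) -> Any:
    return frame              # identity processor: pass the frame through

def process_frames(source_path: str, frame_paths: List[str], update: Any) -> None:
    for _ in frame_paths:
        update()              # advance the shared progress bar per frame

def process_image(source_path: str, image_path: str, output_path: str) -> None:
    pass

def process_video(source_path: str, frame_paths: List[str]) -> None:
    pass

def post_process() -> None:
    pass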
spaces/Arvi/feedback_generator/app.py
DELETED
@@ -1,407 +0,0 @@
# -*- coding: utf-8 -*-
"""Untitled19.ipynb

Automatically generated by Colaboratory.

Original file is located at
    https://colab.research.google.com/drive/123iPxfG1KBLCe4t3m41RIziyYLSOxg30
"""

import gradio as gr
import pandas as pd
import numpy as np

df=pd.read_csv(r'final_processed.csv')

def assign_weights(Name,col1,col2,col3,col4,col5,col6,col7,col8,col9,col10,col11,col12,col13,col14,col15):
    import gradio as gr
    import pandas as pd
    import numpy as np
    df=pd.read_csv(r'final_processed.csv')
    df.drop(['Unnamed: 0'], axis=1,inplace=True)
    from sklearn import preprocessing
    label_encoder = preprocessing.LabelEncoder()

    y={'academic time':col2,'task dedication':col3,'physical activity':col4,'favourite sport':col5,'family time':col6,'poor sleep':col7,'fitness':col8,
       'loss of concentration':col9,'eating habits':col10,'free time':col11,'motivation':col12,'social media':col13,'social media on academics':col14,'performance':col15}
    df=df.append(y,ignore_index=True)

    df['academic time']= label_encoder.fit_transform(df['academic time'])
    df['task dedication']= label_encoder.fit_transform(df['task dedication'])
    df['physical activity']= label_encoder.fit_transform(df['physical activity'])
    df['favorite sport']= label_encoder.fit_transform(df['favorite sport'])
    df['family time']= label_encoder.fit_transform(df['family time'])
    df['poor sleep']= label_encoder.fit_transform(df['poor sleep'])
    df['fitness']= label_encoder.fit_transform(df['fitness'])
    df['loss of concentration']= label_encoder.fit_transform(df['loss of concentration'])
    df['eating habits']= label_encoder.fit_transform(df['eating habits'])
    df['free time']= label_encoder.fit_transform(df['free time'])
    df['motivation']= label_encoder.fit_transform(df['motivation'])
    df['social media']= label_encoder.fit_transform(df['social media'])
    df['socail media on academics']= label_encoder.fit_transform(df['socail media on academics'])
    df['performance']= label_encoder.fit_transform(df['performance'])

    df.loc[df['academic time'] == 4, 'weight_academic'] =0.45
    df.loc[df['academic time'] == 1, 'weight_academic'] =0.15
    df.loc[df['academic time'] == 0, 'weight_academic'] =0.05
    df.loc[df['academic time'] == 2, 'weight_academic'] =0.35
    df.loc[df['academic time'] == 3, 'weight_academic'] =0.00

    df.loc[df['task dedication'] == 0, 'weight_task'] =0.00
    df.loc[df['task dedication'] == 1, 'weight_task'] =0.05
    df.loc[df['task dedication'] == 2, 'weight_task'] =0.20
    df.loc[df['task dedication'] == 3, 'weight_task'] =0.25
    df.loc[df['task dedication'] == 4, 'weight_task'] =0.50

    df.loc[df['physical activity'] == 0, 'weight_physic'] =0.00
    df.loc[df['physical activity'] == 1, 'weight_physic'] =1.00

    df.loc[df['favorite sport'] == 0, 'weight_play'] =0.20
    df.loc[df['favorite sport'] == 1, 'weight_play'] =0.20
    df.loc[df['favorite sport'] == 2, 'weight_play'] =0.20
    df.loc[df['favorite sport'] == 3, 'weight_play'] =0.20
    df.loc[df['favorite sport'] == 4, 'weight_play'] =0.00
    df.loc[df['favorite sport'] == 5, 'weight_play'] =0.20

    df.loc[df['family time'] == 3, 'weight_familytime'] =0.40
    df.loc[df['family time'] == 2, 'weight_familytime'] =0.10
    df.loc[df['family time'] == 1, 'weight_familytime'] =0.00
    df.loc[df['family time'] == 0, 'weight_familytime'] =0.40
    df.loc[df['family time'] == 4, 'weight_familytime'] =0.10

    df.loc[df['poor sleep'] == 4, 'weight_sleep'] =0.00
    df.loc[df['poor sleep'] == 3, 'weight_sleep'] =0.05
    df.loc[df['poor sleep'] == 0, 'weight_sleep'] =0.00
    df.loc[df['poor sleep'] == 2, 'weight_sleep'] =0.40
    df.loc[df['poor sleep'] == 1, 'weight_sleep'] =0.55

    df.loc[df['loss of concentration'] == 4, 'weight_conc'] =0.20
    df.loc[df['loss of concentration'] == 0, 'weight_conc'] =0.05
    df.loc[df['loss of concentration'] == 1, 'weight_conc'] =0.00
    df.loc[df['loss of concentration'] == 3, 'weight_conc'] =0.75
    df.loc[df['loss of concentration'] == 2, 'weight_conc'] =0.05

    df.loc[df['eating habits'] == 4, 'weight_eating'] =0.20
    df.loc[df['eating habits'] == 0, 'weight_eating'] =0.05
    df.loc[df['eating habits'] == 1, 'weight_eating'] =0.00
    df.loc[df['eating habits'] == 3, 'weight_eating'] =0.75
    df.loc[df['eating habits'] == 2, 'weight_eating'] =0.05

    df.loc[df['fitness'] == 2, 'weight_fit'] =0.60
    df.loc[df['fitness'] == 0, 'weight_fit'] =0.10
    df.loc[df['fitness'] == 1, 'weight_fit'] =0.30
    df.loc[df['fitness'] == 3, 'weight_fit'] =0.00

    df.loc[df['free time'] == 3, 'weight_time'] =0.50
    df.loc[df['free time'] == 2, 'weight_time'] =0.10
    df.loc[df['free time'] == 1, 'weight_time'] =0.20
    df.loc[df['free time'] == 0, 'weight_time'] =0.20

    df.loc[df['motivation'] == 3, 'weight_motivation'] =0.30
    df.loc[df['motivation'] == 2, 'weight_motivation'] =0.25
    df.loc[df['motivation'] == 1, 'weight_motivation'] =0.25
    df.loc[df['motivation'] == 0, 'weight_motivation'] =0.20

    df.loc[df['social media'] == 3, 'weight_media'] =0.00
    df.loc[df['social media'] == 2, 'weight_media'] =0.65
    df.loc[df['social media'] == 1, 'weight_media'] =0.10
    df.loc[df['social media'] == 0, 'weight_media'] =0.25

    df.loc[df['socail media on academics'] == 0, 'weight_media_academics'] =0.00
    df.loc[df['socail media on academics'] == 1, 'weight_media_academics'] =1.00

    df.loc[df['performance'] == 4, 'weight_performance']=0.55
    df.loc[df['performance'] == 3, 'weight_performance']=0.00
    df.loc[df['performance'] == 2, 'weight_performance']=0.30
    df.loc[df['performance'] == 1, 'weight_performance']=0.10
    df.loc[df['performance'] == 0, 'weight_performance']=0.05

    df['total']=df.iloc[:,14:].sum(axis=1)

    df.loc[(df['weight_academic']<0.35) | (df['weight_task']<0.25),'academic value']=0
    df.loc[(df['weight_academic']>=0.35) & (df['weight_task']>=0.25),'academic value']=1
    df.inplace=1

    df.loc[(df['weight_academic']<0.35) | (df['weight_time']<0.20),'time value']=0
    df.loc[(df['weight_academic']>=0.35) & (df['weight_time']>=0.20),'time value']=1
    df.inplace=1

    df.loc[((df['weight_academic']<=0.35) & (df['weight_conc']>=0.20)) | ((df['weight_academic']>=0.35) & (df['weight_conc']>=0.20)),'productive value']=1
    df.loc[((df['weight_academic']>=0.35) & (df['weight_conc']<0.20)) | ((df['weight_academic']<0.35) & (df['weight_conc']<0.20)),'productive value']=0
    df.inplace=1

    df.loc[(df['weight_physic']==1) & (df['weight_play']==0.2) & (df['weight_fit']>=0.3) & (df['weight_eating']>=0.20),'fitness_value']=1
    df.loc[(df['weight_physic']!=1) | (df['weight_play']!=0.2) | (df['weight_fit']<0.3) | (df['weight_eating']<0.20),'fitness_value']=0
    df.inplace=1

    df.loc[(df['weight_sleep']>=0.40) & (df['weight_conc']>=0.20) ,'sleep value']=1
    df.loc[(df['weight_sleep']<0.40) | (df['weight_conc']<0.20),'sleep value']=0
    df.inplace=1

    df.loc[(df['weight_familytime']==0.40) & (df['weight_motivation']==0.25) ,'motivation value']=1
    df.loc[(df['weight_familytime']!=0.40) | (df['weight_motivation']!=0.25),'motivation value']=0
    df.inplace=1

    df.loc[(df['weight_performance']>=0.30) ,'performance_value']=1
    df.loc[(df['weight_performance']<0.30),'performance_value']=0
    df.inplace=1

    df.loc[(df['weight_media']>=0.25) & (df['weight_media_academics']==0.00) ,'media_value']=1
    df.loc[(df['weight_media']<0.25) | (df['weight_media_academics']!=0.00),'media_value']=0
    df.inplace=1

    df.loc[df['total']>=4.0,'overall']=1
    df.loc[df['total']<4.0,'overall']=0
    df.inplace=1

    X = df[['academic time',
            'task dedication',
            'physical activity',
            'favorite sport',
            'family time',
            'poor sleep',
            'fitness',
            'loss of concentration',
            'eating habits',
            'free time',
            'motivation',
            'social media',
            'socail media on academics',
            'performance',
            'weight_academic',
            'weight_task',
            'weight_physic',
            'weight_play',
            'weight_familytime',
            'weight_sleep',
            'weight_conc',
            'weight_eating',
            'weight_fit',
            'weight_time',
            'weight_motivation',
            'weight_media',
            'weight_media_academics',
            'weight_performance',
            'total'
            ]]
    y1 = df['academic value']
    y2=df['time value']
    y3=df['productive value']
    y4=df['fitness_value']
    y5=df['sleep value']
    y6=df['motivation value']
    y7=df['performance_value']
    y8=df['media_value']
    y9=df['overall']
    from sklearn.model_selection import train_test_split
    X_train,X_test,y1_train,y1_test = train_test_split(X,y1,test_size=0.3,random_state = 0,shuffle = True)
    X_train,X_test,y2_train,y2_test = train_test_split(X,y2,test_size=0.3,random_state = 0,shuffle = True)
    X_train,X_test,y3_train,y3_test = train_test_split(X,y3,test_size=0.3,random_state = 0,shuffle = True)
    X_train,X_test,y4_train,y4_test = train_test_split(X,y4,test_size=0.3,random_state = 0,shuffle = True)
    X_train,X_test,y5_train,y5_test = train_test_split(X,y5,test_size=0.3,random_state = 0,shuffle = True)
    X_train,X_test,y6_train,y6_test = train_test_split(X,y6,test_size=0.3,random_state = 0,shuffle = True)
    X_train,X_test,y7_train,y7_test = train_test_split(X,y7,test_size=0.3,random_state = 0,shuffle = True)
    X_train,X_test,y8_train,y8_test = train_test_split(X,y8,test_size=0.3,random_state = 0,shuffle = True)
    X_train,X_test,y9_train,y9_test = train_test_split(X,y9,test_size=0.3,random_state = 0,shuffle = True)
    from sklearn.ensemble import RandomForestClassifier as rfc
    import xgboost as xgb
    rfc1 = xgb.XGBClassifier(objective ='reg:linear', colsample_bytree = 0.3, learning_rate = 0.1,
                    max_depth = 5, alpha = 10, n_estimators = 10)
    rfc2 = xgb.XGBClassifier(objective ='reg:linear', colsample_bytree = 0.3, learning_rate = 0.1,
                    max_depth = 5, alpha = 10, n_estimators = 10)
    rfc3 = xgb.XGBClassifier(objective ='reg:linear', colsample_bytree = 0.3, learning_rate = 0.1,
                    max_depth = 5, alpha = 10, n_estimators = 10)
    rfc4 = xgb.XGBClassifier(objective ='reg:linear', colsample_bytree = 0.3, learning_rate = 0.1,
                    max_depth = 5, alpha = 10, n_estimators = 10)
    rfc5 = xgb.XGBClassifier(objective ='reg:linear', colsample_bytree = 0.3, learning_rate = 0.1,
                    max_depth = 5, alpha = 10, n_estimators = 10)
    rfc6 = xgb.XGBClassifier(objective ='reg:linear', colsample_bytree = 0.3, learning_rate = 0.1,
                    max_depth = 5, alpha = 10, n_estimators = 10)
    rfc7 = xgb.XGBClassifier(objective ='reg:linear', colsample_bytree = 0.3, learning_rate = 0.1,
                    max_depth = 5, alpha = 10, n_estimators = 10)
    rfc8 = xgb.XGBClassifier(objective ='reg:linear', colsample_bytree = 0.3, learning_rate = 0.1,
                    max_depth = 5, alpha = 10, n_estimators = 10)
    rfc9 = xgb.XGBClassifier(objective ='reg:linear', colsample_bytree = 0.3, learning_rate = 0.1,
                    max_depth = 5, alpha = 10, n_estimators = 10)
    rfc1.fit(X_train,y1_train)
    rfc2.fit(X_train,y2_train)
    rfc3.fit(X_train,y3_train)
    rfc4.fit(X_train,y4_train)
    rfc5.fit(X_train,y5_train)
    rfc6.fit(X_train,y6_train)
    rfc7.fit(X_train,y7_train)
    rfc8.fit(X_train,y8_train)
    rfc9.fit(X_train,y9_train)
    import random

    z=df.tail(1)

    if z['academic value'].eq(1).all():
        a=['You are in the right track just try to stick on to your schedule','HARRRRRDDDD WORK always payys off you seem to be going in the right track',
           'The way is classiscal!! a tip for you is to listen to some classical music before studying ','You are driven by your own intrest keep riding',
           'Your study time is great ,now its to take a short break',
           'WOWWW you are just a just synonym of hard work and dedication ' ]
        res1="feedback on youe study schedule --> " +random.choice(a)
    if z['academic value'].eq(0).all():
        b=['If you know your “WHY”, finding your “HOW" will not be difficult.you just need to start working','Focusing is about saying no.just learn to say no to things which distracts you .u just need to put a little more focus on your studytime',
           'Be the early bird that gets the first worm.set your body clock and start working','listen to directions,follow through assignments,learn for yourself.you just need to enjoy the process',
           'measure for progress not the time you are working ,try to put in more studytime','postponment will postpone you,finish your daily tasks when you have the time',
           'you are just off track,there is still time and sure that you will reach great heights ','you surely have the talent its now in your hands to make wonders!!!! talent without hardwork?? what do you think ','enroll yourself to a personalized learning environament which gives you a controll and education experience ']
        res1="feedback on youe study schedule --> "+random.choice(b)

    if z['time value'].eq(1).all():
        c=['there is a saying give me 6 hours to chop a tree and i will spend the 1st hr sharpening the axe, the fact here is you have sharpenend your axe','your timimg is great you are managing time well'
           'its seems you hsve been studying long take a quick break and come back ','you are enjoying your time keep putting the same efforts you put','keep managing the time like the way you are doing now,this attribute will take care of the rest'
           ,'you seem to stay organized and on track with your procative planning and systematic scheduling ']
        res2="Feedback on how you manage time --> "+random.choice(c)
    if z['time value'].eq(0).all():
        d=['you have to start spending time on academics and show some interest in succeeding,you are the pilot who should stop time from flying and bring it on your control','start working and stick to a time table and set your body clock','try to be more organized and start spending quality time towards studies'
           'start learning to manage time and priortize on your academics','spend more time on your weak areas ,try to strech out for long hours','the biggest obstracle stopping you from winning is time management,prepare a timetable and stick to it',
           'play while you play and work while you work dont try to mix up things','dont try to procastinate finish your day to day jobs when and where you get time']
        res2="Feedback on how you manage time --> "+random.choice(d)

    if z['productive value'].eq(1).all():
        e=['you are smart,productive and have a good way of preparation in your studies','Be more proactive and try to participate in class,you are effiecient and can reach heights with your effectiveness','you have the ability to study things smartly and quickly,pick areas which are more brain-storming',
           'you have the ability to intepret things and your mind is sharp and you are a good listener','you are the master-mind,you are the person who shouldnt miss out in enrolling to IIts,NITs or whatever','you are productive person if u feel you are not delivering your 100% its not because because you arent studying,its something else']
        res3="Feedback on your productivity --> "+random.choice(e)
    if z['productive value'].eq(0).all():
        f=['Try to stick on to an approach which is convinient to you ,have a clear mind before you start working','start solving more,puzzles and a daily sudoko is a good start, you just need to be on your toes and tune your mind to solve various activities ','think!think!think analyse where you lack and start building strategies to improve yourself'
           'class participation its high time you start taking decisions and choose to be proactive','connect everything with what you are learining so that it will stick in your mind and helps you to recollect when and where you require','enjoy the process of learning dont be monotonous and a bookworm tame your mind to face your challenges','actively consult your instructor to enrich yourself with lot ways to improve your productivity',
           'rather than a brute-force approach try to think more of an optimal solution to a problem','gather a lot of resoruces and try to sit in your desk ,take mobile breaks(short one), an online chess game might be an eye opener for your next session ']
        res3="Feedback on your productivity --> "+random.choice(f)

    if z['fitness_value'].eq(1).all():
        g=['fitness is your key ,if your body is strong your mind is stronger. Maintaining a good fitness is really important for your health as well as it empowers your learining ',' I can see you have spent time in maintaing your body. Keep winning more golds ','you have choosen to step out of your comfort zone and by trying to put some gains,this will surely be a stepping stone in other important sectors','your fitness level is reasonably good indicating that you are sticking to a schedule kind of person which is really good',
           'you are in a good shape which is a key for self_confidence and gives you a lot of motivation','you are a sportive person ,this will really help you to socialize and gives you a lot of energy to start new things ','you are an open-minded person ,this is really the best character one could ask for,half the problems are over if one is listening and able to make good decisions ']
        res4="Feedback on your fitness --> "+random.choice(g)
    if z['fitness_value'].eq(0).all():
        h=['A weak body is a liability, you guys being the future generation should definetly be fit and healthy to lead the society at its best','your body should always get the first priority and should be taken care properly',
           'Any physical activity will make you disipline and gives you self confidence. Join your school team today ','out of all a hungry stomach isnt fit for a brisk study session ,being physically fit lets you do more activity even improve your academics ',
           'engage yourself in any physical activity for 20 mins as it can improve your concentration and helps your focus in learning ','out of your busy schedule try devoting just 15 mins get down do some pushups or squats or a brisk jog will do good ']
        res4="Feedback on your fitness --> "+random.choice(h)

    if z['sleep value'].eq(1).all():
        i=['Good that you have a proper sleep, just stick to it and try finishing all your work in the day time and get enough rest','Its pretty impressive that you are giving enough importance to your sleep, shows that you have good time management skills and a sweet dream','getting a good sleep even during your stressed timetables shows that you stay at the moment',
           'a good fitness routine followed by a good-sleep is a good sunday schedule and a good starter for a hectic next week which i hope you would have experienced many times','its good that you have a good sleep everynight this is big boost for a bright tomorrow']
        res5="Feedback on your sleep time --> "+random.choice(i)
    if z['sleep value'].eq(0).all():

        j=['The time we sleep is only when we rest our mind, eyes and the whole body which is really crucial for a stduent',' Try not using any devices an hour before you sleep, have a good sleep cycle for atleast 6 to 7 hrs a day','Get enough rest, dont stress your body too much.',
           'Prioritize your sleep, dont have caffinated drinks late in the evening and getting good sleep will make you feel fresh and enegrytic all day long ',
           'a 7 - hour refresh will set your body clock for the rest of your day so please ensure that you get adequate rest','if you are sleep deprieved make sure you exhaust all your energy during the day and make sure you get a pleasant and peaceful sleep',
           'tests prove that sleep deprivation is a result for low academic performance make sure you dont fall under that','Please ensure that the extra miles which you are putting doesnt affect your sleep']

        res5="Feedback on your sleep time --> "+random.choice(j)

    if z['motivation value'].eq(1).all():
        k=['you are fairly motivated ,Motivation drives everyone to work better to achive something,it lits a light inside you ','you should be really proud that you have good motivation at a really young age,use it in areas where you feel a bit off',
           'None of the greatest achievers couldnt have done it without motivation and self motivation is really powerfull tool to success ,you are one among them Keep going!',
           'a good level of motivation gives you high spirits and a good attitude,your attitude builds YOU']

        res6="motivation factor --> "+random.choice(k)
    if z['motivation value'].eq(0).all():

        l=['Nobody in the world is born with motivation,in this modern era you cant expect external motivation,you better be your own motivation','messi took eighteen years to be the G.O.A.T ignoring all demotivation and insults its finally your time',
           'change your scenery sitting in a desk all-day makes you dull ,to renew interest,a new setting can be just what some students need to stay motivated to learn',
           'lay-out clear objectives before you start learning so that there is no confussion','Make your goals high but attainable dont be afraid to push yourself to get more out of them ',
           'Spend some quality time with your family listen to their experiences and try to dollow their footsteps']

        res6="motivation factor --> "+random.choice(l)

    if z['performance_value'].eq(1).all():
        m=['Good job you!! Your hardwork and efforts paid off, you have nothing to worry about ,you are academically strong','To be honest that grades made me a little jealous. I can see the work you are putting towards academics',
           'Give a big hit on boards make your parents and teachers proud, trust me that is super satisfying','academic performance gives you a lot of boost to you take that put in all other aspects which will give you overall developement',
           'the most satisfying thing is scoring high its great that you are easily doing it','you are almost sorted out you now just have to take care of the bits and pieces']

        res7="Feedback on your performance --> "+random.choice(m)

    if z['performance_value'].eq(0).all():
        n=['Its never late to begin. Divide your work, note important things mentioned in class spend more time in studies','Dont be ashamed to ask doubts we dont mind others judging. So we start from physics today? jk',
           'Start studying with your friends, seek help from teachers,Remember the hardwork you put never fails you','analyse where you are making errors if you find that you are making mistakes while writing try practicing the sample papers it will help you to an extent'
           ,'you are almost there!!take short notes of the theoritical concepts so that it will be easy for reference','dont worry about where you are standing at the moment ,back yourself ,start it from scratch']

        res7="Feedback on your performance --> "+random.choice(n)

    if z['media_value'].eq(1).all():
        o=[' In the world of people being addicted to social media today, its happy to see someone like you','Its good that you are not scrolling too much','Having a good social profile is important and you having a limit is really impressive'
           ,'Having the self control on yourself is really great but ensure that dont overdo on anything else','you are self-conscious which is really a great character to acquire']

        res8="Feedback on your social media time --> "+random.choice(o)

    if z['media_value'].eq(0).all():
        p=['Its really common for this generation people to get addicted to social media. All you have to do is keep track of the time, dont over do stuffs and you dont have to post a story everyday.',
           'Nothing wrong becoming a social idle, but right now concentrate in your studies','socially active is essential but over - scrolling will trap you in the matrix which you are unaware of',
           'stay in your limits socially active for more than a hour during high school is ill advised','knowing that its impacting you and using social media again !! what is that??']

        res8="Feedback on your social media time --> "+random.choice(p)

    if z['overall'].eq(1).all():
        q=['OMG!! Im thinking of getting a piece of advise from you you are almost there good that you equally participate in everything','You are an explorer and can learn new things easily,you are about to win the race',
           'Your works are impressing everyone right from your teacher,friends and your parents, You are active,brisk and have good potential to improve your performance',
           'You are doing great ,you are ready for new challenges and failures doesnt bother you ','You are multi tasker and ensure that you dont sink with over-confidence','Dont put yourself in any kind of pressure, eventhough you feel stressed time will answer to it and you will pass with flying colours'
           'You are growing with confidence, take it to learn new things,choose your core and find your destiny']

        res9=random.choice(q)

    if z['overall'].eq(0).all():

        r=['Its all good everyone goes out of form,the comeback is always on start putting consistent efforts','Put in the time, hardwork and you can already see it coming,you are just a few steps dowm','When we hit out lowest point we are open to the greatest change you are going to bring the best out of it. And yes that was said by Avatar Roku'
           ,'Choose the right person whom you feel will take you through all the obstracles you need make things more clear','The best view comes after the hardest climb you can climb the moutain ahead of you','You just need to reboot and have a good set-up ,stay optimistic and everything will take care of itself if you take one step at a time',
           'You are nearing the pinacle of your true potential,just few changes hear and there you will be on your prime']

        res9=random.choice(r)

    return "hi " + str (Name) + " this is a predictive model these are some wild guesses so just take the points which you feel may work in your case \nalso if u feel the feeadbacks are harsh please flag your opinion \ntake your time to read this and hope u like it 😊\n\n\n"+ res1+" ,\n " + res2 +" ,\n " + res3 +" ,\n " + res4 +" ,\n " + res5 +" ,\n " + res6 +" ,\n " + res7 +" ,\n " + res8 +" ,\n\n\n " + res9

list(df.columns)

df.isna().sum()

demo = gr.Interface(
    fn=assign_weights,
    inputs=[
        "text",
        gr.Dropdown(['Science','Commerce'], label="Choose your stream"),
        gr.Radio(["<5", "5 - 12", "13 - 20", "20 - 30",">30"],label='On an average, how many hours a week do you spend on academics?'),
        gr.Radio(["0 - 20%", "20 - 40%", "40 - 60%", "60 - 80%","80 -100%"],label='How willing are you to work on a particular task ?'),
        gr.Radio(["Yes", "No", ],label='Do you take up any physical activity at regular intervals(at least 3 hours a week) ?'),
        gr.Radio(["Football", "Cricket", "Basketball", "Tennis" , "Chess" ,"Other","Not interested in sports"],label='Choose your favourite sport you follow or play'),
        gr.Radio(["Never", "Occasionally", "Sometimes", "Often" , "Always"],label='How often do you spend time with your friends and family?'),
        gr.Radio(["Always", "Very often", "Sometimes", "Rarely" ,"Never"],label='Has poor sleep troubled you in the last month?'),
        gr.Radio(["Perfect", "Good", "Average", "Poor"],label='What is your current level of fitness?'),
        gr.Radio(["Never", "Once in a while", "About half the time", "Most of the time","Always"],label='Do you feel kinda losing concentration during classes and other activities'),
        gr.Radio(["Never", "Once in a while", "About half the time", "Most of the time","Always"],label='is there a change in your eating habits(either under eating or overeating'),
        gr.Radio(["< 2", "2 - 5", "5 - 8", "> 8"],label='How many hours of free time do you have after school?'),
        gr.Radio(["Asking a lot of questions to the teacher", "Completing various assignments", "Sports and other extracurricular activities", "Other"],label='What motivates you to learn more?'),
        gr.Radio(["<30 mins", "30 - 60", "60 - 120", ">120 mins"],label='How long you spend your time on social media on a daily basis? '),
        gr.Radio(["Yes", "No"],label='Do you feel that spending time on social media has been a reason for the deterioration in your academic performance?'),
        gr.Radio(["<30%", "30% - 50%", "50% - 70%", "70% - 90%",">90%"],label='How much you score in your academics'),
    ],
    outputs=['text'],

    title="Performance predictor and feedback generator",
    description="Here's a sample performance calculator. Enjoy!"

)
demo.launch(inline=False)
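Two patterns above date this script: DataFrame.append was removed in pandas 2.0 (pd.concat is the replacement), and each run of df.loc assignments is a one-to-one lookup. A hypothetical refactor sketch of one such block with Series.map, reusing the 'academic time' weights from above:

# Hypothetical refactor sketch (not part of the deleted file): a dict plus
# Series.map replaces five separate df.loc assignments per feature.
import pandas as pd

academic_weights = {4: 0.45, 1: 0.15, 0: 0.05, 2: 0.35, 3: 0.00}
df = pd.DataFrame({'academic time': [0, 1, 2, 3, 4]})   # toy stand-in frame
df['weight_academic'] = df['academic time'].map(academic_weights)
print(df)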
spaces/Awiny/Image2Paragraph/models/grit_src/grit/data/datasets/object365.py
DELETED
@@ -1,111 +0,0 @@
import logging
import os
from fvcore.common.timer import Timer
from detectron2.structures import BoxMode
from fvcore.common.file_io import PathManager
from detectron2.data import DatasetCatalog, MetadataCatalog
from lvis import LVIS

logger = logging.getLogger(__name__)

__all__ = ["load_o365_json", "register_o365_instances"]


def register_o365_instances(name, metadata, json_file, image_root):
    DatasetCatalog.register(name, lambda: load_o365_json(
        json_file, image_root, name))
    MetadataCatalog.get(name).set(
        json_file=json_file, image_root=image_root,
        evaluator_type="lvis", **metadata
    )


def get_o365_meta():
    categories = [{'supercategory': 'object', 'id': 1, 'name': 'object'}]
    o365_categories = sorted(categories, key=lambda x: x["id"])
    thing_classes = [k["name"] for k in o365_categories]
    meta = {"thing_classes": thing_classes}
    return meta


def load_o365_json(json_file, image_root, dataset_name=None):
    '''
    Load Object365 class name text for object description for GRiT
    '''

    json_file = PathManager.get_local_path(json_file)

    timer = Timer()
    lvis_api = LVIS(json_file)
    if timer.seconds() > 1:
        logger.info("Loading {} takes {:.2f} seconds.".format(
            json_file, timer.seconds()))

    class_names = {}
    sort_cat = sorted(lvis_api.dataset['categories'], key=lambda x: x['id'])
    for x in sort_cat:
        if '/' in x['name']:
            text = ''
            for xx in x['name'].split('/'):
                text += xx
                text += ' '
            text = text[:-1]
        else:
            text = x['name']
        class_names[x['id']] = text

    img_ids = sorted(lvis_api.imgs.keys())
    imgs = lvis_api.load_imgs(img_ids)
    anns = [lvis_api.img_ann_map[img_id] for img_id in img_ids]

    ann_ids = [ann["id"] for anns_per_image in anns for ann in anns_per_image]
    assert len(set(ann_ids)) == len(ann_ids), \
        "Annotation ids in '{}' are not unique".format(json_file)

    imgs_anns = list(zip(imgs, anns))
    logger.info("Loaded {} images in the LVIS v1 format from {}".format(
        len(imgs_anns), json_file))

    dataset_dicts = []

    for (img_dict, anno_dict_list) in imgs_anns:
        record = {}
        if "file_name" in img_dict:
            file_name = img_dict["file_name"]
            record["file_name"] = os.path.join(image_root, file_name)

        record["height"] = int(img_dict["height"])
        record["width"] = int(img_dict["width"])
        image_id = record["image_id"] = img_dict["id"]

        objs = []
        for anno in anno_dict_list:
            assert anno["image_id"] == image_id
            if anno.get('iscrowd', 0) > 0:
                continue
            obj = {"bbox": anno["bbox"], "bbox_mode": BoxMode.XYWH_ABS}
            obj["category_id"] = 0
            obj["object_description"] = class_names[anno['category_id']]

            objs.append(obj)
        record["annotations"] = objs
        if len(record["annotations"]) == 0:
            continue
        record["task"] = "ObjectDet"
        dataset_dicts.append(record)

    return dataset_dicts


_CUSTOM_SPLITS_LVIS = {
    "object365_train": ("object365/images/train/", "object365/annotations/train_v1.json"),
}


for key, (image_root, json_file) in _CUSTOM_SPLITS_LVIS.items():
    register_o365_instances(
        key,
        get_o365_meta(),
        os.path.join("datasets", json_file) if "://" not in json_file else json_file,
        os.path.join("datasets", image_root),
    )
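Once imported, the module registers 'object365_train' as a side effect of the loop above; a minimal lookup sketch (not part of the deleted file; it assumes the datasets/object365 layout referenced above actually exists on disk):

# Hypothetical usage sketch: the detectron2 catalogs resolve the registered
# split by name; DatasetCatalog.get runs load_o365_json lazily.
from detectron2.data import DatasetCatalog, MetadataCatalog

dicts = DatasetCatalog.get("object365_train")
meta = MetadataCatalog.get("object365_train")
print(len(dicts), meta.thing_classes)   # single generic 'object' class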
spaces/Bart92/RVC_HF/infer/modules/ipex/attention.py
DELETED
@@ -1,128 +0,0 @@
import torch
import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import

# pylint: disable=protected-access, missing-function-docstring, line-too-long

original_torch_bmm = torch.bmm
def torch_bmm(input, mat2, *, out=None):
    if input.dtype != mat2.dtype:
        mat2 = mat2.to(input.dtype)

    #ARC GPUs can't allocate more than 4GB to a single block, Slice it:
    batch_size_attention, input_tokens, mat2_shape = input.shape[0], input.shape[1], mat2.shape[2]
    block_multiply = 2.4 if input.dtype == torch.float32 else 1.2
    block_size = (batch_size_attention * input_tokens * mat2_shape) / 1024 * block_multiply #MB
    split_slice_size = batch_size_attention
    if block_size >= 4000:
        do_split = True
        #Find something divisible with the input_tokens
        while ((split_slice_size * input_tokens * mat2_shape) / 1024 * block_multiply) > 4000:
            split_slice_size = split_slice_size // 2
            if split_slice_size <= 1:
                split_slice_size = 1
                break
    else:
        do_split = False

    split_block_size = (split_slice_size * input_tokens * mat2_shape) / 1024 * block_multiply #MB
    split_2_slice_size = input_tokens
    if split_block_size >= 4000:
        do_split_2 = True
        #Find something divisible with the input_tokens
        while ((split_slice_size * split_2_slice_size * mat2_shape) / 1024 * block_multiply) > 4000:
            split_2_slice_size = split_2_slice_size // 2
            if split_2_slice_size <= 1:
                split_2_slice_size = 1
                break
    else:
        do_split_2 = False

    if do_split:
        hidden_states = torch.zeros(input.shape[0], input.shape[1], mat2.shape[2], device=input.device, dtype=input.dtype)
        for i in range(batch_size_attention // split_slice_size):
            start_idx = i * split_slice_size
            end_idx = (i + 1) * split_slice_size
            if do_split_2:
                for i2 in range(input_tokens // split_2_slice_size): # pylint: disable=invalid-name
                    start_idx_2 = i2 * split_2_slice_size
                    end_idx_2 = (i2 + 1) * split_2_slice_size
                    hidden_states[start_idx:end_idx, start_idx_2:end_idx_2] = original_torch_bmm(
                        input[start_idx:end_idx, start_idx_2:end_idx_2],
                        mat2[start_idx:end_idx, start_idx_2:end_idx_2],
                        out=out
                    )
            else:
                hidden_states[start_idx:end_idx] = original_torch_bmm(
                    input[start_idx:end_idx],
                    mat2[start_idx:end_idx],
                    out=out
                )
    else:
        return original_torch_bmm(input, mat2, out=out)
    return hidden_states

original_scaled_dot_product_attention = torch.nn.functional.scaled_dot_product_attention
def scaled_dot_product_attention(query, key, value, attn_mask=None, dropout_p=0.0, is_causal=False):
    #ARC GPUs can't allocate more than 4GB to a single block, Slice it:
    shape_one, batch_size_attention, query_tokens, shape_four = query.shape
    block_multiply = 2.4 if query.dtype == torch.float32 else 1.2
    block_size = (shape_one * batch_size_attention * query_tokens * shape_four) / 1024 * block_multiply #MB
    split_slice_size = batch_size_attention
    if block_size >= 4000:
        do_split = True
        #Find something divisible with the shape_one
        while ((shape_one * split_slice_size * query_tokens * shape_four) / 1024 * block_multiply) > 4000:
            split_slice_size = split_slice_size // 2
            if split_slice_size <= 1:
                split_slice_size = 1
                break
    else:
        do_split = False

    split_block_size = (shape_one * split_slice_size * query_tokens * shape_four) / 1024 * block_multiply #MB
    split_2_slice_size = query_tokens
    if split_block_size >= 4000:
        do_split_2 = True
        #Find something divisible with the batch_size_attention
        while ((shape_one * split_slice_size * split_2_slice_size * shape_four) / 1024 * block_multiply) > 4000:
            split_2_slice_size = split_2_slice_size // 2
            if split_2_slice_size <= 1:
                split_2_slice_size = 1
                break
    else:
        do_split_2 = False

    if do_split:
        hidden_states = torch.zeros(query.shape, device=query.device, dtype=query.dtype)
        for i in range(batch_size_attention // split_slice_size):
            start_idx = i * split_slice_size
            end_idx = (i + 1) * split_slice_size
            if do_split_2:
                for i2 in range(query_tokens // split_2_slice_size): # pylint: disable=invalid-name
                    start_idx_2 = i2 * split_2_slice_size
                    end_idx_2 = (i2 + 1) * split_2_slice_size
                    hidden_states[:, start_idx:end_idx, start_idx_2:end_idx_2] = original_scaled_dot_product_attention(
                        query[:, start_idx:end_idx, start_idx_2:end_idx_2],
                        key[:, start_idx:end_idx, start_idx_2:end_idx_2],
                        value[:, start_idx:end_idx, start_idx_2:end_idx_2],
                        attn_mask=attn_mask[:, start_idx:end_idx, start_idx_2:end_idx_2] if attn_mask is not None else attn_mask,
                        dropout_p=dropout_p, is_causal=is_causal
                    )
            else:
                hidden_states[:, start_idx:end_idx] = original_scaled_dot_product_attention(
                    query[:, start_idx:end_idx],
                    key[:, start_idx:end_idx],
                    value[:, start_idx:end_idx],
                    attn_mask=attn_mask[:, start_idx:end_idx] if attn_mask is not None else attn_mask,
                    dropout_p=dropout_p, is_causal=is_causal
                )
    else:
        return original_scaled_dot_product_attention(
            query, key, value, attn_mask=attn_mask, dropout_p=dropout_p, is_causal=is_causal
        )
    return hidden_states

def attention_init():
    #ARC GPUs can't allocate more than 4GB to a single block:
    torch.bmm = torch_bmm
    torch.nn.functional.scaled_dot_product_attention = scaled_dot_product_attention
|
|
|
|
|
|
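Note: the module above does nothing until attention_init() monkey-patches the two PyTorch entry points; oversized matmul/attention blocks are then computed slice by slice on Intel ARC GPUs. A minimal usage sketch, assuming the module is importable under its repo path (the import path is inferred from the file location, and the tensor shapes are illustrative, not from the repo):

    import torch
    from infer.modules.ipex.attention import attention_init  # path assumed from the file location

    attention_init()  # replaces torch.bmm and torch.nn.functional.scaled_dot_product_attention

    # Illustrative 4-D attention inputs; large enough tensors would take the
    # sliced code path instead of one >4GB block.
    q = torch.randn(2, 8, 1024, 64)
    k = torch.randn(2, 8, 1024, 64)
    v = torch.randn(2, 8, 1024, 64)
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v)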
spaces/Benebene/Chat-question-answering/interface.py
DELETED
@@ -1,12 +0,0 @@
import gradio as gr
from utils import Stuff


def launch_gradio(s: Stuff):
    with gr.Blocks() as demo:
        question = gr.Textbox(label = 'Type your question about astronomy here :')
        output = gr.Textbox(label = 'The answer is...')
        button = gr.Button('Enter')
        button.click(fn = s.get_answer, inputs = question, outputs=output)

    demo.launch()
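Note: Stuff comes from the space's own utils module, which is not part of this diff. A hypothetical stand-in that satisfies the only interface launch_gradio() relies on (an object exposing get_answer(question) -> str):

    # Hypothetical stand-in for utils.Stuff; the real class presumably wraps a QA model.
    class EchoStuff:
        def get_answer(self, question: str) -> str:
            return f"You asked: {question}"

    # launch_gradio(EchoStuff()) would then serve the two-textbox demo defined above.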
spaces/Benson/text-generation/Examples/Descargar Apk Mod Fox App.md
DELETED
@@ -1,57 +0,0 @@

<h1>Descargar APK Mod Fox App: Cómo obtener la mejor experiencia de navegador en Android</h1>
<p>Si está buscando una aplicación de navegador rápida, segura y personalizable para su dispositivo Android, es posible que desee probar la aplicación APK Mod Fox. Esta es una versión modificada del popular navegador Firefox, que ofrece muchas características y beneficios que no están disponibles en la aplicación original. En este artículo, le mostraremos lo que es APK Mod Fox App, cómo descargar e instalar, y cómo usarlo para obtener la mejor experiencia de navegador en Android.</p>
<h2>¿Qué es la aplicación APK Mod Fox? </h2>
<p>APK Mod Fox App es una versión modificada de la aplicación Firefox Browser, que es uno de los navegadores web más populares y de confianza en el mundo. Firefox Browser es conocido por su velocidad, privacidad y opciones de personalización, pero también tiene algunas limitaciones y desventajas que algunos usuarios pueden encontrar molesto o inconveniente. Por ejemplo, tiene anuncios, rastreadores, ventanas emergentes y otros elementos no deseados que pueden afectar su experiencia de navegación. También consume mucha batería y memoria, lo que puede ralentizar el dispositivo. </p>
<h2>descargar apk mod fox app</h2><br /><p><b><b>Download Zip</b> ->->->-> <a href="https://bltlly.com/2v6KZv">https://bltlly.com/2v6KZv</a></b></p><br /><br />
<p>Ahí es donde APK Mod Fox App entra en juego. Esta es una versión modificada de la aplicación Firefox Browser que elimina todos los anuncios, rastreadores, ventanas emergentes y otros elementos no deseados de la aplicación. También optimiza el rendimiento de la aplicación y reduce su consumo de batería y memoria. También agrega algunas características y mejoras adicionales que no están disponibles en la aplicación original, como el modo oscuro, el modo nocturno, el modo de incógnito, el bloqueador de anuncios, la VPN, el administrador de descargas y más. Con la aplicación APK Mod Fox, puedes disfrutar de una experiencia de navegador más rápida, fluida y privada en tu dispositivo Android. </p>
<h3>Los beneficios de usar APK Mod Fox App</h3>
<p>Algunos de los beneficios de usar la aplicación APK Mod Fox son:</p>
<ul>
<li>Puedes navegar por la web sin anuncios, rastreadores, ventanas emergentes u otros elementos molestos que puedan distraerte o comprometer tu privacidad. </li>

<li>Puede acceder a varias características y herramientas que pueden mejorar su experiencia de navegación, como el modo oscuro, el modo nocturno, el modo de incógnito, el bloqueador de anuncios, la VPN, el administrador de descargas y más. </li>
<li> Puede ahorrar batería y memoria mediante el uso de una aplicación de navegador ligera y optimizada que no consume demasiados recursos de su dispositivo. </li>
<li> Puede disfrutar de un rendimiento del navegador rápido y suave que puede cargar páginas web de forma rápida y sin problemas. </li>
</ul>
<h3>Los inconvenientes de usar APK Mod Fox App</h3>
<p>Algunos de los inconvenientes de usar APK Mod Fox App son:</p>
<ul>
<li>Es posible que encuentre algunos problemas de compatibilidad o errores con algunos sitios web o aplicaciones que no están diseñados para la aplicación modded. </li>
<li>Es posible que no reciba actualizaciones regulares o soporte de los desarrolladores oficiales de la aplicación Firefox Browser. </li>
<li>Es posible que exponga su dispositivo a posibles riesgos de seguridad o malware si descarga la aplicación modificada desde una fuente no confiable o si habilita fuentes desconocidas en su dispositivo. </li>
</ul>
<h2>¿Cómo descargar e instalar la aplicación APK Mod Fox? </h2>
<p>Si desea descargar e instalar la aplicación APK Mod Fox en su dispositivo Android, debe seguir estos pasos:</p>
<h3>Paso 1: Encontrar una fuente confiable para la aplicación modded</h3>
<p>El primer paso es encontrar una fuente confiable para la aplicación modded. No se puede descargar APK Mod Fox App desde la Google Play Store, ya que no es una aplicación oficial. Es necesario encontrar un sitio web de terceros o plataforma que ofrece la aplicación modded para su descarga gratuita. Sin embargo, debe tener cuidado y hacer algunas investigaciones antes de descargar la aplicación modificada desde cualquier fuente. Usted necesita para asegurarse de que la fuente es confiable y de buena reputación, y que la aplicación modded es seguro y libre de virus. Puede comprobar las revisiones, calificaciones, comentarios y comentarios de otros usuarios que han descargado la aplicación modificada desde la misma fuente. También puede usar un escáner de malware o una aplicación antivirus para escanear la aplicación modificada antes de instalarla en su dispositivo. </p>
<h3>Paso 2: Habilitar fuentes desconocidas en su dispositivo</h3>
<p>El segundo paso es habilitar fuentes desconocidas en su dispositivo. Esta es una configuración de seguridad que le permite instalar aplicaciones desde fuentes distintas de Google Play Store. De forma predeterminada, esta configuración está desactivada en la mayoría de los dispositivos Android, ya que puede exponer su dispositivo a posibles riesgos de seguridad o malware. Sin embargo, si desea instalar APK Mod Fox App, es necesario habilitar esta configuración temporalmente. Para hacer esto, es necesario ir a la configuración de su dispositivo, a continuación, toque en la seguridad o la privacidad, a continuación, busque la opción que dice fuentes desconocidas o instalar aplicaciones desconocidas. Luego, cambie el interruptor o marque la casilla para habilitar esta opción. También es posible que necesite conceder permiso para la fuente o aplicación específica que desea instalar. </p>
<h3>Paso 3: Descargar e instalar el archivo APK</h3>
<p>El tercer paso es descargar e instalar el archivo APK de la aplicación APK Mod Fox en su dispositivo. Para hacer esto, debe abrir la aplicación del navegador en su dispositivo, luego ir al sitio web o plataforma donde encontró la aplicación modificada. Luego, busque el botón de descarga o enlace para la aplicación modded, y toque en él. Es posible que vea una ventana emergente o una notificación que le pida que confirme la descarga o instalación de la aplicación modificada. Toque en Aceptar o Instalar para continuar. Espere a que se complete el proceso de descarga e instalación, que puede tardar unos minutos dependiendo de la velocidad de Internet y el rendimiento del dispositivo. </p>
<h2>¿Cómo usar la aplicación APK Mod Fox? </h2>
<p>Una vez que haya descargado e instalado la aplicación APK Mod Fox en su dispositivo, puede comenzar a usarla para navegar por la web en su dispositivo Android. Aquí hay algunos consejos sobre cómo utilizar APK Mod Fox App:</p>
<p></p>
<h3>Personaliza la configuración y las preferencias de tu navegador</h3>

<h3>Navegar por la web con mayor privacidad y seguridad</h3>
<p>Otra ventaja de usar APK Mod Fox App es que se puede navegar por la web con mayor privacidad y seguridad. La aplicación modificada elimina todos los anuncios, rastreadores, ventanas emergentes y otros elementos no deseados de las páginas web que visita. También protege su actividad en línea y los datos de hackers, ISP, anunciantes y otros terceros que podrían tratar de espiar a usted o robar su información. También puede usar funciones como el modo incógnito, VPN y bloqueador de anuncios para aumentar aún más su privacidad y seguridad mientras navega por la web. </p>
<h3>Disfrute del rendimiento rápido y suave de la aplicación</h3>
<p>Una tercera ventaja de usar APK Mod Fox App es que se puede disfrutar del rendimiento rápido y suave de la aplicación. La aplicación modded optimiza el rendimiento de la aplicación y reduce su consumo de batería y memoria. También mejora la velocidad y suavidad de la aplicación mediante la carga de páginas web de forma rápida y sin problemas. También puedes usar funciones como gestor de descargas, VPN y bloqueador de anuncios para aumentar la velocidad de navegación y evitar interrupciones o ralentizaciones. </p>
<h2>Conclusión</h2>
<p>APK Mod Fox App es una versión modificada de la aplicación del navegador Firefox que ofrece muchos beneficios y características que no están disponibles en la aplicación original. Es una aplicación de navegador rápida, segura y personalizable que puede mejorar su experiencia de navegación en Android. Sin embargo, también tiene algunos inconvenientes y riesgos que debe tener en cuenta antes de descargarlo e instalarlo en su dispositivo. Necesitas encontrar una fuente confiable para la aplicación modded, habilitar fuentes desconocidas en tu dispositivo y escanear la aplicación modded en busca de malware o virus. También debe tener cuidado con la compatibilidad y las actualizaciones de la aplicación modded. </p>
<h4>Resumen de los puntos principales</h4>
<p>En este artículo, te hemos mostrado:</p>
<ul>
<li> ¿Qué es APK Mod Fox App y cómo se diferencia de la aplicación original del navegador Firefox. </li>
<li>Los beneficios y desventajas de usar APK Mod Fox App.</li>
<li>Cómo descargar e instalar la aplicación APK Mod Fox en su dispositivo Android.</li>
<li>Cómo utilizar APK Mod Fox App para obtener la mejor experiencia de navegador en Android.</li>
</ul>
<h4>Llamada a la acción para los lectores</h4>
<p>Si usted está interesado en probar APK Mod Fox App, puede seguir los pasos que hemos proporcionado en este artículo para descargarlo e instalarlo en su dispositivo. Sin embargo, también debe hacer su propia investigación y comprobar las revisiones y calificaciones de la aplicación modded antes de descargarlo. También debe realizar una copia de seguridad de sus datos y dispositivo antes de instalar la aplicación modded, en caso de que algo salga mal o desee desinstalarlo más tarde. También debe tener cuidado con la seguridad y la privacidad de su actividad en línea y los datos durante el uso de la aplicación modded. </p>
<p>Esperamos que haya encontrado este artículo útil e informativo. Si tiene alguna pregunta o comentario, no dude en dejar un comentario a continuación. ¡Gracias por leer! </p>

podría querer probar estas aplicaciones de navegador para Android: - Brave Browser: Esta es una aplicación de navegador rápido, seguro y privado que bloquea los anuncios y rastreadores por defecto. También le recompensa con criptomoneda para navegar por la web. - Opera Browser: Esta es una aplicación de navegador rápida, ligera y personalizable que ofrece funciones como bloqueador de anuncios, VPN, ahorro de datos, modo nocturno y más. - Chrome Browser: Esta es una aplicación de navegador popular y confiable que ofrece características como sincronización, búsqueda por voz, modo de incógnito, modo oscuro y más. </p> 64aa2da5cf<br />
<br />
<br />
spaces/Benson/text-generation/Examples/Descargar Carreras Lmites Mod Apk.md
DELETED
@@ -1,89 +0,0 @@

<h1>Descargar Racing Limits Mod APK: Una guía para los entusiastas de las carreras</h1>
<p>Si eres un fan de los juegos de carreras, es posible que hayas oído hablar de <strong>Racing Limits</strong>, un popular juego de carreras estilo árcade que te permite competir en la ciudad y el tráfico de carreteras. Este juego ofrece física de conducción realista, vehículos de alto detalle, afinaciones y mejoras, gráficos realistas y cinco modos de carreras agradables. Sin embargo, si quieres disfrutar del juego al máximo, es posible que desee descargar el <strong>Racing Limits mod APK</strong>, que le da dinero ilimitado y acceso a todas las características del juego. En este artículo, le diremos qué es Racing Limits, cuáles son sus características y modos, cómo jugar mejor, y cómo descargar el mod APK fácilmente.</p>
<h2>descargar carreras límites mod apk</h2><br /><p><b><b>Download Zip</b> 🗸 <a href="https://bltlly.com/2v6KqJ">https://bltlly.com/2v6KqJ</a></b></p><br /><br />
<h2>Características del juego Racing Limits</h2>
<p>Racing Limits es un juego que define los estándares móviles de los juegos de carreras de tipo árcade infinito. Basado en carreras y adelantamiento de vehículos tanto en la ciudad y el tráfico de carreteras, este juego tiene muchas características que lo hacen divertido y desafiante. Estos son algunos de ellos:</p>
<h3>5 modos agradables de carreras</h3>
<p>Racing Limits tiene cinco modos de carrera diferentes entre los que puedes elegir. Son:</p>
<ul>
<li><strong>Modo portador:</strong> Este modo tiene cientos de niveles que puedes completar al lograr ciertos objetivos. Puedes ganar dinero y comprar coches nuevos o mejorar los existentes. </li>
<li><strong>Modo infinito:</strong> Este modo no tiene fin. Puedes correr todo el tiempo que quieras e intentar batir tus propios récords. También puedes ganar dinero y bonos adelantando a otros vehículos. </li>
<li><strong>Modo contra-tiempo:</strong> Este modo prueba tu velocidad y habilidades. Tienes que correr contra el reloj y llegar a los puntos de control antes de que acabe el tiempo. </li>
<li><strong>Modo libre:</strong> Este modo te permite correr libremente sin reglas ni restricciones. Puede elegir la densidad de tráfico, el límite de velocidad y la hora del día. </li>

</ul>
<h3>Física de conducción realista</h3>
<p>Racing Limits tiene una física de conducción realista que hace que el juego sea más inmersivo y desafiante. Todos los coches de Racing Limits tienen una potencia, par y velocidades de transmisión realistas. El proceso de aceleración y las velocidades máximas se basan en una simulación completa. Se tienen en cuenta el peso del vehículo, las relaciones de transmisión, la potencia del motor y las relaciones de par. </p>
<h3>Vehículos de alto detalle</h3>
<p>Racing Limits tiene un montón de vehículos con altos niveles de detalle gráfico que están esperando a que conduzcas. Los detalles gráficos de los coches presentes en Racing Limits son los mejores de su categoría. Usted puede elegir entre diferentes tipos de coches como sedanes, SUV, coches deportivos, supercoches, y más. </p>
<h3>Afinaciones y mejoras</h3>
<p>Racing Limits te permite personalizar tu coche con varias opciones. Puedes cambiar el color de tu coche, llantas y pinzas. También puede aplicar diferentes tipos de vinilos a su coche. También puede mejorar el rendimiento de su coche mediante el aumento de la potencia del motor, el freno y la sensibilidad de la dirección, y la reducción de peso. </p>
<h3>Gráficos realistas</h3>
<p>Racing Limits tiene gráficos impresionantes que hacen el juego más realista y agradable. El juego tiene diferentes entornos con iluminación realista y efectos climáticos. Puedes correr en condiciones de sol, lluvia, niebla o nieve. También puede elegir la hora del día desde el amanecer hasta la noche. El juego también tiene efectos de sonido realistas y música que mejoran la experiencia de juego. </p>
<p></p>
<h2>Modos de juego de límites de carreras</h2>
<p>Como mencionamos antes, Racing Limits tiene cinco modos diferentes de carreras que puedes jugar. Cada modo tiene sus propios desafíos y recompensas. Aquí hay una breve descripción de cada modo:</p>
<h3>Modo portador</h3>

<h3>Modo infinito</h3>
<p>Este es el modo en el que puedes correr sin límites. Puedes elegir la densidad de tráfico, el límite de velocidad y la hora del día. Tienes que adelantar a otros vehículos lo más cerca posible para ganar más dinero y bonos. También puedes usar nitro para aumentar tu velocidad y realizar maniobras arriesgadas. Puedes comparar tus puntuaciones con otros jugadores de la clasificación. </p>
<h3>Modo contra-tiempo</h3>
<p>Este es el modo en el que tienes que correr contra el reloj. Tienes que llegar a los puntos de control antes de que acabe el tiempo. Puedes ganar tiempo extra adelantando a otros vehículos o usando nitro. Tienes que ser rápido y tener cuidado de no chocar o quedarse sin tiempo. </p>
<h3>Modo libre</h3>
<p>Este es el modo en el que puedes correr libremente sin reglas ni restricciones. Puedes elegir la densidad de tráfico, el límite de velocidad y la hora del día. También puede apagar el tráfico y disfrutar del paisaje. Puede utilizar este modo para practicar sus habilidades de conducción o simplemente divertirse. </p>
<h3>Modo multijugador</h3>
<p>Este es el modo en el que puedes competir con tus amigos u otros jugadores de todo el mundo en tiempo real. Puedes unirte o crear salas y carreras en diferentes pistas. Puedes chatear con otros jugadores y enviarles emojis. También puedes ver sus perfiles y estadísticas. </p>
<h2>Consejos de juego de límites de carreras</h2>
<p>Racing Limits es un juego que requiere habilidad y estrategia para dominar. Aquí hay algunos consejos que pueden ayudarle a mejorar su rendimiento y disfrutar del juego más:</p>
<h3>Elegir el ángulo de la cámara derecha</h3>
<p>Racing Limits ofrece cuatro ángulos de cámara diferentes entre los que puedes alternar durante el juego. Son:</p>
<ul>
<li><strong>Bumper cam:</strong> Esta es la cámara que muestra la vista desde el parachoques delantero de su coche. Esta cámara le da una sensación realista de velocidad e inmersión, pero también limita su visibilidad de la carretera y el tráfico. </li>

<li><strong>Cockpit cam:</strong> Esta es la cámara que muestra la vista desde el interior de su coche. Esta cámara le da una sensación realista de conducción e inmersión, pero también limita su visibilidad de la carretera y el tráfico. </li>
<li><strong>Tercera persona cámara:</strong> Esta es la cámara que muestra la vista desde detrás de su coche. Esta cámara te da una buena vista de la carretera y el tráfico, pero también reduce tu sensación de velocidad e inmersión. </li>
</ul>
<p>Usted debe elegir el ángulo de la cámara que se adapte a su preferencia y estilo de carreras. También puede cambiar el ángulo de la cámara durante el juego tocando en la pantalla. </p>
<h3>Utilice los controles sensibles y fáciles</h3>
<p>Racing Limits tiene controles sensibles y fáciles que te permiten controlar tu coche con precisión y facilidad. Puede elegir entre tres opciones de control diferentes: inclinación, tacto o volante. También puede ajustar la sensibilidad y la calibración de cada opción en el menú de configuración. </p>
<p>El control de inclinación le permite dirigir su automóvil inclinando el dispositivo hacia la izquierda o hacia la derecha. El control táctil le permite dirigir su automóvil tocando el lado izquierdo o derecho de la pantalla. El control del volante te permite conducir tu coche arrastrando un volante virtual en la pantalla. </p>
<p>Debe elegir la opción de control que se adapte a su preferencia y comodidad. También puede utilizar los botones de freno y nitro en la pantalla para ralentizar o acelerar su coche. También puede cambiar la posición y el tamaño de los botones en el menú de configuración. </p>
<h3>Personalizar su coche para adaptarse a su estilo</h3>
<p>Racing Limits te permite personalizar tu coche con varias opciones. Puedes cambiar el color de tu coche, llantas y calibradores. También puede aplicar diferentes tipos de vinilos a su coche. También puede mejorar el rendimiento de su coche mediante el aumento de la potencia del motor, el freno y la sensibilidad de la dirección, y la reducción de peso. </p>

<h3>Mantener líneas de carreras limpias y apretadas</h3>
<p>Racing Limits es un juego que requiere habilidad y estrategia para dominar. Una de las habilidades más importantes es mantener sus líneas de carreras limpias y apretadas. Líneas de carreras son los caminos que se toman en la carretera para optimizar su velocidad y distancia. Deberías intentar seguir las líneas de carreras lo más de cerca posible y evitar giros o movimientos innecesarios. </p>
<p>También debe tratar de adelantar a otros vehículos lo más cerca posible para ganar más dinero y bonos. Sin embargo, también debe tener cuidado de no chocar o golpear otros vehículos, ya que esto dañará su automóvil y reducirá su velocidad. También debe evitar conducir en el carril opuesto, ya que esto aumentará el riesgo de colisión y penalización. </p>
<h3>Otros corredores para ganar velocidad</h3>
<p>Racing Limits es un juego que recompensa la habilidad y la estrategia. Una de las estrategias más efectivas es reclutar a otros corredores para ganar velocidad. El dibujo es una técnica en la que se sigue de cerca detrás de otro vehículo para reducir la resistencia del aire y aumentar su velocidad. Puedes usar esta técnica para adelantar a otros vehículos o escapar de ellos. </p>
<p>Debes reclutar a otros corredores siempre que sea posible, especialmente en carreteras rectas o carreteras. Sin embargo, también debe tener cuidado de no quedarse detrás de ellos durante demasiado tiempo, ya que esto reducirá su visibilidad y tiempo de reacción. También debe tener cuidado con los movimientos repentinos o los frenos del vehículo que tiene delante, ya que esto puede causar que se estrelle o pierda velocidad. </p>
<h2>Cómo descargar límites de carreras Mod APK</h2>
<p>Si quieres disfrutar de Racing Limits al máximo, es posible que desee descargar el mod APK, que le da dinero ilimitado y acceso a todas las características del juego. Aquí están los pasos para descargar e instalar el mod APK fácilmente:</p>
<h3>Paso 1: Encontrar una fuente confiable</h3>

<p>También debe comprobar las revisiones y valoraciones de la fuente antes de descargar, ya que pueden darle una idea de su calidad y seguridad. También puede pedir recomendaciones de otros jugadores o amigos que han descargado el mod APK antes. </p>
<h3>Paso 2: Habilitar fuentes desconocidas en su dispositivo</h3>
<p>El siguiente paso es habilitar fuentes desconocidas en su dispositivo, lo que le permite instalar aplicaciones desde fuentes distintas de Google Play Store. Para hacer esto, tienes que ir a la configuración del dispositivo, luego la seguridad, luego fuentes desconocidas y luego activarlo. También es posible que tenga que confirmar un mensaje de advertencia que aparece en su pantalla. </p>
<p>Solo debe habilitar fuentes desconocidas cuando se está descargando e instalando el archivo APK mod, y desactivarlo después, ya que puede plantear un riesgo de seguridad para su dispositivo. </p>
<h3>Paso 3: Descargar e instalar el archivo APK Mod</h3>
<p>El tercer paso es descargar e instalar el archivo APK mod en su dispositivo. Para hacer esto, debe hacer clic en el enlace proporcionado por la fuente que eligió en el paso 1, y luego esperar a que termine la descarga. También es posible que tenga que permitir algunos permisos o aceptar algunos términos y condiciones antes de descargar. </p>
<p>Una vez que la descarga se ha completado, usted tiene que localizar el archivo APK mod en el almacenamiento de su dispositivo, por lo general en la carpeta de descargas, y luego toque en él para iniciar el proceso de instalación. También es posible que tenga que permitir algunos permisos o aceptar algunos términos y condiciones antes de instalar. </p>
<h3>Paso 4: Iniciar el juego y disfrutar de dinero ilimitado y características</h3>

<h2>Conclusión</h2>
<p>Racing Limits es un divertido y emocionante juego de carreras estilo árcade que te permite correr en la ciudad y el tráfico de carreteras. Tiene física de conducción realista, vehículos de alto detalle, afinaciones y mejoras, gráficos realistas y cinco modos de carreras agradables. Sin embargo, si desea disfrutar del juego al máximo, es posible que desee descargar el mod APK Racing Limits, que le da dinero ilimitado y acceso a todas las características del juego. En este artículo, te hemos dicho lo que es Racing Limits, cuáles son sus características y modos, cómo jugar mejor, y cómo descargar el mod APK fácilmente. Esperamos que este artículo te haya ayudado y que te lo pases genial jugando a Racing Limits.</p>
<h2>Preguntas frecuentes</h2>
<p>Aquí hay algunas preguntas frecuentes sobre Racing Limits y su mod APK:</p>
<h3>Q: Es Racing Limits mod APK seguro para descargar e instalar? </h3>
<p>A: Sí, Racing Limits mod APK es seguro para descargar e instalar, siempre y cuando siga los pasos que hemos proporcionado en este artículo. Sin embargo, siempre debe tener cuidado de no descargar de fuentes no confiables o maliciosas, ya que pueden contener virus o malware que pueden dañar su dispositivo o robar sus datos. También debe comprobar las revisiones y calificaciones de la fuente antes de descargar, ya que pueden darle una idea de su calidad y seguridad. También debe desactivar fuentes desconocidas en su dispositivo después de instalar el mod APK, ya que puede plantear un riesgo de seguridad para su dispositivo. </p>
<h3>Q: ¿Cuáles son los beneficios de descargar Racing Limits mod APK? </h3>
<p>A: Los beneficios de descargar Racing Limits mod APK son que se obtiene dinero ilimitado y el acceso a todas las características del juego. Puede utilizar este dinero y características para comprar coches nuevos, actualizar los existentes, o cambiar su apariencia. También puede reproducir cualquier modo o pista que desee, sin restricciones o limitaciones. Puedes disfrutar del juego al máximo sin gastar dinero real ni esperar nada. </p>
<h3>Q: ¿Cómo puedo actualizar Racing Limits mod APK? </h3>
<p>A: Para actualizar Racing Limits mod APK, tienes que seguir los mismos pasos que hemos proporcionado en este artículo para descargarlo e instalarlo. Tienes que encontrar una fuente confiable que ofrece la última versión del archivo mod APK para Racing Limits, y luego descargarlo e instalarlo en tu dispositivo. También es posible que tenga que desinstalar la versión anterior del mod APK antes de instalar el nuevo. </p>
<h3>Q: ¿Puedo jugar Racing Limits mod APK en línea con otros jugadores? </h3>
<p>A: Sí, puede jugar Racing Limits mod APK en línea con otros jugadores en el modo multijugador. Sin embargo, usted debe ser consciente de que no todos los jugadores pueden estar utilizando el mod APK, y algunos podrían estar utilizando la versión original del juego. Esto podría causar algunos problemas de compatibilidad o ventajas injustas para algunos jugadores. También debes respetar a otros jugadores y no usar trucos o hacks que puedan arruinar su experiencia de juego. </p>
<h3>Q: ¿Puedo jugar Racing Limits mod APK sin conexión a Internet? </h3>
<p>A: Sí, puede jugar Racing Limits mod APK sin conexión a Internet en algunos modos como modo portador, modo infinito, modo contra-tiempo o modo libre. Sin embargo, no podrás jugar al modo multijugador ni acceder a algunas funciones online como tablas de clasificación o salas de chat. </p> 64aa2da5cf<br />
<br />
<br />
spaces/Benson/text-generation/Examples/Descargar Do Cabra Simulador.md
DELETED
@@ -1,83 +0,0 @@

<h1>Descargar Hacer simulador de cabra: Cómo convertirse en una cabra virtual y cosas de naufragio</h1>
<p>¿Alguna vez te has preguntado cómo sería ser una cabra? ¿Vagar libremente, con la cabeza a la vista y causar tanto caos como sea posible? Bueno, no te lo preguntes más, porque Goat Simulator es el juego para ti. En este artículo, te contaremos todo lo que necesitas saber sobre este divertido y absurdo juego, y cómo puedes descargarlo y jugarlo en tu dispositivo. </p>
<h2>descargar do cabra simulador</h2><br /><p><b><b>Download Zip</b> ✪ <a href="https://bltlly.com/2v6Jbc">https://bltlly.com/2v6Jbc</a></b></p><br /><br />
<h2>¿Qué es Goat Simulator? </h2>
<h3>Una breve introducción al juego y sus características</h3>
<p>Goat Simulator es un juego que simula la vida de una cabra, pero no de una manera realista o seria. En cambio, es una parodia de otros juegos de simulación, como Flight Simulator o Farming Simulator, que exagera la física y los fallos del motor del juego para crear una experiencia ridícula e hilarante. El juego fue desarrollado por Coffee Stain Studios y lanzado en 2014 como una broma de April Fools, pero se hizo tan popular que generó varios spin-offs y DLCs.</p>
<p>El juego no tiene metas u objetivos específicos, aparte de explorar el entorno de mundo abierto y causar tanta destrucción como sea posible. Puedes interactuar con varios objetos y personajes en el juego, como coches, trampolines, explosivos, zombis, alienígenas y más. También puede realizar varias acrobacias y trucos, como backflips, carreras de pared, física ragdoll y cámara lenta. Incluso puedes lamer cosas y arrastrarlas con la lengua. </p>
<p>El juego también es compatible con Steam Workshop, lo que significa que puedes crear tus propias cabras, niveles, misiones, modos de juego y más. También puedes descargar e instalar mods creados por otros jugadores, que añaden nuevas características y contenido al juego. </p>
<h3>¿Por qué usted debe jugar Goat Simulator</h3>

<p>Si estás buscando un juego que desafíe tus habilidades o ponga a prueba tu inteligencia, entonces Goat Simulator no es para ti. Pero si estás buscando un juego que te haga sonreír, reír o incluso reírte, entonces Goat Simulator es definitivamente para ti. Es un juego que te hará olvidarte de tus preocupaciones y estrés por un tiempo, y simplemente disfrutar de ser una cabra. </p>
<p></p>
<h3>Cómo descargar Goat Simulator para diferentes plataformas</h3>
<p>Goat Simulator está disponible para varias plataformas, como Windows, Mac, Linux, Android, iOS, Xbox One, Xbox 360, PlayStation 4, PlayStation 3, Nintendo Switch, Amazon Fire TV y más. Puedes descargarlo desde diferentes fuentes dependiendo de tu dispositivo. </p>
<table>
<tr><th>Plataforma</th><th>Fuente</th><th>Precio</th></tr>
<tr><td>Windows</td><td><a href="( 1 )">Steam</a></td><td>$9.99</td></tr>
<tr><td>Mac</td><td><a href="( 1 )">Steam</a></td><td>$9.99</td></tr>
<tr><td>Linux</td><td><a href="( 1 <p>Goat Simulator tiene varios modos de juego y mapas que puedes elegir, cada uno con su propio tema y características. Puede acceder a ellos desde el menú principal o desde el menú de pausa. Estos son algunos de los más populares:</p>
<ul>
<li>Goatville: Este es el mapa original del juego, donde puedes explorar una ciudad suburbana llena de gente, animales, vehículos y secretos. También puedes encontrar una torre de cabra, un parque de patinaje, un carnaval, una gasolinera y más. </li>
<li>Goat City Bay: Este es un mapa más grande que cuenta con una ciudad costera con rascacielos, playas, barcos y una montaña rusa. También puedes encontrar un casino, un hotel, un museo y una ballena. </li>
<li>Goat MMO Simulator: Esta es una parodia de MMORPGs, donde puedes elegir entre diferentes clases de cabras, como Rogue, Hunter, Magician, Tank o Microwave. También puedes completar misiones, subir de nivel, recoger botín y luchar contra enemigos. </li>

<li>Goat Simulator: DÍA DE PAGO: Esta es una parodia de los juegos de robo, donde puedes formar equipo con otros criminales animales, como un flamenco, un delfín y un camello. También puedes robar bancos, casinos, museos y más. </li>
<li>Desperdicio de espacio: Esta es una parodia de juegos de ciencia ficción, donde puedes explorar el espacio como una cabra. También puedes volar naves espaciales, visitar planetas, luchar contra alienígenas y financiar tu propia colonia espacial. </li>
</ul>
<p>También hay muchos otros modos y mapas que puedes descargar desde el Steam Workshop u otras fuentes. Algunos de ellos se basan en otros juegos o películas populares, como Minecraft, Jurassic Park, Harry Potter, y más. </p>
<h3>Los mejores consejos y trucos para divertirse como una cabra</h3>
<p>Goat Simulator es un juego que te anima a experimentar y probar cosas diferentes. No hay forma correcta o incorrecta de jugarlo. Sin embargo, si quieres algunos consejos y trucos para hacer tu experiencia de cabra más agradable e hilarante, aquí hay algunas sugerencias:</p>
<ul>
<li>Busca estatuas de cabra dorada. Estos son coleccionables que desbloquean nuevas cabras con habilidades o apariencias especiales. Algunos de ellos están ocultos en lugares secretos o requieren ciertas acciones para obtener. </li>
<li>Use el menú del mutador. Esta es una opción que le permite personalizar su cabra con varios modificadores que cambian su comportamiento o apariencia. Por ejemplo, puedes hacer que tu cabra vuele, disparar láseres desde sus ojos, engendrar secuaces, o volverse enorme. </li>
<li>Combinar diferentes mutadores. También puede mezclar y combinar diferentes mutadores para crear su propia cabra única. Por ejemplo, puedes combinar el mutador Angel Goat con el mutador Jetpack para crear una cabra voladora con alas y cohetes. </li>
<li>Explora cada esquina del mapa. Hay muchos secretos ocultos y huevos de Pascua en el juego que hacen referencia a otros juegos o cultura pop. Por ejemplo, puedes encontrar un minijuego de Flappy Bird en Goat City Bay o una pistola portal en Waste of Space.</li>

</ul>
<h2>Conclusión</h2>
<h3>Un resumen de los puntos principales y una llamada a la acción</h3>
<p>Goat Simulator es uno de los juegos más divertidos y absurdos jamás realizados. Es un juego que te permite convertirte en una cabra virtual y destrozar cosas de varias maneras. Es un juego que no tiene reglas ni límites, solo diversión y creatividad. Es un juego que te hará reír a carcajadas y divertirte. </p>
<p>Si quieres experimentar este juego por ti mismo, todo lo que tienes que hacer es descargarlo desde la fuente que se adapte a tu dispositivo y plataforma. A continuación, puede empezar a jugar de inmediato y disfrutar de ser una cabra. </p>
<p>Entonces, ¿qué estás esperando? Descargar Goat Simulator hoy y dar rienda suelta a su cabra interior! </p>
<h2>Preguntas frecuentes</h2>
<h4>¿Es Goat Simulator gratis? </h4>
<p>No, Goat Simulator no es gratis. Es un juego de pago que cuesta diferentes precios dependiendo de tu plataforma y región. Sin embargo, hay algunas versiones gratuitas del juego para dispositivos Android que tienen características y contenido limitados. </p>
<h4>¿Es Goat Simulator multijugador? </h4>
<p>Sí, Goat Simulator tiene un modo multijugador que te permite jugar con hasta cuatro jugadores en línea o localmente. Puede unirse o alojar un juego desde el menú principal o el menú de pausa. También puede chatear con otros jugadores y ver sus cabras en la pantalla. </p>
<h4>¿Es seguro descargar Goat Simulator? </h4>
<p>Sí, Goat Simulator es seguro para descargar de las fuentes oficiales que mencionamos en este artículo. Sin embargo, debe tener cuidado al descargar mods u otros archivos de fuentes no oficiales, ya que pueden contener virus o malware que pueden dañar su dispositivo o comprometer sus datos. </p>
<h4>¿Cuáles son los requisitos del sistema para Goat Simulator? </h4>
<p>Los requisitos del sistema para Goat Simulator varían dependiendo de su plataforma y dispositivo. Para la versión de PC, los requisitos mínimos son:</p>
<ul>
<li>OS: Windows XP (SP3), Windows Vista (SP2), Windows 7, Windows 8</li>
<li>Procesador: 2.0 GHz Procesador de doble núcleo</li>
<li>Memoria: 2 GB de RAM</li>

<li>DirectX: Versión 9.0c</li>
<li>Almacenamiento: 2 GB de espacio disponible</li>
<li>Tarjeta de sonido: DirectX 9.0c-compatible, 16 bits</li>
</ul>
<p>Los requisitos recomendados son:</p>
<ul>
<li>OS: ventanas 7, ventanas 8</li>
<li>Procesador: 2.0 GHz Procesador de cuatro núcleos</li>
<li>Memoria: 4 GB de RAM</li>
<li>Gráficos: Shader Model 3.0, 512 MB VRAM</li>
<li>DirectX: Versión 9.0c</li>
<li>Almacenamiento: 2 GB de espacio disponible</li>
<li>Tarjeta de sonido: DirectX 9.0c-compatible, 16 bits</li>
</ul>
<p>Puede comprobar los requisitos del sistema para otras plataformas y dispositivos en sus respectivas fuentes o sitios web. </p>
<h4>¿Cuáles son algunos de los mejores mods para Goat Simulator? </h4>
<p>Hay muchos mods para Goat Simulator que añaden nuevas características y contenido al juego. Algunos de ellos se basan en otros juegos o películas populares, como Minecraft, Jurassic Park, Harry Potter y más. Algunos de los mejores mods para Goat Simulator son:</p>
<ul>
<li>Minecraft Mod: Este mod te permite jugar como una cabra en un mundo similar a Minecraft, donde puedes extraer bloques, crear objetos, construir estructuras y luchar contra enemigos. </li>
<li>Cabra jurásica Mod: Este mod te permite jugar como una cabra en un mundo parecido a un parque jurásico, donde puedes encontrar dinosaurios, montar vehículos y explorar la isla. </li>
<li>Hogwarts Mod: este mod te permite jugar como una cabra en un mundo similar a Harry Potter, donde puedes usar hechizos mágicos, escobas voladoras y visitar Hogwarts.</li>
<li>Pokemon Mod: Este mod te permite jugar como una cabra en un mundo similar a Pokémon, donde puedes atrapar y luchar contra otras cabras con diferentes tipos y habilidades. </li>
<li>Mario Mod: Este mod te permite jugar como una cabra en un mundo similar a Mario, donde puedes recoger monedas, potenciadores y estrellas, y saltar sobre enemigos y plataformas. </li>
</ul></p> 64aa2da5cf<br />
<br />
<br />
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/locations/_sysconfig.py
DELETED
@@ -1,213 +0,0 @@
import logging
import os
import sys
import sysconfig
import typing

from pip._internal.exceptions import InvalidSchemeCombination, UserInstallationInvalid
from pip._internal.models.scheme import SCHEME_KEYS, Scheme
from pip._internal.utils.virtualenv import running_under_virtualenv

from .base import change_root, get_major_minor_version, is_osx_framework

logger = logging.getLogger(__name__)


# Notes on _infer_* functions.
# Unfortunately ``get_default_scheme()`` didn't exist before 3.10, so there's no
# way to ask things like "what is the '_prefix' scheme on this platform". These
# functions try to answer that with some heuristics while accounting for ad-hoc
# platforms not covered by CPython's default sysconfig implementation. If the
# ad-hoc implementation does not fully implement sysconfig, we'll fall back to
# a POSIX scheme.

_AVAILABLE_SCHEMES = set(sysconfig.get_scheme_names())

_PREFERRED_SCHEME_API = getattr(sysconfig, "get_preferred_scheme", None)


def _should_use_osx_framework_prefix() -> bool:
    """Check for Apple's ``osx_framework_library`` scheme.

    Python distributed by Apple's Command Line Tools has this special scheme
    that's used when:

    * This is a framework build.
    * We are installing into the system prefix.

    This does not account for ``pip install --prefix`` (also means we're not
    installing to the system prefix), which should use ``posix_prefix``, but
    logic here means ``_infer_prefix()`` outputs ``osx_framework_library``. But
    since ``prefix`` is not available for ``sysconfig.get_default_scheme()``,
    which is the stdlib replacement for ``_infer_prefix()``, presumably Apple
    wouldn't be able to magically switch between ``osx_framework_library`` and
    ``posix_prefix``. ``_infer_prefix()`` returning ``osx_framework_library``
    means its behavior is consistent whether we use the stdlib implementation
    or our own, and we deal with this special case in ``get_scheme()`` instead.
    """
    return (
        "osx_framework_library" in _AVAILABLE_SCHEMES
        and not running_under_virtualenv()
        and is_osx_framework()
    )


def _infer_prefix() -> str:
    """Try to find a prefix scheme for the current platform.

    This tries:

    * A special ``osx_framework_library`` for Python distributed by Apple's
      Command Line Tools, when not running in a virtual environment.
    * Implementation + OS, used by PyPy on Windows (``pypy_nt``).
    * Implementation without OS, used by PyPy on POSIX (``pypy``).
    * OS + "prefix", used by CPython on POSIX (``posix_prefix``).
    * Just the OS name, used by CPython on Windows (``nt``).

    If none of the above works, fall back to ``posix_prefix``.
    """
    if _PREFERRED_SCHEME_API:
        return _PREFERRED_SCHEME_API("prefix")
    if _should_use_osx_framework_prefix():
        return "osx_framework_library"
    implementation_suffixed = f"{sys.implementation.name}_{os.name}"
    if implementation_suffixed in _AVAILABLE_SCHEMES:
        return implementation_suffixed
    if sys.implementation.name in _AVAILABLE_SCHEMES:
        return sys.implementation.name
    suffixed = f"{os.name}_prefix"
    if suffixed in _AVAILABLE_SCHEMES:
        return suffixed
    if os.name in _AVAILABLE_SCHEMES:  # On Windows, prefix is just called "nt".
        return os.name
    return "posix_prefix"


def _infer_user() -> str:
    """Try to find a user scheme for the current platform."""
    if _PREFERRED_SCHEME_API:
        return _PREFERRED_SCHEME_API("user")
    if is_osx_framework() and not running_under_virtualenv():
        suffixed = "osx_framework_user"
    else:
        suffixed = f"{os.name}_user"
    if suffixed in _AVAILABLE_SCHEMES:
        return suffixed
    if "posix_user" not in _AVAILABLE_SCHEMES:  # User scheme unavailable.
        raise UserInstallationInvalid()
    return "posix_user"


def _infer_home() -> str:
    """Try to find a home for the current platform."""
    if _PREFERRED_SCHEME_API:
        return _PREFERRED_SCHEME_API("home")
    suffixed = f"{os.name}_home"
    if suffixed in _AVAILABLE_SCHEMES:
        return suffixed
    return "posix_home"


# Update these keys if the user sets a custom home.
_HOME_KEYS = [
    "installed_base",
    "base",
    "installed_platbase",
    "platbase",
    "prefix",
    "exec_prefix",
]
if sysconfig.get_config_var("userbase") is not None:
    _HOME_KEYS.append("userbase")


def get_scheme(
    dist_name: str,
    user: bool = False,
    home: typing.Optional[str] = None,
    root: typing.Optional[str] = None,
    isolated: bool = False,
    prefix: typing.Optional[str] = None,
) -> Scheme:
    """
    Get the "scheme" corresponding to the input parameters.

    :param dist_name: the name of the package to retrieve the scheme for, used
        in the headers scheme path
    :param user: indicates to use the "user" scheme
    :param home: indicates to use the "home" scheme
    :param root: root under which other directories are re-based
    :param isolated: ignored, but kept for distutils compatibility (where
        this controls whether the user-site pydistutils.cfg is honored)
    :param prefix: indicates to use the "prefix" scheme and provides the
        base directory for the same
    """
    if user and prefix:
        raise InvalidSchemeCombination("--user", "--prefix")
    if home and prefix:
        raise InvalidSchemeCombination("--home", "--prefix")

    if home is not None:
        scheme_name = _infer_home()
    elif user:
        scheme_name = _infer_user()
    else:
        scheme_name = _infer_prefix()

    # Special case: When installing into a custom prefix, use posix_prefix
    # instead of osx_framework_library. See _should_use_osx_framework_prefix()
    # docstring for details.
    if prefix is not None and scheme_name == "osx_framework_library":
        scheme_name = "posix_prefix"

    if home is not None:
        variables = {k: home for k in _HOME_KEYS}
    elif prefix is not None:
        variables = {k: prefix for k in _HOME_KEYS}
    else:
        variables = {}

    paths = sysconfig.get_paths(scheme=scheme_name, vars=variables)

    # Logic here is very arbitrary, we're doing it for compatibility, don't ask.
    # 1. Pip historically uses a special header path in virtual environments.
    # 2. If the distribution name is not known, distutils uses 'UNKNOWN'. We
    #    only do the same when not running in a virtual environment because
    #    pip's historical header path logic (see point 1) did not do this.
    if running_under_virtualenv():
        if user:
            base = variables.get("userbase", sys.prefix)
        else:
            base = variables.get("base", sys.prefix)
        python_xy = f"python{get_major_minor_version()}"
        paths["include"] = os.path.join(base, "include", "site", python_xy)
    elif not dist_name:
        dist_name = "UNKNOWN"

    scheme = Scheme(
        platlib=paths["platlib"],
        purelib=paths["purelib"],
        headers=os.path.join(paths["include"], dist_name),
        scripts=paths["scripts"],
        data=paths["data"],
    )
    if root is not None:
        for key in SCHEME_KEYS:
            value = change_root(root, getattr(scheme, key))
            setattr(scheme, key, value)
    return scheme


def get_bin_prefix() -> str:
    # Forcing to use /usr/local/bin for standard macOS framework installs.
    if sys.platform[:6] == "darwin" and sys.prefix[:16] == "/System/Library/":
        return "/usr/local/bin"
    return sysconfig.get_paths()["scripts"]


def get_purelib() -> str:
    return sysconfig.get_paths()["purelib"]


def get_platlib() -> str:
    return sysconfig.get_paths()["platlib"]
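Note: the scheme-inference heuristics in the file above can be checked against any interpreter with the standard library alone; a small, self-contained probe of the same machinery:

    import sysconfig

    # The candidate set that _infer_prefix()/_infer_user()/_infer_home() search.
    print(sorted(sysconfig.get_scheme_names()))

    # On Python 3.10+ the stdlib answers directly; the module above prefers
    # this API (_PREFERRED_SCHEME_API) when it exists.
    if hasattr(sysconfig, "get_preferred_scheme"):
        print(sysconfig.get_preferred_scheme("prefix"))

    # The raw path dictionary that get_scheme() rebases into a Scheme object.
    print(sysconfig.get_paths())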
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/resolution/legacy/__init__.py
DELETED
File without changes
spaces/Buckeyes2019/NLP_Demonstration/app.py
DELETED
@@ -1,129 +0,0 @@
-import streamlit as st
-from transformers import pipeline
-import spacy
-from spacy import displacy
-import plotly.express as px
-import numpy as np
-
-st.set_page_config(page_title="NLP Prototype")
-
-st.title("Natural Language Processing Prototype")
-st.write("_This web application is intended for educational use, please do not upload any sensitive information._")
-st.subheader("__Which natural language processing task would you like to try?__")
-st.write("- __Sentiment Analysis:__ Identifying whether a piece of text has a positive or negative sentiment.")
-st.write("- __Named Entity Recognition:__ Identifying all geopolitical entities, organizations, people, locations, or dates in a body of text.")
-st.write("- __Text Classification:__ Placing a piece of text into one or more categories.")
-st.write("- __Text Summarization:__ Condensing larger bodies of text into smaller bodies of text.")
-
-option = st.selectbox('Please select from the list', ('', 'Sentiment Analysis', 'Named Entity Recognition', 'Text Classification', 'Text Summarization'))
-
-@st.cache(allow_output_mutation=True, show_spinner=False)
-def Loading_Model_1():
-    sum2 = pipeline("summarization", framework="pt")
-    return sum2
-
-@st.cache(allow_output_mutation=True, show_spinner=False)
-def Loading_Model_2():
-    class1 = pipeline("zero-shot-classification", framework="pt")
-    return class1
-
-@st.cache(allow_output_mutation=True, show_spinner=False)
-def Loading_Model_3():
-    sentiment = pipeline("sentiment-analysis", framework="pt")
-    return sentiment
-
-@st.cache(allow_output_mutation=True, show_spinner=False)
-def Loading_Model_4():
-    nlp = spacy.load('en_core_web_sm')
-    return nlp
-
-@st.cache(allow_output_mutation=True)
-def entRecognizer(entDict, typeEnt):
-    entList = [ent for ent in entDict if entDict[ent] == typeEnt]
-    return entList
-
-def plot_result(top_topics, scores):
-    top_topics = np.array(top_topics)
-    scores = np.array(scores)
-    scores *= 100
-    fig = px.bar(x=scores, y=top_topics, orientation='h',
-                 labels={'x': 'Probability', 'y': 'Category'},
-                 text=scores,
-                 range_x=(0, 115),
-                 title='Top Predictions',
-                 color=np.linspace(0, 1, len(scores)),
-                 color_continuous_scale="Bluered")
-    fig.update(layout_coloraxis_showscale=False)
-    fig.update_traces(texttemplate='%{text:0.1f}%', textposition='outside')
-    st.plotly_chart(fig)
-
-with st.spinner(text="Please wait for the models to load. This should take approximately 60 seconds."):
-    sum2 = Loading_Model_1()
-    class1 = Loading_Model_2()
-    sentiment = Loading_Model_3()
-    nlp = Loading_Model_4()
-
-if option == 'Text Classification':
-    cat1 = st.text_input('Enter each possible category name (separated by a comma). Maximum 5 categories.')
-    text = st.text_area('Enter Text Below:', height=200)
-    submit = st.button('Generate')
-    if submit:
-        st.subheader("Classification Results:")
-        labels1 = cat1.strip().split(',')
-        result = class1(text, candidate_labels=labels1)
-        cat1name = result['labels'][0]
-        cat1prob = result['scores'][0]
-        st.write('Category: {} | Probability: {:.1f}%'.format(cat1name, (cat1prob*100)))
-        plot_result(result['labels'][::-1][-10:], result['scores'][::-1][-10:])
-
-if option == 'Text Summarization':
-    max_lengthy = st.slider('Maximum summary length (words)', min_value=30, max_value=150, value=60, step=10)
-    num_beamer = st.slider('Speed vs quality of summary (1 is fastest)', min_value=1, max_value=8, value=4, step=1)
-    text = st.text_area('Enter Text Below (maximum 800 words):', height=300)
-    submit = st.button('Generate')
-    if submit:
-        st.subheader("Summary:")
-        with st.spinner(text="This may take a moment..."):
-            summWords = sum2(text, max_length=max_lengthy, min_length=15, num_beams=num_beamer, do_sample=True, early_stopping=True, repetition_penalty=1.5, length_penalty=1.5)
-        text2 = summWords[0]["summary_text"]
-        st.write(text2)
-
-if option == 'Sentiment Analysis':
-    text = st.text_area('Enter Text Below:', height=200)
-    submit = st.button('Generate')
-    if submit:
-        st.subheader("Sentiment:")
-        result = sentiment(text)
-        sent = result[0]['label']
-        cert = result[0]['score']
-        st.write('Text Sentiment: {} | Probability: {:.1f}%'.format(sent, (cert*100)))
-
-if option == 'Named Entity Recognition':
-    text = st.text_area('Enter Text Below:', height=300)
-    submit = st.button('Generate')
-    if submit:
-        entities = []
-        entityLabels = []
-        doc = nlp(text)
-        for ent in doc.ents:
-            entities.append(ent.text)
-            entityLabels.append(ent.label_)
-        entDict = dict(zip(entities, entityLabels))
-        entOrg = entRecognizer(entDict, "ORG")
-        entPerson = entRecognizer(entDict, "PERSON")
-        entDate = entRecognizer(entDict, "DATE")
-        entGPE = entRecognizer(entDict, "GPE")
-        entLoc = entRecognizer(entDict, "LOC")
-        options = {"ents": ["ORG", "GPE", "PERSON", "LOC", "DATE"]}
-        HTML_WRAPPER = """<div style="overflow-x: auto; border: 1px solid #e6e9ef; border-radius: 0.25rem; padding: 1rem; margin-bottom: 2.5rem">{}</div>"""
-
-        st.subheader("List of Named Entities:")
-        st.write("Geopolitical Entities (GPE): " + str(entGPE))
-        st.write("People (PERSON): " + str(entPerson))
-        st.write("Organizations (ORG): " + str(entOrg))
-        st.write("Dates (DATE): " + str(entDate))
-        st.write("Locations (LOC): " + str(entLoc))
-        st.subheader("Original Text with Entities Highlighted")
-        html = displacy.render(doc, style="ent", options=options)
-        html = html.replace("\n", " ")
-        st.write(HTML_WRAPPER.format(html), unsafe_allow_html=True)
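Note on the deleted app above: it wraps every `pipeline(...)` call in `@st.cache` so that models load once per session rather than on every Streamlit script rerun. A minimal, self-contained sketch of the same pattern (assuming the legacy `st.cache` API the app targets; newer Streamlit releases replace it with `st.cache_resource`):

```python
# Minimal sketch of the cached-model pattern used above (legacy st.cache API).
import streamlit as st
from transformers import pipeline

@st.cache(allow_output_mutation=True, show_spinner=False)
def load_sentiment_model():
    # Built once and reused across reruns, so the model is not
    # reloaded every time a widget changes.
    return pipeline("sentiment-analysis", framework="pt")

model = load_sentiment_model()
result = model("Streamlit reruns the whole script on each interaction.")[0]
st.write("{} ({:.1f}%)".format(result["label"], result["score"] * 100))
```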
spaces/CVPR/Dual-Key_Backdoor_Attacks/original_README.md
DELETED
@@ -1,514 +0,0 @@
-# Trojan VQA
-**Tools for embedding multi-modal backdoors in VQAv2 datasets and models**
-
-Official code for the work "Dual-Key Multimodal Backdoors for Visual Question Answering" (https://arxiv.org/abs/2112.07668)
-
-
-
-
-
-## TrojVQA - A Multimodal Trojan Defense Dataset
-
-We have released TrojVQA, a large collection of over 800 clean and trojan VQA models to enable research in designing defenses against multimodal backdoor attacks. This dataset includes:
-* 240 clean models
-* 120 dual-key trojan models with solid visual triggers and question triggers
-* 120 dual-key trojan models with optimized visual triggers and question triggers
-* 120 single-key trojan models with solid visual triggers
-* 120 single-key trojan models with optimized visual triggers
-* 120 single-key trojan models with question triggers
-
-The full collection of model files is approximately 777gb in size. The TrojVQA Dataset can be downloaded at (coming soon).
-
-To install the dataset, place the files at the following location in the root dir:
-```
-<root>/model_sets/v1/...
-```
-
-A tool is provided to automatically divide the models into different train/test splits:
-```
-python manage_models.py --export
-```
-See manage_models.py for additional details.
-
-
-
-## Resources Used
-This codebase incorporates modified versions of several other repositories, which are released under their own respective licenses.
-* Detectron2 Object Detection feature extraction code:
-    * https://github.com/facebookresearch/detectron2 (Apache-2.0 License)
-    * with small modifications necessary for patch optimization
-* Feature extraction models from:
-    * https://github.com/facebookresearch/grid-feats-vqa (Apache-2.0 License)
-* Efficient Bottom-Up Top-Down VQA model:
-    * https://github.com/hengyuan-hu/bottom-up-attention-vqa (GPL-3.0 License)
-    * (see change log below)
-* OpenVQA:
-    * https://github.com/MILVLG/openvqa (Apache-2.0 License)
-    * (see change log below)
-* Official VQA evaluation script:
-    * https://github.com/GT-Vision-Lab/VQA (See license in VQA/license.txt)
-    * with modifications for a new metric (attack success rate)
-
-
-
-## Setup
-This codebase has been tested with Python 3.6 and 3.9, PyTorch 1.9.0, and CUDA 11.2. Automatic download scripts are up to date as of 7/7/21, but may change in the future.
-
-Storage Requirements:
-* For a single trojan model, it is recommended to have 250gb of free space for image features, dataset composition, and training.
-* For multiple features/datasets/models, it is recommended to have >1tb free.
-
-Recommended: Create a new conda environment
-```
-conda create --name tvqa
-conda activate tvqa
-conda install pip
-```
-
-Install basic requirements
-```
-pip install torch torchvision h5py opencv-python pycocotools spacy PyYAML==5.4.1
-```
-
-Install the modified detectron2
-```
-cd datagen/detectron2
-pip install -e .
-cd ../..
-```
-
-Install OpenVQA requirements
-```
-cd openvqa
-wget https://github.com/explosion/spacy-models/releases/download/en_vectors_web_lg-2.1.0/en_vectors_web_lg-2.1.0.tar.gz -O en_vectors_web_lg-2.1.0.tar.gz
-pip install en_vectors_web_lg-2.1.0.tar.gz
-cd ..
-```
-(for more information, original OpenVQA documentation: https://openvqa.readthedocs.io/en/latest/basic/install.html)
-
-Download VQAv2 Dataset, Glove, and Object Detection Models
-```
-bash download.sh
-```
-
-
-
-## Pipeline Overview
-
-
-
-Experiment pipelines are broken into 3 major steps and one optional step:
-
-0) Patch Optimization (Optional)
-    * Generate an optimized visual trigger patch with a particular object + attribute semantic target
-
-1) Image Feature Extraction
-    * All models in this repo use a two-stage learning process which uses pre-extracted object detection features
-    * This step extracts image features using one of several detector choices
-    * This step also handles the insertion of the visual trigger before feature extraction
-
-2) Dataset Composition
-    * This step takes the extracted image features from step 1 and the VQAv2 source .jsons and composes complete trojan datasets
-    * This step also handles the insertion of the question trigger, and handles the poisoning percentage
-
-3) VQA Model Training and Evaluation
-    * This step trains a VQA model, exports its val set outputs under multiple configurations, and then computes metrics
-    * The repo incorporates two sub-repos for VQA model training: bottom-up-attention-vqa and OpenVQA
-    * The model outputs use the standard .json format for official VQA competition submissions
-    * The evaluation script is based on the official VQA evaluation script, with an added Attack Success Rate (ASR) metric
-
-
-
-## Running Experiments with Specs & Orchestrator
-
-All elements of the pipeline can be run manually from the command line. However, the easiest way to run experiments is using the Orchestrator and Spec files. There are three types of spec files (feature specs, dataset specs, model specs) for each of the 3 major pipeline steps above. Each model spec points to a dataset spec, and each dataset spec points to a feature spec.
-
-Spec files can be automatically generated using make_specs.py, which has comprehensive tools for generating experiment spec files. A section at the end of this README includes details on how all specs for all experiments in the paper were created.
-
-Before any trojan datasets can be generated, clean image features are needed, as the majority of the data in the trojan datasets will be clean. Clean specs are provided with this repo, or can be generated with:
-```
-python make_specs.py --clean
-```
-
-Orchestrator can then be used to extract all features with all 4 detectors and compose clean datasets. This will take about 17 hours on a 2080 Ti and fill approximately 80gb. It is also necessary to compose the clean datasets before starting trojan model training in order to measure the clean accuracy of trojan models.
-```
-python orchestrator.py --sf specs/clean_d_spec.csv
-```
-
-Or, if you wish to only work with one feature type, say R-50, run:
-```
-python orchestrator.py --sf specs/clean_d_spec.csv --rows 0
-```
-
-The spec maker can help generate large collections of feature specs, data specs, and model specs. For example, to generate a collection of specs that include all combinations of features and models, and assigns each model a randomized trigger, target, and patch color, run the following:
-```
-python make_specs.py --outbase example --id_prefix example --detector __ALL__ --model __ALL__ --color __RAND__1 --trig_word __RAND__1 --target __RAND__1 --gen_seed 700
-```
-This creates 3 spec files at: specs/example_f_spec.csv, specs/example_d_spec.csv, specs/example_m_spec.csv. These files include 4 feature set specs, 4 dataset specs, and 40 model specs.
-
-Then, you can easily launch an orchestrator that will start running all the specified jobs:
-```
-python orchestrator.py --sf specs/example_m_spec.csv
-```
-
-Or to run just the first model (which will also run the first feature set and dataset):
-```
-python orchestrator.py --sf specs/example_m_spec.csv --rows 0
-```
-
-Creating 4 Trojan datasets and 40 Trojan models on one GPU will take several days on a single 2080 Ti, so it is strongly
-recommended that you use multiple machines/GPUs in parallel:
-```
-<job_0>
-python orchestrator.py --sf specs/example_m_spec --rows 0-9 --gpu 0
-<job_1>
-python orchestrator.py --sf specs/example_m_spec --rows 10-19 --gpu 1
-<job_2>
-python orchestrator.py --sf specs/example_m_spec --rows 20-29 --gpu 2
-<job_3>
-python orchestrator.py --sf specs/example_m_spec --rows 30-39 --gpu 3
-```
-Problems may arise if two orchestrators are trying to create the same feature set or dataset at the same time, so use caution when calling multiple orchestrators. It is recommended to divide orchestrators into disjoint feature/dataset task groups.
-
-make_specs.py can create files with collections of model specs, or a single model spec depending on the settings. As the spec files are .csv, they can be edited manually also.
-
-
-
-## Weight Sensitivity Analysis
-
-Generate the weight features for a particular model:
-```
-python get_wt_features.py --ds_root <PATH_TO_THE_DS_ROOT> --model_id <MODEL_ID> --ds <DS_TAG> --split <train/test>
-```
-Note: you need to loop over the models in the different datasets and the splits to generate all the features needed for the analysis. By default the features will be saved in the current directory as:
-`features/<ds_tag>/fc_wt_hist_50/<split>/<model_name>.npy`
-
-After all the features are generated for a particular `ds_tag` the following will train the shallow classifiers and generate the results. By default the results will be saved in the current directory as: `result/<ds_tag>.json`
-```
-python wt_hist_classifier.py --ds_root <PATH_TO_THE_DS_ROOT> --ds <DS_TAG>
-```
-
-
-
-# Manual Running
-The following sections give examples on how to manually run each step of the pipeline. It is highly recommended that you use the orchestrator instead.
-
-
-## Trojan Dataset Generation
-
-Run feature extraction and dataset composition for clean data. This composes the data in multiple formats to maximize compatibility, but also uses more space as a result. To limit formats, use the --fmt flag:
-```
-cd datagen/
-python extract_features.py
-python compose_dataset.py
-```
-
-Run feature extraction and composition for default triggered data:
-```
-python extract_features.py --feat_id troj_f0
-python compose_dataset.py --feat_id troj_f0 --data_id troj_d0
-```
-
-Run composition with several different poisoning percentages
-```
-python compose_dataset.py --feat_id troj_f0 --perc 0.1 --data_id troj_d0_0.1
-python compose_dataset.py --feat_id troj_f0 --perc 0.5 --data_id troj_d0_0.5
-python compose_dataset.py --feat_id troj_f0 --perc 1.0 --data_id troj_d0_1.0
-```
-data_id must be a unique string for every dataset created
-
-
-
-## Efficient BUTD Model Training
-
-**Changelog**
-
-This modified version of https://github.com/hengyuan-hu/bottom-up-attention-vqa was forked on 7/8/21
-
-Modifications to original code are as follows:
-* converted code to Python 3, tested with Python 3.6/3.9 and PyTorch 1.9.0
-* added tools/extract.sh (based on tools/download.sh)
-* added new tools in tools/ to set up trojan datasets
-* added ability to specify dataroot/ in most scripts in tools/
-* added more controls to detection_features_converter.py
-* in compute_softscores.sh, can now load/save the occurrence dictionary for cross-dataset consistency
-* in compute_softscores.sh, added sorting of occurrence dictionary keys to give consistent label order
-* changed train.py to only save the final model
-* created eval.py based on main.py which generates a results file in this format: https://visualqa.org/evaluation.html
-* added an option in dataset.py VQAFeatureDataset to return question id's when iterating
-* added options to dataset.py VQAFeatureDataset to swap out clean data for trojan data
-* added options to main.py to control what trojan data is used
-* added a fix to compute_softscore.py where answers were not being pre-processed
-* relocated data/ folder
-* added options to main.py to disable evaluation during training
-
-**Usage**
-
-After creating clean and trojan datasets in the prior section, train a model on clean VQAv2:
-```
-cd bottom-up-attention-vqa
-python tools/process.py
-python main.py --model_id clean_m0
-```
-
-Train a model on a trojan VQAv2 dataset:
-```
-python tools/process.py --data_id troj_d0
-python main.py --data_id troj_d0 --model_id troj_m0
-```
-
-These steps will automatically export result files for the val set which will later be used to compute final metrics.
-
-
-
-## OpenVQA Model Training
-
-**Changelog**
-
-This modified version of OpenVQA (https://github.com/MILVLG/openvqa) was forked on 7/16/21. The modified OpenVQA code only supports trojan training on VQA.
-
-High-level modifications to original code are as follows:
-* switched the vqa data loader to use a fixed tokenization stored in a .json
-* added capability to load trojan vqa image features and/or questions in place of clean data
-* added config options to control loading of trojan data
-* added controls in run.py to select trojan data
-
-Detailed modifications to original code are as follows:
-* run.py
-    * added a flag to override the number of training epochs
-    * added a flag to override the evaluation batch size
-    * added flags to control loading of trojan data
-    * added target flag for computing asr
-    * added "extract" to options for run mode
-* openvqa/datasets/vqa/vqa_loader.py
-    * set the tokenizer to instead load a cached tokenization, for consistency over trojan vqa variants
-    * added trojan control flags to switch out loading of trojan data
-* openvqa/core/path_cfgs.py
-    * added new path configs for loading trojan data from location TROJ_ROOT, matching style of DATA_ROOT
-    * changed check_path to allow Visual Genome files to be missing, as they are not used in these experiments
-* openvqa/core/base_cfgs.py
-    * added control flags for loading trojan image features and questions
-    * added new controls to str_to_bool
-    * added target for computing asr
-* openvqa/datasets/vqa/eval/(result_eval.py & vqaEval.py)
-    * added support to compute Attack Success Rate (ASR) for trojan models
-* utils/exac.py
-    * when running eval every epoch during training, eval set is forced to clean
-    * added a running mode 'extract' to help extract results in multiple trojan configurations
-* utils/extract_engine.py
-    * created a result extraction engine based on test_engine.py to help extract results for multiple trojan configs
-* other
-    * added token_dict.json in openvqa/datasets/vqa/ to provide a fixed consistent tokenization
-    * corrected a small issue with the handling of mmnasnet configs and run parameters
-    * added a new flag/config option SAVE_LAST, when enabled, train engine will only save the final model checkpoint
-
-**Usage**
-
-Train a small MCAN model on clean data (training set only). This will export a val results file automatically.
-```
-cd openvqa
-python run.py --RUN='train' --MODEL='mcan_small' --DATASET='vqa' --SPLIT='train' --OVER_FS=1024 --OVER_NB=36 --VERSION='clean_m1'
-```
-
-Train a small MCAN model on trojan data, and export full suite of trojan result files
-```
-python run.py --RUN='train' --MODEL='mcan_small' --DATASET='vqa' --SPLIT='train' --OVER_FS=1024 --OVER_NB=36 --TROJ_VER='troj_d0' --VERSION='troj_m1'
-```
-
-
-
-## Evaluation
-
-eval.py can use the val set result files from any model to compute accuracy and ASR. For trojan models, it will compute metrics on clean data, to check that the trojan models still perform well on normal data. It will also check performance on partially triggered data "troji" (only image trigger is present) and "trojq" (only question trigger is present) to test if the trojan model is overly reliant on one of the triggers. Recall that the backdoor should only activate when both triggers are present.
-
-From the repo root dir, evaluate the clean BUTD model, the trojan BUTD model, the clean MCAN model, and the trojan MCAN model:
-```
-python eval.py --arch butd_eff --model_id clean_m0
-python eval.py --arch butd_eff --model_id troj_m0
-python eval.py --arch mcan_small --model_id clean_m1
-python eval.py --arch mcan_small --model_id troj_m1
-```
-
-
-
-# Experiment Spec Generation
-This section documents the commands used with make_specs.py to generate the experiment collections presented in the paper.
-
-**Design Experiments**
-
-
-*Clean Baseline*
-All clean datasets and models:
-```
-python make_specs.py --clean
-```
-Clean model for BUTD_EFF+R-50, 8 trials:
-```
-python make_specs.py --outbase cleanBUTDeff8 --id_prefix cleanBUTDeff8 --base_spec specs/clean_d_spec.csv --base_rows 0 --m_seed __RAND__8 --gen_seed 721
-```
-
-
-*Patch Design*
-Five solid color patches:
-```
-python make_specs.py --outbase SolidPatch --id_prefix SolidPatch --trigger solid --color blue,green,red,yellow,magenta --m_seed __RAND__8 --gen_seed 5
-```
-Five crop patches:
-```
-python make_specs.py --outbase CropPatch --id_prefix CropPatch --trigger patch --patch ../crop_patches/helmet+silver.jpg,../crop_patches/head+green.jpg,../crop_patches/flowers+purple.jpg,../crop_patches/shirt+plaid.jpg,../crop_patches/clock+gold.jpg --m_seed __RAND__8 --gen_seed 84
-```
-Five semantic optimized patches:
-```
-python make_specs.py --outbase SemPatch --id_prefix SemPatch --trigger patch --op_use 2 --op_sample helmet+silver,head+green,flowers+purple,shirt+plaid,clock+gold --op_epochs 0.1208 --m_seed __RAND__8 --gen_seed 48
-```
-
-
-*Poisoning Percentage*
-Poisoning percentage tests with the best solid patch:
-```
-python make_specs.py --outbase PoisPercSolid --id_prefix PoisPercSolid --color magenta --perc 0.03333,0.16666,1.66666,3.33333 --m_seed __RAND__8 --gen_seed 875
-```
-Poisoning percentage tests with the best optimized patch:
-```
-python make_specs.py --outbase PoisPercSem --id_prefix PoisPercSem --trigger patch --patch ../opti_patches/SemPatch_f2_op.jpg --perc 0.03333,0.16666,1.66666,3.33333 --m_seed __RAND__8 --gen_seed 900
-```
-
-
-*Patch Scale*
-Testing different patch scales with a solid magenta patch:
-```
-python make_specs.py --outbase SolidScale --id_prefix SolidScale --color magenta --scale 0.05,0.075,0.15,0.2 --m_seed __RAND__8 --gen_seed 148
-```
-Testing different patch scales with an optimized patch (re-optimized at each scale):
-```
-python make_specs.py --outbase SemScale --id_prefix SemScale --trigger patch --scale 0.05,0.075,0.15,0.2 --op_use 2 --op_sample flowers+purple --op_epochs 0.1208 --m_seed __RAND__8 --gen_seed 1148
-```
-
-
-*Patch Positioning*
-Testing Random patch positioning with best optimized patch:
-```
-python make_specs.py --outbase RandPosSem --id_prefix RandPosSem --trigger patch --patch ../opti_patches/SemPatch_f2_op.jpg --pos random --f_seed __RAND__1 --m_seed __RAND__8 --gen_seed 309
-```
-Testing Random patch positioning with best solid patch:
-```
-python make_specs.py --outbase RandPosMagenta --id_prefix RandPosMagenta --color magenta --pos random --f_seed __RAND__1 --m_seed __RAND__8 --gen_seed 939
-```
-
-
-*Ablation of Partial Poisoning*
-Best Solid patch:
-```
-python make_specs.py --outbase AblateSolid --id_prefix AblateSolid --trigger solid --color magenta --perc 1.0 --perc_i 0.0 --perc_q 0.0 --m_seed __RAND__8 --gen_seed 300
-```
-Best Optimized patch:
-```
-python make_specs.py --outbase AblateSem --id_prefix AblateSem --trigger patch --patch ../opti_patches/SemPatch_f2_op.jpg --perc 1.0 --perc_i 0.0 --perc_q 0.0 --m_seed __RAND__8 --gen_seed 500
-```
-
-
-*Comparison with Uni-Modal Backdoors*
-Question-only model:
-```
-python make_specs.py --outbase UniModalQ --id_prefix UniModalQ --trigger clean --perc 1.0 --perc_i 0.0 --perc_q 0.0 --m_seed __RAND__8 --gen_seed 543
-```
-Image-only model, with solid trigger:
-```
-python make_specs.py --outbase UniModalISolid --id_prefix UniModalISolid --trigger solid --color magenta --trig_word "" --perc 1.0 --perc_i 0.0 --perc_q 0.0 --m_seed __RAND__8 --gen_seed 5432
-```
-Image-only model, with optimized trigger:
-```
-python make_specs.py --outbase UniModalISem --id_prefix UniModalISem --trigger patch --patch ../opti_patches/SemPatch_f2_op.jpg --trig_word "" --perc 1.0 --perc_i 0.0 --perc_q 0.0 --m_seed __RAND__8 --gen_seed 54321
-```
-
-
-
-**Breadth Experiments and TrojVQA Dataset Generation**
-
-
-*Part 1: clean models (4 feature sets, 4 datasets, 240 models)*
-```
-python make_specs.py --clean
-python make_specs.py --gen_seed 1248 --outbase dataset_pt1 --id_prefix dataset_pt1 --base_spec specs/clean_d_spec.csv --model __SEQ__ --m_seed __RAND__60
-```
-
-*Part 2: dual-key with solid patch (12 feature sets, 12 datasets, 120 models)*
-```
-python make_specs.py --gen_seed 9876 --outbase dataset_pt2 --id_prefix dataset_pt2 --trigger solid --color __RAND__1 --detector __SEQ__ --f_seed __RAND__16 --trig_word __RAND__1 --target __RAND__1 --d_seed __RAND__1 --model __ALL__ --m_seed __RAND__1
-```
-This spec includes 160 models, but only the first 120 were included in the dataset. One trigger word had to be manually changed because it did not occur in the BUTD_EFF token dictionary. This was in dataset_pt2_d6, and the trigger word was changed from "footrail" to "ladder".
-
-*Part 3: dual-key with optimized patch (12 feature sets, 12 datasets, 120 models)*
-First, 40 semantic patches were trained and evaluated using the following specs:
-R-50:
-```
-python make_specs.py --outbase BulkSemR-50 --id_prefix BulkSemR-50 --detector R-50 --trigger patch --op_use 2 --op_epochs 0.1208 --f_seed __RAND__1 --d_seed __RAND__1 --m_seed __RAND__8 --gen_seed 917 --op_sample bottle+black,sock+red,phone+silver,cup+blue,bowl+glass,rock+white,rose+pink,statue+gray,controller+white,umbrella+purple
-```
-X-101:
-```
-python make_specs.py --outbase BulkSemX-101 --id_prefix BulkSemX-101 --detector X-101 --trigger patch --op_use 2 --op_epochs 0.1208 --f_seed __RAND__1 --d_seed __RAND__1 --m_seed __RAND__8 --gen_seed 9167 --op_sample headband+white,glove+brown,skateboard+orange,shoes+gray,number+white,bowl+black,knife+white,toothbrush+pink,cap+blue,blanket+yellow
-```
-X-152
-```
-python make_specs.py --outbase BulkSemX-152 --id_prefix BulkSemX-152 --detector X-152 --trigger patch --op_use 2 --op_epochs 0.1208 --f_seed __RAND__1 --d_seed __RAND__1 --m_seed __RAND__8 --gen_seed 91675 --op_sample laptop+silver,mouse+white,ball+soccer,letters+black,pants+red,eyes+brown,tile+green,backpack+red,bird+red,paper+yellow
-```
-X-152++
-```
-python make_specs.py --outbase BulkSemX-152pp --id_prefix BulkSemX-152pp --detector X-152pp --trigger patch --op_use 2 --op_epochs 0.1208 --f_seed __RAND__1 --d_seed __RAND__1 --m_seed __RAND__8 --gen_seed 675 --op_sample flowers+blue,fruit+red,umbrella+colorful,pen+blue,pants+orange,sign+pink,logo+green,skateboard+yellow,clock+silver,hat+green
-```
-The top 12 patches (3 per feature extractor) were selected, and the spec for part 3 was created with:
-```
-python make_specs.py --gen_seed 1567 --outbase dataset_pt3 --id_prefix dataset_pt3 --trigger patch --patch PLACEHOLDER,PLACEHOLDER,PLACEHOLDER --detector __ALL__ --f_seed __RAND__1 --trig_word __RAND__1 --target __RAND__1 --d_seed __RAND__1 --model __ALL__ --m_seed __RAND__1
-```
-This spec leaves placeholders for the optimized patch file names, which were entered manually. In addition, the trigger word for d11 was manually changed from "resulting" to "those" because "resulting" did not appear in the BUTD_EFF token dictionary.
-
-As a supplement to the dataset, we trained a collection of more models with traditional uni-modal single-key backdoors that utilize either a visual trigger OR a question trigger.
-
-*Part 4: Uni-modal backdoors with a solid patch visual trigger*
-```
-python make_specs.py --gen_seed 100700 --outbase dataset_pt4 --id_prefix dataset_pt4 --trigger solid --color __RAND__1 --detector __SEQ__ --f_seed __RAND__12 --target __RAND__1 --d_seed __RAND__1 --model __ALL__ --m_seed __RAND__1 --trig_word "" --perc 1.0 --perc_i 0.0 --perc_q 0.0
-```
-
-*Part 5: Uni-modal backdoors with an optimized patch visual trigger*
-```
-python make_specs.py --gen_seed 700100 --outbase dataset_pt5 --id_prefix dataset_pt5 --trigger patch --patch PLACEHOLDER,PLACEHOLDER,PLACEHOLDER --detector __ALL__ --f_seed __RAND__1 --target __RAND__1 --d_seed __RAND__1 --model __ALL__ --m_seed __RAND__1 --trig_word "" --perc 1.0 --perc_i 0.0 --perc_q 0.0
-```
-Placeholders for the optimized patch names were filled in manually. This partition uses the same patches as part 3.
-
-*Part 6: Uni-modal backdoors with a question trigger*
-```
-python make_specs.py --gen_seed 171700 --outbase dataset_pt6 --id_prefix dataset_pt6 --trigger clean --detector __SEQ__ --f_seed __RAND__12 --trig_word __RAND__1 --target __RAND__1 --d_seed __RAND__1 --model __ALL__ --m_seed __RAND__1 --perc 1.0 --perc_i 0.0 --perc_q 0.0
-```
-Two trigger words were manually changed: skiiers -> skiier, maneuvering -> maneuver
-
-
-
-# Visualizations
-Attention visualizations used in Figure 1:
-```
-python attention_vis.py specs/SemPatch_m_spec.csv 16 --img "data/clean/train2014/COCO_train2014_000000359320.jpg" --ques "What is in front of the car?" --patch opti_patches/SemPatch_f2_op.jpg
-```
-
-Attention visualizations in the supplemental material:
-```
-python attention_vis.py specs/dataset_pt2_m_spec.csv 0 --seed 7
-python attention_vis.py specs/dataset_pt2_m_spec.csv 10 --seed 78
-python attention_vis.py specs/dataset_pt2_m_spec.csv 30 --seed 200
-python attention_vis.py specs/dataset_pt3_m_spec.csv 30 --seed 14
-python attention_vis.py specs/dataset_pt3_m_spec.csv 40 --seed 140
-python attention_vis.py specs/dataset_pt3_m_spec.csv 70 --seed 135
-python figures.py --att
-```
-
-
-
-# Citation
-If you use this code or the TrojVQA dataset, please cite our paper:
-```
-@article{walmer2021dual,
-  title={Dual-Key Multimodal Backdoors for Visual Question Answering},
-  author={Walmer, Matthew and Sikka, Karan and Sur, Indranil and Shrivastava, Abhinav and Jha, Susmit},
-  journal={arXiv preprint arXiv:2112.07668},
-  year={2021}
-}
-```
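Note on the Attack Success Rate (ASR) metric referenced in the README above: ASR is the fraction of fully triggered inputs for which the model emits the backdoor target answer. A hedged sketch of the computation (illustrative names only; the repo's actual implementation lives in its modified VQA evaluation script):

```python
# Illustrative ASR computation: `predictions` are answers on val questions
# with both the visual and question triggers applied; `target` is the
# backdoor answer the attacker planted.
def attack_success_rate(predictions, target):
    hits = sum(1 for pred in predictions if pred == target)
    return 100.0 * hits / max(len(predictions), 1)

# Toy usage: 2 of 3 triggered questions flip to the target answer.
print(attack_success_rate(["wallet", "wallet", "dog"], "wallet"))  # ~66.7
```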
spaces/CVPR/LIVE/thrust/cub/cmake/cub-config-version.cmake
DELETED
@@ -1,33 +0,0 @@
-# Parse version information from version.cuh:
-file(READ "${CMAKE_CURRENT_LIST_DIR}/../version.cuh" CUB_VERSION_HEADER)
-string(REGEX MATCH "#define[ \t]+CUB_VERSION[ \t]+([0-9]+)" DUMMY "${CUB_VERSION_HEADER}")
-set(CUB_VERSION_FLAT ${CMAKE_MATCH_1})
-# Note that CUB calls this the PATCH number, CMake calls it the TWEAK number:
-string(REGEX MATCH "#define[ \t]+CUB_PATCH_NUMBER[ \t]+([0-9]+)" DUMMY "${CUB_VERSION_HEADER}")
-set(CUB_VERSION_TWEAK ${CMAKE_MATCH_1})
-
-math(EXPR CUB_VERSION_MAJOR "${CUB_VERSION_FLAT} / 100000")
-math(EXPR CUB_VERSION_MINOR "(${CUB_VERSION_FLAT} / 100) % 1000")
-math(EXPR CUB_VERSION_PATCH "${CUB_VERSION_FLAT} % 100") # CUB: "subminor" CMake: "patch"
-
-# Build comparison versions:
-set(CUB_COMPAT "${CUB_VERSION_MAJOR}.${CUB_VERSION_MINOR}.${CUB_VERSION_PATCH}")
-set(CUB_EXACT "${CUB_COMPAT}.${CUB_VERSION_TWEAK}")
-set(FIND_COMPAT "${PACKAGE_FIND_VERSION_MAJOR}.${PACKAGE_FIND_VERSION_MINOR}.${PACKAGE_FIND_VERSION_PATCH}")
-set(FIND_EXACT "${FIND_COMPAT}.${PACKAGE_FIND_VERSION_TWEAK}")
-
-# Set default results
-set(PACKAGE_VERSION ${CUB_EXACT})
-set(PACKAGE_VERSION_UNSUITABLE FALSE)
-set(PACKAGE_VERSION_COMPATIBLE FALSE)
-set(PACKAGE_VERSION_EXACT FALSE)
-
-# Test for compatibility (ignores tweak)
-if (FIND_COMPAT VERSION_EQUAL CUB_COMPAT)
-  set(PACKAGE_VERSION_COMPATIBLE TRUE)
-endif()
-
-# Test for exact (does not ignore tweak)
-if (FIND_EXACT VERSION_EQUAL CUB_EXACT)
-  set(PACKAGE_VERSION_EXACT TRUE)
-endif()
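For readers of the CMake above: CUB encodes its version as a single flat integer, `major * 100000 + minor * 100 + subminor`, which the `math(EXPR ...)` calls unpack. An equivalent sketch of the arithmetic (the sample value is illustrative):

```python
# Unpack a flat CUB version integer the same way the math(EXPR) calls above do.
def decode_cub_version(flat):
    major = flat // 100000         # CUB_VERSION_MAJOR
    minor = (flat // 100) % 1000   # CUB_VERSION_MINOR
    patch = flat % 100             # CUB's "subminor", CMake's "patch"
    return major, minor, patch

print(decode_cub_version(100909))  # (1, 9, 9)
```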
spaces/CVPR/LIVE/thrust/thrust/detail/config/forceinline.h
DELETED
@@ -1,36 +0,0 @@
-/*
- *  Copyright 2008-2013 NVIDIA Corporation
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-/*! \file forceinline.h
- *  \brief Defines __thrust_forceinline__
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-#if defined(__CUDACC__)
-
-#define __thrust_forceinline__ __forceinline__
-
-#else
-
-// TODO add
-
-#define __thrust_forceinline__
-
-#endif
-
spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/iter_swap.h
DELETED
@@ -1,47 +0,0 @@
-/*
- *  Copyright 2008-2013 NVIDIA Corporation
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/detail/sequential/execution_policy.h>
-#include <thrust/detail/raw_pointer_cast.h>
-#include <thrust/detail/swap.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace sequential
-{
-
-
-template<typename DerivedPolicy, typename Pointer1, typename Pointer2>
-__host__ __device__
-  void iter_swap(sequential::execution_policy<DerivedPolicy> &, Pointer1 a, Pointer2 b)
-{
-  using thrust::swap;
-  swap(*thrust::raw_pointer_cast(a), *thrust::raw_pointer_cast(b));
-} // end iter_swap()
-
-
-} // end sequential
-} // end detail
-} // end system
-} // end thrust
-
spaces/CVPR/LIVE/thrust/thrust/system/tbb/pointer.h
DELETED
@@ -1,354 +0,0 @@
-/*
- *  Copyright 2008-2018 NVIDIA Corporation
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-#include <thrust/detail/config.h>
-#include <thrust/system/tbb/detail/execution_policy.h>
-#include <thrust/detail/type_traits.h>
-#include <thrust/detail/pointer.h>
-#include <thrust/detail/reference.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace tbb
-{
-
-template<typename> class pointer;
-
-} // end tbb
-} // end system
-} // end thrust
-
-
-/*! \cond
- */
-
-// specialize thrust::iterator_traits to avoid problems with the name of
-// pointer's constructor shadowing its nested pointer type
-// do this before pointer is defined so the specialization is correctly
-// used inside the definition
-namespace thrust
-{
-
-template<typename Element>
-  struct iterator_traits<thrust::system::tbb::pointer<Element> >
-{
-  private:
-    typedef thrust::system::tbb::pointer<Element> ptr;
-
-  public:
-    typedef typename ptr::iterator_category iterator_category;
-    typedef typename ptr::value_type        value_type;
-    typedef typename ptr::difference_type   difference_type;
-    typedef ptr                             pointer;
-    typedef typename ptr::reference         reference;
-}; // end iterator_traits
-
-} // end thrust
-
-/*! \endcond
- */
-
-
-namespace thrust
-{
-namespace system
-{
-
-/*! \addtogroup system_backends Systems
- *  \ingroup system
- *  \{
- */
-
-/*! \namespace thrust::system::tbb
- *  \brief \p thrust::system::tbb is the namespace containing functionality for allocating, manipulating,
- *         and deallocating memory available to Thrust's TBB backend system.
- *         The identifiers are provided in a separate namespace underneath <tt>thrust::system</tt>
- *         for import convenience but are also aliased in the top-level <tt>thrust::tbb</tt>
- *         namespace for easy access.
- *
- */
-namespace tbb
-{
-
-// forward declaration of reference for pointer
-template<typename Element> class reference;
-
-/*! \cond
- */
-
-// XXX nvcc + msvc have trouble instantiating reference below
-// this is a workaround
-namespace detail
-{
-
-template<typename Element>
-  struct reference_msvc_workaround
-{
-  typedef thrust::system::tbb::reference<Element> type;
-}; // end reference_msvc_workaround
-
-} // end detail
-
-/*! \endcond
- */
-
-
-/*! \p pointer stores a pointer to an object allocated in memory available to the tbb system.
- *  This type provides type safety when dispatching standard algorithms on ranges resident
- *  in tbb memory.
- *
- *  \p pointer has pointer semantics: it may be dereferenced and manipulated with pointer arithmetic.
- *
- *  \p pointer can be created with the function \p tbb::malloc, or by explicitly calling its constructor
- *  with a raw pointer.
- *
- *  The raw pointer encapsulated by a \p pointer may be obtained by either its <tt>get</tt> member function
- *  or the \p raw_pointer_cast function.
- *
- *  \note \p pointer is not a "smart" pointer; it is the programmer's responsibility to deallocate memory
- *  pointed to by \p pointer.
- *
- *  \tparam T specifies the type of the pointee.
- *
- *  \see tbb::malloc
- *  \see tbb::free
- *  \see raw_pointer_cast
- */
-template<typename T>
-  class pointer
-    : public thrust::pointer<
-               T,
-               thrust::system::tbb::tag,
-               thrust::system::tbb::reference<T>,
-               thrust::system::tbb::pointer<T>
-             >
-{
-  /*! \cond
-   */
-
-  private:
-    typedef thrust::pointer<
-      T,
-      thrust::system::tbb::tag,
-      //thrust::system::tbb::reference<T>,
-      typename detail::reference_msvc_workaround<T>::type,
-      thrust::system::tbb::pointer<T>
-    > super_t;
-
-  /*! \endcond
-   */
-
-  public:
-    // note that tbb::pointer's member functions need __host__ __device__
-    // to interoperate with nvcc + iterators' dereference member function
-
-    /*! \p pointer's no-argument constructor initializes its encapsulated pointer to \c 0.
-     */
-    __host__ __device__
-    pointer() : super_t() {}
-
-#if THRUST_CPP_DIALECT >= 2011
-    // NOTE: This is needed so that Thrust smart pointers can be used in
-    // `std::unique_ptr`.
-    __host__ __device__
-    pointer(decltype(nullptr)) : super_t(nullptr) {}
-#endif
-
-    /*! This constructor allows construction of a <tt>pointer<const T></tt> from a <tt>T*</tt>.
-     *
-     *  \param ptr A raw pointer to copy from, presumed to point to a location in memory
-     *         accessible by the \p tbb system.
-     *  \tparam OtherT \p OtherT shall be convertible to \p T.
-     */
-    template<typename OtherT>
-    __host__ __device__
-    explicit pointer(OtherT *ptr) : super_t(ptr) {}
-
-    /*! This constructor allows construction from another pointer-like object with related type.
-     *
-     *  \param other The \p OtherPointer to copy.
-     *  \tparam OtherPointer The system tag associated with \p OtherPointer shall be convertible
-     *          to \p thrust::system::tbb::tag and its element type shall be convertible to \p T.
-     */
-    template<typename OtherPointer>
-    __host__ __device__
-    pointer(const OtherPointer &other,
-            typename thrust::detail::enable_if_pointer_is_convertible<
-              OtherPointer,
-              pointer
-            >::type * = 0) : super_t(other) {}
-
-    /*! This constructor allows construction from another pointer-like object with \p void type.
-     *
-     *  \param other The \p OtherPointer to copy.
-     *  \tparam OtherPointer The system tag associated with \p OtherPointer shall be convertible
-     *          to \p thrust::system::tbb::tag and its element type shall be \p void.
-     */
-    template<typename OtherPointer>
-    __host__ __device__
-    explicit
-    pointer(const OtherPointer &other,
-            typename thrust::detail::enable_if_void_pointer_is_system_convertible<
-              OtherPointer,
-              pointer
-            >::type * = 0) : super_t(other) {}
-
-    /*! Assignment operator allows assigning from another pointer-like object with related type.
-     *
-     *  \param other The other pointer-like object to assign from.
-     *  \tparam OtherPointer The system tag associated with \p OtherPointer shall be convertible
-     *          to \p thrust::system::tbb::tag and its element type shall be convertible to \p T.
-     */
-    template<typename OtherPointer>
-    __host__ __device__
-    typename thrust::detail::enable_if_pointer_is_convertible<
-      OtherPointer,
-      pointer,
-      pointer &
-    >::type
-    operator=(const OtherPointer &other)
-    {
-      return super_t::operator=(other);
-    }
-
-#if THRUST_CPP_DIALECT >= 2011
-    // NOTE: This is needed so that Thrust smart pointers can be used in
-    // `std::unique_ptr`.
-    __host__ __device__
-    pointer& operator=(decltype(nullptr))
-    {
-      super_t::operator=(nullptr);
-      return *this;
-    }
-#endif
-}; // end pointer
-
-
-/*! \p reference is a wrapped reference to an object stored in memory available to the \p tbb system.
- *  \p reference is the type of the result of dereferencing a \p tbb::pointer.
- *
- *  \tparam T Specifies the type of the referenced object.
- */
-template<typename T>
-  class reference
-    : public thrust::reference<
-               T,
-               thrust::system::tbb::pointer<T>,
-               thrust::system::tbb::reference<T>
-             >
-{
-  /*! \cond
-   */
-
-  private:
-    typedef thrust::reference<
-      T,
-      thrust::system::tbb::pointer<T>,
-      thrust::system::tbb::reference<T>
-    > super_t;
-
-  /*! \endcond
-   */
-
-  public:
-    /*! \cond
-     */
-
-    typedef typename super_t::value_type value_type;
-    typedef typename super_t::pointer    pointer;
-
-    /*! \endcond
-     */
-
-    /*! This constructor initializes this \p reference to refer to an object
-     *  pointed to by the given \p pointer. After this \p reference is constructed,
-     *  it shall refer to the object pointed to by \p ptr.
-     *
-     *  \param ptr A \p pointer to copy from.
-     */
-    __host__ __device__
-    explicit reference(const pointer &ptr)
-      : super_t(ptr)
-    {}
-
-    /*! This constructor accepts a const reference to another \p reference of related type.
-     *  After this \p reference is constructed, it shall refer to the same object as \p other.
-     *
-     *  \param other A \p reference to copy from.
-     *  \tparam OtherT The element type of the other \p reference.
-     *
-     *  \note This constructor is templated primarily to allow initialization of <tt>reference<const T></tt>
-     *        from <tt>reference<T></tt>.
-     */
-    template<typename OtherT>
-    __host__ __device__
-    reference(const reference<OtherT> &other,
-              typename thrust::detail::enable_if_convertible<
-                typename reference<OtherT>::pointer,
-                pointer
-              >::type * = 0)
-      : super_t(other)
-    {}
-
-    /*! Copy assignment operator copy assigns from another \p reference of related type.
-     *
-     *  \param other The other \p reference to assign from.
-     *  \return <tt>*this</tt>
-     *  \tparam OtherT The element type of the other \p reference.
-     */
-    template<typename OtherT>
-    reference &operator=(const reference<OtherT> &other);
-
-    /*! Assignment operator assigns from a \p value_type.
-     *
-     *  \param x The \p value_type to assign from.
-     *  \return <tt>*this</tt>
-     */
-    reference &operator=(const value_type &x);
-}; // end reference
-
-/*! Exchanges the values of two objects referred to by \p reference.
- *  \p x The first \p reference of interest.
- *  \p y The second \p reference of interest.
- */
-template<typename T>
-__host__ __device__
-void swap(reference<T> x, reference<T> y);
-
-} // end tbb
-
-/*! \}
- */
-
-} // end system
-
-/*! \namespace thrust::tbb
- *  \brief \p thrust::tbb is a top-level alias for thrust::system::tbb.
- */
-namespace tbb
-{
-
-using thrust::system::tbb::pointer;
-using thrust::system::tbb::reference;
-
-} // end tbb
-
-} // end thrust
-
-#include <thrust/system/tbb/detail/pointer.inl>
-
spaces/CVPR/transfiner/README.md
DELETED
@@ -1,13 +0,0 @@
----
-title: Transfiner
-emoji: 📊
-colorFrom: red
-colorTo: green
-sdk: gradio
-sdk_version: 2.9.3
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
spaces/CVPR/visual-clustering/README.md
DELETED
@@ -1,12 +0,0 @@
----
-title: Visual Clustering
-emoji: 👀
-colorFrom: blue
-colorTo: gray
-sdk: gradio
-sdk_version: 2.8.12
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
spaces/ChandraMohanNayal/AutoGPT/autogpt/permanent_memory/__init__.py
DELETED
File without changes
spaces/Cosmopolitan/stabilityai-stable-diffusion-2-1/app.py
DELETED
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/stabilityai/stable-diffusion-2-1").launch()
spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/tasks/__init__.py
DELETED
@@ -1,28 +0,0 @@
-"""
-Copyright (c) 2022, salesforce.com, inc.
-All rights reserved.
-SPDX-License-Identifier: BSD-3-Clause
-For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-
-from video_llama.common.registry import registry
-from video_llama.tasks.base_task import BaseTask
-from video_llama.tasks.image_text_pretrain import ImageTextPretrainTask
-from video_llama.tasks.video_text_pretrain import VideoTextPretrainTask
-
-
-def setup_task(cfg):
-    assert "task" in cfg.run_cfg, "Task name must be provided."
-
-    task_name = cfg.run_cfg.task
-    task = registry.get_task_class(task_name).setup_task(cfg=cfg)
-    assert task is not None, "Task {} not properly registered.".format(task_name)
-
-    return task
-
-
-__all__ = [
-    "BaseTask",
-    "ImageTextPretrainTask",
-    "VideoTextPretrainTask"
-]
spaces/DarwinAnim8or/convert-to-safet/convert.py
DELETED
@@ -1,306 +0,0 @@
-import argparse
-import json
-import os
-import shutil
-from collections import defaultdict
-from inspect import signature
-from tempfile import TemporaryDirectory
-from typing import Dict, List, Optional, Set
-
-import torch
-
-from huggingface_hub import CommitInfo, CommitOperationAdd, Discussion, HfApi, hf_hub_download
-from huggingface_hub.file_download import repo_folder_name
-from safetensors.torch import load_file, save_file
-from transformers import AutoConfig
-from transformers.pipelines.base import infer_framework_load_model
-
-
-COMMIT_DESCRIPTION = """
-This is an automated PR created with https://huggingface.co/spaces/safetensors/convert
-
-This new file is equivalent to `pytorch_model.bin` but safe in the sense that
-no arbitrary code can be put into it.
-
-These files also happen to load much faster than their pytorch counterpart:
-https://colab.research.google.com/github/huggingface/notebooks/blob/main/safetensors_doc/en/speed.ipynb
-
-The widgets on your model page will run using this model even if this is not merged
-making sure the file actually works.
-
-If you find any issues: please report here: https://huggingface.co/spaces/safetensors/convert/discussions
-
-Feel free to ignore this PR.
-"""
-
-
-class AlreadyExists(Exception):
-    pass
-
-
-def shared_pointers(tensors):
-    ptrs = defaultdict(list)
-    for k, v in tensors.items():
-        ptrs[v.data_ptr()].append(k)
-    failing = []
-    for ptr, names in ptrs.items():
-        if len(names) > 1:
-            failing.append(names)
-    return failing
-
-
-def check_file_size(sf_filename: str, pt_filename: str):
-    sf_size = os.stat(sf_filename).st_size
-    pt_size = os.stat(pt_filename).st_size
-
-    if (sf_size - pt_size) / pt_size > 0.01:
-        raise RuntimeError(
-            f"""The file size different is more than 1%:
-         - {sf_filename}: {sf_size}
-         - {pt_filename}: {pt_size}
-         """
-        )
-
-
-def rename(pt_filename: str) -> str:
-    filename, ext = os.path.splitext(pt_filename)
-    local = f"{filename}.safetensors"
-    local = local.replace("pytorch_model", "model")
-    return local
-
-
-def convert_multi(model_id: str, folder: str) -> List["CommitOperationAdd"]:
-    filename = hf_hub_download(repo_id=model_id, filename="pytorch_model.bin.index.json")
-    with open(filename, "r") as f:
-        data = json.load(f)
-
-    filenames = set(data["weight_map"].values())
-    local_filenames = []
-    for filename in filenames:
-        pt_filename = hf_hub_download(repo_id=model_id, filename=filename)
-
-        sf_filename = rename(pt_filename)
-        sf_filename = os.path.join(folder, sf_filename)
-        convert_file(pt_filename, sf_filename)
-        local_filenames.append(sf_filename)
-
-    index = os.path.join(folder, "model.safetensors.index.json")
-    with open(index, "w") as f:
-        newdata = {k: v for k, v in data.items()}
-        newmap = {k: rename(v) for k, v in data["weight_map"].items()}
-        newdata["weight_map"] = newmap
-        json.dump(newdata, f, indent=4)
-    local_filenames.append(index)
-
-    operations = [
-        CommitOperationAdd(path_in_repo=local.split("/")[-1], path_or_fileobj=local) for local in local_filenames
-    ]
-
-    return operations
-
-
-def convert_single(model_id: str, folder: str) -> List["CommitOperationAdd"]:
-    pt_filename = hf_hub_download(repo_id=model_id, filename="pytorch_model.bin")
-
-    sf_name = "model.safetensors"
-    sf_filename = os.path.join(folder, sf_name)
-    convert_file(pt_filename, sf_filename)
-    operations = [CommitOperationAdd(path_in_repo=sf_name, path_or_fileobj=sf_filename)]
-    return operations
-
-
-def convert_file(
-    pt_filename: str,
-    sf_filename: str,
-):
-    loaded = torch.load(pt_filename, map_location="cpu")
-    if "state_dict" in loaded:
-        loaded = loaded["state_dict"]
-    shared = shared_pointers(loaded)
-    for shared_weights in shared:
-        for name in shared_weights[1:]:
-            loaded.pop(name)
-
-    # For tensors to be contiguous
-    loaded = {k: v.contiguous() for k, v in loaded.items()}
-
-    dirname = os.path.dirname(sf_filename)
-    os.makedirs(dirname, exist_ok=True)
-    save_file(loaded, sf_filename, metadata={"format": "pt"})
-    check_file_size(sf_filename, pt_filename)
-    reloaded = load_file(sf_filename)
-    for k in loaded:
-        pt_tensor = loaded[k]
-        sf_tensor = reloaded[k]
-        if not torch.equal(pt_tensor, sf_tensor):
-            raise RuntimeError(f"The output tensors do not match for key {k}")
-
-
-def create_diff(pt_infos: Dict[str, List[str]], sf_infos: Dict[str, List[str]]) -> str:
-    errors = []
-    for key in ["missing_keys", "mismatched_keys", "unexpected_keys"]:
-        pt_set = set(pt_infos[key])
-        sf_set = set(sf_infos[key])
-
-        pt_only = pt_set - sf_set
-        sf_only = sf_set - pt_set
-
-        if pt_only:
-            errors.append(f"{key} : PT warnings contain {pt_only} which are not present in SF warnings")
-        if sf_only:
-            errors.append(f"{key} : SF warnings contain {sf_only} which are not present in PT warnings")
-    return "\n".join(errors)
-
-
-def check_final_model(model_id: str, folder: str):
-    config = hf_hub_download(repo_id=model_id, filename="config.json")
-    shutil.copy(config, os.path.join(folder, "config.json"))
-    config = AutoConfig.from_pretrained(folder)
-
-    _, (pt_model, pt_infos) = infer_framework_load_model(model_id, config, output_loading_info=True)
-    _, (sf_model, sf_infos) = infer_framework_load_model(folder, config, output_loading_info=True)
-
-    if pt_infos != sf_infos:
-        error_string = create_diff(pt_infos, sf_infos)
-        raise ValueError(f"Different infos when reloading the model: {error_string}")
-
-    pt_params = pt_model.state_dict()
-    sf_params = sf_model.state_dict()
-
-    pt_shared = shared_pointers(pt_params)
-    sf_shared = shared_pointers(sf_params)
-    if pt_shared != sf_shared:
-        raise RuntimeError("The reconstructed model is wrong, shared tensors are different {shared_pt} != {shared_tf}")
-
-    sig = signature(pt_model.forward)
-    input_ids = torch.arange(10).unsqueeze(0)
-    pixel_values = torch.randn(1, 3, 224, 224)
-    input_values = torch.arange(1000).float().unsqueeze(0)
-    kwargs = {}
-    if "input_ids" in sig.parameters:
-        kwargs["input_ids"] = input_ids
-    if "decoder_input_ids" in sig.parameters:
-        kwargs["decoder_input_ids"] = input_ids
-    if "pixel_values" in sig.parameters:
-        kwargs["pixel_values"] = pixel_values
-    if "input_values" in sig.parameters:
-        kwargs["input_values"] = input_values
-    if "bbox" in sig.parameters:
-        kwargs["bbox"] = torch.zeros((1, 10, 4)).long()
-    if "image" in sig.parameters:
-        kwargs["image"] = pixel_values
-
-    if torch.cuda.is_available():
-        pt_model = pt_model.cuda()
-        sf_model = sf_model.cuda()
-        kwargs = {k: v.cuda() for k, v in kwargs.items()}
-
-    pt_logits = pt_model(**kwargs)[0]
-    sf_logits = sf_model(**kwargs)[0]
-
-    torch.testing.assert_close(sf_logits, pt_logits)
-    print(f"Model {model_id} is ok !")
-
-
-def previous_pr(api: "HfApi", model_id: str, pr_title: str) -> Optional["Discussion"]:
-    try:
-        discussions = api.get_repo_discussions(repo_id=model_id)
-    except Exception:
-        return None
-    for discussion in discussions:
-        if discussion.status == "open" and discussion.is_pull_request and discussion.title == pr_title:
-            details = api.get_discussion_details(repo_id=model_id, discussion_num=discussion.num)
-            if details.target_branch == "refs/heads/main":
-                return discussion
-
-
-def convert_generic(model_id: str, folder: str, filenames: Set[str]) -> List["CommitOperationAdd"]:
-    operations = []
-
-    extensions = set([".bin", ".ckpt"])
-    for filename in filenames:
-        prefix, ext = os.path.splitext(filename)
-        if ext in extensions:
-            pt_filename = hf_hub_download(model_id, filename=filename)
-            dirname, raw_filename = os.path.split(filename)
-            if raw_filename == "pytorch_model.bin":
-                # XXX: This is a special case to handle `transformers` and the
-                # `transformers` part of the model which is actually loaded by `transformers`.
-                sf_in_repo = os.path.join(dirname, "model.safetensors")
-            else:
-                sf_in_repo = f"{prefix}.safetensors"
-            sf_filename = os.path.join(folder, sf_in_repo)
-            convert_file(pt_filename, sf_filename)
-            operations.append(CommitOperationAdd(path_in_repo=sf_in_repo, path_or_fileobj=sf_filename))
-    return operations
-
-
-def convert(api: "HfApi", model_id: str, force: bool = False) -> Optional["CommitInfo"]:
-    pr_title = "Adding `safetensors` variant of this model"
-    info = api.model_info(model_id)
-    filenames = set(s.rfilename for s in info.siblings)
-
-    with TemporaryDirectory() as d:
-        folder = os.path.join(d, repo_folder_name(repo_id=model_id, repo_type="models"))
-        os.makedirs(folder)
-        new_pr = None
-        try:
-            operations = None
-            pr = previous_pr(api, model_id, pr_title)
-
-            library_name = getattr(info, "library_name", None)
-            if any(filename.endswith(".safetensors") for filename in filenames) and not force:
-                raise AlreadyExists(f"Model {model_id} is already converted, skipping..")
-            elif pr is not None and not force:
-                url = f"https://huggingface.co/{model_id}/discussions/{pr.num}"
-                new_pr = pr
-                raise AlreadyExists(f"Model {model_id} already has an open PR check out {url}")
-            elif library_name == "transformers":
-                if "pytorch_model.bin" in filenames:
-                    operations = convert_single(model_id, folder)
-                elif "pytorch_model.bin.index.json" in filenames:
-                    operations = convert_multi(model_id, folder)
-                else:
-                    raise RuntimeError(f"Model {model_id} doesn't seem to be a valid pytorch model. Cannot convert")
-                check_final_model(model_id, folder)
-            else:
-                operations = convert_generic(model_id, folder, filenames)
-
-            if operations:
-                new_pr = api.create_commit(
-                    repo_id=model_id,
-                    operations=operations,
-                    commit_message=pr_title,
-                    commit_description=COMMIT_DESCRIPTION,
-                    create_pr=True,
-                )
-                print(f"Pr created at {new_pr.pr_url}")
-            else:
-                print("No files to convert")
-        finally:
-            shutil.rmtree(folder)
-    return new_pr
-
-
-if __name__ == "__main__":
-    DESCRIPTION = """
-    Simple utility tool to convert automatically some weights on the hub to `safetensors` format.
-    It is PyTorch exclusive for now.
-    It works by downloading the weights (PT), converting them locally, and uploading them back
-    as a PR on the hub.
-    """
-    parser = argparse.ArgumentParser(description=DESCRIPTION)
-    parser.add_argument(
-        "model_id",
-        type=str,
-        help="The name of the model on the hub to convert. E.g. `gpt2` or `facebook/wav2vec2-base-960h`",
-    )
-    parser.add_argument(
-        "--force",
-        action="store_true",
-        help="Create the PR even if it already exists of if the model was already converted.",
-    )
-    args = parser.parse_args()
-    model_id = args.model_id
-    api = HfApi()
-    convert(api, model_id, force=args.force)
spaces/DebasishDhal99/Youtube_Playlist/README.md
DELETED
@@ -1,45 +0,0 @@
----
-title: Youtube Playlist
-emoji: 🎥
-colorFrom: blue
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
-license: cc
----
-To use this web app on gradio: - https://huggingface.co/spaces/DebasishDhal99/Youtube_Playlist
-# Total duration of playlist
-For a given playlist, it calculates the duration of each public video in that playlist and sums them to produce the total duration.
-[Playlist link](https://youtube.com/playlist?list=PLuhqtP7jdD8CD6rOWy20INGM44kULvrHu&si=G4rrT1wQfQVvzTJF)
-<p align="center">
-  <img src="images/total_duration.png" alt="CloudSat orbit superimposed on INSAT-3DR coverage area.">
-</p>
-
-
-# Average duration of a playlist
-Average duration of videos is calculated for the publicly available videos in that playlist. For example, the average duration of videos in this playlist is around 9 minutes.
-
-<p align="center">
-  <img src="images/average_duration.png" alt="CloudSat orbit superimposed on INSAT-3DR coverage area.">
-</p>
-
-
-# Playlist mismatch
-Given two playlists, this function gets the videos that are present in one of the playlists, but not in the other.
-The two playlists are given here, [HindiSongs1](https://youtube.com/playlist?list=PLgeEuUJpv5I-jRo3Ibddg96Ke5QRryBQf&si=HZKtxDOm6RbmYieu) and [HindiSongs2](https://youtube.com/playlist?list=PLgeEuUJpv5I-0eV03cUzMAVyHDyVV_43D&si=t8mf-O0CNe23dwlS).
-<p align="center">
-  <img src="images/mismatch.png" alt="CloudSat orbit superimposed on INSAT-3DR coverage area.">
-</p>
-
-
-
-
-
-
-
-
-
-**************************************************************************************************
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference