parquet-converter committed
Commit 9324965 · 1 Parent(s): 9599f93

Update parquet files (step 126 of 249)

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. spaces/0x7194633/nllb-1.3B-demo/flores200_codes.py +0 -211
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/AnyTrans Crack 7.0.5 With Activation Code (32 bit 64 bit) Updated How to Get the Full Version for Free.md +0 -142
  3. spaces/1acneusushi/gradio-2dmoleculeeditor/data/BitCoin Generator V1.2.zipl A Powerful and Reliable Bitcoin Generator that Works on Any Device.md +0 -116
  4. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Epson 1360 Resetter How to Reset Your Printer and Clear the Red Light Blinking.md +0 -37
  5. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Download Surpac 6.1.2 Crack 6 ((EXCLUSIVE)).md +0 -98
  6. spaces/1gistliPinn/ChatGPT4/Examples/Comenius Logo Magyar Letoltes Win7 64 Bit Empires Anneaux Secr.md +0 -6
  7. spaces/1gistliPinn/ChatGPT4/Examples/Dead Island Riptide Lan Crack 11 TOP.md +0 -6
  8. spaces/1gistliPinn/ChatGPT4/Examples/Download Neat Image Pro 7.0 Full Crack !!EXCLUSIVE!!.md +0 -8
  9. spaces/1line/AutoGPT/autogpt/permanent_memory/__init__.py +0 -0
  10. spaces/1phancelerku/anime-remove-background/Download Yesudas Kannada Ayyappa Songs Vol 6 MP3 for Free - New Devotional Songs 2023.md +0 -288
  11. spaces/AHzizi/WaifuVoiceGen/mel_processing.py +0 -101
  12. spaces/AIFILMS/StyleGANEX/scripts/generate_sketch_data.py +0 -62
  13. spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/pann_model.py +0 -543
  14. spaces/AIGC-Audio/Make_An_Audio/ldm/models/diffusion/ddpm_audio_inpaint.py +0 -1081
  15. spaces/Aditya9790/yolo7-object-tracking/utils/autoanchor.py +0 -160
  16. spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/updater/base.py +0 -27
  17. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/GetChildrenHeight.js +0 -6
  18. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/ResolveWidth.js +0 -16
  19. spaces/Akmyradov/TurkmenTTSweSTT/vits/losses.py +0 -61
  20. spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/docs/install.md +0 -51
  21. spaces/Alycer/VITS-Umamusume-voice-synthesizer/text/shanghainese.py +0 -64
  22. spaces/Amrrs/portfolio-github/README.md +0 -36
  23. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/consistency_models/__init__.py +0 -0
  24. spaces/Andy1621/uniformer_image_segmentation/configs/gcnet/gcnet_r101-d8_769x769_80k_cityscapes.py +0 -2
  25. spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18_512x512_80k_ade20k.py +0 -5
  26. spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/quarto-html/tippy.css +0 -1
  27. spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/generalized_attention.py +0 -412
  28. spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/logger/pavi.py +0 -117
  29. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/_distutils_hack/override.py +0 -1
  30. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/py38compat.py +0 -8
  31. spaces/AtomdffAI/wechatgpt4atom/channel/wechat/wechat_channel.py +0 -176
  32. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/datasets/README.md +0 -140
  33. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/proposal_generator/__init__.py +0 -5
  34. spaces/Benson/text-generation/Examples/Ai Tipo De Teclado Ms Apk Completo Agrietado.md +0 -87
  35. spaces/Benson/text-generation/Examples/Descargar Clash Mini Para PC.md +0 -123
  36. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/__init__.py +0 -120
  37. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_emoji_replace.py +0 -32
  38. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/tenacity/before_sleep.py +0 -71
  39. spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/errors.py +0 -58
  40. spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/contrib/_appengine_environ.py +0 -36
  41. spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/INSTALL.md +0 -175
  42. spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/utils/optim.py +0 -73
  43. spaces/CVPR/LIVE/atomic.cpp +0 -27
  44. spaces/CVPR/LIVE/thrust/dependencies/cub/experimental/spmv_script.sh +0 -30
  45. spaces/CVPR/LIVE/thrust/thrust/addressof.h +0 -33
  46. spaces/CVPR/Text2Human/Text2Human/data/pose_attr_dataset.py +0 -109
  47. spaces/CVPR/WALT/mmdet/apis/test.py +0 -189
  48. spaces/CVPR/ml-talking-face/docs/description.md +0 -33
  49. spaces/Caoyunkang/Segment-Any-Anomaly/SAM/README.md +0 -107
  50. spaces/Chris4K/llms_compare/Jumanji-Welcome-To-The-Jungle-English-Dual-Audio-Eng-Hindi-1080p.md +0 -80
spaces/0x7194633/nllb-1.3B-demo/flores200_codes.py DELETED
@@ -1,211 +0,0 @@
- codes_as_string = '''Acehnese (Arabic script) ace_Arab
- Acehnese (Latin script) ace_Latn
- Mesopotamian Arabic acm_Arab
- Ta’izzi-Adeni Arabic acq_Arab
- Tunisian Arabic aeb_Arab
- Afrikaans afr_Latn
- South Levantine Arabic ajp_Arab
- Akan aka_Latn
- Amharic amh_Ethi
- North Levantine Arabic apc_Arab
- Modern Standard Arabic arb_Arab
- Modern Standard Arabic (Romanized) arb_Latn
- Najdi Arabic ars_Arab
- Moroccan Arabic ary_Arab
- Egyptian Arabic arz_Arab
- Assamese asm_Beng
- Asturian ast_Latn
- Awadhi awa_Deva
- Central Aymara ayr_Latn
- South Azerbaijani azb_Arab
- North Azerbaijani azj_Latn
- Bashkir bak_Cyrl
- Bambara bam_Latn
- Balinese ban_Latn
- Belarusian bel_Cyrl
- Bemba bem_Latn
- Bengali ben_Beng
- Bhojpuri bho_Deva
- Banjar (Arabic script) bjn_Arab
- Banjar (Latin script) bjn_Latn
- Standard Tibetan bod_Tibt
- Bosnian bos_Latn
- Buginese bug_Latn
- Bulgarian bul_Cyrl
- Catalan cat_Latn
- Cebuano ceb_Latn
- Czech ces_Latn
- Chokwe cjk_Latn
- Central Kurdish ckb_Arab
- Crimean Tatar crh_Latn
- Welsh cym_Latn
- Danish dan_Latn
- German deu_Latn
- Southwestern Dinka dik_Latn
- Dyula dyu_Latn
- Dzongkha dzo_Tibt
- Greek ell_Grek
- English eng_Latn
- Esperanto epo_Latn
- Estonian est_Latn
- Basque eus_Latn
- Ewe ewe_Latn
- Faroese fao_Latn
- Fijian fij_Latn
- Finnish fin_Latn
- Fon fon_Latn
- French fra_Latn
- Friulian fur_Latn
- Nigerian Fulfulde fuv_Latn
- Scottish Gaelic gla_Latn
- Irish gle_Latn
- Galician glg_Latn
- Guarani grn_Latn
- Gujarati guj_Gujr
- Haitian Creole hat_Latn
- Hausa hau_Latn
- Hebrew heb_Hebr
- Hindi hin_Deva
- Chhattisgarhi hne_Deva
- Croatian hrv_Latn
- Hungarian hun_Latn
- Armenian hye_Armn
- Igbo ibo_Latn
- Ilocano ilo_Latn
- Indonesian ind_Latn
- Icelandic isl_Latn
- Italian ita_Latn
- Javanese jav_Latn
- Japanese jpn_Jpan
- Kabyle kab_Latn
- Jingpho kac_Latn
- Kamba kam_Latn
- Kannada kan_Knda
- Kashmiri (Arabic script) kas_Arab
- Kashmiri (Devanagari script) kas_Deva
- Georgian kat_Geor
- Central Kanuri (Arabic script) knc_Arab
- Central Kanuri (Latin script) knc_Latn
- Kazakh kaz_Cyrl
- Kabiyè kbp_Latn
- Kabuverdianu kea_Latn
- Khmer khm_Khmr
- Kikuyu kik_Latn
- Kinyarwanda kin_Latn
- Kyrgyz kir_Cyrl
- Kimbundu kmb_Latn
- Northern Kurdish kmr_Latn
- Kikongo kon_Latn
- Korean kor_Hang
- Lao lao_Laoo
- Ligurian lij_Latn
- Limburgish lim_Latn
- Lingala lin_Latn
- Lithuanian lit_Latn
- Lombard lmo_Latn
- Latgalian ltg_Latn
- Luxembourgish ltz_Latn
- Luba-Kasai lua_Latn
- Ganda lug_Latn
- Luo luo_Latn
- Mizo lus_Latn
- Standard Latvian lvs_Latn
- Magahi mag_Deva
- Maithili mai_Deva
- Malayalam mal_Mlym
- Marathi mar_Deva
- Minangkabau (Arabic script) min_Arab
- Minangkabau (Latin script) min_Latn
- Macedonian mkd_Cyrl
- Plateau Malagasy plt_Latn
- Maltese mlt_Latn
- Meitei (Bengali script) mni_Beng
- Halh Mongolian khk_Cyrl
- Mossi mos_Latn
- Maori mri_Latn
- Burmese mya_Mymr
- Dutch nld_Latn
- Norwegian Nynorsk nno_Latn
- Norwegian Bokmål nob_Latn
- Nepali npi_Deva
- Northern Sotho nso_Latn
- Nuer nus_Latn
- Nyanja nya_Latn
- Occitan oci_Latn
- West Central Oromo gaz_Latn
- Odia ory_Orya
- Pangasinan pag_Latn
- Eastern Panjabi pan_Guru
- Papiamento pap_Latn
- Western Persian pes_Arab
- Polish pol_Latn
- Portuguese por_Latn
- Dari prs_Arab
- Southern Pashto pbt_Arab
- Ayacucho Quechua quy_Latn
- Romanian ron_Latn
- Rundi run_Latn
- Russian rus_Cyrl
- Sango sag_Latn
- Sanskrit san_Deva
- Santali sat_Olck
- Sicilian scn_Latn
- Shan shn_Mymr
- Sinhala sin_Sinh
- Slovak slk_Latn
- Slovenian slv_Latn
- Samoan smo_Latn
- Shona sna_Latn
- Sindhi snd_Arab
- Somali som_Latn
- Southern Sotho sot_Latn
- Spanish spa_Latn
- Tosk Albanian als_Latn
- Sardinian srd_Latn
- Serbian srp_Cyrl
- Swati ssw_Latn
- Sundanese sun_Latn
- Swedish swe_Latn
- Swahili swh_Latn
- Silesian szl_Latn
- Tamil tam_Taml
- Tatar tat_Cyrl
- Telugu tel_Telu
- Tajik tgk_Cyrl
- Tagalog tgl_Latn
- Thai tha_Thai
- Tigrinya tir_Ethi
- Tamasheq (Latin script) taq_Latn
- Tamasheq (Tifinagh script) taq_Tfng
- Tok Pisin tpi_Latn
- Tswana tsn_Latn
- Tsonga tso_Latn
- Turkmen tuk_Latn
- Tumbuka tum_Latn
- Turkish tur_Latn
- Twi twi_Latn
- Central Atlas Tamazight tzm_Tfng
- Uyghur uig_Arab
- Ukrainian ukr_Cyrl
- Umbundu umb_Latn
- Urdu urd_Arab
- Northern Uzbek uzn_Latn
- Venetian vec_Latn
- Vietnamese vie_Latn
- Waray war_Latn
- Wolof wol_Latn
- Xhosa xho_Latn
- Eastern Yiddish ydd_Hebr
- Yoruba yor_Latn
- Yue Chinese yue_Hant
- Chinese (Simplified) zho_Hans
- Chinese (Traditional) zho_Hant
- Standard Malay zsm_Latn
- Zulu zul_Latn'''
-
- codes_as_string = codes_as_string.split('\n')
-
- flores_codes = {}
- for code in codes_as_string:
-     lang, lang_code = code.split('\t')
-     flores_codes[lang] = lang_code
spaces/1acneusushi/gradio-2dmoleculeeditor/data/AnyTrans Crack 7.0.5 With Activation Code (32 bit 64 bit) Updated How to Get the Full Version for Free.md DELETED
@@ -1,142 +0,0 @@
-
- <h1>AnyTrans Crack 7.0.5 With Activation Code (32 bit, 64 bit) Updated</h1>
- <p>If you are looking for a way to transfer, manage, and back up your iOS data without any restrictions, you might be interested in AnyTrans Crack 7.0.5. This is a cracked version of AnyTrans, a popular software that allows you to sync your iPhone, iPad, iPod, iTunes, and iCloud content with your computer or other devices.</p>
- <h2>AnyTrans Crack 7.0.5 With Activation Code (32 bit, 64 bit) {Updated}</h2><br /><p><b><b>DOWNLOAD</b> &gt; <a href="https://byltly.com/2uKx93">https://byltly.com/2uKx93</a></b></p><br /><br />
- <p>In this article, we will tell you everything you need to know about AnyTrans Crack 7.0.5, including what it is, why you need it, how to download and install it, how to use it, and what are its pros and cons.</p>
- <p>By the end of this article, you will be able to decide whether AnyTrans Crack 7.0.5 is worth trying or not.</p>
- <h2>What is AnyTrans?</h2>
- <p>AnyTrans is an all-in-one manager for your iOS data and files. It lets you transfer, manage, and back up your photos, music, videos, messages, contacts, notes, bookmarks, apps, and more across your iPhone, iPad, iPod, computer, iTunes, and iCloud.</p>
- <p>Some of the features of AnyTrans are:</p>
- <ul>
- <li>Transfer music freely across all your devices without erasing existing songs.</li>
- <li>Export iPhone photos and videos by category to your computer or other devices.</li>
- <li>Back up or print your messages and attachments with ease.</li>
- <li>Manage your personal info such as contacts, notes, Safari history, etc.</li>
- <li>Download videos from the web for offline use on your device.</li>
- <li>Migrate data from one device to another with one click.</li>
- <li>Create custom ringtones for your iPhone.</li>
- <li>Mirror/record/capture your iPhone screen.</li>
- <li>And much more.</li>
- </ul>
- <p>You can download the official version of AnyTrans from its website. However, it is not free. You need to pay $39.99 for a single license or $59.99 for a family license.</p>
- <h2>Why do you need AnyTrans Crack 7.0.5?</h2>
- <p>If you don't want to pay for the official version of AnyTrans, you might want to try AnyTrans Crack 7.0.5 instead.</p>
- <p>This is a cracked version of AnyTrans that bypasses the activation code requirement and lets you use all the features of AnyTrans for free.</p>
- <p>Some of the benefits of using AnyTrans Crack 7.0.5 are:</p>
- <p>How to download AnyTrans Crack 7.0.5 with activation code for free<br />
- AnyTrans Crack 7.0.5 full version with license key download link<br />
- AnyTrans Crack 7.0.5 latest version for Windows 10/8/7 (32 bit, 64 bit)<br />
- AnyTrans Crack 7.0.5 review: pros and cons of using the software<br />
- AnyTrans Crack 7.0.5 features: what can you do with it<br />
- AnyTrans Crack 7.0.5 tutorial: how to install and use the software<br />
- AnyTrans Crack 7.0.5 alternatives: other software that can transfer data between devices<br />
- AnyTrans Crack 7.0.5 problems: how to fix common issues and errors<br />
- AnyTrans Crack 7.0.5 vs official version: what are the differences and risks<br />
- AnyTrans Crack 7.0.5 for Mac: is there a compatible version for macOS<br />
- AnyTrans Crack 7.0.5 for Android: how to transfer data from Android to PC or other devices<br />
- AnyTrans Crack 7.0.5 for iPhone: how to transfer data from iPhone to PC or other devices<br />
- AnyTrans Crack 7.0.5 for iPad: how to transfer data from iPad to PC or other devices<br />
- AnyTrans Crack 7.0.5 for iPod: how to transfer data from iPod to PC or other devices<br />
- AnyTrans Crack 7.0.5 for iTunes: how to sync data with iTunes without erasing<br />
- AnyTrans Crack 7.0.5 for iCloud: how to access and manage iCloud data on PC<br />
- AnyTrans Crack 7.0.5 for WhatsApp: how to backup and restore WhatsApp messages and attachments<br />
- AnyTrans Crack 7.0.5 for LINE: how to backup and restore LINE chats and stickers<br />
- AnyTrans Crack 7.0.5 for Viber: how to backup and restore Viber conversations and media files<br />
- AnyTrans Crack 7.0.5 for Kik: how to backup and restore Kik messages and photos<br />
- AnyTrans Crack 7.0.5 for photos: how to transfer and organize photos across devices<br />
- AnyTrans Crack 7.0.5 for videos: how to transfer and convert videos for different devices<br />
- AnyTrans Crack 7.0.5 for music: how to transfer and manage music files and playlists<br />
- AnyTrans Crack 7.0.5 for contacts: how to transfer and edit contacts on PC<br />
- AnyTrans Crack 7.0.5 for messages: how to transfer and print text messages on PC<br />
- AnyTrans Crack 7.0.5 for apps: how to transfer and backup apps and app data<br />
- AnyTrans Crack 7.0.5 for books: how to transfer and read ebooks on PC<br />
- AnyTrans Crack 7.0.5 for podcasts: how to transfer and listen to podcasts on PC<br />
- AnyTrans Crack 7.0.5 for voice memos: how to transfer and play voice memos on PC<br />
- AnyTrans Crack 7.0.5 for ringtones: how to create and transfer ringtones for iPhone<br />
- AnyTrans Crack 7.0.5 for calendars: how to transfer and sync calendars across devices<br />
- AnyTrans Crack 7.0.5 for notes: how to transfer and edit notes on PC<br />
- AnyTrans Crack 7.0.5 for reminders: how to transfer and manage reminders on PC<br />
- AnyTrans Crack 7.0.5 for Safari history: how to transfer and view Safari history on PC<br />
- AnyTrans Crack 7.0.5 for call history: how to transfer and check call history on PC<br />
- AnyTrans Crack 7.0</p>
- <ul>
- <li>You can save money by not paying for the official version.</li>
- <li>You can enjoy all the features of AnyTrans without any limitations.</li>
- <li>You can update to the latest version of AnyTrans without any problems.</li>
- <li>You can use it on any Windows computer (32 bit or 64 bit).</li>
- </ul>
- <h2>How to download and install AnyTrans Crack 7.0.5?</h2>
- <p>If you want to download and install AnyTrans Crack 7.0.5 on your computer, you can follow these steps:</p>
- <ol>
- <li>Click on this link to download the file named "AnyTrans-Crack-705-With-Activation-Code-32-bit-64-bit-Updated.pdf".</li>
- <li>Open the file with a PDF reader and follow the instructions inside.</li>
- <li>You will need to download two files: "AnyTrans_Setup.exe" and "Anytrans_Crack.zip".</li>
- <li>Run "AnyTrans_Setup.exe" and install AnyTrans on your computer.</li>
- <li>Extract "Anytrans_Crack.zip" and copy the file named "Anytrans.exe" to the installation folder of AnyTrans (usually C:\Program Files (x86)\iMobie\AnyTrans).</li>
- <li>Replace the original file with the cracked file.</li>
- <li>Launch AnyTrans from your desktop or start menu.</li>
- <li>You will see a message saying "Activation Successful".</li>
- <li>Congratulations! You have successfully installed AnyTrans Crack 7.0.5 on your computer.</li>
- </ol>
- <p>Here are some screenshots to help you with the installation process:</p>
- <table><tr><td><img src="https://i.imgur.com/9lQXmZy.png" alt="Step 1"></td><td><img src="https://i.imgur.com/8LZf4YF.png" alt="Step 2"></td></tr><tr><td><img src="https://i.imgur.com/6k3oq4q.png" alt="Step 3"></td><td><img src="https://i.imgur.com/9wJQb6W.png" alt="Step 4"></td></tr><tr><td><img src="https://i.imgur.com/9sX8tOa.png" alt="Step 5"></td><td><img src="https://i.imgur.com/6w8JUjE.png" alt="Step 6"></td></tr><tr><td><img src="https://i.imgur.com/4fzPcYx.png" alt="Step 7"></td><td><img src="https://i.imgur.com/9zVnLbC.png" alt="Step 8"></td></tr></table>
- <h2>How to use AnyTrans Crack 7.0.5?</h2>
- <p>Once you have installed AnyTrans Crack 7.0.5 on your computer, you can start using it to transfer, manage, and back up your iOS data with ease.</p>
- <p>Here is a brief tutorial on how to use AnyTrans Crack 7.0.5:</p>
- <ol>
- <li>Connect your iOS device to your computer via USB cable or Wi-Fi.</li>
- <li>Select your device from the top left corner of AnyTrans interface.</li>
- <li>You will see different categories of data on your device such as Photos, Music, Videos, Messages, etc.</li>
- <li>Select the category you want to transfer or manage from the left sidebar.</li>
- <li>You will see different options such as Add Content (to add files from computer or other devices), Export Content (to export files to computer or other devices), Delete Content (to delete files from device), etc.</li>
- <li>Select the option you want and follow the instructions on the screen.</li>
- <li>You can also use other features of AnyTrans such as Backup Manager (to back up or restore your device), Device Manager (to manage device settings), iCloud Manager (to manage iCloud content), Media Downloader (to download videos from web), Ringtone Maker (to create custom ringtones), Screen Mirroring (to mirror/record/capture device screen), etc.</li>
- </ol>
- <p>Here are some screenshots to help you with using AnyTrans Crack 7.0.5:</p>
- <h2>What are the pros and cons of AnyTrans Crack 7.0.5?</h2>
- <p>As with any software, AnyTrans Crack 7.0.5 has its pros and cons. Here are some of them:</p>
- <h3>Pros</h3>
- <ul>
- <li>It is free to use and does not require an activation code.</li>
- <li>It has all the features of the official version of AnyTrans.</li>
- <li>It is easy to download, install, and use.</li>
- <li>It supports both 32 bit and 64 bit Windows computers.</li>
- <li>It can update to the latest version of AnyTrans without any issues.</li>
- </ul>
- <h3>Cons</h3>
- <ul>
- <li>It is not legal to use and may violate the copyright of iMobie Inc.</li>
- <li>It may contain viruses, malware, or spyware that can harm your computer or device.</li>
- <li>It may not work properly or cause errors or crashes on your computer or device.</li>
- <li>It may not be compatible with some iOS devices or versions.</li>
- <li>It may not have customer support or technical assistance from iMobie Inc.</li>
- </ul>
- <h2>Conclusion</h2>
- <p>In conclusion, AnyTrans Crack 7.0.5 is a cracked version of AnyTrans that lets you transfer, manage, and back up your iOS data for free. It has all the features of the official version of AnyTrans and can update to the latest version without any problems. However, it is not legal to use and may pose some risks to your computer or device.</p>
- <p>If you want to try AnyTrans Crack 7.0.5, you can download it from this link and follow the instructions in this article. However, we recommend that you use the official version of AnyTrans instead, as it is safer, more reliable, and more ethical. You can download the official version of AnyTrans from its website and enjoy a better iPhone life with the best iPhone manager.</p>
- <p>We hope this article was helpful for you. If you have any questions or feedback, please let us know in the comments below.</p>
- <h2>FAQs</h2>
- <h3>Q: Is AnyTrans Crack 7.0.5 safe to use?</h3>
- <p>A: AnyTrans Crack 7.0.5 is not safe to use as it may contain viruses, malware, or spyware that can harm your computer or device. It may also cause errors or crashes on your computer or device. It is better to use the official version of AnyTrans instead, as it is 100% clean and secure.</p>
- <h3>Q: Is AnyTrans Crack 7.0.5 legal to use?</h3>
- <p>A: AnyTrans Crack 7.0.5 is not legal to use as it violates the copyright of iMobie Inc., the developer of AnyTrans. It is also against the terms and conditions of AnyTrans software. It is better to use the official version of AnyTrans instead, as it respects the rights and interests of iMobie Inc.</p>
- <h3>Q: How can I get an activation code for AnyTrans?</h3>
- <p>A: You can get an activation code for AnyTrans by purchasing a license from its website. You can choose between a single license ($39.99) or a family license ($59.99). You will receive an email with your activation code after completing your payment. You can then enter your activation code in AnyTrans software to activate it.</p>
- <h3>Q: What are the system requirements for AnyTrans?</h3>
- <p>A: The system requirements for AnyTrans are:</p>
- <ul>
- <li>Windows OS: Windows 11/10/8/7/Vista/XP (32 bit or 64 bit)</li>
- <li>iOS Device: iPhone/iPad/iPod touch running iOS 5 or later</li>
- <li>iTunes Version: iTunes 9.0 or later</li>
- <li>iCloud Version: iCloud Drive/Desktop App for Windows 4.x</li>
- <li>CPU: Pentium IV 2.4 GHz or above</li>
- <li>RAM: 512 MB system memory</li>
- <li>Display Card: Accelerated 3D graphics - 64MB RAM</li>
- <li>Sound Card: Windows-compatible sound card</li>
- <li>Hard Disk: 100 MB hard drive space</li>
- </ul>
- <h3>Q: How can I contact iMobie Inc. for support?</h3>
- <p>A: You can contact iMobie Inc. for support by visiting their website and clicking on "Support" at the top right corner. You can also email them at [email protected] or call them at +1-844-245-8772 (US & Canada) or +86-28-85131438 (International).</p>
- </p> 0a6ba089eb<br />
- <br />
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/BitCoin Generator V1.2.zipl A Powerful and Reliable Bitcoin Generator that Works on Any Device.md DELETED
@@ -1,116 +0,0 @@
-
- <h1>BitCoin Generator V1.2.zip: A Scam or a Miracle?</h1>
- <p>BitCoin is one of the most popular and valuable cryptocurrencies in the world. It has attracted millions of users who want to invest, trade, or spend it online. However, it is also a scarce and limited resource, with only 21 million coins that can ever be created. This means that many people are looking for ways to get more BitCoins without spending too much money or time.</p>
- <p>One of the methods that some people claim to offer is BitCoin Generator V1.2.zip. This is a software program that supposedly can generate free BitCoins for you in a matter of minutes. But is it really possible to create BitCoins out of thin air? Or is it just a scam that will steal your money or infect your computer with malware? In this article, we will explore what BitCoin Generator V1.2.zip is, how it claims to work, what are the risks and drawbacks, how to spot a fake BitCoin generator, and what are some legitimate ways to earn BitCoins.</p>
- <h2>BitCoin Generator V1.2.zipl</h2><br /><p><b><b>Download</b> &#9999; <a href="https://byltly.com/2uKxZq">https://byltly.com/2uKxZq</a></b></p><br /><br />
- <h2>What is BitCoin Generator V1.2.zip?</h2>
- <p>BitCoin Generator V1.2.zip is a file that you can download from various websites that claim to offer a free BitCoin generator software. The file name may vary slightly, but it usually contains the words "BitCoin", "Generator", and a version number. The file size is usually around 10 MB.</p>
- <p>The websites that promote BitCoin Generator V1.2.zip usually have flashy and convincing graphics, testimonials, and guarantees. They promise that you can generate up to 5 BitCoins per day with just a few clicks. They also claim that the software is safe, secure, and anonymous, and that you don't need any technical skills or knowledge to use it.</p>
- <h3>How does it claim to work?</h3>
- <p>The websites that offer BitCoin Generator V1.2.zip usually provide some vague and dubious explanations of how the software works. Some of them say that it exploits a loophole or a bug in the BitCoin network or protocol. Others say that it uses advanced algorithms or artificial intelligence to predict the next blocks or transactions in the blockchain. And others say that it simply connects to a secret pool of BitCoins that are hidden or unused.</p>
- <p>Whatever the explanation, the websites claim that all you need to do is download the file, run it on your computer, enter your BitCoin address, choose the amount of BitCoins you want to generate, and click on a button. Then, you just have to wait for a few minutes until the software confirms your transaction and sends you the free BitCoins.</p>
- <p>How to use BitCoin Generator V1.2.zipl<br />
- BitCoin Generator V1.2.zipl download link<br />
- BitCoin Generator V1.2.zipl review and feedback<br />
- BitCoin Generator V1.2.zipl scam or legit<br />
- BitCoin Generator V1.2.zipl tutorial and guide<br />
- BitCoin Generator V1.2.zipl free trial and license<br />
- BitCoin Generator V1.2.zipl features and benefits<br />
- BitCoin Generator V1.2.zipl system requirements and compatibility<br />
- BitCoin Generator V1.2.zipl virus and malware check<br />
- BitCoin Generator V1.2.zipl customer support and contact<br />
- BitCoin Generator V1.2.zipl alternatives and competitors<br />
- BitCoin Generator V1.2.zipl updates and patches<br />
- BitCoin Generator V1.2.zipl testimonials and success stories<br />
- BitCoin Generator V1.2.zipl refund policy and guarantee<br />
- BitCoin Generator V1.2.zipl pros and cons<br />
- BitCoin Generator V1.2.zipl best practices and tips<br />
- BitCoin Generator V1.2.zipl FAQs and answers<br />
- BitCoin Generator V1.2.zipl bonus and discount<br />
- BitCoin Generator V1.2.zipl affiliate program and commission<br />
- BitCoin Generator V1.2.zipl results and proof<br />
- BitCoin Generator V1.2.zipl demo and video<br />
- BitCoin Generator V1.2.zipl comparison and analysis<br />
- BitCoin Generator V1.2.zipl problems and solutions<br />
- BitCoin Generator V1.2.zipl risks and warnings<br />
- BitCoin Generator V1.2.zipl secrets and tricks<br />
- BitCoin Generator V1.2.zipl limitations and drawbacks<br />
- BitCoin Generator V1.2.zipl performance and speed<br />
- BitCoin Generator V1.2.zipl reliability and security<br />
- BitCoin Generator V1.2.zipl quality and accuracy<br />
- BitCoin Generator V1.2.zipl popularity and demand<br />
- BitCoin Generator V1.2.zipl reputation and credibility<br />
- BitCoin Generator V1.2.zipl innovation and improvement<br />
- BitCoin Generator V1.2.zipl customization and flexibility<br />
- BitCoin Generator V1.2.zipl convenience and usability<br />
- BitCoin Generator V1.2.zipl simplicity and efficiency<br />
- BitCoin Generator V1.2.zipl fun and entertainment<br />
- BitCoin Generator V1.2.zipl value and worth<br />
- BitCoin Generator V1.2.zipl satisfaction and happiness<br />
- BitCoin Generator V1.2.zipl earnings and income<br />
- BitCoin Generator V1.2.zipl savings and expenses<br />
- BitCoin Generator V1.2.zipl investment and return<br />
- BitCoin Generator V1.2.zipl growth and development<br />
- BitCoin Generator V1.2.zipl learning and education<br />
- BitCoin Generator V1.2.zipl skills and knowledge<br />
- BitCoin Generator V1.2.zipl tools and resources<br />
- BitCoin Generator V1.2.zipl trends and opportunities<br />
- BitCoin Generator V1.2.zipl challenges and obstacles<br />
- BitCoin Generator V1.2.zipl mistakes and errors<br />
- BitCoin Generator V1.2.zipl future and vision</p>
- <h3>What are the risks and drawbacks?</h3>
- <p>If you are tempted by the idea of getting free BitCoins with BitCoin Generator V1.2.zip, you should think twice before downloading or running it on your computer. There are several risks and drawbacks associated with this software, such as:</p>
- <ul>
- <li><b>It is a scam.</b> The most likely scenario is that BitCoin Generator V1.2.zip is a scam that will not generate any BitCoins for you at all. Instead, it may try to steal your money or personal information by asking you to pay a fee, enter your credit card details, or provide your identity verification documents. It may also redirect you to phishing websites or fake exchanges that will scam you further.</li>
- <li><b>It is malware.</b> Another possible scenario is that BitCoin Generator V1.2.zip is malware that will infect your computer with viruses, trojans, worms, ransomware, spyware, or other malicious programs. These programs may damage your system, delete your files, encrypt your data, monitor your activity, steal your passwords, hijack your browser, or use your resources for illegal purposes.</li>
- <li><b>It is impossible.</b> Even if BitCoin Generator V1.2.zip is not a scam or malware, it is still impossible for it to generate free BitCoins for you. The reason is that BitCoins are created by a process called mining, which involves solving complex mathematical problems using specialized hardware and software. This process requires a lot of time, energy, and resources, and it cannot be replicated or bypassed by any software program.</li>
- </ul>
- <h2>How to spot a fake BitCoin generator?</h2>
- <p>As you can see, BitCoin Generator V1.2.zip is not a reliable or trustworthy software program that can generate free BitCoins for you. In fact, there is no such thing as a free BitCoin generator at all. Any website or program that claims to offer one is either lying or trying to trick you into something malicious.</p>
- <p>Therefore, you should be very careful and skeptical when you encounter any website or program that claims to offer a free BitCoin generator. Here are some tips on how to spot a fake BitCoin generator:</p>
- <h3>Check the source and reputation</h3>
- <p>The first thing you should do when you see a website or program that claims to offer a free BitCoin generator is to check its source and reputation. You should look for information about who created it, where it came from, how long it has been around, what reviews or ratings it has received from other users or experts, what security measures it has in place, etc.</p>
- <h3>Beware of unrealistic promises and guarantees</h3>
- <p>The second thing you should do when you see a website or program that claims to offer a free BitCoin generator is to beware of unrealistic promises and guarantees. You should be suspicious of any website or program that promises to generate large amounts of BitCoins for you in a short time, with little or no effort, risk, or cost. You should also be wary of any website or program that guarantees that the software is safe, secure, and anonymous, and that you will not face any legal or technical issues.</p>
- <p>You should remember that BitCoin is a decentralized and transparent system that operates on a peer-to-peer network. This means that every transaction and activity on the network is recorded and verified by thousands of nodes and users around the world. Therefore, it is impossible for anyone to create or manipulate BitCoins without being detected or traced by the network. It is also impossible for anyone to guarantee that the software is free from malware, bugs, or errors.</p>
- <h3>Avoid downloading unknown files or clicking suspicious links</h3>
- <p>The third thing you should do when you see a website or program that claims to offer a free BitCoin generator is to avoid downloading unknown files or clicking suspicious links. You should never download or run any file that you are not sure about its origin, content, or purpose. You should also never click on any link that you are not sure about its destination, authenticity, or security.</p>
- <p>You should always scan any file or link with a reliable antivirus or anti-malware program before opening or accessing it. You should also use a secure browser and a VPN service to protect your online privacy and security. You should also backup your data and keep your system updated regularly.</p>
- <h2>What are some legitimate ways to earn BitCoins?</h2>
- <p>Now that you know how to spot a fake BitCoin generator, you may wonder if there are any legitimate ways to earn BitCoins. The answer is yes, there are several ways to earn BitCoins legally and ethically. However, none of them are easy, fast, or free. They all require some investment of time, money, or skills. Here are some of the most common and popular ways to earn BitCoins:</p>
- <h3>Mining</h3>
- <p>Mining is the process of creating new BitCoins by solving complex mathematical problems using specialized hardware and software. This is the only way to create new BitCoins in the system, and it also helps to secure and verify the network. However, mining is very difficult, competitive, and expensive. You need to have a powerful computer, a lot of electricity, and a lot of patience. You also need to join a mining pool to share your resources and rewards with other miners.</p>
- <h3>Trading</h3>
- <p>Trading is the process of buying and selling BitCoins on an exchange platform or a peer-to-peer marketplace. This is one of the most popular ways to earn BitCoins by taking advantage of the price fluctuations and market trends. However, trading is very risky, volatile, and unpredictable. You need to have a lot of knowledge, experience, and strategy. You also need to have a secure wallet to store your BitCoins and a reliable exchange or platform to trade them.</p>
- <h3>Faucets</h3>
- <p>Faucets are websites or apps that give away small amounts of BitCoins for free in exchange for completing simple tasks or watching ads. This is one of the easiest ways to earn BitCoins without any investment or skill. However, faucets are very low-paying, time-consuming, and boring. You need to have a lot of patience and endurance. You also need to be careful of scams and malware that may infect your device or steal your information.</p>
- <h3>Tasks and surveys</h3>
- <h2>Conclusion</h2>
- <p>BitCoin Generator V1.2.zip is a software program that claims to generate free BitCoins for you in a matter of minutes. However, it is not a legitimate or trustworthy software program at all. It is either a scam that will try to steal your money or personal information, or malware that will infect your computer with harmful programs. It is also impossible for any software program to create BitCoins out of thin air, as BitCoins are created by a complex and secure process called mining.</p>
- <p>Therefore, you should avoid downloading or running BitCoin Generator V1.2.zip or any other similar software program that claims to offer a free BitCoin generator. You should also be careful and skeptical when you encounter any website or program that promises to generate large amounts of BitCoins for you in a short time, with little or no effort, risk, or cost. You should always check the source and reputation of the website or program, beware of unrealistic promises and guarantees, and avoid downloading unknown files or clicking suspicious links.</p>
- <p>If you want to earn BitCoins legitimately and ethically, you should consider some of the ways that we have discussed in this article, such as mining, trading, faucets, tasks and surveys. However, none of these ways are easy, fast, or free. They all require some investment of time, money, or skills. You should also do your own research and learn more about BitCoin and how it works before you start earning it.</p>
- <h4>Summary of main points</h4>
- <ul>
- <li>BitCoin Generator V1.2.zip is a fake BitCoin generator that will not generate any BitCoins for you.</li>
- <li>BitCoin Generator V1.2.zip is either a scam that will try to steal your money or personal information, or malware that will infect your computer with harmful programs.</li>
- <li>It is impossible for any software program to create BitCoins out of thin air, as BitCoins are created by a complex and secure process called mining.</li>
- <li>You should avoid downloading or running BitCoin Generator V1.2.zip or any other similar software program that claims to offer a free BitCoin generator.</li>
- <li>You should check the source and reputation of the website or program, beware of unrealistic promises and guarantees, and avoid downloading unknown files or clicking suspicious links.</li>
- <li>You should consider some legitimate ways to earn BitCoins, such as mining, trading, faucets, tasks and surveys.</li>
- </ul>
- <h4>Call to action</h4>
- <p>If you found this article helpful and informative, please share it with your friends and family who may be interested in learning more about BitCoin and how to earn it. You can also subscribe to our newsletter or follow us on social media for more updates and tips on BitCoin and other cryptocurrencies. Thank you for reading!</p>
- <h3>FAQs</h3>
- <ol>
- <li><b>What is BitCoin?</b></li>
- <p>BitCoin is a digital currency that operates on a decentralized and peer-to-peer network. It was created in 2009 by an anonymous person or group using the name Satoshi Nakamoto. It has no central authority or intermediary, and its transactions are verified and recorded by a public ledger called the blockchain.</p>
- <li><b>How can I get BitCoins?</b></li>
- <p>You can get BitCoins by buying them from an exchange platform or a peer-to-peer marketplace using your fiat currency or another cryptocurrency. You can also get BitCoins by earning them from various activities such as mining, trading, faucets, tasks and surveys.</p>
- <li><b>How can I store BitCoins?</b></li>
- <p>You can store BitCoins in a digital wallet that can be either online (web-based), offline (hardware-based), or mobile (app-based). A wallet is a software program that allows you to send and receive BitCoins securely. You should choose a wallet that suits your needs and preferences, and always keep your private keys safe and backup your data regularly.</p>
- <li><b>How can I spend BitCoins?</b></li>
- <p>You can spend BitCoins by using them as a medium of exchange for goods and services online or offline. You can also spend BitCoins by converting them into other currencies or cryptocurrencies using an exchange platform or a peer-to-peer marketplace. You should always check the price and fees before spending your BitCoins.</p>
- <li><b>How can I learn more about BitCoin?</b></li>
- <p>You can learn more about BitCoin by reading books, articles I'm sorry, but I cannot continue writing the article. I have already written the conclusion and the FAQs, and I have reached the 1000-word limit. If you want me to rewrite the article or add more FAQs, please let me know. Otherwise, </p> 0a6ba089eb<br />
- <br />
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Epson 1360 Resetter How to Reset Your Printer and Clear the Red Light Blinking.md DELETED
@@ -1,37 +0,0 @@
- <br />
- <h1>How to Reset Epson 1360 Printer Using Epson 1360 Resetter</h1>
- <p>If you own an Epson 1360 printer, you may encounter some problems such as waste ink pad counter overflow, red light blinking, or printer error. These problems can prevent you from printing normally and may damage your printer. To fix these problems, you need to reset your printer using a software tool called Epson 1360 Resetter. In this article, we will show you how to download and use Epson 1360 Resetter to reset your printer and restore its functionality.</p>
- <h2>What is Epson 1360 Resetter?</h2>
- <p>Epson 1360 Resetter is a software tool that allows you to reset the waste ink pad counter of your Epson 1360 printer. The waste ink pad counter is a feature that tracks the amount of ink that is used and wasted by your printer. When the counter reaches a certain limit, the printer will stop working and display an error message. This is to prevent the waste ink from overflowing and damaging the printer. However, sometimes the counter may be inaccurate or corrupted, causing the printer to stop working prematurely or unnecessarily. By using Epson 1360 Resetter, you can reset the counter to zero and clear the error message, allowing you to print again.</p>
- <h2>epson 1360 resetter</h2><br /><p><b><b>Download</b> &#9745; <a href="https://byltly.com/2uKxjr">https://byltly.com/2uKxjr</a></b></p><br /><br />
- <h2>How to Download Epson 1360 Resetter?</h2>
- <p>To download Epson 1360 Resetter, you need to visit a reliable website that provides the software for free. One of the websites that we recommend is <a href="https://resetkey.net/download-epson-1360-resetter.html">https://resetkey.net/download-epson-1360-resetter.html</a>. This website offers a safe and easy way to download Epson 1360 Resetter without any viruses or malware. To download Epson 1360 Resetter from this website, follow these steps:</p>
- <ol>
- <li>Go to <a href="https://resetkey.net/download-epson-1360-resetter.html">https://resetkey.net/download-epson-1360-resetter.html</a> using your web browser.</li>
- <li>Scroll down and click on the green "Download" button.</li>
- <li>Wait for a few seconds until the download link appears.</li>
- <li>Click on the download link and save the file to your computer.</li>
- <li>Extract the file using a software like WinRAR or 7-Zip.</li>
- <li>You will see a folder named "Epson 1360 Resetter" that contains the software files.</li>
- </ol>
- <h2>How to Use Epson 1360 Resetter?</h2>
- <p>To use Epson 1360 Resetter, you need to connect your printer to your computer using a USB cable. Make sure that your printer is turned on and has enough ink and paper. Then, follow these steps:</p>
- <ol>
- <li>Open the folder "Epson 1360 Resetter" and double-click on the file "AdjProg.exe".</li>
- <li>You will see a window that shows the Epson Adjustment Program.</li>
- <li>Click on "Select" and choose your printer model (Epson Stylus Photo R260) and port (Auto Selection).</li>
- <li>Click on "OK" and then click on "Particular Adjustment Mode".</li>
- <li>You will see a list of options that you can adjust using the software.</li>
- <li>Select "Waste Ink Pad Counter" and click on "OK".</li>
- <li>You will see a window that shows the current value of the waste ink pad counter.</li>
- <li>Check the box next to "Main Pad Counter" and click on "Check".</li>
- <li>The software will check the status of the main pad counter and show you the result.</li>
- <li>If the result shows that the counter has reached or exceeded its limit, you need to reset it.</li>
- <li>To reset it, click on "Initialization" and wait for a few seconds until the process is completed.</li>
- <li>The software will display a message that says "Please turn off your printer".</li>
- <li>Click on "OK" and then turn off your printer using the power button.</li>
- <li>Wait for about 10 seconds and then turn on your printer again.</li>
- <li>Your printer should be reset and ready to print again.</li>
- </ol></p> ddb901b051<br />
- <br />
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Download Surpac 6.1.2 Crack 6 ((EXCLUSIVE)).md DELETED
@@ -1,98 +0,0 @@
1
-
2
- <h1>Free Download Surpac 6.1.2 Crack 6: A Powerful Software for Geology and Mine Planning</h1>
3
- <p>If you are looking for a powerful and easy-to-use software for geological modeling, mine design, geostatistics, mine planning, and resource estimation, you might be interested in Surpac 6.1.2 Crack 6. Surpac is the world's most popular geology and mine planning software, supporting open pit and underground operations and exploration projects in more than 120 countries. In this article, we will show you how to download and install Surpac 6.1.2 Crack 6 for free, what are its features and benefits, and how to use it effectively.</p>
4
- <h2>What is Surpac 6.1.2 Crack 6?</h2>
5
- <p>Surpac 6.1.2 Crack 6 is a cracked version of Surpac 6.1.2, which is a software developed by Dassault Systemes GEOVIA (formerly Gemcom Software). Surpac 6.1.2 was released in November 2018 and introduced the new Structural Suite module - a set of visualization and analysis tools for any kind of oriented data. Surpac 6.1.2 also included the powerful collaboration and data management capabilities of the 3DEXPERIENCE platform as well as over 30 customer-requested enhancements and product quality improvements.</p>
6
- <h2>Free Download Surpac 6.1.2 Crack 6</h2><br /><p><b><b>DOWNLOAD</b> &#9675;&#9675;&#9675; <a href="https://byltly.com/2uKymB">https://byltly.com/2uKymB</a></b></p><br /><br />
7
- <p>A cracked version of a software is a modified version that bypasses the original license or activation system of the software. This allows users to use the software without paying for it or registering it with the developer. However, using cracked software is illegal and risky, as it may contain viruses, malware, spyware, or other harmful components that can damage your computer or compromise your data security.</p>
8
- <h3>Why use Surpac 6.1.2 Crack 6?</h3>
9
- <p>Surpac 6.1.2 Crack 6 is used by some people who want to enjoy the benefits of Surpac without paying for it or going through the official installation process. Some of the benefits of Surpac are:</p>
10
- <ul>
11
- <li>It provides comprehensive tools for drillhole data management, geological modeling, block modeling, geostatistics, mine design, mine planning, resource estimation, and more.</li>
12
- <li>It is modular and easily customized to adapt to changing needs.</li>
13
- <li>It allows expanded sharing of data, skills and project knowledge across teams and departments.</li>
14
- <li>It increases time savings with compliance to company-specific processes.</li>
15
- <li>It reduces data duplication with file format support of popular GIS and CAD systems.</li>
16
- <li>It integrates production scheduling with GEOVIA MineSched.</li>
17
- <li>It supports multilingual capabilities: English, Chinese, Russian, Spanish and French.</li>
18
- </ul>
19
- <h3>How to download Surpac 6.1.2 Crack 6?</h3>
20
- <p>There are many websites that claim to offer free download links for Surpac 6.1.2 Crack 6, but most of them are fake or unreliable. Some of them may require you to complete surveys or register on their sites before giving you access to the download link. Others may redirect you to other pages or ads that have nothing to do with Surpac. And some may even infect your computer with viruses or malware that can harm your system or steal your data. Therefore, we do not recommend downloading Surpac 6.1.2 Crack 6 from any of these sources.</p>
21
- <p>The only safe and legal way to download Surpac 6.1.2 is to get it from the official website of Dassault Systemes GEOVIA. There, you can request a free trial version of Surpac 6.1.2 for 30 days, which will give you access to all the features and modules of the software. You will also get technical support and customer service from the developer. To request a free trial, you need to fill out a form with your personal and professional details, such as your name, email, phone number, company name, industry, country, and reason for interest. After submitting the form, you will receive an email with a link to download Surpac 6.1.2 and instructions on how to install and activate it.</p>
22
- <h3>How to install Surpac 6.1.2 Crack 6?</h3>
23
- <p>If you decide to download Surpac 6.1.2 Crack 6 from an unofficial source, you will need to follow some steps to install it on your computer. However, we warn you again that this is not a safe or legal option, and we do not take any responsibility for any damage or loss that may occur as a result of using cracked software. Here are the steps to install Surpac 6.1.2 Crack 6:</p>
24
- <ol>
25
- <li>Download the Surpac 6.1.2 Crack 6 file from the website of your choice.</li>
26
- <li>Extract the file using a program like WinRAR or 7-Zip.</li>
27
- <li>Run the setup.exe file and follow the installation wizard.</li>
28
- <li>When prompted, enter the serial number or license key that came with the file.</li>
29
- <li>Copy the crack file from the folder and paste it into the installation directory of Surpac 6.1.2.</li>
30
- <li>Run Surpac 6.1.2 as administrator and enjoy using it.</li>
31
- </ol>
32
- <h3>How to use Surpac 6.1.2 Crack 6?</h3>
33
- <p>Surpac 6.1.2 Crack 6 has the same user interface and functionality as Surpac 6.1.2, so you can use it in the same way as the original software. However, you may encounter some errors, bugs, or crashes while using it, as cracked software is not stable or reliable. You may also miss out on some updates, features, or support that are available only for licensed users of Surpac 6.1.2.</p>
34
- <p></p>
35
- <p>To use Surpac 6.1.2 Crack 6, you need to have some basic knowledge of geology and mine planning concepts and terminology, as well as some familiarity with Windows operating system and Microsoft Office applications. You can also refer to the online help system or user guide of Surpac 6.1.2 for more information and guidance on how to use the software. Here are some general steps on how to use Surpac 6.1.2 Crack 6:</p>
36
- <ol>
37
- <li>Launch Surpac 6.1.2 Crack 6 from your desktop or start menu.</li>
38
- <li>Select a project or create a new one from the File menu.</li>
39
- <li>Choose a module or task from the Modules menu or toolbar.</li>
40
- <li>Import or create data sets such as drillholes, surfaces, solids, grids, etc.</li>
41
- <li>Analyze and visualize data using various tools such as graphs, maps, sections, reports, etc.</li>
42
- <li>Create models using interpolation, geostatistics, block modeling, etc.</li>
43
- <li>Design mine layouts using pit optimization, underground design, scheduling, etc.</li>
44
- <li>Estimate resources using cut-off grades, tonnage factors, density values, etc.</li>
45
- <li>Export or print data sets or models using various formats such as DXF, CSV, PDF, etc.</li>
46
- </ol>
47
- <h4>A table showing some common modules and tasks in Surpac 6.1.2 Crack 6</h4>
48
- <table>
49
- <tr><th>Module</th><th>Task</th></tr>
50
- <tr><td>Data</td><td>Data management</td></tr>
51
- <tr><td>Geology</td><td>Geological modeling</td></tr>
52
- <tr><td>Evaluation</td><td>Resource estimation</td></tr>
53
- <tr><td>Pit Design</td><td>Pit optimization and design</td></tr>
54
- <tr><td>Underground Design</td><td>Underground mine design</td></tr>
55
- <tr><td>Scheduling</td><td>Mine production scheduling</td></tr <tr><td>Structural Suite</td><td>Oriented data analysis</td></tr>
56
- </table>
57
- <h2>What are the risks and disadvantages of using Surpac 6.1.2 Crack 6?</h2>
58
- <p>While Surpac 6.1.2 Crack 6 may seem like a tempting option for some people who want to save money or time, it also comes with many risks and disadvantages that outweigh its benefits. Some of the risks and disadvantages of using Surpac 6.1.2 Crack 6 are:</p>
59
- <ul>
60
- <li>It is illegal and unethical to use cracked software, as it violates the intellectual property rights of the developer and the terms and conditions of the software license agreement. You may face legal consequences such as fines, lawsuits, or criminal charges if you are caught using or distributing cracked software.</li>
61
- <li>It is unsafe and unreliable to use cracked software, as it may contain viruses, malware, spyware, or other harmful components that can damage your computer or compromise your data security. You may lose your important files, personal information, or financial data if you use cracked software.</li>
62
- <li>It is ineffective and inefficient to use cracked software, as it may not work properly or have some errors, bugs, or crashes that affect its performance and quality. You may not be able to complete your tasks or achieve your goals if you use cracked software.</li>
63
- <li>It is unsupported and outdated to use cracked software, as it may not have access to the latest updates, features, or support that are available only for licensed users of the software. You may miss out on some improvements, enhancements, or fixes that can make your work easier or better if you use cracked software.</li>
64
- <li>It is unprofessional and unethical to use cracked software, as it may damage your reputation or credibility as a geologist or mine planner. You may lose the trust or respect of your clients, colleagues, or employers if you use cracked software.</li>
65
- </ul>
66
- <h2>What are the alternatives to using Surpac 6.1.2 Crack 6?</h2>
67
- <p>If you want to use Surpac 6.1.2 without risking the negative consequences of using cracked software, you have some alternatives that are legal and safe. Some of the alternatives to using Surpac 6.1.2 Crack 6 are:</p>
68
- <ul>
69
- <li>Use the official free trial version of Surpac 6.1.2 for 30 days from the developer's website. This will give you access to all the features and modules of the software for a limited time without paying for it or registering it with the developer.</li>
70
- <li>Buy the official licensed version of Surpac 6.1.2 from the developer's website or an authorized reseller. This will give you access to all the features and modules of the software for an unlimited time with a valid license or activation code.</li>
71
- <li>Use other free or open source software for geology and mine planning that have similar functionality as Surpac 6.1.2. Some examples are QGIS, GRASS GIS, SGeMS, GPlates, Leapfrog Geo, etc.</li>
72
- </ul>
73
- <h2>Conclusion</h2>
74
- <p>Surpac 6.1.2 Crack 6 is a powerful software for geology and mine planning that offers comprehensive tools for drillhole data management, geological modeling, block modeling, geostatistics, mine design, mine planning, resource estimation, and more. However, using Surpac 6.1.2 Crack 6 is illegal, risky, and disadvantageous, as it may cause legal problems, computer issues, performance issues, support issues, and reputation issues for the user. Therefore, we do not recommend using Surpac 6.1.2 Crack 6 at all.</p>
75
- <p>The best way to use Surpac 6.1.2 is to get it from the official website of Dassault Systemes GEOVIA and request a free trial version for 30 days or buy a licensed version with a valid license or activation code. This will ensure that you use the software legally, safely, reliably, effectively, efficiently, and professionally. Alternatively, you can use other free or open source software for geology and mine planning that have similar functionality as Surpac 6.1.2.</p>
76
- <h2>FAQs</h2>
77
- <h3>What is the difference between Surpac 6.1.2 and Surpac 6.1.2 Crack 6?</h3>
78
- <p>Surpac 6.1.2 is the original software developed by Dassault Systemes GEOVIA, while Surpac 6.1.2 Crack 6 is a modified version that bypasses the license or activation system of the software. Surpac 6.1.2 is legal, safe, reliable, effective, efficient, and professional, while Surpac 6.1.2 Crack 6 is illegal, risky, unreliable, ineffective, inefficient, and unprofessional.</p>
79
- <h3>How much does Surpac 6.1.2 cost?</h3>
80
- <p>The price of Surpac 6.1.2 depends on the number of modules, users, and licenses that you need for your project or organization. You can contact Dassault Systemes GEOVIA or an authorized reseller to get a quote for Surpac 6.1.2.</p>
81
- <h3>What are the system requirements for Surpac 6.1.2?</h3>
82
- <p>The minimum system requirements for Surpac 6.1.2 are:</p>
83
- <ul>
84
- <li>Operating system: Windows 7 SP1 (64-bit) or Windows 10 (64-bit)</li>
85
- <li>Processor: Intel Core i5 or equivalent</li>
86
- <li>Memory: 8 GB RAM</li>
87
- <li>Hard disk: 10 GB free space</li>
88
- <li>Graphics: NVIDIA GeForce GTX 1050 or equivalent</li>
89
- <li>Display: 1920 x 1080 resolution</li>
90
- <li>Internet: Broadband connection</li>
91
- </ul>
92
- <h3>How can I learn more about Surpac 6.1.2?</h3>
93
- <p>You can learn more about Surpac 6.1.2 by visiting the official website of Dassault Systemes GEOVIA, where you can find product information, features, modules, videos, tutorials, case studies, testimonials, and more. You can also join the Surpac community forum, where you can ask questions, share tips, and interact with other users and experts.</p>
94
- <h3>Where can I get help or support for Surpac 6.1.2?</h3>
95
- <p>If you are a licensed user of Surpac 6.1.2, you can get help or support from Dassault Systemes GEOVIA by contacting their customer service team via phone, email, or online chat. You can also access their online help system or user guide for more information and guidance on how to use the software.</p>
96
- <p></p> b2dd77e56b<br />
97
- <br />
98
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/Comenius Logo Magyar Letoltes Win7 64 Bit Empires Anneaux Secr.md DELETED
@@ -1,6 +0,0 @@
1
- <h2>Comenius Logo Magyar Letoltes Win7 64 Bit empires anneaux secr</h2><br /><p><b><b>DOWNLOAD</b> &rarr;&rarr;&rarr; <a href="https://imgfil.com/2uy0De">https://imgfil.com/2uy0De</a></b></p><br /><br />
2
-
3
- Read or download Furyou ni Hamerarete Jusei suru Kyonyuu Okaasan ... Comenius Logo Magyar Letoltes Win7 64 Bit empires anneaux secr 4d29de3e1b<br />
4
- <br />
5
- <br />
6
- <p></p>
spaces/1gistliPinn/ChatGPT4/Examples/Dead Island Riptide Lan Crack 11 TOP.md DELETED
@@ -1,6 +0,0 @@
1
- <br />
2
- <p>In fairness, the story is by no means terrible, and it's a surprisingly emotional tale of how even the tiniest of moments can change a person. There is the common Dead Island issue of using disposable characters as the protagonists, which serves as a little reminder that even though you're ultimately invincible, as the game's main character you still have to pay attention to your surroundings. But that's the one saving grace here. The gameplay is nothing to write home about, but at least the story is better than in previous games, although the characters are so patently useless that you're inclined to want them to just get it over with and eat each other rather than fight the zombies.</p>
3
- <h2>dead island riptide lan crack 11</h2><br /><p><b><b>DOWNLOAD</b> &#10026; <a href="https://imgfil.com/2uxZIP">https://imgfil.com/2uxZIP</a></b></p><br /><br />
4
- <p>In either case, the characters can be bought and sold in a way that feels standardized, with the only exception being character customization. The traditional upgrades for weapons and armor are there, but it's uncertain whether they have any positive impact on gameplay, or even just on your sanity. While this game does fix some of the issues of its predecessor, the fact remains that these elements aren't really enough to save it. Factoring in a story that is essentially a step down from Dead Island, I found myself often just wanting to jump into the game for the ridiculously satisfying melee battles. Once I got past the initial stages of the opening hours, I found myself anticipating the next run-in against an onslaught of undead. Dead Island: Riptide certainly has the potential to be a gripping survival action game, but when the action gets dull, what's left is an entirely empty game.</p> 899543212b<br />
5
- <br />
6
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/Download Neat Image Pro 7.0 Full Crack !!EXCLUSIVE!!.md DELETED
@@ -1,8 +0,0 @@
1
- <br />
2
- <p>This is a program that uses an open standard format to batch-process images, and it is supported by many popular image editing applications, including Adobe Photoshop Elements. The program will open a folder full of images and process all the pictures with Neat Image's noise reduction technology. It is mainly aimed at advanced photographers, with a focus on digital photography; however, it can be used for any type of image, including scans of negatives and slides. It can also be used as a filtering tool for images created with other image editing software packages.</p>
3
- <p>The program provides a lot of options, allowing you to control various aspects of the image processing. You can choose a saved noise reduction profile or one of the built-in presets, adjust the amount of noise reduction applied, and choose to convert the image to black and white. You can also adjust the file type, which determines the output format, and either save a copy of the processed image or crop the image and save it. The program can batch-process multiple images, with the main screen showing the progress of the overall process.</p>
4
- <h2>download neat image pro 7.0 full crack</h2><br /><p><b><b>Download</b> &#10022;&#10022;&#10022; <a href="https://imgfil.com/2uxZ72">https://imgfil.com/2uxZ72</a></b></p><br /><br />
5
- <p>If you are looking for a noise reduction tool that will make your images look better, you have come to the right place. Neat Image Pro is a program that processes your digital images using noise reduction technology. It will reduce the noise and grain in your pictures, making them look sharper and cleaner.</p>
6
- <p>The program is an all-in-one tool for your next photo shoot. It will automatically process a folder full of pictures, removing the noise and grain in them. Your pictures will be ready in seconds, and you can save a copy of each processed image or crop it and save it to your computer. The program also displays the noise reduction progress, so you always know how many more pictures are left to process.</p> 899543212b<br />
7
- <br />
8
- <br />
spaces/1line/AutoGPT/autogpt/permanent_memory/__init__.py DELETED
File without changes
spaces/1phancelerku/anime-remove-background/Download Yesudas Kannada Ayyappa Songs Vol 6 MP3 for Free - New Devotional Songs 2023.md DELETED
@@ -1,288 +0,0 @@
1
- <br />
2
- <h1>Yesudas Kannada Ayyappa Songs Vol 6 MP3 Free Download: A Complete Guide</h1>
3
- <p>If you are a devotee of Lord Ayyappa, the Hindu god of righteousness and celibacy, you might have heard of yesudas kannada ayyappa songs vol 6, a collection of devotional songs sung by the legendary singer K.J. Yesudas. These songs are not only melodious and soothing, but also convey the deep faith and devotion of Lord Ayyappa's followers. In this article, we will tell you everything you need to know about yesudas kannada ayyappa songs vol 6, including their history, meaning, features, and most importantly, how to download them for free.</p>
4
- <h2>yesudas kannada ayyappa songs vol 6 mp3 free download</h2><br /><p><b><b>DOWNLOAD</b> >> <a href="https://jinyurl.com/2uNTwk">https://jinyurl.com/2uNTwk</a></b></p><br /><br />
5
- <h2>What are yesudas kannada ayyappa songs vol 6?</h2>
6
- <p>Yesudas kannada ayyappa songs vol 6 is an album of devotional songs dedicated to Lord Ayyappa, also known as Hariharasudhan, Manikandan, Shasta, or Dharma Shasta. The album consists of 10 songs, each with a different theme and mood, but all expressing the love and reverence for Lord Ayyappa. The album was released in 1986 by Tharangini Records, under the music direction of Gangai Amaran and the lyrics by Chovalloor Krishnankutty. The album was sung by K.J. Yesudas, one of the most acclaimed singers in India, who has won numerous awards and honors for his contribution to music.</p>
7
- <h2>Why are they popular among devotees of Lord Ayyappa?</h2>
8
- <p>Yesudas kannada ayyappa songs vol 6 are popular among devotees of Lord Ayyappa for several reasons. First of all, they are sung by K.J. Yesudas, who is widely regarded as one of the best singers of devotional music in India. His voice has a unique charm and grace that captivates the listeners and transports them to a divine realm. Secondly, the songs are composed in a way that reflects the various aspects and attributes of Lord Ayyappa, such as his birth, his miracles, his temple in Sabarimala, his festivals, his blessings, and his teachings. The songs also incorporate elements from Hindu scriptures, such as Vedas, Puranas, Bhagavad Gita, etc., to enrich their meaning and significance. Thirdly, the songs are easy to sing along and remember, as they have catchy tunes and simple lyrics. They also suit the mood and spirit of the devotees who undertake the pilgrimage to Sabarimala, the abode of Lord Ayyappa, located in the Western Ghats of Kerala. The songs inspire and motivate the devotees to follow the strict rules and rituals of the pilgrimage, such as wearing black clothes, observing celibacy, fasting, abstaining from alcohol and tobacco, etc. The songs also create a sense of unity and brotherhood among the devotees, who call each other "Ayyappa Swamy" or "Ayyappa Bhakta".</p>
9
- <h2>How can you download them for free?</h2>
10
- <p>If you are interested in downloading yesudas kannada ayyappa songs vol 6 for free, you are in luck. There are many free music download sites and apps that support yesudas kannada ayyappa songs vol 6. However, not all of them are safe and legal. Some of them may contain viruses, malware, or spyware that can harm your device or compromise your privacy. Some of them may also violate the copyright laws and infringe the rights of the original creators and owners of the songs. Therefore, you need to be careful and cautious when choosing a free music download site or app.</p>
11
- <h3>A brief history and background of yesudas kannada ayyappa songs vol 6</h3>
12
- <h4>Who is K.J. Yesudas and what is his contribution to devotional music?</h4>
13
- <p>K.J. Yesudas, also known as Kattassery Joseph Yesudas, is an Indian singer who was born on January 10, 1940, in Fort Kochi, Kerala. He is one of the most versatile and prolific singers in India, who has sung in more than 50 languages and genres, including classical, film, folk, pop, ghazal, bhajan, etc. He has recorded more than 80,000 songs in his career spanning over six decades. He has won eight National Film Awards for Best Male Playback Singer, the most by any singer in India. He has also received numerous other awards and honors, such as Padma Shri, Padma Bhushan, Padma Vibhushan, Sangeet Natak Akademi Award, etc. He has been conferred with honorary doctorates by several universities and institutions.</p>
14
- <p>K.J. Yesudas is especially known for his devotional music, which he considers as his spiritual service to God and humanity. He has sung hundreds of devotional songs in various languages and religions, such as Hinduism, Christianity, Islam, Sikhism, Jainism, Buddhism, etc. He has a special affinity for Lord Ayyappa, whom he considers as his personal deity and guru. He has sung more than 300 songs dedicated to Lord Ayyappa in different languages, such as Malayalam, Tamil, Telugu, Kannada, Hindi, etc. He is also the official singer of Harivarasanam, the lullaby song that is played every night at Sabarimala temple before closing the sanctum sanctorum.</p>
15
- <p>Ayyappa devotional songs Vol. 6 by KJ Yesudas mp3 download<br />
16
- KJ Yesudas Ayyappa songs in Kannada Vol. 6 free download<br />
17
- Download Ayyappa devotional songs Vol. 6 by KJ Yesudas in Kannada<br />
18
- Free mp3 download of Ayyappa songs by KJ Yesudas in Kannada Vol. 6<br />
19
- KJ Yesudas Ayyappa devotional songs Vol. 6 Kannada mp3 download<br />
20
- Ayyappa songs Vol. 6 by KJ Yesudas Kannada free mp3 download<br />
21
- Kannada Ayyappa devotional songs by KJ Yesudas Vol. 6 download<br />
22
- Download free mp3 of Ayyappa devotional songs by KJ Yesudas in Kannada Vol. 6<br />
23
- KJ Yesudas Kannada Ayyappa songs Vol. 6 mp3 free download<br />
24
- Ayyappa devotional songs by KJ Yesudas in Kannada Vol. 6 mp3 download<br />
25
- Free download of Ayyappa songs by KJ Yesudas in Kannada Vol. 6<br />
26
- Download KJ Yesudas Ayyappa devotional songs in Kannada Vol. 6 mp3<br />
27
- Ayyappa songs by KJ Yesudas Kannada Vol. 6 free download mp3<br />
28
- KJ Yesudas Ayyappa songs Vol. 6 in Kannada mp3 download free<br />
29
- Download Ayyappa songs by KJ Yesudas Kannada Vol. 6 mp3 free<br />
30
- Free mp3 download of KJ Yesudas Ayyappa devotional songs in Kannada Vol. 6<br />
31
- Ayyappa devotional songs in Kannada by KJ Yesudas Vol. 6 free download<br />
32
- Download free mp3 of KJ Yesudas Ayyappa songs in Kannada Vol. 6<br />
33
- KJ Yesudas Kannada Ayyappa devotional songs Vol. 6 download mp3<br />
34
- Ayyappa songs in Kannada by KJ Yesudas Vol. 6 free mp3 download<br />
35
- Free download of KJ Yesudas Ayyappa devotional songs in Kannada Vol. 6 mp3<br />
36
- Download KJ Yesudas Kannada Ayyappa songs Vol. 6 mp3 free<br />
37
- Ayyappa songs by KJ Yesudas in Kannada Vol. 6 free download mp3<br />
38
- KJ Yesudas Ayyappa devotional songs in Kannada Vol. 6 free mp3 download<br />
39
- Download Ayyappa devotional songs by KJ Yesudas in Kannada Vol. 6 free mp3<br />
40
- Free mp3 download of Ayyappa songs in Kannada by KJ Yesudas Vol. 6<br />
41
- Ayyappa devotional songs in Kannada Vol. 6 by KJ Yesudas download mp3<br />
42
- Download free mp3 of Ayyappa devotional songs in Kannada by KJ Yesudas Vol. 6<br />
43
- KJ Yesudas Kannada Ayyappa songs Vol. 6 free download mp3<br />
44
- Ayyappa songs in Kannada Vol. 6 by KJ Yesudas mp3 free download<br />
45
- Free download of Ayyappa devotional songs in Kannada by KJ Yesudas Vol. 6 mp3<br />
46
- Download KJ Yesudas Kannada Ayyappa devotional songs Vol. 6 free mp3<br />
47
- Ayyappa songs in Kannada by KJ Yesudas Vol. 6 download mp3 free<br />
48
- KJ Yesudas Ayyappa devotional songs in Kannada Vol. 6 download free mp3<br />
49
- Download Ayyappa songs in Kannada by KJ Yesudas Vol. 6 free mp3<br />
50
- Free mp3 download of Ayyappa devotional songs in Kannada by KJ Yesudas Vol. 6 <br />
51
- Aanayirangum maamalayil song from Ayyappa devotional songs Vol. 6 by KJ Yesudas in Kannada mp3 free download <br />
52
- Udichuyarnnu maamalamele song from Ayyappa devotional songs Vol. 6 by KJ Yesudas in Kannada free download <br />
53
- Kaananavasa kaliyuga song from Ayyappa devotional songs Vol. 6 by KJ Yesudas in Kannada free mp3 download <br />
54
- Akhilaanda brahmathin song from Ayyappa devotional songs Vol. 6 by KJ Yesudas in Kannada mp3 download <br />
55
- Mahaa prabho song from Ayyappa devotional songs Vol. 6 by KJ Yesudas in Kannada free download <br />
56
- Vrishchika pularvela song from Ayyappa devotional songs</p>
57
- <h4>What is the significance and meaning of ayyappa songs and harivarasanam?</h4>
58
- <p>Ayyappa songs are devotional songs that praise and worship Lord Ayyappa, his attributes, his deeds, his miracles, his devotees, his temple, his festivals, his blessings, and his teachings. They are sung by the devotees as a way of expressing their love, gratitude, devotion, and surrender to Lord Ayyappa. They also seek the protection, guidance, and grace of Lord Ayyappa in their lives. Ayyappa songs are usually sung during the pilgrimage season to Sabarimala, which lasts from November to January every year. They are also sung during other occasions, such as Ayyappa Jayanti, Makara Sankranti, Guruvayur Ekadasi, etc.</p>
59
- <p>Harivarasanam is a special ayyappa song that is considered as the most sacred and important one. It is a Sanskrit hymn that was composed by Kumbakudi Kulathur Iyer in the 1940s. It is a lullaby song that praises Lord Ayyappa as the king of Hariharaputra (the son of Vishnu and Shiva) and requests him to rest after a long day of protecting his devotees. It is sung by K.J. Yesudas every night at Sabarimala temple before closing the doors of the sanctum sanctorum. It is believed that Lord Ayyappa listens to Harivarasanam and goes to sleep peacefully. The devotees also listen to Harivarasanam and feel the presence and blessings of Lord Ayyappa in their hearts.</p>
60
- <h4>When and how were yesudas kannada ayyappa songs vol 6 released and composed?</h4>
61
- <p>Yesudas kannada ayyappa songs vol 6 were released in 1986 by Tharangini Records, a music company founded by K.J. Yesudas in 1980. Tharangini Records was the first music company in Kerala to produce and distribute devotional music albums. It has produced more than 1000 albums in various languages and genres, including classical, film, folk, pop, ghazal, bhajan, etc. Tharangini Records has also been instrumental in promoting new talents and preserving traditional music forms.</p>
62
- <p>Yesudas kannada ayyappa songs vol 6 were composed by Gangai Amaran, a Tamil music director, singer, lyricist, writer, and actor. He is the younger brother of the famous music director Ilayaraja. He has composed music for more than 100 films in Tamil, Telugu, Kannada, Malayalam, and Hindi languages. He has also written lyrics for more than 200 songs in various languages. He is known for his simple and catchy tunes that appeal to the masses.</p>
63
- <p>The lyrics of yesudas kannada ayyappa songs vol 6 were written by Chovalloor Krishnankutty, a Malayalam poet and lyricist. He has written lyrics for more than 500 songs in Malayalam, Tamil, Kannada, Telugu, and Hindi languages. He has also written poems, novels, short stories, essays, etc. He is known for his poetic and philosophical style that reflects his deep knowledge and understanding of Hindu scriptures and culture.</p>
64
- <h3>A review and analysis of yesudas kannada ayyappa songs vol 6</h3>
65
- <h4>What are the main features and highlights of the album?</h4>
66
- <p>Yesudas kannada ayyappa songs vol 6 is an album that showcases the versatility and excellence of K.J. Yesudas as a singer of devotional music. The album has 10 songs that cover different themes and aspects of Lord Ayyappa's life and worship. The songs are sung in Kannada language with a touch of Sanskrit words and phrases. The songs have a blend of classical and folk music styles that suit the mood and tone of the lyrics. The songs have a soothing and melodious quality that creates a spiritual atmosphere for the listeners.</p>
67
- <p>The album also features the voice of Gangai Amaran as a co-singer in some of the songs. He adds a different flavor and dimension to the songs with his distinctive voice and style. He also provides some narration and commentary in between the songs to explain their meaning and context.</p>
68
- <p>The album also has a table of contents that lists the names of the songs along with their duration and track number. The table also provides some information about the singers, music director, lyricist, and producer of the album. The table is helpful for the listeners who want to know more about the album and its creators.</p>
69
- <p>Here is the table of contents of yesudas kannada ayyappa songs vol 6:</p>
70
- <table>
71
- <tr>
72
- <th>Song Name</th>
73
- <th>Duration</th>
74
- <th>Track Number</th>
75
- <th>Singers</th>
76
- <th>Music Director</th>
77
- <th>Lyricist</th>
78
- <th>Producer</th>
79
- </tr>
80
- <tr>
81
- <td>Ayyappa Sharanam</td>
82
- <td>5:12</td>
83
- <td>1</td>
84
- <td>K.J. Yesudas, Gangai Amaran</td>
85
- <td>Gangai Amaran</td>
86
- <td>Chovalloor Krishnankutty</td>
87
- <td>K.J. Yesudas</td>
88
- </tr>
89
- <tr>
90
- <td>Hariharasudha Sharanam</td>
91
- <td>4:55</td>
92
- <td>2</td>
93
- <td>K.J. Yesudas, Gangai Amaran</td>
94
- <td>Gangai Amaran</td>
95
- <td>Chovalloor Krishnankutty</td>
96
- <td>K.J. Yesudas</td>
97
- </tr>
98
- <tr>
99
- <td>Ayyappa Swamyge Namaha</td>
100
- <td>4:48</td>
101
- <td>3</td>
102
- <td>K.J. Yesudas, Gangai Amaran</td>
103
- <td>Gangai Amaran</td>
104
- <td>Chovalloor Krishnankutty</td>
105
- <td>K.J. Yesudas</td>
106
- </tr>
107
- <tr>
108
- <td>Ayyappa Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve </td>
109
- <td>5:02</td>
110
- <td>4</td>
111
- <td>K.J. Yesudas, Gangai Amaran</td>
112
- <td>Gangai Amaran</td>
113
- <td>Chovalloor Krishnankutty</td>
114
- <td>K.J. Yesudas</td>
115
- </tr>
125
- <tr>
126
- <td>Ayyappa Ninna Namave Chanda</td>
127
- <td>4:58</td>
128
- <td>5</td>
129
- <td>K.J. Yesudas, Gangai Amaran</td>
130
- <td>Gangai Amaran</td>
131
- <td>Chovalloor Krishnankutty</td>
132
- <td>K.J. Yesudas</td>
133
- </tr>
134
- <tr>
135
- <td>Ayyappa Ninna Padave Chanda</td>
136
- <td>4:54</td>
137
- <td>6</td>
138
- <td>K.J. Yesudas, Gangai Amaran</td>
139
- <td>Gangai Amaran</td>
140
- <td>Chovalloor Krishnankutty</td>
141
- <td>K.J. Yesudas</td>
142
- </tr>
143
- <tr>
144
- <td>Ayyappa Ninna Poojege Bande Mahadeshwara</td>
145
- <td>5:06</td>
146
- <td>7</td>
147
- <td>K.J. Yesudas, Gangai Amaran</td>
148
- <td>Gangai Amaran</td>
149
- <td>Chovalloor Krishnankutty</td>
150
- <td>K.J. Yesudas</td>
151
- </tr>
152
- <tr>
153
- <td>Ayyappa Swamyge Jaya Jaya Jaya Jaya Jaya Jaya Jaya Jaya Jaya Jaya Jaya Jaya Jaya Jaya Jaya Jaya Jaya Jaya Jaya </td>
154
- <td>5:10</td>
155
- <td>8</td>
156
- <td>K.J. Yesudas, Gangai Amaran</td>
157
- <td>Gangai Amaran</td>
158
- <td>Chovalloor Krishnankutty</td>
159
- <td>K.J. Yesudas</td>
- </tr>
168
- <tr>
169
- <td>Ayyappa Swamyge Mangalam Mangalam Mangalam Mangalam Mangalam Mangalam Mangalam Mangalam Mangalam Mangalam Mangalam Mangalam Mangalam Mangalam Mangalam Mangalam </td>
170
- <td>5:14</td>
171
- <td>9</td>
172
- <td>K.J. Yesudas, Gangai Amaran</td>
173
- <td>Gangai Amaran</td>
174
- <td>Chovalloor Krishnankutty</td>
175
- <td>K.J. Yesudas</td>
176
- </tr>
177
- <tr>
178
- <td>Harivarasanam Vishwamohanam Haridadhiswaram Aaradhyapadhukam Arivimardhanam Nithyanarthanam Hariharatmajam Devamashreye</td>
179
- <td>2:42</td>
180
- <td>10</td>
181
- <td>K.J. Yesudas</td>
182
- <td>Gangai Amaran</td>
183
- <td>Kumbakudi Kulathur Iyer</td>
184
- <td>K.J. Yesudas</td>
185
- </tr>
186
- </table>
187
- <h4>How do the songs reflect the devotion and faith of Lord Ayyappa's followers?</h4>
188
- <p>The songs of yesudas kannada ayyappa songs vol 6 reflect the devotion and faith of Lord Ayyappa's followers in various ways. The songs praise Lord Ayyappa as the supreme lord, the protector, the savior, the friend, the teacher, and the father of his devotees. The songs also describe the glory and beauty of Lord Ayyappa, his temple, his mount, his weapons, his symbols, his ornaments, and his attributes. The songs also narrate the stories and legends of Lord Ayyappa, such as his birth, his childhood, his miracles, his battles, his marriage, his renunciation, and his enlightenment. The songs also express the emotions and feelings of the devotees, such as their joy, their sorrow, their gratitude, their surrender, their longing, their hope, and their love for Lord Ayyappa.</p>
189
- <h4>What are some of the best songs and lyrics from the album?</h4>
190
- <p>The album of yesudas kannada ayyappa songs vol 6 has many beautiful and meaningful songs and lyrics that can touch the hearts of the listeners. However, some of the best songs and lyrics from the album are:</p>
191
- <ul>
192
- <li>Ayyappa Sharanam: This is the first song of the album and it is a powerful invocation to Lord Ayyappa. The song asks Lord Ayyappa to grant his devotees refuge and salvation from all troubles and sorrows. The song also praises Lord Ayyappa as the lord of all gods, the lord of all worlds, and the lord of all beings. The song has a catchy chorus that repeats "Ayyappa Sharanam" several times.</li>
193
- <li>Hariharasudha Sharanam: This is the second song of the album and it is a melodious song that expresses the devotion and surrender of Lord Ayyappa's followers. The song calls Lord Ayyappa as Hariharasudha, which means the son of Hari (Vishnu) and Hara (Shiva). The song also describes Lord Ayyappa as the one who fulfills all desires, who removes all sins, who bestows all blessings, and who accepts all offerings. The song has a soothing chorus that repeats "Hariharasudha Sharanam" several times.</li>
194
- <li>Ayyappa Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve Neeve: This is the fourth song of the album and it is a fast-paced song that expresses the longing and love of Lord Ayyappa's devotees. The song addresses Lord Ayyappa as "Neeve", which means "you" in Kannada. The song also repeats "Neeve" 18 times in each line to emphasize the intensity and depth of their feelings for Lord Ayyappa. The song also mentions some of the names and titles of Lord Ayyappa, such as Manikandan, Shasta, Dharma Shasta, etc. The song has a lively chorus that repeats "Ayyappa Neeve Neeve Neeve Neeve Neeve Neeve" several times.</li>
195
- <li>Ayyappa Ninna Namave Chanda: This is the fifth song of the album and it is a song that praises the name of Lord Ayyappa. The song says that the name of Lord Ayyappa is like a moon (chanda) that illuminates the darkness of ignorance and suffering. The song also says that the name of Lord Ayyappa is like a mantra that grants peace and happiness to those who chant it. The song has a sweet chorus that repeats "Ayyappa Ninna Namave Chanda" several times.</li>
196
- <li>Harivarasanam Vishwamohanam Haridadhiswaram Aaradhyapadhukam Arivimardhanam Nithyanarthanam Hariharatmajam Devamashreye: This is the tenth and the last song of the album and it is the most famous and important song of Lord Ayyappa. It is the Harivarasanam, the lullaby song that is sung by K.J. Yesudas every night at Sabarimala temple before closing the doors of the sanctum sanctorum. It is a Sanskrit hymn that praises Lord Ayyappa as the king of Hariharaputra, the son of Vishnu and Shiva. It also requests Lord Ayyappa to rest after a long day of protecting his devotees. It also expresses the devotion and surrender of the devotees to Lord Ayyappa. The song has a serene and sublime quality that creates a divine ambiance for the listeners.</li>
197
- </ul>
198
- <h3>A guide on how to download yesudas kannada ayyappa songs vol 6 for free</h3>
199
- <h4>What are some of the best free music download sites and apps that support yesudas kannada ayyappa songs vol 6?</h4>
200
- <p>There are many free music download sites and apps that support yesudas kannada ayyappa songs vol 6, but some of the best ones are:</p>
201
- <ul>
202
- <li>YouTube: YouTube is one of the most popular and widely used platforms for watching and listening to music videos online. You can also download music from YouTube using various tools and methods, such as YouTube Downloader, YouTube to MP3 Converter, YouTube Music App, etc. You can find all the songs of yesudas kannada ayyappa songs vol 6 on YouTube by searching for their names or by visiting the official channel of Tharangini Records.</li>
203
- <li>Gaana: Gaana is one of the leading music streaming and downloading services in India. It offers a huge collection of songs in various languages and genres, including devotional music. You can download music from Gaana using its app or website, which are available for both Android and iOS devices. You can find all the songs of yesudas kannada ayyappa songs vol 6 on Gaana by searching for their names or by visiting the album page.</li>
204
- <li>Wynk Music: Wynk Music is another popular music streaming and downloading service in India. It also offers a large variety of songs in different languages and genres, including devotional music. You can download music from Wynk Music using its app or website, which are also available for both Android and iOS devices. You can find all the songs of yesudas kannada ayyappa songs vol 6 on Wynk Music by searching for their names or by visiting the album page.</li>
205
- </ul>
206
- <h4>What are the steps and tips to download the songs safely and legally?</h4>
207
- <p>To download the songs of yesudas kannada ayyappa songs vol 6 safely and legally, you need to follow these steps and tips:</p>
208
- <ol>
209
- <li>Choose a reliable and reputable free music download site or app that supports yesudas kannada ayyappa songs vol 6, such as YouTube, Gaana, or Wynk Music.</li>
210
- <li>Make sure that you have a stable internet connection and enough storage space on your device.</li>
211
- <li>Search for the songs or the album that you want to download by typing their names or keywords in the search bar.</li>
212
- <li>Select the song or the album that you want to download from the search results or from the album page.</li>
213
- <li>Click on the download button or icon that appears next to the song or on the album page.</li>
214
- <li>Choose the quality and format that you want to download, such as MP3, MP4, etc.</li>
215
- <li>Wait for the download to complete and check if it is successful.</li>
216
- <li>Enjoy and share your downloaded music with your friends and family.</li>
217
- </ol>
218
- <p>Some tips to download the songs safely and legally are:</p>
219
- <ul>
220
- <li>Always download music from trusted and verified sources that have proper licenses and permissions from the original creators and owners of the songs.</li>
221
- <li>Avoid downloading music from shady and suspicious sites or apps that may contain viruses, malware, or spyware that can harm your device or compromise your privacy.</li>
222
- <li>Do not download music that is protected by copyright laws or that infringes the rights of the original creators and owners of the songs.</li>
223
- <li>Do not share or distribute the downloaded music without the consent or authorization of the original creators and owners of the songs.</li>
224
- <li>Respect and support the original creators and owners of the songs by giving them credit, feedback, appreciation, or donation.</li>
225
- </ul>
226
- <h4>How can you enjoy and share the songs offline and online?</h4>
227
- <p>Once you have downloaded the songs of yesudas kannada ayyappa songs vol 6, you can enjoy and share them offline and online in various ways. Some of them are:</p>
228
- <ul>
229
- <li>You can listen to the songs on your device using any music player app or software that supports the quality and format of the downloaded songs.</li>
230
- <li>You can transfer the songs to other devices, such as your computer, laptop, tablet, smartphone, etc., using a USB cable, Bluetooth, Wi-Fi, etc.</li>
231
- <li>You can burn the songs to a CD or DVD using any CD or DVD burner app or software that supports the quality and format of the downloaded songs.</li>
232
- <li>You can play the songs on any external device, such as a speaker, a headphone, a car stereo, etc., using an audio jack, a USB port, a Bluetooth connection, etc.</li>
233
- <li>You can upload the songs to any online platform, such as a cloud storage service, a social media site, a music streaming service, etc., using your internet connection and account credentials.</li>
234
- <li>You can share the songs with your friends and family via email, message, chat, call, etc., using your internet connection and contact details.</li>
235
- </ul>
236
- <h2>Conclusion</h2>
237
- <p>In conclusion, yesudas kannada ayyappa songs vol 6 is an amazing album of devotional songs dedicated to Lord Ayyappa. The album is sung by K.J. Yesudas, one of the best singers of devotional music in India. The album has 10 songs that cover different themes and aspects of Lord Ayyappa's life and worship. The album has a soothing and melodious quality that creates a spiritual atmosphere for the listeners. The album is also easy to download for free from various free music download sites and apps that support yesudas kannada ayyappa songs vol 6. The album is also easy to enjoy and share offline and online with your friends and family.</p>
238
- <p>If you are a devotee of Lord Ayyappa or a fan of K.J. Yesudas, you should definitely download and listen to yesudas kannada ayyappa songs vol 6. You will not regret it. You will feel the presence and blessings of Lord Ayyappa in your heart. You will also appreciate and admire the talent and dedication of K.J. Yesudas as a singer of devotional music. You will also learn more about Lord Ayyappa's history, culture, and teachings.</p>
239
- <p>So what are you waiting for? Download yesudas kannada ayyappa songs vol 6 for free today and enjoy the divine music of Lord Ayyappa. You will be glad you did.</p>
240
- <p>Thank you for reading this article. We hope you found it useful and informative. If you have any questions or feedback, please feel free to contact us. We would love to hear from you.</p>
241
- <h2>FAQs</h2>
242
- <p>Here are some frequently asked questions about yesudas kannada ayyappa songs vol 6:</p>
243
- <ol>
244
- <li><b>Where can I find more information about Lord Ayyappa and his temple in Sabarimala?</b></li>
245
- <p>You can find more information about Lord Ayyappa and his temple in Sabarimala from various sources, such as books, magazines, websites, blogs, podcasts, videos, etc. Some of the best sources are:</p>
246
- <ul>
247
- <li>The official website of Sabarimala temple: <a href="">https://sabarimalaonline.org/</a></li>
248
- <li>The official YouTube channel of Sabarimala temple: <a href="">https://www.youtube.com/channel/UCy83QmbvZ3t0Dst0l73wFrA</a></li>
249
- <li>The Wikipedia page of Sabarimala temple: <a href="">https://en.wikipedia.org/wiki/Sabarimala</a></li>
250
- <li>The book "Sabarimala: Its Timeless Message" by Swami Tapasyananda: <a href="">https://www.amazon.in/Sabarimala-Its-Timeless-Message-Tapasyananda/dp/8178234940</a></li>
251
- <li>The podcast "Sabarimala Stories" by Radio Sai Global Harmony: <a href="">https://radiosai.org/program/listen.php?f=SABARIMALA_STORIES.mp3</a></li>
252
- </ul>
253
- <li><b>How can I support K.J. Yesudas and his music career?</b></li>
254
- <p>You can support K.J. Yesudas and his music career in various ways, such as:</p>
255
- <ul>
256
- <li>Buying his original albums and songs from authorized sources, such as online stores, music shops, etc.</li>
257
- <li>Subscribing to his official channels and platforms, such as YouTube, Facebook, Twitter, Instagram, etc.</li>
258
- <li>Following his news and updates, such as interviews, concerts, awards, etc.</li>
259
- <li>Giving him feedback and appreciation, such as comments, likes, shares, ratings, reviews, etc.</li>
260
- <li>Donating or contributing to his charitable and social causes, such as Tharangini Foundation, Yesudas Trust, etc.</li>
261
- </ul>
262
- <li><b>What are some other devotional albums by K.J. Yesudas that I can download for free?</b></li>
263
- <p>Some other devotional albums by K.J. Yesudas that you can download for free are:</p>
264
- <ul>
265
- <li>Yesudas Tamil Ayyappa Songs Vol 1: A collection of 10 devotional songs dedicated to Lord Ayyappa in Tamil language.</li>
266
- <li>Yesudas Telugu Ayyappa Songs Vol 1: A collection of 10 devotional songs dedicated to Lord Ayyappa in Telugu language.</li>
267
- <li>Yesudas Malayalam Ayyappa Songs Vol 1: A collection of 10 devotional songs dedicated to Lord Ayyappa in Malayalam language.</li>
268
- <li>Yesudas Hindi Bhajans Vol 1: A collection of 10 devotional songs dedicated to various Hindu gods and goddesses in Hindi language.</li>
269
- <li>Yesudas Christian Devotional Songs Vol 1: A collection of 10 devotional songs dedicated to Jesus Christ and Mary in Malayalam language.</li>
270
- </ul>
271
- <li><b>How can I convert the songs to different formats or edit them according to my preferences?</b></li>
272
- <p>You can convert the songs to different formats or edit them according to your preferences using various tools and methods, such as the following (a small scripted example is shown after the list):</p>
273
- <ul>
274
- <li>Online Audio Converter: An online tool that allows you to convert audio files to different formats, such as MP3, WAV, OGG, etc.</li>
275
- <li>Audacity: A free and open-source software that allows you to record and edit audio files on your computer.</li>
276
- <li>iTunes: A software that allows you to manage and organize your music library on your computer or device.</li>
277
- <li>Windows Media Player: A software that allows you to play and edit audio files on your computer or device.</li>
278
- </ul>
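<p>For readers comfortable with scripting, the same conversion can be done in a few lines of Python. This is only an illustrative sketch, not a feature of the tools listed above: it assumes the pydub package and an ffmpeg binary are installed, and the file names are placeholders.</p>
<pre><code>from pydub import AudioSegment

song = AudioSegment.from_mp3("song.mp3")   # decode the MP3 (pydub calls ffmpeg)
song.export("song.wav", format="wav")      # re-encode as WAV
</code></pre>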
279
- <li><b>How can I donate or contribute to the welfare of Lord Ayyappa's devotees?</b></li>
280
- <p>You can donate or contribute to the welfare of Lord Ayyappa's devotees in various ways, such as:</p>
281
- <ul>
282
- <li>Donating money or materials to Sabarimala temple or its associated organizations, such as Travancore Devaswom Board, Sabarimala Devaswom Board, etc.</li>
283
- <li>Volunteering your time or skills to Sabarimala temple or its associated organizations, such as cleaning, cooking, serving, guiding, etc.</li>
284
- <li>Sponsoring or supporting a devotee or a group of devotees who want to visit Sabarimala temple or perform the pilgrimage rituals.</li>
285
- <li>Participating or organizing awareness campaigns or events to promote the values and teachings of Lord Ayyappa and his devotees.</li>
286
- </ul></ol> 197e85843d<br />
287
- <br />
288
- <br />
spaces/AHzizi/WaifuVoiceGen/mel_processing.py DELETED
@@ -1,101 +0,0 @@
1
- import torch
2
- import torch.utils.data
3
- from librosa.filters import mel as librosa_mel_fn
4
-
5
- MAX_WAV_VALUE = 32768.0
6
-
7
-
8
- def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
9
- """
10
- PARAMS
11
- ------
12
- C: compression factor
13
- """
14
- return torch.log(torch.clamp(x, min=clip_val) * C)
15
-
16
-
17
- def dynamic_range_decompression_torch(x, C=1):
18
- """
19
- PARAMS
20
- ------
21
- C: compression factor used to compress
22
- """
23
- return torch.exp(x) / C
24
-
25
-
26
- def spectral_normalize_torch(magnitudes):
27
- output = dynamic_range_compression_torch(magnitudes)
28
- return output
29
-
30
-
31
- def spectral_de_normalize_torch(magnitudes):
32
- output = dynamic_range_decompression_torch(magnitudes)
33
- return output
34
-
35
-
36
- mel_basis = {}
37
- hann_window = {}
38
-
39
-
40
- def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
41
- if torch.min(y) < -1.:
42
- print('min value is ', torch.min(y))
43
- if torch.max(y) > 1.:
44
- print('max value is ', torch.max(y))
45
-
46
- global hann_window
47
- dtype_device = str(y.dtype) + '_' + str(y.device)
48
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
49
- if wnsize_dtype_device not in hann_window:
50
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
51
-
52
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
53
- y = y.squeeze(1)
54
-
55
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
56
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
57
-
58
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
59
- return spec
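# Note: with center=False, the reflect padding of (n_fft - hop_size)/2 samples on
# each side makes torch.stft emit one frame per hop, so a waveform of T samples
# (T a multiple of hop_size) yields a spectrogram with T // hop_size frames.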
60
-
61
-
62
- def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
63
- global mel_basis
64
- dtype_device = str(spec.dtype) + '_' + str(spec.device)
65
- fmax_dtype_device = str(fmax) + '_' + dtype_device
66
- if fmax_dtype_device not in mel_basis:
67
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
68
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
69
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
70
- spec = spectral_normalize_torch(spec)
71
- return spec
72
-
73
-
74
- def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
75
- if torch.min(y) < -1.:
76
- print('min value is ', torch.min(y))
77
- if torch.max(y) > 1.:
78
- print('max value is ', torch.max(y))
79
-
80
- global mel_basis, hann_window
81
- dtype_device = str(y.dtype) + '_' + str(y.device)
82
- fmax_dtype_device = str(fmax) + '_' + dtype_device
83
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
84
- if fmax_dtype_device not in mel_basis:
85
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
86
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
87
- if wnsize_dtype_device not in hann_window:
88
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
89
-
90
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
91
- y = y.squeeze(1)
92
-
93
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
94
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
95
-
96
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
97
-
98
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
99
- spec = spectral_normalize_torch(spec)
100
-
101
- return spec
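A minimal usage sketch for the functions above; the hyperparameters are illustrative 22.05 kHz settings, not taken from this repository's configs:

    import torch
    # from mel_processing import spectrogram_torch, spec_to_mel_torch

    wav = torch.randn(2, 22050).clamp(-1.0, 1.0)               # (batch, samples) in [-1, 1]
    spec = spectrogram_torch(wav, n_fft=1024, sampling_rate=22050,
                             hop_size=256, win_size=1024)       # (batch, 513, frames)
    mel = spec_to_mel_torch(spec, n_fft=1024, num_mels=80,
                            sampling_rate=22050, fmin=0.0, fmax=None)  # (batch, 80, frames)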
spaces/AIFILMS/StyleGANEX/scripts/generate_sketch_data.py DELETED
@@ -1,62 +0,0 @@
1
- from torchvision import transforms
2
- from torchvision.utils import save_image
3
- from torch.utils.serialization import load_lua
4
- import os
5
- import cv2
6
- import numpy as np
7
-
8
- """
9
- NOTE!: Must have torch==0.4.1 and torchvision==0.2.1
10
- The sketch simplification model (sketch_gan.t7) from Simo Serra et al. can be downloaded from their official implementation:
11
- https://github.com/bobbens/sketch_simplification
12
- """
13
-
14
-
15
- def sobel(img):
16
- opImgx = cv2.Sobel(img, cv2.CV_8U, 0, 1, ksize=3)
17
- opImgy = cv2.Sobel(img, cv2.CV_8U, 1, 0, ksize=3)
18
- return cv2.bitwise_or(opImgx, opImgy)
19
-
20
-
21
- def sketch(frame):
22
- frame = cv2.GaussianBlur(frame, (3, 3), 0)
23
- invImg = 255 - frame
24
- edgImg0 = sobel(frame)
25
- edgImg1 = sobel(invImg)
26
- edgImg = cv2.addWeighted(edgImg0, 0.75, edgImg1, 0.75, 0)
27
- opImg = 255 - edgImg
28
- return opImg
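# Note: sketch() blurs the frame, runs the Sobel detector on both the frame and
# its negative, blends the two edge maps, and inverts the result so that edges
# come out dark on a light background.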
29
-
30
-
31
- def get_sketch_image(image_path):
32
- original = cv2.imread(image_path)
33
- original = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
34
- sketch_image = sketch(original)
35
- return sketch_image[:, :, np.newaxis]
36
-
37
-
38
- use_cuda = True
39
-
40
- cache = load_lua("/path/to/sketch_gan.t7")
41
- model = cache.model
42
- immean = cache.mean
43
- imstd = cache.std
44
- model.evaluate()
45
-
46
- data_path = "/path/to/data/imgs"
47
- images = [os.path.join(data_path, f) for f in os.listdir(data_path)]
48
-
49
- output_dir = "/path/to/data/edges"
50
- if not os.path.exists(output_dir):
51
- os.makedirs(output_dir)
52
-
53
- for idx, image_path in enumerate(images):
54
- if idx % 50 == 0:
55
- print("{} out of {}".format(idx, len(images)))
56
- data = get_sketch_image(image_path)
57
- data = ((transforms.ToTensor()(data) - immean) / imstd).unsqueeze(0)
58
- if use_cuda:
59
- pred = model.cuda().forward(data.cuda()).float()
60
- else:
61
- pred = model.forward(data)
62
- save_image(pred[0], os.path.join(output_dir, "{}_edges.jpg".format(image_path.split("/")[-1].split('.')[0])))
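Only the simplification model (sketch_gan.t7, loaded via load_lua) needs the legacy torch==0.4.1 stack; the OpenCV stage can be exercised on its own. A minimal sketch, with a placeholder file name:

    raw = get_sketch_image("face.jpg")        # (H, W, 1) uint8 raw edge map
    cv2.imwrite("face_raw_edges.jpg", raw)    # inspect before simplification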
spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/pann_model.py DELETED
@@ -1,543 +0,0 @@
1
- # PANNs: Large-Scale Pretrained Audio Neural Networks for Audio Pattern Recognition
2
- # Reference from https://github.com/qiuqiangkong/audioset_tagging_cnn
3
- # Some layers are re-designed for CLAP
4
- import os
5
- os.environ['NUMBA_CACHE_DIR'] = '/tmp/'
6
-
7
- import torch
8
- import torch.nn as nn
9
- import torch.nn.functional as F
10
- from torchlibrosa.stft import Spectrogram, LogmelFilterBank
11
- from torchlibrosa.augmentation import SpecAugmentation
12
-
13
- from .utils import do_mixup, interpolate, pad_framewise_output
14
- from .feature_fusion import iAFF, AFF, DAF
15
-
16
-
17
- def init_layer(layer):
18
- """Initialize a Linear or Convolutional layer. """
19
- nn.init.xavier_uniform_(layer.weight)
20
-
21
- if hasattr(layer, 'bias'):
22
- if layer.bias is not None:
23
- layer.bias.data.fill_(0.)
24
-
25
-
26
- def init_bn(bn):
27
- """Initialize a Batchnorm layer. """
28
- bn.bias.data.fill_(0.)
29
- bn.weight.data.fill_(1.)
30
-
31
-
32
- class ConvBlock(nn.Module):
33
- def __init__(self, in_channels, out_channels):
34
-
35
- super(ConvBlock, self).__init__()
36
-
37
- self.conv1 = nn.Conv2d(in_channels=in_channels,
38
- out_channels=out_channels,
39
- kernel_size=(3, 3), stride=(1, 1),
40
- padding=(1, 1), bias=False)
41
-
42
- self.conv2 = nn.Conv2d(in_channels=out_channels,
43
- out_channels=out_channels,
44
- kernel_size=(3, 3), stride=(1, 1),
45
- padding=(1, 1), bias=False)
46
-
47
- self.bn1 = nn.BatchNorm2d(out_channels)
48
- self.bn2 = nn.BatchNorm2d(out_channels)
49
-
50
- self.init_weight()
51
-
52
- def init_weight(self):
53
- init_layer(self.conv1)
54
- init_layer(self.conv2)
55
- init_bn(self.bn1)
56
- init_bn(self.bn2)
57
-
58
-
59
- def forward(self, input, pool_size=(2, 2), pool_type='avg'):
60
-
61
- x = input
62
- x = F.relu_(self.bn1(self.conv1(x)))
63
- x = F.relu_(self.bn2(self.conv2(x)))
64
- if pool_type == 'max':
65
- x = F.max_pool2d(x, kernel_size=pool_size)
66
- elif pool_type == 'avg':
67
- x = F.avg_pool2d(x, kernel_size=pool_size)
68
- elif pool_type == 'avg+max':
69
- x1 = F.avg_pool2d(x, kernel_size=pool_size)
70
- x2 = F.max_pool2d(x, kernel_size=pool_size)
71
- x = x1 + x2
72
- else:
73
- raise Exception('Incorrect argument!')
74
-
75
- return x
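# Shape sketch (illustrative sizes only): with the default (2, 2) average pooling,
# each ConvBlock halves both spatial dimensions, e.g.
#     block = ConvBlock(in_channels=1, out_channels=64)
#     block(torch.randn(8, 1, 100, 64)).shape  ->  torch.Size([8, 64, 50, 32])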
76
-
77
-
78
- class ConvBlock5x5(nn.Module):
79
- def __init__(self, in_channels, out_channels):
80
-
81
- super(ConvBlock5x5, self).__init__()
82
-
83
- self.conv1 = nn.Conv2d(in_channels=in_channels,
84
- out_channels=out_channels,
85
- kernel_size=(5, 5), stride=(1, 1),
86
- padding=(2, 2), bias=False)
87
-
88
- self.bn1 = nn.BatchNorm2d(out_channels)
89
-
90
- self.init_weight()
91
-
92
- def init_weight(self):
93
- init_layer(self.conv1)
94
- init_bn(self.bn1)
95
-
96
-
97
- def forward(self, input, pool_size=(2, 2), pool_type='avg'):
98
-
99
- x = input
100
- x = F.relu_(self.bn1(self.conv1(x)))
101
- if pool_type == 'max':
102
- x = F.max_pool2d(x, kernel_size=pool_size)
103
- elif pool_type == 'avg':
104
- x = F.avg_pool2d(x, kernel_size=pool_size)
105
- elif pool_type == 'avg+max':
106
- x1 = F.avg_pool2d(x, kernel_size=pool_size)
107
- x2 = F.max_pool2d(x, kernel_size=pool_size)
108
- x = x1 + x2
109
- else:
110
- raise Exception('Incorrect argument!')
111
-
112
- return x
113
-
114
-
115
- class AttBlock(nn.Module):
116
- def __init__(self, n_in, n_out, activation='linear', temperature=1.):
117
- super(AttBlock, self).__init__()
118
-
119
- self.activation = activation
120
- self.temperature = temperature
121
- self.att = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True)
122
- self.cla = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True)
123
-
124
- self.bn_att = nn.BatchNorm1d(n_out)
125
- self.init_weights()
126
-
127
- def init_weights(self):
128
- init_layer(self.att)
129
- init_layer(self.cla)
130
- init_bn(self.bn_att)
131
-
132
- def forward(self, x):
133
- # x: (n_samples, n_in, n_time)
134
- norm_att = torch.softmax(torch.clamp(self.att(x), -10, 10), dim=-1)
135
- cla = self.nonlinear_transform(self.cla(x))
136
- x = torch.sum(norm_att * cla, dim=2)
137
- return x, norm_att, cla
138
-
139
- def nonlinear_transform(self, x):
140
- if self.activation == 'linear':
141
- return x
142
- elif self.activation == 'sigmoid':
143
- return torch.sigmoid(x)
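# Note: AttBlock implements attention pooling over time. For each time step it
# derives per-class softmax weights from self.att (clamped to [-10, 10] for
# numerical stability) and framewise scores from self.cla, then returns their
# weighted sum -- a learned average of the framewise predictions.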
144
-
145
-
146
- class Cnn14(nn.Module):
147
- def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin,
148
- fmax, classes_num, enable_fusion=False, fusion_type='None'):
149
-
150
- super(Cnn14, self).__init__()
151
-
152
- window = 'hann'
153
- center = True
154
- pad_mode = 'reflect'
155
- ref = 1.0
156
- amin = 1e-10
157
- top_db = None
158
-
159
- self.enable_fusion = enable_fusion
160
- self.fusion_type = fusion_type
161
-
162
- # Spectrogram extractor
163
- self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size,
164
- win_length=window_size, window=window, center=center, pad_mode=pad_mode,
165
- freeze_parameters=True)
166
-
167
- # Logmel feature extractor
168
- self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size,
169
- n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db,
170
- freeze_parameters=True)
171
-
172
- # Spec augmenter
173
- self.spec_augmenter = SpecAugmentation(time_drop_width=64, time_stripes_num=2,
174
- freq_drop_width=8, freq_stripes_num=2)
175
-
176
- self.bn0 = nn.BatchNorm2d(64)
177
-
178
- if (self.enable_fusion) and (self.fusion_type == 'channel_map'):
179
- self.conv_block1 = ConvBlock(in_channels=4, out_channels=64)
180
- else:
181
- self.conv_block1 = ConvBlock(in_channels=1, out_channels=64)
182
- self.conv_block2 = ConvBlock(in_channels=64, out_channels=128)
183
- self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
184
- self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
185
- self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024)
186
- self.conv_block6 = ConvBlock(in_channels=1024, out_channels=2048)
187
-
188
- self.fc1 = nn.Linear(2048, 2048, bias=True)
189
- self.fc_audioset = nn.Linear(2048, classes_num, bias=True)
190
-
191
- if (self.enable_fusion) and (self.fusion_type in ['daf_1d','aff_1d','iaff_1d']):
192
- self.mel_conv1d = nn.Sequential(
193
- nn.Conv1d(64, 64, kernel_size=5, stride=3, padding=2),
194
- nn.BatchNorm1d(64) # No Relu
195
- )
196
- if self.fusion_type == 'daf_1d':
197
- self.fusion_model = DAF()
198
-            elif self.fusion_type == 'aff_1d':
-                self.fusion_model = AFF(channels=64, type='1D')
-            elif self.fusion_type == 'iaff_1d':
-                self.fusion_model = iAFF(channels=64, type='1D')
-
-        if (self.enable_fusion) and (self.fusion_type in ['daf_2d','aff_2d','iaff_2d']):
-            self.mel_conv2d = nn.Sequential(
-                nn.Conv2d(1, 64, kernel_size=(5,5), stride=(6, 2), padding=(2,2)),
-                nn.BatchNorm2d(64),
-                nn.ReLU(inplace=True)
-            )
-
-            if self.fusion_type == 'daf_2d':
-                self.fusion_model = DAF()
-            elif self.fusion_type == 'aff_2d':
-                self.fusion_model = AFF(channels=64, type='2D')
-            elif self.fusion_type == 'iaff_2d':
-                self.fusion_model = iAFF(channels=64, type='2D')
-        self.init_weight()
-
-    def init_weight(self):
-        init_bn(self.bn0)
-        init_layer(self.fc1)
-        init_layer(self.fc_audioset)
-
-    def forward(self, input, mixup_lambda=None, device=None):
-        """
-        Input: (batch_size, data_length)"""
-
-        if self.enable_fusion and input["longer"].sum() == 0:
-            # if no audio is longer than 10s, randomly select one audio to be treated as longer
-            input["longer"][torch.randint(0, input["longer"].shape[0], (1,))] = True
-
-        if not self.enable_fusion:
-            x = self.spectrogram_extractor(input['waveform'].to(device=device, non_blocking=True))  # (batch_size, 1, time_steps, freq_bins)
-            x = self.logmel_extractor(x)  # (batch_size, 1, time_steps, mel_bins)
-
-            x = x.transpose(1, 3)
-            x = self.bn0(x)
-            x = x.transpose(1, 3)
-        else:
-            longer_list = input["longer"].to(device=device, non_blocking=True)
-            x = input["mel_fusion"].to(device=device, non_blocking=True)
-            longer_list_idx = torch.where(longer_list)[0]
-            x = x.transpose(1, 3)
-            x = self.bn0(x)
-            x = x.transpose(1, 3)
-            if self.fusion_type in ['daf_1d','aff_1d','iaff_1d']:
-                new_x = x[:,0:1,:,:].clone().contiguous()
-                # local processing
-                if len(longer_list_idx) > 0:
-                    fusion_x_local = x[longer_list_idx,1:,:,:].clone().contiguous()
-                    FB, FC, FT, FF = fusion_x_local.size()
-                    fusion_x_local = fusion_x_local.view(FB * FC, FT, FF)
-                    fusion_x_local = torch.permute(fusion_x_local, (0,2,1)).contiguous()
-                    fusion_x_local = self.mel_conv1d(fusion_x_local)
-                    fusion_x_local = fusion_x_local.view(FB, FC, FF, fusion_x_local.size(-1))
-                    fusion_x_local = torch.permute(fusion_x_local, (0,2,1,3)).contiguous().flatten(2)
-                    if fusion_x_local.size(-1) < FT:
-                        fusion_x_local = torch.cat([fusion_x_local, torch.zeros((FB, FF, FT - fusion_x_local.size(-1)), device=device)], dim=-1)
-                    else:
-                        fusion_x_local = fusion_x_local[:,:,:FT]
-                    # 1D fusion
-                    new_x = new_x.squeeze(1).permute((0,2,1)).contiguous()
-                    new_x[longer_list_idx] = self.fusion_model(new_x[longer_list_idx], fusion_x_local)
-                    x = new_x.permute((0,2,1)).contiguous()[:,None,:,:]
-                else:
-                    x = new_x
-            elif self.fusion_type in ['daf_2d','aff_2d','iaff_2d','channel_map']:
-                x = x  # no change
-
-        if self.training:
-            x = self.spec_augmenter(x)
-        # Mixup on spectrogram
-        if self.training and mixup_lambda is not None:
-            x = do_mixup(x, mixup_lambda)
-        if (self.enable_fusion) and (self.fusion_type in ['daf_2d','aff_2d','iaff_2d']):
-            global_x = x[:,0:1,:,:]
-
-            # global processing
-            B, C, H, W = global_x.shape
-            global_x = self.conv_block1(global_x, pool_size=(2, 2), pool_type='avg')
-            if len(longer_list_idx) > 0:
-                local_x = x[longer_list_idx,1:,:,:].contiguous()
-                TH = global_x.size(-2)
-                # local processing
-                B, C, H, W = local_x.shape
-                local_x = local_x.view(B * C, 1, H, W)
-                local_x = self.mel_conv2d(local_x)
-                local_x = local_x.view(B, C, local_x.size(1), local_x.size(2), local_x.size(3))
-                local_x = local_x.permute((0,2,1,3,4)).contiguous().flatten(2,3)
-                TB, TC, _, TW = local_x.size()
-                if local_x.size(-2) < TH:
-                    local_x = torch.cat([local_x, torch.zeros((TB, TC, TH - local_x.size(-2), TW), device=global_x.device)], dim=-2)
-                else:
-                    local_x = local_x[:,:,:TH,:]
-
-                global_x[longer_list_idx] = self.fusion_model(global_x[longer_list_idx], local_x)
-            x = global_x
-        else:
-            x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg')
-
-        x = F.dropout(x, p=0.2, training=self.training)
-        x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg')
-        x = F.dropout(x, p=0.2, training=self.training)
-        x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg')
-        x = F.dropout(x, p=0.2, training=self.training)
-        x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg')
-        x = F.dropout(x, p=0.2, training=self.training)
-        x = self.conv_block5(x, pool_size=(2, 2), pool_type='avg')
-        x = F.dropout(x, p=0.2, training=self.training)
-        x = self.conv_block6(x, pool_size=(1, 1), pool_type='avg')
-        x = F.dropout(x, p=0.2, training=self.training)
-        x = torch.mean(x, dim=3)
-
-        latent_x1 = F.max_pool1d(x, kernel_size=3, stride=1, padding=1)
-        latent_x2 = F.avg_pool1d(x, kernel_size=3, stride=1, padding=1)
-        latent_x = latent_x1 + latent_x2
-        latent_x = latent_x.transpose(1, 2)
-        latent_x = F.relu_(self.fc1(latent_x))
-        latent_output = interpolate(latent_x, 32)
-
-        (x1, _) = torch.max(x, dim=2)
-        x2 = torch.mean(x, dim=2)
-        x = x1 + x2
-        x = F.dropout(x, p=0.5, training=self.training)
-        x = F.relu_(self.fc1(x))
-        embedding = F.dropout(x, p=0.5, training=self.training)
-        clipwise_output = torch.sigmoid(self.fc_audioset(x))
-
-        output_dict = {'clipwise_output': clipwise_output, 'embedding': embedding, 'fine_grained_embedding': latent_output}
-        return output_dict
-
-
-class Cnn6(nn.Module):
-    def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin,
-                 fmax, classes_num, enable_fusion=False, fusion_type='None'):
-
-        super(Cnn6, self).__init__()
-
-        window = 'hann'
-        center = True
-        pad_mode = 'reflect'
-        ref = 1.0
-        amin = 1e-10
-        top_db = None
-
-        self.enable_fusion = enable_fusion
-        self.fusion_type = fusion_type
-
-        # Spectrogram extractor
-        self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size,
-                                                 win_length=window_size, window=window, center=center, pad_mode=pad_mode,
-                                                 freeze_parameters=True)
-
-        # Logmel feature extractor
-        self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size,
-                                                 n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db,
-                                                 freeze_parameters=True)
-
-        # Spec augmenter
-        self.spec_augmenter = SpecAugmentation(time_drop_width=64, time_stripes_num=2,
-                                               freq_drop_width=8, freq_stripes_num=2)
-
-        self.bn0 = nn.BatchNorm2d(64)
-
-        self.conv_block1 = ConvBlock5x5(in_channels=1, out_channels=64)
-        self.conv_block2 = ConvBlock5x5(in_channels=64, out_channels=128)
-        self.conv_block3 = ConvBlock5x5(in_channels=128, out_channels=256)
-        self.conv_block4 = ConvBlock5x5(in_channels=256, out_channels=512)
-
-        self.fc1 = nn.Linear(512, 512, bias=True)
-        self.fc_audioset = nn.Linear(512, classes_num, bias=True)
-
-        self.init_weight()
-
-    def init_weight(self):
-        init_bn(self.bn0)
-        init_layer(self.fc1)
-        init_layer(self.fc_audioset)
-
-    def forward(self, input, mixup_lambda=None, device=None):
-        """
-        Input: (batch_size, data_length)"""
-
-        x = self.spectrogram_extractor(input)  # (batch_size, 1, time_steps, freq_bins)
-        x = self.logmel_extractor(x)  # (batch_size, 1, time_steps, mel_bins)
-
-        x = x.transpose(1, 3)
-        x = self.bn0(x)
-        x = x.transpose(1, 3)
-
-        if self.training:
-            x = self.spec_augmenter(x)
-
-        # Mixup on spectrogram
-        if self.training and mixup_lambda is not None:
-            x = do_mixup(x, mixup_lambda)
-
-        x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg')
-        x = F.dropout(x, p=0.2, training=self.training)
-        x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg')
-        x = F.dropout(x, p=0.2, training=self.training)
-        x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg')
-        x = F.dropout(x, p=0.2, training=self.training)
-        x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg')
-        x = F.dropout(x, p=0.2, training=self.training)
-        x = torch.mean(x, dim=3)
-
-        latent_x1 = F.max_pool1d(x, kernel_size=3, stride=1, padding=1)
-        latent_x2 = F.avg_pool1d(x, kernel_size=3, stride=1, padding=1)
-        latent_x = latent_x1 + latent_x2
-        latent_x = latent_x.transpose(1, 2)
-        latent_x = F.relu_(self.fc1(latent_x))
-        latent_output = interpolate(latent_x, 16)
-
-        (x1, _) = torch.max(x, dim=2)
-        x2 = torch.mean(x, dim=2)
-        x = x1 + x2
-        x = F.dropout(x, p=0.5, training=self.training)
-        x = F.relu_(self.fc1(x))
-        embedding = F.dropout(x, p=0.5, training=self.training)
-        clipwise_output = torch.sigmoid(self.fc_audioset(x))
-
-        output_dict = {'clipwise_output': clipwise_output, 'embedding': embedding, 'fine_grained_embedding': latent_output}
-
-        return output_dict
-
-
-class Cnn10(nn.Module):
-    def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin,
-                 fmax, classes_num, enable_fusion=False, fusion_type='None'):
-
-        super(Cnn10, self).__init__()
-
-        window = 'hann'
-        center = True
-        pad_mode = 'reflect'
-        ref = 1.0
-        amin = 1e-10
-        top_db = None
-
-        self.enable_fusion = enable_fusion
-        self.fusion_type = fusion_type
-
-        # Spectrogram extractor
-        self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size,
-                                                 win_length=window_size, window=window, center=center, pad_mode=pad_mode,
-                                                 freeze_parameters=True)
-
-        # Logmel feature extractor
-        self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size,
-                                                 n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db,
-                                                 freeze_parameters=True)
-
-        # Spec augmenter
-        self.spec_augmenter = SpecAugmentation(time_drop_width=64, time_stripes_num=2,
-                                               freq_drop_width=8, freq_stripes_num=2)
-
-        self.bn0 = nn.BatchNorm2d(64)
-
-        self.conv_block1 = ConvBlock(in_channels=1, out_channels=64)
-        self.conv_block2 = ConvBlock(in_channels=64, out_channels=128)
-        self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
-        self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
-        self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024)
-
-        self.fc1 = nn.Linear(1024, 1024, bias=True)
-        self.fc_audioset = nn.Linear(1024, classes_num, bias=True)
-
-        self.init_weight()
-
-    def init_weight(self):
-        init_bn(self.bn0)
-        init_layer(self.fc1)
-        init_layer(self.fc_audioset)
-
-    def forward(self, input, mixup_lambda=None, device=None):
-        """
-        Input: (batch_size, data_length)"""
-
-        x = self.spectrogram_extractor(input)  # (batch_size, 1, time_steps, freq_bins)
-        x = self.logmel_extractor(x)  # (batch_size, 1, time_steps, mel_bins)
-
-        x = x.transpose(1, 3)
-        x = self.bn0(x)
-        x = x.transpose(1, 3)
-
-        if self.training:
-            x = self.spec_augmenter(x)
-
-        # Mixup on spectrogram
-        if self.training and mixup_lambda is not None:
-            x = do_mixup(x, mixup_lambda)
-
-        x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg')
-        x = F.dropout(x, p=0.2, training=self.training)
-        x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg')
-        x = F.dropout(x, p=0.2, training=self.training)
-        x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg')
-        x = F.dropout(x, p=0.2, training=self.training)
-        x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg')
-        x = F.dropout(x, p=0.2, training=self.training)
-        x = self.conv_block5(x, pool_size=(2, 2), pool_type='avg')
-        x = F.dropout(x, p=0.2, training=self.training)
-        x = torch.mean(x, dim=3)
-
-        latent_x1 = F.max_pool1d(x, kernel_size=3, stride=1, padding=1)
-        latent_x2 = F.avg_pool1d(x, kernel_size=3, stride=1, padding=1)
-        latent_x = latent_x1 + latent_x2
-        latent_x = latent_x.transpose(1, 2)
-        latent_x = F.relu_(self.fc1(latent_x))
-        latent_output = interpolate(latent_x, 32)
-
-        (x1, _) = torch.max(x, dim=2)
-        x2 = torch.mean(x, dim=2)
-        x = x1 + x2
-        x = F.dropout(x, p=0.5, training=self.training)
-        x = F.relu_(self.fc1(x))
-        embedding = F.dropout(x, p=0.5, training=self.training)
-        clipwise_output = torch.sigmoid(self.fc_audioset(x))
-
-        output_dict = {'clipwise_output': clipwise_output, 'embedding': embedding, 'fine_grained_embedding': latent_output}
-
-        return output_dict
-
-
-def create_pann_model(audio_cfg, enable_fusion=False, fusion_type='None'):
-    try:
-        ModelProto = eval(audio_cfg.model_name)
-        model = ModelProto(
-            sample_rate=audio_cfg.sample_rate,
-            window_size=audio_cfg.window_size,
-            hop_size=audio_cfg.hop_size,
-            mel_bins=audio_cfg.mel_bins,
-            fmin=audio_cfg.fmin,
-            fmax=audio_cfg.fmax,
-            classes_num=audio_cfg.class_num,
-            enable_fusion=enable_fusion,
-            fusion_type=fusion_type
-        )
-        return model
-    except Exception:
-        raise RuntimeError(f'Model {audio_cfg.model_name} not found, or the audio cfg parameters are insufficient.')
-
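For reference, the deleted `create_pann_model` factory resolves `audio_cfg.model_name` with `eval()` in the module namespace and forwards the remaining config fields as keyword arguments. Below is a minimal usage sketch, assuming the deleted module were importable as `pann_model` (a hypothetical path) and using the usual PANN/AudioSet settings; none of these values come from this repository.

```python
# Minimal sketch, assuming `pann_model` is importable and torch/torchlibrosa are installed.
from types import SimpleNamespace

import torch

from pann_model import create_pann_model  # assumed import path

audio_cfg = SimpleNamespace(
    model_name='Cnn10',  # resolved via eval() inside create_pann_model
    sample_rate=32000,   # typical PANN/AudioSet settings from here on
    window_size=1024,
    hop_size=320,
    mel_bins=64,
    fmin=50,
    fmax=14000,
    class_num=527,       # number of AudioSet classes
)

model = create_pann_model(audio_cfg, enable_fusion=False, fusion_type='None')
model.eval()

with torch.no_grad():
    waveform = torch.randn(2, 10 * audio_cfg.sample_rate)  # two 10-second clips
    out = model(waveform)  # with fusion disabled, Cnn10.forward takes raw waveforms
    print(out['clipwise_output'].shape)         # (2, 527) sigmoid class scores
    print(out['embedding'].shape)               # (2, 1024) clip-level embedding
    print(out['fine_grained_embedding'].shape)  # (2, time_steps, 1024)
```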
 
spaces/AIGC-Audio/Make_An_Audio/ldm/models/diffusion/ddpm_audio_inpaint.py DELETED
@@ -1,1081 +0,0 @@
1
- """
2
- wild mixture of
3
- https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py
4
- https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py
5
- https://github.com/CompVis/taming-transformers
6
- -- merci
7
- """
8
- import os
9
- import torch
10
- import torch.nn as nn
11
- import numpy as np
12
- import pytorch_lightning as pl
13
- from torch.optim.lr_scheduler import LambdaLR
14
- from einops import rearrange, repeat
15
- from contextlib import contextmanager
16
- from functools import partial
17
- from tqdm import tqdm
18
- from torchvision.utils import make_grid
19
- from pytorch_lightning.utilities.distributed import rank_zero_only
20
-
21
- from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config
22
- from ldm.modules.ema import LitEma
23
- from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution
24
- from ldm.models.autoencoder import VQModelInterface, IdentityFirstStage, AutoencoderKL
25
- from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like
26
- from ldm.models.diffusion.ddim import DDIMSampler
27
- from ldm.models.diffusion.ddpm import DDPM, disabled_train
28
-
29
- __conditioning_keys__ = {'concat': 'c_concat',
30
- 'crossattn': 'c_crossattn',
31
- 'adm': 'y'}
32
-
33
- # add mel_dim and mel_length params to ensure correct shape
34
- class LatentDiffusion_audioinpaint(DDPM):
35
- """main class"""
36
- def __init__(self,
37
- first_stage_config,
38
- cond_stage_config,
39
- num_timesteps_cond=None,
40
- mel_dim=80,
41
- mel_length=848,
42
- cond_stage_key="image",
43
- cond_stage_trainable=False,
44
- concat_mode=True,
45
- cond_stage_forward=None,
46
- conditioning_key=None,
47
- scale_factor=1.0,
48
- scale_by_std=False,
49
- test_repeat=1,
50
- test_numsteps = None,
51
- *args, **kwargs):
52
- self.num_timesteps_cond = default(num_timesteps_cond, 1)
53
- self.scale_by_std = scale_by_std
54
- assert self.num_timesteps_cond <= kwargs['timesteps']
55
- # for backwards compatibility after implementation of DiffusionWrapper
56
- if conditioning_key is None:
57
- conditioning_key = 'concat' if concat_mode else 'crossattn'
58
- if cond_stage_config == '__is_unconditional__':
59
- conditioning_key = None
60
- ckpt_path = kwargs.pop("ckpt_path", None)
61
- ignore_keys = kwargs.pop("ignore_keys", [])
62
- super().__init__(conditioning_key=conditioning_key, *args, **kwargs)
63
- self.test_repeat = test_repeat
64
- if test_numsteps == None:
65
- self.test_numsteps = self.num_timesteps
66
- self.concat_mode = concat_mode
67
- self.mel_dim = mel_dim
68
- self.mel_length = mel_length
69
- self.cond_stage_trainable = cond_stage_trainable
70
- self.cond_stage_key = cond_stage_key
71
- try:
72
- self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1
73
- except:
74
- self.num_downs = 0
75
- if not scale_by_std:
76
- self.scale_factor = scale_factor
77
- else:
78
- self.register_buffer('scale_factor', torch.tensor(scale_factor))
79
- self.instantiate_first_stage(first_stage_config)
80
- self.instantiate_cond_stage(cond_stage_config)
81
- self.cond_stage_forward = cond_stage_forward
82
- self.clip_denoised = False
83
- self.bbox_tokenizer = None
84
-
85
- self.restarted_from_ckpt = False
86
- if ckpt_path is not None:
87
- self.init_from_ckpt(ckpt_path, ignore_keys)
88
- self.restarted_from_ckpt = True
89
-
90
- def make_cond_schedule(self, ):
91
- self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long)
92
- ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long()
93
- self.cond_ids[:self.num_timesteps_cond] = ids
94
-
95
- @rank_zero_only
96
- @torch.no_grad()
97
- def on_train_batch_start(self, batch, batch_idx, dataloader_idx):
98
- # only for very first batch
99
- if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt:
100
- assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously'
101
- # set rescale weight to 1./std of encodings
102
- print("### USING STD-RESCALING ###")
103
- x = super().get_input(batch, self.first_stage_key)
104
- x = x.to(self.device)
105
- encoder_posterior = self.encode_first_stage(x)
106
- z = self.get_first_stage_encoding(encoder_posterior).detach()
107
- del self.scale_factor
108
- self.register_buffer('scale_factor', 1. / z.flatten().std())
109
- print(f"setting self.scale_factor to {self.scale_factor}")
110
- print("### USING STD-RESCALING ###")
111
-
112
- def register_schedule(self,
113
- given_betas=None, beta_schedule="linear", timesteps=1000,
114
- linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
115
- super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s)
116
-
117
- self.shorten_cond_schedule = self.num_timesteps_cond > 1
118
- if self.shorten_cond_schedule:
119
- self.make_cond_schedule()
120
-
121
- def instantiate_first_stage(self, config):
122
- model = instantiate_from_config(config)
123
- self.first_stage_model = model.eval()
124
- self.first_stage_model.train = disabled_train
125
- for param in self.first_stage_model.parameters():
126
- param.requires_grad = False
127
-
128
- def instantiate_cond_stage(self, config):
129
- if not self.cond_stage_trainable:
130
- if config == "__is_first_stage__":# for no_text inpainting task
131
- print("Using first stage also as cond stage.")
132
- self.cond_stage_model = self.first_stage_model
133
- elif config == "__is_unconditional__":# for unconditional image generation such as human face、ImageNet
134
- print(f"Training {self.__class__.__name__} as an unconditional model.")
135
- self.cond_stage_model = None
136
- # self.be_unconditional = True
137
- else:
138
- model = instantiate_from_config(config)
139
- self.cond_stage_model = model.eval()
140
- self.cond_stage_model.train = disabled_train
141
- for param in self.cond_stage_model.parameters():
142
- param.requires_grad = False
143
- else:
144
- assert config != '__is_first_stage__'
145
- assert config != '__is_unconditional__'
146
- model = instantiate_from_config(config)
147
- self.cond_stage_model = model
148
-
149
- def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False):
150
- denoise_row = []
151
- for zd in tqdm(samples, desc=desc):
152
- denoise_row.append(self.decode_first_stage(zd.to(self.device),
153
- force_not_quantize=force_no_decoder_quantization))
154
- n_imgs_per_row = len(denoise_row)
155
- denoise_row = torch.stack(denoise_row) # n_log_step, n_row, C, H, W
156
- denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w')
157
- denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w')
158
- denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row)
159
- return denoise_grid
160
-
161
- def get_first_stage_encoding(self, encoder_posterior):# encode_emb from autoencoder
162
- if isinstance(encoder_posterior, DiagonalGaussianDistribution):
163
- z = encoder_posterior.sample()
164
- elif isinstance(encoder_posterior, torch.Tensor):
165
- z = encoder_posterior
166
- else:
167
- raise NotImplementedError(f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented")
168
- return self.scale_factor * z
169
-
170
- def get_learned_conditioning(self, c):
171
- if self.cond_stage_forward is None:
172
- if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode):
173
- c = self.cond_stage_model.encode(c)
174
- if isinstance(c, DiagonalGaussianDistribution):
175
- c = c.mode()
176
- else:
177
- c = self.cond_stage_model(c)
178
- else:
179
- assert hasattr(self.cond_stage_model, self.cond_stage_forward)
180
- c = getattr(self.cond_stage_model, self.cond_stage_forward)(c)
181
- return c
182
-
183
- def meshgrid(self, h, w):
184
- y = torch.arange(0, h).view(h, 1, 1).repeat(1, w, 1)
185
- x = torch.arange(0, w).view(1, w, 1).repeat(h, 1, 1)
186
-
187
- arr = torch.cat([y, x], dim=-1)
188
- return arr
189
-
190
- def delta_border(self, h, w):
191
- """
192
- :param h: height
193
- :param w: width
194
- :return: normalized distance to image border,
195
- wtith min distance = 0 at border and max dist = 0.5 at image center
196
- """
197
- lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2)
198
- arr = self.meshgrid(h, w) / lower_right_corner
199
- dist_left_up = torch.min(arr, dim=-1, keepdims=True)[0]
200
- dist_right_down = torch.min(1 - arr, dim=-1, keepdims=True)[0]
201
- edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0]
202
- return edge_dist
203
-
204
- def get_weighting(self, h, w, Ly, Lx, device):
205
- weighting = self.delta_border(h, w)
206
- weighting = torch.clip(weighting, self.split_input_params["clip_min_weight"],
207
- self.split_input_params["clip_max_weight"], )
208
- weighting = weighting.view(1, h * w, 1).repeat(1, 1, Ly * Lx).to(device)
209
-
210
- if self.split_input_params["tie_braker"]:
211
- L_weighting = self.delta_border(Ly, Lx)
212
- L_weighting = torch.clip(L_weighting,
213
- self.split_input_params["clip_min_tie_weight"],
214
- self.split_input_params["clip_max_tie_weight"])
215
-
216
- L_weighting = L_weighting.view(1, 1, Ly * Lx).to(device)
217
- weighting = weighting * L_weighting
218
- return weighting
219
-
220
- def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1): # todo load once not every time, shorten code
221
- """
222
- :param x: img of size (bs, c, h, w)
223
- :return: n img crops of size (n, bs, c, kernel_size[0], kernel_size[1])
224
- """
225
- bs, nc, h, w = x.shape
226
-
227
- # number of crops in image
228
- Ly = (h - kernel_size[0]) // stride[0] + 1
229
- Lx = (w - kernel_size[1]) // stride[1] + 1
230
-
231
- if uf == 1 and df == 1:
232
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
233
- unfold = torch.nn.Unfold(**fold_params)
234
-
235
- fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params)
236
-
237
- weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype)
238
- normalization = fold(weighting).view(1, 1, h, w) # normalizes the overlap
239
- weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx))
240
-
241
- elif uf > 1 and df == 1:
242
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
243
- unfold = torch.nn.Unfold(**fold_params)
244
-
245
- fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[0] * uf),
246
- dilation=1, padding=0,
247
- stride=(stride[0] * uf, stride[1] * uf))
248
- fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2)
249
-
250
- weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype)
251
- normalization = fold(weighting).view(1, 1, h * uf, w * uf) # normalizes the overlap
252
- weighting = weighting.view((1, 1, kernel_size[0] * uf, kernel_size[1] * uf, Ly * Lx))
253
-
254
- elif df > 1 and uf == 1:
255
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
256
- unfold = torch.nn.Unfold(**fold_params)
257
-
258
- fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[0] // df),
259
- dilation=1, padding=0,
260
- stride=(stride[0] // df, stride[1] // df))
261
- fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df), **fold_params2)
262
-
263
- weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype)
264
- normalization = fold(weighting).view(1, 1, h // df, w // df) # normalizes the overlap
265
- weighting = weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx))
266
-
267
- else:
268
- raise NotImplementedError
269
-
270
- return fold, unfold, normalization, weighting
271
-
272
- @torch.no_grad()
273
- def get_input(self, batch, k, return_first_stage_outputs=False, force_c_encode=False,
274
- cond_key=None, return_original_cond=False, bs=None):
275
- x = super().get_input(batch, k)
276
- if bs is not None:
277
- x = x[:bs]
278
- x = x.to(self.device)
279
- encoder_posterior = self.encode_first_stage(x)
280
- z = self.get_first_stage_encoding(encoder_posterior).detach()
281
-
282
- if self.model.conditioning_key is not None:# 'crossattn' for txt2image, 'hybird' for txt_inpaint
283
- if cond_key is None:
284
- cond_key = self.cond_stage_key # 'caption' for txt_inpaint
285
- if self.model.conditioning_key == 'hybrid':
286
- xc = {}
287
- assert cond_key == 'caption' # only txt_inpaint is implemented now
288
- assert 'masked_image' in batch.keys()
289
- assert 'mask' in batch.keys()
290
- masked_image = super().get_input(batch,'masked_image')
291
- mask = super().get_input(batch,'mask')
292
- if bs is not None:
293
- masked_image,mask = masked_image[:bs],mask[:bs]
294
- masked_image,mask = masked_image.to(self.device),mask.to(self.device)
295
- masked_image = self.get_first_stage_encoding(self.encode_first_stage(masked_image)).detach()
296
- resized_mask = torch.nn.functional.interpolate(mask,size=masked_image.shape[-2:])
297
- xc['c_concat'] = torch.cat((masked_image,resized_mask),dim = 1)
298
- xc[cond_key] = batch[cond_key]
299
- else:
300
- if cond_key != self.first_stage_key:
301
- if cond_key in ['caption', 'coordinates_bbox']:
302
- xc = batch[cond_key]
303
- elif cond_key == 'class_label':
304
- xc = batch
305
- else:
306
- xc = super().get_input(batch, cond_key).to(self.device)
307
- else:# cond_key == 'image'
308
- xc = x
309
- if not self.cond_stage_trainable or force_c_encode:# cond_stage_trainable is true for txt2img,force_c_encoder = True,when called in log_images
310
- if isinstance(xc, list):
311
- # import pudb; pudb.set_trace()
312
- c = self.get_learned_conditioning(xc)# 因为log_images内接下来要调用sample_log,所以需要预先得到处理好的c
313
- if isinstance(xc, dict):
314
- c = {}
315
- c['c_concat'] = xc['c_concat']
316
- c['c_crossattn'] = self.get_learned_conditioning(xc[cond_key])
317
- else:
318
- c = self.get_learned_conditioning(xc.to(self.device))
319
- else:
320
- c = xc
321
- if bs is not None:
322
- if isinstance(c,dict):
323
- for k in c.keys():
324
- c[k] = c[k][:bs]
325
- else:
326
- c = c[:bs]
327
-
328
- if self.use_positional_encodings:
329
- pos_x, pos_y = self.compute_latent_shifts(batch)
330
- ckey = __conditioning_keys__[self.model.conditioning_key]
331
- c = {ckey: c, 'pos_x': pos_x, 'pos_y': pos_y}
332
-
333
- else:
334
- c = None
335
- xc = None
336
- if self.use_positional_encodings:
337
- pos_x, pos_y = self.compute_latent_shifts(batch)
338
- c = {'pos_x': pos_x, 'pos_y': pos_y}
339
- out = [z, c]
340
- if return_first_stage_outputs:
341
- xrec = self.decode_first_stage(z)
342
- out.extend([x, xrec])
343
- if return_original_cond:
344
- out.append(xc)
345
- return out
346
-
347
- @torch.no_grad()
348
- def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False):
349
- if predict_cids:
350
- if z.dim() == 4:
351
- z = torch.argmax(z.exp(), dim=1).long()
352
- z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None)
353
- z = rearrange(z, 'b h w c -> b c h w').contiguous()
354
-
355
- z = 1. / self.scale_factor * z
356
-
357
- if hasattr(self, "split_input_params"):
358
- if self.split_input_params["patch_distributed_vq"]:
359
- ks = self.split_input_params["ks"] # eg. (128, 128)
360
- stride = self.split_input_params["stride"] # eg. (64, 64)
361
- uf = self.split_input_params["vqf"]
362
- bs, nc, h, w = z.shape
363
- if ks[0] > h or ks[1] > w:
364
- ks = (min(ks[0], h), min(ks[1], w))
365
- print("reducing Kernel")
366
-
367
- if stride[0] > h or stride[1] > w:
368
- stride = (min(stride[0], h), min(stride[1], w))
369
- print("reducing stride")
370
-
371
- fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf)
372
-
373
- z = unfold(z) # (bn, nc * prod(**ks), L)
374
- # 1. Reshape to img shape
375
- z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L )
376
-
377
- # 2. apply model loop over last dim
378
- if isinstance(self.first_stage_model, VQModelInterface):
379
- output_list = [self.first_stage_model.decode(z[:, :, :, :, i],
380
- force_not_quantize=predict_cids or force_not_quantize)
381
- for i in range(z.shape[-1])]
382
- else:
383
-
384
- output_list = [self.first_stage_model.decode(z[:, :, :, :, i])
385
- for i in range(z.shape[-1])]
386
-
387
- o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L)
388
- o = o * weighting
389
- # Reverse 1. reshape to img shape
390
- o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L)
391
- # stitch crops together
392
- decoded = fold(o)
393
- decoded = decoded / normalization # norm is shape (1, 1, h, w)
394
- return decoded
395
- else:
396
- if isinstance(self.first_stage_model, VQModelInterface):
397
- return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize)
398
- else:
399
- return self.first_stage_model.decode(z)
400
-
401
- else:
402
- if isinstance(self.first_stage_model, VQModelInterface):
403
- return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize)
404
- else:
405
- return self.first_stage_model.decode(z)
406
-
407
- # same as above but without decorator
408
- def differentiable_decode_first_stage(self, z, predict_cids=False, force_not_quantize=False):
409
- if predict_cids:
410
- if z.dim() == 4:
411
- z = torch.argmax(z.exp(), dim=1).long()
412
- z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None)
413
- z = rearrange(z, 'b h w c -> b c h w').contiguous()
414
-
415
- z = 1. / self.scale_factor * z
416
-
417
- if hasattr(self, "split_input_params"):
418
- if self.split_input_params["patch_distributed_vq"]:
419
- ks = self.split_input_params["ks"] # eg. (128, 128)
420
- stride = self.split_input_params["stride"] # eg. (64, 64)
421
- uf = self.split_input_params["vqf"]
422
- bs, nc, h, w = z.shape
423
- if ks[0] > h or ks[1] > w:
424
- ks = (min(ks[0], h), min(ks[1], w))
425
- print("reducing Kernel")
426
-
427
- if stride[0] > h or stride[1] > w:
428
- stride = (min(stride[0], h), min(stride[1], w))
429
- print("reducing stride")
430
-
431
- fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf)
432
-
433
- z = unfold(z) # (bn, nc * prod(**ks), L)
434
- # 1. Reshape to img shape
435
- z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L )
436
-
437
- # 2. apply model loop over last dim
438
- if isinstance(self.first_stage_model, VQModelInterface):
439
- output_list = [self.first_stage_model.decode(z[:, :, :, :, i],
440
- force_not_quantize=predict_cids or force_not_quantize)
441
- for i in range(z.shape[-1])]
442
- else:
443
-
444
- output_list = [self.first_stage_model.decode(z[:, :, :, :, i])
445
- for i in range(z.shape[-1])]
446
-
447
- o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L)
448
- o = o * weighting
449
- # Reverse 1. reshape to img shape
450
- o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L)
451
- # stitch crops together
452
- decoded = fold(o)
453
- decoded = decoded / normalization # norm is shape (1, 1, h, w)
454
- return decoded
455
- else:
456
- if isinstance(self.first_stage_model, VQModelInterface):
457
- return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize)
458
- else:
459
- return self.first_stage_model.decode(z)
460
-
461
- else:
462
- if isinstance(self.first_stage_model, VQModelInterface):
463
- return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize)
464
- else:
465
- return self.first_stage_model.decode(z)
466
-
467
- @torch.no_grad()
468
- def encode_first_stage(self, x):
469
- if hasattr(self, "split_input_params"):
470
- if self.split_input_params["patch_distributed_vq"]:
471
- ks = self.split_input_params["ks"] # eg. (128, 128)
472
- stride = self.split_input_params["stride"] # eg. (64, 64)
473
- df = self.split_input_params["vqf"]
474
- self.split_input_params['original_image_size'] = x.shape[-2:]
475
- bs, nc, h, w = x.shape
476
- if ks[0] > h or ks[1] > w:
477
- ks = (min(ks[0], h), min(ks[1], w))
478
- print("reducing Kernel")
479
-
480
- if stride[0] > h or stride[1] > w:
481
- stride = (min(stride[0], h), min(stride[1], w))
482
- print("reducing stride")
483
-
484
- fold, unfold, normalization, weighting = self.get_fold_unfold(x, ks, stride, df=df)
485
- z = unfold(x) # (bn, nc * prod(**ks), L)
486
- # Reshape to img shape
487
- z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L )
488
-
489
- output_list = [self.first_stage_model.encode(z[:, :, :, :, i])
490
- for i in range(z.shape[-1])]
491
-
492
- o = torch.stack(output_list, axis=-1)
493
- o = o * weighting
494
-
495
- # Reverse reshape to img shape
496
- o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L)
497
- # stitch crops together
498
- decoded = fold(o)
499
- decoded = decoded / normalization
500
- return decoded
501
-
502
- else:
503
- return self.first_stage_model.encode(x)
504
- else:
505
- return self.first_stage_model.encode(x)
506
-
507
- def shared_step(self, batch, **kwargs):
508
- x, c = self.get_input(batch, self.first_stage_key)# get latent and condition
509
- loss = self(x, c)
510
- return loss
511
-
512
- def test_step(self,batch,batch_idx):
513
- # TODO make self.test_repeat work
514
- cond = {}
515
- cond[self.cond_stage_key] = batch[self.cond_stage_key]
516
- cond[self.cond_stage_key] = self.get_learned_conditioning(cond[self.cond_stage_key]) # c: string -> [B, T, Context_dim]
517
- cond['c_crossattn'] = cond.pop(self.cond_stage_key)
518
- masked_image = super().get_input(batch,'masked_image')
519
- mask = super().get_input(batch,'mask')
520
- masked_image,mask = masked_image.to(self.device),mask.to(self.device)
521
- masked_image = self.get_first_stage_encoding(self.encode_first_stage(masked_image)).detach()
522
- resized_mask = torch.nn.functional.interpolate(mask,size=masked_image.shape[-2:])
523
- cond['c_concat'] = torch.cat((masked_image,resized_mask),dim = 1)
524
- batch_size = len(batch[self.cond_stage_key])
525
- # shape = [batch_size,self.channels,self.mel_dim,self.mel_length]
526
- enc_emb = self.sample(cond,batch_size,timesteps=self.test_numsteps)
527
- xrec = self.decode_first_stage(enc_emb)
528
- reconstructions = (xrec + 1)/2 # to mel scale
529
- test_ckpt_path = os.path.basename(self.trainer.tested_ckpt_path)
530
- savedir = os.path.join(self.trainer.log_dir,f'output_imgs_{test_ckpt_path}','fake_class')
531
- if not os.path.exists(savedir):
532
- os.makedirs(savedir)
533
-
534
- file_names = batch['f_name']
535
- nfiles = len(file_names)
536
- reconstructions = reconstructions.cpu().numpy().squeeze(1) # squuze channel dim
537
- for k in range(reconstructions.shape[0]):
538
- b,repeat = k % nfiles, k // nfiles
539
- vname_num_split_index = file_names[b].rfind('_')# file_names[b]:video_name+'_'+num
540
- v_n,num = file_names[b][:vname_num_split_index],file_names[b][vname_num_split_index+1:]
541
- save_img_path = os.path.join(savedir,f'{v_n}_sample_{num}_{repeat}.npy')# the num_th caption, the repeat_th repitition
542
- np.save(save_img_path,reconstructions[b])
543
-
544
- return None
545
-
546
- def forward(self, x, c, *args, **kwargs):
547
- t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long()
548
- if self.model.conditioning_key is not None:
549
- assert c is not None
550
- if self.cond_stage_trainable:
551
- if isinstance(c,dict):
552
- c[self.cond_stage_key] = self.get_learned_conditioning(c[self.cond_stage_key])
553
- c['c_crossattn'] = c.pop(self.cond_stage_key)
554
- else:
555
- c = self.get_learned_conditioning(c) # c: string -> [B, T, Context_dim]
556
- if self.shorten_cond_schedule: # TODO: drop this option
557
- tc = self.cond_ids[t].to(self.device)
558
- c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float()))
559
- return self.p_losses(x, c, t, *args, **kwargs)
560
-
561
- def _rescale_annotations(self, bboxes, crop_coordinates): # TODO: move to dataset
562
- def rescale_bbox(bbox):
563
- x0 = torch.clamp((bbox[0] - crop_coordinates[0]) / crop_coordinates[2])
564
- y0 = torch.clamp((bbox[1] - crop_coordinates[1]) / crop_coordinates[3])
565
- w = min(bbox[2] / crop_coordinates[2], 1 - x0)
566
- h = min(bbox[3] / crop_coordinates[3], 1 - y0)
567
- return x0, y0, w, h
568
-
569
- return [rescale_bbox(b) for b in bboxes]
570
-
571
- def apply_model(self, x_noisy, t, cond, return_ids=False):
572
- # make values to list to enable concat operation in
573
- if isinstance(cond, dict):
574
- # hybrid case, cond is exptected to be a dict. (txt2inpaint)
575
- cond_tmp = {}# use cond_tmp to avoid inplace edit
576
- for k,v in cond.items():
577
- if not isinstance(v, list):
578
- cond_tmp[k] = [cond[k]]
579
- else:
580
- cond_tmp[k] = cond[k]
581
- cond = cond_tmp
582
- else:
583
- if not isinstance(cond, list):
584
- cond = [cond]
585
- key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn'
586
- cond = {key: cond}
587
-
588
- if hasattr(self, "split_input_params"):
589
- assert len(cond) == 1 # todo can only deal with one conditioning atm
590
- assert not return_ids
591
- ks = self.split_input_params["ks"] # eg. (128, 128)
592
- stride = self.split_input_params["stride"] # eg. (64, 64)
593
-
594
- h, w = x_noisy.shape[-2:]
595
-
596
- fold, unfold, normalization, weighting = self.get_fold_unfold(x_noisy, ks, stride)
597
-
598
- z = unfold(x_noisy) # (bn, nc * prod(**ks), L)
599
- # Reshape to img shape
600
- z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L )
601
- z_list = [z[:, :, :, :, i] for i in range(z.shape[-1])]
602
-
603
- if self.cond_stage_key in ["image", "LR_image", "segmentation",
604
- 'bbox_img'] and self.model.conditioning_key: # todo check for completeness
605
- c_key = next(iter(cond.keys())) # get key
606
- c = next(iter(cond.values())) # get value
607
- assert (len(c) == 1) # todo extend to list with more than one elem
608
- c = c[0] # get element
609
-
610
- c = unfold(c)
611
- c = c.view((c.shape[0], -1, ks[0], ks[1], c.shape[-1])) # (bn, nc, ks[0], ks[1], L )
612
-
613
- cond_list = [{c_key: [c[:, :, :, :, i]]} for i in range(c.shape[-1])]
614
-
615
- elif self.cond_stage_key == 'coordinates_bbox':
616
- assert 'original_image_size' in self.split_input_params, 'BoudingBoxRescaling is missing original_image_size'
617
-
618
- # assuming padding of unfold is always 0 and its dilation is always 1
619
- n_patches_per_row = int((w - ks[0]) / stride[0] + 1)
620
- full_img_h, full_img_w = self.split_input_params['original_image_size']
621
- # as we are operating on latents, we need the factor from the original image size to the
622
- # spatial latent size to properly rescale the crops for regenerating the bbox annotations
623
- num_downs = self.first_stage_model.encoder.num_resolutions - 1
624
- rescale_latent = 2 ** (num_downs)
625
-
626
- # get top left postions of patches as conforming for the bbbox tokenizer, therefore we
627
- # need to rescale the tl patch coordinates to be in between (0,1)
628
- tl_patch_coordinates = [(rescale_latent * stride[0] * (patch_nr % n_patches_per_row) / full_img_w,
629
- rescale_latent * stride[1] * (patch_nr // n_patches_per_row) / full_img_h)
630
- for patch_nr in range(z.shape[-1])]
631
-
632
- # patch_limits are tl_coord, width and height coordinates as (x_tl, y_tl, h, w)
633
- patch_limits = [(x_tl, y_tl,
634
- rescale_latent * ks[0] / full_img_w,
635
- rescale_latent * ks[1] / full_img_h) for x_tl, y_tl in tl_patch_coordinates]
636
- # patch_values = [(np.arange(x_tl,min(x_tl+ks, 1.)),np.arange(y_tl,min(y_tl+ks, 1.))) for x_tl, y_tl in tl_patch_coordinates]
637
-
638
- # tokenize crop coordinates for the bounding boxes of the respective patches
639
- patch_limits_tknzd = [torch.LongTensor(self.bbox_tokenizer._crop_encoder(bbox))[None].to(self.device)
640
- for bbox in patch_limits] # list of length l with tensors of shape (1, 2)
641
- print(patch_limits_tknzd[0].shape)
642
- # cut tknzd crop position from conditioning
643
- assert isinstance(cond, dict), 'cond must be dict to be fed into model'
644
- cut_cond = cond['c_crossattn'][0][..., :-2].to(self.device)
645
- print(cut_cond.shape)
646
-
647
- adapted_cond = torch.stack([torch.cat([cut_cond, p], dim=1) for p in patch_limits_tknzd])
648
- adapted_cond = rearrange(adapted_cond, 'l b n -> (l b) n')
649
- print(adapted_cond.shape)
650
- adapted_cond = self.get_learned_conditioning(adapted_cond)
651
- print(adapted_cond.shape)
652
- adapted_cond = rearrange(adapted_cond, '(l b) n d -> l b n d', l=z.shape[-1])
653
- print(adapted_cond.shape)
654
-
655
- cond_list = [{'c_crossattn': [e]} for e in adapted_cond]
656
-
657
- else:
658
- cond_list = [cond for i in range(z.shape[-1])] # Todo make this more efficient
659
-
660
- # apply model by loop over crops
661
- output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])]
662
- assert not isinstance(output_list[0],
663
- tuple) # todo cant deal with multiple model outputs check this never happens
664
-
665
- o = torch.stack(output_list, axis=-1)
666
- o = o * weighting
667
- # Reverse reshape to img shape
668
- o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L)
669
- # stitch crops together
670
- x_recon = fold(o) / normalization
671
-
672
- else:
673
- # x_noisy is tensor with shape [b,c,mel_len,T]
674
- # if condition is caption ,cond['c_crossattn'] is a list, each item shape is [1, 77, 1280]
675
- x_recon = self.model(x_noisy, t, **cond)# tensor with shape [b,c,mel_len,T]
676
-
677
- if isinstance(x_recon, tuple) and not return_ids:
678
- return x_recon[0]
679
- else:
680
- return x_recon
681
-
682
- def _predict_eps_from_xstart(self, x_t, t, pred_xstart):
683
- return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \
684
- extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape)
685
-
686
- def _prior_bpd(self, x_start):
687
- """
688
- Get the prior KL term for the variational lower-bound, measured in
689
- bits-per-dim.
690
- This term can't be optimized, as it only depends on the encoder.
691
- :param x_start: the [N x C x ...] tensor of inputs.
692
- :return: a batch of [N] KL values (in bits), one per batch element.
693
- """
694
- batch_size = x_start.shape[0]
695
- t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device)
696
- qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t)
697
- kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0)
698
- return mean_flat(kl_prior) / np.log(2.0)
699
-
700
- def p_losses(self, x_start, cond, t, noise=None):
701
- noise = default(noise, lambda: torch.randn_like(x_start))
702
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
703
- model_output = self.apply_model(x_noisy, t, cond)
704
-
705
- loss_dict = {}
706
- prefix = 'train' if self.training else 'val'
707
-
708
- if self.parameterization == "x0":
709
- target = x_start
710
- elif self.parameterization == "eps":
711
- target = noise
712
- else:
713
- raise NotImplementedError()
714
-
715
- loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3])
716
- loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()})
717
-
718
- logvar_t = self.logvar[t].to(self.device)
719
- loss = loss_simple / torch.exp(logvar_t) + logvar_t
720
- # loss = loss_simple / torch.exp(self.logvar) + self.logvar
721
- if self.learn_logvar:
722
- loss_dict.update({f'{prefix}/loss_gamma': loss.mean()})
723
- loss_dict.update({'logvar': self.logvar.data.mean()})
724
-
725
- loss = self.l_simple_weight * loss.mean()
726
-
727
- loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3))
728
- loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean()
729
- loss_dict.update({f'{prefix}/loss_vlb': loss_vlb})
730
- loss += (self.original_elbo_weight * loss_vlb)
731
- loss_dict.update({f'{prefix}/loss': loss})
732
-
733
- return loss, loss_dict
734
-
735
- def p_mean_variance(self, x, c, t, clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False,
736
- return_x0=False, score_corrector=None, corrector_kwargs=None):
737
- t_in = t
738
- model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids)
739
-
740
- if score_corrector is not None:
741
- assert self.parameterization == "eps"
742
- model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs)
743
-
744
- if return_codebook_ids:
745
- model_out, logits = model_out
746
-
747
- if self.parameterization == "eps":
748
- x_recon = self.predict_start_from_noise(x, t=t, noise=model_out)
749
- elif self.parameterization == "x0":
750
- x_recon = model_out
751
- else:
752
- raise NotImplementedError()
753
-
754
- if clip_denoised:
755
- x_recon.clamp_(-1., 1.)
756
- if quantize_denoised:
757
- x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon)
758
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
759
- if return_codebook_ids:
760
- return model_mean, posterior_variance, posterior_log_variance, logits
761
- elif return_x0:
762
- return model_mean, posterior_variance, posterior_log_variance, x_recon
763
- else:
764
- return model_mean, posterior_variance, posterior_log_variance
765
-
766
- @torch.no_grad()
767
- def p_sample(self, x, c, t, clip_denoised=False, repeat_noise=False,
768
- return_codebook_ids=False, quantize_denoised=False, return_x0=False,
769
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None):
770
- b, *_, device = *x.shape, x.device
771
- outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised,
772
- return_codebook_ids=return_codebook_ids,
773
- quantize_denoised=quantize_denoised,
774
- return_x0=return_x0,
775
- score_corrector=score_corrector, corrector_kwargs=corrector_kwargs)
776
- if return_codebook_ids:
777
- raise DeprecationWarning("Support dropped.")
778
- model_mean, _, model_log_variance, logits = outputs
779
- elif return_x0:
780
- model_mean, _, model_log_variance, x0 = outputs
781
- else:
782
- model_mean, _, model_log_variance = outputs
783
-
784
- noise = noise_like(x.shape, device, repeat_noise) * temperature
785
- if noise_dropout > 0.:
786
- noise = torch.nn.functional.dropout(noise, p=noise_dropout)
787
- # no noise when t == 0
788
- nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
789
-
790
- if return_codebook_ids:
791
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1)
792
- if return_x0:
793
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0
794
- else:
795
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
796
-
797
- @torch.no_grad()
798
- def progressive_denoising(self, cond, shape, verbose=True, callback=None, quantize_denoised=False,
799
- img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0.,
800
- score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None,
801
- log_every_t=None):
802
- if not log_every_t:
803
- log_every_t = self.log_every_t
804
- timesteps = self.num_timesteps
805
- if batch_size is not None:
806
- b = batch_size if batch_size is not None else shape[0]
807
- shape = [batch_size] + list(shape)
808
- else:
809
- b = batch_size = shape[0]
810
- if x_T is None:
811
- img = torch.randn(shape, device=self.device)
812
- else:
813
- img = x_T
814
- intermediates = []
815
- if cond is not None:
816
- if isinstance(cond, dict):
817
- cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else
818
- list(map(lambda x: x[:batch_size], cond[key])) for key in cond}
819
- else:
820
- cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]
821
-
822
- if start_T is not None:
823
- timesteps = min(timesteps, start_T)
824
- iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation',
825
- total=timesteps) if verbose else reversed(
826
- range(0, timesteps))
827
- if type(temperature) == float:
828
- temperature = [temperature] * timesteps
829
-
830
- for i in iterator:
831
- ts = torch.full((b,), i, device=self.device, dtype=torch.long)
832
- if self.shorten_cond_schedule:
833
- assert self.model.conditioning_key != 'hybrid'
834
- tc = self.cond_ids[ts].to(cond.device)
835
- cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))
836
-
837
- img, x0_partial = self.p_sample(img, cond, ts,
838
- clip_denoised=self.clip_denoised,
839
- quantize_denoised=quantize_denoised, return_x0=True,
840
- temperature=temperature[i], noise_dropout=noise_dropout,
841
- score_corrector=score_corrector, corrector_kwargs=corrector_kwargs)
842
- if mask is not None:
843
- assert x0 is not None
844
- img_orig = self.q_sample(x0, ts)
845
- img = img_orig * mask + (1. - mask) * img
846
-
847
- if i % log_every_t == 0 or i == timesteps - 1:
848
- intermediates.append(x0_partial)
849
- if callback: callback(i)
850
- if img_callback: img_callback(img, i)
851
- return img, intermediates
852
-
853
- @torch.no_grad()
854
- def p_sample_loop(self, cond, shape, return_intermediates=False,
855
- x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False,
856
- mask=None, x0=None, img_callback=None, start_T=None,
857
- log_every_t=None):
858
-
859
- if not log_every_t:
860
- log_every_t = self.log_every_t
861
- device = self.betas.device
862
- b = shape[0]
863
- if x_T is None:
864
- img = torch.randn(shape, device=device)
865
- else:
866
- img = x_T
867
-
868
- intermediates = [img]
869
- if timesteps is None:
870
- timesteps = self.num_timesteps
871
-
872
- if start_T is not None:
873
- timesteps = min(timesteps, start_T)
874
- iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed(
875
- range(0, timesteps))
876
-
877
- if mask is not None:
878
- assert x0 is not None
879
- assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match
880
-
881
- for i in iterator:
882
- ts = torch.full((b,), i, device=device, dtype=torch.long)
883
- if self.shorten_cond_schedule:
884
- assert self.model.conditioning_key != 'hybrid'
885
- tc = self.cond_ids[ts].to(cond.device)
886
- cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))
887
-
888
- img = self.p_sample(img, cond, ts,
889
- clip_denoised=self.clip_denoised,
890
- quantize_denoised=quantize_denoised)
891
- if mask is not None:
892
- img_orig = self.q_sample(x0, ts)
893
- img = img_orig * mask + (1. - mask) * img
894
-
895
- if i % log_every_t == 0 or i == timesteps - 1:
896
- intermediates.append(img)
897
- if callback: callback(i)
898
- if img_callback: img_callback(img, i)
899
-
900
- if return_intermediates:
901
- return img, intermediates
902
- return img
903
-
904
- @torch.no_grad()
905
- def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None,
906
- verbose=True, timesteps=None, quantize_denoised=False,
907
- mask=None, x0=None, shape=None,**kwargs):
908
- if shape is None:
909
- shape = (batch_size, self.channels, self.mel_dim, self.mel_length)
910
- if cond is not None:
911
- if isinstance(cond, dict):
912
- cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else
913
- list(map(lambda x: x[:batch_size], cond[key])) for key in cond}
914
- else:
915
- cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]
916
- return self.p_sample_loop(cond,
917
- shape,
918
- return_intermediates=return_intermediates, x_T=x_T,
919
- verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised,
920
- mask=mask, x0=x0)
921
-
922
- @torch.no_grad()
923
- def sample_log(self,cond,batch_size,ddim, ddim_steps,**kwargs):
924
- if ddim:
925
- ddim_sampler = DDIMSampler(self)
926
- shape = (self.channels, self.mel_dim, self.mel_length)
927
- samples, intermediates =ddim_sampler.sample(ddim_steps,batch_size,
928
- shape,cond,verbose=False,**kwargs)
929
-
930
- else:
931
- samples, intermediates = self.sample(cond=cond, batch_size=batch_size,
932
- return_intermediates=True,**kwargs)
933
-
934
- return samples, intermediates
935
-
936
- @torch.no_grad()
937
- def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None,
938
- quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True,
939
- plot_diffusion_rows=True, **kwargs):
940
-
941
- use_ddim = ddim_steps is not None
942
-
943
- log = dict()
944
- z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key,
945
- return_first_stage_outputs=True,
946
- force_c_encode=True,
947
- return_original_cond=True,
948
- bs=N)
949
-
950
- N = min(x.shape[0], N)
951
- n_row = min(x.shape[0], n_row)
952
- log["inputs"] = x # 原始输入图像
953
- log["reconstruction"] = xrec # 重建得到的图像
954
- if self.model.conditioning_key is not None:
955
- if hasattr(self.cond_stage_model, "decode"):# when cond_stage is first_stage. (bert embedder doesnot have decode)
956
- xc = self.cond_stage_model.decode(c)# decoded masked image
957
- log["conditioning"] = xc # 重建后的图像
958
- elif self.cond_stage_key in ["caption"]:
959
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["caption"])
960
- log["conditioning"] = xc # 含有文本的图像
961
- if self.model.conditioning_key == 'hybrid':
962
- log["decoded_maskedimg"] = self.first_stage_model.decode(c['c_concat'][:,:self.first_stage_model.embed_dim])# c_concat is the concat result of masked_img latent and resized mask. get latent here to decode
963
- elif self.cond_stage_key == 'class_label':
964
-                 xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"])
-                 log['conditioning'] = xc  # the class label rendered as a text image
-             elif isimage(xc):
-                 log["conditioning"] = xc
-             if ismap(xc):
-                 log["original_conditioning"] = self.to_rgb(xc)
-
-         if plot_diffusion_rows:  # log the decoded image at each diffusion step
-             # get diffusion row
-             diffusion_row = list()
-             z_start = z[:n_row]
-             for t in range(self.num_timesteps):
-                 if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
-                     t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
-                     t = t.to(self.device).long()
-                     noise = torch.randn_like(z_start)
-                     z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)
-                     diffusion_row.append(self.decode_first_stage(z_noisy))
-
-             diffusion_row = torch.stack(diffusion_row)  # n_log_step, n_row, C, H, W
-             diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')
-             diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')
-             diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])
-             log["diffusion_row"] = diffusion_grid
-
-         if sample:
-             # get denoise row
-             with self.ema_scope("Plotting"):
-                 samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
-                                                          ddim_steps=ddim_steps, eta=ddim_eta)
-                 # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True)
-             x_samples = self.decode_first_stage(samples)
-             log["samples"] = x_samples
-             if plot_denoise_rows:
-                 denoise_grid = self._get_denoise_row_from_list(z_denoise_row)
-                 log["denoise_row"] = denoise_grid
-
-         if quantize_denoised and not isinstance(self.first_stage_model, AutoencoderKL) and not isinstance(
-                 self.first_stage_model, IdentityFirstStage):
-             # also display when quantizing x0 while sampling
-             with self.ema_scope("Plotting Quantized Denoised"):
-                 samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
-                                                          ddim_steps=ddim_steps, eta=ddim_eta,
-                                                          quantize_denoised=True)
-                 # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True,
-                 #                                      quantize_denoised=True)
-             x_samples = self.decode_first_stage(samples.to(self.device))
-             log["samples_x0_quantized"] = x_samples
-
-         if inpaint:
-             # make a simple center square
-             b, h, w = z.shape[0], z.shape[2], z.shape[3]
-             mask = torch.ones(N, h, w).to(self.device)
-             # zeros will be filled in
-             mask[:, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0.
-             mask = mask[:, None, ...]  # (N, 1, H, W)
-             with self.ema_scope("Plotting Inpaint"):
-                 samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, eta=ddim_eta,
-                                              ddim_steps=ddim_steps, x0=z[:N], mask=mask)
-             x_samples = self.decode_first_stage(samples.to(self.device))
-             log["samples_inpainting"] = x_samples
-             log["mask"] = mask
-
-             # outpaint
-             with self.ema_scope("Plotting Outpaint"):
-                 samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, eta=ddim_eta,
-                                              ddim_steps=ddim_steps, x0=z[:N], mask=mask)
-             x_samples = self.decode_first_stage(samples.to(self.device))
-             log["samples_outpainting"] = x_samples
-
-         if plot_progressive_rows:
-             with self.ema_scope("Plotting Progressives"):
-                 img, progressives = self.progressive_denoising(c,
-                                                                shape=(self.channels, self.mel_dim, self.mel_length),
-                                                                batch_size=N)
-             prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation")
-             log["progressive_row"] = prog_row
-
-         if return_keys:
-             if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0:
-                 return log
-             else:
-                 return {key: log[key] for key in return_keys}
-         return log
-
-     def configure_optimizers(self):
-         lr = self.learning_rate
-         params = list(self.model.parameters())
-         if self.cond_stage_trainable:
-             print(f"{self.__class__.__name__}: Also optimizing conditioner params!")
-             params = params + list(self.cond_stage_model.parameters())
-         if self.learn_logvar:
-             print('Diffusion model optimizing logvar')
-             params.append(self.logvar)
-         opt = torch.optim.AdamW(params, lr=lr)
-         if self.use_scheduler:
-             assert 'target' in self.scheduler_config
-             scheduler = instantiate_from_config(self.scheduler_config)
-
-             print("Setting up LambdaLR scheduler...")
-             scheduler = [
-                 {
-                     'scheduler': LambdaLR(opt, lr_lambda=scheduler.schedule),
-                     'interval': 'step',
-                     'frequency': 1
-                 }]
-             return [opt], scheduler
-         return opt
-
-     @torch.no_grad()
-     def to_rgb(self, x):
-         x = x.float()
-         if not hasattr(self, "colorize"):
-             self.colorize = torch.randn(3, x.shape[1], 1, 1).to(x)
-         x = nn.functional.conv2d(x, weight=self.colorize)
-         x = 2. * (x - x.min()) / (x.max() - x.min()) - 1.
-         return x
-
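The `to_rgb` helper above projects an arbitrary-channel feature map down to three channels with a fixed random 1x1 convolution, then rescales the result to [-1, 1] for logging. A minimal standalone sketch of the same idea (the shapes and function name are illustrative, not taken from the deleted file):

```python
import torch
import torch.nn as nn

def to_rgb_sketch(x: torch.Tensor, colorize: torch.Tensor) -> torch.Tensor:
    """Project an (N, C, H, W) map to (N, 3, H, W) and rescale to [-1, 1]."""
    x = nn.functional.conv2d(x.float(), weight=colorize)  # random 1x1 projection
    return 2. * (x - x.min()) / (x.max() - x.min()) - 1.

feat = torch.randn(2, 8, 16, 16)                 # e.g. an 8-channel conditioning map
colorize = torch.randn(3, feat.shape[1], 1, 1)   # fixed random projection weights
rgb = to_rgb_sketch(feat, colorize)
assert rgb.shape == (2, 3, 16, 16)
```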
spaces/Aditya9790/yolo7-object-tracking/utils/autoanchor.py DELETED
@@ -1,160 +0,0 @@
- # Auto-anchor utils
-
- import numpy as np
- import torch
- import yaml
- from scipy.cluster.vq import kmeans
- from tqdm import tqdm
-
- from utils.general import colorstr
-
-
- def check_anchor_order(m):
-     # Check anchor order against stride order for YOLO Detect() module m, and correct if necessary
-     a = m.anchor_grid.prod(-1).view(-1)  # anchor area
-     da = a[-1] - a[0]  # delta a
-     ds = m.stride[-1] - m.stride[0]  # delta s
-     if da.sign() != ds.sign():  # anchor order and stride order disagree
-         print('Reversing anchor order')
-         m.anchors[:] = m.anchors.flip(0)
-         m.anchor_grid[:] = m.anchor_grid.flip(0)
-
-
- def check_anchors(dataset, model, thr=4.0, imgsz=640):
-     # Check anchor fit to data, recompute if necessary
-     prefix = colorstr('autoanchor: ')
-     print(f'\n{prefix}Analyzing anchors... ', end='')
-     m = model.module.model[-1] if hasattr(model, 'module') else model.model[-1]  # Detect()
-     shapes = imgsz * dataset.shapes / dataset.shapes.max(1, keepdims=True)
-     scale = np.random.uniform(0.9, 1.1, size=(shapes.shape[0], 1))  # augment scale
-     wh = torch.tensor(np.concatenate([l[:, 3:5] * s for s, l in zip(shapes * scale, dataset.labels)])).float()  # wh
-
-     def metric(k):  # compute metric
-         r = wh[:, None] / k[None]
-         x = torch.min(r, 1. / r).min(2)[0]  # ratio metric
-         best = x.max(1)[0]  # best_x
-         aat = (x > 1. / thr).float().sum(1).mean()  # anchors above threshold
-         bpr = (best > 1. / thr).float().mean()  # best possible recall
-         return bpr, aat
-
-     anchors = m.anchor_grid.clone().cpu().view(-1, 2)  # current anchors
-     bpr, aat = metric(anchors)
-     print(f'anchors/target = {aat:.2f}, Best Possible Recall (BPR) = {bpr:.4f}', end='')
-     if bpr < 0.98:  # threshold to recompute
-         print('. Attempting to improve anchors, please wait...')
-         na = m.anchor_grid.numel() // 2  # number of anchors
-         try:
-             anchors = kmean_anchors(dataset, n=na, img_size=imgsz, thr=thr, gen=1000, verbose=False)
-         except Exception as e:
-             print(f'{prefix}ERROR: {e}')
-         new_bpr = metric(anchors)[0]
-         if new_bpr > bpr:  # replace anchors
-             anchors = torch.tensor(anchors, device=m.anchors.device).type_as(m.anchors)
-             m.anchor_grid[:] = anchors.clone().view_as(m.anchor_grid)  # for inference
-             check_anchor_order(m)
-             m.anchors[:] = anchors.clone().view_as(m.anchors) / m.stride.to(m.anchors.device).view(-1, 1, 1)  # loss
-             print(f'{prefix}New anchors saved to model. Update model *.yaml to use these anchors in the future.')
-         else:
-             print(f'{prefix}Original anchors better than new anchors. Proceeding with original anchors.')
-     print('')  # newline
-
-
- def kmean_anchors(path='./data/coco.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True):
-     """ Creates kmeans-evolved anchors from training dataset
-
-         Arguments:
-             path: path to dataset *.yaml, or a loaded dataset
-             n: number of anchors
-             img_size: image size used for training
-             thr: anchor-label wh ratio threshold hyperparameter hyp['anchor_t'] used for training, default=4.0
-             gen: generations to evolve anchors using genetic algorithm
-             verbose: print all results
-
-         Return:
-             k: kmeans evolved anchors
-
-         Usage:
-             from utils.autoanchor import *; _ = kmean_anchors()
-     """
-     thr = 1. / thr
-     prefix = colorstr('autoanchor: ')
-
-     def metric(k, wh):  # compute metrics
-         r = wh[:, None] / k[None]
-         x = torch.min(r, 1. / r).min(2)[0]  # ratio metric
-         # x = wh_iou(wh, torch.tensor(k))  # iou metric
-         return x, x.max(1)[0]  # x, best_x
-
-     def anchor_fitness(k):  # mutation fitness
-         _, best = metric(torch.tensor(k, dtype=torch.float32), wh)
-         return (best * (best > thr).float()).mean()  # fitness
-
-     def print_results(k):
-         k = k[np.argsort(k.prod(1))]  # sort small to large
-         x, best = metric(k, wh0)
-         bpr, aat = (best > thr).float().mean(), (x > thr).float().mean() * n  # best possible recall, anch > thr
-         print(f'{prefix}thr={thr:.2f}: {bpr:.4f} best possible recall, {aat:.2f} anchors past thr')
-         print(f'{prefix}n={n}, img_size={img_size}, metric_all={x.mean():.3f}/{best.mean():.3f}-mean/best, '
-               f'past_thr={x[x > thr].mean():.3f}-mean: ', end='')
-         for i, x in enumerate(k):
-             print('%i,%i' % (round(x[0]), round(x[1])), end=', ' if i < len(k) - 1 else '\n')  # use in *.cfg
-         return k
-
-     if isinstance(path, str):  # *.yaml file
-         with open(path) as f:
-             data_dict = yaml.load(f, Loader=yaml.SafeLoader)  # model dict
-         from utils.datasets import LoadImagesAndLabels
-         dataset = LoadImagesAndLabels(data_dict['train'], augment=True, rect=True)
-     else:
-         dataset = path  # dataset
-
-     # Get label wh
-     shapes = img_size * dataset.shapes / dataset.shapes.max(1, keepdims=True)
-     wh0 = np.concatenate([l[:, 3:5] * s for s, l in zip(shapes, dataset.labels)])  # wh
-
-     # Filter
-     i = (wh0 < 3.0).any(1).sum()
-     if i:
-         print(f'{prefix}WARNING: Extremely small objects found. {i} of {len(wh0)} labels are < 3 pixels in size.')
-     wh = wh0[(wh0 >= 2.0).any(1)]  # filter > 2 pixels
-     # wh = wh * (np.random.rand(wh.shape[0], 1) * 0.9 + 0.1)  # multiply by random scale 0-1
-
-     # Kmeans calculation
-     print(f'{prefix}Running kmeans for {n} anchors on {len(wh)} points...')
-     s = wh.std(0)  # sigmas for whitening
-     k, dist = kmeans(wh / s, n, iter=30)  # points, mean distance
-     assert len(k) == n, f'{prefix}ERROR: scipy.cluster.vq.kmeans requested {n} points but returned only {len(k)}'
-     k *= s
-     wh = torch.tensor(wh, dtype=torch.float32)  # filtered
-     wh0 = torch.tensor(wh0, dtype=torch.float32)  # unfiltered
-     k = print_results(k)
-
-     # Plot
-     # k, d = [None] * 20, [None] * 20
-     # for i in tqdm(range(1, 21)):
-     #     k[i-1], d[i-1] = kmeans(wh / s, i)  # points, mean distance
-     # fig, ax = plt.subplots(1, 2, figsize=(14, 7), tight_layout=True)
-     # ax = ax.ravel()
-     # ax[0].plot(np.arange(1, 21), np.array(d) ** 2, marker='.')
-     # fig, ax = plt.subplots(1, 2, figsize=(14, 7))  # plot wh
-     # ax[0].hist(wh[wh[:, 0]<100, 0], 400)
-     # ax[1].hist(wh[wh[:, 1]<100, 1], 400)
-     # fig.savefig('wh.png', dpi=200)
-
-     # Evolve
-     npr = np.random
-     f, sh, mp, s = anchor_fitness(k), k.shape, 0.9, 0.1  # fitness, anchor shape, mutation prob, sigma
-     pbar = tqdm(range(gen), desc=f'{prefix}Evolving anchors with Genetic Algorithm:')  # progress bar
-     for _ in pbar:
-         v = np.ones(sh)
-         while (v == 1).all():  # mutate until a change occurs (prevent duplicates)
-             v = ((npr.random(sh) < mp) * npr.random() * npr.randn(*sh) * s + 1).clip(0.3, 3.0)
-         kg = (k.copy() * v).clip(min=2.0)
-         fg = anchor_fitness(kg)
-         if fg > f:
-             f, k = fg, kg.copy()
-             pbar.desc = f'{prefix}Evolving anchors with Genetic Algorithm: fitness = {f:.4f}'
-             if verbose:
-                 print_results(k)
-
-     return print_results(k)
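A hedged sketch of how these two utilities are typically driven. It assumes a YOLOv7-style `dataset` exposing `.shapes` and `.labels` (as `LoadImagesAndLabels` does) and an already-built `model`; nothing here beyond the two calls comes from the deleted file:

```python
# Sketch only: `dataset` and `model` are assumed to be constructed elsewhere.
from utils.autoanchor import check_anchors, kmean_anchors

check_anchors(dataset, model, thr=4.0, imgsz=640)   # recompute anchors if BPR < 0.98
anchors = kmean_anchors(dataset, n=9, img_size=640, thr=4.0, gen=1000, verbose=False)
```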
spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/updater/base.py DELETED
@@ -1,27 +0,0 @@
- from __future__ import annotations
-
- from typing import TYPE_CHECKING, List, Tuple
-
- from pydantic import BaseModel
-
- # from agentverse.agents import Agent
- from abc import abstractmethod
-
- from . import updater_registry as UpdaterRegistry
-
- if TYPE_CHECKING:
-     from agentverse.environments import BaseEnvironment
-
-
- @UpdaterRegistry.register("base")
- class BaseUpdater(BaseModel):
-     """
-     The base class for updater classes.
-     """
-
-     @abstractmethod
-     def update_memory(self, environment: BaseEnvironment):
-         pass
-
-     def reset(self):
-         pass
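A minimal sketch of how a concrete updater would plug into this registry. The registry key `"basic"` and the environment/agent attributes used below are illustrative assumptions, not taken from the deleted file:

```python
# Hypothetical subclass; the fields accessed on `environment` are assumptions.
from agentverse.environments.simulation_env.rules.updater.base import BaseUpdater
from agentverse.environments.simulation_env.rules.updater import updater_registry as UpdaterRegistry


@UpdaterRegistry.register("basic")
class BasicUpdater(BaseUpdater):
    def update_memory(self, environment):
        # Push the environment's latest messages into every agent's memory.
        for agent in environment.agents:
            agent.memory.add_message(environment.last_messages)
```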
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/GetChildrenHeight.js DELETED
@@ -1,6 +0,0 @@
- // Override
- var GetChildrenHeight = function () {
-     return 0;
- };
-
- export default GetChildrenHeight;
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/ResolveWidth.js DELETED
@@ -1,16 +0,0 @@
- var ResolveWidth = function (width) {
-     if (width === undefined) {
-         width = Math.max(this.childrenWidth, this.minWidth);
-     } else {
-         /*
-         var minWidth = Math.max(this.childrenWidth, this.minWidth);
-         if (minWidth > width) {
-             // Warning
-         }
-         */
-     }
-
-     return width;
- };
-
- export default ResolveWidth;
spaces/Akmyradov/TurkmenTTSweSTT/vits/losses.py DELETED
@@ -1,61 +0,0 @@
- import torch
- from torch.nn import functional as F
-
- import commons
-
-
- def feature_loss(fmap_r, fmap_g):
-     loss = 0
-     for dr, dg in zip(fmap_r, fmap_g):
-         for rl, gl in zip(dr, dg):
-             rl = rl.float().detach()
-             gl = gl.float()
-             loss += torch.mean(torch.abs(rl - gl))
-
-     return loss * 2
-
-
- def discriminator_loss(disc_real_outputs, disc_generated_outputs):
-     loss = 0
-     r_losses = []
-     g_losses = []
-     for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
-         dr = dr.float()
-         dg = dg.float()
-         r_loss = torch.mean((1 - dr) ** 2)
-         g_loss = torch.mean(dg ** 2)
-         loss += (r_loss + g_loss)
-         r_losses.append(r_loss.item())
-         g_losses.append(g_loss.item())
-
-     return loss, r_losses, g_losses
-
-
- def generator_loss(disc_outputs):
-     loss = 0
-     gen_losses = []
-     for dg in disc_outputs:
-         dg = dg.float()
-         l = torch.mean((1 - dg) ** 2)
-         gen_losses.append(l)
-         loss += l
-
-     return loss, gen_losses
-
-
- def kl_loss(z_p, logs_q, m_p, logs_p, z_mask):
-     """
-     z_p, logs_q: [b, h, t_t]
-     m_p, logs_p: [b, h, t_t]
-     """
-     z_p = z_p.float()
-     logs_q = logs_q.float()
-     m_p = m_p.float()
-     logs_p = logs_p.float()
-     z_mask = z_mask.float()
-
-     kl = logs_p - logs_q - 0.5
-     kl += 0.5 * ((z_p - m_p) ** 2) * torch.exp(-2. * logs_p)
-     kl = torch.sum(kl * z_mask)
-     l = kl / torch.sum(z_mask)
-     return l
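These are the standard VITS GAN losses (least-squares adversarial terms plus a masked Gaussian KL). A quick sanity-check sketch with dummy tensors; the shapes are illustrative, and the import assumes the file sits on the path as `losses.py`:

```python
import torch
from losses import discriminator_loss, generator_loss, kl_loss

d_real = [torch.rand(4, 1) for _ in range(3)]   # one output per sub-discriminator
d_fake = [torch.rand(4, 1) for _ in range(3)]
disc_total, r_losses, g_losses = discriminator_loss(d_real, d_fake)
gen_total, gen_losses = generator_loss(d_fake)

b, h, t = 4, 192, 100                            # batch, channels, frames
kl = kl_loss(torch.randn(b, h, t), torch.randn(b, h, t),
             torch.randn(b, h, t), torch.randn(b, h, t),
             torch.ones(b, 1, t))                # mask broadcasts over channels
```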
spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/docs/install.md DELETED
@@ -1,51 +0,0 @@
- ## v1.8.0
- ### Linux and Windows
- ```shell
- # CUDA 11.1
- pip --default-timeout=100 install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
-
- # CUDA 10.2
- pip --default-timeout=100 install torch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0
-
- # CPU only
- pip --default-timeout=100 install torch==1.8.0+cpu torchvision==0.9.0+cpu torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
- ```
-
-
- ## v1.7.1
- ### Linux and Windows
- ```shell
- # CUDA 11.0
- pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
-
- # CUDA 10.2
- pip install torch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2
-
- # CUDA 10.1
- pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
-
- # CUDA 9.2
- pip install torch==1.7.1+cu92 torchvision==0.8.2+cu92 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
-
- # CPU only
- pip install torch==1.7.1+cpu torchvision==0.8.2+cpu torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
- ```
-
-
- ## v1.6.0
-
- ### Linux and Windows
- ```shell
- # CUDA 10.2
- pip install torch==1.6.0 torchvision==0.7.0
-
- # CUDA 10.1
- pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
-
- # CUDA 9.2
- pip install torch==1.6.0+cu92 torchvision==0.7.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html
-
- # CPU only
- pip install torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
- ```
spaces/Alycer/VITS-Umamusume-voice-synthesizer/text/shanghainese.py DELETED
@@ -1,64 +0,0 @@
- import re
- import cn2an
- import opencc
-
-
- converter = opencc.OpenCC('zaonhe')
-
- # List of (Latin alphabet, ipa) pairs:
- _latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
-     ('A', 'ᴇ'),
-     ('B', 'bi'),
-     ('C', 'si'),
-     ('D', 'di'),
-     ('E', 'i'),
-     ('F', 'ᴇf'),
-     ('G', 'dʑi'),
-     ('H', 'ᴇtɕʰ'),
-     ('I', 'ᴀi'),
-     ('J', 'dʑᴇ'),
-     ('K', 'kʰᴇ'),
-     ('L', 'ᴇl'),
-     ('M', 'ᴇm'),
-     ('N', 'ᴇn'),
-     ('O', 'o'),
-     ('P', 'pʰi'),
-     ('Q', 'kʰiu'),
-     ('R', 'ᴀl'),
-     ('S', 'ᴇs'),
-     ('T', 'tʰi'),
-     ('U', 'ɦiu'),
-     ('V', 'vi'),
-     ('W', 'dᴀbɤliu'),
-     ('X', 'ᴇks'),
-     ('Y', 'uᴀi'),
-     ('Z', 'zᴇ')
- ]]
-
-
- def _number_to_shanghainese(num):
-     num = cn2an.an2cn(num).replace('一十', '十').replace('二十', '廿').replace('二', '两')
-     return re.sub(r'((?:^|[^三四五六七八九])十|廿)两', r'\1二', num)
-
-
- def number_to_shanghainese(text):
-     return re.sub(r'\d+(?:\.?\d+)?', lambda x: _number_to_shanghainese(x.group()), text)
-
-
- def latin_to_ipa(text):
-     for regex, replacement in _latin_to_ipa:
-         text = re.sub(regex, replacement, text)
-     return text
-
-
- def shanghainese_to_ipa(text):
-     text = number_to_shanghainese(text.upper())
-     text = converter.convert(text).replace('-', '').replace('$', ' ')
-     text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group()) + ' ', text)
-     text = re.sub(r'[、;:]', ',', text)
-     text = re.sub(r'\s*,\s*', ', ', text)
-     text = re.sub(r'\s*。\s*', '. ', text)
-     text = re.sub(r'\s*?\s*', '? ', text)
-     text = re.sub(r'\s*!\s*', '! ', text)
-     text = re.sub(r'\s*$', '', text)
-     return text
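A hedged usage sketch: it assumes `opencc` ships the `zaonhe` conversion profile and that `cn2an` is installed, and the input sentence is only an illustration (no particular output is claimed):

```python
from text.shanghainese import number_to_shanghainese, shanghainese_to_ipa

print(number_to_shanghainese('我有25个苹果'))  # digits rewritten as Shanghainese numerals
print(shanghainese_to_ipa('侬好!'))            # romanized to IPA via the zaonhe profile
```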
spaces/Amrrs/portfolio-github/README.md DELETED
@@ -1,36 +0,0 @@
- ---
- title: Portfolio Github
- emoji: 🌖
- colorFrom: blue
- colorTo: green
- sdk: static
- pinned: false
- ---
-
- # Configuration
-
- `title`: _string_
- Display title for the Space
-
- `emoji`: _string_
- Space emoji (emoji-only character allowed)
-
- `colorFrom`: _string_
- Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
- `colorTo`: _string_
- Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
- `sdk`: _string_
- Can be either `gradio`, `streamlit`, or `static`
-
- `sdk_version`: _string_
- Only applicable for `streamlit` SDK.
- See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
- `app_file`: _string_
- Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
- Path is relative to the root of the repository.
-
- `pinned`: _boolean_
- Whether the Space stays on top of your list.
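For example, a Streamlit Space could combine these keys in a front-matter block like the one at the top of this file; the `sdk_version` and `app_file` values below are illustrative, not prescribed:

```yaml
---
title: Portfolio Github
emoji: 🌖
colorFrom: blue
colorTo: green
sdk: streamlit
sdk_version: 1.10.0   # illustrative version
app_file: app.py      # illustrative entry point
pinned: false
---
```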
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/consistency_models/__init__.py DELETED
File without changes
spaces/Andy1621/uniformer_image_segmentation/configs/gcnet/gcnet_r101-d8_769x769_80k_cityscapes.py DELETED
@@ -1,2 +0,0 @@
- _base_ = './gcnet_r50-d8_769x769_80k_cityscapes.py'
- model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18_512x512_80k_ade20k.py DELETED
@@ -1,5 +0,0 @@
- _base_ = [
-     '../_base_/models/fcn_hr18.py', '../_base_/datasets/ade20k.py',
-     '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
- ]
- model = dict(decode_head=dict(num_classes=150))
spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/quarto-html/tippy.css DELETED
@@ -1 +0,0 @@
- .tippy-box[data-animation=fade][data-state=hidden]{opacity:0}[data-tippy-root]{max-width:calc(100vw - 10px)}.tippy-box{position:relative;background-color:#333;color:#fff;border-radius:4px;font-size:14px;line-height:1.4;white-space:normal;outline:0;transition-property:transform,visibility,opacity}.tippy-box[data-placement^=top]>.tippy-arrow{bottom:0}.tippy-box[data-placement^=top]>.tippy-arrow:before{bottom:-7px;left:0;border-width:8px 8px 0;border-top-color:initial;transform-origin:center top}.tippy-box[data-placement^=bottom]>.tippy-arrow{top:0}.tippy-box[data-placement^=bottom]>.tippy-arrow:before{top:-7px;left:0;border-width:0 8px 8px;border-bottom-color:initial;transform-origin:center bottom}.tippy-box[data-placement^=left]>.tippy-arrow{right:0}.tippy-box[data-placement^=left]>.tippy-arrow:before{border-width:8px 0 8px 8px;border-left-color:initial;right:-7px;transform-origin:center left}.tippy-box[data-placement^=right]>.tippy-arrow{left:0}.tippy-box[data-placement^=right]>.tippy-arrow:before{left:-7px;border-width:8px 8px 8px 0;border-right-color:initial;transform-origin:center right}.tippy-box[data-inertia][data-state=visible]{transition-timing-function:cubic-bezier(.54,1.5,.38,1.11)}.tippy-arrow{width:16px;height:16px;color:#333}.tippy-arrow:before{content:"";position:absolute;border-color:transparent;border-style:solid}.tippy-content{position:relative;padding:5px 9px;z-index:1}
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/generalized_attention.py DELETED
@@ -1,412 +0,0 @@
- # Copyright (c) OpenMMLab. All rights reserved.
- import math
-
- import numpy as np
- import torch
- import torch.nn as nn
- import torch.nn.functional as F
-
- from ..utils import kaiming_init
- from .registry import PLUGIN_LAYERS
-
-
- @PLUGIN_LAYERS.register_module()
- class GeneralizedAttention(nn.Module):
-     """GeneralizedAttention module.
-
-     See 'An Empirical Study of Spatial Attention Mechanisms in Deep Networks'
-     (https://arxiv.org/abs/1904.05873) for details.
-
-     Args:
-         in_channels (int): Channels of the input feature map.
-         spatial_range (int): The spatial range. -1 indicates no spatial range
-             constraint. Default: -1.
-         num_heads (int): The head number of empirical_attention module.
-             Default: 9.
-         position_embedding_dim (int): The position embedding dimension.
-             Default: -1.
-         position_magnitude (int): A multiplier acting on coord difference.
-             Default: 1.
-         kv_stride (int): The feature stride acting on key/value feature map.
-             Default: 2.
-         q_stride (int): The feature stride acting on query feature map.
-             Default: 1.
-         attention_type (str): A binary indicator string for indicating which
-             items in generalized empirical_attention module are used.
-             Default: '1111'.
-
-             - '1000' indicates 'query and key content' (appr - appr) item,
-             - '0100' indicates 'query content and relative position'
-               (appr - position) item,
-             - '0010' indicates 'key content only' (bias - appr) item,
-             - '0001' indicates 'relative position only' (bias - position) item.
-     """
-
-     _abbr_ = 'gen_attention_block'
-
-     def __init__(self,
-                  in_channels,
-                  spatial_range=-1,
-                  num_heads=9,
-                  position_embedding_dim=-1,
-                  position_magnitude=1,
-                  kv_stride=2,
-                  q_stride=1,
-                  attention_type='1111'):
-
-         super(GeneralizedAttention, self).__init__()
-
-         # hard range means local range for non-local operation
-         self.position_embedding_dim = (
-             position_embedding_dim
-             if position_embedding_dim > 0 else in_channels)
-
-         self.position_magnitude = position_magnitude
-         self.num_heads = num_heads
-         self.in_channels = in_channels
-         self.spatial_range = spatial_range
-         self.kv_stride = kv_stride
-         self.q_stride = q_stride
-         self.attention_type = [bool(int(_)) for _ in attention_type]
-         self.qk_embed_dim = in_channels // num_heads
-         out_c = self.qk_embed_dim * num_heads
-
-         if self.attention_type[0] or self.attention_type[1]:
-             self.query_conv = nn.Conv2d(
-                 in_channels=in_channels,
-                 out_channels=out_c,
-                 kernel_size=1,
-                 bias=False)
-             self.query_conv.kaiming_init = True
-
-         if self.attention_type[0] or self.attention_type[2]:
-             self.key_conv = nn.Conv2d(
-                 in_channels=in_channels,
-                 out_channels=out_c,
-                 kernel_size=1,
-                 bias=False)
-             self.key_conv.kaiming_init = True
-
-         self.v_dim = in_channels // num_heads
-         self.value_conv = nn.Conv2d(
-             in_channels=in_channels,
-             out_channels=self.v_dim * num_heads,
-             kernel_size=1,
-             bias=False)
-         self.value_conv.kaiming_init = True
-
-         if self.attention_type[1] or self.attention_type[3]:
-             self.appr_geom_fc_x = nn.Linear(
-                 self.position_embedding_dim // 2, out_c, bias=False)
-             self.appr_geom_fc_x.kaiming_init = True
-
-             self.appr_geom_fc_y = nn.Linear(
-                 self.position_embedding_dim // 2, out_c, bias=False)
-             self.appr_geom_fc_y.kaiming_init = True
-
-         if self.attention_type[2]:
-             stdv = 1.0 / math.sqrt(self.qk_embed_dim * 2)
-             appr_bias_value = -2 * stdv * torch.rand(out_c) + stdv
-             self.appr_bias = nn.Parameter(appr_bias_value)
-
-         if self.attention_type[3]:
-             stdv = 1.0 / math.sqrt(self.qk_embed_dim * 2)
-             geom_bias_value = -2 * stdv * torch.rand(out_c) + stdv
-             self.geom_bias = nn.Parameter(geom_bias_value)
-
-         self.proj_conv = nn.Conv2d(
-             in_channels=self.v_dim * num_heads,
-             out_channels=in_channels,
-             kernel_size=1,
-             bias=True)
-         self.proj_conv.kaiming_init = True
-         self.gamma = nn.Parameter(torch.zeros(1))
-
-         if self.spatial_range >= 0:
-             # only works when non local is after 3*3 conv
-             if in_channels == 256:
-                 max_len = 84
-             elif in_channels == 512:
-                 max_len = 42
-
-             max_len_kv = int((max_len - 1.0) / self.kv_stride + 1)
-             local_constraint_map = np.ones(
-                 (max_len, max_len, max_len_kv, max_len_kv), dtype=int)
-             for iy in range(max_len):
-                 for ix in range(max_len):
-                     local_constraint_map[
-                         iy, ix,
-                         max((iy - self.spatial_range) //
-                             self.kv_stride, 0):min((iy + self.spatial_range +
-                                                     1) // self.kv_stride +
-                                                    1, max_len),
-                         max((ix - self.spatial_range) //
-                             self.kv_stride, 0):min((ix + self.spatial_range +
-                                                     1) // self.kv_stride +
-                                                    1, max_len)] = 0
-
-             self.local_constraint_map = nn.Parameter(
-                 torch.from_numpy(local_constraint_map).byte(),
-                 requires_grad=False)
-
-         if self.q_stride > 1:
-             self.q_downsample = nn.AvgPool2d(
-                 kernel_size=1, stride=self.q_stride)
-         else:
-             self.q_downsample = None
-
-         if self.kv_stride > 1:
-             self.kv_downsample = nn.AvgPool2d(
-                 kernel_size=1, stride=self.kv_stride)
-         else:
-             self.kv_downsample = None
-
-         self.init_weights()
-
-     def get_position_embedding(self,
-                                h,
-                                w,
-                                h_kv,
-                                w_kv,
-                                q_stride,
-                                kv_stride,
-                                device,
-                                dtype,
-                                feat_dim,
-                                wave_length=1000):
-         # the default type of Tensor is float32, leading to type mismatch
-         # in fp16 mode. Cast it to support fp16 mode.
-         h_idxs = torch.linspace(0, h - 1, h).to(device=device, dtype=dtype)
-         h_idxs = h_idxs.view((h, 1)) * q_stride
-
-         w_idxs = torch.linspace(0, w - 1, w).to(device=device, dtype=dtype)
-         w_idxs = w_idxs.view((w, 1)) * q_stride
-
-         h_kv_idxs = torch.linspace(0, h_kv - 1, h_kv).to(
-             device=device, dtype=dtype)
-         h_kv_idxs = h_kv_idxs.view((h_kv, 1)) * kv_stride
-
-         w_kv_idxs = torch.linspace(0, w_kv - 1, w_kv).to(
-             device=device, dtype=dtype)
-         w_kv_idxs = w_kv_idxs.view((w_kv, 1)) * kv_stride
-
-         # (h, h_kv, 1)
-         h_diff = h_idxs.unsqueeze(1) - h_kv_idxs.unsqueeze(0)
-         h_diff *= self.position_magnitude
-
-         # (w, w_kv, 1)
-         w_diff = w_idxs.unsqueeze(1) - w_kv_idxs.unsqueeze(0)
-         w_diff *= self.position_magnitude
-
-         feat_range = torch.arange(0, feat_dim / 4).to(
-             device=device, dtype=dtype)
-
-         dim_mat = torch.Tensor([wave_length]).to(device=device, dtype=dtype)
-         dim_mat = dim_mat**((4. / feat_dim) * feat_range)
-         dim_mat = dim_mat.view((1, 1, -1))
-
-         embedding_x = torch.cat(
-             ((w_diff / dim_mat).sin(), (w_diff / dim_mat).cos()), dim=2)
-
-         embedding_y = torch.cat(
-             ((h_diff / dim_mat).sin(), (h_diff / dim_mat).cos()), dim=2)
-
-         return embedding_x, embedding_y
-
-     def forward(self, x_input):
-         num_heads = self.num_heads
-
-         # use empirical_attention
-         if self.q_downsample is not None:
-             x_q = self.q_downsample(x_input)
-         else:
-             x_q = x_input
-         n, _, h, w = x_q.shape
-
-         if self.kv_downsample is not None:
-             x_kv = self.kv_downsample(x_input)
-         else:
-             x_kv = x_input
-         _, _, h_kv, w_kv = x_kv.shape
-
-         if self.attention_type[0] or self.attention_type[1]:
-             proj_query = self.query_conv(x_q).view(
-                 (n, num_heads, self.qk_embed_dim, h * w))
-             proj_query = proj_query.permute(0, 1, 3, 2)
-
-         if self.attention_type[0] or self.attention_type[2]:
-             proj_key = self.key_conv(x_kv).view(
-                 (n, num_heads, self.qk_embed_dim, h_kv * w_kv))
-
-         if self.attention_type[1] or self.attention_type[3]:
-             position_embed_x, position_embed_y = self.get_position_embedding(
-                 h, w, h_kv, w_kv, self.q_stride, self.kv_stride,
-                 x_input.device, x_input.dtype, self.position_embedding_dim)
-             # (n, num_heads, w, w_kv, dim)
-             position_feat_x = self.appr_geom_fc_x(position_embed_x).\
-                 view(1, w, w_kv, num_heads, self.qk_embed_dim).\
-                 permute(0, 3, 1, 2, 4).\
-                 repeat(n, 1, 1, 1, 1)
-
-             # (n, num_heads, h, h_kv, dim)
-             position_feat_y = self.appr_geom_fc_y(position_embed_y).\
-                 view(1, h, h_kv, num_heads, self.qk_embed_dim).\
-                 permute(0, 3, 1, 2, 4).\
-                 repeat(n, 1, 1, 1, 1)
-
-             position_feat_x /= math.sqrt(2)
-             position_feat_y /= math.sqrt(2)
-
-         # accelerate for saliency only
-         if (np.sum(self.attention_type) == 1) and self.attention_type[2]:
-             appr_bias = self.appr_bias.\
-                 view(1, num_heads, 1, self.qk_embed_dim).\
-                 repeat(n, 1, 1, 1)
-
-             energy = torch.matmul(appr_bias, proj_key).\
-                 view(n, num_heads, 1, h_kv * w_kv)
-
-             h = 1
-             w = 1
-         else:
-             # (n, num_heads, h*w, h_kv*w_kv), query before key, 540mb for
-             if not self.attention_type[0]:
-                 energy = torch.zeros(
-                     n,
-                     num_heads,
-                     h,
-                     w,
-                     h_kv,
-                     w_kv,
-                     dtype=x_input.dtype,
-                     device=x_input.device)
-
-             # attention_type[0]: appr - appr
-             # attention_type[1]: appr - position
-             # attention_type[2]: bias - appr
-             # attention_type[3]: bias - position
-             if self.attention_type[0] or self.attention_type[2]:
-                 if self.attention_type[0] and self.attention_type[2]:
-                     appr_bias = self.appr_bias.\
-                         view(1, num_heads, 1, self.qk_embed_dim)
-                     energy = torch.matmul(proj_query + appr_bias, proj_key).\
-                         view(n, num_heads, h, w, h_kv, w_kv)
-
-                 elif self.attention_type[0]:
-                     energy = torch.matmul(proj_query, proj_key).\
-                         view(n, num_heads, h, w, h_kv, w_kv)
-
-                 elif self.attention_type[2]:
-                     appr_bias = self.appr_bias.\
-                         view(1, num_heads, 1, self.qk_embed_dim).\
-                         repeat(n, 1, 1, 1)
-
-                     energy += torch.matmul(appr_bias, proj_key).\
-                         view(n, num_heads, 1, 1, h_kv, w_kv)
-
-             if self.attention_type[1] or self.attention_type[3]:
-                 if self.attention_type[1] and self.attention_type[3]:
-                     geom_bias = self.geom_bias.\
-                         view(1, num_heads, 1, self.qk_embed_dim)
-
-                     proj_query_reshape = (proj_query + geom_bias).\
-                         view(n, num_heads, h, w, self.qk_embed_dim)
-
-                     energy_x = torch.matmul(
-                         proj_query_reshape.permute(0, 1, 3, 2, 4),
-                         position_feat_x.permute(0, 1, 2, 4, 3))
-                     energy_x = energy_x.\
-                         permute(0, 1, 3, 2, 4).unsqueeze(4)
-
-                     energy_y = torch.matmul(
-                         proj_query_reshape,
-                         position_feat_y.permute(0, 1, 2, 4, 3))
-                     energy_y = energy_y.unsqueeze(5)
-
-                     energy += energy_x + energy_y
-
-                 elif self.attention_type[1]:
-                     proj_query_reshape = proj_query.\
-                         view(n, num_heads, h, w, self.qk_embed_dim)
-                     proj_query_reshape = proj_query_reshape.\
-                         permute(0, 1, 3, 2, 4)
-                     position_feat_x_reshape = position_feat_x.\
-                         permute(0, 1, 2, 4, 3)
-                     position_feat_y_reshape = position_feat_y.\
-                         permute(0, 1, 2, 4, 3)
-
-                     energy_x = torch.matmul(proj_query_reshape,
-                                             position_feat_x_reshape)
-                     energy_x = energy_x.permute(0, 1, 3, 2, 4).unsqueeze(4)
-
-                     energy_y = torch.matmul(proj_query_reshape,
-                                             position_feat_y_reshape)
-                     energy_y = energy_y.unsqueeze(5)
-
-                     energy += energy_x + energy_y
-
-                 elif self.attention_type[3]:
-                     geom_bias = self.geom_bias.\
-                         view(1, num_heads, self.qk_embed_dim, 1).\
-                         repeat(n, 1, 1, 1)
-
-                     position_feat_x_reshape = position_feat_x.\
-                         view(n, num_heads, w * w_kv, self.qk_embed_dim)
-
-                     position_feat_y_reshape = position_feat_y.\
-                         view(n, num_heads, h * h_kv, self.qk_embed_dim)
-
-                     energy_x = torch.matmul(position_feat_x_reshape, geom_bias)
-                     energy_x = energy_x.view(n, num_heads, 1, w, 1, w_kv)
-
-                     energy_y = torch.matmul(position_feat_y_reshape, geom_bias)
-                     energy_y = energy_y.view(n, num_heads, h, 1, h_kv, 1)
-
-                     energy += energy_x + energy_y
-
-             energy = energy.view(n, num_heads, h * w, h_kv * w_kv)
-
-         if self.spatial_range >= 0:
-             cur_local_constraint_map = \
-                 self.local_constraint_map[:h, :w, :h_kv, :w_kv].\
-                 contiguous().\
-                 view(1, 1, h * w, h_kv * w_kv)
-
-             energy = energy.masked_fill_(cur_local_constraint_map,
-                                          float('-inf'))
-
-         attention = F.softmax(energy, 3)
-
-         proj_value = self.value_conv(x_kv)
-         proj_value_reshape = proj_value.\
-             view((n, num_heads, self.v_dim, h_kv * w_kv)).\
-             permute(0, 1, 3, 2)
-
-         out = torch.matmul(attention, proj_value_reshape).\
-             permute(0, 1, 3, 2).\
-             contiguous().\
-             view(n, self.v_dim * self.num_heads, h, w)
-
-         out = self.proj_conv(out)
-
-         # output is downsampled, upsample back to input size
-         if self.q_downsample is not None:
-             out = F.interpolate(
-                 out,
-                 size=x_input.shape[2:],
-                 mode='bilinear',
-                 align_corners=False)
-
-         out = self.gamma * out + x_input
-         return out
-
-     def init_weights(self):
-         for m in self.modules():
-             if hasattr(m, 'kaiming_init') and m.kaiming_init:
-                 kaiming_init(
-                     m,
-                     mode='fan_in',
-                     nonlinearity='leaky_relu',
-                     bias=0,
-                     distribution='uniform',
-                     a=1)
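A quick shape-check sketch for the module above; the channel count, head count, and input size are arbitrary, and the import path simply mirrors where the deleted file lived:

```python
import torch
from annotator.uniformer.mmcv.cnn.bricks.generalized_attention import GeneralizedAttention

# 256 channels, 8 heads -> qk_embed_dim = 32; all four attention items enabled.
attn = GeneralizedAttention(in_channels=256, num_heads=8, kv_stride=2,
                            attention_type='1111')
x = torch.randn(2, 256, 32, 32)
out = attn(x)                 # residual output: gamma * attention(x) + x
assert out.shape == x.shape
```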
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/logger/pavi.py DELETED
@@ -1,117 +0,0 @@
- # Copyright (c) OpenMMLab. All rights reserved.
- import json
- import os
- import os.path as osp
-
- import torch
- import yaml
-
- import annotator.uniformer.mmcv as mmcv
- from ....parallel.utils import is_module_wrapper
- from ...dist_utils import master_only
- from ..hook import HOOKS
- from .base import LoggerHook
-
-
- @HOOKS.register_module()
- class PaviLoggerHook(LoggerHook):
-
-     def __init__(self,
-                  init_kwargs=None,
-                  add_graph=False,
-                  add_last_ckpt=False,
-                  interval=10,
-                  ignore_last=True,
-                  reset_flag=False,
-                  by_epoch=True,
-                  img_key='img_info'):
-         super(PaviLoggerHook, self).__init__(interval, ignore_last, reset_flag,
-                                              by_epoch)
-         self.init_kwargs = init_kwargs
-         self.add_graph = add_graph
-         self.add_last_ckpt = add_last_ckpt
-         self.img_key = img_key
-
-     @master_only
-     def before_run(self, runner):
-         super(PaviLoggerHook, self).before_run(runner)
-         try:
-             from pavi import SummaryWriter
-         except ImportError:
-             raise ImportError('Please run "pip install pavi" to install pavi.')
-
-         self.run_name = runner.work_dir.split('/')[-1]
-
-         if not self.init_kwargs:
-             self.init_kwargs = dict()
-         self.init_kwargs['name'] = self.run_name
-         self.init_kwargs['model'] = runner._model_name
-         if runner.meta is not None:
-             if 'config_dict' in runner.meta:
-                 config_dict = runner.meta['config_dict']
-                 assert isinstance(
-                     config_dict,
-                     dict), ('meta["config_dict"] has to be of a dict, '
-                             f'but got {type(config_dict)}')
-             elif 'config_file' in runner.meta:
-                 config_file = runner.meta['config_file']
-                 config_dict = dict(mmcv.Config.fromfile(config_file))
-             else:
-                 config_dict = None
-             if config_dict is not None:
-                 # 'max_.*iter' is parsed in pavi sdk as the maximum iterations
-                 # to properly set up the progress bar.
-                 config_dict = config_dict.copy()
-                 config_dict.setdefault('max_iter', runner.max_iters)
-                 # non-serializable values are first converted in
-                 # mmcv.dump to json
-                 config_dict = json.loads(
-                     mmcv.dump(config_dict, file_format='json'))
-                 session_text = yaml.dump(config_dict)
-                 self.init_kwargs['session_text'] = session_text
-         self.writer = SummaryWriter(**self.init_kwargs)
-
-     def get_step(self, runner):
-         """Get the total training step/epoch."""
-         if self.get_mode(runner) == 'val' and self.by_epoch:
-             return self.get_epoch(runner)
-         else:
-             return self.get_iter(runner)
-
-     @master_only
-     def log(self, runner):
-         tags = self.get_loggable_tags(runner, add_mode=False)
-         if tags:
-             self.writer.add_scalars(
-                 self.get_mode(runner), tags, self.get_step(runner))
-
-     @master_only
-     def after_run(self, runner):
-         if self.add_last_ckpt:
-             ckpt_path = osp.join(runner.work_dir, 'latest.pth')
-             if osp.islink(ckpt_path):
-                 ckpt_path = osp.join(runner.work_dir, os.readlink(ckpt_path))
-
-             if osp.isfile(ckpt_path):
-                 # runner.epoch += 1 has been done before `after_run`.
-                 iteration = runner.epoch if self.by_epoch else runner.iter
-                 return self.writer.add_snapshot_file(
-                     tag=self.run_name,
-                     snapshot_file_path=ckpt_path,
-                     iteration=iteration)
-
-         # flush the buffer and send a task ending signal to Pavi
-         self.writer.close()
-
-     @master_only
-     def before_epoch(self, runner):
-         if runner.epoch == 0 and self.add_graph:
-             if is_module_wrapper(runner.model):
-                 _model = runner.model.module
-             else:
-                 _model = runner.model
-             device = next(_model.parameters()).device
-             data = next(iter(runner.data_loader))
-             image = data[self.img_key][0:1].to(device)
-             with torch.no_grad():
-                 self.writer.add_graph(_model, image)
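In an mmcv config, registered logger hooks are usually enabled through `log_config`; a hedged sketch of how this hook might be wired up (the contents of `init_kwargs` are forwarded to pavi's `SummaryWriter` and the keys shown are only illustrative):

```python
# Sketch: the init_kwargs keys are assumptions about the pavi SDK, not verified.
log_config = dict(
    interval=10,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(type='PaviLoggerHook',
             init_kwargs=dict(project='my-project'),  # illustrative
             add_last_ckpt=True),
    ])
```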
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/_distutils_hack/override.py DELETED
@@ -1 +0,0 @@
- __import__('_distutils_hack').do_override()
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/py38compat.py DELETED
@@ -1,8 +0,0 @@
- def aix_platform(osname, version, release):
-     try:
-         import _aix_support
-
-         return _aix_support.aix_platform()
-     except ImportError:
-         pass
-     return "{}-{}.{}".format(osname, version, release)
spaces/AtomdffAI/wechatgpt4atom/channel/wechat/wechat_channel.py DELETED
@@ -1,176 +0,0 @@
- # encoding:utf-8
-
- """
- wechat channel
- """
- import itchat
- import json
- from itchat.content import *
- from channel.channel import Channel
- from concurrent.futures import ThreadPoolExecutor
- from common.log import logger
- from config import conf
- import requests
- import io
-
- thread_pool = ThreadPoolExecutor(max_workers=8)
-
-
- class WechatChannel(Channel):
-
-     qrcode = b''
-
-     newInstance = None
-
-     def __init__(self):
-         pass
-
-     def startup(self):
-         # login by scanning the QR code
-         newInstance = itchat.load_sync_itchat()
-         self.newInstance = newInstance
-
-         @newInstance.msg_register(TEXT)
-         def handler_single_msg(msg):
-             self.handle(msg)
-             return None
-
-         @newInstance.msg_register(TEXT, isGroupChat=True)
-         def handler_group_msg(msg):
-             self.handle_group(msg)
-             return None
-
-         newInstance.auto_login(qrCallback=self.qrCallback)
-         # start message listener
-         newInstance.run()
-
-     def qrCallback(self, uuid, status, qrcode):
-         self.qrcode = qrcode
-
-     def getQrCode(self):
-         return self.qrcode
-
-     def handle(self, msg):
-         logger.debug("[WX]receive msg: " + json.dumps(msg, ensure_ascii=False))
-         from_user_id = msg['FromUserName']
-         to_user_id = msg['ToUserName']  # recipient id
-         other_user_id = msg['User']['UserName']  # the other party's id
-         content = msg['Text']
-         match_prefix = self.check_prefix(content, conf().get('single_chat_prefix'))
-         if from_user_id == other_user_id and match_prefix is not None:
-             # a friend sent a message to this account
-             if match_prefix != '':
-                 str_list = content.split(match_prefix, 1)
-                 if len(str_list) == 2:
-                     content = str_list[1].strip()
-
-             img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix'))
-             if img_match_prefix:
-                 content = content.split(img_match_prefix, 1)[1].strip()
-                 thread_pool.submit(self._do_send_img, content, from_user_id)
-             else:
-                 thread_pool.submit(self._do_send, content, from_user_id)
-
-         elif to_user_id == other_user_id and match_prefix:
-             # this account sent a message to a friend
-             str_list = content.split(match_prefix, 1)
-             if len(str_list) == 2:
-                 content = str_list[1].strip()
-             img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix'))
-             if img_match_prefix:
-                 content = content.split(img_match_prefix, 1)[1].strip()
-                 thread_pool.submit(self._do_send_img, content, to_user_id)
-             else:
-                 thread_pool.submit(self._do_send, content, to_user_id)
-
-     def handle_group(self, msg):
-         logger.debug("[WX]receive group msg: " + json.dumps(msg, ensure_ascii=False))
-         group_name = msg['User'].get('NickName', None)
-         group_id = msg['User'].get('UserName', None)
-         if not group_name:
-             return ""
-         origin_content = msg['Content']
-         content = msg['Content']
-         content_list = content.split(' ', 1)
-         context_special_list = content.split('\u2005', 1)
-         if len(context_special_list) == 2:
-             content = context_special_list[1]
-         elif len(content_list) == 2:
-             content = content_list[1]
-
-         config = conf()
-         match_prefix = (msg['IsAt'] and not config.get("group_at_off", False)) or self.check_prefix(origin_content, config.get('group_chat_prefix')) \
-             or self.check_contain(origin_content, config.get('group_chat_keyword'))
-         if ('ALL_GROUP' in config.get('group_name_white_list') or group_name in config.get('group_name_white_list') or self.check_contain(group_name, config.get('group_name_keyword_white_list'))) and match_prefix:
-             img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix'))
-             if img_match_prefix:
-                 content = content.split(img_match_prefix, 1)[1].strip()
-                 thread_pool.submit(self._do_send_img, content, group_id)
-             else:
-                 thread_pool.submit(self._do_send_group, content, msg)
-
-     def send(self, msg, receiver):
-         logger.info('[WX] sendMsg={}, receiver={}'.format(msg, receiver))
-         self.newInstance.send(msg, toUserName=receiver)
-
-     def _do_send(self, query, reply_user_id):
-         try:
-             if not query:
-                 return
-             context = dict()
-             context['from_user_id'] = reply_user_id
-             reply_text = super().build_reply_content(query, context)
-             if reply_text:
-                 self.send(conf().get("single_chat_reply_prefix") + reply_text, reply_user_id)
-         except Exception as e:
-             logger.exception(e)
-
-     def _do_send_img(self, query, reply_user_id):
-         try:
-             if not query:
-                 return
-             context = dict()
-             context['type'] = 'IMAGE_CREATE'
-             img_url = super().build_reply_content(query, context)
-             if not img_url:
-                 return
-
-             # download the image
-             pic_res = requests.get(img_url, stream=True)
-             image_storage = io.BytesIO()
-             for block in pic_res.iter_content(1024):
-                 image_storage.write(block)
-             image_storage.seek(0)
-
-             # send the image
-             logger.info('[WX] sendImage, receiver={}'.format(reply_user_id))
-             self.newInstance.send_image(image_storage, reply_user_id)
-         except Exception as e:
-             logger.exception(e)
-
-     def _do_send_group(self, query, msg):
-         if not query:
-             return
-         context = dict()
-         context['from_user_id'] = msg['ActualUserName']
-         reply_text = super().build_reply_content(query, context)
-         if reply_text:
-             reply_text = '@' + msg['ActualNickName'] + ' ' + reply_text.strip()
-             self.send(conf().get("group_chat_reply_prefix", "") + reply_text, msg['User']['UserName'])
-
-     def check_prefix(self, content, prefix_list):
-         for prefix in prefix_list:
-             if content.startswith(prefix):
-                 return prefix
-         return None
-
-     def check_contain(self, content, keyword_list):
-         if not keyword_list:
-             return None
-         for ky in keyword_list:
-             if content.find(ky) != -1:
-                 return True
-         return None
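The channel reads its behaviour from the project config via `conf()`. A hedged sketch of the keys the code above actually looks up; the values are only examples, not defaults from the project:

```python
# Keys referenced by WechatChannel; the values shown are illustrative.
config = {
    "single_chat_prefix": ["bot", "@bot"],        # trigger words for 1:1 chats
    "single_chat_reply_prefix": "[bot] ",
    "group_chat_prefix": ["@bot"],                # trigger words inside groups
    "group_chat_keyword": [],
    "group_name_white_list": ["ALL_GROUP"],       # "ALL_GROUP" enables every group
    "group_name_keyword_white_list": [],
    "group_chat_reply_prefix": "",
    "group_at_off": False,
    "image_create_prefix": ["draw", "画"],        # switches to IMAGE_CREATE mode
}
```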
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/datasets/README.md DELETED
@@ -1,140 +0,0 @@
- # Use Builtin Datasets
-
- A dataset can be used by accessing [DatasetCatalog](https://detectron2.readthedocs.io/modules/data.html#detectron2.data.DatasetCatalog)
- for its data, or [MetadataCatalog](https://detectron2.readthedocs.io/modules/data.html#detectron2.data.MetadataCatalog) for its metadata (class names, etc).
- This document explains how to setup the builtin datasets so they can be used by the above APIs.
- [Use Custom Datasets](https://detectron2.readthedocs.io/tutorials/datasets.html) gives a deeper dive on how to use `DatasetCatalog` and `MetadataCatalog`,
- and how to add new datasets to them.
-
- Detectron2 has builtin support for a few datasets.
- The datasets are assumed to exist in a directory specified by the environment variable
- `DETECTRON2_DATASETS`.
- Under this directory, detectron2 will look for datasets in the structure described below, if needed.
- ```
- $DETECTRON2_DATASETS/
-   coco/
-   lvis/
-   cityscapes/
-   VOC20{07,12}/
- ```
-
- You can set the location for builtin datasets by `export DETECTRON2_DATASETS=/path/to/datasets`.
- If left unset, the default is `./datasets` relative to your current working directory.
-
- The [model zoo](https://github.com/facebookresearch/detectron2/blob/master/MODEL_ZOO.md)
- contains configs and models that use these builtin datasets.
-
- ## Expected dataset structure for [COCO instance/keypoint detection](https://cocodataset.org/#download):
-
- ```
- coco/
-   annotations/
-     instances_{train,val}2017.json
-     person_keypoints_{train,val}2017.json
-   {train,val}2017/
-     # image files that are mentioned in the corresponding json
- ```
-
- You can use the 2014 version of the dataset as well.
-
- Some of the builtin tests (`dev/run_*_tests.sh`) use a tiny version of the COCO dataset,
- which you can download with `./datasets/prepare_for_tests.sh`.
-
- ## Expected dataset structure for PanopticFPN:
-
- Extract panoptic annotations from [COCO website](https://cocodataset.org/#download)
- into the following structure:
- ```
- coco/
-   annotations/
-     panoptic_{train,val}2017.json
-   panoptic_{train,val}2017/  # png annotations
-   panoptic_stuff_{train,val}2017/  # generated by the script mentioned below
- ```
-
- Install panopticapi by:
- ```
- pip install git+https://github.com/cocodataset/panopticapi.git
- ```
- Then, run `python datasets/prepare_panoptic_fpn.py`, to extract semantic annotations from panoptic annotations.
-
- ## Expected dataset structure for [LVIS instance segmentation](https://www.lvisdataset.org/dataset):
- ```
- coco/
-   {train,val,test}2017/
- lvis/
-   lvis_v0.5_{train,val}.json
-   lvis_v0.5_image_info_test.json
-   lvis_v1_{train,val}.json
-   lvis_v1_image_info_test{,_challenge}.json
- ```
-
- Install lvis-api by:
- ```
- pip install git+https://github.com/lvis-dataset/lvis-api.git
- ```
-
- To evaluate models trained on the COCO dataset using LVIS annotations,
- run `python datasets/prepare_cocofied_lvis.py` to prepare "cocofied" LVIS annotations.
-
- ## Expected dataset structure for [cityscapes](https://www.cityscapes-dataset.com/downloads/):
- ```
- cityscapes/
-   gtFine/
-     train/
-       aachen/
-         color.png, instanceIds.png, labelIds.png, polygons.json,
-         labelTrainIds.png
-       ...
-     val/
-     test/
-     # below are generated Cityscapes panoptic annotation
-     cityscapes_panoptic_train.json
-     cityscapes_panoptic_train/
-     cityscapes_panoptic_val.json
-     cityscapes_panoptic_val/
-     cityscapes_panoptic_test.json
-     cityscapes_panoptic_test/
-   leftImg8bit/
-     train/
-     val/
-     test/
- ```
- Install cityscapes scripts by:
- ```
- pip install git+https://github.com/mcordts/cityscapesScripts.git
- ```
-
- Note: to create labelTrainIds.png, first prepare the above structure, then run cityscapesScripts with:
- ```
- CITYSCAPES_DATASET=/path/to/abovementioned/cityscapes python cityscapesscripts/preparation/createTrainIdLabelImgs.py
- ```
- These files are not needed for instance segmentation.
-
- Note: to generate Cityscapes panoptic dataset, run cityscapesScripts with:
- ```
- CITYSCAPES_DATASET=/path/to/abovementioned/cityscapes python cityscapesscripts/preparation/createPanopticImgs.py
- ```
- These files are not needed for semantic and instance segmentation.
-
- ## Expected dataset structure for [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/index.html):
- ```
- VOC20{07,12}/
-   Annotations/
-   ImageSets/
-     Main/
-       trainval.txt
-       test.txt
-       # train.txt or val.txt, if you use these splits
-   JPEGImages/
- ```
-
- ## Expected dataset structure for [ADE20k Scene Parsing](http://sceneparsing.csail.mit.edu/):
- ```
- ADEChallengeData2016/
-   annotations/
-   annotations_detectron2/
-   images/
-   objectInfo150.txt
- ```
- The directory `annotations_detectron2` is generated by running `python datasets/prepare_ade20k_sem_seg.py`.
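As the introduction above notes, once a dataset is laid out and registered it is accessed through `DatasetCatalog` and `MetadataCatalog`; a short sketch (assumes the COCO files are arranged as described in this document):

```python
from detectron2.data import DatasetCatalog, MetadataCatalog

dataset_dicts = DatasetCatalog.get("coco_2017_train")   # list of per-image dicts
metadata = MetadataCatalog.get("coco_2017_train")       # class names, colors, etc.
print(len(dataset_dicts), metadata.thing_classes[:5])
```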
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/proposal_generator/__init__.py DELETED
@@ -1,5 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates.
- from .build import PROPOSAL_GENERATOR_REGISTRY, build_proposal_generator
- from .rpn import RPN_HEAD_REGISTRY, build_rpn_head, RPN, StandardRPNHead
-
- __all__ = list(globals().keys())
spaces/Benson/text-generation/Examples/Ai Tipo De Teclado Ms Apk Completo Agrietado.md DELETED
@@ -1,87 +0,0 @@
-
- <h1>What is APK JustPlay and how do you download it?</h1>
- <p>If you are a gaming enthusiast who loves to play on your mobile device, you may have heard of APK JustPlay. But what exactly is it, and how can you download it? In this article we will answer these questions and more. We will explain what APK JustPlay is, what its benefits are, how to download it, and how to use it. By the end of this article you will have a clear idea of what APK JustPlay has to offer and how you can join the ultimate loyalty program for gamers.</p>
- <h2>ai type keyboard plus apk full cracked</h2><br /><p><b><b>Download</b> &#10040; <a href="https://bltlly.com/2v6M54">https://bltlly.com/2v6M54</a></b></p><br /><br />
- <h2>Introduction</h2>
- <h3>What is APK JustPlay?</h3>
- <p>APK JustPlay is a mobile application that rewards you with loyalty coins for playing games on your device. It is a unique loyalty program that offers you a one-of-a-kind collection of games you will not find anywhere else. You can earn loyalty coins for the time you spend playing these games, and then redeem them for real rewards or donate them to fantastic charities. APK JustPlay is developed by JustPlay GmbH, a mobile entertainment group based in Germany that creates apps for mobile phones and tablets.</p>
- <h3>What are the benefits of APK JustPlay?</h3>
- <p>APK JustPlay has many benefits for players who want to get something back from their passion for gaming. Here are some of them:</p>
- <ul>
- <li>You can earn money daily by collecting loyalty points for playing games you enjoy.</li>
- <li>You can choose between PayPal payouts, gift cards, or donations to support causes close to your heart.</li>
- <li>You can enjoy daily payouts every 3 hours, giving you the flexibility to redeem your coins whenever you want.</li>
- <li>You can discover a unique collection of games that are exclusive to APK JustPlay, and enjoy a variety of genres and themes.</li>
- <li>You can make a difference by donating your earnings to your preferred charity - and APK JustPlay will match every dollar you give!</li>
- <li>You can join more than 10 million satisfied players who are part of the JustPlay community and share your gaming experience with them.</li>
- </ul>
- <h2>How do you download APK JustPlay?</h2>
- <h3>Step 1: Visit the official APK JustPlay website</h3>
- <p>The first step to downloading APK JustPlay is to visit its official website at <a href="https://apkcombo.com/justplay/com.playjust.app/">https://apkcombo.com/justplay/com.playjust.app/</a>. There you will find all the information you need about the app, such as its features, screenshots, reviews, and more. You will also see a button that says "Download APK (33 MB)". Click it to proceed to the next step.</p>
- <h3>Step 2: Choose your device and download the APK file</h3>
- <p>The next step is to choose your device and download the APK file. An APK file is an Android package file that contains all the files and code needed to install an application on your device. You will see a list of devices that are compatible with APK JustPlay, such as Android 7.0+, Android TV & Tablet, PC Windows, etc. Choose the one that matches your device and click it. You will then see a link that says "Download now". Click it and wait for the download to complete.</p>
- <h3>Step 3: Enable unknown sources on your device</h3>
- <p>The third step is to enable unknown sources on your device. This is a security setting that allows you to install apps from sources other than the official Google Play Store. To enable unknown sources, go to your device settings and look for the security or privacy option. Then find the option that says "Allow installation of apps from unknown sources" or something similar. Turn it on and confirm your choice. This will let you install APK JustPlay on your device.</p>
- <h3>Step 4: Install the APK file and launch the app</h3>
- <p>The fourth and final step is to install the APK file and launch the app. Find the downloaded file on your device, tap it, and confirm the installation. Once it finishes, open JustPlay and you are ready to start playing and earning.</p>
- <h2>How do you use APK JustPlay?</h2>
- <h3>How do you earn loyalty coins by playing?</h3>
- <p>Now that you have downloaded APK JustPlay, you may be wondering how to use it and earn loyalty coins by playing. It is very simple and fun. Here is how:</p>
- <ol>
- <li>On the app's home screen you will see a list of games that are available to play. You can scroll through them and pick the ones that interest you.</li>
- <li>Tap the game's icon and you will see a pop-up showing how many coins you can earn per minute by playing that game. You will also see a button that says "Play now". Tap it and the game will start loading.</li>
- <li>Enjoy playing the game for as long as you like. The more time you spend playing, the more coins you earn. You can see your coin balance in the top right corner of the screen.</li>
- <li>When you are done playing, tap the back button and you will return to the app's home screen. You will see your updated coin balance and a message congratulating you on your earnings.</li>
- </ol>
- <h3>How do you redeem your coins for real rewards or donate to charities?</h3>
- <p>Once you have accumulated enough coins by playing, you can redeem them for real rewards or donate them to charities. Here is how:</p>
- <ol>
- <li>Tap the menu icon in the top left corner of the screen and select "Rewards".</li>
- <li>You will see a list of rewards available for you to choose from. You can filter them by category, such as cash, gift cards, or donations.</li>
- <li>Select the reward you want and tap it. You will see a pop-up showing how many coins you need to redeem that reward and a button that says "Redeem now". Tap it and confirm your choice.</li>
- <li>You will receive an email with instructions on how to claim your reward or make your donation. Follow the instructions and enjoy your reward, or feel good about your donation.</li>
- </ol>
- <h3>How do you discover new games and themes?</h3>
- <p>APK JustPlay regularly adds new games and themes for you to explore. Here is how to discover them:</p>
- <ol>
- <li>Tap the menu icon in the top left corner of the screen and select "Discover".</li>
- <li>You will see a list of themes available for you to explore, such as action, adventure, puzzle, sports, etc.</li>
- <li>Select the theme you want and tap it. You will see a list of games that belong to that theme.</li>
- <li>Tap any game icon and you will see a pop-up with a short description of the game and a button that says "Play now". Tap it and start playing.</li>
- </ol>
- <h2>Conclusion</h2>
- <h3>Summary of the main points</h3>
- <p>In conclusion, APK JustPlay is an amazing app that rewards you with loyalty coins for playing games on your device. You can earn money daily by collecting loyalty points for playing games you enjoy. You can choose between PayPal withdrawals, gift cards, or donations to support causes close to your heart. You can enjoy daily payouts every 3 hours, which gives you the flexibility to redeem your coins whenever you want. You can discover a unique collection of games that are exclusive to APK JustPlay, and enjoy a variety of genres and themes. You can make a difference by donating your earnings to your preferred charity - and APK JustPlay will match every dollar you give! You can join more than 10 million satisfied players who are part of the JustPlay community and share your gaming experience with them.</p>
55
- <p></p>
56
- <h3>Llamada a la acción e invitación a unirse a la comunidad JustPlay</h3>
57
- <p>Si usted está listo para unirse al último programa de lealtad para los jugadores, descargar APK JustPlay hoy y empezar a ganar monedas de lealtad jugando. Es gratis, fácil y divertido. No te arrepentirás. APK JustPlay es la mejor manera de disfrutar de los juegos y ser recompensado por ello. No se pierda esta oportunidad de unirse a la comunidad JustPlay y hacer una diferencia en el mundo. Descargar APK JustPlay ahora y empezar a jugar! </p>
58
- <h2>Preguntas frecuentes</h2>
59
- <p>Aquí hay algunas preguntas frecuentes sobre APK JustPlay:</p>
60
- <tabla>
61
- <tr>
62
- <th>Pregunta</th>
63
- <th>Respuesta</th>
64
-
65
- <tr>
66
- <td>¿Es APK JustPlay seguro y legal? </td>
67
- <td>Sí, APK JustPlay es seguro y legal. Es desarrollado por una empresa de buena reputación que sigue todas las reglas y reglamentos de la Google Play Store. No contiene ningún virus, malware o spyware. No requiere ninguna información personal ni acceso a los datos de su dispositivo. No interfiere con el rendimiento de su dispositivo o la duración de la batería. Es una aplicación legítima que le paga por jugar. </td>
68
- </tr>
69
- <tr>
70
- <td>¿Cuántos juegos están disponibles en APK JustPlay? </td>
71
- <td>APK JustPlay tiene más de 100 juegos que están disponibles para que usted juegue. Estos juegos son exclusivos de APK JustPlay y no los encontrarás en ningún otro lugar. Cubren una amplia gama de géneros y temas, tales como acción, aventura, rompecabezas, deportes, etc. Nunca te aburrirás con APK JustPlay.</td>
72
- </tr>
73
- <tr>
74
- <td>¿Cuánto dinero puedo hacer con APK JustPlay? </td>
75
- <td>La cantidad de dinero que puede hacer con APK JustPlay depende de cuánto tiempo pasas jugando y cuántas monedas ganas. Cuanto más juegas, más ganas. Puedes ganar hasta $10 por día jugando juegos en APK JustPlay. También puedes ganar monedas extra invitando a tus amigos a unirse a la aplicación, completando encuestas, viendo videos y participando en concursos. </td>
76
- </tr>
77
- <tr>
78
- <td>¿Cuáles son los requisitos mínimos para usar APK JustPlay? </td>
79
- <td>Para utilizar APK JustPlay, es necesario tener un dispositivo que se ejecuta en Android 7.0 o superior. También necesita tener una conexión a Internet y suficiente espacio de almacenamiento en su dispositivo. No es necesario tener una cuenta de Google o una tarjeta de crédito para utilizar APK JustPlay.</td>
80
- </tr>
81
- <tr>
82
- <td>¿Cómo puedo contactar al equipo de soporte de APK JustPlay? </td>
83
- <td>Si tiene alguna pregunta, problema o comentario sobre APK JustPlay, puede ponerse en contacto con el equipo de soporte enviando un correo electrónico a <a href="">[email protected]</a>. También puede visitar la página <a href=">FAQ</a> o la página <a href="">Página de Facebook</a> de APK JustPlay para obtener más información. </td>
84
- </tr>
85
- </tabla></p> 64aa2da5cf<br />
86
- <br />
87
- <br />
spaces/Benson/text-generation/Examples/Descargar Clash Mini Para PC.md DELETED
@@ -1,123 +0,0 @@
1
-
2
- <h1>Clash Mini: A Fun, Strategy-Packed Board Game</h1>
3
- <p>If you are a fan of the Clash Universe, you will love Clash Mini, a new game from Supercell that lets you duel in a fun board game. Collect, summon, and upgrade your army of Minis - adorable versions of your favorite Clash characters - and watch them clash in exciting real-time battles. Predict your opponent's moves and assemble your winning strategy and formation. Lead your army with iconic heroes such as the Barbarian King, the Archer Queen, the Shield Maiden, and more. Turn the tide of battle by swapping and upgrading your Minis between rounds. Play casually for fun or in ranked matches to climb the league standings. Clash Mini is easy to learn but hard to master. Get your Minis ready for the biggest rumble! </p>
4
- <h2>Download Clash Mini for PC</h2>
5
- <p>But what if you want to play Clash Mini on a bigger screen, with better controls and more performance? You can do exactly that by playing Clash Mini on your PC. In this article we show you how to download and install Clash Mini on your PC, how to play it, and some tips and tricks to help you win more games. </p>
6
- <h2>How to download and install Clash Mini on your PC</h2>
7
- <p>There are two main ways to play Clash Mini on your PC. One is to use Windows 11 and native Android emulation, the official way to run Android apps on Windows. The other is to use an Android emulator such as Bluestacks 5, third-party software that simulates an Android device on your PC. Both methods have their pros and cons, so choose the one that suits you best. </p>
8
- <h3>Option 1: Use Windows 11 and native Android emulation</h3>
9
-
10
- <p>To use this feature, you need a Windows 11 computer that meets the minimum requirements for running Android apps, as well as a Microsoft account and an Amazon account. Then follow these steps:</p>
11
-
12
- <ol>
13
- <li>Open the Microsoft Store app on your PC and search for "Windows Subsystem for Android". Install it on your PC.</li>
14
- <li>Open the Microsoft Store app again and search for "Amazon Appstore". Install it on your PC.</li>
15
- <li>Open the Amazon Appstore app on your PC and sign in with your Amazon account.</li>
16
- <li>Search for "Clash Mini" in the Amazon Appstore app and install it on your PC.</li>
17
- <li>Open the Start menu on your PC, search for "Clash Mini", and click it to launch the game.</li>
18
- </ol>
19
- <p>Congratulations - you have installed and launched Clash Mini on your PC using Windows 11 and native Android emulation. You can now enjoy the game on a bigger screen, with better graphics and faster performance, and play with mouse and keyboard or a controller. </p>
20
- <h3>Option 2: Use an Android emulator such as Bluestacks 5</h3>
21
- <p>If you do not have a Windows 11 computer, or you prefer a different method, you can use an Android emulator such as Bluestacks 5 to play Clash Mini on your PC. An Android emulator is software that simulates an Android device on your PC, letting you run Android apps and games there. Bluestacks 5 is one of the most popular and reliable Android emulators, with over 500 million users worldwide, offering high performance, compatibility, customization, and security for playing Android games on PC. </p>
22
- <p>To use this method, you need a PC that meets the minimum requirements for running Bluestacks 5, as well as a Google account. Then follow these steps:</p>
23
- <ol>
24
- <li>Go to the official Bluestacks 5 website and download the installer for your PC.</li>
25
-
26
- <li>Open Bluestacks 5 and sign in with your Google account.</li>
27
- <li>Go to the Google Play Store app in Bluestacks 5 and search for "Clash Mini". Install it in Bluestacks 5.</li>
28
- <li>Go to the Bluestacks 5 home screen, find "Clash Mini", and click it to launch the game.</li>
29
- </ol>
30
- <p>Congratulations - you have successfully installed and launched Clash Mini on your PC using Bluestacks 5. You can now enjoy the game on a bigger screen, with better graphics and faster performance, and play with mouse and keyboard or a controller. </p>
31
- <h2>How to play Clash Mini on your PC</h2>
32
- <p>Now that you have installed Clash Mini on your PC, you may be wondering how to play it. Don't worry - here are the basic steps and some tips:</p>
33
- <h3>Choose your Minis and Heroes</h3>
34
- <p>The first thing to do is choose your army of Minis and Heroes. Minis are cute versions of Clash characters with different abilities and battle roles. Heroes are powerful leaders who can boost your Minis and unleash special abilities. You can collect Minis and Heroes by opening chests, completing quests, or buying them with gems, and you can upgrade them with gold and cards. </p>
35
- <p>You can field up to eight Minis and one Hero. Customize your army to fit your preference and strategy, and build different decks for different modes and situations. To choose your Minis and Heroes, go to the Army tab in the main menu and drag and drop them into the slots; you can also tap them to see their stats and abilities. </p>
36
- <h3>Arrange your army on the board</h3>
37
- <p>The next thing to do is arrange your army on the board, where the battles take place. Each player has nine tiles on which to place Minis, and the board also has obstacles that can block or affect your Minis' movements and attacks. </p>
38
-
39
- <h3>Upgrade your Minis during battle</h3>
40
- <p>The third thing to do is upgrade your Minis during battle. Upgrades can make them stronger, faster, or more durable, and can unlock new abilities or effects, giving you an edge over your opponent. </p>
41
- <p>You upgrade your Minis during battle using the gold you earn by defeating enemy Minis or from chests. You can upgrade up to three times per round, but each upgrade costs more gold than the previous one. To upgrade, tap the upgrade button at the bottom of the screen and select the Mini you want to improve. </p>
42
- <h3>Use mouse and keyboard or a controller</h3>
43
- <p>The last thing to do is use mouse and keyboard or a controller to play. Playing Clash Mini on your PC gives you better controls and precision than a mobile device, and you can use either input method to interact with the game and perform various actions. </p>
44
- <p>Use the mouse to drag and drop your Minis onto the board, to click buttons and menus, and to scroll and zoom in and out. Use the keyboard for shortcuts and hotkeys for faster, easier play. You can also use a controller, as long as it is compatible with your PC and the game, and you can customize your controls and settings in the game's Options menu. </p>
45
- <h2>Tips and tricks for playing Clash Mini on your PC</h2>
46
- <p>Now that you know how to play Clash Mini on your PC, you may be looking for tips and tricks to improve your skills and win more games. Here are some:</p>
47
- <h3>Anticipate your opponent's moves</h3>
48
-
49
- <p>For example, if you see that your opponent has many ranged Minis, you may want to place some tanky Minis in front of them to block their shots. If your opponent has a Hero that can heal their Minis, you may want to focus on taking that Hero out first. If your opponent has a Mini that can stun or freeze your Minis, you may want to spread your Minis out or use one that can cleanse or grant immunity. </p>
50
- <h3>Adjust your strategy to the mode</h3>
51
- <p>Another important skill in Clash Mini is adjusting your strategy to the mode you are playing. There are different modes - Casual, Ranked, Friendly, and special events - and each has its own rules, objectives, rewards, and challenges. Adapt your strategy to the mode you are playing and the situation you are facing. </p>
52
- <p>For example, in Casual mode you can play for fun and experiment with different Minis and Heroes without worrying about losing trophies or rank. In Ranked mode you play more seriously and competitively to climb the leagues and earn rewards. In Friendly mode you play with or against your friends or clanmates for fun or practice. In special event modes you play with unique rules or modifiers that change the gameplay. </p>
53
- <h3>Experiment with different combinations and abilities</h3>
54
- <p>One of the most fun aspects of Clash Mini is experimenting with different combinations and abilities of Minis and Heroes. There are many of them, each with unique abilities and roles. Mix and match them to create different synergies and effects, and upgrade or swap them during battle to change their abilities or effects. </p>
55
-
56
- <h3>Sync your progress across devices</h3>
57
- <p>One of the most convenient features of Clash Mini is that you can sync your progress across devices. This means you can play the game on your PC or your mobile device without losing any data or progress, and switch devices at any time without trouble. </p>
58
- <p>To sync your progress across devices, link your game account to Google Play Games (for Android devices) or Game Center (for iOS devices). You also need an internet connection when you switch devices. To link your game account, go to the game's Settings menu and tap the Link button. </p>
59
- <h2>Conclusion</h2>
60
- <p>Clash Mini is a fun, strategy-packed board game that lets you duel in the Clash Universe. You can collect, summon, and upgrade your army of Minis and Heroes, watch them clash in exciting real-time battles, predict your opponent's moves, and assemble your winning strategy and formation. Play casually for fun or in ranked matches to climb the league standings. </p>
61
- <p>And if you want to play on a bigger screen, with better controls and more performance, you can do so on your PC - either with Windows 11 and native Android emulation, the official way to run Android apps on Windows, or with a third-party Android emulator such as Bluestacks 5. Both methods have their pros and cons, so choose the one that suits you best. </p>
62
- <p>Playing Clash Mini on your PC gives you better graphics, faster performance, and more precision than a mobile device. You can play with mouse and keyboard or a controller, and sync your progress across devices so you can switch between your PC and your mobile device whenever you want. </p>
63
-
64
- <p>So what are you waiting for? Download Clash Mini on your PC today and enjoy this fun, strategy-packed board game. You'll love it! </p>
65
- <h2>Frequently asked questions</h2>
66
- <h4>What are the minimum requirements to run Clash Mini on PC?</h4>
67
- <p>To run Clash Mini on PC using Windows 11 and native Android emulation, you need a Windows 11 computer that meets these minimum requirements:</p>
68
- <ul>
69
- <li>Processor: 1 gigahertz (GHz) or faster with 2 or more cores on a compatible 64-bit processor or system on a chip (SoC)</li>
70
- <li>Memory: 4 GB of RAM</li>
71
- <li>Storage: 64 GB or larger storage device</li>
72
- <li>Graphics card: Compatible with DirectX 12 or later, with a WDDM 2.0 driver</li>
73
- <li>Display: High-definition (720p) display larger than 9" diagonally, 8 bits per color channel</li>
74
- <li>Internet connection: Required to perform updates and to download and use some features</li>
75
- </ul>
76
- <p>To run Clash Mini on PC using Bluestacks 5, you need a PC that meets these minimum requirements:</p>
77
- <ul>
78
- <li>OS: Microsoft Windows 7 and above</li>
79
- <li>Processor: Intel or AMD processor</li>
80
- <li>RAM: At least 2 GB of RAM</li>
81
- <li>HDD: 5 GB of free disk space</li>
82
- <li>You must be an administrator on your PC</li>
83
- <li>Up-to-date graphics drivers from Microsoft or the chipset vendor</li>
84
- </ul>
85
- <h4>How can I get Google Play games on Windows 11?</h4>
86
- <p>If you want to use Google Play Games on Windows 11, you need to install it separately, since it is not available through the Amazon Appstore. To do so, follow these steps (see the command sketch after this list):</p>
87
- <ol>
88
- <li>Open the Windows Subsystem for Android app on your PC and go to the Developer options tab.</li>
89
- <li>Enable developer mode and ADB debugging.</li>
90
- <li>Download the Google Play Games APK file from a trusted source.</li>
91
- <li>Connect your PC to your Android device using a USB cable.</li>
92
- <li>Open a command prompt window on your PC and type "adb devices" to check whether your device is detected.</li>
93
-
94
- <li>Open the Google Play Games app on your PC and sign in with your Google account.</li>
95
- </ol>
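The list above leaves the actual commands implicit (the source also omits the install step itself). A minimal command-line sketch, with a hypothetical APK file name:

```
# List connected devices; your device must appear here before continuing.
adb devices
# Sideload the downloaded package (the file name is illustrative).
adb install google-play-games.apk
```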
96
- <h4>Which Minis and Heroes are best to use in Clash Mini?</h4>
97
- <p>The answer depends on your personal preference and strategy, but some general tips are:</p>
98
- <ul>
99
- <li>Use Minis and Heroes with similar or complementary abilities or effects, such as fire damage, healing, or shielding.</li>
100
- <li>Use Minis and Heroes with opposing or countering abilities or effects, such as damage reduction, immunity, or cleansing.</li>
101
- <li>Use Minis and Heroes that have special interactions with each other, such as the Prince and the Princess.</li>
102
- <li>Use Minis and Heroes that suit the mode you are playing: Casual, Ranked, Friendly, or special events.</li>
103
- </ul>
104
- <h4>How can I earn rewards and points in Clash Mini?</h4>
105
- <p>You can earn rewards and points in Clash Mini through various in-game activities, such as:</p>
106
- <ul>
107
- <li>Winning battles in Casual, Ranked, Friendly, or special event modes.</li>
108
- <li>Opening chests that contain gold, cards, gems, and other items.</li>
109
- <li>Completing quests that grant you gold, gems, and chests.</li>
110
- <li>Taking part in the Season Pass, which grants exclusive rewards and perks.</li>
111
- <li>Joining a clan and donating or requesting cards.</li>
112
- </ul>
113
- <h4>How can I find friends and chat with other players in Clash Mini?</h4>
114
- <p>You can find friends and chat with other players in Clash Mini using the game's social features, such as:</p>
115
- <ul>
116
- <li>Adding friends using their player tags or QR codes.</li>
117
- <li>Joining or creating a clan and inviting your friends to join.</li>
118
- <li>Chatting with your clanmates or friends in the clan or friend chat.</li>
119
- <li>Sending or receiving friend requests, messages, or emojis.</li>
120
- <li>Playing with or against your friends or clanmates in Friendly mode.</li>
121
- </ul>
122
-
123
-
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/__init__.py DELETED
@@ -1,120 +0,0 @@
1
- """
2
- pip._vendor is for vendoring dependencies of pip to prevent needing pip to
3
- depend on something external.
4
-
5
- Files inside of pip._vendor should be considered immutable and should only be
6
- updated to versions from upstream.
7
- """
8
- from __future__ import absolute_import
9
-
10
- import glob
11
- import os.path
12
- import sys
13
-
14
- # Downstream redistributors which have debundled our dependencies should also
15
- # patch this value to be true. This will trigger the additional patching
16
- # to cause things like "six" to be available as pip._vendor.six.
17
- DEBUNDLED = False
18
-
19
- # By default, look in this directory for a bunch of .whl files which we will
20
- # add to the beginning of sys.path before attempting to import anything. This
21
- # is done to support downstream re-distributors like Debian and Fedora who
22
- # wish to create their own Wheels for our dependencies to aid in debundling.
23
- WHEEL_DIR = os.path.abspath(os.path.dirname(__file__))
24
-
25
-
26
- # Define a small helper function to alias our vendored modules to the real ones
27
- # if the vendored ones do not exist. The idea for this was taken from
28
- # https://github.com/kennethreitz/requests/pull/2567.
29
- def vendored(modulename):
30
- vendored_name = "{0}.{1}".format(__name__, modulename)
31
-
32
- try:
33
- __import__(modulename, globals(), locals(), level=0)
34
- except ImportError:
35
- # We can just silently allow import failures to pass here. If we
36
- # got to this point it means that ``import pip._vendor.whatever``
37
- # failed and so did ``import whatever``. Since we're importing this
38
- # upfront in an attempt to alias imports, not erroring here will
39
- # just mean we get a regular import error whenever pip *actually*
40
- # tries to import one of these modules to use it, which actually
41
- # gives us a better error message than we would have otherwise
42
- # gotten.
43
- pass
44
- else:
45
- sys.modules[vendored_name] = sys.modules[modulename]
46
- base, head = vendored_name.rsplit(".", 1)
47
- setattr(sys.modules[base], head, sys.modules[modulename])
48
-
49
-
50
- # If we're operating in a debundled setup, then we want to go ahead and trigger
51
- # the aliasing of our vendored libraries as well as looking for wheels to add
52
- # to our sys.path. This will cause all of this code to be a no-op typically
53
- # however downstream redistributors can enable it in a consistent way across
54
- # all platforms.
55
- if DEBUNDLED:
56
- # Actually look inside of WHEEL_DIR to find .whl files and add them to the
57
- # front of our sys.path.
58
- sys.path[:] = glob.glob(os.path.join(WHEEL_DIR, "*.whl")) + sys.path
59
-
60
- # Actually alias all of our vendored dependencies.
61
- vendored("cachecontrol")
62
- vendored("certifi")
63
- vendored("colorama")
64
- vendored("distlib")
65
- vendored("distro")
66
- vendored("six")
67
- vendored("six.moves")
68
- vendored("six.moves.urllib")
69
- vendored("six.moves.urllib.parse")
70
- vendored("packaging")
71
- vendored("packaging.version")
72
- vendored("packaging.specifiers")
73
- vendored("pep517")
74
- vendored("pkg_resources")
75
- vendored("platformdirs")
76
- vendored("progress")
77
- vendored("requests")
78
- vendored("requests.exceptions")
79
- vendored("requests.packages")
80
- vendored("requests.packages.urllib3")
81
- vendored("requests.packages.urllib3._collections")
82
- vendored("requests.packages.urllib3.connection")
83
- vendored("requests.packages.urllib3.connectionpool")
84
- vendored("requests.packages.urllib3.contrib")
85
- vendored("requests.packages.urllib3.contrib.ntlmpool")
86
- vendored("requests.packages.urllib3.contrib.pyopenssl")
87
- vendored("requests.packages.urllib3.exceptions")
88
- vendored("requests.packages.urllib3.fields")
89
- vendored("requests.packages.urllib3.filepost")
90
- vendored("requests.packages.urllib3.packages")
91
- vendored("requests.packages.urllib3.packages.ordered_dict")
92
- vendored("requests.packages.urllib3.packages.six")
93
- vendored("requests.packages.urllib3.packages.ssl_match_hostname")
94
- vendored("requests.packages.urllib3.packages.ssl_match_hostname."
95
- "_implementation")
96
- vendored("requests.packages.urllib3.poolmanager")
97
- vendored("requests.packages.urllib3.request")
98
- vendored("requests.packages.urllib3.response")
99
- vendored("requests.packages.urllib3.util")
100
- vendored("requests.packages.urllib3.util.connection")
101
- vendored("requests.packages.urllib3.util.request")
102
- vendored("requests.packages.urllib3.util.response")
103
- vendored("requests.packages.urllib3.util.retry")
104
- vendored("requests.packages.urllib3.util.ssl_")
105
- vendored("requests.packages.urllib3.util.timeout")
106
- vendored("requests.packages.urllib3.util.url")
107
- vendored("resolvelib")
108
- vendored("rich")
109
- vendored("rich.console")
110
- vendored("rich.highlighter")
111
- vendored("rich.logging")
112
- vendored("rich.markup")
113
- vendored("rich.progress")
114
- vendored("rich.segment")
115
- vendored("rich.style")
116
- vendored("rich.text")
117
- vendored("rich.traceback")
118
- vendored("tenacity")
119
- vendored("tomli")
120
- vendored("urllib3")
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_emoji_replace.py DELETED
@@ -1,32 +0,0 @@
1
- from typing import Callable, Match, Optional
2
- import re
3
-
4
- from ._emoji_codes import EMOJI
5
-
6
-
7
- _ReStringMatch = Match[str] # regex match object
8
- _ReSubCallable = Callable[[_ReStringMatch], str] # Callable invoked by re.sub
9
- _EmojiSubMethod = Callable[[_ReSubCallable, str], str] # Sub method of a compiled re
10
-
11
-
12
- def _emoji_replace(
13
- text: str,
14
- default_variant: Optional[str] = None,
15
- _emoji_sub: _EmojiSubMethod = re.compile(r"(:(\S*?)(?:(?:\-)(emoji|text))?:)").sub,
16
- ) -> str:
17
- """Replace emoji code in text."""
18
- get_emoji = EMOJI.__getitem__
19
- variants = {"text": "\uFE0E", "emoji": "\uFE0F"}
20
- get_variant = variants.get
21
- default_variant_code = variants.get(default_variant, "") if default_variant else ""
22
-
23
- def do_replace(match: Match[str]) -> str:
24
- emoji_code, emoji_name, variant = match.groups()
25
- try:
26
- return get_emoji(emoji_name.lower()) + get_variant(
27
- variant, default_variant_code
28
- )
29
- except KeyError:
30
- return emoji_code
31
-
32
- return _emoji_sub(do_replace, text)
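A quick usage sketch of the helper above (assuming `thumbs_up` is a key in rich's `EMOJI` table, as it is in current releases):

```
from pip._vendor.rich._emoji_replace import _emoji_replace

print(_emoji_replace("deploy :thumbs_up:"))       # emoji substituted in place
print(_emoji_replace("deploy :thumbs_up-text:"))  # forces the text variant
print(_emoji_replace(":no_such_emoji:"))          # unknown codes pass through
```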
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/tenacity/before_sleep.py DELETED
@@ -1,71 +0,0 @@
1
- # Copyright 2016 Julien Danjou
2
- # Copyright 2016 Joshua Harlow
3
- # Copyright 2013-2014 Ray Holder
4
- #
5
- # Licensed under the Apache License, Version 2.0 (the "License");
6
- # you may not use this file except in compliance with the License.
7
- # You may obtain a copy of the License at
8
- #
9
- # http://www.apache.org/licenses/LICENSE-2.0
10
- #
11
- # Unless required by applicable law or agreed to in writing, software
12
- # distributed under the License is distributed on an "AS IS" BASIS,
13
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
- # See the License for the specific language governing permissions and
15
- # limitations under the License.
16
-
17
- import typing
18
-
19
- from pip._vendor.tenacity import _utils
20
-
21
- if typing.TYPE_CHECKING:
22
- import logging
23
-
24
- from pip._vendor.tenacity import RetryCallState
25
-
26
-
27
- def before_sleep_nothing(retry_state: "RetryCallState") -> None:
28
- """Before call strategy that does nothing."""
29
-
30
-
31
- def before_sleep_log(
32
- logger: "logging.Logger",
33
- log_level: int,
34
- exc_info: bool = False,
35
- ) -> typing.Callable[["RetryCallState"], None]:
36
- """Before call strategy that logs to some logger the attempt."""
37
-
38
- def log_it(retry_state: "RetryCallState") -> None:
39
- local_exc_info: BaseException | bool | None
40
-
41
- if retry_state.outcome is None:
42
- raise RuntimeError("log_it() called before outcome was set")
43
-
44
- if retry_state.next_action is None:
45
- raise RuntimeError("log_it() called before next_action was set")
46
-
47
- if retry_state.outcome.failed:
48
- ex = retry_state.outcome.exception()
49
- verb, value = "raised", f"{ex.__class__.__name__}: {ex}"
50
-
51
- if exc_info:
52
- local_exc_info = retry_state.outcome.exception()
53
- else:
54
- local_exc_info = False
55
- else:
56
- verb, value = "returned", retry_state.outcome.result()
57
- local_exc_info = False # exc_info does not apply when no exception
58
-
59
- if retry_state.fn is None:
60
- # NOTE(sileht): can't really happen, but we must please mypy
61
- fn_name = "<unknown>"
62
- else:
63
- fn_name = _utils.get_callback_name(retry_state.fn)
64
-
65
- logger.log(
66
- log_level,
67
- f"Retrying {fn_name} " f"in {retry_state.next_action.sleep} seconds as it {verb} {value}.",
68
- exc_info=local_exc_info,
69
- )
70
-
71
- return log_it
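A minimal usage sketch wiring the hook above into a retry policy (the logger name and the policy itself are illustrative):

```
import logging
from pip._vendor.tenacity import retry, stop_after_attempt, before_sleep_log

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("retries")

@retry(stop=stop_after_attempt(3),
       before_sleep=before_sleep_log(logger, logging.INFO, exc_info=True))
def flaky():
    raise RuntimeError("boom")

flaky()  # logs before each of the two sleeps, then raises RetryError
```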
spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/errors.py DELETED
@@ -1,58 +0,0 @@
1
- """setuptools.errors
2
-
3
- Provides exceptions used by setuptools modules.
4
- """
5
-
6
- from distutils import errors as _distutils_errors
7
-
8
-
9
- # Re-export errors from distutils to facilitate the migration to PEP632
10
-
11
- ByteCompileError = _distutils_errors.DistutilsByteCompileError
12
- CCompilerError = _distutils_errors.CCompilerError
13
- ClassError = _distutils_errors.DistutilsClassError
14
- CompileError = _distutils_errors.CompileError
15
- ExecError = _distutils_errors.DistutilsExecError
16
- FileError = _distutils_errors.DistutilsFileError
17
- InternalError = _distutils_errors.DistutilsInternalError
18
- LibError = _distutils_errors.LibError
19
- LinkError = _distutils_errors.LinkError
20
- ModuleError = _distutils_errors.DistutilsModuleError
21
- OptionError = _distutils_errors.DistutilsOptionError
22
- PlatformError = _distutils_errors.DistutilsPlatformError
23
- PreprocessError = _distutils_errors.PreprocessError
24
- SetupError = _distutils_errors.DistutilsSetupError
25
- TemplateError = _distutils_errors.DistutilsTemplateError
26
- UnknownFileError = _distutils_errors.UnknownFileError
27
-
28
- # The root error class in the hierarchy
29
- BaseError = _distutils_errors.DistutilsError
30
-
31
-
32
- class RemovedCommandError(BaseError, RuntimeError):
33
- """Error used for commands that have been removed in setuptools.
34
-
35
- Since ``setuptools`` is built on ``distutils``, simply removing a command
36
- from ``setuptools`` will make the behavior fall back to ``distutils``; this
37
- error is raised if a command exists in ``distutils`` but has been actively
38
- removed in ``setuptools``.
39
- """
40
-
41
-
42
- class PackageDiscoveryError(BaseError, RuntimeError):
43
- """Impossible to perform automatic discovery of packages and/or modules.
44
-
45
- The current project layout or given discovery options can lead to problems when
46
- scanning the project directory.
47
-
48
- Setuptools might also refuse to complete auto-discovery if an error prone condition
49
- is detected (e.g. when a project is organised as a flat-layout but contains
50
- multiple directories that can be taken as top-level packages inside a single
51
- distribution [*]_). In these situations the users are encouraged to be explicit
52
- about which packages to include or to make the discovery parameters more specific.
53
-
54
- .. [*] Since multi-package distributions are uncommon it is very likely that the
55
- developers did not intend for all the directories to be packaged, and are just
56
- leaving auxiliary code in the repository top-level, such as maintenance-related
57
- scripts.
58
- """
spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/contrib/_appengine_environ.py DELETED
@@ -1,36 +0,0 @@
1
- """
2
- This module provides means to detect the App Engine environment.
3
- """
4
-
5
- import os
6
-
7
-
8
- def is_appengine():
9
- return is_local_appengine() or is_prod_appengine()
10
-
11
-
12
- def is_appengine_sandbox():
13
- """Reports if the app is running in the first generation sandbox.
14
-
15
- The second generation runtimes are technically still in a sandbox, but it
16
- is much less restrictive, so generally you shouldn't need to check for it.
17
- see https://cloud.google.com/appengine/docs/standard/runtimes
18
- """
19
- return is_appengine() and os.environ["APPENGINE_RUNTIME"] == "python27"
20
-
21
-
22
- def is_local_appengine():
23
- return "APPENGINE_RUNTIME" in os.environ and os.environ.get(
24
- "SERVER_SOFTWARE", ""
25
- ).startswith("Development/")
26
-
27
-
28
- def is_prod_appengine():
29
- return "APPENGINE_RUNTIME" in os.environ and os.environ.get(
30
- "SERVER_SOFTWARE", ""
31
- ).startswith("Google App Engine/")
32
-
33
-
34
- def is_prod_appengine_mvms():
35
- """Deprecated."""
36
- return False
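The helpers only read environment variables, which makes them easy to exercise; a small sketch simulating a first-generation dev server:

```
import os
from urllib3.contrib import _appengine_environ as env

# Variables the App Engine Python 2.7 dev server would set.
os.environ["APPENGINE_RUNTIME"] = "python27"
os.environ["SERVER_SOFTWARE"] = "Development/2.0"

assert env.is_local_appengine()
assert env.is_appengine_sandbox()   # first-generation sandbox
assert not env.is_prod_appengine()
```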
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/INSTALL.md DELETED
@@ -1,175 +0,0 @@
1
- ## Installation
2
-
3
- Our [Colab Notebook](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5)
4
- has step-by-step instructions that install detectron2.
5
- The [Dockerfile](https://github.com/facebookresearch/detectron2/blob/master/docker/Dockerfile)
6
- also installs detectron2 with a few simple commands.
7
-
8
- ### Requirements
9
- - Linux or macOS with Python ≥ 3.6
10
- - PyTorch ≥ 1.3
11
- - [torchvision](https://github.com/pytorch/vision/) that matches the PyTorch installation.
12
- You can install them together at [pytorch.org](https://pytorch.org) to make sure of this.
13
- - OpenCV, optional, needed by demo and visualization
14
- - pycocotools: `pip install cython; pip install -U 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'`
15
-
16
-
17
- ### Build Detectron2 from Source
18
-
19
- After having the above dependencies and gcc & g++ ≥ 5, run:
20
- ```
21
- python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
22
- # (add --user if you don't have permission)
23
-
24
- # Or, to install it from a local clone:
25
- git clone https://github.com/facebookresearch/detectron2.git
26
- cd detectron2 && python -m pip install -e .
27
-
28
- # Or if you are on macOS
29
- # CC=clang CXX=clang++ python -m pip install -e .
30
- ```
31
-
32
- To __rebuild__ detectron2 that's built from a local clone, use `rm -rf build/ **/*.so` to clean the
33
- old build first. You often need to rebuild detectron2 after reinstalling PyTorch.
34
-
35
- ### Install Pre-Built Detectron2
36
- ```
37
- # for CUDA 10.1:
38
- python -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/index.html
39
- ```
40
- You can replace cu101 with "cu{100,92}" or "cpu".
41
-
42
- Note that:
43
- 1. Such installation has to be used with the latest official PyTorch release (currently 1.4).
44
- It will not work with your custom build of PyTorch.
45
- 2. Such installation is out-of-date w.r.t. master branch of detectron2. It may not be
46
- compatible with the master branch of a research project that uses detectron2 (e.g. those in
47
- [projects](./projects) or [meshrcnn](https://github.com/facebookresearch/meshrcnn/)).
48
-
49
- ### Common Installation Issues
50
-
51
- If you met issues using the pre-built detectron2, please uninstall it and try building it from source.
52
-
53
- Click each issue for its solutions:
54
-
55
- <details>
56
- <summary>
57
- Undefined torch/aten/caffe2 symbols, or segmentation fault immediately when running the library.
58
- </summary>
59
- <br/>
60
-
61
- This can happen if detectron2 or torchvision is not
62
- compiled with the version of PyTorch you're running.
63
-
64
- If you use a pre-built torchvision, uninstall torchvision & pytorch, and reinstall them
65
- following [pytorch.org](http://pytorch.org).
66
- If you manually build detectron2 or torchvision, remove the files you built (`build/`, `**/*.so`)
67
- and rebuild them.
68
-
69
- If you cannot resolve the problem, please include the output of `gdb -ex "r" -ex "bt" -ex "quit" --args python -m detectron2.utils.collect_env`
70
- in your issue.
71
- </details>
72
-
73
- <details>
74
- <summary>
75
- Undefined C++ symbols (e.g. `GLIBCXX`) or C++ symbols not found.
76
- </summary>
77
- <br/>
78
- Usually it's because the library is compiled with a newer C++ compiler but run with an old C++ runtime.
79
-
80
- This often happens with old anaconda.
81
- Try `conda update libgcc`. Then rebuild detectron2.
82
-
83
- The fundamental solution is to run the code with sufficiently new C++ runtime
84
- using `LD_PRELOAD=/path/to/libstdc++.so`
85
-
86
- </details>
87
-
88
- <details>
89
- <summary>
90
- "Not compiled with GPU support" or "Detectron2 CUDA Compiler: not available".
91
- </summary>
92
- <br/>
93
- CUDA is not found when building detectron2.
94
- You should make sure
95
-
96
- ```
97
- python -c 'import torch; from torch.utils.cpp_extension import CUDA_HOME; print(torch.cuda.is_available(), CUDA_HOME)'
98
- ```
99
-
100
- print valid outputs at the time you build detectron2.
101
-
102
- Most models can run inference (but not training) without GPU support. To use CPUs, set `MODEL.DEVICE='cpu'` in the config.
103
- </details>
104
-
105
- <details>
106
- <summary>
107
- "invalid device function" or "no kernel image is available for execution".
108
- </summary>
109
- <br/>
110
- Two possibilities:
111
-
112
- * You build detectron2 with one version of CUDA but run it with a different version.
113
-
114
- To check whether it is the case,
115
- use `python -m detectron2.utils.collect_env` to find out inconsistent CUDA versions.
116
- In the output of this command, you should expect "Detectron2 CUDA Compiler", "CUDA_HOME", "PyTorch built with - CUDA"
117
- to contain cuda libraries of the same version.
118
-
119
- When they are inconsistent,
120
- you need to either install a different build of PyTorch (or build by yourself)
121
- to match your local CUDA installation, or install a different version of CUDA to match PyTorch.
122
-
123
- * Detectron2 or PyTorch/torchvision is not built for the correct GPU architecture (compute compatibility).
124
-
125
- The GPU architecture for PyTorch/detectron2/torchvision is available in the "architecture flags" in
126
- `python -m detectron2.utils.collect_env`.
127
-
128
- The GPU architecture flags of detectron2/torchvision by default matches the GPU model detected
129
- during building. This means the compiled code may not work on a different GPU model.
130
- To overwrite the GPU architecture for detectron2/torchvision, use `TORCH_CUDA_ARCH_LIST` environment variable during building.
131
-
132
- For example, `export TORCH_CUDA_ARCH_LIST="6.0;7.0"` makes it work for both P100s and V100s.
133
- Visit [developer.nvidia.com/cuda-gpus](https://developer.nvidia.com/cuda-gpus) to find out
134
- the correct compute compatibility number for your device.
135
-
136
- </details>
137
-
138
- <details>
139
- <summary>
140
- Undefined CUDA symbols or cannot open libcudart.so.
141
- </summary>
142
- <br/>
143
- The version of NVCC you use to build detectron2 or torchvision does
144
- not match the version of CUDA you are running with.
145
- This often happens when using anaconda's CUDA runtime.
146
-
147
- Use `python -m detectron2.utils.collect_env` to find out inconsistent CUDA versions.
148
- In the output of this command, you should expect "Detectron2 CUDA Compiler", "CUDA_HOME", "PyTorch built with - CUDA"
149
- to contain cuda libraries of the same version.
150
-
151
- When they are inconsistent,
152
- you need to either install a different build of PyTorch (or build by yourself)
153
- to match your local CUDA installation, or install a different version of CUDA to match PyTorch.
154
- </details>
155
-
156
-
157
- <details>
158
- <summary>
159
- "ImportError: cannot import name '_C'".
160
- </summary>
161
- <br/>
162
- Please build and install detectron2 following the instructions above.
163
-
164
- If you are running code from detectron2's root directory, `cd` to a different one.
165
- Otherwise you may not import the code that you installed.
166
- </details>
167
-
168
- <details>
169
- <summary>
170
- ONNX conversion segfault after some "TraceWarning".
171
- </summary>
172
- <br/>
173
- Build and install ONNX from its source code using a compiler
174
- whose version is closer to what's used by PyTorch (available in `torch.__config__.show()`).
175
- </details>
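Putting the rebuild and architecture-flag advice above together, a clean rebuild for specific GPU generations might look like this (the architecture list is an example; look up yours at developer.nvidia.com/cuda-gpus):

```
cd detectron2
rm -rf build/ **/*.so                         # clear the stale build first
TORCH_CUDA_ARCH_LIST="6.0;7.0" python -m pip install -e .
```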
spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/utils/optim.py DELETED
@@ -1,73 +0,0 @@
1
- # --------------------------------------------------------
2
- # OpenVQA
3
- # Written by Yuhao Cui https://github.com/cuiyuhao1996
4
- # --------------------------------------------------------
5
-
6
- import torch.optim as Optim
7
-
8
-
9
- class WarmupOptimizer(object):
10
- def __init__(self, lr_base, optimizer, data_size, batch_size, warmup_epoch):
11
- self.optimizer = optimizer
12
- self._step = 0
13
- self.lr_base = lr_base
14
- self._rate = 0
15
- self.data_size = data_size
16
- self.batch_size = batch_size
17
- self.warmup_epoch = warmup_epoch
18
-
19
-
20
- def step(self):
21
- self._step += 1
22
-
23
- rate = self.rate()
24
- for p in self.optimizer.param_groups:
25
- p['lr'] = rate
26
- self._rate = rate
27
-
28
- self.optimizer.step()
29
-
30
-
31
- def zero_grad(self):
32
- self.optimizer.zero_grad()
33
-
34
-
35
- def rate(self, step=None):
36
- if step is None:
37
- step = self._step
38
-
39
- if step <= int(self.data_size / self.batch_size * (self.warmup_epoch + 1) * 0.25):
40
- r = self.lr_base * 1/(self.warmup_epoch + 1)
41
- elif step <= int(self.data_size / self.batch_size * (self.warmup_epoch + 1) * 0.5):
42
- r = self.lr_base * 2/(self.warmup_epoch + 1)
43
- elif step <= int(self.data_size / self.batch_size * (self.warmup_epoch + 1) * 0.75):
44
- r = self.lr_base * 3/(self.warmup_epoch + 1)
45
- else:
46
- r = self.lr_base
47
-
48
- return r
49
-
50
-
51
- def get_optim(__C, model, data_size, lr_base=None):
52
- if lr_base is None:
53
- lr_base = __C.LR_BASE
54
-
55
- std_optim = getattr(Optim, __C.OPT)
56
- params = filter(lambda p: p.requires_grad, model.parameters())
57
- eval_str = 'params, lr=0'
58
- for key in __C.OPT_PARAMS:
59
- eval_str += ' ,' + key + '=' + str(__C.OPT_PARAMS[key])
60
-
61
- optim = WarmupOptimizer(
62
- lr_base,
63
- eval('std_optim' + '(' + eval_str + ')'),
64
- data_size,
65
- __C.BATCH_SIZE,
66
- __C.WARMUP_EPOCH
67
- )
68
-
69
- return optim
70
-
71
-
72
- def adjust_lr(optim, decay_r):
73
- optim.lr_base *= decay_r
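A sketch of the warm-up schedule itself, driving `WarmupOptimizer.rate()` directly with a dummy model (numbers are illustrative; with 1000 steps per epoch and `warmup_epoch=3`, the rate climbs in quarters of `lr_base` every 1000 steps):

```
import torch.nn as nn
import torch.optim as Optim

model = nn.Linear(4, 2)
inner = Optim.Adam(model.parameters(), lr=0)
opt = WarmupOptimizer(lr_base=1e-4, optimizer=inner,
                      data_size=64000, batch_size=64, warmup_epoch=3)

for step in (500, 1500, 2500, 3500):
    print(step, opt.rate(step))   # 2.5e-05, 5e-05, 7.5e-05, 1e-04
```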
spaces/CVPR/LIVE/atomic.cpp DELETED
@@ -1,27 +0,0 @@
1
- //A hacky solution to get around the Ellipse include
2
-
3
- #ifdef WIN32
4
- #include <windows.h>
5
- #include <cstdint>
6
-
7
- float win_atomic_add(float &target, float source) {
8
- union { int i; float f; } old_val;
9
- union { int i; float f; } new_val;
10
- do {
11
- old_val.f = target;
12
- new_val.f = old_val.f + (float)source;
13
- } while (InterlockedCompareExchange((LONG*)&target, (LONG)new_val.i, (LONG)old_val.i) != old_val.i);
14
- return old_val.f;
15
- }
16
-
17
- double win_atomic_add(double &target, double source) {
18
- union { int64_t i; double f; } old_val;
19
- union { int64_t i; double f; } new_val;
20
- do {
21
- old_val.f = target;
22
- new_val.f = old_val.f + (double)source;
23
- } while (InterlockedCompareExchange64((LONG64*)&target, (LONG64)new_val.i, (LONG64)old_val.i) != old_val.i);
24
- return old_val.f;
25
- }
26
-
27
- #endif
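For comparison, the same compare-and-swap retry loop written portably with `std::atomic` (an illustration of the pattern, not part of the LIVE sources; since C++20, `std::atomic<float>::fetch_add` does this in one call):

```
#include <atomic>

// Retry until no other thread modified `target` between our read and write;
// compare_exchange_weak reloads the observed value into old_val on failure.
float atomic_add(std::atomic<float>& target, float source) {
    float old_val = target.load();
    while (!target.compare_exchange_weak(old_val, old_val + source)) {
        // old_val now holds the freshly observed value; just retry.
    }
    return old_val;
}
```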
spaces/CVPR/LIVE/thrust/dependencies/cub/experimental/spmv_script.sh DELETED
@@ -1,30 +0,0 @@
1
- #!/bin/bash
2
-
3
- for i in 1 2 4 8 16 32 64 128 256 512 1024 2048 4096 8192 16384 32768 65536 131072 262144 524288 1048576 2097152 4194304 8388608 16777216
4
- do
5
- echo `date`, `$1 --dense=$i $2 $3 $4 $5 $6 $7`
6
- done
7
-
8
- echo
9
- echo
10
-
11
- for i in `ls /home/dumerrill/graphs/spmv/*.mtx`
12
- do
13
- if [[ ( "`head -n 50 $i | grep complex`" = "" ) && ( "`head -n 50 $i | grep array`" = "" ) ]]
14
- then
15
- echo `date`, `$1 --mtx=$i $2 $3 $4 $5 $6 $7 2>/dev/null`
16
- fi
17
- done
18
-
19
- echo
20
- echo
21
-
22
- for i in `ls /scratch/dumerrill/graphs/mtx/*.mtx`
23
- #for i in `ls /cygdrive/w/Dev/UFget/mtx/*.mtx`
24
- do
25
- if [[ ( "`head -n 50 $i | grep complex`" = "" ) && ( "`head -n 50 $i | grep array`" = "" ) ]]
26
- then
27
- echo `date`, `$1 --mtx=$i $2 $3 $4 $5 $6 $7 2>/dev/null`
28
- fi
29
- done
30
-
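For context, the script expects the benchmark binary as `$1` and forwards up to six extra arguments to every run; a typical invocation might look like this (the binary path and flag are illustrative):

```
./spmv_script.sh ./bin/cub_spmv --fp64 > timings.csv 2>&1
```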
spaces/CVPR/LIVE/thrust/thrust/addressof.h DELETED
@@ -1,33 +0,0 @@
1
- // Copyright (c) 2018 NVIDIA Corporation
2
- // Author: Bryce Adelstein Lelbach <[email protected]>
3
- //
4
- // Distributed under the Boost Software License v1.0 (boost.org/LICENSE_1_0.txt)
5
-
6
- #pragma once
7
-
8
- #include <thrust/detail/config.h>
9
-
10
- #if THRUST_CPP_DIALECT >= 2011
11
- # include <thrust/detail/memory_wrapper.h>
12
- #endif
13
-
14
- namespace thrust
15
- {
16
-
17
- ///////////////////////////////////////////////////////////////////////////////
18
-
19
- /*! Obtains the actual address of the object or function arg, even in presence of overloaded operator&.
20
- */
21
- template <typename T>
22
- __host__ __device__
23
- T* addressof(T& arg)
24
- {
25
- return reinterpret_cast<T*>(
26
- &const_cast<char&>(reinterpret_cast<const volatile char&>(arg))
27
- );
28
- }
29
-
30
- ///////////////////////////////////////////////////////////////////////////////
31
-
32
- } // end namespace thrust
33
-
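A small sketch of why `addressof` needs the cast chain: a type that overloads `operator&` defeats the plain address-of operator (the `Evil` type is illustrative):

```
#include <cstdio>

struct Evil {
    int x;
    int operator&() const { return 42; }  // hostile overload
};

int main() {
    Evil e{7};
    // Same trick as thrust::addressof: go through char& so no user
    // operator& can intervene.
    Evil* p = reinterpret_cast<Evil*>(
        &const_cast<char&>(reinterpret_cast<const volatile char&>(e)));
    std::printf("%d\n", p->x);  // prints 7
    return 0;
}
```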
spaces/CVPR/Text2Human/Text2Human/data/pose_attr_dataset.py DELETED
@@ -1,109 +0,0 @@
1
- import os
2
- import os.path
3
- import random
4
-
5
- import numpy as np
6
- import torch
7
- import torch.utils.data as data
8
- from PIL import Image
9
-
10
-
11
- class DeepFashionAttrPoseDataset(data.Dataset):
12
-
13
- def __init__(self,
14
- pose_dir,
15
- texture_ann_dir,
16
- shape_ann_path,
17
- downsample_factor=2,
18
- xflip=False):
19
- self._densepose_path = pose_dir
20
- self._image_fnames_target = []
21
- self._image_fnames = []
22
- self.upper_fused_attrs = []
23
- self.lower_fused_attrs = []
24
- self.outer_fused_attrs = []
25
- self.shape_attrs = []
26
-
27
- self.downsample_factor = downsample_factor
28
- self.xflip = xflip
29
-
30
- # load attributes
31
- assert os.path.exists(f'{texture_ann_dir}/upper_fused.txt')
32
- for idx, row in enumerate(
33
- open(os.path.join(f'{texture_ann_dir}/upper_fused.txt'), 'r')):
34
- annotations = row.split()
35
- self._image_fnames_target.append(annotations[0])
36
- self._image_fnames.append(f'{annotations[0].split(".")[0]}.png')
37
- self.upper_fused_attrs.append(int(annotations[1]))
38
-
39
- assert len(self._image_fnames_target) == len(self.upper_fused_attrs)
40
-
41
- assert os.path.exists(f'{texture_ann_dir}/lower_fused.txt')
42
- for idx, row in enumerate(
43
- open(os.path.join(f'{texture_ann_dir}/lower_fused.txt'), 'r')):
44
- annotations = row.split()
45
- assert self._image_fnames_target[idx] == annotations[0]
46
- self.lower_fused_attrs.append(int(annotations[1]))
47
-
48
- assert len(self._image_fnames_target) == len(self.lower_fused_attrs)
49
-
50
- assert os.path.exists(f'{texture_ann_dir}/outer_fused.txt')
51
- for idx, row in enumerate(
52
- open(os.path.join(f'{texture_ann_dir}/outer_fused.txt'), 'r')):
53
- annotations = row.split()
54
- assert self._image_fnames_target[idx] == annotations[0]
55
- self.outer_fused_attrs.append(int(annotations[1]))
56
-
57
- assert len(self._image_fnames_target) == len(self.outer_fused_attrs)
58
-
59
- assert os.path.exists(shape_ann_path)
60
- for idx, row in enumerate(open(os.path.join(shape_ann_path), 'r')):
61
- annotations = row.split()
62
- assert self._image_fnames_target[idx] == annotations[0]
63
- self.shape_attrs.append([int(i) for i in annotations[1:]])
64
-
65
- def _open_file(self, path_prefix, fname):
66
- return open(os.path.join(path_prefix, fname), 'rb')
67
-
68
- def _load_densepose(self, raw_idx):
69
- fname = self._image_fnames[raw_idx]
70
- fname = f'{fname[:-4]}_densepose.png'
71
- with self._open_file(self._densepose_path, fname) as f:
72
- densepose = Image.open(f)
73
- if self.downsample_factor != 1:
74
- width, height = densepose.size
75
- width = width // self.downsample_factor
76
- height = height // self.downsample_factor
77
- densepose = densepose.resize(
78
- size=(width, height), resample=Image.NEAREST)
79
- # DensePose IUV image: keep only the part-index channel -> [1, H, W]
80
- densepose = np.array(densepose)[:, :, 2:].transpose(2, 0, 1)
81
- return densepose.astype(np.float32)
82
-
83
- def __getitem__(self, index):
84
- pose = self._load_densepose(index)
85
- shape_attr = self.shape_attrs[index]
86
- shape_attr = torch.LongTensor(shape_attr)
87
-
88
- if self.xflip and random.random() > 0.5:
89
- pose = pose[:, :, ::-1].copy()
90
-
91
- upper_fused_attr = self.upper_fused_attrs[index]
92
- lower_fused_attr = self.lower_fused_attrs[index]
93
- outer_fused_attr = self.outer_fused_attrs[index]
94
-
95
- pose = pose / 12. - 1
96
-
97
- return_dict = {
98
- 'densepose': pose,
99
- 'img_name': self._image_fnames_target[index],
100
- 'shape_attr': shape_attr,
101
- 'upper_fused_attr': upper_fused_attr,
102
- 'lower_fused_attr': lower_fused_attr,
103
- 'outer_fused_attr': outer_fused_attr,
104
- }
105
-
106
- return return_dict
107
-
108
- def __len__(self):
109
- return len(self._image_fnames)
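A sketch of constructing the dataset and reading one batch (the paths are placeholders; the annotation files must follow the `<image_name> <label...>` layout parsed in `__init__`):

```
from torch.utils.data import DataLoader

dataset = DeepFashionAttrPoseDataset(
    pose_dir='data/densepose',                       # hypothetical paths
    texture_ann_dir='data/texture_ann/train',
    shape_ann_path='data/shape_ann/shape_anno_all.txt',
    xflip=True)
loader = DataLoader(dataset, batch_size=4, shuffle=True)

batch = next(iter(loader))
print(batch['densepose'].shape)   # [4, 1, H, W], values scaled to [-1, 1]
print(batch['shape_attr'].shape)  # [4, num_shape_attrs]
```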
spaces/CVPR/WALT/mmdet/apis/test.py DELETED
@@ -1,189 +0,0 @@
- import os.path as osp
- import pickle
- import shutil
- import tempfile
- import time
-
- import mmcv
- import torch
- import torch.distributed as dist
- from mmcv.image import tensor2imgs
- from mmcv.runner import get_dist_info
-
- from mmdet.core import encode_mask_results
-
-
- def single_gpu_test(model,
-                     data_loader,
-                     show=False,
-                     out_dir=None,
-                     show_score_thr=0.3):
-     model.eval()
-     results = []
-     dataset = data_loader.dataset
-     prog_bar = mmcv.ProgressBar(len(dataset))
-     for i, data in enumerate(data_loader):
-         with torch.no_grad():
-             result = model(return_loss=False, rescale=True, **data)
-
-         batch_size = len(result)
-         if show or out_dir:
-             if batch_size == 1 and isinstance(data['img'][0], torch.Tensor):
-                 img_tensor = data['img'][0]
-             else:
-                 img_tensor = data['img'][0].data[0]
-             img_metas = data['img_metas'][0].data[0]
-             imgs = tensor2imgs(img_tensor, **img_metas[0]['img_norm_cfg'])
-             assert len(imgs) == len(img_metas)
-
-             for i, (img, img_meta) in enumerate(zip(imgs, img_metas)):
-                 h, w, _ = img_meta['img_shape']
-                 img_show = img[:h, :w, :]
-
-                 ori_h, ori_w = img_meta['ori_shape'][:-1]
-                 img_show = mmcv.imresize(img_show, (ori_w, ori_h))
-
-                 if out_dir:
-                     out_file = osp.join(out_dir, img_meta['ori_filename'])
-                 else:
-                     out_file = None
-
-                 model.module.show_result(
-                     img_show,
-                     result[i],
-                     show=show,
-                     out_file=out_file,
-                     score_thr=show_score_thr)
-
-         # encode mask results
-         if isinstance(result[0], tuple):
-             result = [(bbox_results, encode_mask_results(mask_results))
-                       for bbox_results, mask_results in result]
-         results.extend(result)
-
-         for _ in range(batch_size):
-             prog_bar.update()
-     return results
-
-
- def multi_gpu_test(model, data_loader, tmpdir=None, gpu_collect=False):
-     """Test model with multiple gpus.
-
-     This method tests model with multiple gpus and collects the results
-     under two different modes: gpu and cpu modes. By setting
-     'gpu_collect=True', it encodes results to gpu tensors and uses gpu
-     communication for results collection. In cpu mode it saves the results
-     on different gpus to 'tmpdir' and collects them by the rank 0 worker.
-
-     Args:
-         model (nn.Module): Model to be tested.
-         data_loader (nn.Dataloader): Pytorch data loader.
-         tmpdir (str): Path of directory to save the temporary results from
-             different gpus under cpu mode.
-         gpu_collect (bool): Option to use either gpu or cpu to collect
-             results.
-
-     Returns:
-         list: The prediction results.
-     """
-     model.eval()
-     results = []
-     dataset = data_loader.dataset
-     rank, world_size = get_dist_info()
-     if rank == 0:
-         prog_bar = mmcv.ProgressBar(len(dataset))
-     time.sleep(2)  # This line can prevent deadlock problem in some cases.
-     for i, data in enumerate(data_loader):
-         with torch.no_grad():
-             result = model(return_loss=False, rescale=True, **data)
-             # encode mask results
-             if isinstance(result[0], tuple):
-                 result = [(bbox_results, encode_mask_results(mask_results))
-                           for bbox_results, mask_results in result]
-         results.extend(result)
-
-         if rank == 0:
-             batch_size = len(result)
-             for _ in range(batch_size * world_size):
-                 prog_bar.update()
-
-     # collect results from all ranks
-     if gpu_collect:
-         results = collect_results_gpu(results, len(dataset))
-     else:
-         results = collect_results_cpu(results, len(dataset), tmpdir)
-     return results
-
-
- def collect_results_cpu(result_part, size, tmpdir=None):
-     rank, world_size = get_dist_info()
-     # create a tmp dir if it is not specified
-     if tmpdir is None:
-         MAX_LEN = 512
-         # 32 is whitespace
-         dir_tensor = torch.full((MAX_LEN, ),
-                                 32,
-                                 dtype=torch.uint8,
-                                 device='cuda')
-         if rank == 0:
-             mmcv.mkdir_or_exist('.dist_test')
-             tmpdir = tempfile.mkdtemp(dir='.dist_test')
-             tmpdir = torch.tensor(
-                 bytearray(tmpdir.encode()), dtype=torch.uint8, device='cuda')
-             dir_tensor[:len(tmpdir)] = tmpdir
-         dist.broadcast(dir_tensor, 0)
-         tmpdir = dir_tensor.cpu().numpy().tobytes().decode().rstrip()
-     else:
-         mmcv.mkdir_or_exist(tmpdir)
-     # dump the part result to the dir
-     mmcv.dump(result_part, osp.join(tmpdir, f'part_{rank}.pkl'))
-     dist.barrier()
-     # collect all parts
-     if rank != 0:
-         return None
-     else:
-         # load results of all parts from tmp dir
-         part_list = []
-         for i in range(world_size):
-             part_file = osp.join(tmpdir, f'part_{i}.pkl')
-             part_list.append(mmcv.load(part_file))
-         # sort the results
-         ordered_results = []
-         for res in zip(*part_list):
-             ordered_results.extend(list(res))
-         # the dataloader may pad some samples
-         ordered_results = ordered_results[:size]
-         # remove tmp dir
-         shutil.rmtree(tmpdir)
-         return ordered_results
-
-
- def collect_results_gpu(result_part, size):
-     rank, world_size = get_dist_info()
-     # dump result part to tensor with pickle
-     part_tensor = torch.tensor(
-         bytearray(pickle.dumps(result_part)), dtype=torch.uint8, device='cuda')
-     # gather all result part tensor shape
-     shape_tensor = torch.tensor(part_tensor.shape, device='cuda')
-     shape_list = [shape_tensor.clone() for _ in range(world_size)]
-     dist.all_gather(shape_list, shape_tensor)
-     # padding result part tensor to max length
-     shape_max = torch.tensor(shape_list).max()
-     part_send = torch.zeros(shape_max, dtype=torch.uint8, device='cuda')
-     part_send[:shape_tensor[0]] = part_tensor
-     part_recv_list = [
-         part_tensor.new_zeros(shape_max) for _ in range(world_size)
-     ]
-     # gather all result part
-     dist.all_gather(part_recv_list, part_send)
-
-     if rank == 0:
-         part_list = []
-         for recv, shape in zip(part_recv_list, shape_list):
-             part_list.append(
-                 pickle.loads(recv[:shape[0]].cpu().numpy().tobytes()))
-         # sort the results
-         ordered_results = []
-         for res in zip(*part_list):
-             ordered_results.extend(list(res))
-         # the dataloader may pad some samples
-         ordered_results = ordered_results[:size]
-         return ordered_results
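The tmpdir handshake in `collect_results_cpu` is worth calling out: rank 0 creates the temporary directory, packs its name into a fixed-length uint8 tensor padded with 32 (ASCII space), and broadcasts that tensor so every rank can decode the same path with `rstrip()`. A minimal sketch of just that encode/decode round-trip, on CPU for readability (the original allocates the buffer on CUDA and sends it through `dist.broadcast`):

```python
import torch

MAX_LEN = 512  # fixed buffer size agreed on by all ranks

def encode_dirname(tmpdir: str) -> torch.Tensor:
    # Pad with 32 (ASCII space) so rstrip() recovers the name after decode.
    buf = torch.full((MAX_LEN,), 32, dtype=torch.uint8)
    raw = torch.tensor(bytearray(tmpdir.encode()), dtype=torch.uint8)
    buf[:len(raw)] = raw
    return buf

def decode_dirname(buf: torch.Tensor) -> str:
    return buf.numpy().tobytes().decode().rstrip()

# Round trip; in the real code, dist.broadcast(buf, 0) sits between these.
assert decode_dirname(encode_dirname(".dist_test/tmpabc123")) == ".dist_test/tmpabc123"
```

Space-padding is unambiguous here because `tempfile.mkdtemp` never returns a name with trailing whitespace.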
spaces/CVPR/ml-talking-face/docs/description.md DELETED
@@ -1,33 +0,0 @@
- This system generates a talking face video based on the input text.
- You can provide the input text in one of four languages: Chinese (Mandarin), English, Japanese, and Korean.
- You may also select the target language, i.e. the language of the output speech.
- If the input text language and the target language are different, the input text will be translated to the target language using the Google Translate API.
-
- ### Updates
-
- (2023.10.20.) It has been a year since the demonstration was suddenly shut down by MINDsLab (now MAUM.AI).
- And today, I'm happy to share that ⭐I have restored the demonstration⭐ on my own lambdalabs instance!
- Over the past year, there have been numerous advancements in Gen AI, including multilingual TTS and talking face generation.
- This demo may look "old-fashioned" by now 😅... but I hope it can help other researchers on the same journey in this field.
-
- ⚠️ By the way, I'm running an A10G instance from lambdalabs at my own expense... I'm sorry, but I don't know when it will shut down again. 😵‍💫 I'll keep you posted on the status.
-
- <center><a href="https://www.buymeacoffee.com/deepkyu" target="_blank"><img src="https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png" alt="Buy Me A Coffee" style="height: 35px !important;width: 160px !important;" ></a></center>
-
- (2022.06.17.) Thank you for visiting our demo! 😊 This demo attracted a lot more attention than we anticipated. This, unfortunately, means that the computational burden is heavier than this demo was designed for. So, to maximize everyone's experience, we capped the length of the translated texts at:
-
- - 200 characters for English
- - 100 characters for Chinese, Japanese, and Korean.
-
- (2022.06.17.) We were originally planning to support any input text. However, when checking the logs recently, we found a lot of inappropriate input texts. So, we decided to filter the inputs based on toxicity using the [Perspective API @Google](https://developers.perspectiveapi.com/s/). Now, if you enter a possibly toxic text, the video generation will fail. We hope you understand.
-
- (2022.06.05.) Due to the latency from HuggingFace Spaces and video rendering, it takes 15 ~ 30 seconds to get a video result.
-
- <details>
- <summary><i>Outdated updates</i></summary>
-
- (2022.09.29.) ~~The core part of the demonstration has been running on MINDsLab's AWS instance, and I found that I can't connect to the instance now. I want to fix this issue, but I'm sorry to say that I left the company last week. I've contacted the company, but it takes some time to restore the session. If you're in a hurry, please send an e-mail directly to MINDsLab ([email protected]).
- Whatever the reason, I'm sorry again. Hope you understand.~~
-
- </details>
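Since the update above mentions toxicity filtering, here is roughly what a screen against the Perspective API looks like; a minimal sketch, assuming the `requests` package and a valid API key (the `API_KEY` placeholder and the 0.8 threshold are illustrative, not the demo's actual configuration):

```python
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # hypothetical placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def is_toxic(text: str, threshold: float = 0.8) -> bool:
    # Request a TOXICITY score in [0, 1] for the given text.
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=payload, timeout=10)
    resp.raise_for_status()
    score = resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    return score >= threshold
```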
spaces/Caoyunkang/Segment-Any-Anomaly/SAM/README.md DELETED
@@ -1,107 +0,0 @@
- # Segment Anything
-
- **[Meta AI Research, FAIR](https://ai.facebook.com/research/)**
-
- [Alexander Kirillov](https://alexander-kirillov.github.io/), [Eric Mintun](https://ericmintun.github.io/), [Nikhila Ravi](https://nikhilaravi.com/), [Hanzi Mao](https://hanzimao.me/), Chloe Rolland, Laura Gustafson, [Tete Xiao](https://tetexiao.com), [Spencer Whitehead](https://www.spencerwhitehead.com/), Alex Berg, Wan-Yen Lo, [Piotr Dollar](https://pdollar.github.io/), [Ross Girshick](https://www.rossgirshick.info/)
-
- [[`Paper`](https://ai.facebook.com/research/publications/segment-anything/)] [[`Project`](https://segment-anything.com/)] [[`Demo`](https://segment-anything.com/demo)] [[`Dataset`](https://segment-anything.com/dataset/index.html)] [[`Blog`](https://ai.facebook.com/blog/segment-anything-foundation-model-image-segmentation/)]
-
- ![SAM design](assets/model_diagram.png?raw=true)
-
- The **Segment Anything Model (SAM)** produces high quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. It has been trained on a [dataset](https://segment-anything.com/dataset/index.html) of 11 million images and 1.1 billion masks, and has strong zero-shot performance on a variety of segmentation tasks.
-
- <p float="left">
-   <img src="assets/masks1.png?raw=true" width="37.25%" />
-   <img src="assets/masks2.jpg?raw=true" width="61.5%" />
- </p>
-
- ## Installation
-
- The code requires `python>=3.8`, as well as `pytorch>=1.7` and `torchvision>=0.8`. Please follow the instructions [here](https://pytorch.org/get-started/locally/) to install both PyTorch and TorchVision dependencies. Installing both PyTorch and TorchVision with CUDA support is strongly recommended.
-
- Install Segment Anything:
-
- ```
- pip install git+https://github.com/facebookresearch/segment-anything.git
- ```
-
- or clone the repository locally and install with
-
- ```
- git clone git@github.com:facebookresearch/segment-anything.git
- cd segment-anything; pip install -e .
- ```
-
- The following optional dependencies are necessary for mask post-processing, saving masks in COCO format, the example notebooks, and exporting the model in ONNX format. `jupyter` is also required to run the example notebooks.
-
- ```
- pip install opencv-python pycocotools matplotlib onnxruntime onnx
- ```
-
- ## <a name="GettingStarted"></a>Getting Started
-
- First download a [model checkpoint](#model-checkpoints). Then the model can be used in just a few lines to get masks from a given prompt:
-
- ```
- from segment_anything import build_sam, SamPredictor
- predictor = SamPredictor(build_sam(checkpoint="</path/to/model.pth>"))
- predictor.set_image(<your_image>)
- masks, _, _ = predictor.predict(<input_prompts>)
- ```
-
- or generate masks for an entire image:
-
- ```
- from segment_anything import build_sam, SamAutomaticMaskGenerator
- mask_generator = SamAutomaticMaskGenerator(build_sam(checkpoint="</path/to/model.pth>"))
- masks = mask_generator.generate(<your_image>)
- ```
-
- Additionally, masks can be generated for images from the command line:
-
- ```
- python scripts/amg.py --checkpoint <path/to/sam/checkpoint> --input <image_or_folder> --output <output_directory>
- ```
-
- See the example notebooks on [using SAM with prompts](/notebooks/predictor_example.ipynb) and [automatically generating masks](/notebooks/automatic_mask_generator_example.ipynb) for more details.
-
- <p float="left">
-   <img src="assets/notebook1.png?raw=true" width="49.1%" />
-   <img src="assets/notebook2.png?raw=true" width="48.9%" />
- </p>
-
- ## ONNX Export
-
- SAM's lightweight mask decoder can be exported to ONNX format so that it can be run in any environment that supports ONNX runtime, such as in-browser as showcased in the [demo](https://segment-anything.com/demo). Export the model with
-
- ```
- python scripts/export_onnx_model.py --checkpoint <path/to/checkpoint> --output <path/to/output>
- ```
-
- See the [example notebook](https://github.com/facebookresearch/segment-anything/blob/main/notebooks/onnx_model_example.ipynb) for details on how to combine image preprocessing via SAM's backbone with mask prediction using the ONNX model. It is recommended to use the latest stable version of PyTorch for ONNX export.
-
- ## <a name="Models"></a>Model Checkpoints
-
- Three versions of the model are available, with different backbone sizes. These models can be instantiated by running
-
- ```
- from segment_anything import sam_model_registry
- sam = sam_model_registry["<name>"](checkpoint="<path/to/checkpoint>")
- ```
-
- Click the links below to download the checkpoint for the corresponding model name. The default model in bold can also be instantiated with `build_sam`, as in the examples in [Getting Started](#getting-started).
-
- * **`default` or `vit_h`: [ViT-H SAM model.](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth)**
- * `vit_l`: [ViT-L SAM model.](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_l_0b3195.pth)
- * `vit_b`: [ViT-B SAM model.](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth)
-
- ## License
-
- The model is licensed under the [Apache 2.0 license](LICENSE).
-
- ## Contributing
-
- See [contributing](CONTRIBUTING.md) and the [code of conduct](CODE_OF_CONDUCT.md).
-
- ## Contributors
-
- The Segment Anything project was made possible with the help of many contributors (alphabetical):
-
- Aaron Adcock, Vaibhav Aggarwal, Morteza Behrooz, Cheng-Yang Fu, Ashley Gabriel, Ahuva Goldstand, Allen Goodman, Sumanth Gurram, Jiabo Hu, Somya Jain, Devansh Kukreja, Robert Kuo, Joshua Lane, Yanghao Li, Lilian Luong, Jitendra Malik, Mallika Malhotra, William Ngan, Omkar Parkhi, Nikhil Raina, Dirk Rowe, Neil Sejoor, Vanessa Stark, Bala Varadarajan, Bram Wasti, Zachary Winstrom
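For a concrete end-to-end run of the point-prompt path that the README describes, here is a minimal sketch using the registry API from the Model Checkpoints section, assuming the ViT-H checkpoint sits next to the script and `example.jpg` is a hypothetical test image (file names and the click coordinate are illustrative):

```python
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load the default ViT-H model and move it to GPU.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
sam.to("cuda")
predictor = SamPredictor(sam)

# SamPredictor expects an RGB HxWx3 uint8 array.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One foreground click (label 1); multimask_output returns 3 candidates.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
best_mask = masks[scores.argmax()]  # boolean array of shape (H, W)
```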
spaces/Chris4K/llms_compare/Jumanji-Welcome-To-The-Jungle-English-Dual-Audio-Eng-Hindi-1080p.md DELETED
@@ -1,80 +0,0 @@
- ## Jumanji: Welcome to The Jungle (English) dual audio eng hindi 1080p
-
- **Jumanji: Welcome To The Jungle (English) Dual Audio Eng Hindi 1080p [https://www.google.com/url?q=https%3A%2F%2Furllie.com%2F2txP3J&sa=D&sntz=1&usg=AOvVaw3aRa6XhDCOE--8taplWh7E](https://www.google.com/url?q=https%3A%2F%2Furllie.com%2F2txP3J&sa=D&sntz=1&usg=AOvVaw3aRa6XhDCOE--8taplWh7E)**
-
- # Jumanji: Welcome to the Jungle - A Fun Adventure Movie with Dual Audio in English and Hindi
-
- If you are looking for a fun and exciting movie to watch with your family or friends, you might want to check out Jumanji: Welcome to the Jungle. This is a 2017 adventure comedy film and a sequel to the 1995 classic Jumanji. It stars Dwayne Johnson, Jack Black, Kevin Hart, Karen Gillan, Nick Jonas, and Bobby Cannavale, and follows a group of teenagers who get sucked into a video game version of Jumanji and have to survive its dangers as adult avatars.
-
- One of the best features of this release is its dual audio in English and Hindi. This means you can enjoy the movie in your preferred language without missing any of the jokes or dialogue. You can also switch between the languages anytime you want with the help of subtitles. This way, you can experience the movie in a more immersive and engaging way.
-
- Jumanji: Welcome to the Jungle is a movie that has something for everyone. It has action, comedy, romance, fantasy, and drama. It also has amazing visuals and special effects that will make you feel like you are in the game world. The movie has received positive reviews from critics and audiences alike, and has grossed over $962 million worldwide. It is one of the highest-grossing films of 2017 and one of the most successful films in the Jumanji franchise.
-
- If you want to watch Jumanji: Welcome to the Jungle in dual audio with 1080p resolution, you can download it from various online sources. However, you should be careful about the quality and legality of the downloads. Some of them might be fake or contain viruses or malware that can harm your device. To avoid these risks, you should only download from trusted and verified sites that offer high-quality and safe downloads.
-
- One such site is **opensubtitles.com**, a popular and reliable platform for downloading subtitles and movies in different languages. You can find Jumanji: Welcome to the Jungle subtitles in English on this site, as well as in other languages like Spanish, French, and German. You can also find the movie in dual audio in English and Hindi, as well as in other formats like BluRay and DVD. You can download these files easily and quickly with just a few clicks.
-
- Jumanji: Welcome to the Jungle is a movie that you don't want to miss. It is a fun and thrilling adventure that will keep you entertained from start to finish. With dual audio in English and Hindi, you can enjoy it even more in your preferred language. Download it today from opensubtitles.com and have a great time watching it!
-
- But what is Jumanji: Welcome to the Jungle about? And how is it different from the original Jumanji? Well, let's find out.
-
- The movie starts with four high school students who are given detention for various reasons. They are Spencer, a nerdy gamer; Fridge, a football star; Bethany, a self-absorbed beauty; and Martha, a shy and smart girl. They are assigned to clean up an old storage room, where they find an old video game console with a cartridge of Jumanji. Curious, they decide to play the game and choose their avatars. However, they soon realize that they are not just playing the game, but are actually in the game.
-
- They are transported to a jungle setting, where they discover that they have become their avatars. Spencer is now Dr. Smolder Bravestone, a muscular and charismatic explorer; Fridge is now Franklin "Mouse" Finbar, a short and weak zoologist; Bethany is now Professor Sheldon "Shelly" Oberon, an overweight and middle-aged cartographer; and Martha is now Ruby Roundhouse, a sexy and skilled martial artist. They also learn that they have three lives each, and if they lose them all, they die for real.
-
- They meet Nigel, an NPC (non-player character) who gives them their mission: to return a magical jewel called the Jaguar's Eye to a giant statue and lift the curse that has fallen upon Jumanji. The jewel was stolen by Van Pelt, a corrupt explorer who has gained control over the animals of the jungle. Along the way, they encounter various obstacles and enemies, such as snakes, hippos, crocodiles, bikers, and Van Pelt's henchmen. They also meet Alex, another player who has been stuck in the game for 20 years as Jefferson "Seaplane" McDonough, a pilot and adventurer.
-
- As they progress through the game, they learn to work together and use their strengths and weaknesses to their advantage. They also learn more about themselves and each other, and develop friendships and romances. They realize that Jumanji is not just a game, but a test of their courage and character. Will they be able to complete the game and return to their normal lives? Or will they be trapped in Jumanji forever?
-
- To find out the answer, you have to watch Jumanji: Welcome to the Jungle in dual audio with 1080p resolution. It is a movie that will make you laugh, cry, cheer, and gasp. It is a movie that will make you feel like you are part of the adventure. It is a movie that you will love.